[MMTK] Problems with Normal Mode analysis

Konrad Hinsen hinsen@cnrs-orleans.fr
05 Jan 2003 21:09:02 +0100

Pawel Kedzierski <kedziers@pkmk486.ch.pwr.wroc.pl> writes:

>    Q: Is there a way to limit/estimate memory requirements? Ideally I'd like
>       to force MMTK to do the same calculation using less memory
>       (compromising time), but a clean way to give up if the requirements
>       are too high would also be welcome. I tried setting timeouts using

MMTK uses LAPACK routines whose workspace arrays are all allocated in 
Python code (see MMTK.NormalModes.NormalModes._diagonalize), so it is
easy to predict memory usage exactly. It would also be rather easy to
write replacement code that uses less memory, assuming that suitable
diagonalization algorithms exist. Just create a subclass of NormalModes
and override the method _diagonalize.

>       signal module but the MMTK code seem to ignore signals?  Can I, for

Python handles signals in its main interpreter loop. There is no signal
handling while C routines (such as LAPACK code) are executed.
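This behaviour is easy to observe: Python only runs signal handlers between
bytecode instructions of the interpreter loop. A minimal sketch (the handler
and names here are mine, not from MMTK) shows a handler firing during
pure-Python work; a signal arriving during a long C routine is recorded but
only acted on once control returns to the interpreter:

```python
import os
import signal

# A handler that turns the signal into an ordinary Python exception.
def handler(signum, frame):
    raise TimeoutError("signal delivered")

signal.signal(signal.SIGALRM, handler)   # Unix only

try:
    # Send ourselves a signal. The C-level handler merely sets a flag;
    # the Python handler runs at the next bytecode boundary -- which is
    # why a signal sent while LAPACK is grinding away is not seen until
    # that C call returns.
    os.kill(os.getpid(), signal.SIGALRM)
    for _ in range(1000):   # pure-Python work: the handler fires here
        pass
    caught = False
except TimeoutError:
    caught = True

print(caught)   # True
```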

>   2. Random segfaults which cannot be caught using Python exception
>      handling.
>    Q: Can I prevent them? Or, at least, does anybody know how to debug
>      them (noninteractively at best, since they usually happen after
>      several hours of calculations...)

If you have a core dump, you can run a post-mortem debugger on it. There is
no other way to deal with segmentation faults. Ideally they should
never happen, of course...
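For non-interactive runs where collecting a core dump is awkward, current
Python versions offer an alternative that did not exist at the time of this
message: the standard-library faulthandler module (Python 3.3+), which dumps
a Python-level traceback to stderr when the process receives SIGSEGV, so a
crash after hours of computation at least leaves a record of where it
happened:

```python
import faulthandler
import sys

# Install C-level handlers for SIGSEGV, SIGFPE, SIGABRT and SIGBUS that
# write the current Python traceback to the given file before the
# process dies -- useful when a segfault cannot be caught as a Python
# exception.
faulthandler.enable(file=sys.stderr)

print(faulthandler.is_enabled())   # True
```

Enabling it at the top of a long-running script costs essentially nothing
and turns a silent segfault into a traceback pointing at the offending
Python call.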

Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-
Rue Charles Sadron                       | Fax:  +33-
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais