[MMTK] Partial energy computation
Dmitriy Morozov
mmtk at foxcub.org
Thu Nov 20 04:22:49 CET 2003
Dear Konrad,
Thank you for your reply, and my compliments on such a great
library.
As for my question, while your advice does explain the 4 times
increase in the intersubset calculations, it does not explain
why it takes so long to compute the terms within the subset (as
well as the new timing for the intersubset calculations). The
adjusted code (that performs 1+20 energy evaluations (and times
the 20)) is attached. Its output is:
2539.67 + 2753.09 + -299.20 = 4993.56
Time as a whole 18.6157569885
Time for the first set 12.9738460779
Time for the second set 15.5259660482
Time for between the sets 19.7227729559
Is the problem with using Collections for subsets? Is there any
other way to describe subsets?
What I'm trying to do is to split the work for computing the
total energy of the system across multiple (a lot for a large
system) processors by assigning each processor to compute the
contributions to the energy due to a subset of atoms.
Intuitively, I expect a speedup on the order of P (P being the
number of processors), assuming a correct (even) split, for an
individual energy computation - not the first one, but the
subsequent ones, i.e., discounting initialization. However, I'm
seeing something different: if the energy due to the
interactions within just half of the atoms takes almost as long
to compute as that of the whole system, I don't have much hope
for an order-P speedup (or any speedup, for that matter).
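For reference, here is the naive pair counting my expectation is
based on (plain Python, no MMTK involved): with pairwise
nonbonded terms dominating, each half-size subset contains about
a quarter of the pairs, and the cross term between the halves
about half.

```python
def pairs_within(n):
    """Unordered atom pairs inside a set of n atoms: n*(n-1)/2."""
    return n * (n - 1) // 2

def pairs_between(n1, n2):
    """Pairs with one atom in each of two disjoint sets."""
    return n1 * n2

N = 1000                      # toy atom count
half = N // 2
total = pairs_within(N)
print(pairs_within(half) / total)          # ~0.25 of the pairwise work per half
print(pairs_between(half, half) / total)   # ~0.50 for the inter-subset term
# Sanity check: the two intra-subset terms plus the inter-subset
# term cover every pair exactly once.
assert 2 * pairs_within(half) + pairs_between(half, half) == total
```

So even an ideal split leaves the inter-subset term holding about
half of the pairwise work, which is why I would want to distribute
that term across processors as well.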
I also want to perturb the system along the way (basically do a
Monte-Carlo simulation), so I don't know how that affects the
internal structures that MMTK maintains (reading internals will
be my next step). So, I would appreciate your opinion: do you
think this (splitting the energy computation between processors)
is doable? Is it doable with MMTK?
As a separate question, in Universe.energyEvaluator() I notice
that it saves an instance of ForceField.EnergyEvaluator in
the _evaluator dictionary (that's a hashtable, right?) with subsets
as keys - how does it hash subsets? Perturbations won't affect
the hash, will they?
Thank you in advance for your reply.
Best,
Dmitriy
On Wednesday 19 November 2003 03:33 pm, you wrote:
> On Wednesday 19 November 2003 19:25, Dmitriy Morozov wrote:
> > Why does the following happen? When I run the code
> > (attached) the time that it takes to compute the energy for
> > each half of the atoms of the protein is almost the same as
> > for the whole protein, and the time that it takes to compute
> > the energy for
>
> In your script, you calculate each energy only once. In the
> first call for each subsystem, the internal data structures
> (list of bond and angle terms, Lennard-Jones parameters, etc.)
> are set up, which is a rather expensive procedure. What you
> are measuring is probably just that.
>
> To get a more realistic measure for repeated energy
> evaluations, calculate each energy twice in sequence and
> measure the time for the second evaluation.
>
> Konrad.
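Konrad's two-evaluation suggestion can be sketched with a toy
evaluator that pays its setup cost only on the first call; the
class and names below are illustrative, not MMTK's:

```python
import time

class ToyEvaluator:
    """Stand-in for an energy evaluator: expensive one-time setup,
    cheap repeated evaluations."""
    def __init__(self, n):
        self.n = n
        self._terms = None   # built lazily, like MMTK's bond/angle term lists

    def __call__(self):
        if self._terms is None:   # first call pays the setup cost
            self._terms = [(i, i + 1) for i in range(self.n - 1)]
        return sum(1.0 / (j - i) for i, j in self._terms)

ev = ToyEvaluator(10000)
t0 = time.time(); ev()    # first call: setup + evaluation
t1 = time.time(); ev()    # second call: evaluation only
t2 = time.time()
print("first call: ", t1 - t0)
print("second call:", t2 - t1)
```

Timing only the second call isolates the per-evaluation cost from
the one-time construction of the term lists.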
-------------- next part --------------
A non-text attachment was scrubbed...
Name: energy.py
Type: text/x-python
Size: 1822 bytes
Desc: not available
URL: http://starship.python.net/pipermail/mmtk/attachments/20031119/ecf38494/energy.py