Message: Re: How is parallelization implemented using MPI and Geant4?
First of all, thank you for your reply.
> Note that Geant4 is not currently capable of parallelizing tracking within an event, though I believe there is a group looking at it; I have looked at it (in another context) and there are many difficulties.
Do you mean that the calculation of the tracks themselves could be parallelized? Actually, I don't see the advantage of such a separation and don't need it either. As you said, the events are independent and therefore ideally suited to parallel simulation.
> In my program, G4beamline, I seed the random number generator with the event number before each event, so the user need merely arrange for the different jobs to use disjoint event numbers, and to name each job's output file using the first event number.
Seeding the random number generator this way does sound reasonable. If you don't, every run of the application produces the same result; it's all pseudo-random after all. I would, however, use the thread number (if possible) to name the files.
> By outputting NTuples into Root files, combining the results is simple. Note that MPI is not needed at all -- a simple shell script starts all the jobs and Root combines the output files into plots and histograms.
I don't use ROOT, because I read that it is a total mess. If all you have to do is collect the data yourself and put it together, you are absolutely right: MPI is, in theory, not needed at all. However, running:
mpdboot -n 20
mpiexec -n 20 mympiapp
is easier than writing a shell script. On the other hand, you have to set up MPI on every box, so the real advantage I see is only when running on a multi-core CPU, because MPI can utilize that, but a shell script cannot.
> This works for parallelization up to perhaps 32 jobs. Beyond that it gets rather cumbersome; certainly 10,000 parallel jobs is not feasible this way.
Well, 10,000 sounds hilarious. There is a point where the speed-up gained from parallelization ends; you just have to find it. That's the key.