Message: Re: How is parallelization implemented using MPI and Geant4?

Forum: Documentation and Examples
Re: How is parallelization implemented using MPI and Geant4? (Krisztian Balla)
Re: Re: How is parallelization implemented using MPI and Geant4? (Krisztian Balla)
Re: Re: How is parallelization implemented using MPI and Geant4? (Krisztian Balla)
Re: Re: How is parallelization implemented using MPI and Geant4? (Tom Roberts)
Date: 31 Jan, 2009
From: Krisztian Balla

First of all, thank you for your reply.

> Note that Geant4 is not currently capable of parallelizing tracking within an
> event, though I believe there is a group looking at it; I have looked at it (in
> another context) and there are many difficulties.

Do you mean that the calculation of the tracks themselves could be parallelized? Actually, I don't see the advantage of such a split, and I don't need it either. As you said, the events are independent, which makes them ideal for parallel simulation.

> In my program, G4beamline, I seed the random number generator with the event
> number before each event, so the user need merely arrange for the different
> jobs to use disjoint event numbers, and to name each job's output file using
> the first event number.

This random-generator seeding does sound reasonable. If you don't do it, you'll always get the same result every time you run the application; it's all pseudo-random after all. I would, however, use the thread number (if possible) to name the files.
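
Just to illustrate the idea (this is my own sketch, not necessarily how G4beamline does it internally; the class name and per-job offset are assumptions of mine): in Geant4 one could re-seed at the start of each event from a user event action, using CLHEP's HepRandom:

    #include "G4UserEventAction.hh"
    #include "G4Event.hh"
    #include "Randomize.hh"

    // Sketch: re-seed the generator at the start of every event with
    // (per-job offset + event ID), so jobs that use disjoint event
    // numbers draw from disjoint pseudo-random streams.
    class SeedingEventAction : public G4UserEventAction {
    public:
        SeedingEventAction(long firstEventNumber)
            : fFirstEvent(firstEventNumber) {}

        virtual void BeginOfEventAction(const G4Event* event) {
            long seed = fFirstEvent + event->GetEventID();
            CLHEP::HepRandom::setTheSeed(seed);
        }

    private:
        long fFirstEvent; // first event number of this job
    };

Register it with the run manager and give each job a different first event number, and the streams stay disjoint.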

> By outputting NTuples into Root files, combining the results is simple. Note
> that MPI is not needed at all -- a simple shell script starts all the jobs and
> Root combines the output files into plots and histograms.

I don't use ROOT, because I read that it is a total mess. If you have to collect all the data yourself and put it together anyway, you are absolutely right: MPI is, strictly speaking, not needed at all. However, running:

mpdboot -n 20

mpiexec -n 20 mympiapp

is easier than writing a shell script. On the other hand, you have to set up MPI on every box, so the real advantage I see is when running on a multi-core CPU, which MPI can exploit but a plain shell script cannot.
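
For example, a driver along these lines would let each MPI rank pick its own disjoint event range and output file (the events-per-job count and file-name scheme are just assumptions of mine):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);        // this job's number, 0..n-1

        const long eventsPerJob = 100000;            // assumed workload per job
        long firstEvent = (long)rank * eventsPerJob; // disjoint event ranges

        char filename[64];
        std::sprintf(filename, "output_%06ld.dat", firstEvent); // name by first event
        // ... run the simulation for events
        //     [firstEvent, firstEvent + eventsPerJob) and write to filename ...

        MPI_Finalize();
        return 0;
    }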

> This works for parallelization up to perhaps 32 jobs. Beyond that it gets
> rather cumbersome; certainly 10,000 parallel jobs is not feasible this way.

Well, 10,000 sounds absurd. There is a point where the speed-up gained from parallelization stops growing; you just have to find it. That's the key.
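
A rough back-of-the-envelope way to see where it ends: if every job pays a fixed start-up cost T (geometry construction, physics tables) and the E events take a total serial time of E*t, then N parallel jobs each need about T + (E/N)*t, so the speed-up is S(N) = (T + E*t) / (T + (E/N)*t), which flattens out once (E/N)*t drops below T.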
