
Re: How is parallelization implemented using MPI and Geant4?

Forum: Documentation and Examples
Date: 31 Jan, 2009
From: Tom Roberts

> > In my program, G4beamline, I seed the random number generator with the event
> > number  
> This random generator seeding does sound reasonable. If you don't do
> that, you'll always get the same result running your application. It's
> all pseudo random after all. I however would use the thread number (if
> possible) to name the files.

Yes, one definitely needs different seeds. The thread number is not guaranteed to be unique across multiple systems (e.g. a Linux cluster); the first event number is, because we arrange it that way.
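
For reference, here is roughly how per-event seeding can be done in a plain Geant4 application (a minimal sketch, not G4beamline's actual code; the class name and the jobOffset argument are made up for illustration):

    // EventSeedAction.hh -- illustrative sketch, not G4beamline's implementation.
    #include "G4UserEventAction.hh"
    #include "G4Event.hh"
    #include "Randomize.hh"   // brings in the CLHEP::HepRandom interface

    class EventSeedAction : public G4UserEventAction {
      long fOffset;           // the job's first event number, passed in at startup
    public:
      explicit EventSeedAction(long jobOffset) : fOffset(jobOffset) {}
      virtual void BeginOfEventAction(const G4Event* evt) {
        // Seed from the global event number, so the result for a given event
        // does not depend on how the events were split across jobs.
        CLHEP::HepRandom::setTheSeed(fOffset + evt->GetEventID());
      }
    };

Register an instance with the run manager and each event gets a seed determined only by its global event number.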

> However doing an:
> mpdboot -n 20
> mpiexec -n 20 mympiapp
> Is easier than writing a shell script.

Only after installing MPI on every machine. I use a shell script on my 4-CPU Mac Pro all the time:
    g4beamline input.file first=0 last=999999 >0.out 2>&1 &
    g4beamline input.file first=1000000 last=1999999 >1000000.out 2>&1 &
    g4beamline input.file first=2000000 last=2999999 >2000000.out 2>&1 &
    g4beamline input.file first=3000000 last=3999999 >3000000.out 2>&1 &
The input.file defines the simulation; it uses $first to name the output Root file and processes events from $first to $last ($ means parameter expansion).
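
To make the first/last mechanism concrete, here is a small self-contained sketch of how an application could accept such arguments, name its output file after $first, and compute how many events to run. The argument syntax mirrors the commands above, but this is only an illustration, not G4beamline's actual parser:

    // first_last.cc -- illustrative sketch of the first/last job-splitting pattern.
    #include <cstdlib>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main(int argc, char** argv) {
      long first = 0, last = 0;
      std::string inputFile;
      for (int i = 1; i < argc; ++i) {          // accept "first=N" and "last=M"
        std::string arg(argv[i]);
        if (arg.compare(0, 6, "first=") == 0)
          first = std::atol(arg.c_str() + 6);
        else if (arg.compare(0, 5, "last=") == 0)
          last = std::atol(arg.c_str() + 5);
        else
          inputFile = arg;                      // everything else is the input file
      }

      std::ostringstream outName;               // name the output after $first
      outName << first << ".root";

      long nEvents = last - first + 1;
      std::cout << inputFile << " -> " << outName.str()
                << ", events " << first << ".." << last
                << " (" << nEvents << " total)" << std::endl;

      // A real application would now set up the simulation from inputFile,
      // seed the random engine from 'first', and generate nEvents events.
      return 0;
    }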

> > This works for parallelization up to perhaps 32 jobs. Beyond that it gets
> > rather cumbersome; certainly 10,000 parallel jobs is not feasible this way.
> 
> Well 10.000 sounds hilarious. 

There are a number of applications that need it. I expect that in a few years both CMS and ATLAS will scale to that level of parallelization. Perhaps not for simulations, and perhaps not for a single "job", but the need is there, and tool providers must be ready.
