
Re: How is parallelization implemented using MPI and Geant4?

Forum: Documentation and Examples
Date: 02 Jul, 2012
From: Youming Yang

Hi Geng,

Could you give a stripped-down sample of your MPI implementation, or some form of pseudocode?

In general, MPI can be used for synchronization of different "processes", and passing reasonable amounts of data (or unreasonably large data, but over a fast interconnect).

However, if you launch multiple MPI processes on one computer, the most straightforward implementation means that you end up launching multiple full simulations, each with its own identical geometry, full set of physics tables, etc. The only difference is that each process (or rank) generates primaries from a different random number seed.
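
For illustration, here is a minimal sketch of what that pattern looks like. The class names MyDetectorConstruction, MyPhysicsList, and MyPrimaryGeneratorAction are stand-ins for your own user classes, and the seed offset and event count are arbitrary:

#include <mpi.h>
#include "G4RunManager.hh"
#include "Randomize.hh"

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Every rank builds its own identical geometry and physics tables.
    G4RunManager* runManager = new G4RunManager;
    runManager->SetUserInitialization(new MyDetectorConstruction);
    runManager->SetUserInitialization(new MyPhysicsList);
    runManager->SetUserAction(new MyPrimaryGeneratorAction);

    // The only per-rank difference: the random number seed.
    CLHEP::HepRandom::setTheSeed(12345 + rank);

    runManager->Initialize();

    // Split the total number of events among the ranks.
    const int totalEvents = 100000;
    runManager->BeamOn(totalEvents / size);

    delete runManager;
    MPI_Finalize();
    return 0;
}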

This is fine if you are using a distributed computing setup, where multiple standalone computers/nodes, each with dedicated CPU and RAM, run the simulation starting from different random number seeds and combine their final results.
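
As a sketch of that "combine their final results" step, something like the following could be called after BeamOn() and before MPI_Finalize(). The tally localDose and the helper CombineResults are hypothetical names, standing in for whatever quantity your scoring code accumulates:

#include <mpi.h>
#include <cstdio>

// Hypothetical end-of-run step: sum each rank's local tally onto rank 0.
void CombineResults(double localDose)
{
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double totalDose = 0.0;
    MPI_Reduce(&localDose, &totalDose, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    // Only rank 0 holds the combined result.
    if (rank == 0)
        std::printf("Combined result from all ranks: %g\n", totalDose);
}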

However, if you are running on one computer, you go from initializing one set of physics tables using one CPU and ~2 GB of RAM (for a voxelized geometry) to suddenly initializing (let's say you use four separate processes) four identical sets of physics tables and geometries, at 4x the RAM consumption, in an attempt to let four CPU cores do the work.

If you exhaust all of your system memory and start writing to the hard drive's swap space, you can see where the slowdown comes from.

OpenMP, on the other hand, is very well suited to running one program while using additional cores/threads to speed up certain portions of the code (such as for loops).

These extra threads can pop in and out of existence as the code runs. This means that you are effectively running one simulation, on one CPU with one set of physics tables, but during any portion you deem parallelisable you can bring in additional cores to help with the computation.
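
To make the contrast concrete, here is a generic OpenMP sketch. It is not Geant4-specific (stock Geant4 code is not written for this kind of threading), so treat it purely as an illustration of the model:

#include <omp.h>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<double> data(1000000, 1.0);
    double sum = 0.0;

    // Threads exist only for the duration of this loop; the rest of the
    // program runs serially, with a single copy of all data in memory.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)data.size(); ++i)
        sum += data[i] * data[i];

    std::printf("sum = %g (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}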

Was this what you were trying to do? Does this make sense?

Also, please feel free to refer to me as Ming; I am not a PhD yet. C:

Ming

On Sun, 01 Jul 2012 08:47:33 GMT, Geng wrote:

> Dear Dr:
> 
> I tried to use Geant4 with MPI. While it runs, CPU usage is only about
> 1% on each core, so the application actually takes more time than it
> would on a single core. Could you explain what is wrong in this
> situation?
> 
> Thanks, sir. Geng
