Message: Re: Simulation hang up
I would like to make an update on this thread and clarify the situation.
Three separate issues have been discussed in this thread. They are unrelated and have nothing in common, so it is important not to conflate them. I will go through the three reported problems:
ISSUE 1: "In an MPI job, when using command-line scoring or histograms from g4analysis, the end-of-job phase is extremely long with a large number of MPI ranks."
STATUS: This is indeed true and is caused by the algorithm we use to merge results in MPI. It is a problem only for very large data (e.g. millions of voxels in command-line scoring) or for a very large number of MPI ranks (e.g. ~100). Geant4 version 10.2 will provide a solution. Until the new version is available, we suggest alleviating the problem with a combined MPI+MT approach: reduce the number of MPI ranks and use multi-threading where possible (e.g. on the same node), as in the sketch below.
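A minimal sketch of the hybrid launch, assuming a 16-core node and an application built with multi-threading support; the names (myApplication, run.mac) and the rank/thread counts are placeholders to adapt to your setup:

  # launch 2 MPI ranks per node instead of 16
  mpirun -np 2 ./myApplication run.mac

  # in run.mac, before /run/beamOn: give each rank 8 worker threads
  /run/numberOfThreads 8

The total number of workers stays the same, but the number of rank-level merges performed at end of job drops by a factor of 8.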
ISSUE 2: "In an MPI job, it seems that the ranks all have the same seeds."
STATUS: We cannot observe this problem with the G4 examples provided. To verify whether this is the case, add the following lines to your macro before /run/beamOn:

  /run/verbose 1
  /run/printProgress 1

Then execute your application and send the output to a file:

  <myApplication> [myoption] > output.log

Then execute the following command:

  grep seeds output.log | cut -d\( -f2 | sort | uniq | nl | tail -n1 | cut -f1

If the number printed on screen is exactly the number of events selected, then no two events have used the same random number seeds.
NOTE: While I have not seen events with the same random number seeds in small jobs, studying our code I think it is theoretically possible that, with multi-threading enabled, two events may in some cases share the same seeds. We will work on a fix for this in 10.2.
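If you prefer, the same check can be wrapped in a short shell script. This is only a sketch: it assumes the output.log produced above and that /run/printProgress writes one "--> Event" line per event (check the exact wording in your Geant4 version):

  #!/bin/sh
  # count distinct seed pairs and processed events in output.log
  SEEDS=$(grep seeds output.log | cut -d\( -f2 | sort -u | wc -l)
  EVENTS=$(grep -c -- '--> Event' output.log)
  if [ "$SEEDS" -eq "$EVENTS" ]; then
      echo "OK: every event used a distinct seed pair"
  else
      echo "WARNING: $EVENTS events but only $SEEDS distinct seed pairs"
  fi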
ISSUE 3: "I've observed my job entering an 'infinite loop' when I use multi-threading and a magnetic field."
STATUS: I have not been able to reproduce this error, but if confirmed it sounds like a bug in G4. If you have any additional information on how to reproduce the problem, please contact me.