Re: Clarifying the correct way to run MT jobs with MPI on a cluster with multiple nodes
Sorry for the long delay in answering this; it slipped out of my e-mail inbox.
What you would like to do is probably the following:
- Have 21 MPI ranks.
- Distribute the 21 ranks one per node.
- Use multi-threading to occupy each entire node.
I suggest that, instead of "/run/numberOfThreads <n>", you use the macro command "/run/useMaximumLogicalCores". This way the G4 application will always use the maximum number of available cores on whatever node it lands on, which removes one parameter (n).
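For illustration, a minimal run macro could look like the sketch below. Only the threading command comes from the suggestion above; the file name and the beamOn count are placeholders, and your real macro will contain your own setup commands.

```
# run.mac -- hypothetical example macro
/run/useMaximumLogicalCores    # use all logical cores on this node
/run/initialize
/run/beamOn 1000               # placeholder number of events
```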
At this point k becomes the number of nodes you want to run on (k=21).
The tricky part is convincing PBS to submit your jobs one per node and to request each node in full. I am not an expert on PBS and you should verify with your sysadmins (note that sysadmins can block some types of requests). But I think that:
# Give me 21 nodes and I want all 12 processors per node
#PBS -l nodes=21:ppn=12
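Put together, the job script could look roughly like the sketch below. The executable name, macro file, and walltime are placeholders, and the `--map-by node` flag is Open MPI syntax for placing one rank per node; check the equivalent option for your MPI installation.

```
#!/bin/bash
#PBS -l nodes=21:ppn=12       # 21 nodes, all 12 processors on each
#PBS -l walltime=02:00:00     # placeholder walltime

cd $PBS_O_WORKDIR

# One MPI rank per node; each rank then spawns its own worker
# threads via /run/useMaximumLogicalCores in the macro.
mpirun -np 21 --map-by node ./myG4App run.mac
```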
So: i=k=21 and j=12
Suggestion: in your main(), add the MPI call to get the host name (MPI_Get_processor_name) so you can verify that the jobs land on all different nodes.