Message: Re: How do I know the event random numbers are not repeated?
On Sun, 23 Dec 2018 14:33:31 GMT, Huagang Yan wrote:
> [...] I have to save every random number for every event
You only need to save the first random number of each event, because the generator is PSEUDO-random: identical internal states produce identical sequences. Note, however, that finding a duplicate first number this way does not prove the full sequences were the same, because any good generator has an internal state larger than a single output. Finding no duplicate first numbers, on the other hand, does guarantee that no events were duplicated.
A different approach is to seed the random number generator with the event number at the beginning of each event. This has two advantages: a) you can re-run any given event, so if you find an outlier in some histogram and can determine its event number, you can re-run that single event with visualization turned on; b) unless you explicitly run duplicate event numbers, you know that no events are duplicated.
This is also much simpler than saving all those numbers and then searching for duplicates.