I would like to share some of my recent code comparisons between Serpent, MCNP5 coupled with BGCORE, and MCNP5 coupled with MCODE for burnup. I am running a three-dimensional hexagonal RBWR assembly, and below is a figure showing the k-eff as a function of burnup.

As you can see, you cannot run the exact same input file on 1 processor and on N processors (when N is large) and expect to get the same results. This is due to the way Serpent is structured. In MCNP5 you can do this and you will get the exact same mean and standard deviation. In Serpent you will never get exact reproducibility, because it does not conserve the random number sequence between a single-processor calculation and a multiprocessor calculation. However, we should still get an answer that is within statistics. As you can see from the attached plot, there is a significant deviation when running the same input file (1 proc 2000 hist/cycle vs. 33 procs 2000 hist/cycle).

I have attached some simple flow charts to illustrate the differences between Serpent parallel calculations and MCNP5 parallel calculations (from what I understand; please correct me if I am wrong):

After each cycle, each MCNP5 slave communicates its keff and source sites back to the master. The master combines the keffs and pools the source sites, from which it randomly samples for the next cycle. The master then sends the **same** keff to every slave and divides up the source sites as needed. With this method one can preserve the sequence of random numbers, so the same input file gives the same results on N processors.
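The cycle-synchronization pattern described above can be sketched in a toy single-process form. All of the names here (`slave_cycle`, `mcnp_style_cycle`) and the "physics" (Gaussian noise standing in for transport) are hypothetical illustrations, not MCNP5's actual implementation:

```python
import random

def slave_cycle(rng, sites):
    # Stand-in for one slave's transport cycle: returns a partial keff
    # estimate and the fission source sites it produced (fake physics).
    keff_part = 1.0 + rng.gauss(0.0, 0.01)
    new_sites = [s + rng.gauss(0.0, 0.1) for s in sites]
    return keff_part, new_sites

def mcnp_style_cycle(rngs, site_banks):
    # 1. Every slave runs its cycle and reports keff + sites to the master.
    results = [slave_cycle(r, b) for r, b in zip(rngs, site_banks)]
    # 2. The master combines the partial keffs into one cycle keff.
    keff = sum(k for k, _ in results) / len(results)
    # 3. The master pools all source sites into a single bank...
    pooled = [s for _, bank in results for s in bank]
    # 4. ...and sends the SAME keff plus a slice of the bank to each slave.
    n = len(site_banks)
    new_banks = [pooled[i::n] for i in range(n)]
    return keff, new_banks
```

The key point is step 4: because every slave receives the same combined keff and a deterministic share of one global source bank, the calculation can be made independent of how many slaves there are.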

With Serpent, the master divides the source histories evenly between the slaves at the beginning of the calculation, and each slave performs an **independent** calculation. In this case the random number sequence cannot be conserved, because the keffs and source sites are never combined and therefore depend on the number of processors. So currently there is a dependence on the number of processors you choose to run on. This, however, is not the reason for the strong deviation in the burnup plot above. When I ran the same input file on 33 processors with 2000 histories/cycle, I was effectively running 33 independent calculations with ~60 histories/cycle each. Therefore the source distribution does not converge on each slave, and there are inherent biases in keff due to the renormalization procedure after each cycle. I then tested this by having Serpent run a parallel calculation with 66,000 histories/cycle on 33 processors, so that I was now running 33 independent calculations with 2000 histories/cycle each. From the plot, the results are much more in line with MCNP5. Obviously, the 66,000 case also reduced the standard deviation of the answers, because the 33 independent runs are combined statistically at the end, but the run time was equivalent to the single-proc case.
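The arithmetic behind the two cases above is worth making explicit. A minimal sketch (the helper `serpent_split` is my own name for the even division described above, not a Serpent routine):

```python
import math

def serpent_split(histories_per_cycle, n_procs):
    # Serpent-style splitting: each slave receives an equal share of the
    # requested histories and runs an entirely independent calculation.
    return histories_per_cycle // n_procs

# 2000 histories/cycle on 33 processors -> ~60 histories/cycle per slave,
# far too few to converge the fission source on any one slave.
assert serpent_split(2000, 33) == 60

# Keeping 2000 histories/cycle *per slave* means requesting 33 * 2000:
assert serpent_split(66000, 33) == 2000

# Combining 33 independent runs at the end shrinks the standard deviation
# by roughly 1/sqrt(33) relative to a single such run.
print(1.0 / math.sqrt(33))
```

This is why the 66,000-history case both agrees with MCNP5 and has tighter statistics: each slave now converges its own source, and the 33 independent answers are averaged at the end.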

Therefore, a suggestion for a future task with Serpent would be to implement a parallel structure similar to MCNP5's, together with a dedicated random number generator (instead of the built-in C RNG), so that the random number sequence can be preserved for N processors and the same input file gives the same results.
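One standard way to achieve this is to tie the random number stream to the history index rather than to the processor. The sketch below (all names hypothetical, and `random.Random` standing in for a proper dedicated generator) shows that when each history gets its own seed-derived stream, the combined tally is identical for any processor count:

```python
import random

def history_score(seed, h):
    # Dedicated per-history stream: depends only on the global seed and
    # the history index h, never on which processor runs the history.
    rng = random.Random(seed * 1_000_003 + h)
    # Integer stand-in for one history's tally (integers keep the sums
    # exactly order-independent for this demonstration).
    return rng.randrange(100)

def run(seed, n_hist, n_procs):
    # Deal histories round-robin to processors; each "processor" scores
    # its own histories, then the master sums the partial tallies.
    totals = [sum(history_score(seed, h) for h in range(p, n_hist, n_procs))
              for p in range(n_procs)]
    return sum(totals)

# Same seed, same histories -> same answer, regardless of processor count.
assert run(42, 1000, 1) == run(42, 1000, 33)
```

With this structure, adding processors only changes who runs which history, not which random numbers that history consumes.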

**This is extremely important for reproducibility: increasing the number of processors should only speed up the calculation, not change the answer.**

-Bryan