
History-by-History Variance in Monte Carlo Simulation of Radiation Interactions with Matter


*In-silico* simulation of random samples of radiation histories places in our hands data which may be treated and analysed in radically different ways. This work reports history-by-history computation of the variance in simulation output from FLUKA, a general-purpose Monte Carlo code. Variance computed history by history is compared with variance estimated from varying batch sizes. This work also addresses the issue of under-sampling where, against conventional expectations, the variance spikes as more histories are sampled. The origin of the spikes is traced by reconstructing single histories interaction by interaction―a level of detail atypical of Monte Carlo simulations in the field, where statistical convergence of averaged quantities has been the focus.


1. Introduction

Radiation interaction with matter is a stochastic process. Two particles of the same type and the same energy colliding with the same target may lead to dramatically different outcomes. Take a third particle of the same type and the same energy, and the outcome would again be very different. Collect enough samples, however, and we find a well-behaved Gaussian distribution. Single collisions are not reproducible, but collisions in aggregate are predictable.

1.1. Radiation Histories

A radiation history begins with a particle colliding with matter (anything which is not vacuum). Each radiation history is typically very eventful: as the particle traverses the material, it takes steps and collides with different atoms or nuclei. Most collisions produce progeny, which may be particles of different types. In a photoneutron collision, for example, the photon produces neutron(s), a different particle type. Some collisions lead to the absorption of a particle; it will no longer appear in the radiation history, but the radiation history does not necessarily stop there, as progeny are likely to propagate the history further. A radiation history ends when the first parent (the source particle) and all progeny particles have either been absorbed or depleted their energy through collisions.

A radiation field consists of many radiation histories, each started off by a source particle which later produces progeny particles. A radiation field may be that of a nuclear reactor, a research collider (e.g. the Large Hadron Collider at CERN), an archaeological or forensic laboratory, a student's laboratory or a radiotherapy clinic. Such is the broad spectrum of applications of ionizing radiation where we seek solutions from Monte Carlo simulation of radiation interactions; analytical solutions based on probability distributions alone cannot solve the problem.

The predictable behavior described in the first paragraph, i.e. the probability distribution, differs not only with the particle type (e.g. photon, electron, neutron or alpha) and the particle energy (spanning over ten orders of magnitude), but also with the composition of the target (e.g. a% hydrogen, b% carbon, c% sulfur and d% lead). The target is itself far from homogeneous; each constituent commands a separate set of probability distributions. Radiation fields in real life are usually mixed, with different types of particles, each at different energies evolving by different trends. Radiation histories are complex; different probability distributions take effect at different points in time and in space. This complexity easily overwhelms analytical methods. We resort to Monte Carlo simulation―drawing random numbers at each step and each collision to sample the step length, angle, energy, collision type and progeny.
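As an illustrative sketch of this sampling step (a toy single-medium, single-energy case written in Python for exposition, not FLUKA code), the distance to the next collision may be drawn by inverting the cumulative distribution function of the exponential free-path distribution:

```python
import math
import random

def sample_free_path(mean_free_path, rng=random.random):
    """Sample the distance to the next collision by inverse-CDF sampling.

    In a homogeneous medium the free path follows an exponential
    distribution p(s) = (1/lam) exp(-s/lam); inverting its CDF gives
    s = -lam * ln(1 - xi) for a uniform random number xi in [0, 1).
    """
    xi = rng()
    return -mean_free_path * math.log(1.0 - xi)  # 1 - xi avoids log(0)

# Toy check: with many samples the average path approaches the mean free path.
random.seed(42)
paths = [sample_free_path(2.0) for _ in range(100_000)]
mean_path = sum(paths) / len(paths)
```

The same inverse-CDF idea extends, with tabulated rather than analytical distributions, to sampling angles, energies and collision types.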

Generating random numbers and looking up probability densities are but the most straightforward components of Monte Carlo simulation of radiation interactions. The physics components are far more complex, ruling out in-house software production. General-purpose Monte Carlo codes for radiation interactions have therefore emerged from CERN, Los Alamos National Laboratory and the National Research Council Canada. FLUKA [1], MCNP [2], EGSnrc [3], PENELOPE [4] and GEANT4 [5], each with a distinct design philosophy and development history, serve user communities worldwide. MCNP is particularly elaborate in neutron transport and statistical analysis. EGSnrc and PENELOPE simulate photons, electrons and positrons only. GEANT4 is a toolkit built on contemporary object-oriented software design. FLUKA is not a toolkit; it simulates seamlessly a broad variety of particle types and heavy ions over a wide energy range from eV to TeV [6] [7] [8].

1.2. Averaging and Convergence

Monte Carlo simulations in the literature generally report “statistically meaningful” averaged quantities produced from simulating millions to trillions of radiation histories. In radiotherapy, for example, beams of particles are shot into the body in a carefully-controlled delivery. No two radiation histories would be the same, because the dice are rolled too many times within a single radiation history. Each particle deposits different amounts of energy at different points; some in the tumour, others missing the tumour and depositing their energy within healthy tissue instead―outcomes which are never reported individually; such a suggestion could even appear scandalous.

Quantities of interest, the intent of the simulation requested by the user in the input, are called “tallies” and “scores” by the MCNP and FLUKA communities, respectively. Tallies vary with the application at hand. Absorbed dose, dose equivalent and displacement per atom (DPA), for instance, apply to treatment planning, radiation protection and studies of material damage, respectively.

Authors generally report averaged quantities―not outliers. Whereas it is customary for referees to request the number of histories in the absence of such a statement, it is hardly ever practical or possible to infer any statistical uncertainty of a Monte Carlo output from the number of histories simulated―unless one has detailed knowledge of the full collection of probability densities involved, along with full knowledge of the techniques (e.g. variance reduction^{1}) and transport parameters the user applied. That error bars decrease inversely with the square root of the number of histories is a common expectation, but it will not apply unless the simulation is sufficiently sampled, i.e. there is no under-sampling in space (e.g. obscure corners hardly tracked by particles), geometry (e.g. materials of small dimensions bound to be missed) or physics (e.g. resonance regions of neutron cross-sections).
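The square-root expectation may be illustrated with a toy, well-sampled score (a Gaussian deposit per history is assumed purely for illustration; real per-history scores follow no such simple law):

```python
import random
import statistics

def std_error(scores):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(scores) / len(scores) ** 0.5

# For a well-sampled score, quadrupling the number of histories should
# roughly halve the error bar, per the 1/sqrt(N) rule.
random.seed(0)
small = [random.gauss(1.0, 0.3) for _ in range(10_000)]
large = [random.gauss(1.0, 0.3) for _ in range(40_000)]
ratio = std_error(small) / std_error(large)  # a value near 2
```

It is precisely this tidy scaling that breaks down under the under-sampling conditions listed above.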

1.3. Variance Estimation

The variance of a score may be estimated either batch by batch or history by history. The former is the existing practice in the FLUKA community. The latter estimates variance more accurately; a brute-force implementation, however, would require storing the score from each history, which can be prohibitive in terms of computer memory.

Batch-by-batch estimation of variance is realised by dividing the simulation into batches. Instead of running a single simulation of $N$ histories, $B$ simulations of $N/B$ histories are run. Each of the $B$ simulations is made independent by applying unique random number seeds. At the completion of each simulation, the averaged score,

$x_{b} = \frac{\sum_{i=1}^{N/B} x_{i}}{N/B}$ (1)

is obtained. ${x}_{i}$ , the contribution by individual histories, is no longer recoverable. The standard deviation,

$\sigma = \sqrt{\frac{\sum_{i=1}^{N} \left( x_{i} - \frac{\sum_{j=1}^{N} x_{j}}{N} \right)^{2}}{N-1}}$ (2)

is therefore not calculable. What remains calculable is the standard error, based on the standard deviation of $B$ samples of $x_{b}$ rather than $N$ samples of $x_{i}$. The standard error is therefore used as an estimate for the standard deviation; $B$ is most often taken to be ten. This estimation is known to be $B$-dependent unless $B$ is sufficiently large.
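A minimal sketch of the batch procedure, in Python with toy per-history scores (the exponential score distribution is assumed for illustration only; it is not FLUKA output):

```python
import random
import statistics

def batch_standard_error(scores, B):
    """Batch-by-batch estimate of the uncertainty of the mean score.

    Split the N per-history scores into B batches, average each batch
    (Equation (1)), then take the standard deviation of the B batch
    means divided by sqrt(B).
    """
    n = len(scores) // B  # histories per batch, N/B
    batch_means = [sum(scores[b * n:(b + 1) * n]) / n for b in range(B)]
    return statistics.stdev(batch_means) / B ** 0.5

random.seed(1)
scores = [random.expovariate(1.0) for _ in range(10_000)]  # toy per-history scores
direct = statistics.stdev(scores) / len(scores) ** 0.5     # needs every x_i stored
estimates = {B: batch_standard_error(scores, B) for B in (5, 10, 50)}
```

The batch estimates scatter around the direct value, and the scatter grows as $B$ shrinks, which is the $B$ dependence noted above.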

1.4. Single Histories Interaction by Interaction

Although least pursued, outliers, exotic events and single histories do have their place in Monte Carlo simulations [9] [10] [11] . In fact, this is where the potential of Monte Carlo simulation is expressed to the full, in that virtual experiments are tapped for data which physical experiments cannot provide:

・ Controlled experiments can be carried out cleanly; variables can be controlled exactly;

・ Rare events can be studied with complete confidence, in the absence of instrument noise;

・ Particles can be detected unperturbed, as opposed to physical detectors which, by the principle of radiation detection, detect only if and when particles deposit energy;

・ Particles can be detected with memory of their ancestry history, in a specialised technique called latching [12] .

2. Materials & Methods

In a FLUKA simulation, 10^{5} independent samples of 7 MeV photons impinge on a tungsten block which is partitioned into 128,797 voxels of 5 mm × 5 mm × 5 mm. History-by-history computation of variance is applied to each voxel. Spatially-differentiated energy deposition is studied alongside event-by-event single histories.

2.1. History-by-History Computation of Variance

In this work, history-by-history computation of variance is implemented in FLUKA via the comscw.f, usrini.f and usrout.f user routines. The algorithm implemented here is adopted from Sempau et al. [13], the same as that implemented in BEAMnrc by Walters et al. [14]. It provides an economical means of calculating variance on the fly as a simulation progresses through a growing number of histories; the memory footprint does not increase with the number of histories. The algorithm maintains three counters for each score:

・ a counter for the score ( ${x}_{i}$ ) for the current history;

・ a counter for the score, cumulative over all the histories so far, $\sum_{i=1}^{N} x_{i}$ ;

・ a counter for the square of the score, cumulative over all the histories so far, $\sum_{i=1}^{N} x_{i}^{2}$ .

At the end of simulation, the two cumulative counters are used to calculate the standard deviation,

$\sigma = \sqrt{\frac{1}{N-1} \left( \frac{\sum_{i=1}^{N} x_{i}^{2}}{N} - \left( \frac{\sum_{i=1}^{N} x_{i}}{N} \right)^{2} \right)}$ (3)

For scores with multiple bins, the three counters come in the form of three arrays, one array element to each bin. The tungsten block simulated here consists of 128,797 voxels, corresponding to 128,797 bins.
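A minimal sketch of the three-counter scheme for a single bin, written in Python for exposition rather than as the Fortran user routines, with a toy per-history deposit distribution (assumed purely for illustration):

```python
import math
import random
import statistics

class RunningScore:
    """History-by-history variance with O(1) memory per bin.

    Three counters, following the scheme described above: the current
    history's score, the running sum of scores, and the running sum of
    squared scores. Memory does not grow with the number of histories.
    """

    def __init__(self):
        self.current = 0.0    # x_i, reset at each history boundary
        self.total = 0.0      # sum of x_i over completed histories
        self.total_sq = 0.0   # sum of x_i**2 over completed histories
        self.n = 0            # N, histories completed

    def deposit(self, energy):
        self.current += energy  # a history may deposit many times in one bin

    def end_history(self):
        self.total += self.current
        self.total_sq += self.current ** 2
        self.current = 0.0
        self.n += 1

    def sigma(self):
        """Equation (3), evaluated from the two cumulative counters."""
        n = self.n
        return math.sqrt((self.total_sq / n - (self.total / n) ** 2) / (n - 1))

# Compare against a brute-force computation that stores every history.
random.seed(7)
score, kept = RunningScore(), []
for _ in range(5_000):
    for _ in range(random.randint(1, 4)):          # a few deposits per history
        score.deposit(random.expovariate(5.0))
    kept.append(score.current)
    score.end_history()
brute = statistics.stdev(kept) / len(kept) ** 0.5  # same quantity, O(N) memory
```

The on-the-fly result agrees with the brute-force one to floating-point precision while storing only four numbers per bin.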

2.2. Single Histories Interaction by Interaction

Interaction-by-interaction single histories are studied using FTREE; technical implementation has previously been reported [9] . It operates via the stupre.f, stuprf.f and mgdraw.f routines, extracting data at various points as the simulation develops, as variables change values and as particles pop in and out of the stack. Particles and events are then post-processed and rearranged to reconstruct the entire radiation history interaction by interaction.

3. Results & Discussion

The spatial distribution of energy deposition (E) in tungsten (Figure 1) presents the penetration problem. Energy deposition along the central beam axis (radial position around zero) and at shallow depth is well sampled. Deeper into tungsten and further off axis, sampling is poor and $\sigma $ is high.

3.1. History-by-History Computation of Variance

With an increasing number of histories, there is considerable fluctuation (Figure 2) along the central axis itself in both E and $\sigma $ , which was computed history by history. The low photon interaction cross-sections in air are evident from the fluctuating E and high $\sigma $ at d < 0. The difficult penetration of photons in tungsten is evident from the fluctuating E and increasing $\sigma $ at d > 10 cm.

Two spikes in $\sigma $ are particularly noticeable in Figure 2(b): at (history, depth) = (24392, 3.5 cm to 4.0 cm) and (65395, 5.0 cm to 5.5 cm). In both pixels, $\sigma $ had been decreasing with increasing histories right up to that point; the spikes defy the conventional expectation that $\sigma $ decreases as more histories are sampled. Moving from the 24391st to the 24392nd history, E changed from 0.116 ± 0.005 (1 s.d.)


Figure 1. Penetration of 7 MeV photons in tungsten. (a) r is the radial position, d is the depth in tungsten. The beam crosses the air-tungsten boundary at d = 0 cm. Colour-coding is log_{10} (E), where E (MeV) is the energy deposited per 5 mm × 5 mm × 5 mm voxel per history. (b) Dose deposition profile, E, with error bars, along the central axis of the beam.


Figure 2. (a) Evolution of log_{10} (E) with increasing number of histories at various depths. (b) Evolution of $\sigma $ (expressed as a fraction) with increasing number of histories at various depths. Data shown are for pixels along the beam central axis.

to 0.156 ± 0.039 MeV. Moving from the 65394th to the 65395th history, E changed from 0.032 ± 0.002 to 0.048 ± 0.016 MeV.

The effects of dividing the simulation into 50, 25, 10 and 5 batches are shown in Figure 3 and Figure 4 for both spikes. Deviations from the history-by-history computation of $\sigma $ are visible even with 50 batches, up to about 2.5 × 10^{4} and 4 × 10^{4} histories in Figure 3 and Figure 4, respectively.

3.2. Single Histories Interaction by Interaction

From the 10^{5} histories run, five photoneutrons were observed. As expected, ($\gamma$,


Figure 3. Effect of the number of batches on the estimation of $\sigma $ for E in the pixel at depth 3.5 cm to 4.0 cm on the beam central axis, which spiked in Figure 2. Overlaid on the $\sigma $ computed history by history (in red) are blue lines for: (a) 50 batches; (b) 25 batches; (c) 10 batches; (d) 5 batches. Moving from the 24391st to the 24392nd history, $\sigma $ (fraction) computed history by history leaps from 0.04 to 0.25.

n) production by 7 MeV photons in tungsten is rare. All five were produced from photon collisions with ^{183}W, which has an isotopic abundance of 14%. ^{182}W and ^{184}W have higher abundances, but their ($\gamma$, n) production thresholds are 8.1 MeV and 7.4 MeV, respectively; the 7 MeV source photons simulated here are below threshold. Two of the five histories are presented in FTREE format [9] in Listing 1 and Listing 2.

In the listings, each row represents either the creation of a new particle (the particle type is named, preceded by an arrow) or a collision (the collision type is named, preceded by an asterisk). The nth level of indentation indicates the nth generation within the radiation history. +> precedes particles which are siblings of the previous particle of the same generation; −> otherwise. The kinetic energy of the particle is given in the first column. For neutrons below 20 MeV, an


Figure 4. Effect of the number of batches on the estimation of $\sigma $ for E in the pixel at depth 5.0 cm to 5.5 cm on the beam central axis, which spiked in Figure 2. Overlaid on the $\sigma $ computed history by history (in red) are blue lines for: (a) 50 batches; (b) 25 batches; (c) 10 batches; (d) 5 batches. Moving from the 65394th to the 65395th history, $\sigma $ (fraction) computed history by history leaps from 0.05 to 0.32.

energy range is shown in place of the exact energy; this is due to the way FLUKA simulates neutrons below 20 MeV, referred to as the multigroup treatment. The elapsed time since the beginning of the radiation history is given in the second column. The corresponding lateral displacement and depth (in tungsten) are given in the third and fourth columns, respectively. For a simple problem of 7 MeV photon penetration of tungsten, histories are very short compared with those of other applications; a history for the 20 GeV n_TOF facility [15] at CERN easily spans 65,000 rows.

This is an opportune point to revisit the two spikes in Figure 2(b), Figure 3 and Figure 4. The spike at the 24392nd history, at depth between 3.5 cm and 4.0 cm on the beam central axis, in fact traces back to the history in Listing 1, where rows 2 and 3 correspond to that exact pixel. It is the improbable

Listing 1. The 24392nd history, where the 7 MeV source photon undergoes a nuclear collision with ^{183}W at a depth of 3.936 cm in tungsten, 0.4649 ns since the history began. The photoneutron is created with energy 0.8045 MeV. Thereafter it collides and produces secondary photons which undergo photoelectric absorption, emitting electrons. The electron on row 16 is the 6th and the last generation.

Listing 2. The 65395th history, where, prior to the ($\gamma$, n) nuclear collision, the source photon undergoes Rayleigh scattering twice. The six electrons seen from row 90 to row 100 are of the 7th generation, which is the last. They are produced from Møller scattering of a photoelectron, whose parent was a bremsstrahlung photon.

event of a nuclear reaction that causes the sudden spike in energy deposition; this is a manifestation of the under-sampling outlined in Section 1.2. By the same token, Listing 2 accounts for the spike at the 65395th history at depth 5.435 cm on the beam central axis; rows 4 and 5 correspond exactly to that pixel.

4. Conclusion

History-by-history computation of variance has been implemented and compared against batch-by-batch estimation. The issue of under-sampling was addressed. Two spikes in variance, appearing even though $\sigma $ had apparently been decreasing monotonically up to that point, were traced to their origin by examining single histories interaction by interaction using FTREE. The particle undergoing a low-probability event (a photoneutron collision), of entirely correct physics, was uniquely identified. Two radiation histories were presented, featuring invaluable data which would have been washed out by averaging and which could only be available from Monte Carlo simulations. Dynamic versions of this and other histories are available online^{2}.

NOTES

^{1}Variance reduction techniques cause simulations to converge statistically within a shorter amount of computation time by cutting corners during simulations. When correctly applied, quantities of interest to the user will not be biased.

^{2}FTREE online http://www.marychin.org/ftree.html.

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

*Applied Mathematics*,

**8**, 293-306. doi: 10.4236/am.2017.83024.

[1] | Ferrari, A., Sala, P.R., Fassò, A. and Ranft, J. (2005) FLUKA: A Multi-Particle Transport Code. Technical Report CERN-2005-10, INFN/TC 05/11, SLAC-R-773, CERN, INFN, SLAC. |

[2] | Goorley, T., James, M., Booth, T., Brown, F., Bull, J., Cox, L.J., Durkee, J., Elson, J., Fensin, M., Forster, R.A., Hendricks, J., Hughes, H.G., Johns, R., Kiedrowski, B., Martz, R., Mashnik, S., McKinney, G., Pelowitz, D., Prael, R., Sweezy, J., Waters, L., Wilcox, T. and Zukaitis, T. (2016) Features of MCNP6. Annals of Nuclear Energy, 87, 772-783. |

[3] | Kawrakow, I. (2000) Accurate Condensed History Monte Carlo Simulation of Electron Transport. I. EGSnrc, The New EGS4 Version. Medical Physics, 27, 485-498. |

[4] | Sempau, J., Fernández-Varea, J.M., Acosta, E. and Salvat, F. (2003) Experimental Benchmarks of the Monte Carlo Code PENELOPE. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 207, 107-123. |

[5] | Agostinelli, S., Allison, J., Amako, K., Apostolakis, J., Araujo, H., Arce, P., Asai, M., Axen, D., Banerjee, S., Barrand, G., Behner, F., Bellagamba, L., Boudreau, J., Broglia, L., Brunengo, A., Burkhardt, H., Chauvie, S., Chuma, J., Chytracek, R., Cooperman, G., Cosmo, G., Degtyarenko, P., Dell’Acqua, A., Depaola, G., Dietrich, D., Enami, R., Feliciello, A., Ferguson, C., Fesefeldt, H., Folger, G., Foppiano, F., Forti, A., Garelli, S., Giani, S., Giannitrapani, R., Gibin, D., Gómez Cadenas, J.J., González, I., Gracia Abril, G., Greeniaus, G., Greiner, W., Grichine, V., Grossheim, A., Guatelli, S., Gumplinger, P., Hamatsu, R., Hashimoto, K., Hasui, H., Heikkinen, A., Howard, A., Ivanchenko, V., Johnson, A., Jones, F.W., Kallenbach, J., Kanaya, N., Kawabata, M., Kawabata, Y., Kawaguti, M., Kelner, S., Kent, P., Kimura, A., Kodama, T., Kokoulin, R., Kossov, M., Kurashige, H., Lamanna, E., Lampén, T., Lara, V., Lefebure, V., Lei, F., Liendl, M., Lockman, W., Longo, F., Magni, S., Maire, M., Medernach, E., Minamimoto, K., Mora de Freitas, P., Morita, Y., Murakami, K., Nagamatu, M., Nartallo, R., Nieminen, P., Nishimura, T., Ohtsubo, K., Okamura, M., O’Neale, S., Oohata, Y., Paech, K., Perl, J., Pfeiffer, A., Pia, M.G., Ranjard, F., Rybin, A., Sadilov, S., Di Salvo, E., Santin, G., Sasaki, T., Savvas, N., Sawada, Y., Scherer, S., Sei, S., Sirotenko, V., Smith, D., Starkov, N., Stoecker, H., Sulkimo, J., Takahata, M., Tanaka, S., Tcherniaev, E., Safai Tehrani, E., Tropeano, M., Truscott, P., Uno, H., Urban, L., Urban, P., Verderi, M., Walkden, A., Wander, W., Weber, H., Wellisch, J.P., Wenaus, T., Williams, D.C., Wright, D., Yamada, T., Yoshida, H. and Zschiesche, D. (2003) Geant4—A Simulation Toolkit. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 506, 250-303. |

[6] | Battistoni, G., Bauer, J., Boehlen, T.T., Cerutti, F., Chin, M.P.W., Dos Santos Augusto, R., Ferrari, A., Garcia Ortega, P., Kozlowska, W., Magro, G., Mairani, A., Parodi, K., Sala, P.R., Schoofs, P., Tessonnier, T. and Vlachoudis, V. (2016) The FLUKA Code: An Accurate Simulation Tool for Particle Therapy. Frontiers in Oncology, 6, 116. |

[7] | Battistoni, G., Bohlen, T., Cerutti, F., Chin, M.P.W., Luigi, S.E., Fassò, A., Ferrari, A., Lechner, A., Empl, A., Mairani, A., Mereghetti, A., Garcia Ortega, P., Ranft, J., Roesler, S., Sala, P.R., Vlachoudis, V. and Smirnov, G. (2015) Overview of the FLUKA Code. Annals of Nuclear Energy, 82, 10-18. |

[8] | Bohlen, T., Cerutti, F., Chin, M.P.W., Fassò, A., Ferrari, A., Garcia Ortega, P., Mairani, A., Sala, P.R., Smirnov, G. and Vlachoudis, V. (2014) The FLUKA Code: Developments and Challenges for High Energy and Medical Applications. Nuclear Data Sheets, 120, 211-214. |

[9] | Chin, M.P.W. (2015) FTREE: Single-History Monte Carlo Analysis for Radiation Detection and Measurement. Journal of Radioanalytical & Nuclear Chemistry, 306, 565. |

[10] | Chin, M.P.W., Cerutti, F., Ferrari, A., Mairani, A. and Sala, P.R. (2011) Sample Histories from Carbon Therapy of a Human Brain. Transactions American Nuclear Society, 105, 55-56. |

[11] | Chin, M.P.W. and Spyrou, N.M. (2007) Event-by-Event Monte Carlo Tracking of Neutron-Nucleus Collisions in Neutron Detectors. Transactions American Nuclear Society, 97, 288-289. |

[12] | Chin, M.P.W., Cerutti, F., Ferrari, A., Garcia Ortega, P. and Sala, P.R. (2014) Monte Carlo Latching Studies of Prompt-Gamma Detection during Hadron Therapy. IEEE Transactions on Nuclear Science, 61, 2540-2546. |

[13] | Sempau, J., Sánchez-Reyes, A., Salvat, F., Tahar, H.O., Jiang, S.B. and Fernández-Varea, J.M. (2001) Monte Carlo Simulation of Electron Beams from an Accelerator Head Using PENELOPE. Physics in Medicine and Biology, 46, 1163. |

[14] | Walters, B.R.B., Kawrakow, I. and Rogers, D.W.O. (2002) History by History Statistical Estimators in the BEAM Code System. Medical Physics, 29, 2745-2752. |

[15] | Chiaveri, E., Altstadt, S., Andrzejewski, J., Audouin, L., Barbagallo, M., Bécares, V., Becvár, F., Belloni, F., Berthoumieux, E., Billowes, J., Boccone, V., Bosnar, D., Brugger, M., Calviani, M., Calvino, F., Cano-Ott, D., Carrapico, C., Cerutti, F., Chin, M.P.W., Colonna, N., Cortés, G., Cortés-Giraldo, M.A., Diakaki, M., Domingo-Pardo, C., Duran, I., Dressler, R., Dzysiuk, N., Eleftheriadis, C., Ferrari, A., Fraval, K., Ganesan, S., García, A.R., Giubrone, G., Gómez-Hornillos, M.B., Gonialves, I.F., González-Romero, E., Griesmayer, E., Guerrero, C., Gunsing, F., Gurusamy, P., Hernández-Prieto, A., Jenkins, D.G., Jericha, E., Kadi, Y., Kappeler, F., Karadimos, D., Kivel, N., Koehler, P., Kokkoris, M., Krticka, M., Kroll, J., Lampoudis, C., Langer, C., Leal-Cidoncha, E., Lederer, C., Leeb, H., Leong, L.S., Losito, R., Mallick, A., Manousos, A., Marganiec, J., Martínez, T., Massimi, C., Mastinu, P.F., Mastromarco, M., Meaze, M., Mendoza, E., Mengoni, A., Milazzo, P.M., Mingrone, F., Mirea, M., Mondalaers, W., Paradela, C., Pavlik, A., Perkowski, J., Plompen, A., Praena, J., Quesada, J.M., Rauscher, T., Reifarth, R., Riego, A., Robles, M.S., Roman, F., Rubbia, C., Sabaté-Gilarte, M., Sarmento, R., Saxena, A., Schillebeeckx, P., Schmidt, S., Schumann, D., Tagliente, G., Tain, J.L., Tarrío, D., Tassan-Got, L., Tsinganis, A., Valenta, S., Vannini, G., Variale, V., Vaz, P., Ventura, A., Versaci, R., Vermeulen, M.J., Vlachoudis, V., Vlastou, R., Wallner, A., Ware, T., Weigand, M., Weiss, C., Wright, T. and Zugec, P. (2014) The CERN n_TOF Facility: Neutron Beams Performances for Cross Section Measurements. Nuclear Data Sheets, 119, 1-4. |

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.