Clefs CEA n°64 - Les voix de la recherche - Journey to the heart of Big Data

# Neutronics and big data

Neutronics is the study of the path of neutrons through matter. The neutron population in a reactor core is governed by equations whose solution yields, in particular, the reactivity, the power released by the nuclear fission reactions for ultimate conversion into electrical power, and the isotopic composition, in all operating conditions (normal, incident and accident).

Complete version of the article published in Clefs CEA n°64 - Journey to the heart of Big Data.

Quite apart from measurements, these physical parameters of interest are obtained by running a neutronics calculation called the "core calculation". Once they have been obtained, it is then possible to access key safety parameters: the hot-spot factors, corresponding to the power peaks in the core; the reactivity coefficients, which reflect the sensitivity of the chain reaction to variations in physical parameters such as the temperatures of the nuclear fuel and of the moderator; and the anti-reactivity margin, which indicates the amplitude of the reduction in core reactivity (or subcriticality level) during a reactor scram.

## Physical scales used in neutronics

- Space: 10⁻¹⁵ m: neutron-nucleus interaction distance; 10⁻³ to 10⁻¹ m: neutron mean free path before interaction; 10⁻¹ to 1 m: neutron mean free path before absorption; 1 to several tens of metres: dimensions of a nuclear reactor.
- Time: 10⁻²³ to 10⁻¹⁴ s: neutron-nucleus interaction; 10⁻⁶ to 10⁻³ s: lifespan of neutrons in reactors; 10⁻² s: transient in a criticality accident; 10 s: delay in emission of delayed neutrons; 100 s: transient to reach thermal equilibrium; 1 day: transient due to xenon-135; 1 to 4 years: irradiation of a nuclear fuel; 50 years: approximate lifetime of a nuclear power plant; 300 years: "radiation extinction" of fission products (a few exceptions); 10³ to 10⁹ years: long-lived nuclides.
- Energy: 20 MeV to 10⁻¹¹ MeV (or even a few 10⁻¹³ MeV for ultracold neutrons). For spallation systems*: a few GeV to 10⁻¹¹ MeV.

Running this core calculation requires several categories of input data:

- nuclear data (cross-sections, etc.) characterising the possible interactions between a neutron and a target nucleus, as well as the resulting sources of radiation,
- technological data specifying the shape, dimensions, compositions of the various structures and components of the nuclear reactor,
- the reactor operating data and power history,
- the data specific to the numerical methods used, such as the spatial, energy, angular, time meshes for deterministic modelling, or the number of particles to be simulated in a probabilistic calculation.
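Purely for illustration, these four categories of input data could be gathered into a single structure; the field names below are hypothetical and do not belong to any CEA code:

```python
from dataclasses import dataclass, field

@dataclass
class CoreCalculationInput:
    """Illustrative grouping of the input data of a core calculation (hypothetical names)."""
    # nuclear data: e.g. cross-sections per (nuclide, reaction), in barns
    cross_sections: dict = field(default_factory=dict)
    # technological data: shape, dimensions and compositions of the reactor components
    geometry: dict = field(default_factory=dict)
    # operating data: power history as (time in days, power fraction) pairs
    power_history: list = field(default_factory=list)
    # numerical-method data: meshes (deterministic) or particle count (probabilistic)
    solver_settings: dict = field(default_factory=dict)
```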

## Efficient reactor core optimisation strategy

This strategy relies on an evolutionary algorithm which iterates the following steps:

- randomly generating a set of N individuals, called the initial population, in the optimisation space;
- eliminating individuals who do not meet a set of fixed constraints;
- classifying the individuals according to the degree to which the selected criteria are met (the “ranking” as defined by Pareto);
- creating a new population by selection, recombination and mutation;
- evaluating the new population produced against the defined criteria.
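The steps above can be sketched as a minimal evolutionary loop with Pareto ranking. This is an illustrative toy, not the CEA implementation; the population size, mutation amplitude and selection scheme are arbitrary choices:

```python
import random

def dominates(a, b):
    """True if criteria vector a Pareto-dominates b (all criteria <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(population, evaluate):
    """Pareto ranking: rank 0 is the non-dominated front, rank 1 the next front, etc."""
    scores = [evaluate(ind) for ind in population]
    rank, remaining, r = {}, list(range(len(population))), 0
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i]) for j in remaining if j != i)]
        for i in front:
            rank[i] = r
        remaining = [i for i in remaining if i not in front]
        r += 1
    return [rank[i] for i in range(len(population))]

def optimise(evaluate, feasible, n=40, generations=20, dim=3, bounds=(0.0, 1.0)):
    rng = random.Random(0)
    # 1. randomly generate the initial population in the optimisation space
    pop = [[rng.uniform(*bounds) for _ in range(dim)] for _ in range(n)]
    for _ in range(generations):
        # 2. eliminate individuals that do not meet the fixed constraints
        pop = [ind for ind in pop if feasible(ind)] or pop
        # 3. classify individuals by degree of satisfaction of the criteria (Pareto ranking)
        ranks = pareto_rank(pop, evaluate)
        # 4. create a new population by selection, recombination and mutation
        parents = [ind for ind, _ in sorted(zip(pop, ranks), key=lambda t: t[1])]
        parents = parents[:max(2, len(parents) // 2)]
        pop = []
        while len(pop) < n:
            a, b = rng.sample(parents, 2)
            pop.append([(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)])
        # 5. the new population is evaluated at the start of the next iteration
    return pop
```

Each generation only needs the scores returned by `evaluate`, which is what makes the scheme easy to parallelise over many processors, one (or a batch of) individual(s) per core.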

## Application to optimisation of fuel reloading

## Application to the design of the ASTRID reactor

For this reactor, the criteria to be minimised are: the variation in reactivity at the end of the cycle; the void coefficient; the maximum sodium temperature during a ULOF* accident sequence; an economic criterion linked to the mass of nuclear fuel; and the volume of the core. The constraints to be met are:

- the maximum variation in reactivity over a cycle: 2600 pcm*;
- the regeneration gain range: −0.1 to +0.1;
- the maximum number of dpa*: 140;
- the maximum core volume: 13.7 m³;
- the maximum sodium temperature during a ULOF accident sequence: 1200°C;
- the maximum fuel temperature at the end of the cycle: 2500°C.
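Assuming a hypothetical `design` dictionary (the key names below are invented for illustration), these constraints translate into a simple feasibility test of the kind used to eliminate individuals from the population:

```python
def feasible(design):
    """Check the ASTRID design constraints quoted above.

    `design` is a dict with hypothetical, purely illustrative key names."""
    return (
        design["delta_reactivity_pcm"] <= 2600      # max reactivity swing over a cycle
        and -0.1 <= design["breeding_gain"] <= 0.1  # regeneration gain range
        and design["dpa"] <= 140                    # max displacements per atom
        and design["core_volume_m3"] <= 13.7        # max core volume
        and design["ulof_na_temp_c"] <= 1200        # max sodium temperature (ULOF)
        and design["fuel_temp_eoc_c"] <= 2500       # max fuel temperature, end of cycle
    )
```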

To carry out this study, the performance of more than ten million reactor configurations was evaluated in order to select the optimum ones with respect to the criteria and constraints defined. The search space is explored by means of meta-models (neural networks, for example) in order to reduce the computing time.
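A meta-model replaces the expensive core calculation by a cheap approximation fitted to a few previously computed points. As a minimal illustration, here is a least-squares linear fit standing in for the neural networks mentioned in the text (a sketch under that simplifying assumption, not the method actually used):

```python
def fit_surrogate(xs, ys):
    """Fit a least-squares linear meta-model y ~ a*x + b to reference points (xs, ys)
    obtained from a small number of expensive calculations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

# `expensive` stands in for a costly core calculation (hypothetical example)
expensive = lambda x: 3.0 * x + 1.0
train_x = [0.0, 1.0, 2.0]
surrogate = fit_surrogate(train_x, [expensive(x) for x in train_x])
```

Once fitted, the surrogate can be queried millions of times at negligible cost, with the true calculation re-run only on the most promising candidates.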

When applied to the ASTRID design studies, this method led to the characterisation of a Pareto-optimal population (in other words, one in which no criterion can be improved without penalising the others) of 25,000 individuals, i.e. reactors. It gave results such as those illustrated in the following figure by means of a parallel-coordinates visualisation (also called COBWEB), in which each line corresponds to an individual (a reactor) and each axis represents the variation range, within the population, of one of the characteristics (criteria, parameters, properties) reflecting the reactor performance criteria mentioned above.
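In such a parallel-coordinates plot, each characteristic axis is rescaled to [0, 1] so that all the lines share a common canvas. The rescaling step alone can be written as:

```python
def normalise_columns(rows):
    """Map each column (one characteristic of the population) to [0, 1],
    ready to be drawn as one axis of a parallel-coordinates (COBWEB) plot."""
    cols = list(zip(*rows))                      # one tuple per characteristic
    spans = [(min(c), max(c)) for c in cols]     # per-axis variation range
    return [[(v - lo) / (hi - lo) if hi > lo else 0.5
             for v, (lo, hi) in zip(row, spans)]
            for row in rows]
```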

These studies showed the benefits of evolutionary algorithms run on HPC resources for solving complex multi-criteria optimisation problems with constraints. HPC computing resources today make it possible to use meta-heuristics, naturally exploiting the parallelism of the computing means and combining various search strategies. Taken as a whole, this work drew attention to possible improvements for handling so-called "many-objective" problems, in which the number of objectives is high. Other fields of research have also emerged, in particular optimisation under uncertainties, whether aleatory (random) and/or epistemic.


### Contributors

- Gilles Arnaud is a researcher in the Thermohydraulics and Fluid Mechanics Service (Nuclear Energy Division/Systems and Structures Modelling Department).
- Jean-Marc Martinez is a researcher in the Thermohydraulics and Fluid Mechanics Service (Nuclear Energy Division/Systems and Structures Modelling Department).
- Karim Ammar is a researcher in the Reactor Studies and Applied Mathematics Service (Nuclear Energy Division/Systems and Structures Modelling Department).
- Jean-Michel Do is a researcher in the Reactor Studies and Applied Mathematics Service (Nuclear Energy Division/Systems and Structures Modelling Department).
