E-book, English, 470 pages
Azmy / Sartori: Nuclear Computational Science
1st edition, 2010
ISBN: 978-90-481-3411-3
Publisher: Springer Netherlands
Format: PDF
Copy protection: PDF watermark
A Century in Review
Nuclear engineering has made extensive progress over the past century. With specific reference to the mathematical theory and computational science underlying the discipline, advances in areas such as high-order discretization methods, Krylov subspace methods, and iteration acceleration have grown steadily.
Nuclear Computational Science: A Century in Review addresses these topics and many more, including topics with special ties to the first half of the century and topics centered on the unique combination of nuclear engineering, computational science, and mathematical theory. Across its eight chapters, the book treats a carefully selected set of problems representing a variety of issues, presenting the reader with a wealth of information clearly and concisely. The comprehensive coverage and the stature of the contributing authors combine to make this a unique landmark publication.
Aimed at intermediate to advanced academic readers, this book will appeal to researchers and students interested in the progression of mathematical theory and its application to nuclear computational science.
Target audience
Research
Authors/Editors
Further information & material
Advances in Discrete-Ordinates Methodology
Second-Order Neutron Transport Methods
Monte Carlo Methods
Reactor Core Methods
Resonance Theory in Reactor Applications
Sensitivity and Uncertainty Analysis of Models and Data
Criticality Safety Methods
Nuclear Reactor Kinetics: 1934–1999 and Beyond
"Chapter 6 Sensitivity and Uncertainty Analysis of Models and Data (p. 291-293)
Dan Gabriel Cacuci
6.1 Introduction
This chapter highlights the characteristic features of statistical and deterministic methods currently used for sensitivity and uncertainty analysis of measurements and computational models. The symbiotic linchpin between the objectives of uncertainty analysis and those of sensitivity analysis is provided by the “propagation of errors” equations, which combine parameter uncertainties with the sensitivities of responses (i.e., results of measurements and/or computations) to these parameters.
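That linkage can be made explicit. In generic first-order form (a sketch using assumed notation, with R for the response and alpha_i for the parameters, not necessarily the chapter's own symbols), the "propagation of errors" equation reads:

```latex
% First-order "propagation of errors" (sandwich rule).
% S_i: local sensitivities evaluated at the nominal parameters alpha^0,
% V_alpha: covariance matrix of the parameters.
S_i = \left.\frac{\partial R}{\partial \alpha_i}\right|_{\alpha^{0}},
\qquad
\operatorname{var}(R) \;\approx\; \sum_{i,j=1}^{n} S_i \,\operatorname{cov}(\alpha_i,\alpha_j)\, S_j
\;=\; \mathbf{S}^{\mathsf{T}} V_{\alpha}\, \mathbf{S}
```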
It is noted that all statistical uncertainty and sensitivity analysis methods commence with the “uncertainty analysis” stage, and only subsequently proceed to the “sensitivity analysis” stage. This procedural path is the reverse of the procedural (and conceptual) path underlying the deterministic methods of sensitivity and uncertainty analysis, where the sensitivities are determined before being used for uncertainty analysis.
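To make the contrast concrete, the following is a minimal sketch (with a hypothetical scalar model f and illustrative names only, not the chapter's examples) of the two procedural orders: the statistical route samples the parameters and obtains the response distribution first, then infers sensitivities from the samples; the deterministic route computes local sensitivities first, then propagates the parameter covariance through them.

```python
import numpy as np

# Hypothetical model: scalar response as a function of two parameters.
def f(a):
    return a[0] ** 2 + 3.0 * a[0] * a[1]

a0 = np.array([1.0, 2.0])            # nominal parameter values
cov = np.diag([0.01, 0.04])          # assumed parameter covariance

# --- Statistical route: uncertainty first, sensitivities second ---
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(a0, cov, size=10_000)
responses = np.array([f(a) for a in samples])
var_stat = responses.var()           # stage 1: response uncertainty
# Stage 2: sensitivities recovered afterwards, e.g. by linear regression.
coeffs, *_ = np.linalg.lstsq(samples - a0,
                             responses - responses.mean(), rcond=None)

# --- Deterministic route: sensitivities first, uncertainty second ---
eps = 1e-6
S = np.array([(f(a0 + eps * e) - f(a0 - eps * e)) / (2 * eps)
              for e in np.eye(2)])   # stage 1: local sensitivities
var_det = S @ cov @ S                # stage 2: "sandwich rule" propagation

print(var_stat, var_det)             # the two variance estimates agree closely
```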
In particular, it is emphasized that the Adjoint Sensitivity Analysis Procedure (ASAP) is the most efficient method for computing exactly the local sensitivities for large-scale nonlinear problems comprising many parameters. This efficiency is underscored with illustrative examples. The computational resources required by the most popular statistical and deterministic methods are discussed comparatively. A brief discussion of unsolved fundamental problems, open for future research, concludes this chapter.
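The efficiency claim can be illustrated on a simple linear model (a sketch under assumed notation, not the chapter's own example): for A x = b with response R = c·x, the sensitivities of R to all n components of b follow from a single adjoint solve A^T lam = c, since dR/db_i = lam_i, whereas the forward (direct) approach needs one additional solve per parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned system matrix
b = rng.normal(size=n)                        # parameters of interest
c = rng.normal(size=n)                        # response functional: R = c @ x

# Forward (direct) approach: one extra solve per perturbed parameter.
x = np.linalg.solve(A, b)
R = c @ x
eps = 1e-7
S_forward = np.array([
    (c @ np.linalg.solve(A, b + eps * e) - R) / eps   # n solves in total
    for e in np.eye(n)
])

# Adjoint approach (ASAP-style): ONE adjoint solve yields all n sensitivities.
lam = np.linalg.solve(A.T, c)    # adjoint equation: A^T lam = c
S_adjoint = lam                  # dR/db_i = lam_i for this linear model

print(np.allclose(S_forward, S_adjoint, atol=1e-5))   # True
```

For a model with thousands of parameters and a handful of responses, this one-adjoint-solve-per-response structure is what makes the adjoint procedure so economical compared with parameter-by-parameter forward perturbation.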
6.2 Sensitivities and Uncertainties in Measurements and Computational Models: Basic Concepts
In practice, scientists and engineers often face questions such as: How well does the model under consideration represent the underlying physical phenomena? What confidence can one have that the numerical results produced by the model are correct? How far can the calculated results be extrapolated? How can the predictability and/or extrapolation limits be extended and/or improved?
Answers to such questions are provided by sensitivity and uncertainty analyses. As computer-assisted modeling and analyses of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable investigative scientific tools in their own right. Since computers operate on mathematical models of physical reality, computed results must be compared to experimental measurements whenever possible.
Such comparisons, though, invariably reveal discrepancies between computed and measured results. The sources of such discrepancies are the inevitable errors and uncertainties in the experimental measurements and in the mathematical models. In practice, the exact forms of mathematical models and/or the exact values of data are not known, so they must be estimated. The use of observations to estimate the underlying features of models forms the objective of statistics.
This branch of mathematical science embodies both inductive and deductive reasoning, encompassing procedures for estimating parameters from incomplete knowledge and for refining prior knowledge by consistently incorporating additional information. Thus, assessing and, subsequently, reducing uncertainties in models and data require the combined use of statistics together with the axiomatic, frequency, and Bayesian interpretations of probability."