Bayesian uncertainty quantification of computer models with efficient calibration and computation
The use of mathematical models, typically implemented in the form of computer code, has proliferated in many scientific applications, such as nuclear physics and climate research, as a means of solving complex problems. The computational and statistical tools of Uncertainty Quantification (UQ) are instrumental in assessing how accurately a computer model describes a physical process. The Bayesian framework for UQ has become the dominant approach because it provides a principled way of quantifying uncertainty in the language of probabilities. The ever-growing access to high-performance computing in scientific communities has meanwhile created the need to develop next-generation tools and theory for the analysis of computer models. Motivated by practical research problems, this dissertation proposes novel computational tools and UQ methodology aimed at enhancing the quality of computer models, which leads to improved predictive capability and a more "honest" UQ.

First, we consider model uncertainty, which arises when several competing models are available to describe the same or a similar physical phenomenon. One of the historically dominant methods to account for this source of uncertainty is Bayesian Model Averaging (BMA). We perform a systematic analysis of prediction errors and show that the use of the BMA posterior mean predictor leads to a reduction in mean squared error. In response to a recurrent research scenario in nuclear physics, we extend BMA to the situation where models are defined on non-identical study regions. We illustrate our methodology via pedagogical simulations and applications to forecasting nuclear observables, which exhibit improvements in both prediction error and empirical coverage probability.

In the second part of this dissertation, we concentrate on individual computer models, with a particular focus on those that are computationally too expensive to be used directly for predictions.
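The BMA posterior mean predictor described in the first part above can be sketched in a few lines. The following is a minimal illustration only, not the dissertation's implementation: the predictions, log evidences, and the uniform model prior are made-up assumptions for the sake of the example.

```python
import numpy as np

def bma_posterior_mean(predictions, log_evidences):
    """Posterior-mean prediction under Bayesian Model Averaging.

    predictions:   per-model posterior mean predictions of an observable
    log_evidences: per-model log marginal likelihoods p(data | model)
    Assumes a uniform prior over the candidate models.
    """
    log_w = log_evidences - np.max(log_evidences)  # stabilize exponentiation
    weights = np.exp(log_w)
    weights /= weights.sum()                       # posterior model probabilities
    return weights @ predictions, weights

# Hypothetical numbers for three competing models of the same observable
preds = np.array([1.2, 1.5, 0.9])
log_ev = np.array([-10.0, -11.0, -14.0])
mean, w = bma_posterior_mean(preds, log_ev)
```

The averaged prediction is a convex combination of the individual predictions, so it always lies within their range; models with higher evidence dominate the weights.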
Furthermore, we consider computer models that need to be calibrated against experimental observations because they depend on inputs whose values are generally unknown. We develop an efficient algorithm based on variational Bayes inference (VBI) for the calibration of computer models with Gaussian processes (GPs). To preserve the efficiency of VBI in the presence of dependent data, we adopt a pairwise decomposition of the data likelihood using vine copulas, which separate the information on the dependence structure of the data from their marginal distributions. We provide both theoretical and empirical evidence for the computational scalability of our algorithm and demonstrate the opportunities offered by our method on a real-data example through the calibration of the Liquid Drop Model of nuclear binding energies.

As a fast and easy-to-implement alternative to the fully Bayesian treatment (such as the VBI approach), we propose an empirical Bayes approach to computer-enabled predictions of physical quantities. We offer a new perspective on the Bayesian calibration framework with GPs and provide its representation as a Bayesian hierarchical model. Consequently, posterior consistency for the physical process is established, assuming certain smoothness properties of the GP priors and the existence of a strongly consistent estimator of the noise scale. A simulation study and a real-data example that support the consistency and efficiency of the empirical Bayes method are provided as well.
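The empirical Bayes idea above, plugging a consistent estimate of the noise scale into a GP model rather than placing a prior on it, can be sketched schematically. Everything below (the RBF kernel, the synthetic data, and the residual-based noise estimator) is an illustrative assumption, not the dissertation's method or data.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    """Squared-exponential covariance between two 1-D input vectors."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, sigma2, length_scale=1.0):
    """GP posterior mean with a plugged-in noise variance sigma2."""
    K = rbf_kernel(x_train, x_train, length_scale) + sigma2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, length_scale)
    return Ks @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 30)
y = np.sin(x) + 0.1 * rng.standard_normal(30)  # noisy synthetic observations

# Empirical Bayes step (illustrative): estimate the noise variance from
# residuals of a rough moving-average smoother instead of a prior on it.
resid = y - np.convolve(y, np.ones(5) / 5, mode="same")
sigma2_hat = resid[2:-2].var()  # trim boundary effects of the smoother

y_pred = gp_posterior_mean(x, y, x, sigma2_hat)
```

The point of the sketch is the two-stage structure: a data-driven point estimate of the noise scale first, then standard GP conditioning with that estimate plugged in, which avoids the cost of a fully Bayesian treatment.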
- In Collections: Electronic Theses & Dissertations
- Copyright Status: In Copyright
- Material Type: Theses
- Authors: Kejzlar, Vojtech
- Thesis Advisors: Maiti, Tapabrata; Viens, Frederi
- Committee Members: Ramamoorthi, Ramanathapuram V.; Nazarewicz, Witold
- Date: 2020
- Subjects: Statistics
- Program of Study: Statistics - Doctor of Philosophy
- Degree Level: Doctoral
- Language: English
- Pages: 149
- ISBN: 9798662478008
- Permalink: https://doi.org/doi:10.25335/3bqy-0n20