Generation of Training Data for a Gaussian Process Based Surrogate Model


Introduction

This project was a follow-up to our initial project aimed at setting up the Lichtenberg Cluster for use within the research projects of the research group System Reliability, Adaptive Structures, and Machine Acoustics. The frame of this project application is the DFG project “Efficient statistical parameter calibration for complex structural dynamics systems under consideration of model uncertainty” (project number 460838752). Within this project, the demonstrator of the Collaborative Research Center 805 (“Control of Uncertainty in Load-Carrying Structures in Mechanical Engineering”) undergoes statistical model calibration to improve its predictive capability. Precise models are of paramount importance in the virtualization of the modern industrial product development process, since they help replace the costly development of prototypes and support the decision-making process.

In preparation for the statistical parameter calibration, a sensitivity analysis was conducted that quantifies how much of the variation in the model outputs can be attributed to the variation in the respective model parameters. This information helps identify influential and non-influential model parameters, so that the number of parameters in the subsequent model calibration can be reduced by fixing those without influence. For a computationally efficient parameter calibration, we use a two-stage approach that combines the accuracy of an expensive high-fidelity model with the speed of a less accurate low-fidelity model. The high-fidelity model is a MATLAB model of a complex multibody system. The low-fidelity model is a PCE-Kriging model that is trained on many evaluations of the high-fidelity model. Within this project, the Lichtenberg Cluster was used to generate the number of high-fidelity model evaluations required to train the low-fidelity model.
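The variance-based sensitivity analysis mentioned above can be illustrated with a minimal sketch. The toy model below is a hypothetical stand-in (not the actual multibody system), and the pick-freeze estimator shown is one standard way to estimate first-order Sobol indices:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for the high-fidelity model: the first
    # input is influential, the second contributes much less variance.
    return np.sin(x[:, 0]) + 0.1 * x[:, 1] ** 2

n, d = 100_000, 2
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))

yA = model(A)
var_y = yA.var()

# First-order Sobol indices via the pick-freeze estimator:
# freeze column i of B to that of A and correlate the outputs.
S = []
for i in range(d):
    Bi = B.copy()
    Bi[:, i] = A[:, i]
    S.append(np.mean(model(Bi) * (yA - yA.mean())) / var_y)
    print(f"S_{i + 1} = {S[i]:.3f}")
```

For this additive toy model, the indices sum to approximately one, and the clearly dominant first index would mark the second parameter as a candidate for fixing in the subsequent calibration.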

Methods

The low-fidelity model is a Gaussian process (GP) model with a polynomial chaos expansion (PCE) as its mean function. This combination is termed a PCE-Kriging model and originates from the work of Schöbi, who combines the good global approximation of the PCE model with the good local approximation of the GP [1]. First, high-fidelity model evaluations are generated as training data. The PCE model is trained on this data first; subsequently, the GP model is trained using the same training data with the PCE model as its mean function.
The PCE-Kriging model was generated using the MATLAB toolbox UQLab of ETH Zurich.
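The two-step trend-plus-GP construction can be sketched in a simplified one-dimensional form. This is not the UQLab implementation: a plain least-squares polynomial stands in for the PCE, and a fixed RBF kernel stands in for the tuned Kriging correlation; all function names and parameters below are illustrative:

```python
import numpy as np

def high_fidelity(x):
    # Hypothetical stand-in for the expensive high-fidelity model
    return x * np.sin(x)

# Step 0: training data from the "high-fidelity" model
X = np.linspace(0.0, 10.0, 40)
y = high_fidelity(X)

# Step 1: fit the global polynomial trend by least squares
# (an actual PCE would use an orthogonal polynomial basis)
deg = 4
coeffs = np.polyfit(X, y, deg)

# Step 2: fit a GP to the residuals of the trend (RBF kernel with a
# fixed length scale; real Kriging would optimize the hyperparameters)
def rbf(a, b, ell=0.8):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

residuals = y - np.polyval(coeffs, X)
K = rbf(X, X) + 1e-6 * np.eye(len(X))  # small nugget for stability
alpha = np.linalg.solve(K, residuals)

def predict(x_new):
    # Surrogate prediction = polynomial trend + GP correction
    return np.polyval(coeffs, x_new) + rbf(x_new, X) @ alpha

x_grid = np.linspace(0.0, 10.0, 200)
max_err = np.max(np.abs(predict(x_grid) - high_fidelity(x_grid)))
trend_err = np.max(np.abs(np.polyval(coeffs, x_grid) - high_fidelity(x_grid)))
print(f"trend only: {trend_err:.3f}, trend + GP: {max_err:.3f}")
```

The GP correction of the residuals is what makes the combined surrogate far more accurate than the polynomial trend alone, which is precisely the motivation for PCE-Kriging.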

Results

The training yielded very accurate surrogate models with small validation and test errors while requiring relatively few training points. Several validation schemes were applied and confirmed this finding. The PCE-Kriging model proved to be an excellent choice for the developed calibration algorithm.
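The validation schemes referred to above typically include hold-out and k-fold cross-validation. A minimal sketch, with a toy target function and a plain polynomial surrogate standing in for the PCE-Kriging model, illustrates how a cross-validated error estimate is computed:

```python
import numpy as np

rng = np.random.default_rng(2)

def target(x):
    # Hypothetical stand-in for the high-fidelity model
    return np.sin(x)

X = rng.uniform(0.0, 2.0 * np.pi, 60)
y = target(X)

def kfold_rmse(X, y, deg, k=5):
    """Cross-validated RMSE of a polynomial surrogate of degree deg."""
    idx = rng.permutation(len(X))
    mse = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)           # all points not in this fold
        c = np.polyfit(X[train], y[train], deg)   # fit on training folds
        mse.append(np.mean((np.polyval(c, X[fold]) - y[fold]) ** 2))
    return float(np.sqrt(np.mean(mse)))

for deg in (1, 3, 5, 7):
    print(f"degree {deg}: CV-RMSE = {kfold_rmse(X, y, deg):.4f}")
```

Because every point is held out exactly once, the cross-validated error is a less optimistic estimate of generalization error than the training error, which is why it is the quantity used to judge surrogate quality.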

Discussion

Based on the results achieved within this project, we could continue as planned with the work program of our DFG project. Using the results of this project, we successfully applied our method for efficient statistical model calibration to the demonstrator of the Collaborative Research Center 805.
