Single-to-multi-fidelity history-dependent learning with uncertainty quantification and disentanglement: Application to data-driven constitutive modeling
First publication: November 4, 2025
Data-driven learning is generalized to history-dependent multi-fidelity data, while quantifying epistemic uncertainty and disentangling it from data noise (aleatoric uncertainty). The generalization is hierarchical and adapts to different learning scenarios: from training the simplest single-fidelity deterministic neural networks up to the proposed multi-fidelity variance-estimation Bayesian recurrent neural networks. The methodology is demonstrated on data-driven constitutive modeling scenarios for history-dependent plasticity of elastoplastic biphasic materials, covering multiple fidelities with and without aleatoric uncertainty (noise). The method accurately predicts the response and quantifies model error, while also discovering the noise distribution when present. The versatility and generality of the proposed method open opportunities for future real-world applications across scientific and engineering domains, especially the most challenging cases involving design and analysis under uncertainty.
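To illustrate the disentanglement idea described above, the sketch below shows one standard way to separate epistemic from aleatoric uncertainty when each probabilistic predictor (e.g. a posterior sample of a Bayesian recurrent network) outputs both a mean and a noise variance: the spread of the means across samples is the epistemic part, and the average predicted variance is the aleatoric part. This is a minimal, hypothetical sketch in plain NumPy, not this package's API; the function name and array shapes are assumptions for illustration.

```python
import numpy as np

def disentangle_uncertainty(means, variances):
    """Decompose predictive uncertainty from an ensemble of
    mean/variance predictors (hypothetical helper, not the package API).

    means, variances: arrays of shape (n_samples, n_points), where each
    row is one posterior sample's predicted mean and noise variance.
    """
    predictive_mean = means.mean(axis=0)
    epistemic = means.var(axis=0)       # spread of means -> model uncertainty
    aleatoric = variances.mean(axis=0)  # average predicted noise variance
    total = epistemic + aleatoric       # total predictive variance
    return predictive_mean, epistemic, aleatoric, total

# Toy usage: 3 posterior samples evaluated at 2 points.
means = np.array([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]])
variances = np.array([[0.1, 0.2], [0.1, 0.2], [0.1, 0.2]])
mu, epi, ale, tot = disentangle_uncertainty(means, variances)
```

In the noise-free scenarios mentioned above, the aleatoric term would shrink toward zero while the epistemic term still reflects model error away from the training data.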
Authors:
- Jiaxiang Yi (J.Yi@tudelft.nl)
Author affiliation:
- Delft University of Technology (Bessa Research Group)
Maintainer:
- Jiaxiang Yi (J.Yi@tudelft.nl)
Maintainer affiliation:
- Delft University of Technology (Bessa Research Group)
If you find any bugs or other issues with this package, please report them via the GitHub issue tracker.
Copyright (c) 2025, Jiaxiang Yi
All rights reserved.
This project is licensed under the BSD 3-Clause License. See LICENSE for the full license text.