Physics-based modeling and simulation have become indispensable in aerospace applications ranging from aircraft design to uncertainty quantification. However, as designs are pushed to extreme operating conditions and decisions are driven by increasingly fine-grained tradeoffs among noise, safety, cost, and efficiency, greater demands are placed on simulation fidelity. High fidelity requires fine spatiotemporal resolution, which can lead to extreme-scale, nonlinear dynamical-system models whose simulation consumes months on thousands of computing cores. A ‘computational barrier’ thus arises that precludes truly high-fidelity models from being used in time-critical aerospace applications such as design under uncertainty, structural health monitoring, and model predictive control.

In this talk, I will present several advances in nonlinear model reduction that exploit simulation data to overcome this barrier. These methods combine concepts from machine learning, computational mechanics, and optimization to produce low-dimensional reduced-order models (ROMs) that are (1) accurate, (2) low cost, (3) structure preserving, (4) reliable, and (5) certified. First, I will describe least-squares Petrov–Galerkin projection, which leverages subspace identification and optimal projection to ensure accuracy. Second, I will describe the sample-mesh concept, which employs empirical regression and greedy-optimal sensor-placement techniques to ensure low cost; I will also describe novel methods that exploit time-domain data to reduce computational costs further. Third, I will present a technique that ensures the ROM is globally conservative in the case of finite-volume discretizations, thereby ensuring structure preservation. Fourth, I will describe model reduction on nonlinear manifolds, wherein convolutional autoencoders from deep learning improve the predictive accuracy of the ROM, as well as ROM h-adaptivity, which employs concepts from goal-oriented mesh adaptivity to ensure the ROM is reliable, i.e., that it can satisfy any prescribed error tolerance. Finally, I will present machine-learning error models, which apply regression methods to construct a statistical model of the ROM error; this quantifies the ROM-induced epistemic uncertainty and provides a mechanism for certification.
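For readers unfamiliar with the first of these techniques, least-squares Petrov–Galerkin projection can be sketched as follows (a standard formulation; the notation here is illustrative and not taken from the talk itself). Given a discretized full-order model with residual equations and a trial basis computed from simulation data, the ROM solution is defined as the residual minimizer over the low-dimensional subspace:

```latex
% Full-order model: solve the (discrete-time) residual equations
%   r(x) = 0,  with state  x \in \mathbb{R}^{N}.
%
% Approximate the state in a low-dimensional affine trial subspace,
%   x \approx x_{\mathrm{ref}} + V\hat{x},
% where  V \in \mathbb{R}^{N \times n}  (n \ll N) is a basis computed
% from simulation data (e.g., by POD / subspace identification).
%
% LSPG defines the reduced state as the residual minimizer:
\hat{x} = \arg\min_{\hat{y} \in \mathbb{R}^{n}}
          \bigl\| r\!\left( x_{\mathrm{ref}} + V\hat{y} \right) \bigr\|_{2}^{2}
%
% Its stationarity condition is a Petrov--Galerkin projection with a
% residual-dependent test basis  \Psi = J V,  where  J = \partial r / \partial x:
(JV)^{\mathsf{T}}\, r\!\left( x_{\mathrm{ref}} + V\hat{x} \right) = 0
```

Because the test basis depends on the residual Jacobian, the projection adapts to the problem at each solve, which is the source of the accuracy guarantee referred to above.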