Scientific machine learning for PDEs
Partial differential equations (PDEs) lie at the heart of engineering applications ranging from aerodynamics to imaging to structures. Traditional numerical PDE solvers may take days or weeks to run, making many-query tasks like optimization and uncertainty quantification prohibitively expensive, because these tasks require repeated PDE solutions with different inputs or parameters. My work seeks to develop, analyze, and deploy new scientific machine learning methods that use data to learn efficient reduced models for PDEs.
In , we present Lift & Learn, a physics-informed model learning method that uses knowledge of the governing PDE to identify a lifted variable representation in which the PDE has quadratic nonlinearities. In the lifted variables, quadratic reduced operators can be learned from data, and guarantees on the learned model's fit to the data can be proved. Numerical experiments demonstrate the accuracy of the learned models on the compressible Euler equations and the FitzHugh-Nagumo reaction-diffusion system.
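To give a flavor of lifting, here is a minimal toy illustration (my own example, not one of the systems from the paper): the cubic ODE x' = -x^3 becomes purely quadratic after introducing the auxiliary variable w = x^2, since then x' = -x*w and w' = 2*x*x' = -2*w^2. Integrating both forms shows they agree.

```python
import numpy as np

# Toy lifting example (illustrative, not from the paper): the cubic ODE
#   x' = -x^3
# is quadratic in the lifted variables (x, w) with w = x^2, because
#   x' = -x*w   and   w' = 2*x*x' = -2*w^2.

def step_cubic(x, dt):
    # forward Euler step of the original cubic form
    return x + dt * (-x ** 3)

def step_lifted(x, w, dt):
    # forward Euler step of the lifted, purely quadratic form
    return x + dt * (-x * w), w + dt * (-2.0 * w ** 2)

dt = 1e-4
x1 = 1.0                 # original-variable trajectory
x2, w = 1.0, 1.0         # lifted trajectory, w initialized consistently (w = x^2)
for _ in range(10_000):  # integrate to t = 1
    x1 = step_cubic(x1, dt)
    x2, w = step_lifted(x2, w, dt)
```

After integration the two trajectories agree, and w tracks x^2 along the lifted trajectory; the exact solution x(t) = 1/sqrt(1 + 2t) provides an independent check.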
In , we present the first formulation of Lift & Learn and the related Operator Inference method in the spatially continuous setting. This is a first step toward rigorous mathematical foundations for trust in these learned models, through error and convergence analyses analogous to those available for traditional model reduction methods. We demonstrate the scalability of the approach by learning reduced models for a 3D rocket combustion simulation with over 18 million degrees of freedom.
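The core computational step shared by these methods is a linear least-squares problem for the reduced operators. The sketch below shows that step on synthetic data (in the actual method, reduced states come from projecting high-fidelity PDE snapshots onto a low-dimensional basis; here the toy states, dimensions, and operators are all illustrative assumptions).

```python
import numpy as np

# Minimal operator-inference sketch on synthetic data: learn the operators
# of a quadratic reduced model
#   dx/dt = A x + H q(x),   q(x) = unique quadratic terms x_i x_j, i <= j,
# from state/time-derivative snapshot pairs via linear least squares.
rng = np.random.default_rng(0)
r = 3                        # reduced dimension (illustrative)
s = r * (r + 1) // 2         # number of unique quadratic terms

def quad(x):
    # unique quadratic monomials, avoiding Kronecker-product redundancy
    return np.array([x[i] * x[j] for i in range(r) for j in range(i, r)])

A_true = -np.eye(r)
H_true = 0.1 * rng.standard_normal((r, s))

# Snapshot data: random reduced states and their exact time derivatives
X = rng.standard_normal((r, 200))
Xdot = np.column_stack([A_true @ x + H_true @ quad(x) for x in X.T])

# Least-squares problem:  [X; Q]^T [A H]^T  ≈  Xdot^T
D = np.vstack([X, np.column_stack([quad(x) for x in X.T])]).T
O, *_ = np.linalg.lstsq(D, Xdot.T, rcond=None)
A_learned, H_learned = O[:r].T, O[r:].T
```

Because the data here are noise-free and the data matrix has full column rank, the least-squares solution recovers the true operators to machine precision; with real projected snapshot data, the residual quantifies the learned model's fit.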
Multi-fidelity methods for many-query tasks like optimization, uncertainty quantification, and control combine reduced models, which accelerate the computational task, with high-fidelity models, which retain accuracy. My work introduces new multi-fidelity methods, provides accompanying error and accuracy guarantees, and, in collaboration with engineers, deploys these methods in real-world applications.
My work on certified PDE-constrained optimization uses a multi-fidelity trust-region framework  that combines low-fidelity reduced-basis models, which accelerate the optimization, with high-fidelity PDE solves, which update the reduced model along the optimization path. I derive new, efficiently computable error bounds for reduced-basis approximations of quadratic cost functions; these bounds indicate when the reduced-basis model should be updated and guarantee convergence of the method to the true high-fidelity optimum.
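The trust-region mechanics can be sketched in one dimension. In this toy version (all details are illustrative assumptions: a cheap finite-difference quadratic surrogate stands in for a reduced-basis model, and an analytic function stands in for the expensive PDE-constrained objective), the surrogate is trusted only inside a radius that grows when its predictions match high-fidelity evaluations and shrinks when they do not.

```python
import numpy as np

# Toy trust-region sketch (illustrative stand-ins, not the certified framework):
f_hi = lambda x: (x - 2.0) ** 2 + 0.1 * np.sin(5 * x)   # "expensive" objective

def surrogate(xc, h=1e-3):
    # cheap local quadratic model of f_hi around the current iterate
    g = (f_hi(xc + h) - f_hi(xc - h)) / (2 * h)
    H = (f_hi(xc + h) - 2 * f_hi(xc) + f_hi(xc - h)) / h ** 2
    return lambda x: f_hi(xc) + g * (x - xc) + 0.5 * H * (x - xc) ** 2

x, radius = 0.0, 1.0
for _ in range(30):
    m = surrogate(x)
    # minimize the surrogate only within the current trust region
    cand = min(np.linspace(x - radius, x + radius, 201), key=m)
    predicted = m(x) - m(cand)              # surrogate-predicted decrease
    actual = f_hi(x) - f_hi(cand)           # true high-fidelity decrease
    rho = actual / predicted if predicted > 0 else 0.0
    if rho > 0.1:                           # surrogate was trustworthy: accept
        x = cand
        if rho > 0.75:
            radius = min(2 * radius, 4.0)   # very accurate: expand region
    else:
        radius *= 0.5                       # surrogate misled us: shrink region
```

In the certified framework, the efficiently computable error bounds play the role of the accuracy ratio here, telling the algorithm when the reduced-basis model can be trusted and when it must be refreshed with a high-fidelity solve.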
In , I proposed new multifidelity Monte Carlo estimators for variance and for Sobol global sensitivity indices, which apportion the total variance of a random output of interest among the individual random inputs and their interactions. In the multifidelity approach to Monte Carlo estimation, a few high-fidelity samples guarantee unbiasedness of the resulting estimator, while many low-fidelity samples achieve a low estimator variance at low cost. Our approach yields order-of-magnitude speedups on a 2D combustion model. I am also collaborating with Giuseppe Cataldo (NASA Goddard) to apply the method to the James Webb Space Telescope (launching December 2021).
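The multifidelity principle is easiest to see for the simplest statistic. The sketch below shows the analogous control-variate estimator for a mean rather than for variance or Sobol indices; the models, sample sizes, and correlation structure are illustrative assumptions, not the paper's combustion example.

```python
import numpy as np

# Toy multifidelity Monte Carlo sketch for estimating E[f_hi(Z)], Z ~ N(0,1).
rng = np.random.default_rng(1)

f_hi = lambda z: np.exp(z)                 # "expensive" high-fidelity model
f_lo = lambda z: 1.0 + z + 0.5 * z ** 2    # cheap, highly correlated surrogate

n_hi, n_lo = 500, 100_000
z = rng.standard_normal(n_lo)              # shared random input stream

y_hi = f_hi(z[:n_hi])                      # few high-fidelity samples
y_lo = f_lo(z)                             # many low-fidelity samples

# Control-variate weight estimated from the paired samples
alpha = np.cov(y_hi, y_lo[:n_hi])[0, 1] / y_lo[:n_hi].var(ddof=1)

# The high-fidelity term makes the estimator unbiased; the low-fidelity
# correction has zero mean and cancels much of the sampling noise.
mf_estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo[:n_hi].mean())
```

Here the true value is E[exp(Z)] = exp(1/2) ≈ 1.649, and the correction term shrinks the estimator variance well below that of plain Monte Carlo on the 500 high-fidelity samples alone; the variance and Sobol-index estimators apply the same unbiased high-fidelity-plus-correction structure to higher-order statistics.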