Learn more about my research interests and vision on this page. If it sounds interesting, you can find more information on joining my group here. See also my lists of publications and selected seminars and conference talks.

Research mission

To make design, control, and operational decisions, engineers rely on computational simulations that predict how a system will respond to different choices. However, state-of-the-art simulations of complex multi-disciplinary systems can be both time- and memory-intensive, requiring supercomputers and days or weeks to run. My research develops faster computational methods to support engineering decision-making, along two main lines of questioning:

1. How can we build fast approximate models of expensive PDE simulations that engineers can trust?
2. How can we combine models of varying cost and fidelity to accelerate many-query analyses such as optimization, uncertainty quantification, and control?

My work spans the theory-to-method-to-application spectrum, with greatest emphasis on developing new computational methods that answer the above questions. My theoretical research asks how we can guarantee the accuracy and robustness of these methods so that we can trust them in engineering design. I collaborate with engineering domain experts to deploy the methods in applications.

Why do we need faster computational methods for engineering?

Partial differential equations (PDEs) lie at the heart of engineering applications ranging from vehicle and structural design to medical imaging to subsurface oil and water management. To make design decisions, engineers rely on many-query computations, in which they simulate the system in question multiple times, sometimes hundreds or thousands of times, to find an optimal design that satisfies constraints. Traditional engineering simulations use computationally expensive numerical PDE solvers that may take days or weeks to run, making many-query computations prohibitively expensive. We need new methods for both (1) fast approximate modeling of PDEs and (2) fast many-query analyses to enable better engineering design decisions faster.

My research tackles (1) through model reduction and scientific machine learning, and (2) through multi-fidelity methods.

What is model reduction?

Traditional engineering simulations are expensive because they rely on discretizing a problem's spatial domain on highly resolved computational grids or meshes. The large number of grid points/cells (often in the millions or billions) makes the equations underlying the simulation high-dimensional and thus time- and memory-intensive to solve. Model reduction derives cheap approximate models by projecting the high-dimensional equations onto a low-dimensional subspace in which the system solution can be well-approximated.
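The projection idea can be sketched in a few lines of Python. Everything below (the toy linear operator, the snapshot-based basis via proper orthogonal decomposition, the dimensions) is illustrative rather than any particular method from my work:

```python
import numpy as np

# A minimal sketch of projection-based model reduction on a toy linear
# system dx/dt = A x. All sizes and the operator A are illustrative.

rng = np.random.default_rng(0)
n = 500                                              # "high-dimensional" state size
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stable toy operator

# 1. Collect snapshots of the full system (here: one trajectory computed
#    with explicit Euler time stepping).
dt, steps = 1e-3, 200
snapshots = []
x = rng.standard_normal(n)
for _ in range(steps):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())
X = np.column_stack(snapshots)      # n x steps snapshot matrix

# 2. Proper orthogonal decomposition: the leading left singular vectors
#    of the snapshot matrix span a subspace in which the trajectory is
#    well-approximated; truncate to r modes.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 10
V = U[:, :r]                        # n x r orthonormal basis

# 3. Galerkin projection: the reduced operator is r x r instead of n x n,
#    so the reduced model is cheap to evolve in time. Lifting V @ x_r
#    approximates the full state x.
A_r = V.T @ A @ V
print(A_r.shape)                    # (10, 10)
```

The reduced system evolves only r = 10 coordinates instead of n = 500 states, which is the source of the speedup; the quality of the approximation depends on how well the subspace captures the system's behavior.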

Model reduction methods are diverse and can draw on both data and properties of the equations themselves to find good approximating spaces. My work uses both data-driven and equation-driven approaches and often blends the two in scientific machine learning methods (see below).

Related publications

What is scientific machine learning and why do we need it?

Broadly speaking, 'machine learning' describes methods for learning models from data. These models include modern deep neural networks, which have demonstrated impressive abilities to process and generate images, human speech, and more. So it's natural that there's been a lot of interest in using machine learning methods to learn fast approximate models for engineering simulations from data. The challenge is that the engineering/scientific setting differs from previous ML successes in important ways: (i) scientific/engineering data are often limited, far below 'big data' regimes; (ii) we require engineering models to predict in new regimes beyond those in which they were trained; and (iii) decisions based on engineering models have consequences for human safety. These reasons contribute to distrust of machine-learned models for engineering use.

To harness the power of data for engineering design and decision-making, we need tailored scientific machine learning methods that incorporate scientific knowledge into the learning formulation. This allows the methods to operate in limited-data regimes and extrapolate to new regimes, so that we can trust their results.
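One simple way to incorporate scientific knowledge into a learning formulation is to penalize violations of the governing equation alongside the data misfit. The toy fit below (a polynomial model, a manufactured problem, and hand-picked weights, none of them drawn from any specific method) illustrates the idea: five noisy samples alone would be far too few, but adding the physics residual recovers an accurate model:

```python
import numpy as np

# A minimal sketch of a physics-informed least-squares fit. We fit
# u(x) = sum_k c_k x^k to 5 noisy samples of the true solution
# u(x) = sin(pi x), while softly enforcing the governing equation
# -u'' = pi^2 sin(pi x) at collocation points. All modeling choices
# (basis, weight w, sample counts) are illustrative.

rng = np.random.default_rng(1)
deg = 8                                           # polynomial degree

# Sparse, noisy data: the "limited data" regime.
x_data = rng.uniform(0.0, 1.0, size=5)
u_data = np.sin(np.pi * x_data) + 0.01 * rng.standard_normal(5)

# Dense collocation points where the physics residual is enforced.
x_col = np.linspace(0.0, 1.0, 50)
f_col = np.pi**2 * np.sin(np.pi * x_col)          # right-hand side of -u'' = f

def basis(x):
    """Monomial basis values, shape (len(x), deg + 1)."""
    return np.vander(x, deg + 1, increasing=True)

def basis_dd(x):
    """Second derivatives of the monomials: (x^k)'' = k (k-1) x^(k-2)."""
    B = np.zeros((len(x), deg + 1))
    for k in range(2, deg + 1):
        B[:, k] = k * (k - 1) * x ** (k - 2)
    return B

# Stack data-misfit rows and weighted physics-residual rows into one
# least-squares problem:
#   min_c ||B_data c - u_data||^2 + w^2 ||(-B_dd) c - f||^2
w = 0.1
Amat = np.vstack([basis(x_data), -w * basis_dd(x_col)])
rhs = np.concatenate([u_data, w * f_col])
c, *_ = np.linalg.lstsq(Amat, rhs, rcond=None)

# Evaluate on a grid the fit never saw.
x_test = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(basis(x_test) @ c - np.sin(np.pi * x_test)))
print(f"max test error: {err:.2e}")
```

The same design pattern, a data term plus a physics-residual term, underlies methods built on neural networks rather than polynomial bases; the trade-off between the two terms (the weight w here) is itself a modeling choice.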

Related publications

What are multi-fidelity methods for many-query analyses?

Many-query analyses include iterative methods for optimization, sampling methods for uncertainty quantification, and online methods for control. These computations are expensive because they require simulating the system not once but many times: at every optimization iterate, sample, or iteration of the control loop. Traditional simulation uses high-fidelity PDE solves, which are prohibitively expensive in the many-query setting. We can instead use faster low-fidelity models, but doing so introduces modeling errors into the final result of the many-query computation.

Multi-fidelity methods give us the best of both worlds: a small number of high-fidelity solves is combined with a large number of low-fidelity solves, accelerating the many-query computation while ensuring that the final result is not biased by low-fidelity modeling error.
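A classic instance of this idea is a two-fidelity control-variate estimator of an expectation. In the sketch below the "models" are cheap stand-in functions (in practice the high-fidelity model would be an expensive PDE solve and the low-fidelity model a reduced or surrogate model), and the sample sizes are illustrative:

```python
import numpy as np

# A minimal sketch of a two-fidelity control-variate estimator of
# E[f_hi(z)] for z ~ N(0, 1). f_hi and f_lo are cheap stand-ins; sample
# sizes and the weight estimate are illustrative.

rng = np.random.default_rng(2)

def f_hi(z):      # "expensive" high-fidelity model (stand-in)
    return np.exp(z) + 0.1 * np.sin(5 * z)

def f_lo(z):      # cheap, *biased* low-fidelity model (stand-in)
    return 1.0 + z + 0.5 * z**2      # Taylor-like approximation of exp(z)

n_hi, n_lo = 200, 50_000             # few hi solves, many lo solves
z_hi = rng.standard_normal(n_hi)     # hi samples (lo is also evaluated here)
z_lo = rng.standard_normal(n_lo)     # extra lo-only samples

y_hi = f_hi(z_hi)
y_lo_paired = f_lo(z_hi)
y_lo_many = f_lo(z_lo)

# Control-variate weight from estimated (co)variances (a simple plug-in
# choice; estimating it from the same samples adds a small higher-order
# effect).
C = np.cov(y_hi, y_lo_paired)
alpha = C[0, 1] / C[1, 1]

# The low-fidelity model's bias cancels because f_lo enters only through
# the difference of its two sample means: E[f_lo] = 1.5 here, while
# E[f_hi] = exp(0.5) ~ 1.649, yet the estimator is centered on E[f_hi].
est = y_hi.mean() + alpha * (y_lo_many.mean() - y_lo_paired.mean())
print(est)
```

Because the high-fidelity model correlates strongly with its cheap approximation, most of the statistical work is done by the inexpensive samples; the handful of high-fidelity solves anchors the estimate so that the low-fidelity model's error does not bias the result.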

Related publications