Jian Cao, Northwestern University
Monday, July 24, 8:30am, Ballrooms A,B,C
Physics-Based AI-Assisted Numerical Simulations for Manufacturing Process Design and Control
Abstract
I view manufacturing as an integration platform that translates ideas and resources into products used by society. I will present our current research efforts in advancing metal powder-based additive manufacturing processes and forming processes using a combination of mechanics-driven and data-driven approaches. Specifically, I will show how integrating fundamental process mechanics, process control, and techniques such as machine learning achieves effective and efficient predictions of a material's mechanical behavior during, or as a result of, a manufacturing process. Our solutions particularly target three notoriously challenging aspects of these processes: long history-dependent properties, complex geometric features, and the high dimensionality of the design space.
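One way the "long history-dependent properties" challenge is often handled with machine learning is to let a sequence model summarize an entire process history before predicting a property. The sketch below, in Python (PyTorch), is a generic illustration of that idea, not the speaker's actual models; the input features, network sizes, and predicted quantity are all assumptions.

import torch
import torch.nn as nn

class HistorySurrogate(nn.Module):
    """Toy surrogate: maps a full process history to a predicted mechanical property."""
    def __init__(self, n_inputs=2, hidden=32, n_outputs=1):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, hidden, batch_first=True)  # encodes the whole history
        self.head = nn.Linear(hidden, n_outputs)                # final hidden state -> property

    def forward(self, history):           # history: (batch, time_steps, n_inputs)
        _, h_last = self.rnn(history)
        return self.head(h_last[-1])      # e.g., a predicted flow stress or residual strain

# Toy usage: 8 hypothetical process histories, each 100 steps of (strain increment, temperature).
model = HistorySurrogate()
print(model(torch.randn(8, 100, 2)).shape)   # torch.Size([8, 1])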
Jacqueline Chen, Sandia National Laboratories
Tuesday, July 25, 8:30am, Ballrooms A,B,C
The Convergence of Exascale Computing and Data Science Towards Zero-Carbon Fuels for Power and Transportation
Abstract
Mitigating climate change while meeting the nation's transportation and power-generation needs is important to energy and environmental security. The shift to hydrogen as a clean energy carrier is one of the most promising strategies to reduce CO2 emissions in the face of increasing energy demand. While hydrogen has drawbacks as an energy carrier due to its low energy density, ammonia is simpler to transport and store for extended periods of time, making it an attractive carbon-free energy carrier for off-grid localized power generation and marine shipping. However, ammonia has poor reactivity and produces NOx and N2O emissions. The poor reactivity can be circumvented by partial cracking of ammonia to form ammonia/hydrogen/nitrogen blends tailored to match conventional hydrocarbon fuel properties. However, combustion of these blends at high pressure, and in particular the coupling between turbulence and fast hydrogen diffusion, remains poorly understood. Exascale computing provides a unique opportunity for direct numerical simulation (DNS) of turbulent combustion with ammonia/hydrogen blends to investigate pressure effects on combustion rate, blow-off limits, and chemical pathways for NOx and N2O formation.
Exascale computing also introduces challenges in data management, driving the need for reduced-order surrogate models (ROMs) for chemical species dimension reduction and for novel in situ analysis and visualization methods. A novel model-driven, on-the-fly ROM, recently formulated and implemented in reactive-flow DNS to reduce the computational cost of chemistry, will be described.
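As a point of reference for the chemistry reduction mentioned above, the sketch below shows a generic projection-based (POD/Galerkin) reduction of a species vector. It only illustrates the ROM idea, not the model-driven on-the-fly formulation to be described in the talk, and all array names and sizes are assumptions.

import numpy as np

def build_pod_basis(snapshots, n_modes):
    """snapshots: (n_species, n_samples) array of stored species mass-fraction states."""
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes]                    # (n_species, n_modes) POD basis

def reduced_source(phi, y_reduced, full_source):
    """Galerkin projection of the full chemical source term onto the POD basis."""
    y_full = phi @ y_reduced                 # lift reduced state to the full species space
    return phi.T @ full_source(y_full)       # project the source term back down

# Toy usage with a placeholder (hypothetical) linear source term:
rng = np.random.default_rng(0)
snapshots = rng.random((20, 200))            # 20 species, 200 stored states
phi = build_pod_basis(snapshots, n_modes=5)
source = lambda y: -0.1 * y                  # stand-in for a real kinetics evaluation
dydt_reduced = reduced_source(phi, phi.T @ snapshots[:, 0], source)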
Gianluca Iaccarino, Stanford University
Thursday, July 27, 8:30am, Ballrooms A,B,C
Advanced Simulation and Computing: Accelerated Computations, Machine Learning and Uncertainty Quantification for Multiphysics Applications
Abstract
Jet engines, scramjets, rockets, and solar energy receivers have been the focus of a sequence of computational projects at Stanford University. Featuring a combination of computer science and multi-physics simulations, the research is funded by the Department of Energy within the Advanced Simulation and Computing (ASC) Program. A common theme of these applications is the coupled nature of the physical processes involved: turbulent transport, combustion, radiation, compressible fluid mechanics, multiphase flow phenomena, etc. The research portfolio includes not only the engineering models and software tools required for simulations of the overarching applications, but also innovations in high-performance computing and uncertainty quantification aimed at providing quantitative estimates of prediction accuracy. This talk will trace the history of the projects at Stanford and how the initial efforts, which targeted demonstrations on the fastest supercomputer of 2000 (ASCI White), have evolved to enable today's ensemble simulations on exascale-class machines.
Charbel Farhat, Stanford University
Thursday, July 27, 8:30am, Ballrooms A,B,C
Physics-Based Digital Twinning
Charbel Farhat, Stanford University
Marie Jo Azzi, Stanford University
Marco Pavone, Stanford University
Christian Soize, Gustave Eiffel University
Abstract
A digital twin usually refers to a digital replica of an asset – whether a physical platform or a process – that can be used, for example, to optimize in near real time the operation and/or life-cycle management of that asset, or more generally, to drive the Intelligent Enterprise by linking engineering and operations such as maintenance. The advocated enabler of such a computational capability is the integration of artificial intelligence, machine learning, and software analytics with data, to create living digital simulation models capable of updating themselves as their physical counterparts evolve. Preliminary forms of such digital twins are often described as the result of integrating data analytics with the model-based prediction of a few scalar quantities of interest (QoIs). This lecture, however, will first question whether a few QoIs can always be identified to represent the critical state of a newly designed and then deployed physical platform. Next, it will present a more robust approach for realizing digital twins based on adaptable, stochastic, low-order but high-fidelity computational models grounded in physics – that is, partial differential equations. The proposed approach features novel mathematical ideas for integrating the modeling and quantification of model-form uncertainty with probabilistic reasoning, projection-based model order reduction, and machine learning. It constructs stochastic, physics-based computational models that self-adapt using information extracted from sensor data and operate in real time. Finally, the lecture will demonstrate the potential of the proposed approach for digital twinning using four sample realizations: a digital twin for a small-scale replica of an X-56-type aircraft; another for a bridge; a third for a component of an automotive system; and a fourth for the autonomous carrier landing of a UAV.
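Projection-based model order reduction is one of the ingredients named above. The following sketch shows only the generic, deterministic, linear version of that ingredient, assuming a POD basis built from snapshots; the stochastic, self-adapting formulation of the lecture is not reproduced here, and all variable names are illustrative.

import numpy as np

def galerkin_rom(A, b, snapshots, n_modes):
    """Reduce the linear system dx/dt = A x + b onto a POD basis built from snapshots."""
    v, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    v = v[:, :n_modes]                 # reduced-order basis (n_dof x n_modes)
    return v, v.T @ A @ v, v.T @ b     # basis, projected operator, projected load

# Toy usage: a 1000-dof system reduced to 10 modes.
rng = np.random.default_rng(0)
n = 1000
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
b = np.ones(n)
snaps = rng.standard_normal((n, 50))   # placeholder snapshot data
v, a_r, b_r = galerkin_rom(A, b, snaps, n_modes=10)
x_r = np.linalg.solve(-a_r, b_r)       # reduced steady state
x_full_approx = v @ x_r                # lifted back to the full space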
Bio
Charbel Farhat is the inaugural James and Anna Marie Spilker Chair of the Department of Aeronautics and Astronautics, the Vivian Church Hoff Professor of Aircraft Structures, and the Director of the Stanford-KACST Center of Excellence for Aeronautics and Astronautics at Stanford University. His research interests are in computational engineering sciences for the design, analysis, and operation of complex systems in aerospace, mechanical, and naval engineering. He is a Member of the National Academy of Engineering (US); a Member of the Royal Academy of Engineering (UK); a Vannevar Bush Faculty Fellow; a Doctor Honoris Causa of Ecole Nationale Supérieure d’Arts et Métiers, Ecole Centrale de Nantes, and Ecole Normale Supérieure Paris-Saclay; a designated ISI Highly Cited Author in Engineering; and a Fellow of AIAA, ASME, IACM, SIAM, USACM, and WIF. To date, he has trained about 100 PhD and post-doctoral students. For his research on aeroelasticity, aeroacoustic scattering, CFD, dynamic data-driven systems, fluid-structure interaction, high-performance computing, model reduction, and physics-based machine learning, he has received many professional and academic distinctions, including: the Ashley Award for Aeroelasticity and the Structures, Structural Dynamics and Materials Award from AIAA; the Spirit of St Louis Medal and a Lifetime Achievement Award from ASME; the Gordon Bell Prize and the Sidney Fernbach Award from IEEE; the Gauss-Newton Medal from IACM; the Grand Prize from the Japan Society for Computational Engineering Science; and the John von Neumann Medal from USACM. He was appointed to the Scientific Advisory Board of the US Air Force and to the Space Technology Industry-Government-University Roundtable. He was also selected by US Navy recruiters as a Primary Key-Influencer and flown by the Blue Angels.
Mike Pritchard, University of California, Irvine/NVIDIA Corporation
Wednesday, July 26, 8:30am, Ballrooms A,B,C
Machine Learning our Way to More Accurate Weather Forecasts and More Interactive Climate Projections
Abstract
Over the past year, contributions from industry have seen fully data-driven methods for weather prediction increase dramatically in accuracy and ambition, rivaling and in some cases outperforming the deterministic state of the art. I will review NVIDIA’s own contributions in this regard, including the challenges of extending predictions beyond weather to the climate timescale in the context of “Earth-2”, a digital twin envisioned for climate-impacts planning, as well as associated research at the interface of hybrid physics/machine-learning methods for atmospheric simulation.
Anima Anandkumar, California Institute of Technology
Monday, July 24, 1pm, Ballroom A
Neural Operators for Accelerating Scientific Simulations
Abstract
Deep learning surrogate models have shown promise in modeling complex physical phenomena such as fluid flows, molecular dynamics, and material properties. However, standard neural networks assume finite-dimensional inputs and outputs and hence cannot withstand a change in resolution or discretization between training and testing. We introduce Fourier neural operators, which learn operators: mappings between infinite-dimensional function spaces. They are discretization-invariant and can generalize beyond the discretization or resolution of the training data. When applied to weather forecasting, Carbon Capture and Storage (CCS), material plasticity, and many other processes, neural operators capture fine-scale phenomena and attain skill similar to that of gold-standard numerical weather models, while being 4-5 orders of magnitude faster.
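To make the discretization-invariance point concrete, here is a minimal sketch of a single spectral-convolution layer in the spirit of a 1D Fourier neural operator, written in Python (PyTorch). It is an illustrative reimplementation, not the speakers' reference code; channel counts, mode counts, and the toy resolutions are assumptions.

import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, n_modes):
        super().__init__()
        self.n_modes = n_modes   # number of retained low-frequency Fourier modes
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x):                     # x: (batch, in_channels, n_grid)
        x_ft = torch.fft.rfft(x)              # to Fourier space
        out_ft = torch.zeros(
            x.shape[0], self.weights.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # learned complex weights act only on the retained low-frequency modes
        out_ft[:, :, : self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.n_modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])   # back to physical space

# Because the weights live on Fourier modes rather than grid points, the same layer
# can be evaluated on a coarser or finer grid than it was trained on.
layer = SpectralConv1d(in_channels=1, out_channels=1, n_modes=16)
print(layer(torch.randn(4, 1, 64)).shape)    # resolution 64
print(layer(torch.randn(4, 1, 256)).shape)   # resolution 256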
John Evans, University of Colorado Boulder
Tuesday, July 25, 1pm, Ballroom A
Interpolation-Based Immersed Finite Element and Isogeometric Analysis
Abstract
Immersed finite element methods enable the simulation of physical systems beyond the reach of classical finite element analysis, and they also streamline the development of powerful shape and topology optimization technologies. However, the development of an immersed finite element analysis code is a daunting and burdensome task even for domain experts. In this talk, a new approach to immersed finite element analysis will be presented that overcomes this issue. In our approach, finite element basis functions defined on a non-body-fitted background mesh are first interpolated onto a Lagrange basis defined on a body-fitted integration mesh. These background basis function approximations are then employed for immersed finite element analysis. By construction, the background basis function approximations can be represented in terms of Lagrange shape functions over each integration mesh element using Lagrange extraction operators. This in turn enables one to transform a classical finite element analysis code into an immersed finite element analysis code with minimal implementation effort. Namely, one only needs to provide the classical finite element analysis code with Lagrange extraction operators, a connectivity array relating local and global degrees of freedom, and the ability to compute the values and derivatives of the background basis function approximations using the Lagrange extraction operators. Moreover, one can use the same ingredients to transform a classical finite element analysis code into an immersed isogeometric analysis code.
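As a rough illustration of the mechanism described above, the sketch below applies a per-element Lagrange extraction operator to obtain background basis function values and gradients at the quadrature points of a body-fitted integration element. Array shapes and names are assumptions and do not reflect the interface of any particular code.

import numpy as np

def background_basis_on_element(extraction_op, lagrange_vals, lagrange_grads):
    """
    extraction_op : (n_background, n_lagrange) Lagrange extraction operator for one element.
    lagrange_vals : (n_lagrange, n_qp) Lagrange shape functions at the quadrature points.
    lagrange_grads: (n_lagrange, n_qp, dim) their physical gradients.
    Returns the background basis values and gradients at the same quadrature points.
    """
    vals = extraction_op @ lagrange_vals                            # (n_background, n_qp)
    grads = np.einsum("bl,lqd->bqd", extraction_op, lagrange_grads)
    return vals, grads

# Toy usage: a hypothetical 2x3 extraction operator, 4 quadrature points in 2D.
rng = np.random.default_rng(0)
E = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])
vals, grads = background_basis_on_element(E, rng.random((3, 4)), rng.random((3, 4, 2)))
# A classical element assembly loop can now use vals/grads unchanged; only the
# connectivity array mapping background functions to global degrees of freedom is new.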
This talk will begin with a general overview of our approach followed by a presentation of stability and convergence theory for a simple model problem. The efficacy of our approach will then be illustrated using a number of example problems from structural mechanics, fluid dynamics, and fluid-structure interaction. Finally, our open-source software package for generating the data structures necessary for our approach will be presented, as well as how this enables immersed finite element and isogeometric analysis within the popular open-source finite element analysis platform FEniCS. This talk is based on joint work with Jennifer Fromm, Ru Xiang, Han Zhao, and David Kamensky of the University of California San Diego and Nils Wunsch and Kurt Maute of the University of Colorado Boulder.
Hector Gomez, Purdue University
Tuesday, July 25, 1pm, Ballroom C
Direct Van der Waals Simulation (DVS) of Phase-Transforming Fluids
Abstract
Cavitating flows are ubiquitous in engineering and science. Despite their significance, a number of fundamental problems remain open, and our ability to make quantitative predictions is very limited. The Navier-Stokes-Korteweg equations constitute a fundamental model of cavitation with potential for predictive computations of liquid-vapor flows, including cavitation inception, one of the most elusive aspects of cavitation. However, numerical simulation of the Navier-Stokes-Korteweg equations is very challenging, and state-of-the-art simulations are limited to very small Reynolds numbers, open flows (no walls), and, in most cases, micrometer length scales. The computational challenges emerge from, at least, (a) the presence of third-order derivatives in the governing equations, (b) a complicated eigenstructure of the spatial partial-differential operators, which limits the use of standard finite volume techniques, and (c) the need to resolve the liquid-vapor interface, which, without special treatment, has a thickness on the order of nanometers. Here, we present a stabilized discretization scheme that permits, for the first time as far as we are aware, large-scale simulations of wall-bounded flows at large Reynolds numbers. The proposed stabilization scheme is a residual-based approach that emanates from the eigenstructure of the equations and outperforms standard stabilization schemes for advection-dominated problems. We believe this work opens possibilities for predictive simulations of cavitation.
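For orientation, a commonly used isothermal form of the Navier-Stokes-Korteweg equations is sketched below in LaTeX; the exact constitutive choices in the talk may differ. The Korteweg stress is the term whose divergence introduces the third-order density derivatives mentioned in (a).

\begin{align}
  &\partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0, \\
  &\partial_t(\rho\mathbf{u}) + \nabla\cdot\left(\rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}\right)
    = \nabla\cdot\boldsymbol{\tau} + \nabla\cdot\boldsymbol{\varsigma}, \\
  &\boldsymbol{\varsigma} = \lambda\left(\rho\,\Delta\rho + \tfrac{1}{2}\,|\nabla\rho|^{2}\right)\mathbf{I}
    - \lambda\,\nabla\rho\otimes\nabla\rho,
\end{align}

where $p$ follows a van der Waals equation of state, $\boldsymbol{\tau}$ is the viscous stress, and $\lambda$ is a capillarity coefficient; the interface thickness set by $\lambda$ is on the order of nanometers, which is the resolution challenge noted in (c).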
Lori Graham-Brady, Johns Hopkins University
Monday, July 24, 1pm, Ballroom C
Leveraging Machine Learning and High-Throughput Experimentation in Materials Design
Abstract
Recent advances in high-throughput experimental techniques offer exciting opportunities to generate materials characterization data in statistically significant quantities. The trade-off is that these high-throughput experiments may offer lower quality in terms of resolution and accuracy. In the context of materials design, however, one may be willing to sacrifice some precision if the goal is simply to discern whether or not the material performance has changed between one specimen and the next. In this way, materials of interest are rapidly selected from a large candidate pool and can be subsequently evaluated using more rigorous, lower-throughput processes. In a similar vein, machine learning approaches promise an efficient assessment of material behavior that may lack the sophistication and spatial accuracy of physics-based computational solutions but that also may be sufficient to identify material chemistries and microstructures that merit further exploration. Such models support rapid decision-making for control and optimization of high-throughput processes on the path to materials design. This talk will provide an overview of the challenges and opportunities afforded by an integrated high-throughput and automated materials design framework, as demonstrated in the AI for Materials Design (AIMD) facility at Johns Hopkins. The role of machine learning in guiding this automated materials design is highlighted through a series of example applications.
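The screening workflow described above can be pictured with a small, hypothetical example: a fast surrogate trained on noisy high-throughput data ranks a large candidate pool, and only the top-ranked materials advance to slower, higher-fidelity evaluation. The descriptors, property, and regressor below are illustrative assumptions, not the AIMD pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x_measured = rng.random((200, 6))                 # descriptors of already-tested materials
y_measured = x_measured @ rng.random(6) + 0.05 * rng.standard_normal(200)  # noisy property

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(x_measured, y_measured)             # cheap model from high-throughput data

candidates = rng.random((10_000, 6))              # large pool of untested candidates
ranking = np.argsort(surrogate.predict(candidates))[::-1]
shortlist = candidates[ranking[:20]]              # sent on to rigorous, low-throughput evaluation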
Ihor Skrypnyk, The Goodyear Tire & Rubber Company
Wednesday, July 26, 1pm, Ballroom A
Hybrid Physics/Data Driven Modeling for Virtual Tire Development
Abstract
The automotive industry is going through significant changes. One of the many aspects of these changes is the acceleration of new product development, which is evolving from a process heavily reliant on building and testing physical prototypes to one that relies heavily on modeling and simulation. New advances in Machine Learning play a significant role in this shift.
Over the last three to four decades, Finite Element Analysis (FEA) and other types of structural models have been used ever more broadly to provide design recommendations and to reduce the number of product prototypes built. The same trend has also been present in tire manufacturing, where capabilities to predict tire performance, including durability, rolling resistance, treadwear, force and moment, and noise, have been developed and published over the years.
These modeling capabilities target objective measures of tire performance that can easily be measured in a laboratory. Most of these methodologies have reached maturity, and further gains in predictive accuracy are becoming harder to achieve.
However, further improvements in product performance prediction can be achieved by combining physical modeling approaches with Machine Learning (ML) methodologies. With recent progress in data technologies, such as new HPC capabilities, cheap data storage with fast access, and novel ML algorithms, the idea of combining physics-based simulations with ML approaches is moving beyond concept and into engineering design practice.
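One common pattern for such a combination, sketched below under purely illustrative assumptions (the design parameters, performance target, and model choice are all hypothetical), is an ML surrogate trained on FEA results so that many design variants can be screened without rerunning the solver.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
designs = rng.random((80, 3))           # hypothetical design parameters already run through FEA
fea_results = 0.008 + 0.004 * designs[:, 0] - 0.002 * designs[:, 2] \
    + 0.0003 * rng.standard_normal(80)  # stand-in for an FEA-predicted performance measure

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-6))
surrogate.fit(designs, fea_results)

candidates = rng.random((1000, 3))                       # candidate designs screened cheaply
pred, std = surrogate.predict(candidates, return_std=True)
best = candidates[np.argmin(pred)]                       # passed back to full FEA or physical testing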
This presentation will provide an outlook on a tire development process that utilizes both “classic” FEA approaches and novel methods in which FEA is combined with new data-based modeling methods.
Bio
An experienced academic and industrial research leader, Dr. Skrypnyk has been active in multiple areas of computational modeling research, including Finite Element Analysis, computational material modeling, fracture mechanics, and Data Science. Dr. Skrypnyk began his scientific career with Ph.D. studies at the National Academy of Sciences of Ukraine in the fields of fracture mechanics and computational materials science. For the last 20 years, Dr. Skrypnyk has held research and managerial positions of increasing responsibility in the Global Technology Division of The Goodyear Tire & Rubber Company.
Dakshina Valiveti, ExxonMobil
Wednesday, July 26, 1pm, Ballroom C
Mechanics of Hydraulic Fracturing and Computational Challenges in Building a Predictive Model
Abstract
Hydraulic fracturing has significantly transformed the oil and gas industry, enabling economic production of hydrocarbons from very-low-permeability shale formations. It involves injection of fracturing fluids into deep reservoir zones at pressures that overcome the compressive in-situ stress, creating fractures in the rock that form large permeable pathways for hydrocarbons to reach the well. Uncertainties in the subsurface stress state and rock mechanical properties drive the need for numerical simulations to understand the complex multi-physics involved. This talk will present an overview of hydraulic fracturing and its computational challenges, along with our evolving technical efforts over more than a decade toward understanding the mechanics of fluid-driven fracturing, developing a scalable simulator, building surrogate models, and constraining those models with field observations.
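In its simplest deterministic form, “constraining models with field observations” can be read as a calibration problem; the sketch below fits two uncertain inputs of a placeholder surrogate to synthetic observations with a least-squares solver. The surrogate, parameters, and data are illustrative assumptions, not the models discussed in the talk.

import numpy as np
from scipy.optimize import least_squares

def surrogate_pressure(params, injection_rates):
    """Placeholder surrogate: treating pressure as a function of injection rate."""
    sigma_min, stiffness = params
    return sigma_min + stiffness * np.sqrt(injection_rates)

rates = np.linspace(0.01, 0.08, 12)                        # injection rates (placeholder units)
observed = surrogate_pressure([32.0, 18.0], rates) \
    + 0.3 * np.random.default_rng(0).standard_normal(12)   # synthetic "field" pressures

fit = least_squares(
    lambda p: surrogate_pressure(p, rates) - observed,     # residuals to be minimized
    x0=[25.0, 10.0],                                       # prior guesses for the two inputs
)
print(fit.x)   # calibrated in-situ-stress-like and stiffness-like parameters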