Below is a current list of Congress Minisymposia.
101 - Isogeometric Methods – A Symposium in Honor of Thomas J.R. Hughes, Alessandro Reali, Yuri Bazilevs, David Benson, Rene de Borst, Trond Kvamsdal, Giancarlo Sangalli, Clemens Verhoosel
Alessandro Reali, University of Pavia
Yuri Bazilevs, Brown University
David Benson, Ansys
Rene de Borst, University of Sheffield
Trond Kvamsdal, Norwegian University of Science and Technology
Giancarlo Sangalli, University of Pavia
Clemens Verhoosel, Eindhoven University of Technology
This symposium is dedicated to Thomas J.R. Hughes on the occasion of his 80th birthday, and aims to honor his outstanding contributions to Computational Mechanics by collecting the latest developments on one of his most recent breakthrough ideas: Isogeometric Analysis.
Isogeometric Analysis (IGA) was originally introduced and developed by Tom Hughes et al. in 2005 to generalize and improve finite element analysis in the area of geometry modeling and representation. In the course of IGA's development, however, it was found that isogeometric methods not only improve the geometry modeling within analysis, but also appear preferable to standard finite elements in many applications on the basis of per-degree-of-freedom accuracy. Non-Uniform Rational B-Splines (NURBS) were the first basis function technology used within IGA. Nowadays, a well-established mathematical theory and successful applications to solid, fluid, and multiphysics problems render NURBS functions a genuine analysis technology, paving the way for the application of IGA to a wide range of problems of academic and industrial interest. Further fundamental research topics within IGA include the analysis of trimmed NURBS, as well as the development, analysis, and testing of flexible local refinement technologies based, e.g., on T-Splines, hierarchical B-Splines, or locally-refined splines, in the framework of unstructured multipatch parameterizations. Another important issue is the development of efficient strategies to reduce matrix assembly and solver costs, in particular when higher-order approximations are employed. Aiming to reduce computational cost while still taking advantage of the geometrical flexibility and accuracy of IGA, isogeometric collocation schemes have attracted a good deal of attention and appear to be a viable alternative to standard Galerkin-based IGA. Finally, structure-preserving discretizations represent another highly promising topic deserving special attention in the IGA context.
Along (and beyond) these research lines, the purpose of this symposium is to gather experts in Computational Mechanics with an interest in IGA, with the aim of further advancing its state of the art and, at the same time, honoring the extraordinary achievements of the Computational Mechanics and IGA pioneer Tom Hughes on the occasion of his 80th birthday.
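As background for readers less familiar with the spline technology this abstract refers to, the sketch below evaluates univariate B-spline basis functions via the Cox–de Boor recursion; the knot vector and degree are arbitrary illustrative choices, not tied to any talk in the session:

```python
def bspline_basis(i, p, knots, u):
    """Evaluate the i-th B-spline basis function of degree p at u
    using the Cox-de Boor recursion."""
    if p == 0:
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        # Close the last nonempty span so u = knots[-1] is covered.
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, u))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, u))
    return left + right

# Quadratic basis on an open knot vector: C^1 continuity at the interior knot.
p = 2
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
n_basis = len(knots) - p - 1
vals = [bspline_basis(i, p, knots, 0.3) for i in range(n_basis)]
print(sum(vals))  # partition of unity: sums to 1 (up to round-off)
```

In IGA the same basis functions that describe the CAD geometry are used to approximate the solution fields, which is what makes their higher inter-element continuity (here C^1) available to the analysis.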
102 - Minisymposium in Memory of Professor Ilinca Stanciulescu: Computational Mechanics for Complex Material and Engineering Systems, Tao Jin, Tod Laursen
Tao Jin, University of Ottawa
Tod Laursen, SUNY Polytechnic Institute
This minisymposium is dedicated to the memory of Professor Ilinca Stanciulescu, in honor of her life and her many contributions to computational mechanics. Prof. Stanciulescu’s academic career spanned a wide range of research interests, from her bachelor’s and master’s degrees in structural engineering at the Technical University of Civil Engineering in Bucharest and a master’s degree in applied mathematics from Bucharest University, through her PhD and postdoctoral work in civil engineering at Duke University, to her faculty appointments at the University of Illinois Urbana-Champaign and Rice University. We solicit presentations on the general topic of computational and simulation methods for complex material and engineering systems, including, but not limited to, computational modeling of biological tissues and bio-inspired materials, as well as numerical methods for linear and nonlinear dynamics and structural mechanics. We invite Ilinca’s colleagues, former students, friends, and the broader community to join us in celebrating her short but brilliant life.
The Computational Biological Systems Emphasis Area includes minisymposia that focus on theory, methods, and applications relevant to, but not restricted to, the biomechanics of cells and tissues; topics in medicine and surgery such as cardiovascular models, neurological applications, and obstetrics; as well as cross-cutting themes such as growth and remodeling, patterning, and morphogenesis.
David Pierce, University of Connecticut
Corey Neu, University of Colorado
Stéphane Avril, MINES Saint-Étienne
Computational mechanics and numerical methods play an increasingly significant role in the study of biological systems at the organism, organ system, organ, tissue, cell, and molecular scales. New and exciting applications of computational mechanics go beyond the classical theories and incorporate biomechanical mechanisms inherent in biology such as adaptation, growth, remodelling, active (muscle) response, and inter- and intra-patient variables. Synergies among fundamental mechanical experiments, multi-modal imaging and image analyses, new mathematical models and computational methods enable studies of, e.g., microphysical (mechanobiological) cellular stimuli and response, structure-function relationships in tissues, organ and tissue integrity, disease initiation and progression, engineered tissue replacements, and surgical interventions.
The goal of this minisymposium is to promote cross-fertilization of ideas and collaborative experimental and numerical efforts towards more rapid progress in advancing the overall field of computational biomechanics. To this end, contributions considering the following topics are particularly welcome: coupled analyses of chemo-mechanical processes; methods coupling multiple scales and/or multiple physics; growth and remodelling of biological tissues; characterization and impact of inter- and intra-patient variability; applications with clinical impact or potential clinical impact; new constitutive models; mechanobiology and cellular mechanics; applications of medical images and image analyses in mechanics; mechanics of pathological processes; and experimental methods and computational inverse analyses towards model calibration.
Shiva Rudraraju, University of Wisconsin-Madison
Hector Gomez, Purdue University
Christian Franck, University of Wisconsin-Madison
Krishna Garikipati, University of Michigan
The mechanical response of biological cells and cell-scale structures is central to understanding their development, functioning, and interactions, as well as their influence on physiology and related disease conditions. There is an increased awareness and appreciation of the mechanical and mechanobiological underpinnings of various biological phenomena, such as the evolution of biomembranes, cytoskeletal dynamics, embryogenesis, cell division, collective cell motion, cell packing in tissues and tumors, wound healing, and tissue formation and regeneration, to name a few. Given the invariable complexity (multi-scale, multi-physics, and multi-phase) of these phenomena, advanced theoretical, experimental, and computational techniques are often needed to model the underlying movement, mechanics, mechano-chemistry, phase evolution, and configurational change processes. The goal of this minisymposium is to foster a vibrant discussion on the development of mathematical models, numerical methods, and computational simulations to study mechanics and mechanobiology in biological cells and cell-scale structures. The scope of the phenomena being modeled or investigated ranges from the intra-cell scale to single cells and cell aggregates. Numerical methods include, but are not limited to, multi-phasic material modeling, computational mechanics, phase field modeling, level set methods, particle-based methods, and machine learning.
Guillermo Lorenzo, University of Pavia
Ryan Woodall, City of Hope
David A. Hormuth II, The University of Texas at Austin
Michael R. A. Abdelmalik, Eindhoven University of Technology
Russell C. Rockne, City of Hope
Alessandro Reali, University of Pavia
Thomas E. Yankeelov, The University of Texas at Austin
Thomas J. R. Hughes, The University of Texas at Austin
Cancers are highly heterogeneous diseases that involve diverse biological mechanisms, interacting and evolving at various spatial and temporal scales. Multiple experimental, histopathological, clinical, and imaging methods provide a means to characterize the heterogeneous and multiscale nature of these diseases by providing a wealth of temporally and spatially resolved data describing tumor morphology, architecture, cellular subtypes, genetic profile, metabolism, vascularity, growth dynamics, and response to therapy. These multimodal, multiscale datasets can be exploited to constrain biophysical models of tumor growth and treatment response both in preclinical and clinical settings. These models can then be leveraged to test hypotheses, produce individualized tumor forecasts to guide clinical decision-making, and, ultimately, to design optimized therapies. To this end, robust computational methodologies are required for solving, parameterizing, and finding optimal therapeutic plans for individual patients. Additionally, uncertainty quantification and model selection strategies are fundamental to assessing model performance, establishing confidence in model outcomes, and selecting the best model formulation given the inherent uncertainty in both data and models. Moreover, the combination of mechanistic models and machine learning may provide superior computational forecasts while enabling the integration of multimodal, multiscale data within a common modeling framework.
Thus, the overall goal of this minisymposium is to provide a forum to present and discuss recent developments in data-informed computational models and methods for predicting tumor growth and treatment response, with special focus on the following research areas:
development of data-informed mechanistic models to investigate cancer development and therapeutic response both in vitro and in vivo
computational methods to initialize, solve, and parameterize these models on a personalized basis
clinical and preclinical studies leveraging these data-informed computational models to assess real-world applications in cancer research and oncology
patient-specific optimization of treatments using data-informed mechanistic models
uncertainty quantification and model selection methods
hybrid strategies combining mechanistic modeling and machine learning
digital twinning in clinical oncology
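As a toy illustration of the model-parameterization step listed above, the sketch below fits the two parameters of a logistic tumor growth law to synthetic volume measurements with a naive grid search; the model choice, the data, and the search ranges are all illustrative assumptions, not the methods of any particular contribution:

```python
import numpy as np

def logistic(t, v0, r, K):
    """Closed-form solution of the logistic growth ODE dV/dt = r V (1 - V/K),
    a classic minimal model of tumor volume dynamics."""
    return K * v0 * np.exp(r * t) / (K + v0 * (np.exp(r * t) - 1.0))

# Synthetic "measurements" (days, cm^3): ground truth r = 0.5, K = 10.
t_obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
v_obs = logistic(t_obs, 0.5, 0.5, 10.0)

# Naive least-squares grid search, an illustrative stand-in for a real
# optimizer or Bayesian calibration scheme.
best = None
for r in np.linspace(0.1, 1.0, 91):
    for K in np.linspace(5.0, 15.0, 101):
        err = np.sum((logistic(t_obs, 0.5, r, K) - v_obs) ** 2)
        if best is None or err < best[0]:
            best = (err, r, K)

_, r_fit, K_fit = best
print(r_fit, K_fit)  # recovers r = 0.5, K = 10.0 on this noise-free data
```

In practice the same calibration loop would be driven by imaging-derived volumes, a spatially resolved PDE model, and an uncertainty-aware estimator rather than an exhaustive search.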
Adrian Buganza Tepole, Purdue University
Ellen Kuhl, Stanford University
Michael Sacks, University of Texas at Austin
Krishna Garikipati, University of Michigan
Continued improvements in computational power, software, and massive data availability over the past two decades have led to an increasing interest in Artificial Intelligence (AI) in general and in neural networks (NN) in particular. Traditionally associated with image processing and informatics, NN are now increasingly used to directly represent Partial Differential Equations (PDE) in many areas of computational mechanics. This is especially the case in biomedical applications, where major breakthroughs in computational methods and imaging technologies have reshaped the biomedical research landscape. For example, in many areas, NN have successfully replaced physics solvers. Other machine learning (ML) techniques include Gaussian processes and automatic model discovery. ML has also allowed for the seamless integration of imaging data and mechanical test data; it naturally incorporates inverse problems to identify model parameters, either by replacing the forward solver with ML surrogates, or by directly learning the inverse mapping.
ML and AI are now widely used to simulate and understand different organ systems including the heart, skin, arteries, and the brain. For instance, ML surrogates can be used to solve the inverse problem of skin growth in tissue expansion from large animal data, and the resulting calibrated model can improve expansion protocols that produce the desired shape of newly grown skin flaps. Another important application is the use of ML surrogates for a wide variety of living systems, trained on a large number of simulations that span a broad range of material parameters, loading, and boundary conditions, and ultimately enable the prediction of personalized disease and treatment scenarios that are impossible with traditional high-fidelity physics solvers. Beyond tissues and organ systems, ML tools such as Physics Informed NNs can be used to model and predict disease dynamics across populations.
This mini-symposium invites contributions in both: i) fundamental advances in ML and AI algorithms specialized for computational mechanics of biomedical systems, and ii) specific application pipelines for biomedical problems enabled by ML methods.
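As a minimal instance of the surrogate-modeling techniques mentioned above, the sketch below performs Gaussian process regression with a squared-exponential kernel in plain NumPy; the training data (a hypothetical noisy 1D response curve) and the hyperparameters are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between 1D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

# Training data: noisy samples of a hypothetical smooth material response.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 2.0, 8)
y_train = np.sin(x_train) + 0.01 * rng.standard_normal(8)

# GP posterior mean at new inputs (small noise variance on the diagonal
# regularizes the solve and models measurement error).
K = rbf_kernel(x_train, x_train) + 1e-4 * np.eye(8)
x_new = np.array([0.5, 1.5])
k_star = rbf_kernel(x_new, x_train)
mean = k_star @ np.linalg.solve(K, y_train)
print(mean)  # close to sin(0.5) ~ 0.479 and sin(1.5) ~ 0.997
```

The same posterior machinery, with simulations of a high-fidelity physics solver as training data, is what turns a GP into a cheap surrogate for parameter sweeps and inverse problems.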
Debanjan Mukherjee, University of Colorado Boulder
Adarsh Krishnamurthy, Iowa State University
Ming-Chen Hsu, Iowa State University
Computational modeling and simulation based approaches in cardiovascular biomechanics and biomedicine have seen rapid progress in recent years. Computational approaches provide a non-invasive modality for understanding the underlying mechanics of cardiovascular diseases, as well as guiding device design and treatment planning. The future of computational cardiovascular biomechanics lies in patient-specific simulation of real disease events, enabling simulation assisted diagnostics, device design and deployment, and treatment planning decisions. The primary challenge in this regard is that patient-specific phenomena involve the synergistic interplay of multiple underlying physical, mechanical, and chemical processes, coupled to each other across several spatial and temporal scales. Concurrently, the availability of high-resolution imaging and clinical data, and recent innovations in data-driven models and artificial intelligence, have enabled new avenues for advancing patient-specific predictive models of cardiovascular phenomena. Together, multiphysics and data-driven modeling have thus emerged as a new frontier in high-fidelity modeling of cardiovascular systems, aiming to resolve physiological and pathological phenomena in real patient-specific scenarios. Advancement in this field calls for inter-disciplinary research efforts that go beyond current multiscale computational mechanics approaches in cardiovascular biomechanics.
This minisymposium will bring together scientists working across various domains to provide a platform for discussing the state-of-the-art and future directions in multiphysics, multiscale, and data-driven modeling of cardiovascular systems. Fundamental and applied contributions from a wide range of topics focusing on theoretical and computational approaches for cardiovascular phenomena will be discussed. The term multiphysics in this context refers to coupled physical interactions including not only fundamental fluid and solid mechanics, but also multiscale transport phenomena, biological growth and remodeling, electrophysiology, biochemical interactions including drug delivery and other related aspects. Data-driven approaches include artificial intelligence, machine learning, data-augmented models, image analytics, uncertainty quantification, and related techniques. Topics include (but are not restricted to):
Coupled multiphysics models for cardiac mechanics.
Multiphysics and multiscale models for vascular biology and biomechanics – arterial and venous systems.
Patient-specific multiphysics modeling of cardiovascular diseases like stroke, aneurysm, thrombosis, atherosclerosis, embolisms.
Numerical methods and algorithms for multiphysics coupling – staggered and monolithic approaches; mesh-based, mesh-free, and particle-based methods.
Artificial intelligence and machine learning in models for cardiovascular phenomena.
Assimilation of experimental data into multiphysics models.
Integration of cardiovascular imaging into multiphysics models.
Applications in cardiovascular medical and surgical treatments for patients.
Applications in design, deployment, and operation of medical devices in vivo.
Thrombotic/embolic risk assessment for biomedical devices and mechanically assisted circulation.
Computational tools, specialized software, and databases for cardiovascular simulations.
Christian Peco,The Pennsylvania State University
Hector Gomez, Purdue University
Franck Vernerey, University of Colorado Boulder
Sulin Zhang, The Pennsylvania State University
Biological materials and soft matter are challenging to study due to their complex composition and the concomitance of different physics at several spatial and temporal scales. The mechanics of biomaterials involve the combination of solid and fluid responses, the ability to withstand large deformations, and finely tuned responsiveness to a variety of stimuli. The aggregation of natural/synthetic soft units generates a new biomechanical hierarchy level, which adds a layer of complexity and presents a whole plethora of relevant emerging properties. Biological and artificial soft materials exhibit a truly remarkable capacity to develop and adapt to different circumstances, self-organize, and react to external agents. The goal of this minisymposium is to bring together outstanding works in the field of biological structure development and soft matter interfaces, along with the arising physical, chemical, and electrical interactions with the surrounding media. We welcome contributions focusing on any of the different scales involved, comprehensive multiscale studies, and artificial intelligence applications in growth and morphogenesis. Cell-to-cell communication, cell aggregation, tissue and infection growth, and even colony-type organisms or robotic biomimetic equivalents are subjects of interest for this gathering. We also encourage original contributions in the form of numerical methodologies that could improve the lens through which these phenomena are evaluated. Contributions related to the development of bio-printing technology and its potential application in biomaterial analysis and tissue engineering are also welcome.
Yongjie Jessica Zhang, Carnegie Mellon University
Adrian Buganza Tepole, Purdue University
Rafael Grytz, University of Alabama, Birmingham
Maria Holland, University of Notre Dame
Johannes Weickenmeier, Stevens Institute of Technology
Aishwarya Pawar, Iowa State University
Medicine relies on a broad range of imaging modalities to visualize, measure, and understand disease. Concurrently, computational models of tissues are being developed with ever higher fidelity thanks to improvements in medical imaging acquisition time, deep tissue imaging, and resolution across scales. New imaging modalities and novel applications of existing technologies have also enabled the characterization of tissue properties beyond geometry or microstructure. Imaging methods used in computational medicine include magnetic resonance imaging, computed tomography, ultrasound, optical coherence tomography, digital image correlation, multi-photon microscopy, and various other microscopy modalities. Imaging data thus provides the geometry indispensable for the generation of any realistic computational mechanics model. It also enables measurements of the changes in geometry, such as elastic strains or permanent deformations, that occur during tissue development, regeneration, aging, and disease. In combination with other measurements such as forces, and an increasing understanding of the physics at different length scales, imaging data plays a fundamental role in the development of validated and predictive constitutive models for biological tissues at the cell, tissue, and organ levels.
In this minisymposium, we solicit contributions that describe advances in computational mechanics and data-driven modeling in medicine. Novel methods for models informed by or based on innovative use of imaging modalities across the scales are welcomed. This minisymposium would also like to highlight interdisciplinary efforts of basic and clinical scientists, biophysicists, engineers, and mathematicians that jointly address the most important challenges and trends in imaging-based modeling of biological phenomena, e.g.: neuron material transport, growth and remodeling mechanisms in myopia, keratoconus and glaucoma, skin growth, brain tissues, and cardiovascular systems.
The Computational Fluid, Solid and Structural Mechanics Emphasis Area includes a number of minisymposia that address aspects of numerical modeling pertaining to this topic. It encompasses, although not exclusively, minisymposia on phenomena of turbulence, creeping flows, nanofluidics, nonlinear response and failure of solids, contact mechanics, fluid-structure interaction, large structural systems and other coupled problems.
Jun Xu, University of North Carolina at Charlotte
Chao Zhang, Northwestern Polytechnical University
Howie Fang, Liberty University
Qian Wang, Manhattan College
Qing Li, University of Sydney
Significant impact and safety events can push physical systems to their limits and result in catastrophic consequences, such as widespread human casualties and injuries and loss of property. A good understanding of systems subject to impact/blast loading and of intrinsic safety problems is required for system design and/or hazard mitigation. Given the destructive nature of safety events, physical experiments, especially full-scale testing, remain a significant challenge because of the high costs associated with specialized facilities and equipment, personnel expertise, and the setup of full-scale test specimens. While full-scale controlled testing may never become obsolete, recent advances in computing hardware and numerical algorithms have enabled numerical modeling and simulation to play an increasingly critical role in this important area of research.
This minisymposium aims to bring together researchers and engineers working on all types of impact and safety problems. It seeks to synthesize recent advances in mathematical models and computational methods/algorithms for impact and safety problems as well as for designing impact/blast resistant structures/systems. Research and industrial applications addressing all aspects of responses of structures, bodies, and materials subject to impact and blast loading are welcomed. In particular, topics targeting advanced materials (composite materials), and energy systems (lithium-ion batteries, electric vehicles) are especially encouraged.
Kent Danielson, U.S. Army Engineer Research and Development Center
Stephen Beissel, Southwest Research Institute
Michael Puso, Lawrence Livermore National Laboratory
David Littlefield, University of Alabama at Birmingham
This minisymposium is focused on computational methods for solids and structures experiencing extreme loads, such as shock and high-speed impact. A broad range of contributions is sought, including numerical modeling of both the prediction of severe loads and the subsequent dynamic response, which may involve coupling multiple areas of computational mechanics. Typical contributions to this forum might come from defense, construction, petroleum, mining, space, or counterterrorism and law enforcement applications. The use of numerical simulation for weapon-structure interaction has grown significantly in recent years, primarily due to its accuracy and practicality and to the expense of testing. New technical developments rely on modeling impact, penetration, and explosive effects to evaluate weapon effectiveness and structural damage to vehicles, body armor, and protective structures. Recent Lagrangian higher-order finite element, isogeometric, and meshfree methodologies enable analysts to examine old problems more easily and in a new light, while airblast, explosive detonation, and other Eulerian and ALE approaches remain important aspects of weapon-structure simulation. New multiscale and machine learning approaches to constitutive modeling can provide greater fidelity for complex material responses. In addition, the assessment of force protection and terrorist threats to government facilities and civilian infrastructure has made tremendous use of computational mechanics for blast-structure modeling. This is particularly true for cases, such as large buildings, dams, or bridges, where full-scale testing of a threat is not feasible, and it is also important for post-event structural integrity assessments in large areas of standard construction.
Oil, mining, and construction operations such as drilling, excavation, demolition, explosive anchor driving, and disaster protection and recovery/damage assessment can also utilize such technologies, and the modeling of impact has become important in aircraft and spacecraft design. The nature of all of these applications typically involves some of the most challenging aspects in structural mechanics, such as nonlinear material behavior under large strains and/or high strain rates, large and nonlinear deformations, failure and dynamic fracture, initiation, burning, and detonation of energetic materials, phase change and transition, and high velocity contact and friction. Parallel and chip hardware advances have made it possible to conduct simulations in three dimensions on unprecedented length and timescales.
The purpose of this mini-symposium is to provide a forum for technical presentation and exchange and to establish communication and collaboration among academic, government, and industrial researchers in the field of computational mechanics for extreme loading applications. Papers dealing with theoretical developments, multi-spectral physics coupling, new higher-order and isogeometric element technologies, meshfree modeling, algorithms and numerical methods, implementation and parallel computation issues, exploitation of GPU programming, new constitutive modeling, experimental validation, and practical applications are all welcome.
Timothy Truster, University of Tennessee
Varun Gupta, Exxon-Mobil
Soheil Soghrati, The Ohio State University
Haim Waisman, Columbia University
To complement physics-based models, data science approaches are playing an increasingly important role in the design and evaluation of materials operating in harsh environments. The aim of this mini-symposium is to provide a forum for discussing novel computational approaches pertaining to the mechanics of materials. It seeks to bring together students, academics, and professionals working in the areas of data science and materials engineering.
The topics covered include (but are not limited to):
Physics-based and machine learning modeling combinations
Data-driven approaches
Damage modeling in materials
Multiscale modeling: Strategies for representing the inherently multiscale nature of the problem covering different spatial or temporal scales
Process-structure models: Part-scale and multiscale simulation of the manufacturing process for predicting surface topology and microstructure including defects
Process-part optimization for design of structural parts/components
Methods to accelerate failure modeling in solids such as adaptive remeshing techniques, specialized linear and nonlinear solvers, and reduced order models
Structure based property prediction
Modeling of advanced manufacturing or joining processes
Modeling of novel material systems
Advanced discretization methods such as enrichment and XFEM/GFEM methods, phase field methods and cohesive zone models
Probabilistic methods in multiscale modeling
Yoshitaka Wada, Kindai University
Hiroshi Okada, Tokyo University of Science
Toshio Nagashima, Sophia University
Xiaosheng Gao, University of Akron
This mini-symposium deals with state-of-the-art computational fracture mechanics applications. Applications of computational methodologies such as FEM, X-FEM, G-FEM, S-FEM, BEM, and other advanced numerical techniques will be discussed. Fields of interest span a wide range of areas, such as aerospace, automobile, naval architecture, nuclear power, mechanical/civil engineering, and other structural applications. Outcomes of both applied and fundamental research are warmly welcome in the mini-symposium.
Lampros Svolos, Los Alamos National Laboratory
Mostafa Mobasher, New York University Abu Dhabi
Luc Berger-Vergiat, Sandia National Laboratories
Hashem Mourad, Los Alamos National Laboratory
Haim Waisman, Columbia University
Over the past decades, various computational approaches have been successfully proposed to model failure modes in a wide range of materials. Frequently encountered failure modes include fracture, damage, shear bands, necking, and erosion. Despite extensive prior research, state-of-the-art approaches still face several challenges that hamper progress towards predictive and computationally efficient modeling of such failure mechanisms. These include the calibration, characterization, and modeling of complex processes coupled to damage and failure (such as microstructure evolution), experimental validation, robust numerical schemes, and efficient solvers. These challenges are especially difficult under extreme loading conditions, such as those observed in impact and thermal-shock experiments, hydraulic fracture of geomaterials, and other multiphysics problems.
The purpose of this MS is to provide a forum for discussion of challenges and advancements in computational approaches for the reliable and efficient treatment of material failure modes. The topics of interest include, but are not limited to the following:
Continuum damage approaches (e.g., phase-field, local/non-local, and cohesive zone models)
Damage and fracture in ductile materials, including computational approaches that take into account the effects of texture and anisotropy
Constitutive and phenomenological modeling of failure mechanisms (e.g., void nucleation, microcracking, and crack initiation/propagation)
High-order theories in continuum mechanics (e.g., second-gradient theories, micropolar elasticity)
Failure in multiphysics settings (e.g., thermomechanical, electrochemical, fluid-driven fractures, or hygrothermal)
Advances in algorithms and solver technologies, such as preconditioners, staggered/monolithic schemes, or iterative methods
Advances in discretization techniques (e.g., particle and material point methods)
Calibration and experimental validation of models capturing shear bands, fracture, and microstructural defects.
Reduced order modeling and machine learning techniques.
Example applications areas: continuum damage in brittle and ductile materials, dynamic response of polycrystalline materials, multiphysics problems (thermo-elasticity and temperature-dependent viscoplasticity, hydraulic fracture), and rupture of soft materials.
Marco ten Eikelder, Technical University of Darmstadt
Laura De Lorenzis, ETH Zürich
Krishna Garikipati, University of Michigan
Yu Leng, Purdue University
Hector Gomez, Purdue University
In many processes in industrial applications and the natural sciences, the evolution of interfaces is of paramount importance. Examples occur in a wide range of research areas including multi-phase flows, crack propagation, fluid-structure interaction, solidification, crystal growth, and biomembranes. The phase-field methodology is a powerful mathematical modeling approach for systems with moving interfaces like these. In the phase-field method, moving boundary problems are reformulated as PDEs on fixed domains, in which the interface evolution is governed by a PDE for a scalar order parameter (the phase field). Phase-field models are diffuse-interface models, meaning that the interface is a thin but smooth transition region described by the phase field.
The phase-field method has favorable properties, such as a rigorous thermodynamical structure and a physical interface description, but it introduces new challenges for computations. Important challenges include the discretization of the higher-order spatial derivatives that typically occur in phase-field models, the design of thermodynamically stable numerical methods (both in space and time), and the treatment of a relatively sharp interface. This minisymposium is dedicated to modeling and computation with the phase-field method. We welcome talks on novel phase-field modeling approaches and numerical algorithms as well as applications in fluids, solids, and biomechanics.
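As a minimal illustrative sketch of the ideas described above (the grid size, interface width, and time step are arbitrary choices, not taken from the abstract), the following evolves a one-dimensional Allen-Cahn phase field with an explicit finite-difference step; the diffuse interface is the smooth tanh-like transition of the order parameter u between -1 and 1 on a fixed grid:

```python
import numpy as np

# 1D Allen-Cahn equation  u_t = eps^2 * u_xx + u - u^3,
# a prototypical phase-field model with a diffuse interface.
def allen_cahn_step(u, dx, dt, eps):
    # second-order central difference with homogeneous Neumann boundaries
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    uxx[0] = 2.0 * (u[1] - u[0]) / dx**2
    uxx[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return u + dt * (eps**2 * uxx - (u**3 - u))

n, eps = 200, 0.02
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n)
u = np.tanh((x - 0.5) / (np.sqrt(2.0) * eps))  # diffuse interface at x = 0.5
dt = 0.2 * dx**2 / eps**2                      # respects the explicit stability limit
for _ in range(500):
    u = allen_cahn_step(u, dx, dt, eps)
```

The severe time-step restriction of the explicit scheme (dt scaling with dx^2) is one instance of the computational challenges, such as thermodynamically stable time integration, that the minisymposium targets.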
Erkan Oterkus, University of Strathclyde
Erdogan Madenci, University of Arizona
Selda Oterkus, University of Strathclyde
This mini-symposium focuses on recent developments in peridynamic mechanical and mathematical models and their applications to the solution of multiscale and multiphysics problems, including damage and fracture, as well as on numerical solution methods. Other nonlocal models are also welcome.
Richard Regueiro, University of Colorado Boulder
Remi Dingreville, Sandia National Laboratories
Nathan Miller, Los Alamos National Laboratory
Matthias Neuner, Stanford University
Christian Linder, Stanford University
Higher order continuum theories involving additional field variables (generalized continua), gradients of fields or internal state variables, or nonlocal representations, have been developed over the past century in an attempt to capture underlying microscopic material scale behavior within macroscale, continuum field theories. With the development of advanced experimental diagnostics (such as ultrafast computed tomography) achieving higher spatial and temporal resolution, these theories have become richer in their representation of underlying microscale behavior and thus have gained more applicability in multiscale analysis. A question to ask then is which numerical methods are suitable for implementing such theories computationally, including finite difference and volume methods, the finite element method, isogeometric analysis, the material point method, meshless methods, and the like. Once implemented numerically, other issues include (i) application of boundary conditions, (ii) communication between non-overlapping micro and macro domains, (iii) adaptive spatial and temporal resolution, (iv) continuity requirements, (v) overlapped couplings for upscaling underlying direct numerical simulation data, and (vi) demonstration of spatial discretization independence, to name a few. Thus, this minisymposium provides a venue for researchers interested in any aspect of computational generalized continua, gradients, and nonlocal mechanics to present their recent work and interact with other researchers also working in the area.
Anh Tran, Sandia National Laboratories
Hojun Lim, Sandia National Laboratories
Philip Eisenlohr, Michigan State University
Marko Knezevic, University of New Hampshire
Coleman Alleman, Sandia National Laboratories
Robert Carson, Lawrence Livermore National Laboratory
Nicole Aragon, Sandia National Laboratories
Unraveling the process-structure-property relationship is one of the hallmarks of materials science across different materials systems. Accelerated by the Materials Genome Initiative, numerous integrated computational materials engineering (ICME) models have been proposed and developed over the last decade to reliably and efficiently predict the behavior and response of materials. With the advent of new advanced manufacturing techniques such as additive manufacturing, these ICME workflows are playing an even larger role. To that end, the crystal plasticity finite element method has been extensively used to investigate structure-property relationships. Within the broader field of crystal plasticity computational mechanics, we cordially invite submissions that are related to
applications and enhancements of crystal plasticity computational mechanics in ICME contexts
development, numerical verification, and experimental validation of constitutive models for crystal plasticity
quantification of materials and microstructure variability, e.g. effects of crystallographic texture and grain/phase size distribution
quantification of uncertainty in the structure-property relationship
machine learning in crystal plasticity applications
workflows of crystal plasticity methods for structure-property relationships
Coleman Alleman, Sandia National Laboratories
John Emery, Sandia National Laboratories
Tom Seidl, Sandia National Laboratories
Accurate modeling of solid materials requires careful calibration of appropriate constitutive models, which typically requires the solution of an inverse problem to determine model parameter values that yield the closest match to an observed response. Obtaining accurate, credible simulation results requires the calibrated model to be valid throughout a potentially large region of the parameter space, the extent of which is not generally known during calibration. Thus, proper use of material constitutive models requires that 1) calibration produces a set of model parameters that is optimal in some sense, and 2) the fitness of the model and parameter values is assessed with respect to a scenario that is generally not fully specified prior to calibration.
Several significant challenges arise in this context, including the following:
It is difficult in general to define objective functions that are smooth and convex with a unique global minimum, so local and gradient-based optimization techniques can be inadequate.
Rigorous validation of a calibrated model for its intended use is often time-consuming and complicated.
Evaluations of the objective function and its derivatives at each iteration of the optimization process typically require the solution of an expensive forward problem.
Probabilistic (e.g. Bayesian) calibration methods suffer from high computational costs and the curse of dimensionality.
For this minisymposium, we are soliciting contributions that address these or other challenges in model calibration. We are particularly interested in research that addresses one or more of the following: 1) constrained optimization in the context of model calibration; 2) machine learning and associated techniques that generate surrogate or reduced-order models for increased efficiency; 3) methods that provide uncertainty quantification for model parameters; 4) approaches that address multiphysics and multi-fidelity aspects of model calibration; 5) techniques that leverage full-field data.
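To make the inverse-problem setting above concrete, here is a deliberately simple, hypothetical calibration: a one-parameter linear elastic model, stress = E * strain, is fit to noisy synthetic data by scanning a least-squares objective over a grid of candidate moduli. The data, noise level, and search range are all invented for illustration; realistic calibrations replace the grid scan with the optimization and UQ methods the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.01, 20)
E_true = 200e9                                     # "unknown" modulus, Pa
stress_obs = E_true * strain + rng.normal(0.0, 1e6, strain.size)  # noisy observations

def objective(E):
    # least-squares misfit between model prediction and observed response;
    # in practice each evaluation may require an expensive forward simulation
    return np.sum((E * strain - stress_obs) ** 2)

candidates = np.linspace(150e9, 250e9, 1001)
E_fit = candidates[np.argmin([objective(E) for E in candidates])]
```

Even in this toy problem, each objective evaluation stands in for a full forward solve, which is why surrogate models and gradient-aware or Bayesian methods become essential at scale.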
Jean-Luc Guermond, Texas A&M University
Madison Sheridan, Texas A&M University
The objective of this minisymposium is to gather engineers and applied mathematicians in computational fluid mechanics to discuss current developments in approximation methods that are invariant-domain preserving. Methods in this class guarantee that the approximate solution stays in the physical domain assigned by the physics (positivity of density, internal energy, and temperature; minimum principle on entropy; maximum compressibility). The topics of interest to this minisymposium are space and time approximations of the compressible Euler equations and the Navier-Stokes equations, written in either Eulerian or Lagrangian coordinates.
Christian Peco, The Pennsylvania State University
Koji Nishiguchi, Nagoya University
John A. Evans, University of Colorado Boulder
Artem Korobenko, University of Calgary
Tomohiro Sawada, National Institute of Advanced Industrial Science and Technology
Jinhui Yan, University of Illinois Urbana-Champaign
Guglielmo Scovazzi, Duke University
Yuri Bazilevs, Brown University
The development of fluid-structure interaction (FSI) methods is fundamental to the analysis of boundary and interface problems in science and engineering. This minisymposium welcomes topics that overcome relevant challenges in the field and advance the feasibility of simulation-driven applications involving interfaces. Works on complex behavior at the interface in biological, mechanical, aeronautical, and civil engineering FSI applications are encouraged. Contributions in the scope of this gathering include, but are not restricted to, novel computational frameworks, new discretization and high-order approaches, theoretical developments, phase-field and advanced interface-capturing techniques, coupling strategies, Eulerian and arbitrary Lagrangian-Eulerian (ALE) hydrocodes, and high-performance computing. Presentations focusing on thorough methodology comparisons and implementation details of new algorithms and methods are relevant to this meeting. Recent trends in machine learning techniques for accelerating interface FSI problems in engineering are also of interest. We aim to bring together experts from academia and industry to foster a collaborative environment of exchange and discovery and to discuss the most recent advances and research directions in FSI.
Justin Kauffman, Virginia Tech
Scott Miller, Sandia National Laboratories
John Gilbert, Virginia Tech
The goal of this mini-symposium (MS) is to provide a forum for investigators to share state-of-the-art modeling solutions and numerical challenges unique to Fluid-Structure Interaction (FSI) applications. The MS will include novel FSI modeling approaches and numerical methods for the simulation of a variety of applications, including but not limited to biomechanical FSI, blast-on-structures, cavitation-induced damage, and fluid-thermal-structure interaction. Other areas of interest for this MS include applications of FSI at all scales, communication of software implementation details, performance evaluation of original and commercial codes, benchmark problems, and additional verification and validation schemes.
Ado Farsi, Imperial College London
Giacomo Capodaglio, Los Alamos National Laboratory
This series of talks will present an overview of the latest developments in mixed discretization methods for the simulation of solids, fluids, and coupled problems. We consider a mixed discretization to be any numerical method obtained by combining existing techniques, such as the finite element method (FEM), the discrete element method (DEM), or the material point method (MPM), with the goal of simulating challenging computational physics problems. Numerical methods for the coupling of mixed physics, such as local-nonlocal coupling, are also a good fit for this minisymposium. The invited talks will cover the scientific output of researchers from a variety of applied and multidisciplinary fields and institutions around the world. The minisymposium will bring together academics and industry specialists who are themselves using and developing new mixed discretization codes, and will also showcase more theoretical work in the field. Special emphasis will be placed on best practices for achieving efficient implementations of mixed discretization methods on high-performance computing architectures, such as GPUs and multicore CPUs, to guarantee their scalability and practical deployment on large-scale and grand-challenge applications. Research areas that will be discussed during the minisymposium include (but are not limited to):
Coupling methods and applications for multi-physics (e.g. fluid and thermal) structural problems;
Chemical and pharmaceutical applications (powder compaction, tableting, reactors, etc.);
Fluid-structure interaction applications (biophysical flows, aerodynamics);
Geophysical flows (sea ice dynamics and evolution);
Nonlocal and fractional problems (subsurface transport and subsurface flow, material fracture);
Civil and mechanical applications (track ballast, tunnelling, mechanical components, etc.);
Rock mechanics, petroleum and mining applications (underground excavations, hydraulic fracturing, CO2 sequestration, etc.).
Eirik Valseth, The University of Texas at Austin
Ethan Kubatko, The Ohio State University
Clint Dawson, The University of Texas at Austin
Kazuo Kashiyama, Chuo University
Many problems in geophysical and environmental fluid mechanics exhibit a wide range of scales and must be solved over large, geometrically complex spatial domains, often for long periods of time. Computational methods for these types of problems have matured considerably in recent years. This minisymposium will examine the latest developments in solving geophysical and environmental fluid mechanics problems. Topics of interest include:
Model development and application.
Coupling of flow and transport processes and models.
High-performance computing and parallelization strategies.
Error analysis, verification and validation.
Unstructured mesh generation algorithms and criteria.
Fluid-structure interactions.
Novel discretization methods.
Tamas Horvath, Oakland University
Loic Cappanera, University of Houston
Giselle Sosa Jones, Oakland University
Coupled problems appear in many important real-life applications, such as multiphase flows, water waves, magnetohydrodynamics, and fluid-structure interactions. One possible way to approximate the solution of such problems is to use finite element (FE) methods, which have been of great interest in the applied mathematics and engineering communities due to their applicability to a wide range of problems. As a result, many different FE methods have been developed to handle these problems. In this minisymposium, we aim to provide a platform for researchers developing novel FE techniques for coupled problems in incompressible fluid dynamics. Numerical methods of interest include, but are not limited to, stabilized finite element, discontinuous Galerkin, hybridized/embedded discontinuous Galerkin, and Trefftz discontinuous Galerkin methods.
David Del Rey Fernandez, University of Waterloo
Brian Vermeire, Concordia University
Siva Nadarajah, McGill University
Modern and future computational architectures promise unprecedented power that could enable the simulation of unsteady nonlinear partial differential equations in realistic scenarios (e.g., complex geometry) at unprecedented scale and accuracy. High-order methods are excellent candidates for such systems as a result of their dense compute kernels, particularly in the context of unsteady problems requiring high accuracy. However, for such problems high-order methods have traditionally suffered from stability issues that make them impractical. In the context of computational fluid dynamics, there have been huge advances toward developing high-order schemes with provable properties (e.g., entropy stability, positivity preservation, etc.) and the machinery required to make them efficient (e.g., adaptation) and practical. In this minisymposium we look broadly at nonlinear partial differential equations and the mathematics required to develop efficient high-order schemes with provable properties. Example PDEs of interest include, but are not limited to, incompressible and compressible flow equations, multiphase equations, nonlinear wave equations, and nonlinear reaction-diffusion equations. Moreover, numerical techniques of interest include, but are not limited to, provably stable schemes, positivity preservation, time stepping, stabilization, adaptation, space-time methods, unstructured schemes, and machinery for the efficient deployment of high-order methods on modern and future compute architectures.
Michael Puso, Lawrence Livermore National Laboratory
Peter Wriggers, Leibniz University Hannover
Jerome Solberg, Lawrence Livermore National Laboratory
Mechanical treatment of interface surfaces is an important aspect of many analyses. These interfaces may be physical in form, such as contact/impact or fracture/crack interfaces, or numerical in form, such as immersed boundary/embedded or domain decomposition methods. Although much progress has been achieved, many outstanding challenges remain, and the field has diversified in many directions. This session is devoted to recent developments on the various aspects of interface mechanics:
Computational treatment of unilateral contact, friction, adhesion, fretting etc.
Discretization methods for overlapping immersed and embedded meshes.
Optimization, Reduced Order Modeling and Machine Learning applied to interface problems.
Multilevel approaches (molecular and nano-micro-macro models) for interfaces.
Multiphysics modeling (e.g., piezo, thermal, fluids) involving interface surfaces.
Computational methods: fast solvers, multigrid, isogeometric analysis, NURBS, virtual elements, etc.
Dynamics of structures and rigid bodies in unilateral contact or interface coupling.
Industrial applications involving interface modeling.
Besides presentations of new results and new contributions to the understanding of interface mechanics and its numerical treatment, this session will provide an opportunity to discuss and exchange ideas on the various topics related to the field.
Nikhil Chandra Admal, University of Illinois Urbana-Champaign
Brandon Runnels, University of Colorado Colorado Springs
Giacomo Po, University of Miami
Enrique Martinez, Clemson University
Interfaces play a critical role in determining the physical properties of materials. They occur as grain and phase boundaries in crystalline materials; transformation fronts that separate solid and fluid phases; and liquid-like membranes in biological materials. The thermodynamic and kinetic properties of evolving interfaces emerge from the physics at multiple length and time scales. At the macroscale, interfaces are effectively modeled using continuum mechanics. Examples include classical grain microstructure evolution, stress-induced solid-solid phase transitions, chemo-mechanics with localized chemical reaction fronts, biomembranes, and the growth of free surfaces. Since the constitutive laws that govern continuum models stem from physics at lower scales, the multiscale modeling paradigm has been instrumental in building first principles-inspired constitutive laws.
The aim of this mini-symposium is to discuss recent developments and open questions in multiscale modeling and computational methods to study interfaces. The topics include, but are not limited to the following:
Diffuse interface and sharp interface methods, such as phase field, thresholding methods, XFEM, GFEM, and cutFEM
Multiscale modeling of grain and phase boundaries under extreme thermomechanical loads with a focus on coupling atomistic models, mesoscale defect mechanics models, and macroscale continuum models
Statistical mechanics of interfaces
Coevolution of bulk and interfaces - interaction between grain/phase boundaries and dislocations, bulk and surface diffusion in biomembranes
Interface mechanics of 2D heterostructures
Franck Ledoux, CEA
Steven Owen, Sandia National Laboratories
Matthew Staten, Sandia National Laboratories
Automatic unstructured mesh generation continues to be a vital technology in numerical simulations. While computing and solver technology advance and modeling requirements become more precise, the automatic generation of suitable meshes for complex configurations constitutes a principal bottleneck in the simulation workflow. It remains a very time-consuming task for many engineers around the world. The symposium on Trends in Unstructured Mesh Generation, or MeshTrends for short, focuses on the mesh generation process for numerical simulation purposes. It is a forum for exploring and synthesizing many of the technologies needed to develop a computational grid suitable for simulation. This symposium brings together top researchers and practitioners in academia, government, and industry from around the world to exchange and network in the broad field of meshing and geometry for computational mechanics.
Engineers and researchers are invited to submit abstracts on the following topics:
Mesh generation algorithms: including theoretical foundations and new automatic methods for linear and high-order tetrahedral, hexahedral, and polyhedral meshes.
Application of Artificial Intelligence and Machine Learning to solve meshing problems.
Parallel and scalable algorithms: including methods for generating and managing mesh and geometry for massively parallel systems.
Meshing tools and applications: including commercial meshing tools and their application to current problems in industry.
Multiphysics meshing issues: including tools and methods for managing meshing and geometry for multiscale, multiphysics applications.
Infrastructure and tools for meshing: including APIs and tools for managing and interfacing meshing tools.
Adaptive meshing tools and applications: including tools and methods for adaptively modifying mesh and geometry based on run-time results or optimization parameters.
Meshing and CAD geometry: including tools and methods for properly interacting with CAD geometry and, when needed, characterizing and resolving geometry problems to ensure the reliable generation of controlled meshes.
In addition to the MeshTrends23 symposium, the organizing committee will select some of the works presented at the symposium and invite extended versions for a peer-reviewed journal special edition of MeshTrends23.
John Mersch, Sandia National Laboratories
Jeffrey Smith, Sandia National Laboratories
Matthew Brake, Rice University
Threaded fasteners represent common connectors between parts in assemblies, and analyzing their integrity can be a critical aspect of safety assessment, component/system qualification, and quantifying margin and uncertainties. However, difficulties arise throughout the assessment process that can make this evaluation particularly challenging, including material property uncertainty, calibration challenges, large-scale modeling feasibility, and as-built geometric realities. Thus, engineers must employ wide-ranging techniques to assess the integrity of joints subjected to diverse mechanical environments. This minisymposium aims to highlight many facets of threaded fastener joint computational analysis, including but not limited to:
fastener representation and geometric fidelity in finite element models
constitutive modeling and material model calibration
experimental techniques and model validation
reduced-order and surrogate modeling of fasteners
plastic deformation and failure analysis
fastener performance in high-rate, thermal, multiaxial, and other mechanical environments
threaded fastener joint response in normal and extreme environments
larger-scale system response with threaded fasteners
design evaluation and qualification of systems with threaded fastener joints
application of machine learning techniques
uncertainty quantification
Shawn Chester, New Jersey Institute of Technology
Jaafar El-Awady, Johns Hopkins University
WaiChing Sun, Columbia University
This mini symposium brings together researchers from a variety of backgrounds to exchange and discuss ideas related to the computational mechanics of active materials and structures. This is an exciting area involving the incorporation of multi-physics mechanics, multi-functional materials, varying length and time scales, and more into the response of the material and/or structure.
Topics of interest include, but are not limited to:
4D printed materials and structures
Chemically actuated biological materials and structures
Shape-morphing materials and structures
Shape memory materials and structures
Thermo-active materials and structures
Electro-active materials and structures
Magneto-active materials and structures
Light-active materials and structures
Mimicking natural actuators
This mini symposium is sponsored by the USACM Technical Thrust Area on Multi-Scale, Multi-functional Materials and Structures.
James W. Foulk III, Sandia National Laboratories
Alejandro Mota, Sandia National Laboratories
Brandon Talamini, Lawrence Livermore National Laboratory
Michael Tupek, PTC
Julian Rimoli, University of California, Irvine
N. Sukumar, University of California, Davis
In this minisymposium we seek to highlight challenging problems in computational solid mechanics that require rapid model building and mesh adaptivity for their solution. We focus on the finite element method and relevant element technologies for large deformations and the accompanying inelasticity, localization, and failure. Discussion will center on Lagrangian descriptions and the computational components necessary to resolve, preserve, and evolve the fields that govern these processes. Prototypical material systems may include, but are not limited to, polymers, structural metals, and biomaterials.
Topics of interest:
Novel methods for discretization
Global and local remeshing including topological changes and smoothing
Field recovery
Mapping of internal variables
Tetrahedral, hexahedral, and other 3D element technology
The Data-driven Modeling and Uncertainty Quantification Emphasis Area includes minisymposia focused on the development of algorithms for uncertainty quantification and data-driven modeling, e.g., artificial intelligence and machine learning (AI/ML) algorithms, that exploit data for solutions to vexing computational mechanics problems. Topics include, but are not limited to, novel methods for leveraging models and data of varying fidelity, supplementing sparse data sets, reducing the solution cost of forward problems to address uncertainty, physics-constrained AI/ML, and interpretable AI/ML methods. Minisymposia targeting applications of AI/ML that do not obviously fit another Emphasis Area are also encouraged.
Johann Guilleminot, Duke University
Michael Shields, Johns Hopkins University
Lori Graham-Brady, Johns Hopkins University
Kirubel Teferra, US Naval Research Laboratory
Advances in physics-based modeling are responsible for the generation of massive datasets containing rich information about the physical systems they describe. Efforts in Uncertainty Quantification (UQ), once an emerging area but now a core discipline of computational mechanics, serve to further enrich these datasets by endowing the simulation results with probabilistic information describing the effects of parameter variations, uncertainties in model-form, and/or their connection to and validation against physical experiments.
This MS aims to:
Highlight novel efforts to (A) Harness the rich datasets afforded by potentially multi-scale, multi-physics simulations for the purposes of uncertainty quantification; and (B) Develop physics-based stochastic models, solvers, and methodologies for identification, forward propagation, and validation;
Address modeling problems at multiple length-scales, ranging from the atomistic level to the component level, for a broad class of materials (including metals, metallic alloys, composites, polymers, and ceramics).
This includes, but is not limited to, efforts that:
Merge machine learning techniques with physics-based models;
Develop physics-based stochastic models and low dimensional representations of very high dimensional systems for the purposes of uncertainty quantification;
Extract usable/actionable information from large, complex datasets generated by physics-based simulations;
Develop active learning algorithms that exploit simulation data to inform iterative/adaptive UQ efforts;
Develop stochastic solvers and sampling algorithms;
Interpolate high-dimensional data for high-fidelity surrogate model development;
Learn the intrinsic structure of physics-based simulation data to better understand model-form and its sensitivity;
Develop new methodologies for model identification;
Assess similarities/differences/sensitivities of physics-based models and validate them against experimental data.
The MS aims to span across applications of mechanics, with an emphasis placed on methodological developments that can be applied to physical systems of all types.
Geoffrey Bomarito, National Aeronautics and Space Administration
Jacob Hochhalter, The University of Utah
John Emery, Sandia National Laboratories
James Warner, National Aeronautics and Space Administration
Kyle Johnson, Sandia National Laboratories
Scientific machine learning has become a field that encompasses the many methods of combining physical knowledge with data-driven modeling. In some cases, physical knowledge might inform a machine learning model by applying constraints based on the relevant physics of the problem, e.g., physics-based regularization. In other cases, the knowledge transfer may transpire in the opposite direction, e.g., data-driven discovery of physics and governing equations. Regardless of the direction, the integration of data-driven modeling and physical knowledge can result in rapid advancements of our state of knowledge and the community's command of a discipline. Similarly, uncertainty quantification methods can be integrated to mitigate overfitting and promote generalizable models in the presence of noise.
This minisymposium solicits presentations on scientific machine learning methods that inherently consider uncertainty, with a focus on the application of these approaches to solid mechanics problems. It is well known that traditional methods for capturing inherently stochastic material phenomena, such as plasticity, ductility, and fracture, can be very computationally expensive. As such, this MS focuses on methods where physical knowledge provides more than observational biases in training data, and AI/ML are leveraged to facilitate efficient and probabilistic predictions of plasticity, structural failure, or variability in performance. We solicit related abstracts that advance these capabilities with general algorithm advancement or novel applications to solid mechanics. The following general topics are of interest, but the list is not exhaustive:
AI/ML methods for integration of traditional solid mechanics knowledge with uncertainty quantification
Algorithms that add or maintain interpretability
Methods for data-driven knowledge extraction
AI/ML strategies for tractable multiscale/multi-fidelity simulation
Physics informed surrogate models for expeditious uncertainty quantification
Data-driven models accounting for multiphysics phenomena
Algorithms that efficiently capture microstructural information
Algorithms that inherently capture and propagate uncertainty
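As one tiny, hypothetical illustration of an algorithm that inherently captures and propagates uncertainty (the data-generating function, noise level, and ensemble settings below are invented, not taken from the abstract), a bootstrap ensemble of polynomial surrogates fit to noisy observations yields both a mean prediction and a pointwise uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, x.size)  # noisy "observations"

def fit_predict(xs, ys, x_eval, deg=5):
    # a cheap polynomial surrogate standing in for a learned model
    return np.polyval(np.polyfit(xs, ys, deg), x_eval)

x_eval = np.linspace(0.0, 1.0, 101)
preds = []
for _ in range(50):                          # bootstrap resampling of the data
    idx = rng.integers(0, x.size, x.size)
    preds.append(fit_predict(x[idx], y[idx], x_eval))
preds = np.array(preds)
mean, std = preds.mean(axis=0), preds.std(axis=0)  # prediction and uncertainty band
```

The spread across the ensemble (std) is a simple, model-agnostic uncertainty estimate; the methods sought by this MS replace such generic surrogates with physics-informed models and more rigorous UQ.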
Danial Faghihi, University at Buffalo
Alireza Tabarraei, The University of North Carolina at Charlotte
Kathryn Maupin*, Sandia National Laboratories
Prashant K. Jha, The University of Texas at Austin
Peng Chen, Georgia Institute of Technology
Recent advances in computational science have resulted in the ability to perform large-scale simulations and process massive amounts of data obtained from measurements, images, or high-fidelity simulations of complex physical systems. Harnessing such large and heterogeneous observational data and integrating them with physics-based and scientific machine learning models have advanced the predictive capabilities of computational models.
This mini-symposium highlights novel efforts to develop predictive computational models and model-based decision-making. It provides a forum for advancing scientific knowledge of data-driven complex system modeling and discussing recent uncertainty quantification (UQ) developments in physics-informed scientific machine learning and data interpretation algorithms. Potential topics may include but are not limited to efforts on:
Bayesian validation and selection of computational models
UQ analyses of high-fidelity discrete (molecular dynamics, agent-based) models
Physics-informed machine/deep learning
Data-driven discovery of physical laws
The interface of UQ and scientific machine learning
Design, control, and decision-making under uncertainty
Integrated multi-scale modeling and image analyses
Computational imaging
Operator inference for model reduction and surrogate modeling
Learning from high-dimensional and uncertain data
Multi-level, multi-fidelity, and dimension reduction methods
Learning the structure of the high-fidelity physics-based model from data
UQ methods for stochastic models with high-dimensional parameter space
Scalable, adaptive, and efficient UQ algorithms
Extensible software framework for large-scale inference and UQ
*Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
Som Dhulipala, Idaho National Laboratory
Michael Shields, Johns Hopkins University
Xu Wu, North Carolina State University
Audrey Olivier, University of Southern California
Jarek Knap, Army Research Laboratory
Ting Wang, Booz Allen Hamilton
Multiscale models, multiphysics models, and network systems are examples of complex systems whose modeling is computationally expensive. This expense is exacerbated when performing uncertainty quantification (UQ) studies for tasks such as inverse parameter estimation, rare-events analysis, and optimal design and control. By integrating ML models into UQ tasks, especially with active feedback loops, ML-aided UQ can provide a computationally tractable framework for UQ studies involving complex, high-dimensional, and/or multi-scale systems. However, several challenges, such as robustness and efficiency of the UQ, algorithmic scalability, enforcement of physical constraints, and use of parallel computing, need to be addressed to advance ML-aided UQ in the computational sciences. This mini-symposium focuses on recent advances in algorithms, frameworks, methodologies, and applications that promote ML-aided UQ in the computational sciences. Because advancing ML-aided UQ requires research in ML, statistics and probability, and the domain of interest, all topics directly addressing or supporting ML-aided UQ are welcome in this mini-symposium. These include, but are not limited to, approximate inference, gradient-based sampling and optimization, quality of the UQ, UQ with parallel computing, Bayesian neural networks, reinforcement learning, active learning, and applications.
Tan Bui-Thanh, The University of Texas at Austin
Hai Van Nguyen, The University of Texas at Austin
Krishnanunni Chandradath Girija, The University of Texas at Austin
The rapid growth of practical deep learning applications in a range of contexts has fueled renewed interest in deep learning methods in recent years. Scientific deep learning is an emerging discipline that merges scientific computing and deep learning. Whereas scientific computing focuses on large-scale models derived from scientific laws describing physical phenomena, deep learning focuses on developing data-driven models that require minimal knowledge and prior assumptions. These contrasting approaches carry different advantages: scientific models are effective at extrapolation and can be fitted with small data and few parameters, whereas deep learning models require a significant amount of data and a large number of parameters but are not biased by the validity of prior assumptions. Scientific deep learning endeavors to combine the two disciplines in order to develop models that retain the advantages of both. This mini-symposium collects recent work on scientific deep learning methods, covering theories, algorithms, and applications in engineering and the sciences.
Somdatta Goswami, Brown University
Katiana Kontolati, Johns Hopkins University
Learning tasks in isolation, i.e., training a separate model for each task and dataset, is the standard paradigm in machine learning. Because deep learning relies on deep and complex architectures, the learning process is usually time- and effort-intensive and needs large labeled datasets, limiting applicability in areas where such data are scarce. Transfer learning, multi-task learning, and federated learning approaches address these challenges by exploiting the available data during training and adapting previously learned knowledge to emerging domains, tasks, or applications. Transfer learning, in particular, is defined as the set of methods that leverage data from different but correlated tasks/domains to train generalizable models that can be adapted to specific tasks via fine-tuning. Federated learning, in turn, is a learning paradigm that addresses data-management and privacy concerns by training jointly on distributed data, without the need to transfer the data to a central entity.
Although much research is ongoing in these areas, many challenges remain unresolved, especially for regression problems involving nonlinear partial differential equations. This workshop will bring together researchers working on deep learning and employing different augmented learning techniques to simplify and enhance the efficiency of deep learning. The workshop also aims to bridge the gap between theory and practice by providing researchers and practitioners the opportunity to share ideas and to discuss and critique current theories and results.
We invite submissions on all topics related to deep learning with transfer, multi-task, and federated learning, including but not limited to:
Deep transfer learning for physics-based problems.
Deep neural network architectures for transfer, multi-task and federated learning.
Transfer learning across different network architectures, e.g., CNN to RNN.
Transfer learning across multi-fidelity models.
Transfer learning across different tasks.
Teresa Portone, Sandia National Laboratories
Kathryn Maupin, Sandia National Laboratories
Computational models are commonly used to make predictions affecting high-consequence engineering design and policy decisions. However, incomplete information about modeled phenomena and limitations in experimental and/or computational resources necessitate approximations and simplifications that can lead to model-form error. How best to address model-form error is an open question whose answer depends on the goals of an analysis, the application problem of interest, and available resources. This minisymposium focuses on data-driven and probabilistic methods to address model-form error, such as inference methods accounting for model inadequacy and explicit data-driven and probabilistic representations of model-form error/uncertainty.
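To make the idea concrete, the following toy sketch (a hypothetical example with made-up models, not any specific contributor's method) represents model-form error as an additive discrepancy term fitted to observed residuals, in the spirit of the data-driven representations this minisymposium covers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: the "truth" contains a trend the model omits.
def truth(x):
    return np.sin(x) + 0.3 * x   # reality (unknown in practice)

def model(x):
    return np.sin(x)             # inadequate computational model

# Sparse, noisy observations of the true response
x_obs = np.linspace(0.0, 3.0, 15)
y_obs = truth(x_obs) + rng.normal(scale=0.02, size=x_obs.size)

# Represent model-form error as an additive discrepancy delta(x),
# here a simple quadratic polynomial fitted to the observed residuals
resid = y_obs - model(x_obs)
delta = np.poly1d(np.polyfit(x_obs, resid, deg=2))

# Compare prediction errors with and without the correction
x_new = np.linspace(0.0, 3.0, 100)
err_raw = np.max(np.abs(truth(x_new) - model(x_new)))
err_cor = np.max(np.abs(truth(x_new) - (model(x_new) + delta(x_new))))
print(err_raw, err_cor)  # the corrected model should be far more accurate
```

In realistic analyses the discrepancy is usually inferred jointly with the model parameters, often probabilistically; the sketch only shows the structural idea of an explicit, data-driven error term.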
Youngsoo Choi, Lawrence Livermore National Laboratory
Masayuki Yano, University of Toronto
Matthew Zahr, University of Notre Dame
While physical simulation has become an indispensable tool in engineering design and analysis, a number of real-time and many-query applications remain out of reach for classical high-fidelity analysis techniques. Projection-based model reduction is one approach to reduce the computational cost in these applications while controlling the error introduced in the reduction process. In this mini-symposium, we discuss recent developments in model reduction techniques. Topics include, but are not limited to, nonlinear approximation techniques; high-dimensional problems; hyperreduction methods for nonlinear PDEs; data-driven methods; incorporation of machine-learning techniques; error estimation and adaptivity; and their applications to optimization, feedback control, uncertainty quantification, and inverse problems in fluid and structural dynamics, with an emphasis on large-scale industry-relevant problems. The minisymposium will bring together researchers working on both fundamental and applied aspects of model reduction to provide a forum for discussion, interaction, and assessment of techniques.
Andrew Glaws, National Renewable Energy Laboratory
Joseph Cohen, University of Michigan
Zachary Grey, National Institute of Standards and Technology
Xun Huan, University of Michigan
Michael Schmidt, Sandia National Laboratories
Modeling nonlinear dynamical systems is critical to understanding and characterizing complex physical phenomena, ranging from climate and weather models, to energy systems applications, to quantum computing and smart manufacturing. Physics-based models provide the foundation for simulating these systems; however, comprehensive studies of the system dynamics generally require multiple evaluations of the model under different scenarios in order to perform uncertainty quantification, sensitivity analysis, optimization, or some other outer loop analysis. This is particularly difficult for high-dimensional industrial processes that lack intermediate observability and explainability, which has impeded the adoption and maturity of existing computational methods. In these cases, data-driven surrogates, machine learning enhancements, or model-based reductions can provide pathways to scaling expensive physics-based models to facilitate a more thorough exploration of the system. This minisymposium examines recent developments in data-driven methods for dynamical and industrial systems that improve exploration and interpretation. Emphasis is placed on how these methods can be applied to real-world examples of dynamical systems to enhance our understanding of the underlying physical phenomena.
Parisa Khodabakhshi, Lehigh University
Elnaz Seylabi, University of Nevada Reno
Many-query studies in large- and multi-scale engineering applications, requiring a multitude of simulations, can become practically intractable due to limited computational budgets. Surrogate modeling is a viable solution for reducing the computational burden of these applications, where a low-cost yet reasonably accurate model replaces the computationally expensive forward model to define a mapping between the input or design parameter space and the quantities of interest. In this regard, machine learning (ML) approaches have received significant attention in the past decade due to their versatility and flexibility. The idea behind ML methods is to develop a nonlinear mapping using training data to make predictions for unseen scenarios. To optimize the underlying structure of the mapping and ensure fidelity, ML-based methods require access to a significant amount of data. Even then, the results may not be generalizable, limiting their applicability in most engineering applications where training data is scarce and its generation is computationally very demanding. On the other hand, scientific ML (SciML) methods aim to embed the physics of the underlying problem and utilize domain knowledge to help increase the reliability of the outputs of such physics-informed models with limited or no data. In this mini-symposium, we invite contributions on recent algorithmic advances and successful examples of utilizing ML and SciML methods and other relevant techniques in surrogate modeling of dynamical systems.
Qizhi He, University of Minnesota Twin Cities
WaiChing Sun, Columbia University
Jiun-Shyan Chen, University of California, San Diego
Xiaolong He, Ansys Inc.
Recent advancements in physics-informed data-driven computation provide new possibilities for the modeling and simulation of complex problems in solid and geological mechanics. There are several promising directions emerging in the field, ranging from directly exploiting data for computational mechanics without constitutive laws, applying deep learning and manifold learning for highly nonlinear and high-dimensional problems, to integrating machine learning and dimensionality reduction techniques into physics-based models for accelerated multiscale modeling and discovery of the underlying constitutive laws and governing equations of complex material systems.
This mini-symposium aims to solicit research developments and applications that involve data-driven, model order reduction, and machine learning approaches for computational solid and geological mechanics, with topics that include, but are not limited to:
Model-free data-driven computational mechanics
Physics-informed machine learning for linear and nonlinear solid mechanics
Data-assisted modeling of heterogeneous materials, including geomaterials, concrete, composites, among others
Data-driven discovery of constitutive laws and governing equations
Interpretable discovery driven by machine learning
Causal discovery for explainable modeling
Supervised/Unsupervised data/physics-driven learning of surrogate models
Reduced-order real-time simulation of solid and geo-systems
Thomas Hagstrom, Southern Methodist University
Daniel Appelo, Michigan State University
Lu Zhang, Columbia University
Speakers in this minisymposium will discuss diverse applications of new algorithms to challenging problems in wave theory. Examples include the use of machine learning as well as traditional reduced order modeling techniques to develop effective preconditioners and fast surrogate direct models, to solve inverse problems, and to approximate multiscale media with uncertainty quantification. In addition, new methods, such as time-domain solvers in the frequency domain and frequency-domain solvers in the time domain, will be considered.
Timothy Walsh, Sandia National Laboratories
Wilkins Aquino, Duke University
Volkan Akcelik, Sandia National Laboratories
Bojan Guzina, University of Minnesota Twin Cities
Inverse problems and Optimal Experimental Design (OED) are at the nexus of simulation and experimental science. OED (e.g., sensor placement optimization) is essential for experimentalists to gather data that is well-conditioned for the purpose of solving inverse problems for parameter estimation. Inverse problems, in turn, use those data to produce predictive models grounded in experimental measurements.
In this mini-symposium, we invite talks in the areas of computational methods for the numerical solution of inverse problems in computational mechanics, OED, novel optimization algorithms, and computational approaches within probabilistic formulations. Of particular interest are talks that focus on fundamental algorithmic advancements in solution methodologies for inverse methods, including:
Gradient-based optimization
Stochastic inversion
Multi-physics inverse problems
Integration of inverse problems and OED
Machine-learning
Model reduction techniques for accelerating solution of inverse problems
Applications across various disciplines such as damage detection, imaging problems (medical, seismic, etc.), materials characterization, and source localization, among others.
Assad Oberai, University of Southern California
Daniel Huang, California Institute of Technology
Lu Lu, University of Pennsylvania
Dhruv Patel, Stanford University
Paris Perdikaris, University of Pennsylvania
Deep Ray, University of Southern California
Yue Yu, Lehigh University
The last few years have witnessed a rise in the popularity of deep learning-based surrogate models to solve partial differential equations (PDEs). In particular, operator networks, which approximate the solution operator of the PDE, have shown great promise in various problems in science and engineering. Several flavors of operator networks are currently available, such as DeepONets, Fourier Neural Operators, and Graph Neural Operators, each with its own set of advantages in terms of ease of training and accompanying theoretical analysis. These differentiable surrogate models can play a major role in improving the computational efficiency of many-query downstream tasks, which include PDE-constrained optimization, inverse problems, and uncertainty quantification.
This symposium brings together researchers to share recent progress and their perspectives on (i) the methodology development of deep learning surrogate models for PDE-based problems, (ii) existing theory and mathematical interpretation of these approaches, and (iii) challenges in deploying such strategies to solve large-scale problems.
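As a structural illustration (an untrained, randomly initialized sketch with arbitrary layer sizes, not a faithful reproduction of any published implementation), a DeepONet-style operator network combines a branch net acting on the input function's sensor values with a trunk net acting on the query coordinate:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Random weights for a small fully connected network."""
    return [(rng.normal(scale=0.5, size=(d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh activations (linear last layer)."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 20, 10                        # number of sensors, latent dimension
branch = init_mlp([m, 32, p], rng)   # branch net: u(sensors) -> R^p
trunk = init_mlp([1, 32, p], rng)    # trunk net:  y -> R^p

def deeponet(u_sensors, y):
    """Approximate G(u)(y) as the inner product <branch(u), trunk(y)>."""
    b = mlp(branch, u_sensors)       # shape (p,)
    t = mlp(trunk, y)                # shape (n_query, p)
    return t @ b                     # shape (n_query,)

x_sensors = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * x_sensors)       # input function sampled at the sensors
y = np.linspace(0.0, 1.0, 50)[:, None]  # query coordinates
out = deeponet(u, y)
print(out.shape)  # (50,)
```

In practice the two networks are trained jointly on pairs of input functions and solution samples; the point of the sketch is only the branch/trunk factorization that lets the surrogate be evaluated at arbitrary query points.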
Serge Prudhomme, Polytechnique Montréal
Johann Guilleminot, Duke University
Jianxun Wang, University of Notre Dame
Michael Shields, Johns Hopkins University
This mini-symposium (MS) aims to bring together outstanding student contributions in uncertainty quantification (UQ) and related areas. The MS will be composed of presentations by six finalists selected from among the contributed abstracts and short papers. Students interested in contributing should submit an abstract to this MS through the abstract portal in addition to submitting their contribution to another MS. Student abstract contributions will be reviewed by a panel of UQ-TTA members and up to 12 contributors will be invited to submit a short paper (maximum 4 pages). From the 12 contributions, 6 finalists will be selected to present in the MS. From these 6 finalists, 1–3 awards will be made.
Requirements and Process for Entry:
All contributions must have a student as the lead author and presenter.
Abstracts should also be submitted to another UQ-related MS. This means that student finalists will present their work twice; once in the traditional MS and once in the competition.
Students will be notified at the time of abstract acceptance/decline whether they have been invited to submit a paper.
Papers will be due approximately 1 month later.
Finalists will be notified approximately 1-2 months prior to the conference.
All finalists must register for the conference and present in the MS.
Alex Gorodetsky, University of Michigan
John Jakeman, Sandia National Laboratories
Mike Eldred, Sandia National Laboratories
Gianluca Geraci, Sandia National Laboratories
This minisymposium will present the latest advancements in multi-level and multi-fidelity algorithms for learning and uncertainty quantification. Talks will address one or multiple aspects of the development and/or deployment of advanced multi-fidelity tools, spanning inference and estimation, uncertainty propagation, experimental design, and data-driven learning. Topics and questions of high interest include, but are not limited to: (1) what constitutes an effective multi-fidelity model ensemble? (2) how can model ensembles be adaptively tuned or developed to improve the performance of multi-fidelity algorithms? (3) how can structure be identified and exploited to improve multi-fidelity analysis? (4) what are the relationships between multi-fidelity modeling, multi-task learning, and transfer learning, and how can they be exploited? and (5) how can multi-fidelity tools be leveraged in challenging unsteady, nonlinear, and/or chaotic regimes?
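One of the simplest building blocks behind such methods is a two-fidelity control-variate estimator; the sketch below (with made-up toy models standing in for expensive and cheap simulations) illustrates how a small number of high-fidelity runs can be combined with many cheap low-fidelity runs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins: the low-fidelity model is cheap and highly correlated
def f_hi(x):
    return np.sin(x) + 0.1 * x**2    # "expensive" model

def f_lo(x):
    return np.sin(x)                 # "cheap" approximation

n_hi, n_lo = 100, 10_000             # few expensive runs, many cheap ones
x_hi = rng.normal(size=n_hi)
x_lo = rng.normal(size=n_lo)

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)             # low-fidelity model on the same inputs
y_lo = f_lo(x_lo)                    # extra independent cheap samples

# Control-variate weight estimated from the paired samples
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Unbiased two-fidelity estimate of E[f_hi(X)], X ~ N(0, 1); true value 0.1
est = y_hi.mean() + alpha * (y_lo.mean() - y_lo_paired.mean())
print(est)
```

The variance of this estimator shrinks as the correlation between the two models grows, which is exactly why the choice of model ensemble (question (1) above) matters so much in practice.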
Gianluca Geraci, Sandia National Laboratories
Timothy Wildey, Sandia National Laboratories
Tian Yu Yen, Sandia National Laboratories
Daniele Schiavazzi, University of Notre Dame
Mohammad Motamed, University of New Mexico
Data-driven models, such as deep neural networks, offer a promising alternative to the solution of complex systems of partial differential equations usually employed in computational scientific and engineering applications. However, the accurate and stable training of state-of-the-art models still requires relatively large datasets. This challenge significantly hampers the ability to make effective use of these models for complex and high-fidelity applications. This minisymposium will focus on strategies to alleviate the computational cost of building data-driven models by leveraging datasets obtained from different fidelities or by performing transfer learning.
Following the recent advancements in multi-fidelity approaches in uncertainty quantification and heterogeneous information transfer in machine learning, this minisymposium will welcome contributions proposing and/or analyzing novel solutions for alleviating the challenge of scarce/sparse data by fusing information from multiple sources, which can include both numerical approximations and/or physical approximations of the target computational system. We will consider contributions spanning machine learning for dataset fusion, multifidelity methods and transfer learning.
Sanjay Govindjee, University of California, Berkeley
Roger Ghanem, University of Southern California
Johann Guilleminot, Duke University
Cosmin Safta, Sandia National Laboratories
Michael Shields, Johns Hopkins University
Christian Soize, Université Gustave Eiffel
Charbel Farhat, Stanford University
Probabilistic models stand at the juncture of physics and data science. While the semantics of these models can encode logical and physical constraints, their mathematical analysis is steeped in probability theory, statistics, and data analysis. Recent technological advances in sensing and computing underlie the exponential growth of scholarship at this intersection, yet challenges remain in making predictions and decisions that are sufficiently constrained by both physics and data. A major challenge in this regard is learning probabilistic models from limited data and extracting meaningful statistical information in a mathematically rigorous fashion, and doing so in a computationally efficient manner that is generalizable yet domain- and problem-specific.
We invite submissions that deal with theoretical as well as practical and applied aspects of these challenges. A partial, but non-exclusive, list of topics of interest includes:
Nonlinear manifold identification methods from sparse data
Sampling methods in high stochastic dimensions
Effective methods for constrained sampling
Novel generative models
Design of experiments for probabilistic learning
Application of learned generative models in science and engineering
Probabilistic learning on manifolds
Probabilistic models and reasoning for digital twins and AI
Physics-informed probabilistic models
Som Dhulipala, Idaho National Laboratory
Zachary Prince, Idaho National Laboratory
Peter German, Idaho National Laboratory
Dewen Yushu, Idaho National Laboratory
Yifeng Che, Idaho National Laboratory
There has been a significant increase in the use of machine learning (ML) techniques to accelerate numerical simulations and of uncertainty quantification (UQ) to assess the trustworthiness of numerical or ML model predictions. As a result, software tools to perform UQ and ML for modeling and simulation are gaining interest due to their potential to: (1) translate research methods in ML/UQ into applications for practical computational problems; (2) motivate new research on ML/UQ, especially for complex problems like coupled/multi-scale systems, systems with high-dimensional input-output spaces, and systems that take a significant amount of time to simulate; and (3) incorporate technologies like massively parallel computing, exascale computing, and GPU/TPU-based computing into research methods and practical applications. This mini-symposium focuses on recently developed software tools and advancements in existing packages that promote UQ/ML with applications to modeling and simulation. Of specific interest are software tools that are open source. All topics directly addressing or supporting software tools for UQ/ML are welcome in this mini-symposium. These include, but are not limited to, the architecture and usage of a software package, application of a software package to practical computational problems, future directions for software packages to better support UQ/ML of modeling and simulation tasks, etc.
Alireza Doostan, University of Colorado Boulder
Alexandre Cortiella, University of Colorado Boulder
Assad Oberai, University of Southern California
Jianxun Wang, University of Notre Dame
Recent advances in data acquisition systems, along with modern data science and machine learning techniques, have fostered the development of accurate data-driven approaches, such as inverse modeling for model calibration and system identification, in science and engineering fields. In particular, system identification, i.e., deducing accurate mathematical models from measured observations, is key to improved understanding of complex phenomena, dominant feature analysis, design of experiments, and system monitoring and control. Furthermore, the emergence of multi-fidelity approaches provides further roles for data-driven models as low-cost model evaluations for UQ tasks.
This mini-symposium focuses on recent developments in discovering governing equations, deriving discrepancy terms, and building reduced order models of nonlinear dynamics from simulation or experimental data. Some of the techniques include sparse regression, physics-based deep learning, operator inference, and reinforcement learning. Of particular interest are data-driven methods addressing challenges regarding measurement noise, sampling strategies, identifiability, and scalability for accurate and robust model extraction in complex nonlinear dynamical systems. Additionally, applications of the extracted models to visualization, data compression/reconstruction, real-time control, or uncertainty quantification are encouraged.
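As a minimal illustration of the sparse-regression route to discovering governing equations (a simplified SINDy-style sketch under idealized assumptions: near-noiseless data from dx/dt = -2x, a small monomial library, and a fixed threshold):

```python
import numpy as np

# Trajectory data from the (pretend-unknown) dynamics dx/dt = -2 x
t = np.linspace(0.0, 2.0, 400)
x = np.exp(-2.0 * t)
dxdt = np.gradient(x, t)             # numerical time derivative

# Library of candidate terms: columns [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequentially thresholded least squares: fit, zero out small terms, refit
xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
for _ in range(10):
    small = np.abs(xi) < 0.1         # hard threshold on coefficients
    xi[small] = 0.0
    big = ~small
    xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)

print(np.round(xi, 2))               # expected to recover roughly [0, -2, 0, 0]
```

The challenges the abstract lists (noise, sampling, identifiability) show up here directly: the derivative estimate and the threshold both degrade quickly once the data are noisy or the library is large.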
Romit Maulik, Argonne National Laboratory
Qi Tang, Los Alamos National Laboratory
Joshua Burby, Los Alamos National Laboratory
Machine learning techniques have recently shown remarkable results for tackling long-standing challenges in computational science. This minisymposium seeks to showcase recent advances in data-driven modeling for dramatically accelerating computational applications with particular emphasis on physics-informed approaches. We are interested in strategies that can leverage large, potentially real-world data sets for learning structure preserving dynamical systems, developing physics-informed closures for advection-dominated problems, and building hybrid PDE and data-driven modeling approaches that significantly accelerate computational workflows for diverse applications.
Confirmed speakers:
Shivam Barwey - Argonne National Laboratory
Stephen Becker - University of Colorado Boulder
David Bortz - University of Colorado Boulder
Andrew Christlieb - Michigan State University
Yingda Cheng - Michigan State University
Emil Constantinescu - Argonne National Laboratory
Cory Hauck - Oak Ridge National Laboratory
Huan Lei - Michigan State University
Daniel Livescu - Los Alamos National Laboratory
Gianmarco Mengaldo - National University of Singapore
Houman Owhadi - Caltech
Daniel Serino - Los Alamos National Laboratory
Xuping Xie - Los Alamos National Laboratory
Aishwarya Pawar, Iowa State University
Aditya Balu, Iowa State University
Baskar Ganapathysubramanian, Iowa State University
Ming-Chen Hsu, Iowa State University
Adarsh Krishnamurthy, Iowa State University
Machine-learning-based approaches in engineering have seen rapid progress in recent years. While modern ML/AI approaches have transformed a host of application areas that involve assimilating large data streams to make useful predictions, the time is ripe to leverage these advances for analysis, optimization, design, and control of complex engineered systems. Researchers have been leveraging novel machine learning tools, such as deep generative models and deep reinforcement learning, as computationally efficient paradigms for modeling and simulation of complex engineered systems. However, despite their apparent utility, current AI systems suffer from three key drawbacks:
Reliance on an abundance of data: Current AI systems tend to let data entirely dictate the narrative. As a result, the data requirement for training such systems is very large, which can become a major bottleneck for complex simulations and expensive experiments.
Lack of generalizability: These approaches have narrow scope, i.e., they typically only succeed on the task that they are trained on. Additionally, contextual constraints and domain knowledge known from physical systems are left unused.
Unsatisfactory parsimony and explainability: The representations produced are non-parsimonious and uninterpretable. This is especially damaging when the end goal, such as identifying functional relationships in complex systems or constrained exploration of the design space, requires generating insights into the engineered system.
Recent efforts following the advent of physics-informed neural networks and generative neural network models have had tremendous impact in numerous tasks such as prediction, visualization, and design. This is a significant departure from the typical data-hungry approach required by traditional ML training methods, because the encoded invariances allow for physically meaningful predictions using far less training data. In our view, these approaches are suitable for solving several problems in computational mechanics. We have worked on novel methods and have exciting preliminary results in which we have trained models (in both data-free and data-driven manners) for solving partial differential equations (PDEs). We have also seen several other researchers in the community working in this research area. However, many research questions remain unanswered. Some of the key research questions include:
Principled approaches for incorporating physics-based constraints into computational mechanics models
Quantitative guarantees that link model architecture, predictive performance, and generalization
Constructing new model architectures for complex geometries.
Addressing these research questions requires revolutionary advances in AI, physics-based modeling and simulation, optimization, and computational science. Furthermore, these ideas can be extended and applied to complex engineering systems such as turbulence, fluid-structure interaction, and complex material dynamics.
The Novel Methods Emphasis Area includes a broad range of minisymposia focused on novel methods in computational engineering and sciences such as, but not limited to, isogeometric analysis, meshfree and particle methods, spectral elements, DG methods, generalized/extended finite element methods, etc.
Leszek Demkowicz, The University of Texas at Austin
Jay Gopalakrishnan, Portland State University
The minisymposium focuses on the development of higher order finite element (FE) methods for the simulation of complex multiphysics and multiscale problems. We invite contributions on innovative approaches involving finite element exterior calculus, structure-preserving discretizations, and least squares and more general minimum residual methods for both steady-state and transient problems. The following is an incomplete list of subjects of interest.
Matrix and vector finite elements for compatible discretizations.
Topics in structure-preservation with finite elements.
Minimum residual methodologies for nonlinear problems.
Combining minimum residual discretizations in space and in time.
Space-time discretizations.
Mesh adaptivity, h-, p- and hp-methods.
Efficient implementations including GPUs.
Eigensolvers for finite element discretizations.
Formulations outside of Hilbert spaces.
Hybrid FE/PINN methodologies
Patrick Diehl, Louisiana State University
Pablo Seleson, Oak Ridge National Laboratory
Erkan Oterkus, University of Strathclyde
Fei Han, Dalian University of Technology
Gilles Lubineau, KAUST
Robert Lipton, Louisiana State University
Peridynamics modeling has been used effectively to predict material failure and damage in many applications, and it has been successfully compared against various experiments. However, theoretical and numerical understanding of some simulation results, e.g., crack nucleation, is still missing. The purpose of this symposium is twofold. First, current theoretical developments will be presented, one objective being to better understand the possible synergies between peridynamics and more classical approaches, as well as the ways they can be used concurrently or not. Second, advances in computational efforts in recent years will be discussed, and the observed challenges will be highlighted. By combining theoretical and applied presentations, this symposium aims to strengthen the synergy between researchers working on analytical and numerical methods and to discuss current challenges and open questions in the field of peridynamics.
Patrick O'Hara, Air Force Research Laboratory
Alejandro Aragon, Delft University of Technology
Daniel Dias-da-Costa, University of Sydney
C. Armando Duarte, University of Illinois Urbana-Champaign
Iterative coupling algorithms and Enriched Finite Element Methods (EFEMs) such as the Generalized/eXtended FEM are two distinct but related approaches often used to solve multiscale, fracture mechanics, moving-interface, and other challenging problems in mechanics. EFEMs have received increased attention and undergone substantial development during the last two decades. Recent focus has been placed on improving the methods' conditioning and on the development of Interface- and Discontinuity-Enriched FEMs as alternative procedures for analyzing weak and strong discontinuities. Questions of conditioning, robustness, and performance are common to EFEMs and iterative coupling algorithms.
As these methods mature, a common challenge concerns their implementation in available software, which is often difficult and time-consuming and, therefore, expensive. One strategy to address this issue is to non-intrusively couple commercial and research software and thus provide the end user with simulation and modeling capabilities not available in any single package.
This mini-symposium aims to bring together engineers, mathematicians, computer scientists, and national laboratory and industrial researchers to discuss and exchange ideas on new developments, applications, and progress in coupling algorithms and Enriched FEMs. While contributions on all aspects of these methods and their implementation are invited, topics of particular interest include:
verification and validation; accuracy, computational efficiency, convergence, and stability of EFEMs and coupling algorithms.
new developments for immersed boundary or fictitious domain problems, flow and fluid-structure interaction, among others.
applications to industrial problems exhibiting multiscale phenomena, localized non-linearities such as fracture or damage, and non-linear material behavior.
acceleration techniques for coupling algorithms.
coupling algorithms for multi-physics and time-dependent problems.
Mikhail Shashkov, Los Alamos National Laboratory
Interface reconstruction and mesh data transfer are important parts of many numerical methods in computational mechanics, including indirect multi-material arbitrary Lagrangian-Eulerian methods and the transfer of data between different codes. This minisymposium will feature presentations describing novel and advanced methods for high-order multi-material interface reconstruction and constrained data transfer between arbitrary meshes, and will present applications of these methods to test and real-world problems.
Pablo Seleson, Oak Ridge National Laboratory
Marta D’Elia, Meta Reality Labs
Nonlocal models, such as peridynamics and fractional equations, can model phenomena that classical models based on partial differential equations (PDEs) fail to represent. These phenomena include multiscale behavior, material discontinuities such as cracks, and anomalous behavior such as super- and sub-diffusion. For this reason, nonlocal models provide an improved predictive capability for a large class of complex engineering and scientific applications, including fracture mechanics, subsurface flow, and turbulence, to mention a few. In many of these applications, the system under consideration exhibits heterogeneity, either in its physical composition or in its response to external stimuli. These cases often result in the need to introduce physical or virtual interfaces between different parts of the domain. The case of heterogeneity in the physical composition, such as two- or multi-material systems, normally requires the treatment of nonlocal-to-nonlocal coupling across physical interfaces. The case of heterogeneity in the system response may benefit from local-to-nonlocal coupling across virtual interfaces; this occurs when nonlocal effects are concentrated in specific parts of the domain and the system can be partially described with a classical (local) PDE and partially described with a nonlocal model. These settings require the treatment of nonlocal-to-nonlocal or local-to-nonlocal interfaces in an accurate and physically consistent manner. The goal of this minisymposium is to bring together researchers working on interface problems in nonlocal modeling, including both local-to-nonlocal and nonlocal-to-nonlocal coupling, to learn about recent developments, discuss current challenges, and define new research directions.
Gianmarco Manzini, Los Alamos National Laboratory
Joseph E. Bishop, Sandia National Laboratories
Michele Botti, MOX-Politecnico di Milano
N. Sukumar, University of California, Davis
This minisymposium intends to bring together scientists who develop and implement novel discretization techniques that extend the domain of classic finite element approaches for partial differential equations. These technologies include continuous and discontinuous Galerkin methods on polygonal and polyhedral meshes (polytopal meshes, for brevity), structure-preserving mimetic discretizations, virtual elements, hybrid high-order methods, and finite element exterior calculus.
Polygonal and polyhedral meshes with convex and concave elements offer greater flexibility in mesh design and allow efficient strategies for adaptivity. In the context of computational mechanics problems involving inner interfaces and moving discontinuities, such as in the simulation of layered and fractured materials, the versatility of these methods allows the geometric complexity to be tamed, providing robustness with respect to rough heterogeneities and mesh distortions.
This minisymposium seeks contributions focusing on method design and applications to engineering science challenges using polygonal and polyhedral discretizations. While contributions to all of these methods are encouraged, we highlight the following themes:
Generalized barycentric coordinates for polytopal meshes
Discontinuous Galerkin and nonconforming finite elements on polytopal meshes
Virtual element schemes for arbitrary-order approximations
Structure-preserving algorithms (mimetic and finite element exterior calculus) for multiphysics simulations
Boundary element formulations for polytopal meshes
Polytopal mesh generation algorithms and criteria to assess the quality of a mesh
Error estimates and convergence theory for polytopal finite element methods
Use of polytopal meshes in applications such as material design, microstructural discretization, topology optimization, additive manufacturing, deformation of nonlinear continua, material fractures, computer graphics, and animations.
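To make the first theme above concrete, the short sketch below (an illustrative example, not tied to any particular contribution) evaluates mean value coordinates, one classic family of generalized barycentric coordinates, on a pentagon and checks two defining properties: partition of unity and linear reproduction.

```python
import math

def mean_value_coords(verts, x):
    """Mean value coordinates of a point x inside a polygon.

    An instance of generalized barycentric coordinates: nonnegative inside
    convex polygons, they sum to 1 and reproduce linear functions exactly.
    """
    n = len(verts)
    r = [math.hypot(vx - x[0], vy - x[1]) for vx, vy in verts]
    ang = []
    for i in range(n):
        j = (i + 1) % n
        ax, ay = verts[i][0] - x[0], verts[i][1] - x[1]
        bx, by = verts[j][0] - x[0], verts[j][1] - x[1]
        # signed angle at x spanned by edge (v_i, v_{i+1})
        ang.append(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
    w = [(math.tan(ang[i - 1] / 2) + math.tan(ang[i] / 2)) / r[i] for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

# Regular pentagon, evaluation point strictly inside.
verts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5)) for k in range(5)]
lam = mean_value_coords(verts, (0.2, 0.1))
print(sum(lam))                                   # partition of unity: 1 up to round-off
print(sum(l * v[0] for l, v in zip(lam, verts)))  # linear reproduction: ~0.2
```

The same construction extends to concave polygons (where some weights may become negative), which is one reason these coordinates are popular as polytopal basis functions.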
Emily Johnson, University of Notre Dame
Hugo Casquero, University of Michigan - Dearborn
Ming-Chen Hsu, Iowa State University
Jessica Zhang, Carnegie Mellon University
Matt Sederberg, Coreform
Isogeometric analysis (IGA) was originally introduced to achieve seamless integration of computer-aided design (CAD), computer-aided engineering (CAE), and computer-aided manufacturing (CAM). Many IGA technologies have seen significant advancements since their introduction, including the development of splines that are simultaneously suitable for CAD, CAE, and CAM and the use of spline-based immersed approaches. IGA and its extensive applications continue to evolve as these methods transition from academia into industry. This minisymposium will feature a broad representation of industrial results and IGA research projects, including presentations from academics consulting on industry projects, software vendors, academics working on large-scale parallel implementations of IGA, and end users.
Christopher Eldred, Sandia National Laboratories
Anthony Gruber, Sandia National Laboratories
Artur Palha, Delft University of Technology
Continuum mechanics has an elegant description in terms of geometric mechanics formulations (variational, Hamiltonian, metriplectic, GENERIC, port-Hamiltonian, etc.). These geometric descriptions enable the accurate representation of both reversible (thermodynamic entropy-conserving) and irreversible (thermodynamic entropy-generating) dynamics, as well as the correct interconnection/coupling between systems. By emulating the fundamental features of these geometric formulations in a numerical model (i.e., a structure-preserving discretization), many desirable properties can be obtained, e.g., freedom from spurious/unphysical numerical modes, consistent energetics, controlled dissipation of enstrophy or thermodynamic entropy, and stable coupling between systems. Examples include spatial, temporal, and spatiotemporal discretizations such as compatible Galerkin methods, symplectic integrators, and discrete exterior calculus. This minisymposium brings together researchers studying and implementing these ideas at both the continuous and discrete levels across a wide range of continuum mechanics models, including geophysical fluid dynamics, plasma, compressible flow, and solid mechanics.
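As a minimal illustration of what structure preservation buys (a self-contained sketch, not drawn from any particular contribution): for the harmonic oscillator, the symplectic Euler scheme keeps the energy error bounded for all time, while the non-symplectic explicit Euler scheme drifts without bound.

```python
def simulate(steps, dt=0.01, symplectic=True):
    """Integrate q'' = -q (unit harmonic oscillator); return max energy drift."""
    q, p = 1.0, 0.0
    e0 = 0.5 * (p * p + q * q)
    drift = 0.0
    for _ in range(steps):
        if symplectic:
            p -= dt * q          # kick with the current q ...
            q += dt * p          # ... then drift with the updated p
        else:
            q_new = q + dt * p   # explicit Euler uses old values for both updates
            p -= dt * q
            q = q_new
        drift = max(drift, abs(0.5 * (p * p + q * q) - e0))
    return drift

print(simulate(10000, symplectic=True))   # stays bounded (order dt)
print(simulate(10000, symplectic=False))  # drifts steadily with time
```

The symplectic map exactly conserves a "shadow" energy close to the true one, which is why its energy error oscillates instead of accumulating; this is the discrete analogue of the consistent energetics mentioned above.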
Anjali Sandip, University of North Dakota
Irina Tezaur, Sandia National Laboratories
Recent advances in computational hardware have provided opportunities to relax resolution constraints and increase computing speed in the development of large-scale numerical models. However, utilizing this hardware to its full capability remains a challenge. Developing algorithms that optimally leverage hardware capabilities with performance-portable implementations, enabling the same code to execute correctly and efficiently across a wide range of CPU and GPU architectures, has therefore become an important aspect of numerical modeling.
This mini-symposium will feature presentations on algorithms that optimally leverage the hardware capabilities with performance-portable implementations for unstructured mesh applications. Relevant topics include algorithms applied to high spatial and/or temporal resolution large-scale numerical models.
Pavel Bochev, Sandia National Laboratories
Dmitri Kuzmin, University of Dortmund
Denis Ridzal, Sandia National Laboratories
The main goal of this minisymposium is to bring together researchers working on the development, analysis, and verification of new and non-standard methods that preserve fundamental properties of solutions to linear advection equations or nonlinear (systems of) conservation laws. Examples of such properties include conservation of mass, preservation of solenoidal vector fields, and satisfaction of maximum principles or entropy conditions.
The minisymposium will focus on a spectrum of approaches ranging from non-standard optimization-based property-preserving methods to limiter-based approaches, residual redistribution methods, entropy fixes, remapping, mesh optimization, and related topics. Exploration of the close relationships between these approaches, which can further facilitate their analysis and understanding, will be of particular interest.
For example, optimization-based methods enforce physical properties by treating them as inequality and/or equality constraints in suitably defined constrained global optimization problems. In contrast, limiter-based approaches typically use local “worst-case” scenario considerations to construct convex combinations of high- and low-order solution approximations that preserve the desired properties. However, despite their seemingly dissimilar foundations, optimization-based and limiter-based methods are in fact related: the weights in the convex combinations of low and high order solutions can be interpreted as exact solutions of simplified global optimization problems.
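This relationship can be seen in a one-variable toy example (illustrative only; the function name and data below are made up): the largest blending weight that keeps a convex combination of low- and high-order values inside prescribed bounds is exactly the solution of a tiny constrained optimization problem.

```python
def limited_update(u_high, u_low, lo, hi):
    """Return theta and the blended value u = u_low + theta * (u_high - u_low).

    theta is the largest weight in [0, 1] keeping u inside [lo, hi], i.e. the
    exact solution of: maximize theta subject to lo <= u <= hi. The low-order
    value is assumed to satisfy the bounds already, as in FCT-type limiting.
    """
    d = u_high - u_low
    if d > 0:
        theta = min(1.0, (hi - u_low) / d)
    elif d < 0:
        theta = min(1.0, (lo - u_low) / d)  # d < 0 flips the inequality
    else:
        theta = 1.0
    return theta, u_low + theta * d

# The high-order update overshoots the local bound [0, 1]; the limiter blends back.
theta, u = limited_update(u_high=1.3, u_low=0.9, lo=0.0, hi=1.0)
print(theta, u)  # theta ~ 0.25, u clipped to ~1.0
```

In a full scheme, the bounds would come from local extrema of the low-order solution, and solving this one-variable problem cell by cell is what a classical limiter does implicitly.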
By bringing together experts in numerical methods for hyperbolic problems, optimization, and numerical analysis, the minisymposium aims to stimulate interactions that will foster further investigation of the mathematical relationships between different classes of property-preserving methods.
Guglielmo Scovazzi, Duke University
Nabil Atallah, Lawrence Livermore National Laboratory
Santiago Badia, Monash University
Hugo Casquero, University of Michigan - Dearborn
Fehmi Cirak, University of Cambridge
Alexander Düster, Hamburg University of Technology
Baskar Ganapathysubramanian, Iowa State University
Ming-Chen Hsu, Iowa State University
WaiChing Sun, Columbia University
Vladimir Tomov, Lawrence Livermore National Laboratory
Jinhui Yan, University of Illinois Urbana-Champaign
In scientific and industrial applications, a large part of the overall effort invested in a finite element analysis is often devoted to geometric modeling and the transition from computer-aided design (CAD) to analysis-suitable models. To avoid the need for body-fitted mesh generation, fictitious domain methods were introduced as early as the 1960s. Since then, many variants of these appealing approaches have been suggested, such as embedded domain and immersed boundary methods or special implementations of the extended finite element method. Whereas in earlier years the focus was placed on mathematical aspects, much progress has recently been achieved in the engineering sciences. An important reason for this success is an essential paradigm of fictitious domain and immersed/unfitted boundary methods: they support design-through-analysis by closely coupling geometric modeling and numerical simulation.
Many variants of immersed boundary methods have been developed, such as CutFEM, the Finite Cell Method, Unfitted Finite Elements, the Shifted Boundary Method, Trimmed Isogeometric Analysis, and immersogeometric analysis, to name a few.
This mini-symposium will focus on immersed/unfitted/fictitious domain methods dedicated, but not limited, to problems in solid and fluid mechanics, including possible interactions with other physical fields (e.g., heat transfer, multi-phase flow). The topics will range from modeling aspects, including the coupling of analysis and CAD, mathematical analysis, stabilization, preconditioning, integration of cut cells, adaptivity, and implementation issues, to the efficient solution of complex engineering problems. It will address low- and higher-order unfitted discretization approaches, CutFEM, the Finite Cell Method, and the Shifted Boundary Method, as well as combinations with Isogeometric Analysis, including trimming of spline patches and recent approaches to shape and topology optimization.
Zhen Chen, University of Missouri
Joseph E. Bishop, Sandia National Laboratories
Jiun-Shyan Chen, University of California, San Diego
Sheng-Wei Chi, University of Illinois Chicago
John Foster, The University of Texas at Austin
Michael Hillman, K & C Inc.
Marc Schweitzer, University of Bonn
C.T. Wu, Ansys Inc.
Meshfree and particle methods have emerged as a class of numerical methods that play an increasingly significant role in the study of challenging engineering and scientific problems. New and exciting developments in meshfree and particle methods often go beyond the classical theories, incorporate more profound physical mechanisms, and become the numerical tools of choice for computational challenges that were once difficult or impossible to address with conventional methods.
The goal of this minisymposium is to bring together experts working on these methods, share research results, and identify emerging needs to accelerate progress in the field of meshfree and particle methods. Topics of interest include, but are not limited to, the following:
Recent advances in meshfree and particle methods, coupling of finite element and meshfree methods, material point method, and peridynamics
Methods for coupling multiple physics and/or multiple scales
Methods of fictitious domains and non-intrusive coupling
Methods enabling a rapid design-to-analysis workflow
Strong form collocation methods
Nodal integration and domain integration methods for the Galerkin formulation
Characterization and stabilization of numerical instabilities
Recent advances in modeling strong and weak discontinuities
Recent advances in modeling extreme loading events
Recent advances in modeling manufacturing problems
Recent advances in modeling bio- and nano- mechanics problems
Integration of physics-based and data-enabled approaches
Parallel computation, solvers and large-scale simulations
New applications such as optimizing additive manufacturing and mitigating disasters
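As a concrete, minimal example of the approximation machinery behind several of these topics (an illustrative sketch, not any specific method from the session), the following builds 1D reproducing-kernel shape functions with a linear basis and verifies partition of unity and linear reproduction:

```python
def rk_shape_functions(nodes, x, a=0.35):
    """1D reproducing-kernel (meshfree) shape functions with a linear basis.

    psi_I(x) = p(0)^T M(x)^{-1} p(x - x_I) w((x - x_I)/a), where M is the 2x2
    moment matrix; the correction enforces exact reproduction of 1 and x.
    """
    def w(s):  # cubic B-spline kernel with compact support |s| <= 1
        s = abs(s)
        if s < 0.5:
            return 2 / 3 - 4 * s * s + 4 * s ** 3
        if s < 1.0:
            return 4 / 3 * (1 - s) ** 3
        return 0.0

    # Moment matrix M = sum_I p(d_I) p(d_I)^T w_I, with p(d) = [1, d], d_I = x - x_I
    m00 = m01 = m11 = 0.0
    wts, ds = [], []
    for xi in nodes:
        d = x - xi
        wi = w(d / a)
        wts.append(wi)
        ds.append(d)
        m00 += wi
        m01 += wi * d
        m11 += wi * d * d
    det = m00 * m11 - m01 * m01
    # psi_I = [1, 0] M^{-1} [1, d_I]^T w_I  (first row of M^{-1} applied to p)
    return [(m11 - m01 * d) / det * wi for wi, d in zip(wts, ds)]

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
psi = rk_shape_functions(nodes, 0.4)
print(sum(psi))                                   # partition of unity: 1 up to round-off
print(sum(p * xi for p, xi in zip(psi, nodes)))   # linear reproduction: ~0.4
```

The node spacing and support size `a` here are arbitrary choices for illustration; the same correction idea underlies moving least squares and reproducing kernel particle approximations.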
Guillermo Hauke, University of Zaragoza
Isaac Harari, Tel Aviv University
Arif Masud, University of Illinois Urbana-Champaign
This symposium aims to bring together researchers considering various aspects of Stabilized and Multiscale methods in Computational Mechanics. These methods have broad applications in fluid dynamics, solid and structural mechanics, material modeling, problems with weak and strong discontinuities, and phenomena involving coupled interacting fields such as fluid-structure interaction, thermo-mechanics, chemo-mechanics, and electro-magnetics. The symposium will provide a platform for engineers, mathematicians, and computer scientists to discuss recent developments in the field of stabilized and variational multiscale methods and their novel applications in engineering and science. We welcome contributions dealing with all aspects of stabilized, multiscale, and multiphysics methods, including but not limited to,
Mathematical theory of Stabilized and Variational Multiscale methods
Emerging multiscale approaches and applications
New formulations and solution techniques
Multiscale methods in CFD and Turbulence modeling
Application to Error Estimation and Uncertainty Quantification
Applications of VMS in V&V and Reduced Order Modeling
Applications of VMS in emerging Data Science problems
Applications of VMS in Machine Learning Approaches
Reza Abedi, University of Tennessee
Robert Haber, University of Illinois Urbana-Champaign
Tamas Horvath, Oakland University
Alireza Amirkhizi, University of Massachusetts Lowell
This minisymposium provides a forum for presentation and discussion of recent advances in numerical methods for wave problems and related state-of-the-art applications in science and engineering. Numerical methods of interest include, but are not limited to, ADER-DG, spacetime DG (Tent Pitching), Trefftz DG, implicit shock tracking, space-time parallel multigrid, adaptive multiresolution (MR), IMEX, pseudo-time, local time-stepping, and fast boundary elements, as well as frequency-domain counterparts. Presentations describing approximate representations of boundary conditions, homogenization of heterogeneous and dispersive media, stochastic modeling, and novel solution schemes and software architectures for exascale HPC systems are also welcome. Applications of interest include water waves and coastal modeling, dynamics of solids, dynamic fracture, earthquake simulation, forward and inverse scattering, waves in random or dispersive media, traumatic brain injury, photonics and metamaterials, electromagnetics, acoustics, hyperbolic heat conduction, compressible gas dynamics, medical and seismic imaging, and multiphysics wave problems.
Alexander Idesman, Texas Tech University
The objective of this symposium is to discuss new advances in numerical methods for linear and non-linear time-dependent and time-independent partial differential equations used in mechanics. Topics of interest include, but are not limited to: new space and time discretization methods; high-order accurate methods with conforming and unfitted meshes, including finite, spectral, and isogeometric elements, finite difference methods, fictitious domain methods, meshless methods, and others; special treatment of boundary and interface conditions on irregular geometry; new time-integration methods; adaptive methods and space and time error estimators; comparison of the accuracy of new and existing numerical methods; application of new numerical methods to engineering problems; and others.
The Advanced Manufacturing, Materials, and Multiscale Methods Emphasis Area includes minisymposia that focus on computational methods to facilitate manufacturing, design of components that leverage advanced manufacturing capabilities, optimization of design and manufacturing processes, computational efforts to leverage additive manufacturing, and methods to explore and bridge length scales, from nano to engineering length scales.
Ahmad Najafi, Drexel University
Kai James, University of Illinois Urbana-Champaign
Motivated by key advances in manufacturing techniques, the tailoring of materials with desired macroscopic properties has been the focus of active research in engineering and materials science over the past decade. For materials architected at length scales that can be controlled by the manufacturing process, the goal is to determine the optimal spatial layout of one or more constituent materials to achieve a desired macroscopic constitutive response. Topology and shape optimization methods provide a systematic means to achieve this goal. The objective of this symposium is to bring together researchers working on state-of-the-art topology and shape optimization techniques with direct application in materials design to exchange ideas, present novel developments, and discuss recent advances. Topics of interest concern shape and topology optimization techniques, and they include, but are not limited to:
Multiscale, multifunctional design of materials and structures
Design of lattice materials
Design of nonlinear materials
Reduced-order multiscale modeling for design
Simultaneous material and structure optimization
Optimization under uncertainty
Bioinspired design of composites
Design of metamaterials
Smart material design
Software
Emilio Carlos Nelli Silva, University of Sao Paulo
Glaucio Paulino, Princeton University
Shelly Zhang, University of Illinois Urbana-Champaign
Shinji Nishiwaki, Kyoto University
This mini-symposium aims to bring together researchers working on various aspects of topology optimization applied to fluids, solids and structures. In particular, we are interested in recent advances in topology optimization. Suggested topics include, but are not limited to:
Novel and efficient topology optimization algorithms
New methods to handle manufacturing, stress and other constraints
Exact solutions to topology optimization problems
New methods to solve multi-objective topology optimization problems
Recent advances in reliability-based topology optimization (RBTO)
Efficient solution of industrial large scale topology optimization problems
Inclusion of microstructure in topology predictions
Recent advances in topology optimization applied to multi-physics problems
Exploiting high-performance computing in topology optimization
New methods of adaptive mesh refinement in topology optimization
Multiscale topology optimization
Topology optimization applied to fluid problems
Brendan Keith, Brown University
Boyan Lazarov, Lawrence Livermore National Laboratory
Harbir Antil, George Mason University
Drew Kouri, Sandia National Laboratories
Denis Ridzal, Sandia National Laboratories
Optimal design is a perennial challenge in all engineering endeavors. As engineering systems become larger and more complex, the challenge grows, and confident decision-making requires more sophisticated numerical algorithms and computational techniques. To this end, this minisymposium aims to showcase novel optimization algorithms that efficiently handle extreme numbers of design variables, various complex performance and fabrication constraints, and/or environmental and manufacturing uncertainties.
Specific topics of interest include:
Topology optimization
Shape optimization
PDE-constrained optimization under uncertainty
High-performance computing and algorithms for modern architectures
Risk-averse optimization
Albert To, University of Pittsburgh
Yuichiro Koizumi, Osaka University
Stefan Kollmannsberger, Technical University of Munich
Andreas Lundback, Lulea University of Technology
Gregory Wagner, Northwestern University
Ashley Spear, The University of Utah
Dan Moser, Sandia National Laboratories
Kyle Johnson, Sandia National Laboratories
Mike Stender, Sandia National Laboratories
Theron Rodgers, Sandia National Laboratories
Various additive manufacturing (AM) techniques including 4D printing have been developed to manufacture complex-shaped components with well-controlled precision. Sophisticated AM techniques often require systematic modeling and simulation efforts during the design stage and for the purpose of part qualification/certification. The objective of this minisymposium is to provide a platform to discuss recently developed modeling and simulation techniques for AM, including experimental calibration and validation efforts for the process. The topics include (but are not limited to):
Simulation of the manufacturing process to predict heat transfer, residual stress/distortion, surface topology, composition, and microstructure including defects at multiscale length and time scales
Data-driven approaches for simulation acceleration
Combined simulation and in-situ monitoring for rapid build qualification
Effects of microstructure and defects on mechanical properties
Feedback control for minimizing defects and residual stress in as-built structures
AM-oriented topology optimization
Modeling and simulation of functionally graded materials, tissue engineering scaffolds, bioinspired composites, bi-material joints, etc.
Computational modeling and simulation for any AM process (e.g., laser powder bed fusion, electron beam melting, fused deposition modeling, stereolithography, binder jetting) and material (e.g., metals, plastics, ceramics, and their composites, as well as biological materials) are welcome.
606 - Industrial Artificial Intelligence and Smart Manufacturing [Merged with 409], Joseph Cohen, Xun Huan
John Mitchell, Sandia National Laboratories
Kyle Johnson, Sandia National Laboratories
Thomas Ivanoff, Sandia National Laboratories
Kevin Long, Sandia National Laboratories
John Emery, Sandia National Laboratories
Advances in material and component manufacture are accelerating the need for microstructure-aware simulation capabilities. The disparity of length scales between material microstructures and engineering components continues to be a challenge, especially for additively manufactured materials, as does microstructure heterogeneity at the component scale. Relatedly, problems that exhibit a lack of length-scale separation are quite common and go beyond microstructures; one example is 3D-printed lattice structures, where representative cell dimensions may be on the order of the structural component itself. In finite element simulations at the component scale, the question of what properties to use, where to use them, and over what length scale persists. This minisymposium is focused on the development of novel methods to tackle these challenges. While the focus is on additively manufactured metals, polymer systems are welcome, since many strategies may overlap. Continuum-scale mechanical, electrical, and thermal properties are of interest.
We seek talks that describe methods of computational data analytics and/or workflows that address these challenges. Methods using FFT, RVE, and spatial statistics to characterize length scales and properties for use in finite element simulations are of interest, as are methods to translate inhomogeneous texture information from the laboratory scale to the component scale. We consider methods for identifying relevant length scales, homogenization, machine learning of constitutive models, and analytics applied to laboratory data (such as CT or EBSD) or simulated data, all with a focus on facilitating component-scale simulations that reflect important microstructural aspects.
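As a small, generic illustration of the FFT-based spatial statistics mentioned above (a sketch on synthetic data, not a laboratory workflow): the periodic two-point autocorrelation of a binary microstructure follows from one forward and one inverse transform.

```python
import numpy as np

def two_point_correlation(phase):
    """Periodic two-point autocorrelation S2(r) of a binary phase indicator.

    S2(0) equals the phase volume fraction; for large |r|, S2 decays toward
    (volume fraction)^2, and the decay length estimates a correlation length.
    """
    f = np.fft.fftn(phase)
    return np.fft.ifftn(f * np.conj(f)).real / phase.size

rng = np.random.default_rng(0)
micro = (rng.random((64, 64)) < 0.3).astype(float)  # synthetic ~30% phase fraction
s2 = two_point_correlation(micro)
vf = micro.mean()
print(s2[0, 0], vf)  # S2(0) recovers the volume fraction up to round-off
```

This O(N log N) computation is the basic building block behind correlation-length estimation and statistical comparison of laboratory images (e.g., CT or EBSD maps) with simulated microstructures.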
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
Jonas Actor, Sandia National Laboratories
Elise Walker, Sandia National Laboratories
With the synthesis of new high-throughput methods, materials R&D is poised for the discovery, characterization, and design of robust materials and manufacturing processes through the development and implementation of multimodal, physics-informed machine learning algorithms. The fusion of human expert materials knowledge with multimodal, physically constrained machine learning algorithms can aid in the detection of "fingerprints" critical to materials behavior, prognose component performance, and adapt manufacturing strategies.
This minisymposium convenes world-class researchers in advanced manufacturing, materials characterization, data science, modeling/simulation, and hardware engineering to showcase work on detecting the critical features that govern material behavior. This minisymposium will discuss:
Hybrid, physics-informed machine learning methods to understand process-structure mappings
Surrogate models using multimodal data streams combining experiments and simulations
Machine learning guided process optimization
Hongkyu Yoon, Sandia National Laboratories
Pania Newell, The University of Utah
Azadeh Sheidaei, Iowa State University
Mohammad Saber Hashemi, Iowa State University
Artificial intelligence/machine learning (AI/ML) methods have been used to accelerate the prediction of material properties as well as the discovery of new materials. Manufacturing and/or natural processes used to make materials with different compositions result in microstructural arrangements, textural orientations, and defects that all affect the properties and behavior of materials. Traditional methods often involve a large design-and-testing matrix to explore materials with desired properties through intensive experiments and computational simulations. AI/ML methods can potentially transform this traditional practice, making the prediction of material properties and the discovery of new materials faster and more accurate.
This mini symposium invites scientific and engineering contributions to the field of ML and materials sciences, including but not limited to:
Recent advances in ML algorithms for predicting material properties and discovery of new materials
ML in characterization of microstructure
ML in designing robust materials with tailored properties
ML and homogenization
Integration of numerical models and ML to predict material properties.
We specifically invite participation from undergraduate and graduate students, postdocs, early-career researchers, and underrepresented minorities.
Nilesh Mankame, General Motors Global Research & Development
Pablo Zavattieri, Purdue University
David Restrepo, The University of Texas at San Antonio
Tian "Tim" Chen, University of Houston
Architecting materials and structures allows designers to obtain extraordinary behaviors from ordinary base materials. These materials and structures have found use in a wide variety of applications, ranging from optical and acoustic to structural. Naturally, there is strong interest from academia and industry in exploiting their capabilities. This mini-symposium seeks to bring together researchers working on different aspects of architected materials and structures, from conceptual design to application and methods of fabrication.
Topics of interest include, but are not limited to:
Architected materials and structures for structural, acoustic, thermal, mechanical, biomechanical, electromagnetic, and other applications
Hierarchical architected materials and structures
Bio-inspired architected materials and structures
Knitted or woven architected materials and structures
Adaptive, active, reconfigurable architected materials and structures
Methods for design of architected materials and structures
Novel fabrication methods for architected materials and structures
Amartya Banerjee, University of California, Los Angeles
Ananya Balakrishna, University of California, Santa Barbara
Vikram Gavini, University of Michigan
This symposium aims to bring together researchers interested in developing computational and data-science-based techniques directed toward understanding the electronic, atomistic, and mesoscopic underpinnings of materials relevant to energy and quantum information science applications.
Areas of interest include, but are not limited to, recent progress in:
Numerical and machine learning based methods for first principles, atomistic, microscale and mesoscale modeling of energy and quantum materials.
Formulation, analysis and implementation of coarse-graining techniques, variational methods, data-driven approaches for multiscale and multiphysics problems.
Applications of the above techniques to effectively predict, characterize, and guide the development of energy and quantum materials.
Ryan Sills, Rutgers University
Coleman Alleman, Sandia National Laboratories
Aaron Kohnert, Los Alamos National Laboratory
Laurent Capolungo, Los Alamos National Laboratory
Damage mechanics models for material systems ranging from ductile metals to fibrous composites are widely used in industry and other applications. These models are largely phenomenological in nature and often highly sensitive to discretization length and timescales, making their calibration difficult and limiting predictive capabilities. As multiscale modeling and experimental techniques mature, great opportunities are opening up to enrich damage mechanics models with microscale knowledge. The goal of this minisymposium is to bring together researchers studying damage mechanics across length and time scales in order to work towards fusing microscale knowledge with macroscale damage models.
Specific topics of interest include: damage in all structural materials including metals, ceramics, and composites; ductile and brittle fracture; voiding and microcracking; microscale modeling of damage (e.g., molecular dynamics, defect dynamics, crystal plasticity); continuum modeling of damage (e.g., porous plasticity, continuum damage mechanics); fracture modeling (e.g., cohesive surface models, phase field modeling); concurrent and sequential multiscaling techniques for damage.
Eligiusz Postek, Polish Academy of Sciences
Tomasz Sadowski, Lublin University of Technology
Somnath Ghosh, Johns Hopkins University
Modern composites are used in many strategic industrial sectors, such as aerospace, nuclear power plants, and space exploration. Experimental observations show that microstructure is the most important factor determining a composite's resistance to mechanical or thermal loading. Advanced composite materials are characterized by complex internal multiphase structures, including polycrystalline, layered, functionally graded (FGM), and other arrangements. They can be brittle, ductile, or hybrid, i.e., both brittle and ductile. These materials are used in structures of high importance, for example cutting tools, drilling devices, jet engines, and military applications, among many others.
This minisymposium will focus on the effects of the microstructure of composite materials on their strength. Of particular interest are the following effects, which occur under impact loading:
Phase transformation during impact load,
Properties of the interfaces between phases,
High strain rate in the metallic phase,
Coupled problem of heat generation in the metallic phase,
Fracture of the brittle phases,
Damage development in metallic and brittle phases,
Atomistic simulations of the interfaces between the phases under high pressure induced by the impact.
Examples of such advanced materials are multiphase polycrystals (e.g., WC/Co, SiC/Al, Al2O3/Ti(C,N)/ZrO2), CMCs, MMCs, and others. The combination of phases with different properties yields a complex, usually random, microstructure.
Talks focusing on experiments that can be used to verify numerical simulations and highlight this context are strongly encouraged. Talks on Artificial Intelligence/Machine learning approaches that can help improve simulation quality and incorporate uncertainty due to measurements are also highly appropriate for this session.
The minisymposium aims to provide a forum where researchers can present and discuss numerical methods for complex composites, their validation and verification, as well as the practical applications of complex composite materials.
Yozo Mikata, Fluor
Glaucio Paulino, Princeton University
The symposium will address some of the emerging themes in computational applied mechanics. Thanks to enormous recent advances in computer hardware, software, and algorithms, many researchers can now obtain numerical solutions to problems far more complex than before. Key developments in this ongoing process include multi-scale, multi-physics, and parallel computations. Contributions will include atomistic/continuum computations, peridynamics, the fast multipole method (FMM), acoustic and optical metamaterials, fluid-structure interactions, multi-phase flow, the lattice Boltzmann method, magneto-electro-mechanical systems, computations in biological systems such as protein-folding modeling and cell mechanics, high-performance computing using MPI or OpenMP, etc. Cross-disciplinary contributions are particularly welcome.
The Special Topics Emphasis Area solicits minisymposia targeting the following topics: High performance computing and exascale; Computational mechanics software; Climate modeling and simulation; Simulation and algorithms for advanced energy topics such as fusion, hydrogen, and renewables; and Computational geoscience and geomechanics. Additional computational mechanics topics that do not readily fit other Emphasis Areas are welcome for consideration.
Teeratorn Kadeethum, Sandia National Laboratories
Daniel O'Malley, Los Alamos National Laboratory
Youngsoo Choi, Lawrence Livermore National Laboratory
Hongkyu Yoon, Sandia National Laboratories
Maruti Mudunuru, Pacific Northwest National Laboratory
Kalyana Nakshatrala, University of Houston
Thushara Gunda, Sandia National Laboratories
Leila Hernandez Rodriguez, Lawrence Berkeley National Laboratory
Michelle Newcomer, Lawrence Berkeley National Laboratory
Sam Foreman, Argonne National Laboratory
Bulbul Ahmmed, Los Alamos National Laboratory
Satish Karra, Pacific Northwest National Laboratory
Mahantesh Halappanavar, Pacific Northwest National Laboratory
Earth science systems are complex, comprising many interacting parts. Examples include watershed hydrology, geothermal energy, carbon capture and storage, environmental remediation, hydrogen storage, and remote sensing. Conventional computational techniques do not offer efficient, or sometimes even tractable, solutions to many practical problems arising in these areas, whether because of prohibitive computational cost, inability to capture the underlying physics, or incapacity to incorporate available heterogeneous data. Machine learning and quantum computing have recently entered the earth sciences domain: these techniques have been adopted and proposed to tackle long-standing challenges or to enhance the classical methods used in the field (e.g., data-driven-assisted frameworks or hybridized approaches). This mini-symposium invites presentations on advances in machine learning and/or quantum computing in the earth sciences. Topics include, but are not limited to,
machine learning models for model-reduction, optimization, inverse problems, uncertainty quantification, and efficient dimensionality reduction of nonlinear operators aiming specifically for earth sciences applications and
quantum computing applications in geoscience research; for instance, seismic inversion with quantum annealing, quantum-computational hydrologic inverse analysis, or quantum optimization.
Jesus Bonilla, Los Alamos National Laboratory
Qi Tang, Los Alamos National Laboratory
John Shadid, Sandia National Laboratories
Fusion is a virtually inexhaustible source of energy for electricity production. A key step towards viable fusion energy is to sustain and control burning plasma inside a reactor chamber. The large cost of experiments, and the possibility of doing nonrecoverable damage to the reactor during them, make numerical modeling a fundamental part of fusion energy development. For sufficiently dense and collisional systems, fluid plasma models can provide effective and computationally efficient descriptions of plasmas in complex, multidimensional domains at the required resolution.
However, as model complexity increases, efficient numerical approximation becomes ever more important. Novel preconditioning strategies, AMR, multiscale methods, structure-preserving discretizations, stabilization methods, and time-integration methods for fluid models of plasma, as well as their combinations, are therefore essential to successful plasma modeling.
In this minisymposium, we invite speakers to discuss the latest developments in the aforementioned topics. Currently, 12 people have already shown interest in participating:
Michael Crockatt (SNL), Brian O'Shea (MSU), Cory Hauck (ORNL), Jesus Bonilla (LANL), Zak Jorti (LANL), Jingmei Qiu (University of Delaware), Wei Gou (Texas Tech), James Rossmanith (Iowa State), Jingwei Hu (University of Washington), Golo Wimmer (LANL), Eric Cyr (SNL), and Thomas M. Smith (SNL).
Kara Peterson, Sandia National Laboratories
Devin O'Connor, Sandia National Laboratories
Svetoslav Nikolov, Sandia National Laboratories
Recent warming of the cryosphere due to climate change is causing significant impacts. Permafrost thaw has resulted in infrastructure damage and coastal erosion and may eventually lead to large greenhouse gas releases. Melting and calving of ice sheets in Antarctica and Greenland have contributed to global sea level rise, creating risks to coastal infrastructure. Arctic sea ice loss has led to increased maritime activity in the region and may be influencing ocean circulation and mid-latitude weather patterns. Accurate modeling of these cryosphere systems is key to predicting future changes and informing public policy.
The focus of this minisymposium is on new computational methodologies for simulating cryosphere systems (land ice, sea ice, permafrost, etc.) and their interaction. The goal is to bring together researchers working on a broad range of cryosphere modeling topics to discuss recent advances and identify synergies.
Topics of interest include, but are not restricted to, the following:
Novel numerical discretizations for ice and permafrost mechanics
New constitutive models
Mechanics-based formulations of ice fracture/calving
Multiscale methods for coupling models with different spatial/temporal scales
Efficient solvers and methods for improving computational performance
Advanced analysis techniques including data assimilation and uncertainty quantification
Data-driven approaches to modeling
J. Adam Stephens, Sandia National Laboratories
Gianluca Geraci, Sandia National Laboratories
Brian Adams, Sandia National Laboratories
Driven by Sandia National Laboratories' applications, the Dakota project (http://dakota.sandia.gov) invests in both state-of-the-art research and robust, usable software for optimization and uncertainty quantification (UQ). Written in C++, the Dakota toolkit provides a flexible, extensible interface between simulation codes and a variety of iterative systems-analysis methods. Dakota enables users to run a variety of algorithms with minimal setup overhead. Its methods include optimization, uncertainty quantification, parameter estimation, and sensitivity analysis, which may be used individually or as components within surrogate- or sampling-based strategies and other advanced techniques such as multifidelity UQ. The software is publicly available under an open-source license and is used broadly by academic, government, and corporate institutions. In this minisymposium, we will accept contributions describing Dakota algorithm and usability developments. We also solicit contributions focused on advanced applications of Dakota capabilities to science and engineering problems, whether academic or industrial.
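To illustrate the minimal setup overhead described above, the following is a sketch of a Dakota input file for a basic Latin hypercube sampling study; the two-variable problem, the bounds, and the analysis driver script name `my_sim.sh` are hypothetical, chosen only for illustration:

```
# Hypothetical Dakota input: LHS sampling over two uncertain variables.
environment
  tabular_data                      # write samples and responses to a tabular file

method
  sampling
    sample_type lhs                 # Latin hypercube sampling
    samples = 100
    seed = 20230101                 # fixed seed for reproducibility

variables
  uniform_uncertain = 2
    lower_bounds   0.0  0.0
    upper_bounds   1.0  1.0
    descriptors    'x1' 'x2'

interface
  fork
    analysis_drivers = 'my_sim.sh'  # user-supplied simulation wrapper script

responses
  response_functions = 1
    no_gradients
    no_hessians
```

Dakota forks the driver script once per sample, passing parameter and results file names; swapping the `sampling` method for an optimization or calibration method changes the study type without altering the interface to the simulation code.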
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Mauro Perego, Sandia National Laboratories
Andrew Salinger, Sandia National Laboratories
Irina Tezaur, Sandia National Laboratories
Climate models are important for predicting changes in the earth system that can have consequential effects on human society. The complexity and scale of the climate system translate into major computational challenges.
The dynamics of the atmosphere, ocean, land ice, and sea ice are governed by fluid flow equations and are characterized by different spatial and temporal scales, as well as specific physical properties that need to be preserved by numerical models. This demands non-trivial spatial and temporal discretizations that must also run efficiently on emerging heterogeneous architectures.
Climate models involve complex physical processes that are not fully resolved (e.g., cloud formation) and are instead represented by parameterizations. Several efforts are under way to replace these parameterizations with data-driven models trained on observational or simulation data. More generally, machine learning is being used to enhance current models, for example with deep learning models that improve numerical stabilization or turbulence modeling, or with inexpensive surrogates that replace parts of the climate model and are particularly useful for sensitivity studies and calibration.
All climate components are affected by uncertainty arising from errors in observations and models, and from socioeconomic scenarios. Quantifying this uncertainty is both critical and often prohibitively expensive, so the development of methods for accelerating uncertainty quantification analysis, whether traditional or rooted in machine learning, is of the essence.
In this mini-symposium we focus on a wide variety of computational approaches for addressing problems arising in climate modeling, including advanced discretizations, high performance computing, data-driven models and uncertainty quantification.
NSF MoMS Program Directors will present a program overview and funding opportunities for the mechanics community at large (~30 minutes), followed by Q&A. (Open to all.)