GRINS: A Multiphysics Framework Based on the libMesh Finite Element Library
Abstract
The progression of scientific computing resources has enabled the numerical approximation of mathematical models describing complex physical phenomena. A significant portion of researcher time is typically dedicated to the development of software to compute the numerical solutions. This work describes a flexible C++ software framework, built on the libMesh finite element library, designed to alleviate developer burden and provide easy access to modern computational algorithms, including quantity-of-interest-driven parallel adaptive mesh refinement on unstructured grids and adjoint-based sensitivities. Other software environments are highlighted to motivate the current work; in particular, the present work is an attempt to balance software infrastructure and user flexibility. The applicable class of problems and the design of the software components are discussed in detail. Several examples demonstrate the effectiveness of the design, including applications that incorporate uncertainty. Current and planned developments are discussed.
Key words. Multiphysics, Finite Elements, libMesh, Open Source Software
AMS subject classifications. 97N80, 68N30, 65N30, 68Q85
1 Introduction and Motivation
The evolution of mathematics, algorithms, and software to support the solution of partial differential equations as well as the deployment of increasing scientific computing resources have enabled the study of increasingly complex mathematical models of physical phenomena. A key aspect of the development of the solution of these models is the deployment of software elements to effectively compute solutions using advanced numerical methods and algorithms. However, the complexity of both the physical problems being studied and the solution methodologies used to compute solutions to these problems ensures that the software development effort will require significant resources in developer time and software engineering expertise.
For example, a research task may be to calibrate parameters of a mathematical model using a new numerical formulation, such as a new stabilization scheme for finite element formulations in fluid mechanics. As calibration requires many forward solves of the physics model, efficiency is key. Thus, one may wish to use adaptive spatial grids, as well as adaptive and/or high-order time stepping schemes. To drive adaptive mesh refinement, there is a variety of techniques, including algorithms requiring the solution of an adjoint problem that focus on reducing errors in functionals of the solution of the mathematical model, e.g. [16]. There are existing software libraries dedicated to supporting each of these tasks, requiring the developer to become familiar with each package, bring the elements together, and deploy a standalone program for the study, e.g. libMesh [34], deal.II [11], and Trilinos [31].
While the use of existing libraries is essential, there are still inefficiencies in focusing on standalone programs for each study. First, there is little or no reusability of the developed modeling kernels. When developing the mathematical modeling capability, there may be opportunities to reuse that model in future work. This is especially true in multiphysics applications where similar physical phenomena may occur, e.g. incompressible fluid flow, heat transfer, chemical tracers, and stabilization operators. In addition to losing the potential reusability of this physics kernel, one loses the testing that went into that code, e.g. regression and unit testing and testing based on the method of manufactured solutions (MMS). Often substantial developer time goes into testing the developed kernels, and by tying that development to a deployed application, this effort will need to be duplicated to deploy a separate standalone program that uses the developed physics. Although one may simply, in principle, "copy-and-paste" into the new application, there will inevitably be modifications such as matching APIs for the new application, changes to coding style, and adapting to possibly different testing frameworks. All these changes incur development costs that could potentially be avoided by using a common infrastructure in which to develop mathematical models.
Another negative aspect of standalone applications is the limited ability to compare to alternative algorithms and methodologies. If, for example, one aspect of the developed application is a new finite element formulation for a particular physics kernel, it would be of interest to compare and contrast this new formulation with existing formulations. A framework which easily accommodates the extension of physics kernels would more easily enable such comparisons as opposed to standalone applications, where either another application or a framework for handling the multiple kernels would need to be developed. These ideas have been an important aspect of the development of “composable solvers” within the PETSc library for linear and nonlinear solver algorithms [19, 18].
On the other hand, while software frameworks greatly aid in alleviating the aforementioned concerns, one also potentially loses one of the advantages of standalone software applications: flexibility to experiment with algorithms to meet the demands of the problem at hand. This is perhaps the greatest challenge in developing such frameworks to tackle complex multiphysics applications utilizing sophisticated numerical strategies, such as adjoint-based adaptive mesh refinement (AMR), adaptive modeling, sensitivity analysis, and, eventually, enabling uncertainty quantification.
The focus of the present work is the development and deployment of a framework, called GRINS (the name stands for "General Reacting Incompressible Navier-Stokes"; the original impetus for GRINS development was reusing incompressible reacting flow kernels, and while the development led to a more general framework, the acronym stuck), to support multiphysics finite element applications and the reusability and extensibility of mathematical modeling kernels, with supporting interfaces to existing solver and discretization libraries to enable modern solution strategies, while, at the same time, retaining flexibility to effectively tackle the science or engineering problem of focus.
The remainder of the paper is devoted to discussing the underlying libraries used and the description of the GRINS framework. Alternative libraries and frameworks are compared and contrasted in Section 1.1. Section 2 discusses the libMesh finite element library that is the foundation of the present work. Section 3 discusses the FEMSystem framework in libMesh that the current work extends into a multiphysics framework. Section 4 discusses GRINS in some detail, highlighting the mechanisms that promote reusability and extensibility in multiphysics applications. Section 5 illustrates several examples that use the GRINS infrastructure before ongoing and future efforts are discussed in Section 6.
1.1 Existing Libraries and Frameworks
In this section, we briefly highlight software environments with goals related to the present work. We note that many of the packages/frameworks discussed below, including the present work, started gaining development momentum around 2010; aspects of each started earlier, but the frameworks seemed to gel around this time. In a sense, all these packages, including GRINS, are exploring tradeoffs between ease of development and user flexibility.
MOOSE
The closest existing software infrastructure to the current work may be the Multiphysics ObjectOriented Simulation Environment (MOOSE) [27]. MOOSE is developed at the Idaho National Lab targeting nuclear scientists and engineers in support of their modeling and simulation efforts and currently supports dozens (approaching hundreds) of applications. MOOSE is built on libMesh, as is the current work. MOOSE targets scientists and engineers who may have little software development training but who wish to rapidly deploy programs aimed at the simulation of nuclear engineering applications.
One significant area where the current work differs from MOOSE is the availability in GRINS of adjoint-based methods for error estimation, AMR, and sensitivity analysis. Adjoint-based methods, highlighted in Section 3.3, provide the opportunity for adaptive mesh refinement targeted at functionals of the solution called quantities of interest. Furthermore, they provide a more efficient means for computing parameter sensitivities of quantities of interest. GRINS uses libMesh to automatically provide discrete adjoint computations based on the given Jacobian representation.
FEniCS
The FEniCS python package (see [36] and references therein) uses a domain specific language (DSL) [7] approach to provide a platform for enabling PDE solutions based on the weak forms supplied by the user in the DSL. This functionality includes the construction of discrete adjoint operators [23] and their solution based on the forward operator supplied. Assembly, solver interfaces, and other aspects of the simulation are then handled by the underlying FEniCS infrastructure. FEniCS provides a powerful framework for enabling rapid development of PDE solutions, but one is also beholden to this framework, and user requirements outside the ecosystem can be challenging to realize. For example, one may wish to explore formulations of the adjoint problem that do not correspond to the discrete adjoint of the forward operator, i.e. cases where the adjoint of the forward problem lacks adjoint consistency. Such issues arise in many stabilization schemes in finite element methods for fluid mechanics [21], for example. Consider as well the solid mechanics example in Section 5: to the knowledge of the authors, this is not possible to deploy within FEniCS due to restrictions of the language in distinguishing gradients in physical coordinates from gradients in reference element coordinates, as well as in supporting meshes containing elements of different dimension.
Albany
The Albany package [1] takes an "agile components" approach to its framework, building and connecting dozens of packages within Trilinos [31], and is beginning to support user applications [20, 54]. Albany facilitates interaction with the different components of Trilinos: linear solvers, nonlinear solvers, optimization, finite element assembly, and other scientific computing infrastructure. Originally, Albany interfaced with the Sierra toolkit [49], but recent efforts seem to indicate collaboration with the mesh infrastructure of the SCOREC center, see e.g. [56]. Albany's approach is, in a sense, at a different extreme: instead of a holistic framework to which the user must adhere, it endeavors to maximize reusability of software components. While this can lead to much less user code to write, it can also make it harder to determine which lines of code to write, as the user must become familiar with the many different components to facilitate their simulation requirements.
Other Foundational Packages
There are several other software packages that provide the foundations for constructing programs to compute numerical approximations of complex mathematical models. These include the deal.II finite element library [11], the DUNE numerics environment, see e.g. [22], and the OpenFOAM package [2]. Each of these software environments provides the framework upon which to construct application programs targeting a specific problem. In a sense, these packages, as well as libMesh, discussed in Section 2, provide the maximum amount of flexibility for the user, but also present a higher barrier to entry, as there is a more limited amount of infrastructure upon which to build the application.
2 libMesh Finite Element Library
The libMesh finite element library [34] was initiated as part of the Ph.D. work of Benjamin Kirk [33], as an alternative to the deal.II library [11], in order to provide adaptive mesh refinement capabilities on general unstructured meshes, including triangles and tetrahedra. The library supports a number of geometric elements including quadrilaterals, hexahedra, triangles, tetrahedra, and prisms. Supported finite elements include traditional first and second order Lagrange, both scalar- and vector-valued, as well as arbitrary order hierarchical bases, Clough-Tocher macroelements, and Nédélec elements of the first type.
libMesh supports mesh partitioning through interfaces to several packages, including Hilbert space-filling curves through libHilbert [30] and graph-based algorithms through Metis and ParMetis [32]. Additionally, libMesh supports parallel distributed meshes for increased scalability and fully supports distributed parallel adaptive mesh refinement. Currently, libMesh has been scaled to tens of thousands of cores and has been run at even larger core counts on the BG/Q machine Mira at Argonne National Lab [26]. A variety of mesh formats are supported to facilitate the use of meshes generated for complex geometries. Additionally, libMesh supports parallel restart file formats.
More direct libMesh support for multiphysics applications comes via the existing interfaces to external solver packages, such as PETSc [9, 8, 10] and Trilinos [31]. In particular, the nature of the design of the PETSc solvers, as described in [17], allows one to experiment with solver choices, including solver algorithm and preconditioning. Variables in multiphysics applications can be automatically assigned to PETSc field splits by libMesh to enable physics-aware algorithms. libMesh also supports identifying subdomains within a mesh. This allows one to easily control which models are active on particular subdomains to allow, for example, modeling of conjugate heat transfer and fluid-structure interaction problems. Such capabilities also enable adaptive modeling algorithms [55]. Finally, libMesh has recently extended the fparser [3] library to support both parsing and compilation of mathematical functions into high performance kernels. This capability allows for easy specification of boundary conditions, initial conditions, or constitutive equations from an input file.
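As one concrete illustration of the solver experimentation this enables, the fragment below shows standard PETSc run-time options (not GRINS-specific input) of the kind one might pass to select a fieldsplit preconditioner with a Schur-complement factorization for a velocity-pressure system; the particular sub-solver choices are illustrative only.

```text
-ksp_type fgmres
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_fact_type lower
-fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type hypre
-fieldsplit_1_ksp_type gmres
-fieldsplit_1_pc_type jacobi
```

Because libMesh assigns variables to the splits automatically, such options can be changed from the command line without touching application code.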
3 FEMSystem Infrastructure
The libMesh finite element library provides a wide set of tools with which to build a mesh-based application. However, libMesh applications were originally required to reimplement many kernels common to finite element applications, including assembly loops and time integration schemes. To reduce the rewriting of these routines, the FEMSystem infrastructure was implemented in the libMesh library as part of the dissertation work of the second author [51, 53, 52]. Implementation of the FEMSystem infrastructure began in 2007, but discussion of the original design is limited to the dissertations of the second author and Dr. John Peterson [40], so we detail the current design here, both to disseminate it to a larger audience and to lay the groundwork for the design of GRINS.
3.1 Problem Class
Currently, the FEMSystem infrastructure is designed to accommodate the following classes of problems. Let $\Omega = \bigcup_i \Omega_i$, with each manifold subdomain $\Omega_i \subset \mathbb{R}^3$; problems of mixed dimension are allowed. We seek solutions $u$ for which each subdomain solution $u_i$ is defined at spatial locations $x \in \Omega_i$ for times $t \in (0, T)$. The number and choice of variable components of $u_i$, and the Sobolev space to which each may belong, depends on the subdomain $\Omega_i$. For those variables which span connected subdomains, typically $C^0$ continuity is enforced, but $C^1$ continuity or discontinuity may be required instead. Dirichlet boundary conditions on some $\partial\Omega_D \subseteq \partial\Omega$ may also be strongly enforced if specified.
Then, the following system of partial differential equations is assumed to hold:
\[
M(\dot{u}, u) + F(u) + C(u) = 0 \quad \text{in } \Omega \times (0,T), \qquad u(\cdot, 0) = u_0 \quad \text{in } \Omega, \tag{3.1}
\]
together with boundary conditions posed in terms of given boundary data $g$ and the outward unit normal $n$ on $\partial\Omega$. Here $\dot{u}$ indicates a time derivative, $u_0$ and $g$ are given, the operator $M$ is the "mass term", $F$ is the "time derivative term", and $C$ is the constraint term. In general each of these terms and conditions may depend on $x$ and $t$. The remainder of the presentation will focus on first order (in time) systems such as the above, but we note that second order (in time) systems are also supported within this framework, for which examples will be shown (Section 5).

The strong form of this system of equations is now cast into a weak form. In particular, for any test function $v \in V$, where $V \subset H^k(\Omega)$ is an appropriate space of test functions, each term of the strong form is converted to a consistent variational form:
\[
M(\dot{u}, u; v) + F(u; v) + C(u; v) = 0 \quad \forall v \in V, \tag{3.2}
\]
where $k$ is an integer that depends on the operators $M$, $F$, and $C$. The ";" in the semilinear forms indicates that the terms to the left are possibly nonlinear while the terms to the right are linear. (While the time derivative is a linear operator, nonlinear combinations of time derivative terms do appear in, for example, certain stabilized schemes for convection dominated problems. Thus, in general, the mass term is possibly nonlinear in the time derivative of the state variable.) The notation has been abused here: the operator symbols in the strong form have been reused in the weak form; the context will distinguish them. In the present context, the operators in (3.2) are understood to be integral operators over $\Omega$ that are typical in the weak form of partial differential equations.

Now introduce a tessellation $\mathcal{T}_h$ of $\Omega$ into finite elements $e$ such that
\[
\overline{\Omega} = \bigcup_{e \in \mathcal{T}_h} \bar{e},
\]
and define finite element trial and test spaces $U_h$ and $V_h \subset V$. Then, problem (3.2) becomes the following semidiscrete system: find $u_h(t) \in U_h$ such that
\[
M(\dot{u}_h, u_h; v_h) + F(u_h; v_h) + C(u_h; v_h) = 0 \quad \forall v_h \in V_h. \tag{3.3}
\]
Upon introduction of finite element shape functions, i.e. $u_h(x, t) = \sum_j u_j(t) \phi_j(x)$, this system of ODEs can now be discretized in time. (We frame the discussion in terms of the "method of lines", but there is no restriction from the standpoint of the software framework on following the "method of Rothe".)
Before elaborating on how the software design leverages this formulation, we introduce an example to help clarify the discussion. An example of a system of partial differential equations that can be cast into this form is the incompressible Navier-Stokes equations:
\[
\rho \left( \frac{\partial u}{\partial t} + u \cdot \nabla u \right) + \nabla p - \mu \Delta u - \rho g = 0, \tag{3.4}
\]
\[
\nabla \cdot u = 0, \tag{3.5}
\]
where $u$ is the velocity, $p$ is the pressure, $\rho$ is the density of the fluid, $\mu$ is the viscosity of the fluid, and $g$ is the gravitational vector. These can be recast into weak form, e.g. [29]. Define
\[
M(\dot{u}; v) = \int_\Omega \rho\, \dot{u} \cdot v \, dx, \tag{3.6}
\]
\[
A(u; v) = \int_\Omega \rho\, (u \cdot \nabla u) \cdot v \, dx, \tag{3.7}
\]
\[
K(u; v) = \int_\Omega \mu\, \nabla u : \nabla v \, dx, \tag{3.8}
\]
\[
B(p; v) = -\int_\Omega p\, \nabla \cdot v \, dx, \tag{3.9}
\]
\[
G(v) = \int_\Omega \rho\, g \cdot v \, dx. \tag{3.10}
\]
For simplicity, we assume that the velocity field is "no-slip" on all of $\partial\Omega$. Define $V = (H_0^1(\Omega))^d$ and $Q \subset L^2(\Omega)$. Then, we seek $u \in V$ and $p \in Q$ which satisfy the following weak form for almost every $t \in (0, T)$:
\[
M(\dot{u}; v) + A(u; v) + K(u; v) + B(p; v) - G(v) = 0 \quad \forall v \in V, \qquad \int_\Omega q\, \nabla \cdot u \, dx = 0 \quad \forall q \in Q. \tag{3.12}
\]
In this case, $k = 1$, and clearly
\[
F(u, p; v) = A(u; v) + K(u; v) + B(p; v) - G(v), \qquad C(u; q) = \int_\Omega q\, \nabla \cdot u \, dx, \tag{3.13}
\]
so that (3.12) is of the abstract form (3.2).

Now, the weak form can be discretized in both space and time, yielding, in general, a system of nonlinear equations for each time step. For example, introduce finite element spaces $V_h \subset V$ and $Q_h \subset Q$, and let $u_h(x,t) = \sum_j u_j(t) \phi_j(x)$ and $p_h(x,t) = \sum_j p_j(t) \psi_j(x)$. Assume the resulting system of ODEs is discretized using a variant of the theta-method. Let the time interval $(0,T)$ be partitioned into a finite number of segments $[t^n, t^{n+1}]$, and let $u^n = u_h(\cdot, t^n)$. Assume a uniform timestep length $\Delta t = t^{n+1} - t^n$. Then, define
\[
u^\theta = \theta u^{n+1} + (1 - \theta) u^n, \qquad p^\theta = \theta p^{n+1} + (1 - \theta) p^n, \qquad \theta \in [0, 1]. \tag{3.14}
\]
Then, given $u^n$ from initial conditions (for $n = 0$) or from a previous time step solution, we obtain the following nonlinear system of equations for $(u^{n+1}, p^{n+1})$:
\[
M\!\left( \frac{u^{n+1} - u^n}{\Delta t}; v_h \right) + F(u^\theta, p^\theta; v_h) = 0 \quad \forall v_h \in V_h, \qquad C(u^{n+1}; q_h) = 0 \quad \forall q_h \in Q_h. \tag{3.15}
\]
Note that the constraint equation is evaluated at the end of the timestep; this was the impetus for separating the constraint equation in both the presentation here and in the software framework.
The FEMSystem framework was designed around the following trivial observations of (3.15):

1. As is typical of finite element software, the residual (3.15) can be assembled as a sum over element residual contributions, since the integral operators are trivially decomposed into a sum of integrals over elements. Thus, the user only needs to supply each of the operators $M$, $F$, and $C$ at the element level. The user need not know the details of mesh partitioning, parallelization, or solver algorithm.

2. If instead the user wishes to solve only the steady-state problem, then simply ignoring the $M$ term in the residual (3.15) will yield a system of nonlinear equations for the steady-state solution. Thus, the same code can be reused for either steady or unsteady problems.
With a framework that exploits this mathematical structure, the user need only supply element-level calculations for each of the above operators. All other implementation considerations can be handled outside the scope of the user, and choices such as a steady or unsteady formulation can be relegated all the way to the program input option level. We describe next some of the software design details that specify such an infrastructure.
We note here that the delegation of residual evaluations is a common theme amongst the frameworks discussed in Section 1.1. The details vary between them: MOOSE uses a more fine-grained approach, requiring the user to specify the residual at the degree of freedom level, i.e. inside quadrature and degree of freedom loops; FEniCS operates at a "global" level, where the user is only required to supply the weak form, e.g. (3.12) in the present context; Albany uses a template-metaprogramming approach where residuals at each quadrature point are computed based on residual expressions supplied by the user. There, the user is required to implement the element, node, and quadrature loops over the current workload partitioned by the framework. The present work is an attempt to strike a balance between framework infrastructure and user flexibility.
3.2 FEMSystem Design
This section delves into some details of the software design that implements the mathematical framework discussed previously. In particular, we focus on how the element assembly and the solvers are encapsulated, encouraging code reusability. As with many other aspects of libMesh, the FEMSystem infrastructure makes heavy use of object-oriented software engineering concepts.
3.2.1 Element Assembly
Element assembly of residuals, be they unsteady or steady residuals, requires the initialization of several components. First, the element loop must be initiated for the current processor; this is done using predicated iterators supplied by libMesh. Additionally, this abstraction facilitates the use of shared memory parallelism, as the processor-local elements can then be divided amongst thread workers (currently, libMesh supports pthreads as well as Intel Threading Building Blocks [47]). Next, for each element, the element-local data must be initialized, including element shape functions evaluated at quadrature points (and derivatives and second derivatives as needed) and solution values for the residual evaluation, e.g. $u^\theta$ in (3.14). All the element-local data is handled by the FEMContext object. This object is constructed and passed to the user to facilitate the requirements of residual evaluation. Using such an object facilitates a consistent API that can address any model within the problem class described previously. Again, we emphasize that the element-level API is independent of the form of the final residual evaluation. As such, the accumulation of the final residual evaluation can be delegated to the solver.
3.2.2 Solvers
The TimeSolver hierarchy handles the interaction with the user-supplied residual evaluations. Subclasses include SteadySolver for steady problems, EulerSolver, which implements the theta method, and NewmarkSolver for second order in time systems. Each of these solvers implements the specific residual form dictated by the mathematical expressions for each method, delegating the element-level terms to the user-supplied functions. Note that the nonlinear solver functionality is delegated to separate objects. This promotes code reusability and encapsulation of the time stepping away from the other solver interface considerations. For example, the EulerSolver code is only 125 lines.
The nonlinear solver interface is handled by the DiffSolver hierarchy. Subclasses include NewtonSolver, a libMesh local implementation, and an interface to PETSc’s SNES solvers. All DiffSolver subclasses reuse the libMesh interfaces to a variety of linear solvers, including PETSc and Trilinos.
3.2.3 User Interaction
The user can create an application by developing a FEMSystem subclass that implements each of the terms, as needed, in (3.3). In particular:

- $M(\dot{u}, u; v)$ is implemented in mass_residual;
- $F(u; v)$ is implemented in element_time_derivative for element interior contributions, side_time_derivative for element boundary contributions, and nonlocal_time_derivative for scalar variable contributions, e.g. Lagrange multipliers;
- $C(u; v)$ is implemented in element_constraint, side_constraint, and nonlocal_constraint, for element interior, element boundary, and scalar variable contributions, respectively.
The user can supply Jacobians of the residuals if they are known analytically or, through the API, the user can indicate that Jacobians are not computed. In the case that Jacobians are not supplied by the user, they are computed using finite differences by the FEMSystem framework. Additionally, an option is provided to use finite-differenced Jacobians to verify a user-supplied analytic Jacobian.
Additionally, the user must write a main program that constructs all the data structures, including the previously discussed FEMSystem subclass. Several examples are distributed with libMesh and are available online (http://libmesh.github.io/examples.html) along with the libMesh documentation (http://libmesh.github.io/doxygen/index.html).
3.3 Quantities of Interest
In addition to supporting PDEs of the form discussed previously, a key feature of this work is the support of computing quantities of interest (QoIs) as well as QoI-based error estimation and parameter sensitivity computation based on the discrete adjoint (see e.g. [37]). QoIs are functionals of the solution $u$. Therefore, the user can define the functional
\[
Q = Q(u), \tag{3.16}
\]
for example an integral of the solution over a portion of the domain or its boundary.
Once the QoI is defined, then the process of computing an error estimate and gradient can be automated using the discrete adjoint. In particular, for the steady-state case, the linear system for the discrete adjoint operator corresponds to the transpose of the Jacobian of the nonlinear system, and the forcing vector is the derivative of the QoI. To elaborate, rewrite (3.2) in a more traditional form: take an operator $N$ and data $f$ such that the steady problem may be written
\[
N(u) = f, \tag{3.17}
\]
where, for notational convenience, the parts of the operator that are constant with respect to $u$ have been split off into $f$, in order to appeal to the notation of previous presentations of the adjoint problem [45, 38]. Define the residual as $R(u) = f - N(u)$. Define the continuous adjoint solution $\varphi$ that satisfies the continuous adjoint problem
\[
\left( N'[u] \right)^{*} \varphi = Q'[u], \tag{3.18}
\]
where $'$ denotes the Fréchet derivative and $*$ the adjoint. (There are alternative definitions of a nonlinear adjoint operator, but the Fréchet-derivative-based definition is the most commonly used.) One then linearizes around the discrete solution $u_h$. Assuming an exact solution of the adjoint problem, the error representation for the quantity of interest is [38]
\[
Q(u) - Q(u_h) = \left( R(u_h), \varphi \right) + \text{h.o.t.}, \tag{3.19}
\]
where h.o.t. is a term higher order in the error $u - u_h$ and is typically neglected (and is zero in the case where the problem and the QoI are linear in $u$). One must also discretize and solve the adjoint problem, impacting the error estimate. We defer discussion of the implication of the discretized adjoint solution to later in this section.

An alternative path to an error representation in the quantity of interest is to first discretize the forward problem (3.17): find $u_h \in U_h$ such that
\[
N_h(u_h) = f_h. \tag{3.20}
\]
Now, in contrast with (3.18), the adjoint problem becomes
\[
\left( \frac{\partial N_h}{\partial u_h} \right)^{T} \varphi_h = \left( \frac{\partial Q_h}{\partial u_h} \right)^{T}, \tag{3.21}
\]
where $\partial / \partial u_h$ denotes the derivative with respect to the discrete solution $u_h$. This "discretize-then-differentiate" approach yields $\varphi_h$ and is typically denoted as the discrete adjoint. This approach is appealing because $(\partial N_h / \partial u_h)^T$ is nothing other than the transpose of the discrete Jacobian of the forward model. Thus, given the form of the QoI and its derivative (which can be computed by finite differences if necessary), the adjoint solution can be automated in a physics-independent way. This is exactly the approach taken in FEMSystem. Each contribution to the linear system (3.21) is then assembled elementwise in analogy with the assembly of the forward problem as described in Section 3.1.
The theory for QoI-based error estimation has been well established for some time, both in the context of discretization error [16] as well as modeling error [38, 39]. Given the residual evaluations and adjoint solution, using the infrastructure described previously, libMesh provides several error estimators to drive adaptivity based on this theory. Currently, there are two QoI-based error estimation classes. The first is AdjointResidual, which uses (user-specified) global error estimates in the forward and adjoint solutions from which a bound on the QoI error can be computed; see e.g. [25] for an example utilizing libMesh. Although the effectivity index tends to be far from unity for such error estimators, they can be effective error indicators for driving mesh adaptation.
The other core QoI-based error estimation object is AdjointRefinement, which uses $h$ and/or $p$ refinement to construct an enriched discretization in which to compute the adjoint solution, which can then be directly used to produce dual-weighted residual error estimates. The issue is that the forward problem has been solved by testing against all functions in $V_h$; in particular, the residual weighted by any adjoint approximation in $V_h$ vanishes by Galerkin orthogonality. Several strategies have been proposed, see e.g. [16]. Simply performing a uniform refinement, yielding the enriched space $V_{h/2} \supset V_h$, transforms (3.21) to the following discrete adjoint problem:
\[
\left( \frac{\partial N_{h/2}}{\partial u} \Big|_{I u_h} \right)^{T} \varphi_{h/2} = \left( \frac{\partial Q_{h/2}}{\partial u} \Big|_{I u_h} \right)^{T}, \tag{3.22}
\]
where $I$ is an interpolation operator from $V_h$ to $V_{h/2}$. The error estimate is then, ignoring higher order terms,
\[
Q(u) - Q(u_h) \approx \left( R(u_h), \varphi_{h/2} \right) + \left( R(u_h), \varphi - \varphi_{h/2} \right), \tag{3.23}
\]
where various bounds can be used in the latter term; other error representations are also possible [38]. As before, all aspects of the adjoint computation and error estimation are independent of the actual form of the element contributions and can be automated by the FEMSystem framework.
Similar ideas hold for the unsteady case, although the infrastructure to support the unsteady case in libMesh [25] is still in preliminary stages. Currently, the forward solution is stored in memory and no checkpointing schemes have been employed.
4 GRINS Framework
While FEMSystem provides a platform that eases the development burden of libMesh-based applications, the FEMSystem infrastructure still targets the development of standalone applications. However, as libMesh is meant to be a library with which to build finite element applications, implementations of particular weak forms of mathematical models do not belong in the library. This is the primary impetus for the GRINS multiphysics framework: extend the FEMSystem framework to enable reusable mathematical model kernels and ease the burden of developing libMesh-based multiphysics applications. Because the framework is built atop libMesh, modern, sophisticated parallel algorithms are intrinsically available.
In addition to adhering to good software development paradigms, there are two primary driving goals in the development of GRINS: reusability of the modules contributed to the framework and ease of extensibility of the modules of the framework. We discuss realizing reusability next, in Section 4.1, and extensibility in Section 4.2.
4.1 Reusability
One of the goals of GRINS is to maintain a repository of various kernels for multiphysics simulation: weak forms (referred to as Physics classes in GRINS), quantities of interest (QoI classes), solver algorithms (Solver classes), boundary and initial conditions, and postprocessing. By maintaining this repository, we can reuse capabilities that have already been developed and tested and facilitate their reuse in particular applications. This is accomplished through judicious use of software design patterns (e.g., see [48, 24]).
The capability to reuse weak form kernels is enabled by defining an interface to which each weak form will adhere — this is done in the Physics class where the interface borrows heavily from the FEMSystem interface specification. Each weak form is a subclass of the Physics class. Then, in a FEMSystem subclass, called MultiphysicsSystem, we implement a strategy pattern to accumulate contributions to each term in (LABEL:eq:first_order_fe_form) from each of the enabled Physics kernels. In particular, each of the terms can be decomposed into a sum of operators:
\[
F(u)(v) = \sum_{p=1}^{N_P} F_p(u)(v), \qquad
M(\dot{u})(v) = \sum_{p=1}^{N_P} M_p(\dot{u})(v),
\tag{4.1}
\]
where each $F_p$, $M_p$ denotes the contribution of one enabled Physics kernel.
The selection of the Physics kernels to be accumulated is parsed from an input file at runtime and is managed using a factory pattern (PhysicsFactory). Similar design strategies were used for the other modules in GRINS:

QoI: Interface defined by QoIBase, construction done in QoIFactory to enable runtime selection capability

Solver: Interface defined by Solver, construction done in SolverFactory to enable runtime selection capability

Postprocessing: Interface defined by PostProcessedQuantities, construction done in PostprocessingFactory to enable additional postprocessing of solution variables
The interaction between these modules, as well as interactions with the libMesh library, is handled by a Simulation object.
By building up reusable kernels of Physics, Solver, QoI, and Postprocessing objects, complex multiphysics simulation capabilities can be constructed with the user supplying only an input file and a mesh (if the geometry is complex). Thus, if the modeling capabilities required by the user already exist in the framework, the user can supply the input file and mesh and directly invoke the grins executable built as part of the distribution.
Although a user can use GRINS as a library and build their own application by extending the various modules, as discussed in the next section, it is our hope that a community can be formed that will help build up this repository of modeling capabilities to facilitate reuse of the different kernels, thereby eliminating a great deal of code duplication and testing.
4.2 Extensibility
While we endeavor to encapsulate a great deal of modeling capability within GRINS, we realize that it cannot possibly meet all needs all the time. Furthermore, the application may have requirements that restrict dissemination due to security classifications, nondisclosure agreements, or export control constraints. To this end, GRINS has also been designed for flexibility and extensibility in order to ease the software development burden of enriching the modeling capabilities needed by the user while, at the same time, allowing reuse of the capabilities already present in the framework. This is accomplished through object-oriented design and use of well-established design patterns. We briefly highlight the extension of each of the modules below.
Physics
Extension of the core weak forms available requires creating a subclass of the Physics object and overriding the appropriate methods: element_time_derivative and element_constraint for element interior integrals, and side_time_derivative and side_constraint for boundary contributions that cannot be handled through the boundary conditions interface (e.g., element jump contributions from discontinuous Galerkin formulations). The mass_residual function handles contributions to the unsteady terms. Default "no-op" implementations are present in the Physics class so that only the methods required by the formulation need be overridden. There are other functions that can be overridden to promote efficiency, but this is not required. Once this new Physics subclass is defined, the PhysicsFactory should be updated so that it can be invoked from an input file.
Solver
Although GRINS, through libMesh, provides basic steady and unsteady solvers, including mesh adaptive versions, it is conceivable that the user has a specialized algorithm to use or with which they want to experiment, such as parameter continuation. To enable such solvers, one must create a subclass of Solver and implement its initialization and the solve method. The solve method is where the user can supply their specialized algorithm. A SolverContext is supplied to the solve method thereby giving the user access to objects and options needed. Once the Solver subclass is implemented, its creation should be added to the SolverFactory to enable runtime selection capabilities.
QoIs
To create a new QoI, a subclass of QoIBase must be created that implements the QoI functionality. The subclass must define the methods assemble_on_interior and assemble_on_sides to indicate whether the QoI involves interior assembly, boundary assembly, or both. Additionally, the element interior (element_qoi and element_qoi_derivative) and boundary (side_qoi and side_qoi_derivative) assembly methods should be overridden as needed. Finally, the QoIFactory should be updated to enable the new QoI.
4.3 Other Software Elements
GRINS is released under an LGPL 2.1 license in order to promote use by the community. It is developed within the GitHub environment and as such can be cloned by anyone. It is a C++ framework, but we have not yet mandated C++11 as a requirement, although we use it when it is available. Optional thermochemistry libraries are supported, including Cantera [28] and Antioch [4]. Documentation is generated using Doxygen. A unit and regression test suite is included and integrated with the build system.
5 Examples
We now describe several examples illustrating the reusability of the existing modeling kernels, QoI-based adaptivity, and interfaces with the QUESO statistical library [44] to facilitate statistical inverse problems on complex data-reduction modeling problems. We proceed informally, only discussing the models at a coarse level, to promote brevity of the manuscript while illustrating the current capabilities and applications of GRINS.
5.1 Fluid Mechanics
We first illustrate the ability to reuse developed modeling capabilities with several typical examples from fluid mechanics. The first two examples share common modeling kernels for which the input specification is shown in Figure LABEL:fig:ins_common_input. (GRINS uses a fork of the GetPot input parser [5] that is distributed with libMesh; particularly desirable features include the sectioning and the ability to "include" other files to facilitate input file reuse.)
Lid-Driven Cavity
The geometry for the lid-driven cavity is simple enough that the mesh can be constructed by libMesh. Thus, all that remains to be specified are the mesh generation options and the boundary conditions; these are shown in Figure LABEL:fig:lid_driven_cavity_input.
There are, of course, many other options that control other aspects of the calculation such as the solver (which can be overridden at the command line when using PETSc-based solvers), visualization output, and terminal output. All of these have sensible defaults, so the information given is enough to define a simulation and proceed.
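To convey the flavor of such a specification, a hypothetical GetPot-style input sketch follows; the section and option names here are illustrative assumptions and may not match the files distributed with GRINS:

```
# Hypothetical input sketch (illustrative names only)
[Mesh]
   [./Generation]
      dimension    = '2'
      element_type = 'QUAD9'
      n_elems_x    = '32'
      n_elems_y    = '32'

[Physics]
   enabled_physics = 'IncompressibleNavierStokes'

   [./IncompressibleNavierStokes]
      rho = '1.0'
      mu  = '0.001'
```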
For the given parameters in this example, the convective scales may not be adequately resolved to yield a stable solution. Thus, one may wish to add stabilization. There are several stabilization schemes already present in GRINS, but should one wish to implement a method not present, one need only add a new Physics subclass, as discussed in Section LABEL:sec:extend.
Backward Facing Step
The backward facing step example requires a slightly more involved mesh specification, particularly the association of boundary ids with specific parts of the boundary, so this mesh was generated externally and will be read at runtime.
As with the lid-driven cavity, we need stabilization for the convective scales, so we add it as in Figure LABEL:fig:ins_stab.
Other Examples
We illustrate more complex examples that benefit from the reusability of developed Physics models. First is the study of thermally induced vortices used for power generation [6]. Figure LABEL:fig:sov shows an example completely specified using a supplied mesh and input file that makes heavy use of function parsing to construct complex boundary and initial conditions. Figure LABEL:fig:cavity shows a solution to a cavity benchmark problem [14] for the Navier-Stokes equations in the low Mach number limit, see e.g. [41]. Finally, Figure LABEL:fig:ozone_flame shows an example calculation of a confined ozone flame, akin to the example of Becker et al. [15].
5.2 Solid Mechanics
Now we describe an application in solid mechanics using this framework. The mathematical models are used to simulate one- and two-dimensional manifolds of elastic materials that undergo large deformation, see e.g. [35]. Although the final application is the modeling of parachutes, we show an example that contains many elements of the desired application and whose features illustrate the flexibility of the current work.
The body is a two-dimensional manifold with overlapping one-dimensional "stiffeners" along the axial directions and a pressure distribution acting normal to the surface; the latter induces another geometric nonlinearity in addition to the large deformation. In particular, Figure LABEL:fig:inflating_membrane shows a two-dimensional circular rubber membrane deforming under a uniform pressure loading, acting normal to the surface, with one-dimensional "stiffeners" (cables) along the x- and y-axes. The rubber membrane is modeled as a Mooney-Rivlin material (material nonlinearity) and the cables are modeled as linear elastic materials (the cables still undergo large deformation, but the strains are assumed to be small).
This application exhibits multiple materials, directly coupled mesh elements of multiple dimensions, and underlying nonlinear plane stress elasticity equations posed on the manifolds. The latter two elements are, to the best of the authors' knowledge, challenging to deploy within the FEniCS platform. (Because the equations are plane stress and the reference configuration of the body is not planar in the x-y plane, the weak form is posed in convected curvilinear coordinates such that gradients are with respect to reference element coordinates and not the more typical physical element coordinates.)
This example can use either an unsteady solver with heavy damping or a simple continuation solver extension in which the applied pressure is incrementally increased. The latter is distributed as part of this example in GRINS and required roughly 100 lines of code.
5.3 QoI-Driven Mesh Adaptivity
This example illustrates using the QoI and Solver infrastructure to enable QoI-based AMR using error estimates based on adjoint solutions. This particular example mimics that of Prudhomme and Oden [45, 46] for point-valued QoIs, namely the Poisson equation with a prescribed loading such that the solution is a given function.
We note that the infrastructure automatically supports multiple QoIs for both error estimation/adaptive mesh refinement and for parameter sensitivity.
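For a QoI $Q$, the adjoint-based estimate driving the refinement takes the standard form (see, e.g., [16, 45]); the particular localization below is one common choice and is shown only as a sketch:

\[
Q(u) - Q(u_h) \approx \mathcal{R}(u_h)(z), \qquad
\eta_K = \bigl|\mathcal{R}(u_h)(z_h - \pi_h z_h)\bigr|_K,
\]

where $u_h$ is the discrete solution, $\mathcal{R}$ the weak residual, $z$ the adjoint solution with $z_h$ its discrete approximation, and $\pi_h$ a projection onto a coarser space; elements $K$ with the largest indicators $\eta_K$ are flagged for refinement.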
5.4 Data Reduction Modeling
The final set of examples cited here concerns the initial integration with the QUESO statistical library [44] for solving statistical inverse problems. In particular, all of these examples use GRINS as a data reduction model: mathematical models are used to infer desired quantities from experiments given measured physical data and its associated uncertainty.
The first application used one-dimensional models of thermocouple gauges to infer surface heat flux [13]. In this case, the thermal conductivity was unknown and temperature history data were used to infer the conductivity using a Bayesian approach. Several likelihood models were considered, all using the underlying forward model infrastructure. The second application was the real-time inference of a damage field given strain field measurements [42, 43]. Here, an extended Kalman filter approach was used to update the damage predictions based on measured strain data; the underlying forward model in GRINS supplied the forward model predictions and derivatives. The final application is the inference of surface chemistry parameters [12]. This case uses a two-dimensional reacting flow model to study the consumption of carbon by dissociated nitrogen, from which the parameters of the carbon nitridation surface chemistry can be inferred. Such reactions are important in reentry flows [50].
At this point, all the previous examples are small application codes built using GRINS and QUESO. Current work is providing direct interfaces to QUESO that will facilitate runtime selection of parameters and construction of data reduction models without the need for separate applications. Completion of this work will be the subject of future publications.
6 Concluding Remarks and Future Directions
In this paper, we have described the GRINS multiphysics framework that provides a platform for constructing numerical approximations of complex mathematical models and facilitating the use of modern numerical methods, including adjoint-based error estimation, AMR, and interactions with statistical libraries for performing inference. Users can directly use existing functionality to run calculations from only an input file and a supplied mesh, and experiment with the solvers and parameters to find the right method for the problem; we were particularly inspired by the PETSc library in this approach. The approach taken here is an attempt to balance infrastructure mandates, e.g., as in FEniCS, against the flexibility afforded by building a standalone application.
Ongoing work includes building interfaces between PETSc and libMesh to facilitate the use of adapted grids for geometric multigrid-based solvers, building reusable interfaces to QUESO to facilitate rapid deployment of inference applications, more flexible interfaces to allow user-specified adjoint operators, infrastructure to support arbitrary Lagrangian-Eulerian (ALE) fluid-structure interaction, as well as many other fine-grained enhancements to lower the barrier for users.
Acknowledgements
The authors thank Mr. Nicholas Malaya at the University of Texas at Austin for providing the thermal vortex image. The first author during his time in the PECOS center and the second author were supported by the Department of Energy [National Nuclear Security Administration] under Award Number DE-FC52-08NA28615. The solid mechanics example was developed by the first author under NASA grant NNX14AI27A. This support is gratefully acknowledged.
References
 [1] https://github.com/gahansen/Albany
 [2] http://www.openfoam.org.
 [3] http://warp.povusers.org/FunctionParser/.
 [4] https://github.com/libantioch/antioch
 [5] http://getpot.sourceforge.net/documentationindex.html
 [6] http://www.fmrl.gatech.edu/drupal/projects/solarvortex
 [7] Martin S. Alnæs, Anders Logg, Kristian B. Ølgaard, Marie E. Rognes, and Garth N. Wells, Unified form language: A domainspecific language for weak formulations of partial differential equations, ACM Trans. Math. Softw., 40 (2014), pp. 9:1–9:37.
 [8] Satish Balay, Mark F. Adams, Jed Brown, Peter Brune, Kris Buschelman, Victor Eijkhout, William D. Gropp, Dinesh Kaushik, Matthew G. Knepley, Lois Curfman McInnes, Karl Rupp, Barry F. Smith, and Hong Zhang, PETSc users manual, Tech. Report ANL95/11  Revision 3.4, Argonne National Laboratory, 2013.
 [9] Satish Balay et al., PETSc Web page. http://www.mcs.anl.gov/petsc, 2014.
 [10] Satish Balay, William D. Gropp, Lois Curfman McInnes, and Barry F. Smith, Efficient management of parallelism in object oriented numerical software libraries, in Modern Software Tools in Scientific Computing, E. Arge, A. M. Bruaset, and H. P. Langtangen, eds., Birkhäuser Press, 1997, pp. 163–202.
 [11] W. Bangerth, R. Hartmann, and G. Kanschat, deal.II – a general purpose object oriented finite element library, ACM Trans. Math. Softw., 33 (2007), pp. 24/1–24/27.
 [12] Paul T. Bauman, Enabling the statistical calibration of active nitridation of graphite by atomic nitrogen, in 45th AIAA Thermophysics Conference, 2015.
 [13] Paul T. Bauman, Jeremy Jagodzinski, and Benjamin S. Kirk, Statistical calibration of thermocouple gauges used for inferring heat flux, in 42nd AIAA Thermophysics Conference, AIAA 20113779, 2011.
 [14] R. Becker and M. Braack, Solution of a stationary benchmark problem for natural convection with large temperature difference, International Journal Of Thermal Sciences, 41 (2002), pp. 428–439.
 [15] R. Becker, M. Braack, and R. Rannacher, Numerical simulation of laminar flames at low Mach number by adaptive finite elements, Combustion Theory And Modelling, 3 (1999), pp. 503–534.
 [16] Roland Becker and Rolf Rannacher, An optimal control approach to a posteriori error estimation in finite element methods, Acta Numerica, 10 (2001), pp. 1–102.
 [17] Jed Brown, Matthew G. Knepley, David A. May, Lois Curfman McInnes, and Barry Smith, Composable linear solvers for multiphysics, in Proceedings of the 2012 11th International Symposium on Parallel and Distributed Computing, ISPDC ’12, Washington, DC, USA, 2012, IEEE Computer Society, pp. 55–62.
 [18] J. Brown, M. G. Knepley, D. A. May, Lois Curfman McInnes, and B. F. Smith, Composable linear solvers for multiphysics, in 11th International Symposium on Parallel and Distributed Computing, Munich, Germany, 06/2012 2012.
 [19] P. Brune, M. G. Knepley, B. F. Smith, and X. Tu, Composing scalable nonlinear algebraic solvers, (2013).
 [20] Qiushi Chen, Jakob T. Ostien, and Glen Hansen, Development of a used fuel cladding damage model incorporating circumferential and radial hydride responses, Journal of Nuclear Materials, 447 (2014), pp. 292–303.
 [21] E. Cyr, J. Shadid, and T. Wildey, Approaches for adjoint-based a posteriori analysis of stabilized finite element methods, SIAM J. Sci. Comput., 36 (2014), pp. A766–A791.
 [22] A. Dedner, R. Klöfkorn, M. Nolte, and M. Ohlberger, A generic interface for parallel and adaptive scientific computing: Abstraction principles and the DUNE-FEM module, Computing, 90 (2011), pp. 165–196.
 [23] P. E. Farrell, D. A. Ham, S. W. Funke, and M. E. Rognes, Automated derivation of the adjoint of high-level transient finite element programs, SIAM J. Sci. Comput., 35 (2013), pp. C369–C393.
 [24] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Professional, 1 ed., 1994.
 [25] Vikram V. Garg, Coupled Flow Systems, Adjoint Techniques and Uncertainty Quantification, PhD thesis, The University of Texas at Austin, 2012.
 [26] D. R. Gaston, C. J. Permann, D. Andrs, and J. W. Peterson, Massive hybrid parallelism for fully implicit multiphysics, in Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science & Engineering (M&C 2013), May 5–9, 2013.
 [27] D. R. Gaston, C. J. Permann, J. W. Peterson, A. E. Slaughter, D. Andrš, Y. Wang, M. P. Short, D. M. Perez, M. R. Tonks, J. Ortensi, and R. C. Martineau, Physicsbased multiscale coupling for full core nuclear reactor simulation, Annals of Nuclear Energy, Special Issue on MultiPhysics Modelling of LWR Static and Transient Behaviour, 84 (2015), pp. 45–54. http://dx.doi.org/10.1016/j.anucene.2014.09.060.
 [28] D. G. Goodwin, An open-source, extensible software suite for CVD process simulation, in CVD XVI and EuroCVD Fourteen, M. Allendorf, F. Maury, and F. Teyssandier, eds., Electrochemical Society, pp. 155–162.
 [29] Max D. Gunzburger, Finite Element Methods for Viscous Incompressible Flows, Academic Press, 1989.
 [30] C. H. Hamilton and A. Rau-Chaplin, Compact Hilbert indices: Space-filling curves for domains with unequal side lengths, Information Processing Letters, 105 (2008), pp. 155–163.
 [31] Michael A. Heroux, Roscoe A. Bartlett, Vicki E. Howle, Robert J. Hoekstra, Jonathan J. Hu, Tamara G. Kolda, Richard B. Lehoucq, Kevin R. Long, Roger P. Pawlowski, Eric T. Phipps, Andrew G. Salinger, Heidi K. Thornquist, Ray S. Tuminaro, James M. Willenbring, Alan Williams, and Kendall S. Stanley, An overview of the Trilinos project, ACM Trans. Math. Softw., 31 (2005), pp. 397–423.
 [32] George Karypis and Vipin Kumar, A fast and high quality multilevel scheme for partitioning irregular graphs, SIAM Journal on Scientific Computing, 20 (1999), pp. 359–392.
 [33] Benjamin S. Kirk, Adaptive Finite Element Simulation of Flow and Transport Applications on Parallel Computers, PhD thesis, The University of Texas at Austin, 2007.
 [34] B. S. Kirk, J. W. Peterson, R. H. Stogner, and G. F. Carey, libMesh: A C++ library for parallel adaptive mesh refinement/coarsening simulations, Engineering with Computers, 22 (2006), pp. 237–254.
 [35] Alain Lo, Nonlinear Dynamic Analysis of Cable and Membrane Structures, PhD thesis, Oregon State University, 1982.
 [36] Anders Logg, Kent-Andre Mardal, and Garth Wells, eds., Automated Solution of Differential Equations by the Finite Element Method, vol. 84 of Lecture Notes in Computational Science and Engineering, Springer, 2012.
 [37] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 1999.
 [38] J. T. Oden and S. Prudhomme, Estimation of modeling error in computational mechanics, Journal Of Computational Physics, 182 (2002), pp. 496–515.
 [39] J. T. Oden, S. Prudhomme, A. Romkes, and P. T. Bauman, Multiscale modeling of physical phenomena: Adaptive control of models, SIAM Journal on Scientific Computing, 28 (2006), pp. 2359–2389.
 [40] John W. Peterson, Parallel Adaptive Finite Element Methods for Problems in Natural Convection, PhD dissertation, The University of Texas at Austin, 2008.
 [41] J. Principe and R. Codina, Mathematical models for thermally coupled low speed flows, Advances in Theoretical and Applied Mechanics, 2 (2009), pp. 93–112.
 [42] E. E. Prudencio, P. T. Bauman, D. Faghihi, K. Ravi-Chandar, and J. T. Oden, A computational framework for dynamic data-driven material damage control, based on Bayesian inference and model selection, Int. J. Numer. Meth. Engng, 102 (2015), pp. 379–403.
 [43] E. E. Prudencio, P. T. Bauman, S. V. Williams, D. Faghihi, K. Ravi-Chandar, and J. T. Oden, Real-time inference of stochastic damage in composite materials, Composites Part B: Engineering, 67 (2014), pp. 209–219.
 [44] Ernesto E. Prudencio and Karl W. Schulz, The parallel C++ statistical library QUESO: Quantification of uncertainty for estimation, simulation and optimization, in Euro-Par 2011 Workshops, Part I, M. Alexander et al., ed., vol. 7155 of Lecture Notes in Computer Science, Springer-Verlag, Berlin Heidelberg, 2012, pp. 398–407.
 [45] S. Prudhomme and J. T. Oden, On goal-oriented error estimation for elliptic problems: application to the control of pointwise errors, Computer Methods in Applied Mechanics and Engineering, 176 (1999), pp. 313–331.
 [46] Serge Michael Prudhomme, Adaptive Control of Error and Stability of hp Approximations of the Transient NavierStokes Equations, PhD thesis, University of Texas at Austin, 1999.
 [47] James Reinders, Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism, O'Reilly Media, 2007.
 [48] Alan Shalloway and James R. Trott, Design Patterns Explained, Addison-Wesley, 2 ed., 2005.
 [49] J. R. Stewart and H. C. Edwards, A framework approach for developing parallel adaptive multiphysics applications, Finite Elements In Analysis And Design, 40 (2004), pp. 1599–1617.
 [50] R. Stogner, Paul T. Bauman, K. W. Schulz, and R. Upadhyay, Uncertainty and parameter sensitivity in multiphysics reentry flows, in 49th AIAA Aerospace Sciences Meeting, AIAA 2011764, 2011.
 [51] Roy H. Stogner, Parallel Adaptive C1 Macro-Elements for Nonlinear Thin Film and Non-Newtonian Flow Problems, PhD thesis, The University of Texas at Austin, 2008.
 [52] Roy H. Stogner and Graham F. Carey, C1 macroelements in adaptive finite element methods, International Journal for Numerical Methods in Engineering, 70 (2007), pp. 1076–1095.
 [53] Roy H. Stogner, Graham F. Carey, and Bruce T. Murray, Approximation of Cahn-Hilliard diffuse interface models using parallel adaptive mesh refinement and coarsening with C1 elements, International Journal for Numerical Methods in Engineering, 76 (2008), pp. 636–661.
 [54] I. Tezaur, A. Salinger, M. Perego, and R. Tuminaro, On the development & performance of a first-order Stokes finite element ice sheet dycore built using Trilinos software components, in SIAM Conference on Computational Science and Engineering, Salt Lake City, UT, 2015.
 [55] T. M. van Opstal, Paul T. Bauman, Serge Prudhomme, and E. H. van Brummelen, Goal-oriented adaptive modeling for viscous incompressible flows, (2013). ICES Report 13-07.
 [56] Min Zhou, Onkar Sahni, Ting Xie, Mark S. Shephard, and Kenneth E. Jansen, Unstructured mesh partition improvement for implicit finite element at extreme scale, The Journal of Supercomputing, 59 (2012), pp. 1218–1228.