Developer Guide
The following outlines how to build the code from scratch. This is necessary if the user seeks to run the code on diverse hardware, add new user-defined models, and/or add capabilities to the Fierro ecosystem.
Folder Structure
The Fierro repository contains many useful things for building models and applications. See the following annotated folder structure for the most important file locations.
Fierro
- CMakeLists.txt                  // Top level CMake for building Fierro
- scripts                         // Bash scripts for building Fierro from scratch
- single-node
  - Explicit-Lagrange
    - src
    - meshes                      // Suite of example meshes
- src
  - EVPFFT
    - scripts                     // Bash scripts for building the small-strain EVPFFT solver
  - LS-EVPFFT
    - scripts                     // Bash scripts for building the large-strain EVPFFT solver
  - Parallel-Solvers              // Finite element solvers
    - Simulation_Parameters.h     // Comprehensive example of Yaml input parsing
    - User-Material-Interface     // Contains placeholders for implementing custom material models
    - scripts                     // Bash scripts for building the parallel solvers
    - Parallel-Explicit           // All code for the explicit solver
      - example_simple.yaml       // Example explicit input file
      - main.cpp                  // Entry point for solver
    - Implicit-Lagrange           // All code for the implicit solver
      - example_simple.yaml       // Example implicit input file
      - main.cpp                  // Entry point for solver
  - Yaml-Serializable             // Library for parsing Yaml files into native structs
Building from scratch inside Anaconda
We encourage developers to use Anaconda, which makes the build process as simple as possible. Anaconda can be installed on Mac and Linux OSs; at this time, Windows users must install Anaconda inside WSL-2. As a starting place, follow the steps for your platform to install Anaconda / miniconda / mamba. The Fierro code contains many solvers, ranging from finite element solvers to micromechanical FFT-based solvers. The instructions that follow address building the solvers individually and then building all of them.
Building large- or small-strain EVPFFT inside Anaconda
Open a terminal on your machine. Then create and activate an Anaconda environment by typing:
conda create -n FierroEVPFFT
conda activate FierroEVPFFT
In this example, the environment is called FierroEVPFFT, but any name can be used. In some cases, the command to activate an environment is source activate FierroEVPFFT. Likewise, if an environment already exists, then just activate the desired environment.
The next steps are to install essential dependencies for EVPFFT. C++ and Fortran compilers are installed by typing:
conda install -c conda-forge "cxx-compiler=1.5.2"
conda install -c conda-forge "fortran-compiler=1.5.2"
Here, conda compiler version 1.5.2 is used, which installs gcc version 11. Omitting 1.5.2 will install gcc version 12 (at this time). We encourage users who are interested in running with the CUDA backend to use a gcc major version that is one less than the CUDA library major version. All other users can omit the version number, i.e., use "cxx-compiler" and "fortran-compiler".
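The conda-forge compiler packages set the CC, CXX, and FC environment variables when the environment is activated (an assumption worth verifying on your platform), so a quick sanity check of the active compilers is:
$CC --version
$CXX --version
$FC --version
Each command should report the conda-installed compiler rather than the system one.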
The next step is to install cmake by typing:
conda install cmake
The EVPFFT solver requires the following third-party libraries:
conda install "fftw=*=mpi_openmpi*" -c conda-forge
conda install "hdf5=*=mpi_openmpi*" -c conda-forge
It is essential to install the MPI versions of these libraries. We also offer an option to build these libraries (fftw and hdf5) from scratch; details are given at the end of this subsection. A word of caution: either install the libraries via conda or include the flags with the build script to compile these libraries. Failing to do either may pull in incompatible versions of fftw and hdf5 that already exist on the machine, resulting in compilation errors when building EVPFFT.
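Before building, it can help to confirm that the MPI-enabled conda builds were installed; the build strings listed should contain mpi_openmpi:
conda list | grep -E "fftw|hdf5"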
If running on an Nvidia GPU, install cudatoolkit by typing:
conda install -c conda-forge cudatoolkit
conda install -c conda-forge cudatoolkit-dev
At this time, CUDA 12 will be installed using the above commands.
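To verify that the toolkit is visible inside the environment, query the nvcc compiler (provided by cudatoolkit-dev):
nvcc --version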
At this point, all necessary dependencies and third-party libraries are installed in the Conda environment. It is now possible to build the EVPFFT solvers from scratch. The build scripts are located at:
Fierro/src/EVPFFT/scripts/build-scripts/
Usage:
source build_evpfft.sh [OPTION]
Required arguments:
--heffte_build_type=<fftw|cufft|rocfft>
--kokkos_build_type=<serial|openmp|pthreads|cuda|hip>
Optional arguments:
--build_fftw: builds fftw from scratch
--build_hdf5: builds hdf5 from scratch
--machine=<darwin|chicoma|linux|mac> (default: none)
--num_jobs=<number>: Number of jobs for 'make' (default: 1, on Mac use 1)
--help: Display this help message
Important: do not use the --machine argument when building EVPFFT from scratch with Anaconda. When building inside Anaconda on a Mac, num_jobs can be greater than 1 (i.e., compiling in parallel is possible). However, using a cmake installed on a Mac outside Anaconda via Homebrew is restricted to serial compilation at this time. Inside Anaconda, for all OSs (Linux, Mac, WSL-2), it is recommended to set the number of jobs equal to the number of cores on the CPU for fast compilation times.
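For example, on an 8-core machine a parallel build could be launched as follows (the flag values are illustrative; the flags themselves are those listed in the usage above):
source build_evpfft.sh --heffte_build_type=fftw --kokkos_build_type=serial --num_jobs=8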
To compile the code with the serial Kokkos backend, which means only MPI, type:
source build_evpfft.sh --heffte_build_type=fftw --kokkos_build_type=serial
Congratulations, the code is compiled. The executable is located in:
Fierro/src/EVPFFT/evpfft_fftw_serial
To compile the code with the openMP Kokkos backend, which means MPI plus openMP, type:
source build_evpfft.sh --heffte_build_type=fftw --kokkos_build_type=openmp
The executable is located in:
Fierro/src/EVPFFT/evpfft_fftw_openmp
To compile the code with the CUDA Kokkos backend, which means MPI plus CUDA, type:
source build_evpfft.sh --heffte_build_type=cufft --kokkos_build_type=cuda
The executable is located in:
Fierro/src/EVPFFT/evpfft_cufft_cuda
Custom Material Models
For more advanced users, Fierro supports complex material models defined in C++ code. The interface for such models is defined in Parallel-Solvers/Material-Models/material_models.h. Currently it supports custom, dynamic implementations of the element speed of sound, pressure, and stress.
These custom material models are linked to a material in the Yaml configuration file by specifying that the equation of state model and/or the strength model for the material is a user defined type. Then, Fierro will look to the user implemented material model for determining the material properties.
When implementing a user material model, you are given access to global configuration options in the form of an array of doubles in the "global_vars" material field. This can be used to tweak properties of your material model without recompiling the implementation. Additionally, you have access to element-wise state variables that are carried over from the previous iteration.
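As an illustration only (the field names below are assumptions and should be checked against Simulation_Parameters.h and the example input files), a material block in the Yaml input might mark its models as user defined and pass constants through global_vars:
materials:                               # hypothetical layout; check the example input files
  - id: 0
    eos_model: user_eos_model            # assumed value marking a user-defined equation of state
    strength_model: user_strength_model  # assumed value marking a user-defined strength model
    global_vars:                         # array of doubles passed to the custom model
      - 2.0e11
      - 0.3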
Solvers
The explicit and implicit solvers offer a framework for two different approaches to solving static or dynamic physics systems. Both solvers govern the loading and parsing of input parameters, mesh files, and geometries; allocate global memory, element-wise memory, and FEA modules; and invoke the optional topology optimization routines.
FEA Modules
FEA Modules are implementations of specific physical properties or phenomena. They can contain logic for either static computation or simulating the next steps of dynamic evolution. For example, the Inertial FEA Module is tasked with computing element-wise masses and moments of inertia. Alternatively, the SGH module implements a Lagrangian finite element staggered grid hydrodynamic (SGH) method with a Runge Kutta time evolution scheme to simulate the dynamics of materials.
Yaml Serialization
Fierro makes use of Yaml as a human readable configuration interface. The Fierro GUI exists to improve the Yaml file creation experience, but the backend ingests the inputs as Yaml options. Yaml-Serializable is a library designed to simplify the conversion of Yaml strings to native C++ data types.
Basic Usage
The following is an example of a C++ struct and a potential Yaml representation:
SERIALIZABLE_ENUM(TEST_ENUM,
VALUE_1,
VALUE_2,
VALUE_3
)
struct Serializable {
    int a;
    float b;
    double c;
    std::set<TEST_ENUM> d;
    std::vector<std::string> e;
};
IMPL_YAML_SERIALIZABLE_FOR(Serializable,
a, b, c, d, e
)
a: 1
b: 1.01
c: 1.025
d:
- VALUE_1
- VALUE_2
- VALUE_3
e:
- string_1
- string_2
Here we have defined a C++ struct with all of the supported fundamental datatypes (custom nested data types are also supported but not featured here). Once the struct is defined and serialization is enabled with the IMPL_YAML_SERIALIZABLE_FOR macro, you can load an instance of the struct from a string with Yaml::from_string<Serializable>(string) or from a file with Yaml::from_file<Serializable>(filepath).
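A minimal sketch of loading the struct defined above (the include path is a placeholder for the actual Yaml-Serializable header):
#include <string>
#include "yaml-serializable.h"   // placeholder include path

std::string text = "a: 1\nb: 1.01\nc: 1.025";
Serializable from_str = Yaml::from_string<Serializable>(text);

// The same structure can be read from a file on disk.
Serializable from_file = Yaml::from_file<Serializable>("input.yaml");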
Validation
Yaml is naturally a permissive specification. When it is initially parsed into a C++ Yaml::Node object, each value is parsed as a string and the existence and structure of nodes is not yet validated. The Yaml-Serializable library makes it trivial to coerce the Yaml values into the specific C++ types as well as apply validation. In general, validation errors are represented as Yaml::ConfigurationException.
By default, the deserializer will only validate enumerations. In the example above, specifying a value for the "d" field that is not in the enumeration will throw an exception. All other potential errors will fail softly, including missing fields or the presence of Yaml fields that don't map to struct fields.
To enforce that all values in the Yaml representation must map to something in the struct, use Yaml::from_file_strict when loading the object. If there is an errant field present in the Yaml, an exception will be thrown. To enforce that a field is present in the Yaml representation when loading, use the YAML_ADD_REQUIRED_FIELDS_FOR macro when defining the struct. There you can list out all of the fields that should be required.
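The following sketch applies both mechanisms to the earlier Serializable example; it assumes Yaml::from_file_strict mirrors the signature of Yaml::from_file and that Yaml::ConfigurationException derives from std::exception:
#include <iostream>

// Placed alongside the serialization macros: "a" and "d" must appear in the Yaml input.
YAML_ADD_REQUIRED_FIELDS_FOR(Serializable, a, d)

void load_config() {
    try {
        // Strict loading: unknown Yaml fields also trigger an exception.
        Serializable s = Yaml::from_file_strict<Serializable>("input.yaml");
    } catch (const Yaml::ConfigurationException& e) {
        std::cerr << e.what() << std::endl;
    }
}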
Advanced Usage
While the Yaml-Serializable library offers a simple interface for establishing a mapping between Yaml representation and C++ representation, it also offers flexibility throughout the deserialization process. The following example, taken from the Fierro source code, demonstrates a more advanced usage pattern.
struct Input_Options : Yaml::ValidatedYaml, Yaml::DerivedFields {
    std::string mesh_file_name;
    MESH_FORMAT mesh_file_format;
    ELEMENT_TYPE element_type = ELEMENT_TYPE::hex8;
    bool zero_index_base = false;

    // Non-serialized fields
    int words_per_line;
    int elem_words_per_line;

    /**
     * Determine a couple of file parsing parameters from the specified filetype.
     */
    void derive() {
        if (mesh_file_format == MESH_FORMAT::ansys_dat) {
            words_per_line = 4;
            elem_words_per_line = 11;
        } else {
            switch (mesh_file_format) {
                case MESH_FORMAT::ensight:
                    words_per_line = 1;
                    break;
                case MESH_FORMAT::vtk:
                case MESH_FORMAT::tecplot:
                    words_per_line = 3;
                    break;
                default:
                    break;
            }

            switch (element_type) {
                case ELEMENT_TYPE::hex8:
                    elem_words_per_line = 8;
                    break;
                case ELEMENT_TYPE::quad4:
                    elem_words_per_line = 4;
                    break;
                default:
                    throw Yaml::ConfigurationException("Unsupported element type `" + to_string(element_type) + "`.");
                    break;
            }
        }

        mesh_file_name = std::filesystem::absolute(mesh_file_name).string();
    }

    /**
     * Ensures that the provided filepath is valid.
     */
    void validate() {
        Yaml::validate_filepath(mesh_file_name);
    }
};
YAML_ADD_REQUIRED_FIELDS_FOR(Input_Options, mesh_file_name)
IMPL_YAML_SERIALIZABLE_FOR(Input_Options, mesh_file_name, mesh_file_format, element_type, zero_index_base)
Here we can see several features in use. As mentioned earlier, YAML_ADD_REQUIRED_FIELDS_FOR(Input_Options, mesh_file_name) is used to enforce that the field "mesh_file_name" is present in the Yaml representation. However, on top of that, we implement a custom validation step by deriving from Yaml::ValidatedYaml and implementing void validate(). Here we make use of a convenience method for validating that the string is actually a valid file.

Aside from validations, we can also see two other options: non-serialized properties and property derivation. When creating the struct, not all of the fields must be serializable. Any field not listed in IMPL_YAML_SERIALIZABLE_FOR will simply be ignored by both the serializer and deserializer. One good use case for these kinds of fields is having derived properties. In this case, "words_per_line" and "elem_words_per_line" are not unique information, as they can be determined from the other fields, namely "mesh_file_format" and "element_type". However, it is convenient to keep the logic for deriving these fields close to the definition. This is achieved by deriving from Yaml::DerivedFields and implementing void derive(). With those things in place, the deserializer will automatically call derive after the serializable fields are loaded and before calling validate.