Software Installation

Activate environment

It is assumed that the tardigrade-examples-env environment has been installed per the instructions provided in Activate Environment.

$ conda activate tardigrade-examples-env

Abaqus FEM

Abaqus is used for several upscaling verification workflows, but is otherwise optional. Users may refer to the Abaqus documentation for installation instructions.

Add Abaqus to software configuration path

Either using scons --config-software or manually, add /path/to/abaqus to the config_software.yml entry for “Abaqus”.

For most Windows installations, the path to the abaqus.bat script may be specified. The default path is already added to the SConstruct file assuming the program is installed in the default location.
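As a sketch, the resulting entry in config_software.yml might look like the following (the key name is taken from this section; the authoritative layout is whatever scons --config-software generates):

```yaml
# Hypothetical config_software.yml entry for Abaqus; verify against the
# file produced by `scons --config-software`. The path is a placeholder.
Abaqus: /path/to/abaqus
```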

Ratel FEM

Ratel may be installed according to the directions provided in the main repository (https://ratel.micromorph.org/doc/intro/); further instructions are provided here for clarity. For the DNS required by the workflows of this repository, the only Ratel software prerequisites are libCEED and PETSc. Once libCEED and PETSc are properly installed, Ratel may be built.

First, create a root directory for Ratel software in a convenient location.

$ mkdir /path/to/directory/Ratel
$ export RATEL_DIR=/path/to/directory/Ratel
$ cd $RATEL_DIR

Build libCEED

Clone and build libCEED.

$ cd $RATEL_DIR
$ git clone https://github.com/CEED/libCEED
$ cd libCEED
$ make

Export the build directory ($CEED_DIR) as an environment variable.

$ export CEED_DIR=$RATEL_DIR/libCEED

Build PETSc

Clone, configure, and build PETSc, then export the build location as an environment variable. The following instructions provided by PETSc may be a helpful reference if problems arise: https://petsc.org/release/install/install_tutorial/#qqtw-quickest-quick-start-in-the-west. Several configure options are specified here to allow Ratel simulations to run with Exodus meshes generated using Cubit.

$ cd $RATEL_DIR
$ git clone https://gitlab.com/petsc/petsc
$ cd petsc
$ ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --download-exodusii --download-hdf5 --download-netcdf --download-pnetcdf --download-zlib

After configuring PETSc, a specific make command will be printed that lists the PETSc build architecture ($PETSC_ARCH) and the PETSc build directory ($PETSC_DIR), both of which will be needed for Ratel. The following make command was provided when building on sstbigbird.lanl.gov.

$ make PETSC_DIR=$RATEL_DIR/petsc PETSC_ARCH=arch-linux-debug all

Export the build directory ($PETSC_DIR) and build architecture ($PETSC_ARCH) as environment variables.

$ export PETSC_DIR=$RATEL_DIR/petsc
$ export PETSC_ARCH=arch-linux-debug

Note

Make sure to use the PETSC_ARCH reported by PETSc after the configuration step.
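Before building Ratel, a quick shell check (a sketch, assuming the default values used in the steps above) can confirm that the variables point at a real PETSc build tree:

```shell
# Sanity check: confirm the PETSc environment variables are set and that
# the corresponding build directory exists. Defaults below follow the
# steps above and are illustrative only.
: "${PETSC_DIR:=$RATEL_DIR/petsc}"
: "${PETSC_ARCH:=arch-linux-debug}"
echo "PETSC_DIR=$PETSC_DIR"
echo "PETSC_ARCH=$PETSC_ARCH"
if [ -d "$PETSC_DIR/$PETSC_ARCH" ]; then
    echo "PETSc build directory found"
else
    echo "warning: $PETSC_DIR/$PETSC_ARCH does not exist"
fi
```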

Build Ratel

Clone and build Ratel. This should work if the $CEED_DIR, $PETSC_DIR, and $PETSC_ARCH environment variables have been set.

$ cd $RATEL_DIR
$ git clone https://gitlab.com/micromorph/ratel
$ cd ratel
$ make

Test

The Ratel documentation includes instructions for testing the installation, which a user is welcome to follow. Another simple test may be run using the following commands:

$ cd $RATEL_DIR/ratel
$ ./bin/ratel-quasistatic -options_file examples/ex02-quasistatic-elasticity-linear-platen.yml

Many other examples can be found in the $RATEL_DIR/ratel/examples directory.

Add Ratel to software configuration path

Currently, all Ratel DNS used in this repository only require the ratel-quasistatic program. This executable should be located in $RATEL_DIR/ratel/bin/ratel-quasistatic. Either using scons --config-software or manually, add /path/to/ratel/bin/ratel-quasistatic to the config_software.yml entry for “Ratel”.

GEOS MPM

Coming soon!

Cubit

Cubit is used for a number of meshing operations. Users may refer to the Cubit documentation for installation instructions.

For users without access to Cubit, several example meshes are provided in model_package/meshes/; however, workflow functionality will be limited.

Add Cubit to software configuration path

Either using scons --config-software or manually, add /path/to/cubit to the config_software.yml entry for “Cubit”.

Micromorphic Filter

All workflows use the Micromorphic Filter for homogenization. This software is written entirely in Python and does not need to be compiled or built. Workflows using the Micromorphic Filter are already configured to instantiate the Filter class and call the relevant functions. Simply clone the repository to a desired location.

$ git clone git@github.com:UCBoulder/tardigrade_filter.git

In order to clone this repository, a user may need to configure their GitHub account to be associated with University of Colorado Boulder’s single sign-on (SSO). For instructions, see the section titled “Access GitHub” from the Office of Information Technology at the following link: https://oit.colorado.edu/services/business-services/github-enterprise

The Conda environment for this repository includes all of the packages used by the Micromorphic Filter repository to guarantee that this software functions appropriately.

Test

The Micromorphic Filter comes with built-in tests using pytest. To run these tests, simply run the following commands:

$ cd /path/to/tardigrade_filter
$ pytest

Add Micromorphic Filter to software configuration path

Either using scons --config-software or manually, add /path/to/tardigrade_filter/src/python to the config_software.yml entry for “filter”.

The path to the Micromorphic Filter’s src/python directory needs to be inserted into the Python path whenever it is to be used. This is handled automatically by the SCons workflow.
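Outside of SCons, the same insertion can be done by hand; a minimal sketch follows (the clone location is a placeholder, and the final import is left commented because the module name depends on the Filter sources):

```python
import sys

# Placeholder clone location; replace with the actual path to the
# tardigrade_filter repository's src/python directory.
FILTER_PATH = '/path/to/tardigrade_filter/src/python'
if FILTER_PATH not in sys.path:
    sys.path.insert(0, FILTER_PATH)
# With the path in place, the Filter class can be imported from the
# Micromorphic Filter sources and instantiated as the workflows do.
```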

Tardigrade-MOOSE

Tardigrade-MOOSE is built using CMake and requires a number of compilers and Python libraries, which are listed in the environment.txt file provided in this repository.

Note

MOOSE and its associated Python packages update frequently, so the conda environment for this repository should be rebuilt each time Tardigrade-MOOSE is to be compiled. See the following link for more information: https://mooseframework.inl.gov/getting_started/new_users.html#update.

Clone Tardigrade

$ git clone https://github.com/UCBoulder/tardigrade.git
$ cd tardigrade

CMake

$ mkdir build
$ cd build
$ cmake .. -DTARDIGRADE_BUILD_PYTHON_BINDINGS=OFF
$ make -j 4

Set LD_LIBRARY_PATH

An LD_LIBRARY_PATH entry needs to be specified so that Tardigrade executables can locate their shared libraries. A user may either: (1) export this path as an environment variable, or (2) include this path on the command line each time a Tardigrade program is run.

For option 1, the environment variable may be set with the following command. It is NOT recommended to include this environment variable in a ~/.bashrc, as there may be unintended consequences.

$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-build/src/cpp

For details using option 2, see the following subsection for “Test” or Using Tardigrade-MOOSE from Command Line. Workflows that run Tardigrade-MOOSE are configured to automatically use option 2 in which the LD_LIBRARY_PATH is prepended to the command that launches a simulation. However, note that other operations may still require manual intervention (such as those described in the sections just mentioned).

Either using scons --config-software or manually, add /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-build/src/cpp to the config_software.yml entry for “LD_PATH”. This configuration will ensure that Tardigrade-MOOSE simulations run through SCons workflows will access the appropriate shared libraries.

If one encounters the error "error while loading shared libraries: libmicromat.so: cannot open shared object file", then the LD_LIBRARY_PATH is not configured correctly.
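One way to diagnose this on Linux is to inspect the executable's shared-library resolution with ldd. The sketch below uses /bin/ls as a stand-in so the command runs anywhere; point TARGET at the actual tardigrade-opt executable:

```shell
# Check an executable for unresolved shared libraries with ldd (Linux).
# /bin/ls is a stand-in; replace with /path/to/tardigrade/build/tardigrade-opt.
TARGET="${TARGET:-/bin/ls}"
if ldd "$TARGET" | grep -q "not found"; then
    echo "unresolved shared libraries detected for $TARGET"
else
    echo "all shared libraries resolved for $TARGET"
fi
```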

Test

The tests may be run using the ctest -v command from within the Tardigrade build directory. As discussed in Set LD_LIBRARY_PATH, the tests may be run with the LD_LIBRARY_PATH already set as an environment variable with:

$ cd /path/to/tardigrade/build
$ ctest -v

or by specifying the LD_LIBRARY_PATH on the command line:

$ cd /path/to/tardigrade/build
$ LD_LIBRARY_PATH=/path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-build/src/cpp ctest -v

Most or all of the tests should pass. If they do not all pass, the tests may be run again with the --rerun-failed and --output-on-failure options to see which tests failed. If a test fails with the "EXODIFF" reason, then it is likely that the most recent version of Tardigrade produces output that does not exactly match the "gold" results file. Otherwise, if tests fail because a specific library is not found (e.g., libmicromat.so), then Tardigrade is configured improperly and/or the LD_LIBRARY_PATH has not been specified correctly.

Add Tardigrade-MOOSE to software configuration path

Either using scons --config-software or manually, add /path/to/tardigrade/build/tardigrade-opt to the config_software.yml entry for “Tardigrade”.

Micromorphic Calibration Tool

The micromorphic calibration tool is a shared Python library that can be built after building tardigrade_micromorphic_element. If the Tardigrade-MOOSE build went smoothly, the calibration tool will be located in the /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-src/src/python directory. Alternatively, tardigrade_micromorphic_element may be built separately from Tardigrade-MOOSE. Be sure that the “tardigrade-examples-env” environment is activated.

Note

It is likely that the setup.py file will need to be modified!

Set the library_dirs in setup.py to the following path:

library_dirs = [os.path.abspath('../../../tardigrade_micromorphic_element-build/src/cpp')]

The LD_LIBRARY_PATH must be set according to the instructions provided in Set LD_LIBRARY_PATH.

The shared library may be built as follows:

$ cd /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-src/src/python
$ python setup.py build_ext --inplace

Test

To test that the shared library is working correctly, one may start an interactive Python session (in the /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-src/src/python directory) and use import micromorphic. Alternatively, an interactive session may be run from any directory, but the location of the micromorphic shared library must then be appended to the Python path as follows:

import sys
sys.path.append('/path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-src/src/python')
import micromorphic

Further discussion is provided in Software Usage to show how the WAVES workflow automatically sets these Python paths.

Add Micromorphic Calibration Tool to software configuration path

Either using scons --config-software or manually, add /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-src/src/python to the config_software.yml entry for “micromorphic”.

The path to the micromorphic shared library needs to be inserted into the Python path whenever it is to be used. This is handled automatically by the SCons workflow.

Micromorphic Linear Elastic Constraints

Constraints of the micromorphic linear elasticity model of Eringen and Suhubi [3] must be enforced. See discussion of these constraints in Constraints on Elastic Parameters.

The calibration stage of upscaling workflows must evaluate these constraints when determining linear elastic parameters. The linear_elastic_parameter_constraint_equations.py script, provided in the tardigrade_micromorphic_linear_elasticity repository, evaluates these 13 constraints. This repository is automatically pulled during the Tardigrade-MOOSE CMake build.

Add Micromorphic Linear Elastic Constraints to software configuration path

Either using scons --config-software or manually, add /path/to/tardigrade/build/_deps/tardigrade_micromorphic_linear_elasticity-src/src/python to the config_software.yml entry for “constraints”.

The path to the linear_elastic_parameter_constraint_equations.py script needs to be inserted into the Python path whenever it is to be used. This is handled automatically by the SCons workflow.

MPI

Parallel jobs for Ratel and Tardigrade-MOOSE may be run using MPI (Message Passing Interface). The location of the mpiexec utility depends on the system being used; however, it may have been installed when creating the conda environment for this project (e.g., /path/to/tardigrade-examples-env/bin/mpiexec). One may locate this utility by executing which mpiexec on the command line.

The mpiexec command should only be necessary for parallelizing simulations run on systems without a job scheduler such as SLURM. For HPCs with SLURM, see the discussion in Serial vs. Parallel.
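A sketch of locating mpiexec and launching a parallel job follows (the fallback path, rank count, and options file are illustrative, not part of this repository's configuration):

```shell
# Locate mpiexec, preferring whatever is on the PATH (e.g. the copy in the
# tardigrade-examples-env conda environment). The fallback path is a placeholder.
MPIEXEC="$(command -v mpiexec || echo /path/to/tardigrade-examples-env/bin/mpiexec)"
echo "Using mpiexec at: $MPIEXEC"
# Example parallel Ratel launch (illustrative; adjust ranks and options file):
# "$MPIEXEC" -n 4 "$RATEL_DIR/ratel/bin/ratel-quasistatic" -options_file options.yml
```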

Add MPI to software configuration path

Either using scons --config-software or manually, add /path/to/mpiexec to the config_software.yml entry for “mpi”.
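Collecting the entries described throughout this section, a complete config_software.yml might look like the following sketch (key names are taken from the quoted entries above; every path is a placeholder, and the authoritative layout is whatever scons --config-software produces):

```yaml
# Hypothetical consolidated config_software.yml; all keys come from the
# sections above, and all paths must be replaced with real locations.
Abaqus: /path/to/abaqus
Ratel: /path/to/ratel/bin/ratel-quasistatic
Cubit: /path/to/cubit
filter: /path/to/tardigrade_filter/src/python
Tardigrade: /path/to/tardigrade/build/tardigrade-opt
LD_PATH: /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-build/src/cpp
micromorphic: /path/to/tardigrade/build/_deps/tardigrade_micromorphic_element-src/src/python
constraints: /path/to/tardigrade/build/_deps/tardigrade_micromorphic_linear_elasticity-src/src/python
mpi: /path/to/mpiexec
```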