Internal API
SCons Extensions
- waves.scons_extensions._abaqus_datacheck_solver_emitter(target: list, source: list, env) tuple[list, list] [source]
Passes the datacheck specific extensions to _abaqus_solver_emitter()
- waves.scons_extensions._abaqus_explicit_solver_emitter(target: list, source: list, env) tuple[list, list] [source]
Passes the explicit specific extensions to _abaqus_solver_emitter()
- waves.scons_extensions._abaqus_extract_emitter(target: list, source: list, env) tuple[list, list] [source]
Prepends the abaqus extract builder target H5 file if none is specified. Appends the source[0].csv file unless delete_report_file is True. Always appends the target[0]_datasets.h5 file.
If no targets are provided to the Builder, the emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then at least one target must be provided with the build subdirectory, e.g. parameter_set1/target.h5. When in doubt, provide the expected H5 file as a target, e.g. source[0].h5.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- Returns:
target, source
- waves.scons_extensions._abaqus_journal_emitter(target: list, source: list, env) tuple[list, list] [source]
Appends the abaqus_journal builder target list with the builder managed targets
Appends target[0].abaqus_v6.env and target[0].stdout to the target list. The abaqus_journal Builder requires at least one target.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- Returns:
target, source
- waves.scons_extensions._abaqus_solver_emitter(target: list, source: list, env, suffixes: list[str] = ['.odb', '.dat', '.msg', '.com', '.prt'], stdout_extension: str = '.stdout') tuple[list, list] [source]
Appends the abaqus_solver builder target list with the builder managed targets
If no targets are provided to the Builder, the emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then at least one target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide the output database as a target, e.g. job_name.odb.
If "suffixes" is a key in the environment, env, then the suffixes list will override the suffixes_to_extend argument.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
suffixes_to_extend – List of strings to use as emitted file suffixes. Must contain the leading period, e.g. .extension
- Returns:
target, source
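The emitter's target generation can be pictured with a small stand-alone sketch. This is a simplification for illustration only; the helper name and signature are hypothetical, not part of the package, and the real emitter operates on SCons nodes:

```python
def solver_emitter_sketch(job_name, suffixes=(".odb", ".dat", ".msg", ".com", ".prt"),
                          stdout_extension=".stdout"):
    # Emit one managed target per suffix from the job name, with the STDOUT
    # redirect file appended last (mirrors the documented default suffixes)
    targets = [job_name + suffix for suffix in suffixes]
    targets.append(job_name + stdout_extension)
    return targets
```

For example, solver_emitter_sketch("my_job") would emit my_job.odb through my_job.prt plus a trailing my_job.stdout.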
- waves.scons_extensions._abaqus_standard_solver_emitter(target: list, source: list, env) tuple[list, list] [source]
Passes the standard specific extensions to
_abaqus_solver_emitter()
- waves.scons_extensions._build_odb_extract(target: list, source: list, env) None [source]
Define the odb_extract action when used as an internal package and not a command line utility
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- waves.scons_extensions._build_subdirectory(target: list) Path [source]
Return the build subdirectory of the first target file
- Parameters:
target – The target file list of strings
- Returns:
build directory
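The documented behavior can be approximated in plain Python. This is an illustrative sketch under the assumption that an empty target list maps to the current directory; the function name is hypothetical:

```python
import pathlib

def build_subdirectory_sketch(target):
    # The build subdirectory is the parent of the first target file;
    # fall back to the current directory for an empty target list
    if not target:
        return pathlib.Path(".")
    return pathlib.Path(str(target[0])).parent
```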
- waves.scons_extensions._cache_environment(command: str, cache: str | None = None, overwrite_cache: bool = False, verbose: bool = False) dict [source]
Retrieve cached environment dictionary or run a shell command to generate environment dictionary
If the environment is created successfully and a cache file is requested, the cache file is _always_ written. The overwrite_cache behavior forces the shell command execution, even when the cache file is present.
Warning
Currently only supports bash shells
- Parameters:
command – the shell command to execute
cache – absolute or relative path to read/write a shell environment dictionary. Will be written as YAML formatted file regardless of extension.
overwrite_cache – Ignore previously cached files if they exist.
verbose – Print SCons configuration-like action messages when True
- Returns:
shell environment dictionary
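The read-or-regenerate decision described above can be sketched in plain Python. This is an assumption-laden illustration, not the package implementation; the helper name is hypothetical and the real function also runs the shell command and writes the YAML cache:

```python
import pathlib

def use_cached_environment(cache, overwrite_cache=False):
    # Read an existing cache file only when a cache path was provided, the
    # file exists, and the caller has not forced a refresh
    return (
        cache is not None
        and pathlib.Path(cache).exists()
        and not overwrite_cache
    )
```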
- waves.scons_extensions._custom_scanner(pattern: str, suffixes: list[str], flags: int | None = None) Scanner [source]
Custom SCons scanner
Constructs a scanner object based on a regular expression pattern. Will only search for files matching the list of suffixes provided. _custom_scanner will always use the re.MULTILINE flag: https://docs.python.org/3/library/re.html#re.MULTILINE
- Parameters:
pattern – Regular expression pattern.
suffixes – List of suffixes of files to search
flags – An integer representing the combination of re module flags to be used during compilation. Additional flags can be combined using the bitwise OR (|) operator. The re.MULTILINE flag is automatically added to the combination.
- Returns:
Custom SCons scanner
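The effect of the re.MULTILINE flag can be seen with a small example. The include-style pattern below is hypothetical and only illustrates the anchoring behavior; it is not the package's own pattern:

```python
import re

# With re.MULTILINE, '^' and '$' match at every line boundary, so a
# line-oriented directive pattern finds all occurrences in one pass
pattern = re.compile(r"^\*INCLUDE,\s*INPUT=(.+)$", re.MULTILINE | re.IGNORECASE)
content = "*Heading\n*INCLUDE, INPUT=mesh.inp\n*include, input=loads.inp\n"
dependencies = pattern.findall(content)
```

Without re.MULTILINE, the ^ anchor would only match at the start of the whole string and only the first directive could be found.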
- waves.scons_extensions._first_target_emitter(target: list, source: list, env, suffixes: list[str] = [], appending_suffixes: list[str] = [], stdout_extension: str = '.stdout') tuple[list, list] [source]
Appends the target list with the builder managed targets
Searches for a file ending in the stdout extension. If none is found, creates a target by appending the stdout extension to the first target in the target list. The associated Builder requires at least one target for this reason. The stdout file is always placed at the end of the returned target list.
The suffixes list contains replacement operations on the first target's suffix. The appending suffixes list contains appending operations on the first target's suffix.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file with the .stdout extension as a target, e.g. target.stdout.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
suffixes – Suffixes which should replace the first target’s extension
appending_suffixes – Suffixes which should append the first target’s extension
- Returns:
target, source
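The replacement and appending suffix operations can be sketched in plain Python. This is a simplified illustration; the helper name is hypothetical and the real emitter operates on SCons nodes:

```python
import pathlib

def first_target_emitter_sketch(target, suffixes=(), appending_suffixes=(),
                                stdout_extension=".stdout"):
    # Replacement suffixes swap the first target's extension, appending
    # suffixes add to it, and the STDOUT file is always placed last
    first = pathlib.PurePosixPath(target[0])
    emitted = list(target)
    emitted += [str(first.with_suffix(suffix)) for suffix in suffixes]
    emitted += [str(first) + suffix for suffix in appending_suffixes]
    if not any(str(node).endswith(stdout_extension) for node in emitted):
        emitted.append(str(first) + stdout_extension)
    return emitted
```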
- waves.scons_extensions._matlab_script_emitter(target: list, source: list, env) tuple[list, list] [source]
Appends the matlab_script builder target list with the builder managed targets
Appends target[0].matlab.env and target[0].stdout to the target list. The matlab_script Builder requires at least one target. The build tree copy of the Matlab script is not added to the target list to avoid multiply defined targets when the script is used more than once in the same build directory.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- Returns:
target, source
- waves.scons_extensions._return_environment(command: str) dict [source]
Run a shell command and return the shell environment as a dictionary
Warning
Currently only supports bash shells
- Parameters:
command – the shell command to execute
- Returns:
shell environment dictionary
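The parsing half of this operation can be sketched without invoking a shell. This is an illustration only; it assumes the captured output is one KEY=VALUE pair per line, which a bash "command && env" capture would produce:

```python
def parse_environment(stdout):
    # Split 'KEY=VALUE' lines into a dictionary; partition keeps any '='
    # characters that appear inside the value
    environment = {}
    for line in stdout.strip().splitlines():
        key, _, value = line.partition("=")
        environment[key] = value
    return environment
```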
- waves.scons_extensions._sierra_emitter(target: list, source: list, env) tuple[list, list] [source]
Appends the sierra builder target list with the builder managed targets
Appends target[0].env and target[0].stdout to the target list. The Sierra Builder requires at least one target.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- Returns:
target, source
- waves.scons_extensions._string_action_list(builder: Builder) list [source]
Return a builder's action list as a list of strings
- Parameters:
builder – The builder to extract the action list from
- Returns:
list of builder actions
- waves.scons_extensions._warn_kwarg_change(kwargs: dict, old_kwarg: str, new_kwarg: str = 'program')[source]
Return the value of an old kwarg and raise a deprecation warning pointing to the new kwarg
Return None if the old keyword argument is not found in the keyword arguments dictionary.
>>> def function_with_kwarg_change(new_kwarg="something", **kwargs):
>>>     old_kwarg = waves.scons_extensions._warn_kwarg_change(kwargs, "old_kwarg")
>>>     new_kwarg = old_kwarg if old_kwarg is not None else new_kwarg
- Parameters:
kwargs – The **kwargs dictionary from a function interface
old_kwarg – The older kwarg key.
- Returns:
Value of the old_kwarg if it exists in the kwargs dictionary. None if the old keyword isn't found in the dictionary.
- waves.scons_extensions.abaqus_extract(program: str = 'abaqus', **kwargs) Builder [source]
Abaqus ODB file extraction Builder
This builder executes the odb_extract command line utility against an ODB file in the source list. The ODB file must be the first file in the source list. If there is more than one ODB file in the source list, all but the first file are ignored by odb_extract.
This builder is unique in that no targets are required. The Builder emitter will append the builder managed targets and odb_extract target name constructions automatically. The first target determines the working directory for the emitter targets. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then at least one target must be provided with the build subdirectory, e.g. parameter_set1/target.h5. When in doubt, provide the expected H5 file as a target, e.g. source[0].h5.
The target list may specify an output H5 file name that differs from the ODB file base name as new_name.h5. If the first file in the target list does not contain the *.h5 extension, or if there is no file in the target list, the target list will be prepended with a name matching the ODB file base name and the *.h5 extension.
The builder emitter appends the CSV file created by the abaqus odbreport command as executed by odb_extract unless delete_report_file is set to True.
This builder supports the keyword arguments: output_type, odb_report_args, delete_report_file with behavior as described in the ODB Extract command line interface.
Warning
odb_extract requires Abaqus arguments for odb_report_args in the form of option=value, e.g. step=step_name.
/                          # Top level group required in all hdf5 files
/<instance name>/          # Groups containing data of each instance found in an odb
    FieldOutputs/          # Group with multiple xarray datasets for each field output
        <field name>/      # Group with datasets containing field output data for a specified set or surface
                           # If no set or surface is specified, the <field name> will be
                           # 'ALL_NODES' or 'ALL_ELEMENTS'
    HistoryOutputs/        # Group with multiple xarray datasets for each history output
        <region name>/     # Group with datasets containing history output data for specified history region name
                           # If no history region name is specified, the <region name> will be 'ALL NODES'
    Mesh/                  # Group written from an xarray dataset with all mesh information for this instance
/<instance name>_Assembly/ # Group containing data of assembly instance found in an odb
    Mesh/                  # Group written from an xarray dataset with all mesh information for this instance
/odb/                      # Catch all group for data found in the odbreport file not already organized by instance
    info/                  # Group with datasets that mostly give odb meta-data like name, path, etc.
    jobData/               # Group with datasets that contain additional odb meta-data
    rootAssembly/          # Group with datasets that match odb file organization per Abaqus documentation
    sectionCategories/     # Group with datasets that match odb file organization per Abaqus documentation
/xarray/                   # Group with a dataset that lists the location of all data written from xarray datasets
import waves
env = Environment()
env["abaqus"] = waves.scons_extensions.add_program(["abaqus"], env)
env.Append(BUILDERS={"AbaqusExtract": waves.scons_extensions.abaqus_extract()})
env.AbaqusExtract(target=["my_job.h5", "my_job.csv"], source=["my_job.odb"])
- Parameters:
program – An absolute path or basename string for the abaqus program
- Returns:
Abaqus extract builder
- waves.scons_extensions.abaqus_input_scanner() Scanner [source]
Abaqus input file dependency scanner
Custom SCons scanner that searches for the INPUT= parameter and associated file dependencies inside Abaqus *.inp files.
- Returns:
Abaqus input file dependency Scanner
- Return type:
SCons.Scanner.Scanner
- waves.scons_extensions.abaqus_journal(program: str = 'abaqus', post_action: list = [], **kwargs) Builder [source]
Abaqus journal file SCons builder
This builder requires that the journal file to execute is the first source in the list. The builder returned by this function accepts all SCons Builder arguments and adds the keyword argument(s):
- journal_options: The journal file command line options provided as a string.
- abaqus_options: The Abaqus command line options provided as a string.
At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing the journal file.
The Builder emitter will append the builder managed targets automatically. Appends target[0].abaqus_v6.env and target[0].stdout to the target list.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/my_target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
cd ${TARGET.dir.abspath} && abaqus cae -noGui ${SOURCE.abspath} ${abaqus_options} -- ${journal_options} > ${TARGETS[-1].abspath} 2>&1
import waves
env = Environment()
env["abaqus"] = waves.scons_extensions.add_program(["abaqus"], env)
env.Append(BUILDERS={"AbaqusJournal": waves.scons_extensions.abaqus_journal()})
env.AbaqusJournal(target=["my_journal.cae"], source=["my_journal.py"], journal_options="")
- Parameters:
program (str) – An absolute path or basename string for the abaqus program.
post_action (list) – List of shell command string(s) to append to the builder's action list. Implemented to allow post target modification or introspection, e.g. inspect the Abaqus log for error keywords and throw a non-zero exit code even if Abaqus does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target's directory as cd ${TARGET.dir.abspath} && ${post_action}
- Returns:
Abaqus journal builder
- Return type:
SCons.Builder.Builder
- waves.scons_extensions.abaqus_solver(program: str = 'abaqus', post_action: list[str] = [], emitter: Literal['standard', 'explicit', 'datacheck'] | None = None, **kwargs) Builder [source]
Abaqus solver SCons builder
This builder requires that the root input file is the first source in the list. The builder returned by this function accepts all SCons Builder arguments and adds the keyword argument(s):
- job_name: The job name string. If not specified, job_name defaults to the root input file stem. The Builder emitter will append common Abaqus output files as targets automatically from the job_name, e.g. job_name.odb.
- abaqus_options: The Abaqus command line options provided as a string.
- suffixes: Override the emitter targets with a new list of extensions, e.g. AbaqusSolver(target=[], source=["input.inp"], suffixes=[".odb"]) will emit only one file named job_name.odb.
The first target determines the working directory for the builder's action, as shown in the action code snippet below. The action changes the working directory to the first target's parent directory prior to executing the solver.
This builder is unique in that no targets are required. The Builder emitter will append the builder managed targets automatically. The target list only appends those extensions which are common to Abaqus analysis operations. Some extensions may need to be added explicitly according to the Abaqus simulation solver, type, or options. If you find that SCons isn’t automatically cleaning some Abaqus output files, they are not in the automatically appended target list.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/job_name.odb. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
The -interactive option is always appended to the builder action to avoid exiting the Abaqus task before the simulation is complete. The -ask_delete no option is always appended to the builder action to overwrite existing files in programmatic execution, where it is assumed that the Abaqus solver target(s) should be re-built when their source files change.
import waves
env = Environment()
env["abaqus"] = waves.scons_extensions.add_program(["abaqus"], env)
env.Append(BUILDERS={
    "AbaqusSolver": waves.scons_extensions.abaqus_solver(),
    "AbaqusStandard": waves.scons_extensions.abaqus_solver(emitter='standard'),
    "AbaqusOld": waves.scons_extensions.abaqus_solver(program="abq2019"),
    "AbaqusPost": waves.scons_extensions.abaqus_solver(post_action="grep -E '\<SUCCESSFULLY' ${job_name}.sta")
})
env.AbaqusSolver(target=[], source=["input.inp"], job_name="my_job", abaqus_options="-cpus 4")
env.AbaqusSolver(target=[], source=["input.inp"], job_name="my_job", suffixes=[".odb"])
cd ${TARGET.dir.abspath} && ${program} -job ${job_name} -input ${SOURCE.filebase} ${abaqus_options} -interactive -ask_delete no > ${TARGETS[-1].abspath} 2>&1
- Parameters:
program – An absolute path or basename string for the abaqus program
post_action – List of shell command string(s) to append to the builder's action list. Implemented to allow post target modification or introspection, e.g. inspect the Abaqus log for error keywords and throw a non-zero exit code even if Abaqus does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target's directory as cd ${TARGET.dir.abspath} && ${post_action}.
emitter – Emit file extensions based on the value of this variable. Overridden by the suffixes keyword argument that may be provided in the Task definition.
- "standard": [".odb", ".dat", ".msg", ".com", ".prt", ".sta"]
- "explicit": [".odb", ".dat", ".msg", ".com", ".prt", ".sta"]
- "datacheck": [".odb", ".dat", ".msg", ".com", ".prt", ".023", ".mdl", ".sim", ".stt"]
- default value: [".odb", ".dat", ".msg", ".com", ".prt"]
- Returns:
Abaqus solver builder
- waves.scons_extensions.add_cubit(names: list[str], env) str [source]
Modifies environment variables with the paths required to import cubit in a Python3 environment.
Returns the absolute path of the first program name found. Appends PATH with the first program's parent directory if a program is found and the directory is not already on PATH. Prepends PYTHONPATH with parent/bin. Prepends LD_LIBRARY_PATH with parent/bin/python3.
Returns None if no program name is found.
import waves
env = Environment()
env["cubit"] = waves.scons_extensions.add_cubit(["cubit"], env)
- Parameters:
names – list of string program names. May include an absolute path.
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
- Returns:
Absolute path of the found program. None if none of the names are found.
- waves.scons_extensions.add_program(names: list[str], env) str [source]
Search for a program from a list of possible program names. Add the first found to the system PATH.
Returns the absolute path of the first program name found. Appends PATH with the first program's parent directory if a program is found and the directory is not already on PATH. Returns None if no program name is found.
import waves
env = Environment()
env["program"] = waves.scons_extensions.add_program(["program"], env)
- Parameters:
names – list of string program names. May include an absolute path.
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
- Returns:
Absolute path of the found program. None if none of the names are found.
- waves.scons_extensions.alias_list_message(env=None, append: bool = True, keep_local: bool = True) None [source]
Add the alias list to the project’s help message
See the SCons Help documentation for appending behavior. Adds text to the project help message formatted as
    Target Aliases:
        Alias_1
        Alias_2
where the aliases are recovered from SCons.Node.Alias.default_ans.
- Parameters:
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
append – Append to the env.Help message (default). When False, the env.Help message will be overwritten if env.Help has not been previously called.
keep_local – Limit help message to the project specific content when True. Only applies to SCons >=4.6.0.
- waves.scons_extensions.append_env_path(program: str, env) None [source]
Append the SCons construction environment PATH with the program's parent directory
Raises a FileNotFoundError if the program absolute path does not exist. Uses the SCons AppendENVPath method. If the program parent directory is already on PATH, the PATH directory order is preserved.
import waves
env = Environment()
env["program"] = waves.scons_extensions.find_program(["program"], env)
if env["program"]:
    waves.scons_extensions.append_env_path(env["program"], env)
- Parameters:
program – An absolute path for the program to add to SCons construction environment
PATH
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
- waves.scons_extensions.catenate_actions(**outer_kwargs)[source]
Decorator factory to apply the catenate_builder_actions to a function that returns an SCons Builder.
Accepts the same keyword arguments as waves.scons_extensions.catenate_builder_actions()
import SCons.Builder
import waves

@waves.scons_extensions.catenate_actions
def my_builder():
    return SCons.Builder.Builder(action=["echo $SOURCE > $TARGET", "echo $SOURCE >> $TARGET"])
- waves.scons_extensions.catenate_builder_actions(builder: Builder, program: str = '', options: str = '') Builder [source]
Catenate a builder’s arguments and prepend the program and options
${program} ${options} "action one && action two"
- Parameters:
builder – The SCons builder to modify
program – wrapping executable
options – options for the wrapping executable
- Returns:
modified builder
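The string manipulation follows the template above and can be sketched directly. This is an illustrative helper, not the package function, which operates on SCons Builder action objects rather than plain strings:

```python
def catenate_actions_sketch(actions, program="", options=""):
    # Join the action strings with '&&' and wrap them as a single quoted
    # argument to the wrapping program, matching the documented template
    catenated = " && ".join(actions)
    return f'{program} {options} "{catenated}"'
```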
- waves.scons_extensions.conda_environment() Builder [source]
Create a Conda environment file with conda env export
This builder is intended to help WAVES workflows document the Conda environment used in the current build. At least one target file must be specified for the conda env export --file ${TARGET} output. Additional options to the Conda env export subcommand may be passed as the builder keyword argument conda_env_export_options.
.At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to creating the Conda environment file.
cd ${TARGET.dir.abspath} && conda env export ${conda_env_export_options} --file ${TARGET.file}
The modsim owner may choose to re-use this builder throughout their project configuration to provide various levels of granularity in the recorded Conda environment state. It's recommended to include this builder at least once for any workflows that also use the waves.scons_extensions.python_script() builder. The builder may be re-used once per build sub-directory to provide more granular build environment reproducibility in the event that sub-builds are run at different times with variations in the active Conda environment. For per-Python-script task environment reproducibility, the builder source list can be linked to the output of a waves.scons_extensions.python_script() task with a target environment file name to match.
The first recommendation, always building the project wide Conda environment file, is demonstrated in the example usage below.
import waves
env = Environment()
env.Append(BUILDERS={"CondaEnvironment": waves.scons_extensions.conda_environment()})
environment_target = env.CondaEnvironment(target=["environment.yaml"])
env.AlwaysBuild(environment_target)
- Returns:
Conda environment builder
- Return type:
SCons.Builder.Builder
- waves.scons_extensions.construct_action_list(actions: list[str], prefix: str = 'cd ${TARGET.dir.abspath} &&', postfix: str = '') list[str] [source]
Return an action list with a common pre/post-fix
Returns the constructed action list with pre/post fix strings as f"{prefix} {new_action} {postfix}", where SCons action objects are converted to their string representation. If a string is passed instead of a list, it is first converted to a list. If an empty list is passed, an empty list is returned.
- Parameters:
actions – List of action strings
prefix – Common prefix to prepend to each action
postfix – Common postfix to append to each action
- Returns:
action list
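A plain-Python sketch of the documented behavior (the helper name is hypothetical; the package function additionally stringifies SCons action objects):

```python
def construct_action_list_sketch(actions, prefix="cd ${TARGET.dir.abspath} &&", postfix=""):
    # Normalize a bare string to a one-element list, then wrap each action
    # with the common prefix and postfix
    if isinstance(actions, str):
        actions = [actions]
    return [f"{prefix} {action} {postfix}".strip() for action in actions]
```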
- waves.scons_extensions.copy_substitute(source_list: list, substitution_dictionary: dict | None = None, env: ~SCons.Environment.Base = <SCons.Environment.Base object>, build_subdirectory: str = '.', symlink: bool = False) NodeList [source]
Copy source list to current variant directory and perform template substitutions on *.in filenames
Warning
This is a Python function and not an SCons builder. It cannot be added to the construction environment BUILDERS list. The function returns a list of targets instead of a Builder object.
Creates an SCons Copy task for each source file. Files are copied to the current variant directory matching the calling SConscript parent directory. Files with the name convention *.in are also given an SCons Substfile Builder, which will perform template substitution with the provided dictionary in-place in the current variant directory and remove the .in suffix.
To avoid dependency cycles, the source file(s) should be passed by absolute path.
import pathlib
import waves
env = Environment()
current_directory = pathlib.Path(Dir(".").abspath)
source_list = [
    "#/subdir3/file_three.ext",              # File found with respect to project root directory using SCons notation
    current_directory / "file_one.ext",      # File found in current SConscript directory
    current_directory / "subdir2/file_two",  # File found below current SConscript directory
    current_directory / "file_four.ext.in",  # File with substitutions matching substitution dictionary keys
]
substitution_dictionary = {"@variable_one@": "value_one"}
waves.scons_extensions.copy_substitute(source_list, substitution_dictionary, env)
- Parameters:
source_list – List of pathlike objects or strings. Will be converted to list of pathlib.Path objects.
substitution_dictionary – key: value pairs for template substitution. The keys must contain the optional template characters if present, e.g. @variable@. The template character, e.g. @, can be anything that works in the SCons Substfile builder.
env – An SCons construction environment to use when defining the targets.
build_subdirectory – build subdirectory relative path prepended to target files
symlink – Whether symbolic links are created as new symbolic links. If true, symbolic links are shallow copies as a new symbolic link. If false, symbolic links are copied as a new file (dereferenced).
- Returns:
SCons NodeList of Copy and Substfile target nodes
- waves.scons_extensions.default_targets_message(env=None, append: bool = True, keep_local: bool = True) None [source]
Add a default targets list to the project’s help message
See the SCons Help documentation for appending behavior. Adds text to the project help message formatted as
    Default Targets:
        Default_Target_1
        Default_Target_2
where the targets are recovered from SCons.Script.DEFAULT_TARGETS.
- Parameters:
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
append – Append to the env.Help message (default). When False, the env.Help message will be overwritten if env.Help has not been previously called.
keep_local – Limit help message to the project specific content when True. Only applies to SCons >=4.6.0.
- waves.scons_extensions.find_program(names: list[str], env) str [source]
Search for a program from a list of possible program names.
Returns the absolute path of the first program name found. If path parts contain spaces, the part will be wrapped in double quotes.
- Parameters:
names – list of string program names. May include an absolute path.
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
- Returns:
Absolute path of the found program. None if none of the names are found.
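The space-quoting behavior can be illustrated with a small sketch (a hypothetical helper; assumes POSIX-style '/' separators for simplicity):

```python
def quote_path_with_spaces(path):
    # Wrap any path part containing spaces in double quotes so the result
    # remains usable in shell command strings
    parts = path.split("/")
    quoted = [f'"{part}"' if " " in part else part for part in parts]
    return "/".join(quoted)
```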
- waves.scons_extensions.matlab_script(program: str = 'matlab', post_action: list[str] = [], **kwargs) Builder [source]
Matlab script SCons builder
Warning
Experimental implementation is subject to change
This builder requires that the Matlab script is the first source in the list. The builder returned by this function accepts all SCons Builder arguments and adds the keyword argument(s):
- script_options: The Matlab function interface options in Matlab syntax and provided as a string.
- matlab_options: The Matlab command line options provided as a string.
The parent directory absolute path is added to the Matlab path variable prior to execution. All required Matlab files should be co-located in the same source directory.
At least one target must be specified. The first target determines the working directory for the builder's action, as shown in the action code snippet below. The action changes the working directory to the first target's parent directory prior to executing the Matlab script.
The Builder emitter will append the builder managed targets automatically. Appends target[0].matlab.env and target[0].stdout to the target list.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/my_target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
cd ${TARGET.dir.abspath} && ${program} ${matlab_options} -batch "path(path, '${SOURCE.dir.abspath}'); ${SOURCE.filebase}(${script_options})" > ${TARGETS[-1].abspath} 2>&1
- Parameters:
program – An absolute path or basename string for the Matlab program.
post_action – List of shell command string(s) to append to the builder’s action list. Implemented to allow post target modification or introspection, e.g. inspect a log for error keywords and throw a non-zero exit code even if Matlab does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target’s directory as cd ${TARGET.dir.abspath} && ${post_action}
- Returns:
Matlab script builder
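Following the pattern of the python_script example, a hypothetical SConstruct registration might look like the sketch below; the script name my_script.m and the target name are placeholders:

```python
# SConstruct sketch, assuming the WAVES package is installed.
# Environment() is provided by SCons in an SConstruct context.
import waves

env = Environment()
env.Append(BUILDERS={"MatlabScript": waves.scons_extensions.matlab_script()})
env.MatlabScript(
    target=["my_output.stdout"],
    source=["my_script.m"],
    matlab_options="",
    script_options="",
)
```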
- waves.scons_extensions.print_build_failures(print_stdout: bool = True) None [source]
On exit, query the SCons reported build failures and print the associated node’s STDOUT file, if it exists
- Parameters:
print_stdout – Boolean to set the exit behavior. If False, don’t modify the exit behavior.
- waves.scons_extensions.project_help_message(env=None, append: bool = True, keep_local: bool = True) None [source]
Add default targets and alias lists to project help message
See the SCons Help documentation for appending behavior. Thin wrapper around
- Parameters:
env (SCons.Script.SConscript.SConsEnvironment) – The SCons construction environment object to modify
append – Append to the env.Help message (default). When False, the env.Help message will be overwritten if env.Help has not been previously called.
keep_local – Limit help message to the project specific content when True. Only applies to SCons >=4.6.0
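A typical SConstruct might call both of these helpers near the end of the file; this is a hedged sketch of that usage, not a required pattern:

```python
# SConstruct sketch, assuming the WAVES package is installed.
# Environment() is provided by SCons in an SConstruct context.
import waves

env = Environment()
# ... builder registrations and task definitions ...

# On exit, print the STDOUT files associated with failed build nodes
waves.scons_extensions.print_build_failures()
# Append the default target and alias lists to the project help message
waves.scons_extensions.project_help_message(env=env)
```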
- waves.scons_extensions.python_script(post_action: list[str] = []) Builder [source]
Python script SCons builder
This builder requires that the python script to execute is the first source in the list. The builder returned by this function accepts all SCons Builder arguments and adds the keyword argument(s):
- script_options: The Python script command line arguments provided as a string.
- python_options: The Python command line arguments provided as a string.
At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing the python script.
The Builder emitter will append the builder managed targets automatically. Appends target[0].stdout to the target list.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/my_target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
cd ${TARGET.dir.abspath} && python ${python_options} ${SOURCE.abspath} ${script_options} > ${TARGETS[-1].abspath} 2>&1
import waves
env = Environment()
env.Append(BUILDERS={"PythonScript": waves.scons_extensions.python_script()})
env.PythonScript(target=["my_output.stdout"], source=["my_script.py"], python_options="", script_options="")
- Parameters:
post_action (list) – List of shell command string(s) to append to the builder’s action list. Implemented to allow post target modification or introspection, e.g. inspect a log for error keywords and throw a non-zero exit code even if Python does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target’s directory as cd ${TARGET.dir.abspath} && ${post_action}
- Returns:
Python script builder
- Return type:
SCons.Builder.Builder
- waves.scons_extensions.quinoa_solver(charmrun: str = 'charmrun', inciter: str = 'inciter', charmrun_options: str = '+p1', inciter_options: str = '', prefix_command: str = '', post_action: list[str] = []) Builder [source]
Quinoa solver SCons builder
This builder requires at least two source files provided in the order:
1. Quinoa control file: *.q
2. Exodus mesh file: *.exo
The builder returned by this function accepts all SCons Builder arguments. Except for post_action, the arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override those provided to this function.
The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing quinoa.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
Warning
This is an experimental builder for Quinoa support. The only emitted file is the target[0].stdout redirected STDOUT and STDERR file. All relevant application output files, e.g. out.*, must be specified in the target list.
import waves
env = waves.scons_extensions.shell_environment("module load quinoa")
env.Append(BUILDERS={
    "QuinoaSolver": waves.scons_extensions.quinoa_solver(charmrun_options="+p1"),
})
# Serial execution with "+p1"
env.QuinoaSolver(target=["flow.stdout"], source=["flow.q", "box.exo"])
# Parallel execution with "+p4"
env.QuinoaSolver(target=["flow.stdout"], source=["flow.q", "box.exo"], charmrun_options="+p4")
${prefix_command} cd ${TARGET.dir.abspath} && ${charmrun} ${charmrun_options} ${inciter} ${inciter_options} --control ${SOURCES[0].abspath} --input ${SOURCES[1].abspath} > ${TARGETS[-1].abspath} 2>&1
- Parameters:
charmrun – The relative or absolute path to the charmrun executable
charmrun_options – The charmrun command line interface options
inciter – The relative or absolute path to the inciter (quinoa) executable
inciter_options – The inciter (quinoa executable) command line interface options
prefix_command – Optional prefix command intended for environment preparation. Primarily intended for use with waves.scons_extensions.sbatch_quinoa_solver() or when wrapping the builder with waves.scons_extensions.ssh_builder_actions(). For local, direct execution, users should prefer to create an SCons construction environment with waves.scons_extensions.shell_environment(). When overriding in a task definition, the prefix command must end with ' &&'.
post_action – List of shell command string(s) to append to the builder’s action list. Implemented to allow post target modification or introspection, e.g. inspect the Quinoa log for error keywords and throw a non-zero exit code even if Quinoa does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target’s directory as cd ${TARGET.dir.abspath} && ${post_action}
- Returns:
Quinoa builder
- waves.scons_extensions.sbatch(program: str = 'sbatch', post_action: list[str] = [], **kwargs) Builder [source]
SLURM sbatch SCons builder
The builder does not use a SLURM batch script. Instead, it requires the slurm_job variable to be defined with the command string to execute.
At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing the slurm_job command.
The Builder emitter will append the builder managed targets automatically. Appends target[0].stdout to the target list.
cd ${TARGET.dir.abspath} && sbatch --wait --output=${TARGETS[-1].abspath} ${sbatch_options} --wrap ${slurm_job}
import waves
env = Environment()
env.Append(BUILDERS={"SlurmSbatch": waves.scons_extensions.sbatch()})
env.SlurmSbatch(target=["my_output.stdout"], source=["my_source.input"], slurm_job="cat $SOURCE > $TARGET")
- Parameters:
program – An absolute path or basename string for the sbatch program.
post_action – List of shell command string(s) to append to the builder’s action list. Implemented to allow post target modification or introspection, e.g. inspect a log for error keywords and throw a non-zero exit code even if the submitted job does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target’s directory as cd ${TARGET.dir.abspath} && ${post_action}
- Returns:
SLURM sbatch builder
- waves.scons_extensions.sbatch_abaqus_journal(*args, **kwargs)[source]
Thin pass through wrapper of waves.scons_extensions.abaqus_journal()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "cd ${TARGET.dir.abspath} && abaqus cae -noGui ${SOURCE.abspath} ${abaqus_options} -- ${journal_options} > ${TARGETS[-1].abspath} 2>&1"
- waves.scons_extensions.sbatch_abaqus_solver(*args, **kwargs)[source]
Thin pass through wrapper of waves.scons_extensions.abaqus_solver()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "cd ${TARGET.dir.abspath} && ${program} -job ${job_name} -input ${SOURCE.filebase} ${abaqus_options} -interactive -ask_delete no > ${TARGETS[-1].abspath} 2>&1"
- waves.scons_extensions.sbatch_python_script(*args, **kwargs)[source]
Thin pass through wrapper of waves.scons_extensions.python_script()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "cd ${TARGET.dir.abspath} && python ${python_options} ${SOURCE.abspath} ${script_options} > ${TARGETS[-1].abspath} 2>&1"
- waves.scons_extensions.sbatch_quinoa_solver(*args, **kwargs)[source]
Thin pass through wrapper of waves.scons_extensions.quinoa_solver()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap ""
- waves.scons_extensions.sbatch_sierra(*args, **kwargs)[source]
Thin pass through wrapper of waves.scons_extensions.sierra()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "cd ${TARGET.dir.abspath} && ${program} ${sierra_options} ${application} ${application_options} -i ${SOURCE.file} > ${TARGETS[-1].abspath} 2>&1"
- waves.scons_extensions.shell_environment(command: str, cache: str | None = None, overwrite_cache: bool = False) Base [source]
Return an SCons shell environment from a cached file or by running a shell command
If the environment is created successfully and a cache file is requested, the cache file is always written. The overwrite_cache behavior forces the shell command execution, even when the cache file is present.
Warning
Currently only supports bash shells
import waves
env = waves.scons_extensions.shell_environment("source my_script.sh")
- Parameters:
command – the shell command to execute
cache – absolute or relative path to read/write a shell environment dictionary. Will be written as YAML formatted file regardless of extension.
overwrite_cache – Ignore previously cached files if they exist.
- Returns:
SCons shell environment
- waves.scons_extensions.sierra(program: str = 'sierra', application: str = 'adagio', post_action: list[str] = []) Builder [source]
Sierra SCons builder
This builder requires that the root input file is the first source in the list. The builder returned by this function accepts all SCons Builder arguments and adds the keyword argument(s):
- sierra_options: The Sierra command line options provided as a string.
- application_options: The application (e.g. adagio) command line options provided as a string.
The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing sierra.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.
Warning
This is an experimental builder for Sierra support. The only emitted files are the application’s version report in target[0].env and the target[0].stdout redirected STDOUT and STDERR file. All relevant application output files, e.g. genesis_output.e, must be specified in the target list.
import waves
env = waves.scons_extensions.shell_environment("module load sierra")
env.Append(BUILDERS={
    "Sierra": waves.scons_extensions.sierra(),
})
env.Sierra(target=["output.e"], source=["input.i"])
cd ${TARGET.dir.abspath} && ${program} ${sierra_options} ${application} ${application_options} -i ${SOURCE.file} > ${TARGETS[-1].abspath} 2>&1
- Parameters:
program – An absolute path or basename string for the Sierra program
application – The string name for the Sierra application
post_action – List of shell command string(s) to append to the builder’s action list. Implemented to allow post target modification or introspection, e.g. inspect the Sierra log for error keywords and throw a non-zero exit code even if Sierra does not. Builder keyword variables are available for substitution in the post_action action using the ${} syntax. Actions are executed in the first target’s directory as cd ${TARGET.dir.abspath} && ${post_action}
- Returns:
Sierra builder
- waves.scons_extensions.sphinx_build(program: str = 'sphinx-build', options: str = '', builder: str = 'html', tags: str = '') Builder [source]
Sphinx builder using the -b specifier
This builder does not have an emitter. It requires at least one target.
${program} ${options} -b ${builder} ${TARGET.dir.dir.abspath} ${TARGET.dir.abspath} ${tags}
import waves
env = Environment()
env.Append(BUILDERS={
    "SphinxBuild": waves.scons_extensions.sphinx_build(options="-W"),
})
sources = ["conf.py", "index.rst"]
targets = ["html/index.html"]
html = env.SphinxBuild(
    target=targets,
    source=sources,
)
env.Clean(html, [Dir("html")] + sources)
env.Alias("html", html)
- Parameters:
program – sphinx-build executable
options – sphinx-build options
builder – builder name. See the Sphinx documentation for options
tags – sphinx-build tags
- Returns:
Sphinx builder
- waves.scons_extensions.sphinx_latexpdf(program: str = 'sphinx-build', options: str = '', builder: str = 'latexpdf', tags: str = '') Builder [source]
Sphinx builder using the -M specifier. Intended for latexpdf builds.
This builder does not have an emitter. It requires at least one target.
${program} -M ${builder} ${TARGET.dir.dir.abspath} ${TARGET.dir.dir.abspath} ${tags} ${options}
import waves
env = Environment()
env.Append(BUILDERS={
    "SphinxPDF": waves.scons_extensions.sphinx_latexpdf(options="-W"),
})
sources = ["conf.py", "index.rst"]
targets = ["latex/project.pdf"]
latexpdf = env.SphinxPDF(
    target=targets,
    source=sources,
)
env.Clean(latexpdf, [Dir("latex")] + sources)
env.Alias("latexpdf", latexpdf)
- Parameters:
program (str) – sphinx-build executable
options (str) – sphinx-build options
builder (str) – builder name. See the Sphinx documentation for options
tags (str) – sphinx-build tags
- Returns:
Sphinx latexpdf builder
- waves.scons_extensions.sphinx_scanner() Scanner [source]
SCons scanner that searches for the following directives inside .rst and .txt files:
- .. include::
- .. literalinclude::
- .. image::
- .. figure::
- .. bibliography::
- Returns:
Sphinx source file dependency Scanner
- Return type:
SCons.Scanner.Scanner
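A hypothetical SConstruct wiring for the scanner is sketched below; attaching the scanner through env.Append(SCANNERS=...) is an assumption about the intended usage, not a documented requirement:

```python
# SConstruct sketch, assuming the WAVES package is installed.
# Environment() is provided by SCons in an SConstruct context.
import waves

env = Environment()
# Assumption: register the Sphinx directive scanner globally so .rst/.txt
# sources are scanned for include/image/figure/bibliography dependencies
env.Append(SCANNERS=waves.scons_extensions.sphinx_scanner())
env.Append(BUILDERS={"SphinxBuild": waves.scons_extensions.sphinx_build()})
```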
- waves.scons_extensions.ssh_builder_actions(builder: Builder, remote_server: str = '${remote_server}', remote_directory: str = '${remote_directory}') Builder [source]
Wrap a builder’s action list with remote copy operations and ssh commands
By default, the remote server and remote directory strings are written to accept (and require) task-by-task overrides via task keyword arguments. Any SCons replacement string pattern, ${variable}, will make that variable a required task keyword argument. Only if the remote server and/or remote directory are known to be constant across all possible tasks should those variables be overridden with a string literal containing no ${variable} SCons keyword replacement patterns.
Warning
The waves.scons_extensions.ssh_builder_actions() function is a work-in-progress solution with some assumptions specific to the action construction used by WAVES. It should work for most basic builders, but adapting this function to users’ custom builders will probably require some advanced SCons knowledge and inspection of the waves.scons_extensions.ssh_builder_actions() implementation.
Design assumptions
- Creates the remote_directory with mkdir -p. mkdir must exist on the remote_server.
- Copies all source files to a flat remote_directory with rsync -rlptv. rsync must exist on the local system.
- Replaces instances of cd ${TARGET.dir.abspath} && with cd ${remote_directory} && in the original builder actions.
- Replaces instances of SOURCE.abspath or SOURCES.abspath with SOURCE[S].file in the original builder actions.
- Prefixes all original builder actions with cd ${remote_directory} &&.
- All original builder actions are wrapped in single quotes as '{original action}' to preserve the && as part of the remote_server command. Shell variables, e.g. $USER, will not be expanded on the remote_server. If quotes are included in the original builder actions, they should be double quotes.
- Returns the entire remote_directory to the original builder ${TARGET.dir.abspath} with rsync. rsync must exist on the local system.
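The string rewrites listed above can be sketched as a plain-string model; this is a simplified, hypothetical helper for illustration — the WAVES implementation operates on SCons action objects, not strings:

```python
def ssh_wrap_action(action, remote_server, remote_directory):
    """Simplified model of the documented action rewrites (not the WAVES source)."""
    # Replace the local working directory change with the remote directory
    action = action.replace("cd ${TARGET.dir.abspath} &&", f"cd {remote_directory} &&")
    # Sources are copied to a flat remote directory, so absolute paths become file names
    action = action.replace("${SOURCES.abspath}", "${SOURCES.file}")
    action = action.replace("${SOURCE.abspath}", "${SOURCE.file}")
    # Prefix the remote directory change when the action did not already contain one
    if not action.startswith(f"cd {remote_directory} &&"):
        action = f"cd {remote_directory} && {action}"
    # Single quotes preserve the '&&' as part of the remote command
    return f"ssh {remote_server} '{action}'"
```

Applied to the cat builder action in the examples below, this model reproduces the wrapped ssh command shown in the print_builder_actions output.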
import getpass
import waves
user = getpass.getuser()
env = Environment()
env.Append(BUILDERS={
    "SSHAbaqusSolver": waves.scons_extensions.ssh_builder_actions(
        waves.scons_extensions.abaqus_solver(program="/remote/server/installation/path/of/abaqus"),
        remote_server="myserver.mydomain.com"
    )
})
env.SSHAbaqusSolver(target=["myjob.sta"], source=["input.inp"], job_name="myjob", abaqus_options="-cpus 4", remote_directory="/scratch/${user}/myproject/myworkflow", user=user)
import SCons.Builder
import waves

def print_builder_actions(builder):
    for action in builder.action.list:
        print(action.cmd_list)

def cat(program="cat"):
    return SCons.Builder.Builder(action=[
        f"{program} ${{SOURCES.abspath}} | tee ${{TARGETS.file}}",
        "echo \"Hello World!\""
    ])

build_cat = cat()
ssh_build_cat = waves.scons_extensions.ssh_builder_actions(
    cat(),
    remote_server="myserver.mydomain.com",
    remote_directory="/scratch/roppenheimer/ssh_wrapper"
)
>>> import my_package
>>> my_package.print_builder_actions(my_package.build_cat)
cat ${SOURCES.abspath} | tee ${TARGETS.file}
echo "Hello World!"
>>> my_package.print_builder_actions(my_package.ssh_build_cat)
ssh myserver.mydomain.com "mkdir -p /scratch/roppenheimer/ssh_wrapper"
rsync -rlptv ${SOURCES.abspath} myserver.mydomain.com:/scratch/roppenheimer/ssh_wrapper
ssh myserver.mydomain.com 'cd /scratch/roppenheimer/ssh_wrapper && cat ${SOURCES.file} | tee ${TARGETS.file}'
ssh myserver.mydomain.com 'cd /scratch/roppenheimer/ssh_wrapper && echo "Hello World!"'
rsync -rltpv myserver.mydomain.com:/scratch/roppenheimer/ssh_wrapper/ ${TARGET.dir.abspath}
- Parameters:
builder – The SCons builder to modify
remote_server – remote server where the original builder’s actions should be executed. The default string requires every task to specify a matching keyword argument string.
remote_directory – absolute or relative path where the original builder’s actions should be executed. The default string requires every task to specify a matching keyword argument string.
- Returns:
modified builder
- waves.scons_extensions.substitution_syntax(substitution_dictionary: dict, prefix: str = '@', postfix: str = '@') dict [source]
Return a dictionary copy with the pre/postfix added to the key strings
Assumes a flat dictionary with keys of type str. Keys that aren’t strings will be converted to their string representation. Nested dictionaries can be supplied, but only the first layer keys will be modified. Dictionary values are unchanged.
- Parameters:
substitution_dictionary (dict) – Original dictionary to copy
prefix (string) – String to prepend to all dictionary keys
postfix (string) – String to append to all dictionary keys
- Returns:
Copy of the dictionary with key strings modified by the pre/postfix
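The documented behavior amounts to a one-line dictionary comprehension; a minimal sketch for illustration, not the WAVES source:

```python
def substitution_syntax_sketch(substitution_dictionary, prefix="@", postfix="@"):
    """Return a copy with the prefix/postfix added to each key, converting
    non-string keys to their string representation. Values are unchanged."""
    return {f"{prefix}{key}{postfix}": value for key, value in substitution_dictionary.items()}
```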
Parameter Generators
- class waves.parameter_generators.CartesianProduct(parameter_schema: dict, output_file_template: str = None, output_file: str = None, output_file_type: str = 'yaml', set_name_template: str = 'parameter_set@number', previous_parameter_study: str = None, overwrite: bool = False, dryrun: bool = False, write_meta: bool = False, **kwargs)[source]
Bases:
_ParameterGenerator
Builds a cartesian product parameter study
- Parameters:
parameter_schema (dict) – The YAML loaded parameter study schema dictionary - {parameter_name: schema value}. CartesianProduct expects “schema value” to be an iterable. For example, when read from a YAML file “schema value” will be a Python list.
output_file_template (str) – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string.
output_file (str) – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. Output file is always overwritten.
output_file_type (str) – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template (str) – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite (bool) – Overwrite existing output files
dryrun (bool) – Print contents of new parameter study output files to STDOUT and exit
write_meta (bool) – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
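The @number placeholder handling described for output_file_template can be sketched as follows; this is a hypothetical helper for illustration, not the WAVES source:

```python
def expand_output_file_template(output_file_template, set_number):
    """If the "@number" placeholder is missing, append it to the template
    string, then substitute the parameter set number."""
    if "@number" not in output_file_template:
        output_file_template += "@number"
    return output_file_template.replace("@number", str(set_number))
```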
Example
>>> import waves
>>> parameter_schema = {
...     'parameter_1': [1, 2],
...     'parameter_2': ['a', 'b']
... }
>>> parameter_generator = waves.parameter_generators.CartesianProduct(parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:             (data_type: 1, parameter_set_hash: 4)
Coordinates:
  * data_type           (data_type) object 'samples'
    parameter_set_hash  (parameter_set_hash) <U32 'de3cb3eaecb767ff63973820b2...
  * parameter_sets      (parameter_set_hash) <U14 'parameter_set0' ... 'param...
Data variables:
    parameter_1         (data_type, parameter_set_hash) object 1 1 2 2
    parameter_2         (data_type, parameter_set_hash) object 'a' 'b' 'a' 'b'
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- _generate(**kwargs) None [source]
Generate the Cartesian Product parameter sets.
- _validate() None [source]
Validate the Cartesian Product parameter schema. Executed by class instantiation.
- parameter_study_to_dict(*args, **kwargs) dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators.CustomStudy(parameter_schema: dict, output_file_template: str = None, output_file: str = None, output_file_type: str = 'yaml', set_name_template: str = 'parameter_set@number', previous_parameter_study: str = None, overwrite: bool = False, dryrun: bool = False, write_meta: bool = False, **kwargs)[source]
Bases:
_ParameterGenerator
Builds a custom parameter study from user-specified values
- Parameters:
parameter_schema (dict) – Dictionary with two keys: parameter_samples and parameter_names. Parameter samples in the form of a 2D array with shape M x N, where M is the number of parameter sets and N is the number of parameters. Parameter names in the form of a 1D array with length N. When creating a parameter_samples array with mixed types (e.g. strings and floats), use dtype=object to preserve the mixed types and avoid casting all values to a common type (e.g. all your floats will become strings).
output_file_template (str) – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string.
output_file (str) – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. Output file is always overwritten.
output_file_type (str) – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template (str) – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite (bool) – Overwrite existing output files
dryrun (bool) – Print contents of new parameter study output files to STDOUT and exit
write_meta (bool) – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
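The dtype=object caveat can be seen directly in numpy (assumes numpy is available); without it, mixed samples are cast to a common string dtype:

```python
import numpy

# Without dtype=object, numpy casts the mixed row to a common string dtype,
# so the float 1.0 becomes the string '1.0'
cast = numpy.array([[1.0, "a", 5]])

# With dtype=object, the original Python types are preserved
preserved = numpy.array([[1.0, "a", 5]], dtype=object)
```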
Example
>>> import waves
>>> import numpy
>>> parameter_schema = dict(
...     parameter_samples = numpy.array([[1.0, 'a', 5], [2.0, 'b', 6]], dtype=object),
...     parameter_names = numpy.array(['height', 'prefix', 'index']))
>>> parameter_generator = waves.parameter_generators.CustomStudy(parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:             (data_type: 1, parameter_set_hash: 2)
Coordinates:
  * data_type           (data_type) object 'samples'
    parameter_set_hash  (parameter_set_hash) <U32 '50ba1a2716e42f8c4fcc34a90a...
  * parameter_sets      (parameter_set_hash) <U14 'parameter_set0' 'parameter...
Data variables:
    height              (data_type, parameter_set_hash) object 1.0 2.0
    prefix              (data_type, parameter_set_hash) object 'a' 'b'
    index               (data_type, parameter_set_hash) object 5 6
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- _generate(**kwargs) None [source]
Generate the parameter study dataset from the user provided parameter array.
- _validate() None [source]
Validate the Custom Study parameter samples and names. Executed by class instantiation.
- parameter_study_to_dict(*args, **kwargs) dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators.LatinHypercube(*args, **kwargs)[source]
Bases:
_ScipyGenerator
Builds a Latin-Hypercube parameter study from the scipy Latin Hypercube class
The h5 output_file_type is the only output type that contains both the parameter samples and quantiles.
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameters and the parameter study samples and quantiles.
- Parameters:
parameter_schema (dict) – The YAML loaded parameter study schema dictionary - {parameter_name: schema value}. LatinHypercube expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template (str) – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string.
output_file (str) – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. Output file is always overwritten.
output_file_type (str) – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template (str) – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite (bool) – Overwrite existing output files
dryrun (bool) – Print contents of new parameter study output files to STDOUT and exit
write_meta (bool) – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
To produce consistent Latin Hypercubes on repeat instantiations, the **kwargs must include {'seed': <int>}. See the scipy.stats.qmc.LatinHypercube class documentation for details. The d keyword argument is internally managed and will be overwritten to match the number of parameters defined in the parameter schema.
Example
>>> import waves
>>> parameter_schema = {
...     'num_simulations': 4,  # Required key. Value must be an integer.
...     'parameter_1': {
...         'distribution': 'norm',  # Required key. Value must be a valid scipy.stats
...         'loc': 50,               # distribution name.
...         'scale': 1
...     },
...     'parameter_2': {
...         'distribution': 'skewnorm',
...         'a': 4,
...         'loc': 30,
...         'scale': 2
...     }
... }
>>> parameter_generator = waves.parameter_generators.LatinHypercube(parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:             (data_type: 2, parameter_set_hash: 4)
Coordinates:
    parameter_set_hash  (parameter_set_hash) <U32 '1e8219dae27faa5388328e225a...
  * data_type           (data_type) <U9 'quantiles' 'samples'
  * parameter_sets      (parameter_set_hash) <U14 'parameter_set0' ... 'param...
Data variables:
    parameter_1         (data_type, parameter_set_hash) float64 0.125 ... 51.15
    parameter_2         (data_type, parameter_set_hash) float64 0.625 ... 30.97
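The seed requirement above can be checked directly against the underlying scipy sampler. A minimal sketch using scipy.stats.qmc rather than the waves wrapper, so the sample values are illustrative only:

```python
from scipy.stats import qmc

# Two independently instantiated samplers with the same seed draw identical samples
sampler_a = qmc.LatinHypercube(d=2, seed=42)
sampler_b = qmc.LatinHypercube(d=2, seed=42)
samples_a = sampler_a.random(n=4)
samples_b = sampler_b.random(n=4)
# samples_a equals samples_b element-wise; omitting the seed yields a fresh,
# non-repeatable study on every instantiation
```

Omitting `seed` is the common cause of a merged study failing to reproduce earlier parameter sets.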
- Variables:
self.parameter_distributions – A dictionary mapping parameter names to the scipy.stats distribution
self.parameter_study – The final parameter study XArray Dataset object
- _generate(**kwargs) None [source]
Generate the Latin Hypercube parameter sets
- parameter_study_to_dict(*args, **kwargs) dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires a non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite only when the contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print the file-content associations for the files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators.SALibSampler(sampler_class, *args, **kwargs)[source]
Bases: _ParameterGenerator, ABC
Builds a SALib sampler parameter study from a SALib.sample
sampler_class
Samplers must use the N sample count argument. Note that in SALib.sample, N is not always equivalent to the number of simulations. The following samplers are tested for parameter study shape and merge behavior:
fast_sampler
finite_diff
latin
sobol
morris
Warning
For small numbers of parameters, some SALib generators produce duplicate parameter sets. These duplicate sets are removed during parameter study generation. This may cause the SALib analyze method(s) to raise errors related to the expected parameter set count.
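The duplicate removal described in this warning can be illustrated with a generic numpy sketch. The data and the exact removal mechanism are hypothetical; WAVES's internal implementation may differ:

```python
import numpy

# Hypothetical sample array with a repeated parameter set (rows are sets)
samples = numpy.array([
    [0.0, 1.0],
    [0.5, 0.5],
    [0.0, 1.0],  # duplicate of the first row
])
# Deduplicate by row; the study retains 2 unique sets, which may be fewer
# than the SALib analyze method expects for the requested N
unique_samples = numpy.unique(samples, axis=0)
```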
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameters and the parameter study samples.
- Parameters:
sampler_class (str) – The SALib.sample sampler class name. Case sensitive.
parameter_schema (dict) – The YAML loaded parameter study schema dictionary - {parameter_name: schema value} SALibSampler expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template (str) – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found, it will be appended to the template string.
output_file (str) – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. The output file is always overwritten.
output_file_type (str) – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template (str) – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite (bool) – Overwrite existing output files
dryrun (bool) – Print contents of new parameter study output files to STDOUT and exit
write_meta (bool) – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
Keyword arguments for the SALib.sample sampler_class sample method.
Example
>>> import waves
>>> parameter_schema = {
...     "N": 4,  # Required key. Value must be an integer.
...     "problem": {  # Required key. See the SALib sampler interface documentation
...         "num_vars": 3,
...         "names": ["parameter_1", "parameter_2", "parameter_3"],
...         "bounds": [[-1, 1], [-2, 2], [-3, 3]]
...     }
... }
>>> parameter_generator = waves.parameter_generators.SALibSampler("sobol", parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:             (data_type: 1, parameter_sets: 32)
Coordinates:
  * data_type           (data_type) object 'samples'
    parameter_set_hash  (parameter_sets) <U32 'e0cb1990f9d70070eaf5638101dcaf...
  * parameter_sets      (parameter_sets) <U15 'parameter_set0' ... 'parameter...
Data variables:
    parameter_1         (data_type, parameter_sets) float64 -0.2029 ... 0.187
    parameter_2         (data_type, parameter_sets) float64 -0.801 ... 0.6682
    parameter_3         (data_type, parameter_sets) float64 0.4287 ... -2.871
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
ValueError – If the SALib sobol or SALib morris sampler is specified and there are fewer than 2 parameters.
AttributeError –
N is not a key of parameter_schema
problem is not a key of parameter_schema
names is not a key of parameter_schema['problem']
TypeError –
parameter_schema is not a dictionary
parameter_schema['N'] is not an integer
parameter_schema['problem'] is not a dictionary
parameter_schema['problem']['names'] is not a YAML compliant iterable (list, set, tuple)
- _create_parameter_names() None [source]
Construct the parameter names from a distribution parameter schema
- _generate(**kwargs) None [source]
Generate the SALib.sample
sampler_class
parameter sets
- _sampler_overrides(override_kwargs=None) dict [source]
Provide sampler specific kwarg override dictionaries
sobol produces duplicate parameter sets for two parameters when calc_second_order is True. Override this kwarg to be False if there are only two parameters.
- Parameters:
override_kwargs (dict) – any common kwargs to include in the override dictionary
- Returns:
override kwarg dictionary
- _sampler_validation() None [source]
Call sampler specific schema validation check methods
sobol requires at least two parameters
Requires attributes:
self._sampler_class: set by class instantiation
self._parameter_names: set by self._create_parameter_names()
- _validate() None [source]
Process parameter study input to verify schema
Must set the class attributes:
self._parameter_names
: list of strings containing the parameter study’s parameter names
Minimum necessary work example:
# Work unique to the parameter generator schema. Example matches CartesianProduct schema.
self._parameter_names = list(self.parameter_schema.keys())
- parameter_study_to_dict(*args, **kwargs) dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires a non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite only when the contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print the file-content associations for the files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators.ScipySampler(sampler_class, *args, **kwargs)[source]
Bases:
_ScipyGenerator
Builds a scipy sampler parameter study from a scipy.stats.qmc
sampler_class
Samplers must use the d parameter space dimension keyword argument. The following samplers are tested for parameter study shape and merge behavior:
Sobol
Halton
LatinHypercube
PoissonDisk
The h5 output_file_type is the only output type that contains both the parameter samples and quantiles.
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameters and the parameter study samples and quantiles.
- Parameters:
sampler_class (str) – The scipy.stats.qmc sampler class name. Case sensitive.
parameter_schema (dict) – The YAML loaded parameter study schema dictionary - {parameter_name: schema value} ScipySampler expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template (str) – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found, it will be appended to the template string.
output_file (str) – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. The output file is always overwritten.
output_file_type (str) – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template (str) – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite (bool) – Overwrite existing output files
dryrun (bool) – Print contents of new parameter study output files to STDOUT and exit
write_meta (bool) – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
Keyword arguments for the scipy.stats.qmc sampler_class. The d keyword argument is internally managed and will be overwritten to match the number of parameters defined in the parameter schema.
Example
>>> import waves
>>> parameter_schema = {
...     'num_simulations': 4,  # Required key. Value must be an integer.
...     'parameter_1': {
...         'distribution': 'norm',  # Required key. Value must be a valid scipy.stats
...         'loc': 50,               # distribution name.
...         'scale': 1
...     },
...     'parameter_2': {
...         'distribution': 'skewnorm',
...         'a': 4,
...         'loc': 30,
...         'scale': 2
...     }
... }
>>> parameter_generator = waves.parameter_generators.ScipySampler("LatinHypercube", parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:             (data_type: 2, parameter_set_hash: 4)
Coordinates:
    parameter_set_hash  (parameter_set_hash) <U32 '1e8219dae27faa5388328e225a...
  * data_type           (data_type) <U9 'quantiles' 'samples'
  * parameter_sets      (parameter_set_hash) <U14 'parameter_set0' ... 'param...
Data variables:
    parameter_1         (data_type, parameter_set_hash) float64 0.125 ... 51.15
    parameter_2         (data_type, parameter_set_hash) float64 0.625 ... 30.97
- Variables:
parameter_distributions – A dictionary mapping parameter names to the scipy.stats distribution
self.parameter_study – The final parameter study XArray Dataset object
- _generate(**kwargs) None [source]
Generate the scipy.stats.qmc
sampler_class
parameter sets
- parameter_study_to_dict(*args, **kwargs) dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires a non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite only when the contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print the file-content associations for the files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators.SobolSequence(*args, **kwargs)[source]
Bases:
_ScipyGenerator
Builds a Sobol sequence parameter study from the scipy Sobol class random method.
The h5 output_file_type is the only output type that contains both the parameter samples and quantiles.
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameters and the parameter study samples and quantiles.
- Parameters:
parameter_schema (dict) – The YAML loaded parameter study schema dictionary - {parameter_name: schema value} SobolSequence expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template (str) – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found, it will be appended to the template string.
output_file (str) – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. The output file is always overwritten.
output_file_type (str) – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template (str) – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite (bool) – Overwrite existing output files
dryrun (bool) – Print contents of new parameter study output files to STDOUT and exit
write_meta (bool) – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
To produce consistent Sobol sequences on repeat instantiations, the **kwargs must include either scramble=False or seed=<int>. See the scipy.stats.qmc.Sobol class documentation for details. The d keyword argument is internally managed and will be overwritten to match the number of parameters defined in the parameter schema.
Example
>>> import waves
>>> parameter_schema = {
...     'num_simulations': 4,  # Required key. Value must be an integer.
...     'parameter_1': {
...         'distribution': 'uniform',  # Required key. Value must be a valid scipy.stats
...         'loc': 0,                   # distribution name.
...         'scale': 10
...     },
...     'parameter_2': {
...         'distribution': 'uniform',
...         'loc': 2,
...         'scale': 3
...     }
... }
>>> parameter_generator = waves.parameter_generators.SobolSequence(parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:             (data_type: 2, parameter_sets: 4)
Coordinates:
    parameter_set_hash  (parameter_sets) <U32 'c1fa74da12c0991379d1df6541c421...
  * data_type           (data_type) <U9 'quantiles' 'samples'
  * parameter_sets      (parameter_sets) <U14 'parameter_set0' ... 'parameter...
Data variables:
    parameter_1         (data_type, parameter_sets) float64 0.0 0.5 ... 7.5 2.5
    parameter_2         (data_type, parameter_sets) float64 0.0 0.5 ... 4.25
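The scramble/seed requirement can be verified against the underlying scipy sampler directly. A sketch with scipy.stats.qmc rather than the waves wrapper:

```python
from scipy.stats import qmc

# scramble=False yields the deterministic, unscrambled Sobol sequence, so two
# independently constructed samplers draw identical points
sequence_a = qmc.Sobol(d=2, scramble=False).random(n=4)
sequence_b = qmc.Sobol(d=2, scramble=False).random(n=4)
# Alternatively, scramble=True combined with a fixed seed is also repeatable:
# qmc.Sobol(d=2, scramble=True, seed=42)
```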
- _generate(**kwargs) None [source]
Generate the parameter study dataset from the user provided parameter array
- parameter_study_to_dict(*args, **kwargs) dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires a non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite only when the contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print the file-content associations for the files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators._AtSignTemplate(template)[source]
Bases:
Template
Use the CMake ‘@’ delimiter in a Python ‘string.Template’ to avoid clashing with bash variable syntax
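The pattern is a one-line subclass of the standard library string.Template. A minimal sketch (the class name here is illustrative; only the delimiter override matters):

```python
from string import Template

class AtSignTemplate(Template):
    # Replace the default '$' delimiter with the CMake-style '@' so bash-style
    # $variables pass through the template untouched
    delimiter = "@"

template = AtSignTemplate("parameter_set@number")
print(template.substitute(number=0))  # parameter_set0
```

Because the delimiter is `@`, a string like `echo $HOME` survives substitution verbatim, which is the motivation stated above.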
- class waves.parameter_generators._ParameterGenerator(parameter_schema: dict, output_file_template: str = None, output_file: str = None, output_file_type: str = 'yaml', set_name_template: str = 'parameter_set@number', previous_parameter_study: str = None, overwrite: bool = False, dryrun: bool = False, write_meta: bool = False, **kwargs)[source]
Bases:
ABC
Abstract base class for internal parameter study generators
- Parameters:
parameter_schema – The YAML loaded parameter study schema dictionary, e.g. {parameter_name: schema_value}. Validated on class instantiation.
output_file_template – Output file name template. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found, it will be appended to the template string.
output_file – Output file name for a single file output of the parameter study. May contain pathseps for an absolute or relative path. output_file and output_file_template are mutually exclusive. The output file is overwritten if the content of the file has changed or if overwrite is True.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study (str) – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite – Overwrite existing output files
dryrun – Print contents of new parameter study output files to STDOUT and exit
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
- _conditionally_write_dataset(existing_parameter_study: str, parameter_study) None [source]
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study (xarray.Dataset) – Parameter study xarray data
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None [source]
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
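The conditional-write behavior shared by these methods can be sketched with stdlib file comparison. The helper name is hypothetical, and the real method additionally serializes the dictionary to YAML before comparing:

```python
import pathlib

def conditionally_write(output_file, content: str, overwrite: bool = False) -> bool:
    """Write content only when the file is missing, changed, or overwrite is True."""
    path = pathlib.Path(output_file)
    unchanged = path.exists() and path.read_text() == content
    if overwrite or not unchanged:
        path.write_text(content)
        return True   # file was (re)written
    return False      # left untouched, so build systems see an unchanged timestamp
```

Skipping the write when contents match is what keeps downstream SCons targets from rebuilding unnecessarily.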
- _create_parameter_array(data, name: str)[source]
Create the standard structure for a parameter_study array
requires:
self._parameter_set_hashes: parameter set content hashes identifying rows of the parameter study
self._parameter_names: parameter names used as columns of the parameter study
- Parameters:
data (numpy.array) – 2D array of parameter study samples with shape (number of parameter sets, number of parameters).
name – Name of the array. Used as a data variable name when converting to parameter study dataset.
- Returns:
parameter study array
- Return type:
xarray.DataArray
- _create_parameter_set_hashes() None [source]
Construct unique, repeatable parameter set content hashes from self._samples.
Creates an md5 hash from the concatenated string representation of parameter values.
requires:
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self._parameter_set_hashes
: parameter set content hashes identifying rows of parameter study
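The hashing scheme can be sketched with hashlib. The exact concatenation rule is internal to WAVES, so treat this as an illustration of the general technique of content-addressed set names, not the library's implementation:

```python
import hashlib

def parameter_set_hash(sample_row) -> str:
    # md5 of the concatenated string representation of the parameter values;
    # identical values always reproduce the same hash, so set identity survives
    # merges and re-instantiation
    concatenated = "".join(repr(value) for value in sample_row)
    return hashlib.md5(concatenated.encode()).hexdigest()
```

Because the hash depends only on content, merging a new study with a previous one can match rows by hash rather than by set name.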
- _create_parameter_set_names() None [source]
Construct parameter set names from the set name template and the number of parameter sets in self._samples.
Creates the class attribute self._parameter_set_names required to populate the _generate() method’s parameter study Xarray dataset object.
requires:
self._parameter_set_hashes: parameter set content hashes identifying rows of the parameter study
creates attribute:
self._parameter_set_names: Dictionary mapping parameter set hash to parameter set name
- _create_parameter_set_names_array() None [source]
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
parameter_set_names_array
- Return type:
xarray.DataArray
- _create_parameter_study() None [source]
Create the standard structure for the parameter study dataset
requires:
self._parameter_set_hashes: parameter set content hashes identifying rows of the parameter study
self._parameter_names: parameter names used as columns of the parameter study
self._samples: The parameter study samples. Rows are sets. Columns are parameters.
optional:
self._quantiles: The quantiles associated with the parameter study sampling distributions
creates attribute:
self.parameter_study
- abstract _generate(**kwargs) None [source]
Generate the parameter study definition
All implemented class methods should accept kwargs as _generate(self, **kwargs). The ABC class accepts, but does not use, any kwargs.
Must set the class attributes:
self._samples: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters). If it’s possible that the samples may be of mixed type, numpy.array(..., dtype=object) should be used to preserve the original Python types.
self._parameter_set_hashes: list of parameter set content hashes created by calling self._create_parameter_set_hashes after populating the self._samples parameter study values.
self._parameter_set_names: Dictionary mapping parameter set hash to parameter set name strings created by calling self._create_parameter_set_names after populating self._parameter_set_hashes.
self.parameter_study: The Xarray Dataset parameter study object, created by calling self._create_parameter_study() after defining self._samples and the optional self._quantiles class attribute.
May set the class attributes:
self._quantiles
: The parameter study sample quantiles, if applicable. A 2D numpy array in the shape (number of parameter sets, number of parameters)
Minimum necessary work example:
# Work unique to the parameter generator schema and set generation
set_count = 5  # Normally set according to the parameter schema
parameter_count = len(self._parameter_names)
self._samples = numpy.zeros((set_count, parameter_count))

# Work performed by common ABC methods
super().generate()
- _merge_parameter_set_names_array() None [source]
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _merge_parameter_studies() None [source]
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._quantiles: if it exists
self._parameter_set_hashes
self._parameter_set_names
- _parameter_study_to_numpy(data_type: Literal['samples', 'quantiles'])[source]
Return the parameter study data as a 2D numpy array
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
data
- Return type:
numpy.array
- _update_parameter_set_names() None [source]
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._parameter_set_names
- abstract _validate() None [source]
Process parameter study input to verify schema
Must set the class attributes:
self._parameter_names
: list of strings containing the parameter study’s parameter names
Minimum necessary work example:
# Work unique to the parameter generator schema. Example matches CartesianProduct schema.
self._parameter_names = list(self.parameter_schema.keys())
- _write_dataset() None [source]
Write Xarray Dataset formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators._ParameterGenerator.write()
- _write_meta(parameter_set_files: list[Path]) None [source]
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- Parameters:
parameter_set_files – List of pathlib.Path parameter set file paths
- _write_yaml(parameter_set_files: list[Path]) None [source]
Write YAML formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators._ParameterGenerator.write()
- Parameters:
parameter_set_files (list) – List of pathlib.Path parameter set file paths
- generate(kwargs=None) None [source]
Deprecated public generate method.
The parameter study is now generated as part of class instantiation. This method is kept for backward compatibility. Each call will overwrite the class instantiated study with a new parameter study, rather than duplicating or merging the parameter study.
- parameter_study_to_dict(data_type: Literal['samples', 'quantiles'] = 'samples') dict [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Parameters:
data_type (str) – The data_type selection to return - samples or quantiles
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- Return type:
dict - {str: {str: value}}
- scons_write(target: list, source: list, env) None [source]
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- write() None [source]
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires a non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite only when the contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print the file-content associations for the files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1
parameter_2: a
- class waves.parameter_generators._ScipyGenerator(parameter_schema: dict, output_file_template: str = None, output_file: str = None, output_file_type: str = 'yaml', set_name_template: str = 'parameter_set@number', previous_parameter_study: str = None, overwrite: bool = False, dryrun: bool = False, write_meta: bool = False, **kwargs)[source]
Bases: _ParameterGenerator, ABC
- _create_parameter_names() None [source]
Construct the parameter names from a distribution parameter schema
- _generate(**kwargs) None [source]
Generate the parameter study definition
All implemented class methods should accept kwargs as _generate(self, **kwargs). The ABC class accepts, but does not use, any kwargs.
Must set the class attributes:
self._samples: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters). If it’s possible that the samples may be of mixed type, numpy.array(..., dtype=object) should be used to preserve the original Python types.
self._parameter_set_hashes: list of parameter set content hashes created by calling self._create_parameter_set_hashes after populating the self._samples parameter study values.
self._parameter_set_names: Dictionary mapping parameter set hash to parameter set name strings created by calling self._create_parameter_set_names after populating self._parameter_set_hashes.
self.parameter_study: The Xarray Dataset parameter study object, created by calling self._create_parameter_study() after defining self._samples and the optional self._quantiles class attribute.
May set the class attributes:
self._quantiles
: The parameter study sample quantiles, if applicable. A 2D numpy array in the shape (number of parameter sets, number of parameters)
Minimum necessary work example:
# Work unique to the parameter generator schema and set generation
set_count = 5  # Normally set according to the parameter schema
parameter_count = len(self._parameter_names)
self._samples = numpy.zeros((set_count, parameter_count))

# Work performed by common ABC methods
super().generate()
- _generate_distribution_samples(set_count, parameter_count) None [source]
Convert quantiles to parameter distribution samples
Requires attributes:
self.parameter_distributions
: dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema. Set bywaves.parameter_generators._ScipyGenerator._generate_parameter_distributions()
.
Sets attribute(s):
self._samples
: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters).
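As an illustration of the quantile-to-sample conversion, the following sketch maps each quantile through its distribution's inverse CDF. It uses the standard library statistics.NormalDist as a stand-in for the scipy.stats distributions the class actually uses, and the helper name quantiles_to_samples is hypothetical:

```python
import statistics

def quantiles_to_samples(quantiles, distributions):
    """Map quantiles in [0, 1] to samples via each distribution's inverse CDF.

    ``quantiles`` is a list of rows (one per parameter set); ``distributions``
    holds one distribution per parameter, matching the column order.
    """
    samples = []
    for row in quantiles:
        samples.append([dist.inv_cdf(q) for dist, q in zip(distributions, row)])
    return samples

# Two parameter sets, two normally distributed parameters
distributions = [statistics.NormalDist(mu=50, sigma=1), statistics.NormalDist(mu=30, sigma=2)]
samples = quantiles_to_samples([[0.5, 0.5], [0.975, 0.5]], distributions)
```

The median quantile 0.5 maps back to each distribution's location parameter, while 0.975 lands roughly two standard deviations above it.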
- _generate_parameter_distributions() dict [source]
Return dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema.
- Returns:
parameter_distributions
- _validate() None [source]
Validate the parameter distribution schema. Executed during class initialization.
parameter_schema = {
    'num_simulations': 4,  # Required key. Value must be an integer.
    'parameter_1': {
        'distribution': 'norm',  # Required key. Value must be a valid scipy.stats distribution name.
        'loc': 50,
        'scale': 1
    },
    'parameter_2': {
        'distribution': 'skewnorm',
        'a': 4,
        'loc': 30,
        'scale': 2
    }
}
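As a sketch of the checks implied by the schema above (not the package's actual _validate implementation), a minimal validator might look like:

```python
def validate_parameter_schema(schema):
    """Check the two required schema rules described above: 'num_simulations'
    must be an integer, and every parameter entry must name a 'distribution'."""
    if "num_simulations" not in schema:
        raise ValueError("schema missing required key 'num_simulations'")
    if not isinstance(schema["num_simulations"], int):
        raise ValueError("'num_simulations' must be an integer")
    for name, keys in schema.items():
        if name == "num_simulations":
            continue
        if "distribution" not in keys:
            raise ValueError(f"parameter '{name}' missing required key 'distribution'")
```

Checking whether the named distribution is actually a valid scipy.stats distribution is omitted here to keep the sketch dependency-free.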
main.py
- waves.main.build(targets: list, scons_args: list | None = None, max_iterations: int = 5, working_directory: str | Path | None = None, git_clone_directory: str | Path | None = None) int [source]
Submit an iterative SCons command
The SCons command is re-submitted until SCons reports that the target ‘is up to date.’ or the maximum iteration count is reached. If multiple targets are submitted, they are executed sequentially in the order provided.
- Parameters:
targets – list of SCons targets (positional arguments)
scons_args – list of SCons arguments
max_iterations – Maximum number of iterations before the iterative loop is terminated
working_directory – Change the SCons command working directory
git_clone_directory – Destination directory for a Git clone operation
- Returns:
return code
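The iteration logic can be sketched as follows; run_scons is a hypothetical stand-in for the actual subprocess call to scons, returning the captured STDOUT:

```python
def iterative_build(run_scons, max_iterations=5):
    """Re-submit a build callable until it reports the target is up to date
    or the iteration limit is reached. Returns the iteration count used."""
    for iteration in range(1, max_iterations + 1):
        stdout = run_scons()
        if "is up to date." in stdout:
            return iteration
    return max_iterations
```

In the real function each target in the list would get its own loop like this, executed sequentially.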
- waves.main.docs(print_local_path: bool = False) int [source]
Open the package HTML documentation in the system default web browser or print the path to the documentation index file.
- Parameters:
print_local_path – Flag to print the local path to terminal instead of calling the default web browser
- Returns:
return code
- waves.main.fetch(subcommand: str, root_directory: str | Path, relative_paths: list[str | Path], destination: str | Path, requested_paths: list[str | Path] | None = None, overwrite: bool = False, dry_run: bool = False, print_available: bool = False) int [source]
Thin wrapper on waves.fetch.recursive_copy() to provide subcommand specific behavior and STDOUT/STDERR.
Recursively copy requested paths from root_directory/relative_paths directories into the destination directory using the shortest possible shared source prefix.
If files exist, report conflicting files and exit with a non-zero return code unless overwrite is specified.
- Parameters:
subcommand – name of the subcommand to report in STDOUT
root_directory – String or pathlike object for the root_directory directory
relative_paths – List of string or pathlike objects describing relative paths to search for in root_directory
destination – String or pathlike object for the destination directory
requested_paths – list of relative path-like objects that subset the files found in the
root_directory
relative_paths
overwrite – Boolean to overwrite any existing files in destination directory
dry_run – Print the destination tree and exit. Short circuited by
print_available
print_available – Print the available source files and exit. Short circuits
dry_run
- Returns:
return code
- waves.main.get_parser() ArgumentParser [source]
Get parser object for command line options
- Returns:
parser
- Return type:
ArgumentParser
- waves.main.main() int [source]
This is the main function that performs actions based on command line arguments.
- Returns:
return code
- waves.main.visualization(target: str, sconstruct: str | Path, exclude_list: list[str], exclude_regex: str, output_file: str | Path | None = None, print_graphml: bool = False, height: int = 12, width: int = 36, font_size: int = 10, vertical: bool = False, no_labels: bool = False, print_tree: bool = False, input_file: str | Path | None = None) int [source]
Visualize the directed acyclic graph created by a SCons build
Uses matplotlib and networkx to build a directed acyclic graph showing the relationships of the various dependencies using boxes and arrows. The visualization can be saved as an SVG, and the graphml output can be printed as well.
- Parameters:
target – String specifying an SCons target
sconstruct – Path to an SConstruct file or parent directory
exclude_list – exclude nodes starting with strings in this list (e.g. /usr/bin)
exclude_regex – exclude nodes that match this regular expression
output_file – File for saving the visualization
print_graphml – Whether to print the graph in graphml format
height – Height of visualization if being saved to a file
width – Width of visualization if being saved to a file
font_size – Font size of node labels
vertical – Specifies a vertical layout of graph instead of the default horizontal layout
no_labels – Don’t print labels on the nodes of the visualization
print_tree – Print the text output of the scons –tree command to the screen
input_file – Path to text file storing output from scons tree command
- Returns:
return code
fetch.py
- waves.fetch.available_files(root_directory: Path | str, relative_paths: list[str]) tuple[list[Path], list[str]] [source]
Build a list of files at relative_paths with respect to the root_directory directory.
Returns a list of absolute paths and a list of any relative paths that were not found. Falls back to a full recursive search of relative_paths with pathlib.Path.rglob to enable pathlib-style pattern matching.
- Parameters:
root_directory – Relative or absolute root path to search. Relative paths are converted to absolute paths with respect to the current working directory before searching.
relative_paths – Relative paths to search for. Directories are searched recursively for files.
- Returns:
available_files, not_found
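A minimal sketch of this search behavior, assuming the described rglob fallback (the package's actual implementation may differ in detail):

```python
import pathlib

def available_files(root_directory, relative_paths):
    """Resolve relative paths against a root: files are kept as-is,
    directories are searched recursively, and anything else is tried
    as a pathlib-style pattern before being reported as not found."""
    root_directory = pathlib.Path(root_directory).resolve()
    available, not_found = [], []
    for relative_path in relative_paths:
        absolute_path = root_directory / relative_path
        if absolute_path.is_file():
            available.append(absolute_path)
        elif absolute_path.is_dir():
            available.extend(path for path in absolute_path.rglob("*") if path.is_file())
        else:
            # Fall back to recursive pattern matching, e.g. "*.txt"
            matches = [path for path in root_directory.rglob(str(relative_path)) if path.is_file()]
            if matches:
                available.extend(matches)
            else:
                not_found.append(relative_path)
    return available, not_found
```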
- waves.fetch.build_copy_tuples(destination: str | Path, requested_paths_resolved: list, overwrite: bool = False) tuple[tuple] [source]
- Parameters:
destination – String or pathlike object for the destination directory
requested_paths_resolved – List of absolute requested file paths
overwrite – Boolean to overwrite any existing files in the destination directory
- Returns:
requested and destination file path pairs
- waves.fetch.build_destination_files(destination: str | Path, requested_paths: list[str | Path]) tuple[list, list] [source]
Build destination file paths from the requested paths, truncating the longest possible source prefix path
- Parameters:
destination – String or pathlike object for the destination directory
requested_paths – List of requested file paths
- Returns:
destination files, existing files
- waves.fetch.build_source_files(root_directory: str, relative_paths: list[str], exclude_patterns: list[str] = ['__pycache__', '.pyc', '.sconf_temp', '.sconsign.dblite', 'config.log']) tuple[list[Path], list[str]] [source]
Wrap available_files() and trim the list based on exclude patterns.
If no source files are found, an empty list is returned.
- Parameters:
root_directory (str) – Relative or absolute root path to search. Relative paths are converted to absolute paths with respect to the current working directory before searching.
relative_paths (list) – Relative paths to search for. Directories are searched recursively for files.
exclude_patterns (list) – list of strings to exclude from the root_directory directory tree if the path contains a matching string.
- Returns:
source_files, not_found
- Return type:
tuple of lists
- waves.fetch.conditional_copy(copy_tuples: tuple[tuple]) None [source]
Copy when destination file doesn’t exist or doesn’t match source file content
Uses Python shutil.copyfile, so metadata isn't preserved. Creates intermediate parent directories prior to copy, but doesn't raise exceptions on existing parent directories.
- Parameters:
copy_tuples – Tuple of source, destination pathlib.Path pairs, e.g.
((source, destination), ...)
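A minimal sketch of this conditional copy logic, using filecmp to compare file contents:

```python
import filecmp
import pathlib
import shutil

def conditional_copy(copy_tuples):
    """Copy each (source, destination) pair only when the destination is
    missing or its content differs, creating parent directories as needed."""
    for source, destination in copy_tuples:
        destination = pathlib.Path(destination)
        if not destination.exists() or not filecmp.cmp(source, destination, shallow=False):
            # exist_ok avoids exceptions on existing parent directories
            destination.parent.mkdir(parents=True, exist_ok=True)
            shutil.copyfile(source, destination)
```

shallow=False forces a byte-for-byte comparison rather than trusting os.stat metadata, matching the "doesn't match source file content" criterion.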
- waves.fetch.longest_common_path_prefix(file_list: str | Path | list[str | Path]) Path [source]
Return the longest common file path prefix.
The edge case of a single path is handled by returning the parent directory
- Parameters:
file_list – List of path-like objects
- Returns:
longest common path prefix
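A minimal sketch, using os.path.commonpath for the general case and the parent directory for the single-path edge case:

```python
import os
import pathlib

def longest_common_path_prefix(file_list):
    """Longest common prefix of one or more paths; one path yields its parent."""
    if isinstance(file_list, (str, pathlib.Path)):
        file_list = [file_list]
    paths = [pathlib.Path(path) for path in file_list]
    if len(paths) == 1:
        return paths[0].parent
    return pathlib.Path(os.path.commonpath(paths))
```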
- waves.fetch.print_list(things_to_print: list, prefix: str = '\t', stream=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>) None [source]
Print a list to the specified stream, one line per item
- Parameters:
things_to_print (list) – List of items to print
prefix (str) – prefix to print on each line before printing the item
stream (file-like) – output stream. Defaults to
sys.stdout
.
- waves.fetch.recursive_copy(root_directory: str | Path, relative_paths: list[str | Path], destination: str | Path, requested_paths: list[str | Path] | None = None, overwrite: bool = False, dry_run: bool = False, print_available: bool = False) int [source]
Recursively copy requested paths from root_directory/relative_paths directories into destination directory using the shortest possible shared source prefix.
If files exist, report conflicting files and exit with a non-zero return code unless overwrite is specified.
- Parameters:
root_directory – String or pathlike object for the root_directory directory
relative_paths – List of string or pathlike objects describing relative paths to search for in root_directory
destination – String or pathlike object for the destination directory
requested_paths – list of relative path-like objects that subset the files found in the root_directory relative_paths
overwrite – Boolean to overwrite any existing files in destination directory
dry_run – Print the destination tree and exit. Short circuited by
print_available
print_available – Print the available source files and exit. Short circuits
dry_run
visualize.py
- waves.visualize.check_regex_exclude(exclude_regex: str, node_name: str, current_indent: int, exclude_indent: int, exclude_node: bool = False) tuple[bool, int] [source]
Excludes node names that match the regular expression
- Parameters:
exclude_regex (str) – Regular expression
node_name (str) – Name of the node
current_indent (int) – Current indent of the parsed output
exclude_indent (int) – Set to current_indent if node is to be excluded
exclude_node (bool) – Indicates whether a node should be excluded
- Returns:
Tuple containing exclude_node and exclude_indent
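One plausible reading of this behavior as a sketch (the real implementation may differ, particularly in how indentation propagates exclusion to child nodes):

```python
import re

def check_regex_exclude(exclude_regex, node_name, current_indent, exclude_indent, exclude_node=False):
    """If the node name matches the pattern, mark it excluded and remember
    its indent so more deeply indented children can also be excluded."""
    if exclude_regex and re.search(exclude_regex, node_name):
        return True, current_indent
    return exclude_node, exclude_indent
```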
- waves.visualize.click_arrow(event, annotations: dict, arrows: dict) None [source]
Create an effect with arrows on mouse click
- Parameters:
event (matplotlib.backend_bases.Event) – Event that is handled by this function
annotations – Dictionary linking node names to their annotations
arrows – Dictionary linking darker arrow annotations to node names
- waves.visualize.parse_output(tree_lines: list, exclude_list: list, exclude_regex: str) dict [source]
Parse the string that has the tree output and store it in a dictionary
- Parameters:
tree_lines – output of the scons tree command
exclude_list – exclude nodes starting with strings in this list (e.g. /usr/bin)
exclude_regex – exclude nodes that match this regular expression
- Returns:
dictionary of tree output
- waves.visualize.visualize(tree: dict, output_file: str, height: int = 12, width: int = 36, font_size: int = 10, vertical: bool = False, no_labels: bool = False) None [source]
Create a visualization showing the tree
- Parameters:
tree – output of the scons tree command stored as dictionary
output_file – Name of file to store visualization
height – Height of visualization if being saved to a file
width – Width of visualization if being saved to a file
font_size – Font size of file names in points
vertical – Specifies a vertical layout of graph instead of the default horizontal layout
no_labels – Don’t print labels on the nodes of the visualization
_parameter_study.py
Thin CLI wrapper around waves.parameter_generators classes
- waves._parameter_study.parameter_study(subcommand: str, input_file_path: str, output_file_template: str = None, output_file: str = None, output_file_type: Literal['yaml', 'h5'] = 'yaml', set_name_template: str = 'parameter_set@number', previous_parameter_study: str = None, overwrite: bool = False, dryrun: bool = False, write_meta: bool = False) int [source]
Build parameter studies
- Parameters:
subcommand (str) – parameter study type to build
input_file_path (str) – path to YAML formatted parameter study schema file
output_file_template (str) – output file template name
output_file (str) – relative or absolute output file path
output_file_type (str) – yaml or h5
set_name_template (str) – parameter set name string template. May contain ‘@number’ for the set number.
previous_parameter_study (str) – relative or absolute path to previous parameter study file
overwrite (bool) – overwrite all existing parameter set file(s)
dryrun (bool) – print what files would have been written, but do no work
write_meta (bool) – write a meta file named ‘parameter_study_meta.txt’ containing the parameter set file path(s)
- Returns:
return code
_utilities.py
- waves._utilities._quote_spaces_in_path(path: Path | str) Path [source]
Traverse the parts of a path and wrap any part containing spaces in double quotes
>>> import pathlib
>>> import waves
>>> path = pathlib.Path("path/directory with space/filename.ext")
>>> waves.scons_extensions._quote_spaces_in_path(path)
PosixPath('path/"directory with space"/filename.ext')
- Parameters:
path – path to modify as necessary
- Returns:
Path with parts wrapped in double quotes as necessary
- waves._utilities.cubit_os_bin() str [source]
Return the OS specific Cubit bin directory name
Making Cubit importable requires putting the Cubit bin directory on PYTHONPATH. On MacOS, the directory is “MacOS”. On other systems it is “bin”.
- Returns:
bin directory name, e.g. “bin” or “MacOS”
- Return type:
str
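The described OS check can be sketched with the standard library platform module:

```python
import platform

def cubit_os_bin():
    """Return the OS specific Cubit bin directory name:
    'MacOS' on macOS, 'bin' everywhere else."""
    if platform.system().lower() == "darwin":
        return "MacOS"
    return "bin"
```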
- waves._utilities.find_command(options: list[str]) str | None [source]
Return first found command in list of options.
Raise a FileNotFoundError if none is found.
- Parameters:
options – alternate command options
- Returns:
command absolute path
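A minimal sketch using shutil.which (the package's actual implementation may differ):

```python
import shutil

def find_command(options):
    """Return the absolute path of the first command found on PATH,
    or raise FileNotFoundError if none of the options are found."""
    for option in options:
        absolute_path = shutil.which(option)
        if absolute_path is not None:
            return absolute_path
    raise FileNotFoundError(f"None of the commands in {options} were found")
```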
- waves._utilities.find_cubit_bin(options: list[str], bin_directory: str | None = None) Path [source]
Given a few options for the Cubit executable, search for the bin directory.
Recommend first checking whether cubit will import.
If the Cubit command or bin directory is not found, raise a FileNotFoundError.
- Parameters:
options – Cubit command options
bin_directory – Cubit’s bin directory name. Override the bin directory returned by
waves._utilities.cubit_os_bin()
.
- Returns:
Cubit bin directory absolute path
- waves._utilities.search_commands(options: list[str]) str | None [source]
Return the first found command in the list of options. Return None if none are found.
- Parameters:
options (list) – executable path(s) to test
- Returns:
command absolute path
- waves._utilities.tee_subprocess(command: list[str], **kwargs) tuple[int, str] [source]
Stream STDOUT to terminal while saving buffer to variable
- Parameters:
command – Command to execute provided a list of strings
kwargs (dict) – Any additional keyword arguments are passed through to subprocess.Popen
- Returns:
integer return code, string STDOUT
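A minimal sketch of the stream-and-capture behavior:

```python
import subprocess

def tee_subprocess(command, **kwargs):
    """Stream a subprocess's STDOUT line by line while saving it to a buffer.
    Returns the integer return code and the captured STDOUT string."""
    lines = []
    with subprocess.Popen(command, stdout=subprocess.PIPE, text=True, **kwargs) as process:
        for line in process.stdout:
            print(line, end="")  # stream to the terminal as it arrives
            lines.append(line)
    return process.returncode, "".join(lines)
```

The Popen context manager waits for the process on exit, so returncode is set by the time the function returns.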
odb_extract.py
Extracts data from an Abaqus odb file. Calls odbreport feature of Abaqus, parses resultant file, and creates output file. Most simulation data lives in a group path following the instance and set name, e.g. /INSTANCE/FieldOutputs/ELEMENT_SET, and can be accessed with xarray as xarray.open_dataset(“sample.h5”, group=”/INSTANCE/FieldOutputs/ELEMENT_SET”). You can view all group paths with h5ls -r sample.h5. Additional ODB information is available in the /odb group path. The /xarray/Dataset group path contains a list of group paths that contain an xarray dataset.
/ # Top level group required in all hdf5 files
/<instance name>/ # Groups containing data of each instance found in an odb
FieldOutputs/ # Group with multiple xarray datasets for each field output
<field name>/ # Group with datasets containing field output data for a specified set or surface
# If no set or surface is specified, the <field name> will be 'ALL_NODES' or 'ALL_ELEMENTS'
HistoryOutputs/ # Group with multiple xarray datasets for each history output
<region name>/ # Group with datasets containing history output data for specified history region name
# If no history region name is specified, the <region name> will be 'ALL NODES'
Mesh/ # Group written from an xarray dataset with all mesh information for this instance
/<instance name>_Assembly/ # Group containing data of assembly instance found in an odb
Mesh/ # Group written from an xarray dataset with all mesh information for this instance
/odb/ # Catch all group for data found in the odbreport file not already organized by instance
info/ # Group with datasets that mostly give odb meta-data like name, path, etc.
jobData/ # Group with datasets that contain additional odb meta-data
rootAssembly/ # Group with datasets that match odb file organization per Abaqus documentation
sectionCategories/ # Group with datasets that match odb file organization per Abaqus documentation
/xarray/ # Group with a dataset that lists the location of all data written from xarray datasets
- waves.abaqus.odb_extract.get_odb_report_args(odb_report_args, input_file, job_name, verbose)[source]
Generates odb_report arguments
- Parameters:
odb_report_args (str) – String of command line options to pass to abaqus odbreport.
input_file (Path) – .odb file.
job_name (Path) – Report file.
verbose (bool) – Boolean to print more verbose messages
- waves.abaqus.odb_extract.get_parser()[source]
Get parser object for command line options
- Returns:
argument parser
- Return type:
parser
- waves.abaqus.odb_extract.odb_extract(input_file, output_file, output_type='h5', odb_report_args=None, abaqus_command='abq2023', delete_report_file=False, verbose=False)[source]
The odb_extract Abaqus data extraction tool. Most users should use the associated command line interface.
Warning
odb_extract requires Abaqus arguments for odb_report_args in the form of option=value, e.g. step=step_name.
- Parameters:
input_file (list) – A list of *.odb files to extract. The current implementation only supports extraction of the first file in the list.
output_file (str) – The output file name to extract to. The extension should match one of the supported output types.
output_type (str) – Output file type. Defaults to h5. Options are: h5, yaml, json.
odb_report_args (str) – String of command line options to pass to abaqus odbreport.
abaqus_command (str) – The abaqus command name or absolute path to the Abaqus executable.
delete_report_file (bool) – Boolean to delete the intermediate Abaqus generated report file after producing the output_file.
verbose (bool) – Boolean to print more verbose messages
Odb Report File Parser
- class waves.abaqus.abaqus_file_parser.OdbReportFileParser(input_file, verbose=False, *args, **kwargs)[source]
Bases:
AbaqusFileParser
Class for parsing Abaqus odbreport files. Expected input includes only files that are in the csv format and which have used the ‘blocked’ option.
Results are stored either in a dictionary which mimics the format of the odb file (see Abaqus documentation), or stored in a specialized ‘extract’ format written to an hdf5 file.
/                          # Top level group required in all hdf5 files
/<instance name>/          # Groups containing data of each instance found in an odb
    FieldOutputs/          # Group with multiple xarray datasets for each field output
        <field name>/      # Group with datasets containing field output data for a specified set or surface
                           # If no set or surface is specified, the <field name> will be 'ALL_NODES' or 'ALL_ELEMENTS'
    HistoryOutputs/        # Group with multiple xarray datasets for each history output
        <region name>/     # Group with datasets containing history output data for specified history region name
                           # If no history region name is specified, the <region name> will be 'ALL NODES'
    Mesh/                  # Group written from an xarray dataset with all mesh information for this instance
/<instance name>_Assembly/ # Group containing data of assembly instance found in an odb
    Mesh/                  # Group written from an xarray dataset with all mesh information for this instance
/odb/                      # Catch all group for data found in the odbreport file not already organized by instance
    info/                  # Group with datasets that mostly give odb meta-data like name, path, etc.
    jobData/               # Group with datasets that contain additional odb meta-data
    rootAssembly/          # Group with datasets that match odb file organization per Abaqus documentation
    sectionCategories/     # Group with datasets that match odb file organization per Abaqus documentation
/xarray/                   # Group with a dataset that lists the location of all data written from xarray datasets
    Dataset                # HDF5 Dataset that lists the location within the hdf5 file of all xarray datasets
- create_extract_format(odb_dict, h5_file, time_stamp)[source]
Format the dictionary with the odb data into something that resembles the previous Abaqus extract method
- Parameters:
odb_dict (dict) – Dictionary with odb data
h5_file (str) – Name of h5_file to use for storing data
time_stamp (str) – Time stamp for possibly appending to hdf5 file names
- Returns:
None
- get_position_index(position, position_type, values)[source]
Get the index of the position (node or element) currently used
- Parameters:
position (int) – integer representing a node or element
position_type (str) – string of either ‘nodes’ or ‘elements’
values (dict) – dictionary where values are stored
- Returns:
index, just_added
- Return type:
int, bool
- pad_none_values(step_number, frame_number, position_length, data_length, element_size, values)[source]
Pad the values list with None or lists of None values in the locations indicated by the parameters
- Parameters:
step_number (int) – index of current step
frame_number (int) – index of current frame
position_length (int) – number of nodes or elements
data_length (int) – length of data given in field
element_size (int) – number of element lines that could be listed, e.g. for a hex this value would be 6
values (list) – list that holds the data values
- parse(format='extract', h5_file='extract.h5', time_stamp=None)[source]
Parse the file and store the results in the self.parsed dictionary.
Can parse csv formatted output with the blocked option from the odbreport command.
- Parameters:
format (str) – Format in which to store data can be ‘odb’ or ‘extract’
h5_file (str) – Name of hdf5 file to store data into when using the extract format
time_stamp (str) – Time stamp for possibly appending to hdf5 file names
- Returns:
None
- parse_analytic_surface(f, instance, line)[source]
Parse the section that contains analytic surface
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the analytic surface
line (str) – current line of file
- Returns:
None
- parse_components_of_field(f, line, field)[source]
Parse the section that contains the data for field outputs found after the ‘Components of field’ heading
- Parameters:
f (file object) – open file
line (str) – current line of file
field (dict) – dictionary for storing field output
- Returns:
current line of file
- Return type:
str
- parse_element_classes(f, instance, number_of_element_classes)[source]
Parse the section that contains element classes
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the elements
number_of_element_classes (int) – number of element classes to parse
- Returns:
None
- parse_element_set(f, instance, number_of_element_sets)[source]
Parse the section that contains element sets
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the element sets
number_of_element_sets (int) – number of element sets to parse
- Returns:
None
- parse_elements(f, instance, number_of_elements)[source]
Parse the section that contains elements
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the elements
number_of_elements (int) – number of elements to parse
- Returns:
None
- parse_field_values(f, line, values)[source]
Parse the section that contains the data for field values
- Parameters:
f (file object) – open file
line (str) – current line
values (list) – list for storing the field values
- Returns:
current line of file
- Return type:
str
- parse_fields(f, fields, line)[source]
Parse the section that contains the data for field outputs
- Parameters:
f (file object) – open file
fields (dict) – dictionary for storing the field outputs
line (str) – current line of file
- Returns:
current line of file
- Return type:
str
- parse_frames(f, frames, number_of_frames)[source]
Parse the section that contains the data for frames
- Parameters:
f (file object) – open file
frames (list) – list for storing the frames
number_of_frames (int) – number of frames to parse
- Returns:
current line of file
- Return type:
str
- parse_history_outputs(f, outputs, line)[source]
Parse the section that contains history outputs
- Parameters:
f (file object) – open file
outputs (dict) – dict for storing the history output data
line (str) – current line of file
- Returns:
current line of file
- Return type:
str
- parse_history_regions(f, line, regions, number_of_history_regions)[source]
Parse the section that contains history regions
- Parameters:
f (file object) – open file
line (str) – current line of file
regions (dict) – dict for storing the history region data
number_of_history_regions (int) – number of history regions to parse
- Returns:
current line of file
- Return type:
str
- parse_instances(f, instances, number_of_instances)[source]
Parse the section that contains instances
- Parameters:
f (file object) – open file
instances (dict) – dictionary for storing the instances
number_of_instances (int) – number of instances to parse
- Returns:
None
- parse_node_set(f, instance, number_of_node_sets)[source]
Parse the section that contains node sets
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the node sets
number_of_node_sets (int) – number of node sets to parse
- Returns:
None
- parse_nodes(f, instance, number_of_nodes, embedded_space)[source]
Parse the section that contains nodes
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the nodes
number_of_nodes (int) – number of nodes to parse
embedded_space (str) – type of embedded space
- Returns:
None
- parse_rigid_bodies(f, instance, number_of_rigid_bodies)[source]
Parse the section that contains rigid_bodies
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the rigid bodies
number_of_rigid_bodies (int) – number of rigid bodies to parse
- Returns:
None
- parse_section_categories(f, categories, number_of_categories)[source]
Parse the section that contains section categories
- Parameters:
f (file object) – open file
categories (dict) – dictionary for storing the section categories
number_of_categories (int) – number of section categories to parse
- Returns:
None
- parse_steps(f, steps, number_of_steps)[source]
Parse the section that contains the data for steps
- Parameters:
f (file object) – open file
steps (dict) – dictionary for storing the steps
number_of_steps (int) – number of steps to parse
- Returns:
None
- parse_surfaces(f, instance, number_of_surfaces)[source]
Parse the section that contains surfaces
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the surfaces
number_of_surfaces (int) – number of surfaces to parse
- Returns:
None
- save_dict_to_group(h5file, path, data_member, output_file)[source]
Recursively save data from python dictionary to hdf5 file.
This method can handle data types of int, float, str, and xarray Datasets, as well as lists or dictionaries of the aforementioned types. Tuples are assumed to have ints or floats.
- Parameters:
h5file (stream) – file stream to write data into
path (str) – name of hdf5 group to write into
data_member (dict) – member of dictionary
output_file (str) – name of h5 output file
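The recursive traversal can be illustrated without HDF5 by flattening a nested dictionary into (group path, value) pairs; the helper name flatten_to_paths is hypothetical, and the h5py writing performed by the real method is omitted:

```python
def flatten_to_paths(data_member, path="/"):
    """Yield (hdf5-style group path, value) pairs by recursing through nested
    dictionaries and lists, the kind of traversal save_dict_to_group performs
    before writing each leaf into the HDF5 file."""
    if isinstance(data_member, dict):
        for key, value in data_member.items():
            yield from flatten_to_paths(value, f"{path.rstrip('/')}/{key}/")
    elif isinstance(data_member, list):
        # List entries become numbered sub-groups
        for index, value in enumerate(data_member):
            yield from flatten_to_paths(value, f"{path.rstrip('/')}/{index}/")
    else:
        yield path.rstrip("/"), data_member
```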
Sta File Parser
Msg File Parser
- class waves.abaqus.abaqus_file_parser.MsgFileParser(input_file, verbose=False, *args, **kwargs)[source]
Bases:
AbaqusFileParser
Class for parsing Abaqus msg files.
- parse(input_file=None)[source]
Parse the file and store the results in the self.parsed dictionary.
- Parameters:
input_file (str) – Name of msg file to parse
- Returns:
None