algebra module
Full Documentation for the hippynn.layers.algebra module.
Layers for simple operations
- class AtLeast2D(*args, **kwargs)[source]
Bases: Module
- forward(item)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
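Judging from its name and its forward(item) signature, AtLeast2D presumably reshapes a 1D input so that it has at least two dimensions (for example by adding a feature axis). A minimal sketch of that assumed behavior, not the library's exact implementation:

import torch

def at_least_2d(item: torch.Tensor) -> torch.Tensor:
    # Assumed behavior: give a 1D tensor a trailing feature axis; leave others unchanged.
    if item.dim() == 1:
        return item.unsqueeze(-1)
    return item

x = torch.arange(4.0)          # shape (4,)
print(at_least_2d(x).shape)    # torch.Size([4, 1])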
- class EnsembleTarget(*args, **kwargs)[source]
Bases: Module
- forward(*input_tensors)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
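EnsembleTarget takes a variable number of input tensors, which suggests it combines predictions from several ensemble members. A hedged sketch of one plausible combination (stacking the members and reporting their mean and standard deviation); the exact outputs of the real class are not documented here:

import torch

def ensemble_summary(*input_tensors: torch.Tensor):
    # Assumption: each argument is one ensemble member's prediction for the same target.
    all_preds = torch.stack(input_tensors, dim=0)
    return all_preds.mean(dim=0), all_preds.std(dim=0), all_preds

mean, std, members = ensemble_summary(torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0]))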
- class Idx(index, repr_info=None)[source]
Bases: Module
- extra_repr()[source]
Return the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(bundled_inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
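Idx stores an index (with an optional repr_info label) and its forward receives bundled_inputs, so it presumably selects a single element out of a bundle. A sketch under that assumption; IdxSketch is a hypothetical stand-in, not the hippynn class itself:

import torch

class IdxSketch(torch.nn.Module):
    def __init__(self, index, repr_info=None):
        super().__init__()
        self.index = index
        self.repr_info = repr_info

    def forward(self, bundled_inputs):
        # Assumed behavior: pick one tensor out of the bundled inputs.
        return bundled_inputs[self.index]

    def extra_repr(self):
        return f"index={self.index}"

pick_second = IdxSketch(1)
out = pick_second((torch.zeros(3), torch.ones(3)))  # tensor([1., 1., 1.])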
- class LambdaModule(fn)[source]
Bases: Module
- extra_repr()[source]
Return the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(*args, **kwargs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
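LambdaModule(fn) wraps a plain callable so it can be used wherever an nn.Module is expected. A sketch of what such a wrapper typically looks like (LambdaSketch is illustrative, not the actual implementation):

import torch

class LambdaSketch(torch.nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, *args, **kwargs):
        # Delegate the call straight to the wrapped function.
        return self.fn(*args, **kwargs)

    def extra_repr(self):
        return getattr(self.fn, "__name__", repr(self.fn))

double = LambdaSketch(lambda x: 2 * x)
print(double(torch.tensor([1.0, 2.0])))  # tensor([2., 4.])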
- class ListMod(*args, **kwargs)[source]
Bases: Module
- forward(*features)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
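ListMod's forward takes variadic *features, which suggests it simply gathers its inputs into a single list-valued output. A trivial sketch under that assumption:

import torch

def list_mod(*features):
    # Assumed behavior: package the variadic inputs into one list.
    return list(features)

bundle = list_mod(torch.zeros(2), torch.ones(2))  # [tensor([0., 0.]), tensor([1., 1.])]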
- class ValueMod(value, convert=True)[source]
Bases: Module
- extra_repr()[source]
Return the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward()[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
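ValueMod(value, convert=True) appears to hold a constant and return it from a zero-argument forward, with convert presumably coercing non-tensor values to tensors. A hedged sketch of that behavior; ValueSketch is illustrative only:

import torch

class ValueSketch(torch.nn.Module):
    def __init__(self, value, convert=True):
        super().__init__()
        # Assumption: convert=True turns plain Python numbers into tensors.
        if convert and not isinstance(value, torch.Tensor):
            value = torch.as_tensor(value)
        self.value = value

    def forward(self):
        return self.value

    def extra_repr(self):
        return f"value={self.value}"

half = ValueSketch(0.5)
print(half())  # tensor(0.5000)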
- class WeightedMAELoss(*args, **kwargs)[source]
Bases: _WeightedLoss
- static loss_func(input: Tensor, target: Tensor, size_average: bool | None = None, reduce: bool | None = None, reduction: str = 'mean', weight: Tensor | None = None) → Tensor
l1_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor
Function that takes the mean element-wise absolute value difference.
See L1Loss for details.
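One plausible weighted mean absolute error with reduction='mean' is mean(weight * |input - target|). The sketch below implements that formula with plain torch ops; it is illustrative and not the verified source of WeightedMAELoss:

import torch

def weighted_mae(input, target, weight=None, reduction="mean"):
    # Element-wise absolute error, optionally scaled by per-sample weights.
    err = (input - target).abs()
    if weight is not None:
        err = err * weight
    if reduction == "mean":
        return err.mean()
    if reduction == "sum":
        return err.sum()
    return err  # reduction == 'none'

pred = torch.tensor([1.0, 2.0, 3.0])
true = torch.tensor([1.5, 2.0, 1.0])
print(weighted_mae(pred, true, weight=torch.tensor([1.0, 1.0, 0.5])))  # tensor(0.5000)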
- class WeightedMSELoss(*args, **kwargs)[source]
Bases: _WeightedLoss
- static loss_func(input: Tensor, target: Tensor, size_average: bool | None = None, reduce: bool | None = None, reduction: str = 'mean', weight: Tensor | None = None) → Tensor
mse_loss(input, target, size_average=None, reduce=None, reduction='mean', weight=None) -> Tensor
Measures the element-wise mean squared error, with optional weighting.
- Args:
  input (Tensor): Predicted values.
  target (Tensor): Ground truth values.
  size_average (bool, optional): Deprecated (use reduction).
  reduce (bool, optional): Deprecated (use reduction).
  reduction (str, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'mean': the mean of the output is taken. 'sum': the output will be summed. 'none': no reduction will be applied. Default: 'mean'.
  weight (Tensor, optional): Weights for each sample. Default: None.
- Returns:
  Tensor: Mean Squared Error loss (optionally weighted).
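Following the Args above, the weighted squared-error loss under reduction='mean' can be sketched as mean(weight * (input - target)**2). An illustrative reimplementation, not the class's verified source:

import torch

def weighted_mse(input, target, weight=None, reduction="mean"):
    # Element-wise squared error, optionally scaled by per-sample weights.
    err = (input - target) ** 2
    if weight is not None:
        err = err * weight
    if reduction == "mean":
        return err.mean()
    if reduction == "sum":
        return err.sum()
    return err  # reduction == 'none'

pred = torch.tensor([0.0, 1.0, 2.0])
true = torch.tensor([0.0, 2.0, 2.0])
print(weighted_mse(pred, true, weight=torch.tensor([1.0, 2.0, 1.0])))  # tensor(0.6667)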