AdversarialTensors.models package#

Submodules#

AdversarialTensors.models.densenet module#

class AdversarialTensors.models.densenet.DenseNet(growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64, bn_size=4, drop_rate=0, num_classes=10)[source]#

Bases: Module

Densenet-BC model class, based on "Densely Connected Convolutional Networks"

Parameters:
  • growth_rate (int) -- how many filters to add each layer (k in paper)

  • block_config (list of 4 ints) -- how many layers in each pooling block

  • num_init_features (int) -- the number of filters to learn in the first convolution layer

  • bn_size (int) -- multiplicative factor for the number of bottleneck layers (i.e. bn_size * k features in the bottleneck layer)

  • drop_rate (float) -- dropout rate after each dense layer

  • num_classes (int) -- number of classification classes

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
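For example, a minimal sketch of the preferred call pattern (the 3 x 32 x 32 input size is an assumption, in line with the num_classes=10 default):

    import torch
    from AdversarialTensors.models.densenet import DenseNet

    model = DenseNet(num_classes=10)
    x = torch.randn(1, 3, 32, 32)  # assumed CIFAR-sized input

    # Calling the instance invokes __call__, which runs forward() and any
    # registered hooks; model.forward(x) would silently skip the hooks.
    out = model(x)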

training: bool#
AdversarialTensors.models.densenet.densenet121(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

Densenet-121 model from "Densely Connected Convolutional Networks"

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.densenet.densenet161(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

Densenet-161 model from "Densely Connected Convolutional Networks"

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.densenet.densenet169(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

Densenet-169 model from "Densely Connected Convolutional Networks"

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr
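A usage sketch for these densenet factory functions; the batch size and the CIFAR-sized input are assumptions based on the num_classes=10 default, not something the API mandates:

    import torch
    from AdversarialTensors.models.densenet import densenet121

    model = densenet121(pretrained=False, progress=True, device='cpu')
    model.eval()

    x = torch.randn(8, 3, 32, 32)  # assumed input size; adjust to your data
    with torch.no_grad():
        logits = model(x)  # shape (8, num_classes)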

AdversarialTensors.models.googlenet module#

class AdversarialTensors.models.googlenet.GoogLeNet(num_classes=10, aux_logits=False, transform_input=False)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
AdversarialTensors.models.googlenet.googlenet(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

GoogLeNet (Inception v1) model architecture from "Going Deeper with Convolutions".

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

  • aux_logits (bool) -- If True, adds two auxiliary branches that can improve training. Default: False when pretrained is True, otherwise True

  • transform_input (bool) -- If True, preprocesses the input according to the method with which it was trained on ImageNet. Default: False
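A minimal sketch of this factory; aux_logits and transform_input are forwarded via **kwargs, and the 32 x 32 input size is an assumption:

    import torch
    from AdversarialTensors.models.googlenet import googlenet

    model = googlenet(pretrained=False, device='cpu', aux_logits=False)
    model.eval()

    x = torch.randn(4, 3, 32, 32)  # assumed input size
    with torch.no_grad():
        out = model(x)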

AdversarialTensors.models.inception module#

class AdversarialTensors.models.inception.Inception3(num_classes=10, aux_logits=False, transform_input=False)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
AdversarialTensors.models.inception.inception_v3(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

Inception v3 model architecture from "Rethinking the Inception Architecture for Computer Vision".

Note

Important: In contrast to the other models, inception_v3 expects tensors of size N x 3 x 299 x 299, so ensure your images are sized accordingly.

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

  • aux_logits (bool) -- If True, adds an auxiliary branch that can improve training. Default: True

  • transform_input (bool) -- If True, preprocesses the input according to the method with which it was trained on ImageNet. Default: False
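A sketch following the note above; the 299 x 299 input size comes from that note, and aux_logits=False is an assumption made so the forward pass returns a single output:

    import torch
    from AdversarialTensors.models.inception import inception_v3

    model = inception_v3(pretrained=False, device='cpu', aux_logits=False)
    model.eval()

    x = torch.randn(2, 3, 299, 299)  # input size per the note above
    with torch.no_grad():
        out = model(x)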

AdversarialTensors.models.mobilenetv2 module#

class AdversarialTensors.models.mobilenetv2.MobileNetV2(num_classes=10, width_mult=1.0)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
AdversarialTensors.models.mobilenetv2.mobilenet_v2(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

Constructs a MobileNetV2 architecture from "MobileNetV2: Inverted Residuals and Linear Bottlenecks".

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr
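A usage sketch for this factory; width_mult is forwarded via **kwargs, and the 32 x 32 input size is an assumption matching the num_classes=10 default:

    import torch
    from AdversarialTensors.models.mobilenetv2 import mobilenet_v2

    model = mobilenet_v2(pretrained=False, device='cpu', width_mult=1.0)
    model.eval()

    x = torch.randn(4, 3, 32, 32)  # assumed input size
    with torch.no_grad():
        logits = model(x)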

AdversarialTensors.models.resnet module#

class AdversarialTensors.models.resnet.ResNet(block, layers, num_classes=10, zero_init_residual=False, groups=1, width_per_group=64, replace_stride_with_dilation=None, norm_layer=None)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
AdversarialTensors.models.resnet.resnet18(pretrained=False, progress=True, device='cpu', fpath=None, **kwargs)[source]#

Constructs a ResNet-18 model.

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.resnet.resnet34(pretrained=False, progress=True, device='cpu', fpath=None, **kwargs)[source]#

Constructs a ResNet-34 model.

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.resnet.resnet50(pretrained=False, progress=True, device='cpu', fpath=None, **kwargs)[source]#

Constructs a ResNet-50 model.

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr
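A usage sketch for the resnet factories; the fpath parameter presumably points to a local checkpoint file, but that interpretation and the 32 x 32 input size are assumptions:

    import torch
    from AdversarialTensors.models.resnet import resnet18

    model = resnet18(pretrained=False, device='cpu')  # fpath left at its default
    model.eval()

    x = torch.randn(4, 3, 32, 32)  # assumed input size
    with torch.no_grad():
        logits = model(x)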

AdversarialTensors.models.resnet_orig module#

class AdversarialTensors.models.resnet_orig.BasicBlock(in_planes, planes, stride=1)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

expansion = 1#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.resnet_orig.Bottleneck(in_planes, planes, stride=1)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

expansion = 4#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.resnet_orig.BottleneckChen2020AdversarialNet(in_planes, planes, stride=1)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

expansion = 4#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
AdversarialTensors.models.resnet_orig.OrigPreActResNet18()[source]#
AdversarialTensors.models.resnet_orig.OrigResNet101()[source]#
AdversarialTensors.models.resnet_orig.OrigResNet152()[source]#
AdversarialTensors.models.resnet_orig.OrigResNet18()[source]#
AdversarialTensors.models.resnet_orig.OrigResNet34()[source]#
AdversarialTensors.models.resnet_orig.OrigResNet50()[source]#
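These Orig* helpers take no arguments; a minimal sketch of their use (the 32 x 32 input size is an assumption matching the num_classes=10 defaults elsewhere in the module):

    import torch
    from AdversarialTensors.models.resnet_orig import OrigResNet18

    model = OrigResNet18()
    model.eval()

    x = torch.randn(1, 3, 32, 32)  # assumed CIFAR-sized input
    with torch.no_grad():
        logits = model(x)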
class AdversarialTensors.models.resnet_orig.PreActBlock(in_planes, planes, stride=1, out_shortcut=False)[source]#

Bases: Module

Pre-activation version of the BasicBlock.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

expansion = 1#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.resnet_orig.PreActBlockV2(in_planes, planes, stride=1, out_shortcut=False)[source]#

Bases: Module

Pre-activation version of the BasicBlock (slightly different forward pass)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

expansion = 1#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.resnet_orig.PreActBottleneck(in_planes, planes, stride=1)[source]#

Bases: Module

Pre-activation version of the original Bottleneck module.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

expansion = 4#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.resnet_orig.PreActResNet(block, num_blocks, num_classes=10, bn_before_fc=False, out_shortcut=False)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.resnet_orig.ResNet(block, num_blocks, num_classes=10)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#

AdversarialTensors.models.vgg module#

class AdversarialTensors.models.vgg.VGG(features, num_classes=10, init_weights=True)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
AdversarialTensors.models.vgg.vgg11_bn(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

VGG 11-layer model (configuration "A") with batch normalization

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.vgg.vgg13_bn(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

VGG 13-layer model (configuration "B") with batch normalization

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.vgg.vgg16_bn(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

VGG 16-layer model (configuration "D") with batch normalization

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr

AdversarialTensors.models.vgg.vgg19_bn(pretrained=False, progress=True, device='cpu', **kwargs)[source]#

VGG 19-layer model (configuration "E") with batch normalization

Parameters:
  • pretrained (bool) -- If True, returns a model pre-trained on ImageNet

  • progress (bool) -- If True, displays a progress bar of the download to stderr
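A usage sketch that applies to all four vgg*_bn factories; the 32 x 32 input size is an assumption:

    import torch
    from AdversarialTensors.models.vgg import vgg16_bn

    model = vgg16_bn(pretrained=False, device='cpu')
    model.eval()

    x = torch.randn(4, 3, 32, 32)  # assumed input size
    with torch.no_grad():
        logits = model(x)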

AdversarialTensors.models.wide_resnet module#

class AdversarialTensors.models.wide_resnet.BasicBlock(in_planes, out_planes, stride, dropRate=0.0)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.wide_resnet.NetworkBlock(nb_layers, in_planes, out_planes, block, stride, dropRate=0.0)[source]#

Bases: Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
class AdversarialTensors.models.wide_resnet.WideResNet(depth=28, num_classes=10, widen_factor=10, sub_block1=False, dropRate=0.0, bias_last=True)[source]#

Bases: Module

Based on code from yaodongyu/TRADES

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
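A sketch instantiating the class directly with its defaults (a WideResNet-28-10); TRADES-style WideResNets are built for 32 x 32 CIFAR inputs, which is assumed here:

    import torch
    from AdversarialTensors.models.wide_resnet import WideResNet

    model = WideResNet(depth=28, num_classes=10, widen_factor=10, dropRate=0.0)
    model.eval()

    x = torch.randn(4, 3, 32, 32)  # assumed 32 x 32 CIFAR-style input
    with torch.no_grad():
        logits = model(x)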

Module contents#