vision.pytorch.layers package#

Submodules#

vision.pytorch.layers.BatchChannelNorm module#

class vision.pytorch.layers.BatchChannelNorm.BatchChannelNorm2D#

Bases: torch.nn.Module

Implements Batch-Channel Normalization as proposed in Micro-Batch Training with Batch-Channel Normalization and Weight Standardization (<https://arxiv.org/abs/1903.10520>).

Parameters
  • num_groups (int) – number of groups to separate the channels into.

  • num_channels (int) – number of channels. C from an expected input of size (N, C, H, W).

  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – Update rate used for the running_mean and running_var computation. Default: 0.1.

  • device (torch.device) – Device to place the learnable parameters.

  • dtype (torch.dtype) – Data type of learnable parameters.

Shape:

input: (N, C, H, W)
output: (N, C, H, W) (same shape as input)

__init__(num_groups, num_channels, eps=1e-05, momentum=0.1, device=None, dtype=None)#
forward(input)#
reset_parameters() → None#
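
A minimal usage sketch, assuming the class is imported from the module path documented above; all constructor arguments are the ones documented for __init__:

    import torch
    from vision.pytorch.layers.BatchChannelNorm import BatchChannelNorm2D

    # Normalize a (N, C, H, W) activation: 32 channels split into 4 groups.
    bcn = BatchChannelNorm2D(num_groups=4, num_channels=32, eps=1e-5, momentum=0.1)
    x = torch.randn(8, 32, 28, 28)
    y = bcn(x)
    assert y.shape == x.shape  # output has the same shape as the input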

vision.pytorch.layers.ConvNormActBlock module#

class vision.pytorch.layers.ConvNormActBlock.ConvNormActBlock#

Bases: torch.nn.Sequential

Customizable Convolution -> Normalization -> Activation Block.

Parameters
  • in_channels (int) – Number of channels in the input image.

  • out_channels (int) – Number of channels produced by the convolution.

  • kernel_size (int or tuple) – Size of the convolving kernel.

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1.

  • padding (str, optional) – Controls the amount of padding applied to the input. Can be either valid or same. Default: valid.

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: False.

  • padding_mode (str, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’.

  • norm_layer (str, optional) – Type of normalization to be used. Supported norm layers can be found in ./layers/normalizations.py. Default: batchnorm2d.

  • norm_kwargs (dict, optional) – Keyword arguments passed to the norm layer during initialization. For group normalization, norm_kwargs must include a num_groups key-value pair; for layer normalization, it must include a normalized_shape key-value pair.

  • weight_standardization (bool, optional) – If True, standardize weights according to <https://arxiv.org/abs/1903.10520v2>. Default: False.

  • act (str, optional) – Activation to be used. Supported activation layers can be found in ../../../common/pytorch/model_utils/activations.py. Default: relu.

  • device (str, optional) – Device on which to place the conv layer.

  • dtype (torch.dtype, optional) – Data type of the weight and bias of the convolution layer.

__init__(in_channels, out_channels, kernel_size, stride=1, padding='valid', dilation=1, groups=1, bias=False, padding_mode='zeros', norm_layer='batchnorm2d', norm_kwargs=None, weight_standardization=False, act='relu', device=None, dtype=None, use_conv3d=False, affine=True)#

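A usage sketch for the block, using only parameters documented above (the import path follows the module name):

    import torch
    from vision.pytorch.layers.ConvNormActBlock import ConvNormActBlock

    # Conv2d(16 -> 32, 3x3, 'same' padding) -> BatchNorm2d -> ReLU.
    block = ConvNormActBlock(
        in_channels=16,
        out_channels=32,
        kernel_size=3,
        stride=1,
        padding="same",
        norm_layer="batchnorm2d",
        act="relu",
    )
    x = torch.randn(4, 16, 64, 64)
    y = block(x)  # the block is an nn.Sequential, so it is directly callable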

class vision.pytorch.layers.ConvNormActBlock.ConvNormActBlockModule#

Bases: torch.nn.Module

Customizable Convolution -> Normalization -> Activation block with custom namespace flexibility via nn.Module.

Parameters
  • in_channels (int) – Number of channels in the input image.

  • out_channels (int) – Number of channels produced by the convolution.

  • kernel_size (int or tuple) – Size of the convolving kernel.

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1.

  • padding (str, optional) – Controls the amount of padding applied to the input. Can be either valid or same. Default: valid.

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: False.

  • padding_mode (str, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’.

  • norm_layer (str, optional) – Type of normalization to be used. Supported norm layers can be found in ./layers/normalizations.py. Default: batchnorm2d.

  • norm_kwargs (dict, optional) – Keyword arguments passed to the norm layer during initialization. For group normalization, norm_kwargs must include a num_groups key-value pair; for layer normalization, it must include a normalized_shape key-value pair.

  • weight_standardization (bool, optional) – If True, standardize weights according to <https://arxiv.org/abs/1903.10520v2>. Default: False.

  • act (str, optional) – Activation to be used. Supported activation layers can be found in ../../../common/pytorch/model_utils/activations.py. Default: relu.

  • device (str, optional) – Device on which to place the conv layer.

  • dtype (torch.dtype, optional) – Data type of the weight and bias of the convolution layer.

  • custom_namespace (List[str], optional) – List of strings to control the namespace of the module.

__init__(in_channels, out_channels, kernel_size, stride=1, padding='valid', dilation=1, groups=1, bias=False, padding_mode='zeros', norm_layer='batchnorm2d', norm_kwargs=None, weight_standardization=False, act='relu', device=None, dtype=None, use_conv3d=False, affine=True, custom_namespace=None)#

forward(input)#
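
A brief sketch of the nn.Module variant. The custom_namespace argument is only loosely documented above, so the value shown is purely illustrative:

    import torch
    from vision.pytorch.layers.ConvNormActBlock import ConvNormActBlockModule

    # Same Conv -> Norm -> Act composition as ConvNormActBlock, exposed as an
    # nn.Module; custom_namespace (illustrative value) controls submodule names.
    block = ConvNormActBlockModule(
        in_channels=16,
        out_channels=32,
        kernel_size=3,
        padding="same",
        custom_namespace=["conv", "norm", "act"],
    )
    y = block(torch.randn(4, 16, 64, 64))
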
class vision.pytorch.layers.ConvNormActBlock.ConvNormActLayers#

Bases: object

Customizable Convolution -> Normalization -> Activation block. Returns a list of layers in the above order when the get_layers method is called.

Parameters
  • in_channels (int) – Number of channels in the input image.

  • out_channels (int) – Number of channels produced by the convolution.

  • kernel_size (int or tuple) – Size of the convolving kernel.

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1.

  • padding (str, optional) – Controls the amount of padding applied to the input. Can be either valid or same. Default: valid.

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: False.

  • padding_mode (str, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’.

  • norm_layer (str, optional) – Type of normalization to be used. Supported norm layers can be found in ./layers/normalizations.py. Default: batchnorm2d.

  • norm_kwargs (dict, optional) – Keyword arguments passed to the norm layer during initialization. For group normalization, norm_kwargs must include a num_groups key-value pair; for layer normalization, it must include a normalized_shape key-value pair.

  • weight_standardization (bool, optional) – If True, standardize weights according to <https://arxiv.org/abs/1903.10520v2>. Default: False.

  • act (str, optional) – Activation to be used. Supported activation layers can be found in ../../../common/pytorch/model_utils/activations.py. Default: relu.

  • device (str, optional) – Device on which to place the conv layer.

  • dtype (torch.dtype, optional) – Data type of the weight and bias of the convolution layer.

__init__(in_channels, out_channels, kernel_size, stride=1, padding='valid', dilation=1, groups=1, bias=False, padding_mode='zeros', norm_layer='batchnorm2d', norm_kwargs=None, weight_standardization=False, act='relu', device=None, dtype=None, use_conv3d=False, affine=True)#

get_layers()#
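
A sketch of how the layer factory might be used, assuming get_layers returns the conv, norm, and activation layers in that order, as stated in the class description:

    import torch.nn as nn
    from vision.pytorch.layers.ConvNormActBlock import ConvNormActLayers

    # Build the individual layers, then assemble them however the model requires.
    factory = ConvNormActLayers(in_channels=16, out_channels=32, kernel_size=3, padding="same")
    layers = factory.get_layers()  # [conv, norm, act] per the class description
    model = nn.Sequential(*layers)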

vision.pytorch.layers.GroupInstanceNorm module#

class vision.pytorch.layers.GroupInstanceNorm.GroupInstanceNorm#

Bases: torch.nn.Module

Uses torch.nn.GroupNorm to emulate InstanceNorm by setting the number of groups equal to the number of channels.

Parameters

num_channels (int) – number of channels. C from an expected input of size (N, C, H, W).

__init__(num_channels, eps=1e-05, affine=True, device=None, dtype=None)#
forward(input)#
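
A minimal sketch; per the description above, this behaves like nn.GroupNorm with the number of groups equal to the number of channels:

    import torch
    from vision.pytorch.layers.GroupInstanceNorm import GroupInstanceNorm

    gin = GroupInstanceNorm(num_channels=32)  # emulates InstanceNorm via GroupNorm
    x = torch.randn(4, 32, 28, 28)
    y = gin(x)  # same shape as the input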

vision.pytorch.layers.StandardizedConvolutionLayer module#

class vision.pytorch.layers.StandardizedConvolutionLayer.StdConv1d#

Bases: torch.nn.Conv1d

forward(inputs: torch.Tensor)#
class vision.pytorch.layers.StandardizedConvolutionLayer.StdConv2d#

Bases: torch.nn.Conv2d

forward(inputs: torch.Tensor)#
class vision.pytorch.layers.StandardizedConvolutionLayer.StdConv3d#

Bases: torch.nn.Conv3d

forward(inputs: torch.Tensor)#
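
Since these classes subclass the corresponding torch.nn.ConvNd layers, construction should mirror the standard conv layers; a sketch for the 2D case (the weight-standardization behaviour of forward is inferred from the module name and the paper referenced above):

    import torch
    from vision.pytorch.layers.StandardizedConvolutionLayer import StdConv2d

    # Drop-in replacement for nn.Conv2d; forward standardizes the weights
    # (see <https://arxiv.org/abs/1903.10520v2>) before convolving.
    conv = StdConv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
    y = conv(torch.randn(4, 16, 64, 64))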

vision.pytorch.layers.normalizations module#

vision.pytorch.layers.normalizations.get_normalization(normalization_str)#
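
A hedged sketch of the lookup helper, assuming the string keys match the norm_layer values used by ConvNormActBlock (e.g. batchnorm2d, group) and that the function returns the corresponding layer class:

    from vision.pytorch.layers.normalizations import get_normalization

    # Assumption: returns the normalization class registered under this name.
    norm_cls = get_normalization("batchnorm2d")
    norm = norm_cls(32)  # e.g. a BatchNorm2d-style layer over 32 channels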

vision.pytorch.layers.utils module#

class vision.pytorch.layers.utils.ModuleWrapperClass#

Bases: torch.nn.Module

__init__(fcn, name=None, kwargs=None)#
extra_repr() → str#
forward(input)#
vision.pytorch.layers.utils.adjust_channels(channels: int, width_multiplier: float, divisor: Optional[int] = 8, min_value: Optional[int] = None, round_limit: Optional[int] = 0.9) → int#
vision.pytorch.layers.utils.adjust_depth(num_layers: int, depth_multiplier: float)#
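
These helpers follow the usual width/depth model-scaling pattern; a hedged sketch of their intended use, assuming adjust_channels rounds channels * width_multiplier to a multiple of divisor and adjust_depth scales a repeat count:

    from vision.pytorch.layers.utils import adjust_channels, adjust_depth

    # Scale a network stage by width and depth multipliers (assumed semantics).
    channels = adjust_channels(32, width_multiplier=1.5)  # rounded to a multiple of 8
    num_layers = adjust_depth(4, depth_multiplier=1.2)    # scaled number of layers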

Module contents#