storm_kit.geom.nn_model.network_macros module

MLP(channels, dropout_ratio=0.0, batch_norm=False, act_fn=<class 'torch.nn.modules.activation.ReLU'>, layer_norm=False, nerf=True)[source]

Automatically generate an MLP from the given channel sizes and options.

Parameters
  • channels (int) – number of channels in input

  • dropout_ratio (float, optional) – dropout used after every layer. Defaults to 0.0.

  • batch_norm (bool, optional) – batch norm after every layer. Defaults to False.

  • act_fn (activation class, optional) – activation function applied after every layer. Defaults to ReLU.

  • layer_norm (bool, optional) – layer norm after every layer. Defaults to False.

  • nerf (bool, optional) – use positional encoding (x->[sin(x),cos(x)]). Defaults to True.

Returns

torch.nn.Sequential container of the generated layers
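As a hedged sketch of what such a generator might look like (the helper name build_mlp and the assumption that channels is a list of layer widths are illustrative, not storm_kit's actual implementation; the nerf option is omitted here):

```python
import torch
import torch.nn as nn

def build_mlp(channels, dropout_ratio=0.0, batch_norm=False,
              act_fn=nn.ReLU, layer_norm=False):
    # Hypothetical sketch: channels is assumed to be a list of layer
    # widths, e.g. [7, 256, 128] -> Linear(7,256) -> Linear(256,128).
    layers = []
    for i in range(len(channels) - 1):
        layers.append(nn.Linear(channels[i], channels[i + 1]))
        if batch_norm:
            layers.append(nn.BatchNorm1d(channels[i + 1]))
        if layer_norm:
            layers.append(nn.LayerNorm(channels[i + 1]))
        layers.append(act_fn())
        if dropout_ratio > 0.0:
            layers.append(nn.Dropout(dropout_ratio))
    return nn.Sequential(*layers)

net = build_mlp([7, 256, 128])
out = net(torch.randn(4, 7))
print(out.shape)
```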

class MLPRegression(input_dims, output_dims, mlp_layers=[256, 128, 128], dropout_ratio=0.0, batch_norm=False, scale_mlp_units=1.0, act_fn=<class 'torch.nn.modules.activation.ELU'>, layer_norm=False, nerf=False)[source]

Bases: torch.nn.modules.module.Module

Create an instance of an MLP regression model.

Parameters
  • input_dims (int) – number of channels

  • output_dims (int) – output channel size

  • mlp_layers (list, optional) – perceptrons in each layer. Defaults to [256, 128, 128].

  • dropout_ratio (float, optional) – dropout after every layer. Defaults to 0.0.

  • batch_norm (bool, optional) – batch norm after every layer. Defaults to False.

  • scale_mlp_units (float, optional) – Quick way to scale up and down the number of perceptrons, as this gets multiplied with values in mlp_layers. Defaults to 1.0.

  • act_fn (activation class, optional) – activation function applied after every layer. Defaults to ELU.

  • layer_norm (bool, optional) – layer norm after every layer. Defaults to False.

  • nerf (bool, optional) – use positional encoding (x->[sin(x),cos(x)]). Defaults to False.
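The positional encoding behind the nerf option (x -> [sin(x), cos(x)]) can be sketched in plain Python; nerf_encode is a hypothetical helper name used here for illustration only:

```python
import math

def nerf_encode(x):
    # Positional encoding as described above: each input value is
    # replaced by its sine and cosine, doubling the feature count.
    return [math.sin(v) for v in x] + [math.cos(v) for v in x]

print(nerf_encode([0.0]))  # [0.0, 1.0]
```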

forward(x, *args)[source]

Forward pass through the network.

reset_parameters()[source]

Use this function to re-initialize the network weights. Has little effect for an MLP.

he_init(param)[source]

Initialize weights with He (Kaiming) initialization.

Parameters

param (network params) – params to initialize.

scale_to_base(data, norm_dict, key)[source]

Scale the tensor back to the original units.

Parameters
  • data (tensor) – input tensor to scale

  • norm_dict (Dict) – normalization dictionary of the form dict={key: {'mean': ..., 'std': ...}}

  • key (str) – key of the data

Returns

output scaled tensor

Return type

tensor

scale_to_net(data, norm_dict, key)[source]

Scale the tensor to the network's normalized range.

Parameters
  • data (tensor) – input tensor to scale

  • norm_dict (Dict) – normalization dictionary of the form dict={key: {'mean': ..., 'std': ...}}

  • key (str) – key of the data

Returns

output scaled tensor

Return type

tensor
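Assuming standard z-score normalization (the docstrings do not spell out the formula), scale_to_net and scale_to_base are plausibly inverses of each other. The list-based sketch below illustrates that relationship; the real functions operate on tensors:

```python
def scale_to_net(data, norm_dict, key):
    # Assumed z-score normalization: (x - mean) / std
    m, s = norm_dict[key]['mean'], norm_dict[key]['std']
    return [(v - m) / s for v in data]

def scale_to_base(data, norm_dict, key):
    # Assumed inverse transform: x * std + mean
    m, s = norm_dict[key]['mean'], norm_dict[key]['std']
    return [v * s + m for v in data]

norm = {'q': {'mean': 1.0, 'std': 2.0}}
x = [3.0, 5.0]
roundtrip = scale_to_base(scale_to_net(x, norm, 'q'), norm, 'q')
print(roundtrip)  # [3.0, 5.0]
```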

weights_init(m)[source]

Initialize the weights of a neural network.

Parameters

m (network params) – pass in model.parameters()
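In PyTorch, an initializer of this shape is conventionally applied with Module.apply, which visits every submodule. The sketch below guesses at that pattern; weights_init_sketch is a hypothetical stand-in, not storm_kit's actual function body:

```python
import torch
import torch.nn as nn

def weights_init_sketch(m):
    # Called once per submodule by model.apply(); only Linear
    # layers are re-initialized here.
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.apply(weights_init_sketch)
```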

xavier(param)[source]

Initialize weights with Xavier initialization.

Parameters

param (network params) – params to initialize.
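A hedged sketch of how xavier and he_init might act on a single weight parameter, using PyTorch's built-in initializers (the names are reused for illustration; the actual bodies may differ):

```python
import torch
import torch.nn as nn

def xavier(param):
    # Xavier (Glorot) uniform init; skip 1-D params such as biases.
    if param.dim() > 1:
        nn.init.xavier_uniform_(param)

def he_init(param):
    # He (Kaiming) uniform init, suited to ReLU-family activations.
    if param.dim() > 1:
        nn.init.kaiming_uniform_(param, nonlinearity='relu')

w = torch.empty(8, 4)
xavier(w)
he_init(w)
```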