archai.algos.divnas package

Submodules

archai.algos.divnas.analyse_activations module

archai.algos.divnas.analyse_activations.collect_features(rootfolder: str, subsampling_factor: int = 1) → Dict[str, List[numpy.array]]
    Walks the rootfolder for h5py files and loads them into the format required for analysis.

    Inputs:
        rootfolder: full path to folder containing h5 files which have activations
        subsampling_factor: every nth minibatch will be loaded to keep memory manageable

    Outputs:
        dictionary with edge name strings as keys; values are lists of np.array of shape [num_samples, feature_dim]
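
    A minimal usage sketch (the folder path here is hypothetical; the function itself walks the folder for .h5 activation dumps):

        from archai.algos.divnas.analyse_activations import collect_features

        # Load every 4th minibatch of saved activations to keep memory manageable.
        features = collect_features('/path/to/activation_dumps', subsampling_factor=4)
        for edge_name, feats in features.items():
            # Each value is a list of arrays of shape [num_samples, feature_dim].
            print(edge_name, len(feats), feats[0].shape)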

archai.algos.divnas.analyse_activations.compute_brute_force_sol(cov_kernel: numpy.array, budget: int) → Tuple[Tuple[Any], float]

archai.algos.divnas.analyse_activations.compute_correlation(covariance: numpy.array) → numpy.array

archai.algos.divnas.analyse_activations.compute_covariance_offline(feature_list: List[numpy.array]) → numpy.array
    Compute covariance matrix for high-dimensional features.
    feature_shape: (num_samples, feature_dim)
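
    For reference, a sketch of the quantity being computed, done with running sufficient statistics so the full sample matrix never has to be materialized (an illustration, not the module's exact implementation):

        import numpy as np

        def covariance_offline_sketch(feature_list):
            # feature_list: minibatches, each of shape (num_samples, feature_dim)
            n, s, ss = 0, 0.0, 0.0
            for feats in feature_list:
                n += feats.shape[0]
                s = s + feats.sum(axis=0)      # running sum of features
                ss = ss + feats.T @ feats      # running second moment
            mean = s / n
            return ss / n - np.outer(mean, mean)   # E[xxT] - E[x]E[x]T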

archai.algos.divnas.analyse_activations.compute_euclidean_dist_quantiles(feature_list: List[numpy.array], subsamplefactor=1) → List[Tuple[float, float]]
    Compute quantile distances between feature pairs.
    feature_list: List of features, each of shape (num_samples, feature_dim)
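
    Quantile summaries of pairwise distances are a common way to choose the sigma of an RBF kernel (the "median heuristic"). A minimal sketch of such a computation for a single feature matrix (illustrative; the function name and (quantile, distance) return layout are assumptions):

        import numpy as np

        def dist_quantiles_sketch(feats, quantiles=(0.1, 0.5, 0.9)):
            # feats: (num_samples, feature_dim); subsample rows first if large.
            diffs = feats[:, None, :] - feats[None, :, :]
            dists = np.sqrt((diffs ** 2).sum(axis=-1))
            iu = np.triu_indices_from(dists, k=1)   # unique pairs only
            return [(q, float(np.quantile(dists[iu], q))) for q in quantiles]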

archai.algos.divnas.analyse_activations.compute_marginal_gain(y: int, A: Set[int], S: Set[int], covariance: numpy.array) → float

archai.algos.divnas.analyse_activations.compute_rbf_kernel_covariance(feature_list: List[numpy.array], sigma=0.1) → numpy.array
    Compute RBF kernel covariance for high-dimensional features.
    feature_list: List of features, each of shape (num_samples, feature_dim)
    sigma: sigma of the RBF kernel
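
    A sketch of the kind of matrix this produces, assuming entry (i, j) is the average RBF similarity exp(-||a - b||^2 / (2 * sigma^2)) between paired samples of features i and j (the exact pairing is an assumption; consult the source for the estimator actually used):

        import numpy as np

        def rbf_cov_sketch(feature_list, sigma=0.1):
            # Assumes every entry has the same (num_samples, feature_dim) shape.
            k = len(feature_list)
            cov = np.eye(k)                       # k(x, x) = 1 on the diagonal
            for i in range(k):
                for j in range(i + 1, k):
                    d2 = ((feature_list[i] - feature_list[j]) ** 2).sum(axis=1)
                    cov[i, j] = cov[j, i] = np.exp(-d2 / (2 * sigma ** 2)).mean()
            return cov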

archai.algos.divnas.analyse_activations.create_submod_f(covariance: numpy.array) → Callable
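
    These utilities compose into a greedy subset-selection loop: starting from the empty set, repeatedly add the element with the largest marginal gain until the budget is exhausted, with compute_brute_force_sol available as an exact (exponential-time) reference. A sketch of such a loop (the greedy driver itself is illustrative, and the roles of A and S are inferred from the signature):

        from archai.algos.divnas.analyse_activations import compute_marginal_gain

        def greedy_select_sketch(covariance, budget):
            ground = set(range(covariance.shape[0]))   # S: ground set of indices
            selected = set()                           # A: greedily chosen subset
            while len(selected) < budget:
                best = max(ground - selected,
                           key=lambda y: compute_marginal_gain(y, selected, ground, covariance))
                selected.add(best)
            return selected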

archai.algos.divnas.divnas_cell module

class archai.algos.divnas.divnas_cell.Divnas_Cell(cell: archai.nas.cell.Cell)
    Bases: object

    Wrapper cell class for DivNAS-specific modifications.

archai.algos.divnas.divnas_cell.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
    Constructs a tensor with data.

    Warning
        torch.tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a NumPy ndarray and want to avoid a copy, use torch.as_tensor().

    Warning
        When data is a tensor x, torch.tensor() reads out 'the data' from whatever it is passed, and constructs a leaf variable. Therefore torch.tensor(x) is equivalent to x.clone().detach() and torch.tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.

    Args:
        data (array_like): Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
        dtype (torch.dtype, optional): the desired data type of returned tensor. Default: if None, infers data type from data.
        device (torch.device, optional): the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
        requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: False.
        pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

    Example:
        >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
        tensor([[ 0.1000,  1.2000],
                [ 2.2000,  3.1000],
                [ 4.9000,  5.2000]])

        >>> torch.tensor([0, 1])  # Type inference on data
        tensor([ 0,  1])

        >>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
        ...              dtype=torch.float64,
        ...              device=torch.device('cuda:0'))  # creates a torch.cuda.DoubleTensor
        tensor([[ 0.1111,  0.2222,  0.3333]], dtype=torch.float64, device='cuda:0')

        >>> torch.tensor(3.14159)  # Create a scalar (zero-dimensional tensor)
        tensor(3.1416)

        >>> torch.tensor([])  # Create an empty tensor (of size (0,))
        tensor([])

archai.algos.divnas.divnas_exp_runner module

class archai.algos.divnas.divnas_exp_runner.DivnasExperimentRunner(config_filename: str, base_name: str, clean_expdir=False)
    Bases: archai.nas.exp_runner.ExperimentRunner

    finalizers() → archai.nas.finalizers.Finalizers

    model_desc_builder() → archai.algos.divnas.divnas_model_desc_builder.DivnasModelDescBuilder

    trainer_class() → Optional[Type[archai.nas.arch_trainer.ArchTrainer]]
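
    A usage sketch (the config path is hypothetical, and run() is assumed to be the entry point inherited from ExperimentRunner; check the base class for the exact call):

        from archai.algos.divnas.divnas_exp_runner import DivnasExperimentRunner

        runner = DivnasExperimentRunner('confs/algos/divnas.yaml',
                                        base_name='divnas', clean_expdir=False)
        runner.run()  # assumed inherited entry point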

archai.algos.divnas.divnas_finalizers module

class archai.algos.divnas.divnas_finalizers.DivnasFinalizers
    Bases: archai.nas.finalizers.Finalizers

    finalize_cell(cell: archai.nas.cell.Cell, cell_index: int, model_desc: archai.nas.model_desc.ModelDesc, *args, **kwargs) → archai.nas.model_desc.CellDesc

    finalize_model(model: archai.nas.model.Model, to_cpu=True, restore_device=True) → archai.nas.model_desc.ModelDesc

    finalize_node(node: torch.nn.modules.container.ModuleList, node_index: int, node_desc: archai.nas.model_desc.NodeDesc, max_final_edges: int, *args, **kwargs) → archai.nas.model_desc.NodeDesc
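
    A sketch of how a finalizer is typically driven once search has produced a trained super-network (model stands in for an archai.nas.model.Model you already have; this is an assumed calling pattern, not code from the module):

        from archai.algos.divnas.divnas_finalizers import DivnasFinalizers

        finalizers = DivnasFinalizers()
        # Collapse the searched super-network into a concrete ModelDesc.
        final_desc = finalizers.finalize_model(model, to_cpu=True, restore_device=True)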

archai.algos.divnas.divnas_model_desc_builder module

class archai.algos.divnas.divnas_model_desc_builder.DivnasModelDescBuilder
    Bases: archai.nas.model_desc_builder.ModelDescBuilder

    build_nodes(stem_shapes: List[List[int]], conf_cell: archai.common.config.Config, cell_index: int, cell_type: archai.nas.model_desc.CellType, node_count: int, in_shape: List[int], out_shape: List[int]) → Tuple[List[List[int]], List[archai.nas.model_desc.NodeDesc]]

    pre_build(conf_model_desc: archai.common.config.Config) → None

archai.algos.divnas.divnas_rank_finalizer module

class archai.algos.divnas.divnas_rank_finalizer.DivnasRankFinalizers
    Bases: archai.nas.finalizers.Finalizers

    finalize_cell(cell: archai.nas.cell.Cell, cell_index: int, model_desc: archai.nas.model_desc.ModelDesc, *args, **kwargs) → archai.nas.model_desc.CellDesc

    finalize_model(model: archai.nas.model.Model, to_cpu=True, restore_device=True) → archai.nas.model_desc.ModelDesc

    finalize_node(node: torch.nn.modules.container.ModuleList, node_index: int, node_desc: archai.nas.model_desc.NodeDesc, max_final_edges: int, cov: numpy.array, cell: archai.nas.cell.Cell, node_id: int, *args, **kwargs) → archai.nas.model_desc.NodeDesc

archai.algos.divnas.divop module

class archai.algos.divnas.divop.DivOp(op_desc: archai.nas.model_desc.OpDesc, arch_params: Optional[archai.nas.arch_params.ArchParams], affine: bool)
    Bases: archai.nas.operations.Op

    The output of DivOp is the weighted output of all allowed primitives.

    PRIMITIVES = ['max_pool_3x3', 'avg_pool_3x3', 'skip_connect', 'sep_conv_3x3', 'sep_conv_5x5', 'dil_conv_3x3', 'dil_conv_5x5', 'none']

    property activations

    property collect_activations

    finalize() → Tuple[archai.nas.model_desc.OpDesc, Optional[float]]
        DivNAS with the default finalizer option needs this override; otherwise the finalizer in the base class returns the whole DivOp.

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note
            Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

    property num_primitive_ops

    ops() → Iterator[Tuple[archai.nas.operations.Op, float]]
        Return constituent ops; if this op is primitive, just return self.
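
    Conceptually, the forward pass of a mixed op like DivOp is a weighted combination of its primitives, as in DARTS-style search. A minimal sketch of that idea (illustrative only, not DivOp's actual implementation; softmax weighting is an assumption):

        import torch
        import torch.nn.functional as F

        def mixed_op_forward_sketch(x, primitive_ops, alphas):
            # primitive_ops: one nn.Module per entry in PRIMITIVES
            # alphas: raw architecture parameters, one per primitive
            weights = F.softmax(alphas, dim=0)
            return sum(w * op(x) for w, op in zip(weights, primitive_ops))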