contilearn documentation


class chicken.model.Chicken(model: Module, *args, **kwargs)[source]

An incremental learning wrapper around a torch.nn.Module.

__init__(model, device: str = 'cpu', init_val: float = 0.1, max_mult: float = 1.0, matching_texts: List[str] = ('layernorm', 'bias', 'embeddings', 'layrnorm', 'layer_norm'), rank=None)[source]
Parameters:
  • model (torch.nn.Module, required)

  • device (str, optional) – The torch device to use (default 'cpu').

  • init_val (float, optional) – Maximum initial value for the mask; entries are drawn as mask ~ U[0, init_val] (default 0.1).

  • max_mult (float, optional) – Maximum possible value the mask can take, i.e. mask values lie in [0, max_mult] (default 1.0).

  • matching_texts (List[str], optional) – A list of layer-name patterns; matching layers skip the decomposition and reconstruction step (default ('layernorm', 'bias', 'embeddings', 'layrnorm', 'layer_norm')).

Examples

>>> from transformers import ViTModel
>>> from chicken.model import Chicken
>>> model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
>>> model = Chicken(model, device="cuda", init_val=0.05, max_mult=1.0)
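
A typical incremental-learning workflow might then look like the sketch below; "task1_masks.pt" is a placeholder path, and the training loop is only indicated (see update_backward below):

>>> model.add_class(["cat", "dog"])       # new class set -> new mask vectors
True
>>> model.set_train()                     # make the new mask parameters trainable
>>> # ... training loop: loss.backward() followed by model.update_backward() ...
>>> model.apply_policy_to_model()         # write the composed weights into the live model
>>> model.save_weights("task1_masks.pt")  # persist the learned masks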
add_class(class_names: List[str])[source]

Call this to add a new set of classes (creates a new mask vector per decomposed matrix).

Parameters:

class_names (List[str], required) – A list of class names.

Returns:

True if the classes were added successfully, False otherwise.

Return type:

bool

Examples

>>> model.add_class(["cat", "dog"])
True
apply_policy_to_model(mask_idx: int | None = None)[source]

Compose & write weights into the live model (fast in-place copy).

Parameters:

mask_idx (int, optional) – Index of the mask to apply to the model. If None, the mask chosen via set_mask (or the latest mask) is used (default None).

Examples

>>> model.apply_policy_to_model(1)
property class_map

Returns a string listing each mask index and the classes associated with it.

Returns:

string

Examples

>>> print(model.class_map)
CLASS MAP
1: cat, dog, horse, cow
2: mouse, lion
get_mask(mask_idx: int = -1)[source]

Returns the state dictionary of the selected mask.

Parameters:

mask_idx (int, optional) – The mask index; if not specified, the last mask is returned (default -1).

Returns:

state_dict: a state dict of the selected mask

Return type:

dict
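
Examples

A minimal sketch, assuming model is a Chicken instance with at least one mask added:

>>> mask_state = model.get_mask(1)   # state dict of mask index 1
>>> mask_state = model.get_mask()    # default -1 returns the latest mask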

property latest_mask_idx

Returns the latest mask index.

Return type:

int
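
Examples

A minimal sketch, assuming model is a Chicken instance:

>>> idx = model.latest_mask_idx       # integer index of the most recently added mask
>>> model.apply_policy_to_model(idx)  # e.g. apply the newest mask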

load_weights(path: str)[source]

Load the mask weights.

Parameters:

path (str, required) – Path to the .pt file containing the mask weights.
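
Examples

A minimal sketch; "chicken_masks.pt" is a placeholder path, not part of the API:

>>> model.load_weights("chicken_masks.pt")   # restore masks saved with save_weights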

save_weights(path: str)[source]

Save the mask weights to the given path.

Parameters:

path (str, required) – Path where the mask weights should be saved; should be a .pt file.
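
Examples

A minimal sketch; "chicken_masks.pt" is a placeholder path, not part of the API:

>>> model.save_weights("chicken_masks.pt")   # write the mask weights to a .pt file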

set_mask(mask_idx: int = 0)[source]

Set the selected mask

Parameters:

mask_idx (int, optional) – Index of the mask to select (default 0).

Returns:

True if the selected mask was set successfully.

Return type:

bool
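
Examples

A minimal sketch, assuming mask index 1 exists:

>>> model.set_mask(1)
True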

set_train(mask_idx: int | None = None)[source]

Set the learnable parameters to training mode.

Parameters:

mask_idx (int, optional) – If None, use the mask index chosen via set_mask (default None).
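
Examples

A minimal sketch, assuming model is a Chicken instance:

>>> model.set_train()    # train the mask chosen via set_mask
>>> model.set_train(2)   # or target mask index 2 explicitly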

toggle_mask(mask_value: bool = True, mask_idx: int | None = None)[source]

Turn the mask on or off.

Parameters:
  • mask_value (bool, optional) – Whether the mask should be on (True) or off (False) (default True).

  • mask_idx (int, optional) – If None, the last mask index is selected (default None).
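
Examples

A minimal sketch, assuming model is a Chicken instance:

>>> model.toggle_mask(False)             # disable the latest mask
>>> model.toggle_mask(True, mask_idx=1)  # re-enable mask index 1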

update_backward(mask_idx: int | None = None)[source]

Backpropagate through the learnable mask parameters using VJP. Requires that loss.backward() has populated dL/dW on base weights.

Parameters:

mask_idx (int, optional) – If None, use the mask selected via set_mask (default None).
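
Examples

A sketch of one training step; criterion, targets, and inputs are placeholders for the user's own objects, and calling model(...) assumes the wrapper forwards to the underlying module:

>>> outputs = model(inputs)              # forward pass through the masked model
>>> loss = criterion(outputs, targets)   # any task loss
>>> loss.backward()                      # populates dL/dW on the base weights
>>> model.update_backward()              # VJP step from dL/dW onto the mask parameters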
