ebes.model.PrimeNet package
Submodules
ebes.model.PrimeNet.learn_time_emb module
- class ebes.model.PrimeNet.learn_time_emb.MultiTimeAttention(input_dim, nhidden=16, embed_time=16, num_heads=1)
Bases: Module
- forward(query, key)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.learn_time_emb.Similarity(temp)
Bases: Module
Dot product or cosine similarity
- forward(x, y)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
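A minimal sketch of what a module of this form typically computes, assuming the SimCSE-style convention of cosine similarity divided by a temperature (the exact normalization used here is not documented above):

    import torch.nn as nn

    class CosSimWithTemp(nn.Module):
        """Hypothetical sketch: cosine similarity scaled by 1/temp."""
        def __init__(self, temp):
            super().__init__()
            self.temp = temp
            self.cos = nn.CosineSimilarity(dim=-1)

        def forward(self, x, y):
            # Smaller temp sharpens the similarity distribution.
            return self.cos(x, y) / self.temp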
- class ebes.model.PrimeNet.learn_time_emb.TimeBERT
Bases: Module
- forward(time_steps)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- learn_time_embedding(tt)
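learn_time_embedding(tt) plausibly follows the mTAN-style recipe of one learned linear time component concatenated with sinusoids of learned frequencies; a sketch under that assumption (embed_time=16 mirrors the default above):

    import torch
    import torch.nn as nn

    embed_time = 16
    periodic = nn.Linear(1, embed_time - 1)  # learned frequencies and phases
    linear = nn.Linear(1, 1)                 # one non-periodic component

    def learn_time_embedding(tt):
        # tt: (..., seq_len) raw time stamps -> (..., seq_len, embed_time)
        tt = tt.unsqueeze(-1)
        return torch.cat([linear(tt), torch.sin(periodic(tt))], dim=-1)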
ebes.model.PrimeNet.models module
- class ebes.model.PrimeNet.models.BertInterpHead(input_dim, hidden_size=128)
Bases: Module
- forward(first_token_tensor)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.models.BertPooler(hidden_size=128)
Bases: Module
- forward(first_token_tensor)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
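BertPooler appears to mirror the standard BERT pooler: a dense layer plus tanh over the first token's hidden state. A sketch assuming that standard form:

    import torch.nn as nn

    class Pooler(nn.Module):
        def __init__(self, hidden_size=128):
            super().__init__()
            self.dense = nn.Linear(hidden_size, hidden_size)
            self.activation = nn.Tanh()

        def forward(self, first_token_tensor):
            # first_token_tensor: (batch, hidden_size), e.g. the [CLS] position
            return self.activation(self.dense(first_token_tensor))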
- class ebes.model.PrimeNet.models.MultiTimeAttention(input_dim, nhidden=16, embed_time=16, num_heads=1)
Bases: Module
- attention(query, key, value, mask=None, dropout=None)
Compute ‘Scaled Dot Product Attention’
- forward(query, key, value, mask=None, dropout=None)
Compute ‘Scaled Dot Product Attention’
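'Scaled Dot Product Attention' here is the standard softmax(QK^T / sqrt(d_k))V; a self-contained sketch (the mask convention is an assumption):

    import math
    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(query, key, value, mask=None, dropout=None):
        d_k = query.size(-1)
        scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)  # assumed convention
        p_attn = F.softmax(scores, dim=-1)
        if dropout is not None:
            p_attn = dropout(p_attn)
        return torch.matmul(p_attn, value), p_attn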
- class ebes.model.PrimeNet.models.Similarity(temp)
Bases: Module
Dot product or cosine similarity
- forward(x, y)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.models.SwitchTimeBERT(config)
Bases: Module
- encode(x, switch_key, is_pooling=False)
- forward(x, time_steps, switch_key, query_time_steps=None)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- learn_time_embedding(tt)
- time_embedding(pos, d_model)
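time_embedding(pos, d_model) is presumably the fixed sinusoidal alternative to learn_time_embedding, used when learn_emb=False, with freq as the base frequency (an assumption). A sketch of the usual construction:

    import math
    import torch

    def sinusoidal_time_embedding(pos, d_model, freq=10.0):
        # pos: (batch, seq_len) time stamps -> (batch, seq_len, d_model); d_model even
        pe = torch.zeros(pos.shape[0], pos.shape[1], d_model)
        position = pos.unsqueeze(-1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * -(math.log(freq) / d_model))
        pe[:, :, 0::2] = torch.sin(position * div_term)
        pe[:, :, 1::2] = torch.cos(position * div_term)
        return pe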
- class ebes.model.PrimeNet.models.TimeBERT(input_dim, max_length, hidden_size=128, embed_time=128, num_heads=1, freq=10, learn_emb=True, dropout=0.3, pooling='bert')
Bases: Module
- encode(x, is_pooling=False)
- forward(x, time_steps, query_time_steps=None, pretrain=False)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- learn_time_embedding(tt)
- time_embedding(pos, d_model)
- class ebes.model.PrimeNet.models.TimeBERTConfig(input_dim, dataset=None, pretrain_tasks=None, cls_query=torch.linspace(0, 1, 128), hidden_size=16, embed_time=16, num_heads=1, learn_emb=True, freq=10.0, pooling='ave', classify_pertp=False, max_length=128, dropout=0.3, temp=0.05, switch_keys=['pretraining', 'classification'])
Bases: object
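TimeBERTConfig is a plain hyperparameter container; the cls_query default is 128 evenly spaced query times on [0, 1] (i.e. torch.linspace(0, 1, 128)). An illustrative construction (input_dim=4 is arbitrary):

    import torch
    from ebes.model.PrimeNet.models import TimeBERTConfig

    config = TimeBERTConfig(
        input_dim=4,                          # feature dimension of the event stream
        cls_query=torch.linspace(0, 1, 128),  # matches the default
        hidden_size=16,
        embed_time=16,
        num_heads=1,
        pooling='ave',
    )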
- class ebes.model.PrimeNet.models.TimeBERTForMultiTask(input_dim, max_length, n_classes, hidden_size=128, pretrain=True, learn_emb=True, pooling='bert', dropout=0.3, freq=10, num_heads=1, embed_time=128, pretrain_task=None, temp=0.05)
Bases: BaseModel
- forward(seq)
x: batch_size, num_seq, seq_len, (input_dim x 3)
time_steps: batch_size, num_seq, seq_len
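Per the docstring, the feature tensor packs three blocks of size input_dim along the last axis. A shape sketch only; how these tensors are bundled into the seq argument, and what the three blocks contain, is not specified here:

    import torch

    batch_size, num_seq, seq_len, input_dim = 8, 2, 64, 4
    x = torch.randn(batch_size, num_seq, seq_len, input_dim * 3)
    time_steps = torch.rand(batch_size, num_seq, seq_len)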
- ebes.model.PrimeNet.models.isnan(x)
ebes.model.PrimeNet.modules module
- class ebes.model.PrimeNet.modules.Attention(*args, **kwargs)
Bases: Module
Compute ‘Scaled Dot Product Attention’
- forward(query, key, value, mask=None, dropout=None)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.modules.GELU(*args, **kwargs)
Bases: Module
Paper Section 3.4, last paragraph: note that BERT used GELU instead of ReLU.
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
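The BERT-paper GELU referenced here is commonly implemented with the tanh approximation; a sketch under that assumption:

    import math
    import torch

    def gelu(x):
        # tanh approximation of GELU from the original BERT code
        return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi)
                                           * (x + 0.044715 * torch.pow(x, 3))))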
- class ebes.model.PrimeNet.modules.LayerNorm(features, eps=1e-12)
Bases: Module
Construct a layernorm module (see citation for details).
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
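Presumably the standard layer norm with learned scale and shift, normalizing over the last dimension; a sketch (the exact placement of eps is a common variant and an assumption here):

    import torch
    import torch.nn as nn

    class LayerNorm(nn.Module):
        def __init__(self, features, eps=1e-12):
            super().__init__()
            self.scale = nn.Parameter(torch.ones(features))
            self.shift = nn.Parameter(torch.zeros(features))
            self.eps = eps

        def forward(self, x):
            mean = x.mean(-1, keepdim=True)
            std = x.std(-1, keepdim=True)
            return self.scale * (x - mean) / (std + self.eps) + self.shift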
- class ebes.model.PrimeNet.modules.MultiHeadedAttention(h, d_model, dropout=0.1)
Bases: Module
Take in model size and number of heads.
- forward(query, key, value, mask=None)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.modules.OutputLayer(hidden_dim)
Bases: Module
Output layer for the BERT model
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.modules.PositionwiseFeedForward(d_model, d_ff, dropout=0.1)
Bases: Module
Implements FFN equation.
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
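The FFN equation is the one from Attention Is All You Need, FFN(x) = max(0, x W1 + b1) W2 + b2 with dropout in between; a sketch (the activation actually used by this module may differ, e.g. the GELU above):

    import torch.nn as nn

    class FeedForward(nn.Module):
        def __init__(self, d_model, d_ff, dropout=0.1):
            super().__init__()
            self.w_1 = nn.Linear(d_model, d_ff)
            self.w_2 = nn.Linear(d_ff, d_model)
            self.dropout = nn.Dropout(dropout)

        def forward(self, x):
            return self.w_2(self.dropout(self.w_1(x).relu()))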
- class ebes.model.PrimeNet.modules.SublayerConnection(size, dropout)
Bases: Module
A residual connection followed by a layer norm.
- forward(x, sublayer)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
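The usual form, as in the Annotated Transformer; whether this implementation normalizes before or after the sublayer is an assumption:

    import torch.nn as nn

    class Residual(nn.Module):
        def __init__(self, size, dropout):
            super().__init__()
            self.norm = nn.LayerNorm(size)
            self.dropout = nn.Dropout(dropout)

        def forward(self, x, sublayer):
            # sublayer is a callable, e.g. lambda t: attn(t, t, t)
            return x + self.dropout(sublayer(self.norm(x)))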
- class ebes.model.PrimeNet.modules.SwitchTransformerBlock(hidden, attn_heads, feed_forward_hidden, dropout, switch_keys)
Bases: Module
Bidirectional Encoder = Transformer (self-attention); Transformer = MultiHead_Attention + Feed_Forward with sublayer connection
- forward(x, mask=None, switch_key=None)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ebes.model.PrimeNet.modules.TransformerBlock(hidden, attn_heads, feed_forward_hidden, dropout)
Bases: Module
Bidirectional Encoder = Transformer (self-attention); Transformer = MultiHead_Attention + Feed_Forward with sublayer connection
- forward(x, mask=None)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
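Putting the pieces together, a block with this signature would typically be driven as follows (tensor sizes are illustrative; mask is left at its None default):

    import torch
    from ebes.model.PrimeNet.modules import TransformerBlock

    block = TransformerBlock(hidden=128, attn_heads=4,
                             feed_forward_hidden=512, dropout=0.1)
    x = torch.randn(8, 64, 128)  # (batch, seq_len, hidden)
    out = block(x)               # expected: same shape, contextualized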