pykt.models package
Submodules
pykt.models.akt module
- class pykt.models.akt.AKT(n_question, n_pid, d_model, n_blocks, dropout, d_ff=256, kq_same=1, final_fc_dim=512, num_attn_heads=8, separate_qa=False, l2=1e-05, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q_data, target, pid_data=None, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
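A minimal usage sketch (all sizes are hypothetical, and the unpacking of the return value assumes the model returns predictions together with AKT's Rasch-embedding regularization term; check the source for the exact contract). Per the note above, call the Module instance rather than forward directly:

```python
import torch
from pykt.models.akt import AKT

# Hypothetical sizes, for illustration only.
model = AKT(n_question=100, n_pid=500, d_model=256, n_blocks=1, dropout=0.05)

batch_size, seq_len = 8, 200
q_data = torch.randint(0, 100, (batch_size, seq_len))    # concept/question ids
target = torch.randint(0, 2, (batch_size, seq_len))      # binary responses
pid_data = torch.randint(0, 500, (batch_size, seq_len))  # problem ids

# Calling the instance (not model.forward) runs registered hooks.
preds, c_reg_loss = model(q_data, target, pid_data)  # assumed return pair
```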
- class pykt.models.akt.Architecture(n_question, n_blocks, d_model, d_feature, d_ff, n_heads, dropout, kq_same, model_type, emb_type)[source]
Bases: Module
- forward(q_embed_data, qa_embed_data, pid_embed_data)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt.CosinePositionalEmbedding(d_model, max_len=512)[source]
Bases: Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
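The class name suggests the standard fixed sinusoidal positional table; a self-contained sketch of that scheme (an assumption about the implementation, shown for even d_model):

```python
import math
import torch

def sinusoidal_positions(max_len: int, d_model: int) -> torch.Tensor:
    """Fixed sin/cos positional table of shape [1, max_len, d_model]."""
    pe = torch.zeros(max_len, d_model)
    position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
    return pe.unsqueeze(0)

# forward(x) would then typically add the first x.size(1) positions to x.
```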
- class pykt.models.akt.Dim(value)[source]
Bases: IntEnum
An enumeration.
- batch = 0
- feature = 2
- seq = 1
- class pykt.models.akt.LearnablePositionalEmbedding(d_model, max_len=512)[source]
Bases: Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt.MultiHeadAttention(d_model, d_feature, n_heads, dropout, kq_same, bias=True, emb_type='qid')[source]
Bases: Module
- forward(q, k, v, mask, zero_pad, pdiff=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt.TransformerLayer(d_model, d_feature, d_ff, n_heads, dropout, kq_same, emb_type)[source]
Bases: Module
- forward(mask, query, key, values, apply_pos=True, pdiff=None)[source]
- Input:
block: object of type BasicBlock(nn.Module); it contains masked_attn_head objects of type MultiHeadAttention(nn.Module).
mask: 0 means the block can peek only past values; 1 means the block can peek current and past values.
query: Query. In the transformer paper it is the input for both encoder and decoder.
key: Keys. In the transformer paper it is the input for both encoder and decoder.
values: Values. In the transformer paper it is the input for the encoder and the encoded output for the decoder (in the masked attention part).
- Output:
query: the input, transformed by the layer, is returned.
- training: bool
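In the usual implementation of these mask semantics, both settings are upper-triangular no-peek masks that differ only in the diagonal offset; a sketch assuming the common numpy/torch construction:

```python
import numpy as np
import torch

seq_len = 5

# mask == 1: the block can peek current and past values (offset k=1).
nopeek = np.triu(np.ones((1, 1, seq_len, seq_len)), k=1).astype('uint8')
mask_current_and_past = torch.from_numpy(nopeek) == 0  # True where attention is allowed

# mask == 0: the block can peek only strictly past values (k=0 also hides the diagonal).
nopeek = np.triu(np.ones((1, 1, seq_len, seq_len)), k=0).astype('uint8')
mask_past_only = torch.from_numpy(nopeek) == 0
```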
pykt.models.akt_que module
- class pykt.models.akt_que.AKTQue(num_q, num_c, emb_size, n_blocks=1, dropout=0.1, emb_type='qid', kq_same=1, final_fc_dim=512, num_attn_heads=8, separate_qa=False, l2=1e-05, d_ff=256, emb_path='', pretrain_dim=768, device='cpu', seed=0)[source]
Bases: QueBaseModel
- training: bool
- class pykt.models.akt_que.AKTQueNet(num_q, num_c, emb_size, n_blocks, dropout, d_ff=256, kq_same=1, final_fc_dim=512, num_attn_heads=8, separate_qa=False, l2=1e-05, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, c, r)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt_que.Architecture(num_q, n_blocks, d_model, d_feature, d_ff, n_heads, dropout, kq_same, model_type)[source]
Bases: Module
- forward(q_embed_data, qa_embed_data)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt_que.CosinePositionalEmbedding(d_model, max_len=512)[source]
Bases: Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt_que.Dim(value)[source]
Bases: IntEnum
An enumeration.
- batch = 0
- feature = 2
- seq = 1
- class pykt.models.akt_que.LearnablePositionalEmbedding(d_model, max_len=512)[source]
Bases: Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt_que.MultiHeadAttention(d_model, d_feature, n_heads, dropout, kq_same, bias=True)[source]
Bases: Module
- forward(q, k, v, mask, zero_pad)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.akt_que.TransformerLayer(d_model, d_feature, d_ff, n_heads, dropout, kq_same)[source]
Bases: Module
- forward(mask, query, key, values, apply_pos=True)[source]
- Input:
block: object of type BasicBlock(nn.Module); it contains masked_attn_head objects of type MultiHeadAttention(nn.Module).
mask: 0 means the block can peek only past values; 1 means the block can peek current and past values.
query: Query. In the transformer paper it is the input for both encoder and decoder.
key: Keys. In the transformer paper it is the input for both encoder and decoder.
values: Values. In the transformer paper it is the input for the encoder and the encoded output for the decoder (in the masked attention part).
- Output:
query: the input, transformed by the layer, is returned.
- training: bool
pykt.models.atkt module
- class pykt.models.atkt.ATKT(num_c, skill_dim, answer_dim, hidden_dim, attention_dim=80, epsilon=10, beta=0.2, dropout=0.2, emb_type='qid', emb_path='', fix=True)[source]
Bases: Module
- forward(skill, answer, perturbation=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.deep_irt module
- class pykt.models.deep_irt.DeepIRT(num_c, dim_s, size_m, dropout=0.2, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, r, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.dkt module
- class pykt.models.dkt.DKT(num_c, emb_size, dropout=0.1, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, r)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
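A minimal usage sketch (sizes hypothetical; the output-shape comment reflects DKT's conventional per-concept prediction and should be verified against the source):

```python
import torch
from pykt.models.dkt import DKT

# Hypothetical sizes, for illustration only.
model = DKT(num_c=100, emb_size=200)

batch_size, seq_len = 8, 50
q = torch.randint(0, 100, (batch_size, seq_len))  # concept ids
r = torch.randint(0, 2, (batch_size, seq_len))    # binary responses

y = model(q, r)  # conventionally [batch_size, seq_len, num_c] correctness probabilities
```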
pykt.models.dkt_forget module
- class pykt.models.dkt_forget.CIntegration(num_rgap, num_sgap, num_pcount, emb_dim)[source]
Bases: Module
- forward(vt, rgap, sgap, pcount)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.dkt_forget.DKTForget(num_c, num_rgap, num_sgap, num_pcount, emb_size, dropout=0.1, emb_type='qid', emb_path='')[source]
Bases: Module
- forward(q, r, dgaps)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.dkt_plus module
- class pykt.models.dkt_plus.DKTPlus(num_c, emb_size, lambda_r, lambda_w1, lambda_w2, dropout=0.1, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, r)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.dkt_que module
- class pykt.models.dkt_que.DKTQue(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu', seed=0)[source]
Bases: QueBaseModel
- training: bool
- class pykt.models.dkt_que.DKTQueNet(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu')[source]
Bases: Module
- forward(q, c, r)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.dkvmn module
- class pykt.models.dkvmn.DKVMN(num_c, dim_s, size_m, dropout=0.2, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, r, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.evaluate_model module
- pykt.models.evaluate_model.evaluate_question(model, test_loader, model_name, fusion_type=['early_fusion', 'late_fusion'], save_path='')[source]
- pykt.models.evaluate_model.evaluate_splitpred_question(model, data_config, testf, model_name, save_path='', use_pred=False, train_ratio=0.2, atkt_pad=False)[source]
- pykt.models.evaluate_model.predict_each_group(dtotal, dcur, dforget, curdforget, is_repeat, qidx, uid, idx, model_name, model, t, end, fout, atkt_pad=False, maxlen=200)[source]
Use the predicted result as the input for the next question.
- pykt.models.evaluate_model.predict_each_group2(dtotal, dcur, dforget, curdforget, is_repeat, qidx, uid, idx, model_name, model, t, end, fout, atkt_pad=False, maxlen=200)[source]
Do not use the predicted result.
- pykt.models.evaluate_model.prepare_data(model_name, is_repeat, qidx, dcur, curdforget, dtotal, dforget, t, end, maxlen=200)[source]
pykt.models.gkt module
- class pykt.models.gkt.EraseAddGate(feature_dim, num_c, bias=True)[source]
Bases: Module
Erase & Add Gate module.
Note: this erase & add gate is a bit different from that in DKVMN. For more information about the erase & add gate, please refer to the paper "Dynamic Key-Value Memory Networks for Knowledge Tracing" (https://arxiv.org/abs/1611.08108).
- Parameters
nn (_type_) – _description_
- forward(x)[source]
- Params:
x: input feature matrix
- Shape:
x: [batch_size, num_c, feature_dim]
res: [batch_size, num_c, feature_dim]
- Returns
the feature matrix with old information erased and new information added. The GKT paper did not provide a detailed explanation of this erase-add gate. As the erase-add gate in GKT has only one input parameter, it differs from that of DKVMN: the input matrix is used to build the erase and add gates, rather than the $\mathbf{v}_{t}$ vector used in DKVMN.
- Return type
res
- training: bool
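Following the description above, in which both gates are built from the input matrix itself, a minimal sketch of the erase-add update (illustrative only; the actual module may apply additional weighting):

```python
import torch
import torch.nn as nn

class EraseAddSketch(nn.Module):
    """Erase-add update driven by the input x alone (illustrative)."""
    def __init__(self, feature_dim: int):
        super().__init__()
        self.erase = nn.Linear(feature_dim, feature_dim)
        self.add = nn.Linear(feature_dim, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch_size, num_c, feature_dim]
        erase_gate = torch.sigmoid(self.erase(x))  # what to forget
        add_feat = torch.tanh(self.add(x))         # what to write
        return x * (1 - erase_gate) + add_feat     # res: same shape as x
```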
- class pykt.models.gkt.GKT(num_c, hidden_dim, emb_size, graph_type='dense', graph=None, dropout=0.5, emb_type='qid', emb_path='', bias=True)[source]
Bases: Module
Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network
- Parameters
num_c (int) – total number of unique questions
hidden_dim (int) – hidden dimension for MLP
emb_size (int) – embedding dimension for question embedding layer
graph_type (str, optional) – graph type, dense or transition. Defaults to “dense”.
graph (_type_, optional) – graph. Defaults to None.
dropout (float, optional) – dropout. Defaults to 0.5.
emb_type (str, optional) – emb_type. Defaults to “qid”.
emb_path (str, optional) – emb_path. Defaults to “”.
bias (bool, optional) – add bias for DNN. Defaults to True.
- forward(q, r)[source]
_summary_
- Parameters
q (_type_) – _description_
r (_type_) – _description_
- Returns
the predicted probability of a correct answer for each question at the next timestamp
- Return type
list
- training: bool
- class pykt.models.gkt.MLP(input_dim, hidden_dim, output_dim, dropout=0.0, bias=True)[source]
Bases: Module
Two-layer fully-connected ReLU net with batch norm.
- forward(inputs)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.gkt_utils module
- pykt.models.gkt_utils.build_dense_graph(concept_num)[source]
Generate a dense graph.
- Parameters
concept_num (int) – number of concepts
- Returns
graph
- Return type
numpy
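A dense graph over concepts is conventionally a fully connected adjacency matrix with self-loops removed and uniform row weights; a sketch under that assumption (the documented return type is simply "numpy"):

```python
import numpy as np

def dense_graph_sketch(concept_num: int) -> np.ndarray:
    """Fully connected concept graph: uniform off-diagonal weights, no self-loops."""
    graph = np.ones((concept_num, concept_num)) / (concept_num - 1)
    np.fill_diagonal(graph, 0)
    return graph
```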
pykt.models.hawkes module
- class pykt.models.hawkes.HawkesKT(n_skills, n_problems, emb_size, time_log, emb_type='qid')[source]
Bases: Module
- forward(skills, problems, times, labels, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.iekt module
- class pykt.models.iekt.IEKT(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qid', emb_path='', pretrain_dim=768, device='cpu', seed=0)[source]
Bases: QueBaseModel
- training: bool
- class pykt.models.iekt.IEKTNet(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qc_merge', emb_path='', pretrain_dim=768, device='cpu')[source]
Bases: Module
- get_ques_representation(q, c)[source]
Get the question representation (Equation 3).
- Parameters
q (_type_) – question ids
c (_type_) – concept ids
- Returns
_description_
- Return type
_type_
- obtain_v(q, c, h, x, emb)[source]
_summary_
- Parameters
q (_type_) – _description_
c (_type_) – _description_
h (_type_) – _description_
x (_type_) – _description_
emb (_type_) – m_t
- Returns
_description_
- Return type
_type_
- training: bool
pykt.models.iekt_ce module
- class pykt.models.iekt_ce.IEKTCE(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qid', emb_path='', pretrain_dim=768, device='cpu', seed=0, train_mode='sample')[source]
Bases: QueBaseModel
- training: bool
- class pykt.models.iekt_ce.IEKTNet(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qc_merge', emb_path='', pretrain_dim=768, device='cpu', train_mode='sample')[source]
Bases: Module
- forward(data)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- get_ques_representation(q, c)[source]
Get the question representation (Equation 3).
- Parameters
q (_type_) – question ids
c (_type_) – concept ids
- Returns
_description_
- Return type
_type_
- obtain_v(q, c, h, x, emb)[source]
_summary_
- Parameters
q (_type_) – _description_
c (_type_) – _description_
h (_type_) – _description_
x (_type_) – _description_
emb (_type_) – m_t
- Returns
_description_
- Return type
_type_
- training: bool
pykt.models.iekt_utils module
- class pykt.models.iekt_utils.funcs(n_layer, hidden_dim, output_dim, dpo)[source]
Bases: Module
Classifier decoder implemented with an MLP.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.iekt_utils.funcsgru(n_layer, hidden_dim, output_dim, dpo)[source]
Bases: Module
Classifier decoder implemented with an MLP.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.iekt_utils.mygru(n_layer, input_dim, hidden_dim)[source]
Bases: Module
Custom GRU cell.
- forward(x, h)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.init_model module
pykt.models.kqn module
- class pykt.models.kqn.KQN(n_skills: int, n_hidden: int, n_rnn_hidden: int, n_mlp_hidden: int, dropout, n_rnn_layers: int = 1, rnn_type='lstm', emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, r, qshft, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.loss module
pykt.models.lpkt module
- class pykt.models.lpkt.LPKT(n_at, n_it, n_exercise, n_question, d_a, d_e, d_k, gamma=0.03, dropout=0.2, q_matrix='', emb_type='qid', emb_path='', pretrain_dim=768, use_time=True)[source]
Bases: Module
- forward(e_data, a_data, it_data=None, at_data=None, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.lpkt_utils module
pykt.models.qdkt module
- class pykt.models.qdkt.QDKT(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu', seed=0, mlp_layer_num=1, other_config={}, **kwargs)[source]
Bases: QueBaseModel
- training: bool
- class pykt.models.qdkt.QDKTNet(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu', mlp_layer_num=1, other_config={})[source]
Bases: Module
- forward(q, c, r, data=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.que_base_model module
- class pykt.models.que_base_model.QueBaseModel(model_name, emb_type, emb_path, pretrain_dim, device, seed=0)[source]
Bases: Module
- compile(optimizer, lr=0.001, loss='binary_crossentropy', metrics=None)[source]
- Parameters
optimizer – String (name of optimizer) or optimizer instance. See [optimizers](https://pytorch.org/docs/stable/optim.html).
loss – String (name of objective function) or objective function. See [losses](https://pytorch.org/docs/stable/nn.functional.html#loss-functions).
metrics – List of metrics to be evaluated by the model during training and testing. Typically you will use metrics=['accuracy'].
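A usage sketch with hypothetical argument values, assuming string names are accepted as the docstring states (DKTQue is one QueBaseModel subclass):

```python
from pykt.models.dkt_que import DKTQue

# Hypothetical sizes, for illustration only.
model = DKTQue(num_q=500, num_c=100, emb_size=200, device='cpu')
model.compile(optimizer='adam', lr=0.001, loss='binary_crossentropy')
```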
- evaluate_multi_ahead(data_config, batch_size, ob_portions=0.5, acc_threshold=0.5, accumulative=False)[source]
Predictions in the multi-step ahead prediction scenario
- Parameters
data_config (_type_) – data_config
batch_size (int) – batch_size
ob_portions (float, optional) – portions of observed student interactions. Defaults to 0.5.
accumulative (bool, optional) – True for accumulative prediction and False for non-accumulative prediction. Defaults to False.
acc_threshold (float, optional) – threshold for accuracy. Defaults to 0.5.
- Returns
auc, acc
- Return type
metrics
- train(train_dataset, valid_dataset, batch_size=16, valid_batch_size=None, num_epochs=32, test_loader=None, test_window_loader=None, save_dir='tmp', save_model=False, patient=10, shuffle=True, process=True)[source]
Sets the module in training mode.
This has any effect only on certain modules. See the documentation of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns
self
- Return type
Module
- training: bool
- class pykt.models.que_base_model.QueEmb(num_q, num_c, emb_size, model_name, device='cpu', emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, c, r=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.saint module
- class pykt.models.saint.Decoder_block(dim_model, total_res, heads_de, seq_len, dropout)[source]
Bases: Module
M1 = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
M2 = SkipConct(Multihead(LayerNorm(M1;O;O)))
L = SkipConct(FFN(LayerNorm(M2)))
- forward(in_res, in_pos, en_out, first_block=True)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.saint.Encoder_block(dim_model, heads_en, total_ex, total_cat, seq_len, dropout, emb_path='', pretrain_dim=768)[source]
Bases: Module
M = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
O = SkipConct(FFN(LayerNorm(M)))
- forward(in_ex, in_cat, in_pos, first_block=True)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
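The block formulas above describe pre-layer-norm residual sublayers; a sketch of that SkipConct pattern (module names and shapes are illustrative, not the saint implementation):

```python
import torch
import torch.nn as nn

class PreNormEncoderSketch(nn.Module):
    """M = x + MHA(LN(x)); O = M + FFN(LN(M))."""
    def __init__(self, dim_model: int, heads: int, dropout: float):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim_model)
        self.attn = nn.MultiheadAttention(dim_model, heads, dropout=dropout)
        self.ln2 = nn.LayerNorm(dim_model)
        self.ffn = nn.Sequential(
            nn.Linear(dim_model, dim_model), nn.ReLU(), nn.Linear(dim_model, dim_model)
        )

    def forward(self, x: torch.Tensor, attn_mask=None) -> torch.Tensor:
        # x: [seq_len, batch, dim_model] (PyTorch's default MultiheadAttention layout)
        h = self.ln1(x)
        m = x + self.attn(h, h, h, attn_mask=attn_mask)[0]  # M = SkipConct(Multihead(LayerNorm(x)))
        return m + self.ffn(self.ln2(m))                    # O = SkipConct(FFN(LayerNorm(M)))
```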
- class pykt.models.saint.SAINT(num_q, num_c, seq_len, emb_size, num_attn_heads, dropout, n_blocks=1, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(in_ex, in_cat, in_res, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.saint_plus_plus module
- class pykt.models.saint_plus_plus.Decoder_block(dim_model, total_res, heads_de, seq_len, dropout, num_q, num_c)[source]
Bases: Module
M1 = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
M2 = SkipConct(Multihead(LayerNorm(M1;O;O)))
L = SkipConct(FFN(LayerNorm(M2)))
- forward(in_ex, in_cat, in_res, in_pos, en_out, first_block=True)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.saint_plus_plus.Encoder_block(dim_model, heads_en, total_ex, total_cat, seq_len, dropout, emb_path='', pretrain_dim=768)[source]
Bases: Module
M = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
O = SkipConct(FFN(LayerNorm(M)))
- forward(in_ex, in_cat, in_pos, first_block=True)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.saint_plus_plus.SAINT(num_q, num_c, seq_len, emb_size, num_attn_heads, dropout, n_blocks=1, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(in_ex, in_cat, in_res, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.sakt module
- class pykt.models.sakt.Blocks(emb_size, num_attn_heads, dropout)[source]
Bases: Module
- forward(q=None, k=None, v=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.sakt.SAKT(num_c, seq_len, emb_size, num_attn_heads, dropout, num_en=2, emb_type='qid', emb_path='', pretrain_dim=768)[source]
Bases: Module
- forward(q, r, qry, qtest=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.skvmn module
- class pykt.models.skvmn.DKVMN(memory_size, memory_key_state_dim, memory_value_state_dim, init_memory_key, memory_value=None)[source]
Bases: Module
- forward(input_)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class pykt.models.skvmn.DKVMNHeadGroup(memory_size, memory_state_dim, is_write)[source]
Bases: Module
- static addressing(control_input, memory)[source]
- Parameters
control_input: Shape (batch_size, control_state_dim)
memory: Shape (memory_size, memory_state_dim)
- Returns
correlation_weight: Shape (batch_size, memory_size)
- forward(input_)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- read(memory, control_input=None, read_weight=None)[source]
- Parameters
control_input: Shape (batch_size, control_state_dim)
memory: Shape (batch_size, memory_size, memory_state_dim)
read_weight: Shape (batch_size, memory_size)
- Returns
read_content: Shape (batch_size, memory_state_dim)
- training: bool
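The documented shapes pin down what addressing and read must do: a softmax over key-memory similarities, then a weighted read of value memory. A sketch consistent with those shapes (it assumes control_state_dim equals memory_state_dim, as the matrix product requires):

```python
import torch
import torch.nn.functional as F

def addressing(control_input: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
    # control_input: [batch_size, control_state_dim]; memory: [memory_size, memory_state_dim]
    similarity = torch.matmul(control_input, memory.t())  # [batch_size, memory_size]
    return F.softmax(similarity, dim=1)                   # correlation_weight

def read(memory: torch.Tensor, read_weight: torch.Tensor) -> torch.Tensor:
    # memory: [batch_size, memory_size, memory_state_dim]; read_weight: [batch_size, memory_size]
    return torch.bmm(read_weight.unsqueeze(1), memory).squeeze(1)  # [batch_size, memory_state_dim]
```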
- class pykt.models.skvmn.SKVMN(num_c, dim_s, size_m, dropout=0.2, emb_type='qid', emb_path='', use_onehot=False)[source]
Bases: Module
- forward(q, r)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
pykt.models.train_model module
pykt.models.utils module
- class pykt.models.utils.transformer_FFN(emb_size, dropout)[source]
Bases: Module
- forward(in_fea)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
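transformer_FFN presumably implements the standard position-wise feed-forward block; a self-contained sketch under that assumption:

```python
import torch
import torch.nn as nn

class TransformerFFNSketch(nn.Module):
    """Position-wise feed-forward: Linear -> ReLU -> Dropout -> Linear (assumed)."""
    def __init__(self, emb_size: int, dropout: float):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_size, emb_size),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(emb_size, emb_size),
        )

    def forward(self, in_fea: torch.Tensor) -> torch.Tensor:
        return self.net(in_fea)
```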