pykt.models package

Submodules

pykt.models.akt module

class pykt.models.akt.AKT(n_question, n_pid, d_model, n_blocks, dropout, d_ff=256, kq_same=1, final_fc_dim=512, num_attn_heads=8, separate_qa=False, l2=1e-05, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

base_emb(q_data, target)[source]
forward(q_data, target, pid_data=None, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reset()[source]
training: bool
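
A minimal usage sketch (hyperparameter values and tensor shapes are illustrative; the returned regularization term is an assumption based on how pykt's training loop consumes AKT):

    import torch
    from pykt.models.akt import AKT

    # Illustrative sizes: 100 concepts, 500 problems, model width 256.
    model = AKT(n_question=100, n_pid=500, d_model=256, n_blocks=2, dropout=0.1)

    batch_size, seq_len = 4, 50
    q_data = torch.randint(0, 100, (batch_size, seq_len))    # concept ids
    target = torch.randint(0, 2, (batch_size, seq_len))      # 0/1 responses
    pid_data = torch.randint(0, 500, (batch_size, seq_len))  # problem ids (optional)

    # Assumed to return per-step correctness probabilities plus a Rasch-embedding
    # regularization term that is added to the loss during training.
    preds, reg_loss = model(q_data, target, pid_data)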
class pykt.models.akt.Architecture(n_question, n_blocks, d_model, d_feature, d_ff, n_heads, dropout, kq_same, model_type, emb_type)[source]

Bases: Module

forward(q_embed_data, qa_embed_data, pid_embed_data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt.CosinePositionalEmbedding(d_model, max_len=512)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt.Dim(value)[source]

Bases: IntEnum

An enumeration.

batch = 0
feature = 2
seq = 1
class pykt.models.akt.LearnablePositionalEmbedding(d_model, max_len=512)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt.MultiHeadAttention(d_model, d_feature, n_heads, dropout, kq_same, bias=True, emb_type='qid')[source]

Bases: Module

forward(q, k, v, mask, zero_pad, pdiff=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

pad_zero(scores, bs, dim, zero_pad)[source]
training: bool
class pykt.models.akt.TransformerLayer(d_model, d_feature, d_ff, n_heads, dropout, kq_same, emb_type)[source]

Bases: Module

forward(mask, query, key, values, apply_pos=True, pdiff=None)[source]
Input:

  • block : object of type BasicBlock(nn.Module). It contains masked_attn_head objects of type MultiHeadAttention(nn.Module).

  • mask : 0 means the block can peek only at past values; 1 means it can peek at both current and past values.

  • query : Query. In the transformer paper it is the input for both encoder and decoder.

  • key : Keys. In the transformer paper it is the input for both encoder and decoder.

  • values : Values. In the transformer paper it is the input for the encoder and the encoded output for the decoder (in the masked attention part).

Output:

query: the input query, transformed by the layer and returned.

training: bool
pykt.models.akt.attention(q, k, v, d_k, mask, dropout, zero_pad, gamma=None, pdiff=None)[source]

This is called by the multi-head attention object to compute the attention-weighted values.

pykt.models.akt_que module

class pykt.models.akt_que.AKTQue(num_q, num_c, emb_size, n_blocks=1, dropout=0.1, emb_type='qid', kq_same=1, final_fc_dim=512, num_attn_heads=8, separate_qa=False, l2=1e-05, d_ff=256, emb_path='', pretrain_dim=768, device='cpu', seed=0)[source]

Bases: QueBaseModel

predict_one_step(data, return_details=False)[source]
train_one_step(data)[source]
training: bool
class pykt.models.akt_que.AKTQueNet(num_q, num_c, emb_size, n_blocks, dropout, d_ff=256, kq_same=1, final_fc_dim=512, num_attn_heads=8, separate_qa=False, l2=1e-05, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

base_emb(q, c, r)[source]
forward(q, c, r)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reset()[source]
training: bool
class pykt.models.akt_que.Architecture(num_q, n_blocks, d_model, d_feature, d_ff, n_heads, dropout, kq_same, model_type)[source]

Bases: Module

forward(q_embed_data, qa_embed_data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt_que.CosinePositionalEmbedding(d_model, max_len=512)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt_que.Dim(value)[source]

Bases: IntEnum

An enumeration.

batch = 0
feature = 2
seq = 1
class pykt.models.akt_que.LearnablePositionalEmbedding(d_model, max_len=512)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt_que.MultiHeadAttention(d_model, d_feature, n_heads, dropout, kq_same, bias=True)[source]

Bases: Module

forward(q, k, v, mask, zero_pad)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.akt_que.TransformerLayer(d_model, d_feature, d_ff, n_heads, dropout, kq_same)[source]

Bases: Module

forward(mask, query, key, values, apply_pos=True)[source]
Input:

  • block : object of type BasicBlock(nn.Module). It contains masked_attn_head objects of type MultiHeadAttention(nn.Module).

  • mask : 0 means the block can peek only at past values; 1 means it can peek at both current and past values.

  • query : Query. In the transformer paper it is the input for both encoder and decoder.

  • key : Keys. In the transformer paper it is the input for both encoder and decoder.

  • values : Values. In the transformer paper it is the input for the encoder and the encoded output for the decoder (in the masked attention part).

Output:

query: the input query, transformed by the layer and returned.

training: bool
pykt.models.akt_que.attention(q, k, v, d_k, mask, dropout, zero_pad, gamma=None)[source]

This is called by the multi-head attention object to compute the attention-weighted values.

pykt.models.atkt module

class pykt.models.atkt.ATKT(num_c, skill_dim, answer_dim, hidden_dim, attention_dim=80, epsilon=10, beta=0.2, dropout=0.2, emb_type='qid', emb_path='', fix=True)[source]

Bases: Module

attention_module(lstm_output)[source]
forward(skill, answer, perturbation=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
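
A usage sketch with illustrative sizes. ATKT is adversarially trained, so the forward pass is assumed here to also return the interaction features to which a perturbation can later be added:

    import torch
    from pykt.models.atkt import ATKT

    model = ATKT(num_c=100, skill_dim=64, answer_dim=64, hidden_dim=80)

    skill = torch.randint(0, 100, (4, 50))     # concept ids
    answer = torch.randint(0, 2, (4, 50))      # 0/1 responses
    # Clean pass; a second pass may supply `perturbation` with the shape of `features`.
    preds, features = model(skill, answer)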

pykt.models.deep_irt module

class pykt.models.deep_irt.DeepIRT(num_c, dim_s, size_m, dropout=0.2, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(q, r, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.dkt module

class pykt.models.dkt.DKT(num_c, emb_size, dropout=0.1, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(q, r)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
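
A minimal usage sketch with illustrative sizes; the output shape noted in the comment is an assumption (one probability per concept at each step):

    import torch
    from pykt.models.dkt import DKT

    model = DKT(num_c=100, emb_size=64)

    q = torch.randint(0, 100, (4, 50))   # concept ids, shape (batch, seq_len)
    r = torch.randint(0, 2, (4, 50))     # 0/1 responses
    y = model(q, r)                      # assumed shape: (batch, seq_len, num_c)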

pykt.models.dkt_forget module

class pykt.models.dkt_forget.CIntegration(num_rgap, num_sgap, num_pcount, emb_dim)[source]

Bases: Module

forward(vt, rgap, sgap, pcount)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.dkt_forget.DKTForget(num_c, num_rgap, num_sgap, num_pcount, emb_size, dropout=0.1, emb_type='qid', emb_path='')[source]

Bases: Module

forward(q, r, dgaps)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.dkt_plus module

class pykt.models.dkt_plus.DKTPlus(num_c, emb_size, lambda_r, lambda_w1, lambda_w2, dropout=0.1, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(q, r)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.dkt_que module

class pykt.models.dkt_que.DKTQue(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu', seed=0)[source]

Bases: QueBaseModel

predict_one_step(data, return_details=False, process=True)[source]
train_one_step(data, process=True)[source]
training: bool
class pykt.models.dkt_que.DKTQueNet(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu')[source]

Bases: Module

forward(q, c, r)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.dkvmn module

class pykt.models.dkvmn.DKVMN(num_c, dim_s, size_m, dropout=0.2, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(q, r, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
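
A usage sketch with illustrative sizes (dim_s is the state dimension, size_m the number of memory slots):

    import torch
    from pykt.models.dkvmn import DKVMN

    model = DKVMN(num_c=100, dim_s=64, size_m=20)

    q = torch.randint(0, 100, (4, 50))   # concept ids
    r = torch.randint(0, 2, (4, 50))     # 0/1 responses
    p = model(q, r)                      # assumed per-step correctness probabilities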

pykt.models.evaluate_model module

pykt.models.evaluate_model.calC(row, data_config)[source]
pykt.models.evaluate_model.calT4lpkt(row, data_config, at2idx, it2idx)[source]
pykt.models.evaluate_model.cal_predres(dcres, dqres)[source]
pykt.models.evaluate_model.early_fusion(curhs, model, model_name)[source]
pykt.models.evaluate_model.effective_fusion(df, model, model_name, fusion_type)[source]
pykt.models.evaluate_model.evaluate(model, test_loader, model_name, save_path='')[source]
pykt.models.evaluate_model.evaluate_question(model, test_loader, model_name, fusion_type=['early_fusion', 'late_fusion'], save_path='')[source]
pykt.models.evaluate_model.evaluate_splitpred_question(model, data_config, testf, model_name, save_path='', use_pred=False, train_ratio=0.2, atkt_pad=False)[source]
pykt.models.evaluate_model.get_cur_teststart(is_repeat, train_ratio)[source]
pykt.models.evaluate_model.get_info_dkt_forget(row, data_config)[source]
pykt.models.evaluate_model.get_info_lpkt(row, data_config, at2idx, it2idx)[source]
pykt.models.evaluate_model.group_fusion(dmerge, model, model_name, fusion_type, fout)[source]
pykt.models.evaluate_model.late_fusion(dcur, curdf, fusion_type=['mean', 'vote', 'all'])[source]
pykt.models.evaluate_model.log2(t)[source]
pykt.models.evaluate_model.predict_each_group(dtotal, dcur, dforget, curdforget, is_repeat, qidx, uid, idx, model_name, model, t, end, fout, atkt_pad=False, maxlen=200)[source]

Use the predicted result as the input for the next question.

pykt.models.evaluate_model.predict_each_group2(dtotal, dcur, dforget, curdforget, is_repeat, qidx, uid, idx, model_name, model, t, end, fout, atkt_pad=False, maxlen=200)[source]

Do not use the predicted result.

pykt.models.evaluate_model.prepare_data(model_name, is_repeat, qidx, dcur, curdforget, dtotal, dforget, t, end, maxlen=200)[source]
pykt.models.evaluate_model.save_cur_predict_result(dres, q, r, d, t, m, sm, p)[source]
pykt.models.evaluate_model.save_currow_question_res(idx, dcres, dqres, qidxs, ctrues, cpreds, uid, fout)[source]
pykt.models.evaluate_model.save_each_question_res(dcres, dqres, ctrues, cpreds)[source]
pykt.models.evaluate_model.save_question_res(dres, fout, early=False)[source]

pykt.models.gkt module

class pykt.models.gkt.EraseAddGate(feature_dim, num_c, bias=True)[source]

Bases: Module

Erase & Add Gate module. NOTE: this erase & add gate is a bit different from the one in DKVMN. For more information about the erase & add gate, please refer to the paper “Dynamic Key-Value Memory Networks for Knowledge Tracing”, available at https://arxiv.org/abs/1611.08108

Parameters

nn (_type_) – _description_

forward(x)[source]
Params:

x: input feature matrix

Shape:

x: [batch_size, num_c, feature_dim]
res: [batch_size, num_c, feature_dim]

Returns

the feature matrix with old information erased and new information added. The GKT paper does not provide a detailed explanation of this erase-add gate. Since the erase-add gate in GKT takes only one input, it differs from the one in DKVMN: the erase and add gates are built from the input matrix rather than from the $\mathbf{v}_{t}$ vector used in DKVMN.

Return type

res

reset_parameters()[source]
training: bool
class pykt.models.gkt.GKT(num_c, hidden_dim, emb_size, graph_type='dense', graph=None, dropout=0.5, emb_type='qid', emb_path='', bias=True)[source]

Bases: Module

Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network

Parameters
  • num_c (int) – total number of unique questions

  • hidden_dim (int) – hidden dimension for MLP

  • emb_size (int) – embedding dimension for question embedding layer

  • graph_type (str, optional) – graph type, dense or transition. Defaults to “dense”.

  • graph (_type_, optional) – graph. Defaults to None.

  • dropout (float, optional) – dropout. Defaults to 0.5.

  • emb_type (str, optional) – emb_type. Defaults to “qid”.

  • emb_path (str, optional) – emb_path. Defaults to “”.

  • bias (bool, optional) – add bias for DNN. Defaults to True.

forward(q, r)[source]

_summary_

Parameters
  • q (_type_) – _description_

  • r (_type_) – _description_

Returns

the correct probability of questions answered at the next timestamp

Return type

list

training: bool
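
A usage sketch combining GKT with the dense-graph utility documented under pykt.models.gkt_utils below; sizes are illustrative, and converting the graph to a float tensor is an assumption about the expected graph argument:

    import torch
    from pykt.models.gkt import GKT
    from pykt.models.gkt_utils import build_dense_graph

    num_c = 50
    graph = torch.as_tensor(build_dense_graph(num_c), dtype=torch.float32)

    model = GKT(num_c=num_c, hidden_dim=32, emb_size=32, graph_type="dense", graph=graph)

    q = torch.randint(0, num_c, (4, 20))   # concept ids
    r = torch.randint(0, 2, (4, 20))       # 0/1 responses
    y = model(q, r)   # correct probability of the question answered at the next timestamp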
class pykt.models.gkt.MLP(input_dim, hidden_dim, output_dim, dropout=0.0, bias=True)[source]

Bases: Module

Two-layer fully-connected ReLU net with batch norm.

batch_norm(inputs)[source]
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

init_weights()[source]
training: bool

pykt.models.gkt_utils module

pykt.models.gkt_utils.build_dense_graph(concept_num)[source]

Generate a dense graph.

Parameters

concept_num (int) – number of concepts

Returns

graph

Return type

numpy
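
A quick sketch of the dense-graph helper; the exact edge weighting is an implementation detail, so only the overall shape is checked here:

    import numpy as np
    from pykt.models.gkt_utils import build_dense_graph

    graph = build_dense_graph(concept_num=5)
    # The documented return type is numpy; a dense graph over 5 concepts is
    # expected to be a 5 x 5 adjacency structure.
    print(np.asarray(graph).shape)   # (5, 5)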

pykt.models.gkt_utils.build_transition_graph(df, concept_num)[source]

Generate a transition graph.

Parameters
  • df (DataFrame) – _description_

  • concept_num (int) – number of concepts

Returns

graph

Return type

numpy

pykt.models.gkt_utils.get_gkt_graph(num_c, dpath, trainfile, testfile, graph_type='dense', tofile='./graph.npz')[source]

pykt.models.hawkes module

class pykt.models.hawkes.HawkesKT(n_skills, n_problems, emb_size, time_log, emb_type='qid')[source]

Bases: Module

forward(skills, problems, times, labels, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

static init_weights(m)[source]
printparams()[source]
training: bool
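
A usage sketch with illustrative sizes; `time_log` is the log base used for the temporal decay, and treating `times` as integer interaction timestamps is an assumption in this sketch:

    import torch
    from pykt.models.hawkes import HawkesKT

    model = HawkesKT(n_skills=100, n_problems=500, emb_size=64, time_log=5)

    skills = torch.randint(0, 100, (4, 50))
    problems = torch.randint(0, 500, (4, 50))
    times = torch.randint(0, 10000, (4, 50))   # interaction timestamps
    labels = torch.randint(0, 2, (4, 50))      # 0/1 responses
    preds = model(skills, problems, times, labels)   # assumed per-step probabilities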

pykt.models.iekt module

class pykt.models.iekt.IEKT(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qid', emb_path='', pretrain_dim=768, device='cpu', seed=0)[source]

Bases: QueBaseModel

predict_one_step(data, return_details=False, process=True)[source]
train_one_step(data, process=True)[source]
training: bool
class pykt.models.iekt.IEKTNet(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qc_merge', emb_path='', pretrain_dim=768, device='cpu')[source]

Bases: Module

get_ques_representation(q, c)[source]

Get the question representation (Equation 3).

Parameters
  • q (_type_) – question ids

  • c (_type_) – concept ids

Returns

_description_

Return type

_type_

obtain_v(q, c, h, x, emb)[source]

_summary_

Parameters
  • q (_type_) – _description_

  • c (_type_) – _description_

  • h (_type_) – _description_

  • x (_type_) – _description_

  • emb (_type_) – m_t

Returns

_description_

Return type

_type_

pi_cog_func(x, softmax_dim=1)[source]
pi_sens_func(x, softmax_dim=1)[source]
training: bool
update_state(h, v, emb, operate)[source]

_summary_

Parameters
  • h (_type_) – hidden state of the RNN

  • v (_type_) – question representation

  • emb (_type_) – s_t, the knowledge acquisition sensitivity

  • operate (_type_) – label

Returns

_description_

Return type

next_p_state {}

pykt.models.iekt_ce module

class pykt.models.iekt_ce.IEKTCE(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qid', emb_path='', pretrain_dim=768, device='cpu', seed=0, train_mode='sample')[source]

Bases: QueBaseModel

predict_one_step(data, return_details=False, process=True)[source]
train_one_step(data, process=True)[source]
training: bool
class pykt.models.iekt_ce.IEKTNet(num_q, num_c, emb_size, max_concepts, lamb=40, n_layer=1, cog_levels=10, acq_levels=10, dropout=0, gamma=0.93, emb_type='qc_merge', emb_path='', pretrain_dim=768, device='cpu', train_mode='sample')[source]

Bases: Module

forward(data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_ques_representation(q, c)[source]

Get the question representation (Equation 3).

Parameters
  • q (_type_) – question ids

  • c (_type_) – concept ids

Returns

_description_

Return type

_type_

obtain_v(q, c, h, x, emb)[source]

_summary_

Parameters
  • q (_type_) – _description_

  • c (_type_) – _description_

  • h (_type_) – _description_

  • x (_type_) – _description_

  • emb (_type_) – m_t

Returns

_description_

Return type

_type_

pi_cog_func(x, softmax_dim=1)[source]
pi_sens_func(x, softmax_dim=1)[source]
training: bool
update_state(h, v, emb, operate)[source]

_summary_

Parameters
  • h (_type_) – hidden state of the RNN

  • v (_type_) – question representation

  • emb (_type_) – s_t, the knowledge acquisition sensitivity

  • operate (_type_) – label

Returns

_description_

Return type

next_p_state {}

pykt.models.iekt_utils module

pykt.models.iekt_utils.batch_data_to_device(data, device)[source]
class pykt.models.iekt_utils.funcs(n_layer, hidden_dim, output_dim, dpo)[source]

Bases: Module

classifier decoder implemented with mlp

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.iekt_utils.funcsgru(n_layer, hidden_dim, output_dim, dpo)[source]

Bases: Module

classifier decoder implemented with mlp

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.iekt_utils.mygru(n_layer, input_dim, hidden_dim)[source]

Bases: Module

Custom GRU implementation.

forward(x, h)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.init_model module

pykt.models.init_model.init_model(model_name, model_config, data_config, emb_type)[source]
pykt.models.init_model.load_model(model_name, model_config, data_config, emb_type, ckpt_path)[source]

pykt.models.kqn module

class pykt.models.kqn.KQN(n_skills: int, n_hidden: int, n_rnn_hidden: int, n_mlp_hidden: int, dropout, n_rnn_layers: int = 1, rnn_type='lstm', emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

encode_knowledge(in_data)[source]
encode_skills(next_skills)[source]
forward(q, r, qshft, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

init_hidden(batch_size: int)[source]
training: bool

pykt.models.loss module

class pykt.models.loss.Loss(loss_type='ce', epsilon=1.0, gamma=2.0, reduction='mean')[source]

Bases: object

get_loss(x, target)[source]

This criterion computes the loss between x and target.

Parameters
  • x (_type_) – Predicted unnormalized scores (often referred to as logits)

  • target (_type_) – Ground truth class indices or class probabilities.

Returns

loss

Return type

_type_

pykt.models.loss.focal_loss(x, target, gamma=2.0, reduction='mean')[source]
pykt.models.loss.get_pt(x, target)[source]
pykt.models.loss.loss_reduction(loss, target, reduction)[source]
pykt.models.loss.poly1_cross_entropy(x, target, epsilon=1.0, reduction='mean')[source]
pykt.models.loss.polyl_focal_loss(x, target, epsilon=1.0, gamma=2.0, reduction='mean')[source]
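
A small sketch of the loss wrapper; treating x as (N, C) unnormalized logits with integer class targets follows the CrossEntropyLoss-style description above, and the non-'ce' loss types are assumed to use the epsilon/gamma arguments shown in the constructor:

    import torch
    from pykt.models.loss import Loss

    criterion = Loss(loss_type="ce", reduction="mean")

    logits = torch.randn(8, 2)            # unnormalized scores for 8 samples, 2 classes
    target = torch.randint(0, 2, (8,))    # ground-truth class indices
    loss = criterion.get_loss(logits, target)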

pykt.models.lpkt module

class pykt.models.lpkt.LPKT(n_at, n_it, n_exercise, n_question, d_a, d_e, d_k, gamma=0.03, dropout=0.2, q_matrix='', emb_type='qid', emb_path='', pretrain_dim=768, use_time=True)[source]

Bases: Module

forward(e_data, a_data, it_data=None, at_data=None, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.lpkt_utils module

pykt.models.lpkt_utils.generate_qmatrix(data_config, gamma=0.0)[source]

pykt.models.qdkt module

class pykt.models.qdkt.QDKT(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu', seed=0, mlp_layer_num=1, other_config={}, **kwargs)[source]

Bases: QueBaseModel

predict_one_step(data, return_details=False, process=True, return_raw=False)[source]
train_one_step(data, process=True, return_all=False)[source]
training: bool
class pykt.models.qdkt.QDKTNet(num_q, num_c, emb_size, dropout=0.1, emb_type='qaid', emb_path='', pretrain_dim=768, device='cpu', mlp_layer_num=1, other_config={})[source]

Bases: Module

forward(q, c, r, data=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.que_base_model module

class pykt.models.que_base_model.QueBaseModel(model_name, emb_type, emb_path, pretrain_dim, device, seed=0)[source]

Bases: Module

batch_to_device(data, process=True)[source]
compile(optimizer, lr=0.001, loss='binary_crossentropy', metrics=None)[source]
Reference: https://github.com/shenweichen/DeepCTR-Torch/blob/2cd84f305cb50e0fd235c0f0dd5605c8114840a2/deepctr_torch/models/basemodel.py

evaluate(dataset, batch_size, acc_threshold=0.5)[source]
evaluate_multi_ahead(data_config, batch_size, ob_portions=0.5, acc_threshold=0.5, accumulative=False)[source]

Predictions in the multi-step ahead prediction scenario

Parameters
  • data_config (_type_) – data_config

  • batch_size (int) – batch_size

  • ob_portions (float, optional) – portions of observed student interactions. Defaults to 0.5.

  • accumulative (bool, optional) – True for accumulative prediction and False for non-accumulative prediction. Defaults to False.

  • acc_threshold (float, optional) – threshold for accuracy. Defaults to 0.5.

Returns

auc, acc

Return type

metrics

get_loss(ys, rshft, sm)[source]
load_model(save_dir)[source]
predict(dataset, batch_size, return_ts=False, process=True)[source]
predict_one_step(data, process=True)[source]
train(train_dataset, valid_dataset, batch_size=16, valid_batch_size=None, num_epochs=32, test_loader=None, test_window_loader=None, save_dir='tmp', save_model=False, patient=10, shuffle=True, process=True)[source]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

self

Return type

Module

train_one_step(data, process=True)[source]
training: bool
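
A workflow sketch using QDKT (documented above) as a concrete QueBaseModel subclass. The string optimizer name follows the DeepCTR-Torch convention referenced by compile and is an assumption, as are the dataset objects, which are expected to come from pykt's question-level data loading utilities:

    from pykt.models.qdkt import QDKT

    model = QDKT(num_q=500, num_c=100, emb_size=64, device="cpu")
    model.compile(optimizer="adam", lr=0.001, loss="binary_crossentropy")

    # train_dataset / valid_dataset are question-level KT datasets (not built here):
    # model.train(train_dataset, valid_dataset, batch_size=16, num_epochs=32,
    #             save_dir="tmp", save_model=True)
    # auc, acc = model.evaluate(valid_dataset, batch_size=16)   # assumed return values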
class pykt.models.que_base_model.QueEmb(num_q, num_c, emb_size, model_name, device='cpu', emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(q, c, r=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_avg_skill_emb(c)[source]
training: bool

pykt.models.saint module

class pykt.models.saint.Decoder_block(dim_model, total_res, heads_de, seq_len, dropout)[source]

Bases: Module

M1 = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
M2 = SkipConct(Multihead(LayerNorm(M1;O;O)))
L = SkipConct(FFN(LayerNorm(M2)))

forward(in_res, in_pos, en_out, first_block=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.saint.Encoder_block(dim_model, heads_en, total_ex, total_cat, seq_len, dropout, emb_path='', pretrain_dim=768)[source]

Bases: Module

M = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
O = SkipConct(FFN(LayerNorm(M)))

forward(in_ex, in_cat, in_pos, first_block=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.saint.SAINT(num_q, num_c, seq_len, emb_size, num_attn_heads, dropout, n_blocks=1, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(in_ex, in_cat, in_res, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
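
A usage sketch; seq_len is assumed to match (or bound) the padded length of the input sequences, since SAINT uses fixed-size positional embeddings:

    import torch
    from pykt.models.saint import SAINT

    model = SAINT(num_q=500, num_c=100, seq_len=50, emb_size=64,
                  num_attn_heads=4, dropout=0.1)

    in_ex = torch.randint(0, 500, (4, 50))    # exercise (question) ids
    in_cat = torch.randint(0, 100, (4, 50))   # concept (category) ids
    in_res = torch.randint(0, 2, (4, 50))     # 0/1 responses
    preds = model(in_ex, in_cat, in_res)      # assumed per-step correctness probabilities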

pykt.models.saint_plus_plus module

class pykt.models.saint_plus_plus.Decoder_block(dim_model, total_res, heads_de, seq_len, dropout, num_q, num_c)[source]

Bases: Module

M1 = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
M2 = SkipConct(Multihead(LayerNorm(M1;O;O)))
L = SkipConct(FFN(LayerNorm(M2)))

forward(in_ex, in_cat, in_res, in_pos, en_out, first_block=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.saint_plus_plus.Encoder_block(dim_model, heads_en, total_ex, total_cat, seq_len, dropout, emb_path='', pretrain_dim=768)[source]

Bases: Module

M = SkipConct(Multihead(LayerNorm(Qin;Kin;Vin)))
O = SkipConct(FFN(LayerNorm(M)))

forward(in_ex, in_cat, in_pos, first_block=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.saint_plus_plus.SAINT(num_q, num_c, seq_len, emb_size, num_attn_heads, dropout, n_blocks=1, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

forward(in_ex, in_cat, in_res, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

pykt.models.sakt module

class pykt.models.sakt.Blocks(emb_size, num_attn_heads, dropout)[source]

Bases: Module

forward(q=None, k=None, v=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class pykt.models.sakt.SAKT(num_c, seq_len, emb_size, num_attn_heads, dropout, num_en=2, emb_type='qid', emb_path='', pretrain_dim=768)[source]

Bases: Module

base_emb(q, r, qry)[source]
forward(q, r, qry, qtest=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
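
A usage sketch; feeding the concept sequence shifted by one step as `qry` mirrors how a next-step query is normally formed, but the exact preprocessing is an assumption here:

    import torch
    from pykt.models.sakt import SAKT

    model = SAKT(num_c=100, seq_len=50, emb_size=64, num_attn_heads=4, dropout=0.1)

    full_q = torch.randint(0, 100, (4, 50))
    full_r = torch.randint(0, 2, (4, 50))
    q, r, qry = full_q[:, :-1], full_r[:, :-1], full_q[:, 1:]   # history vs. next-step query
    p = model(q, r, qry)   # assumed probability of answering `qry` correctly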

pykt.models.skvmn module

class pykt.models.skvmn.DKVMN(memory_size, memory_key_state_dim, memory_value_state_dim, init_memory_key, memory_value=None)[source]

Bases: Module

attention(control_input)[source]
forward(input_)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

read(read_weight, memory_value)[source]
training: bool
write(write_weight, control_input, memory_value)[source]
class pykt.models.skvmn.DKVMNHeadGroup(memory_size, memory_state_dim, is_write)[source]

Bases: Module

static addressing(control_input, memory)[source]
Parameters

control_input: Shape (batch_size, control_state_dim)
memory: Shape (memory_size, memory_state_dim)

Returns

correlation_weight: Shape (batch_size, memory_size)

forward(input_)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

read(memory, control_input=None, read_weight=None)[source]
Parameters

control_input: Shape (batch_size, control_state_dim)
memory: Shape (batch_size, memory_size, memory_state_dim)
read_weight: Shape (batch_size, memory_size)

Returns

read_content: Shape (batch_size, memory_state_dim)

training: bool
write(control_input, memory, write_weight=None)[source]
Parameters

control_input: Shape (batch_size, control_state_dim)
write_weight: Shape (batch_size, memory_size)
memory: Shape (batch_size, memory_size, memory_state_dim)

Returns

new_memory: Shape (batch_size, memory_size, memory_state_dim)

class pykt.models.skvmn.SKVMN(num_c, dim_s, size_m, dropout=0.2, emb_type='qid', emb_path='', use_onehot=False)[source]

Bases: Module

forward(q, r)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
triangular_layer(correlation_weight, batch_size=64, a=0.075, b=0.088, c=1.0)[source]
ut_mask(seq_len)[source]

pykt.models.train_model module

pykt.models.train_model.cal_loss(model, ys, r, rshft, sm, preloss=[])[source]
pykt.models.train_model.model_forward(model, data)[source]
pykt.models.train_model.train_model(model, train_loader, valid_loader, num_epochs, opt, ckpt_path, test_loader=None, test_window_loader=None, save_model=False)[source]

pykt.models.utils module

pykt.models.utils.get_clones(module, N)[source]

Cloning nn modules

pykt.models.utils.lt_mask(seq_len)[source]

Lower Triangular Mask

pykt.models.utils.pos_encode(seq_len)[source]

Position Encoding

class pykt.models.utils.transformer_FFN(emb_size, dropout)[source]

Bases: Module

forward(in_fea)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
pykt.models.utils.ut_mask(seq_len)[source]

Upper Triangular Mask
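
A quick sketch of the masking and position helpers; the exact dtype and device handling are left to the pykt implementation:

    from pykt.models.utils import ut_mask, lt_mask, pos_encode

    # An upper-triangular mask of this kind typically hides future timesteps in
    # causal attention; the lower-triangular variant hides past ones.
    print(ut_mask(5))
    print(lt_mask(5))
    print(pos_encode(5))   # assumed to be simple position indices 0..4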

Module contents