Predict

Submodules

kale.predict.class_domain_nets module

Classification of data or domain

Modules for typical classification tasks (into class labels) and adversarial discrimination of source vs target domains, from https://github.com/criteo-research/pytorch-ada/blob/master/adalib/ada/models/modules.py

class kale.predict.class_domain_nets.SoftmaxNet(input_dim=15, n_classes=2, name='c', hidden=(), activation_fn=<class 'torch.nn.modules.activation.ReLU'>, **activation_args)

Bases: Module

Regular and domain classifier network for regular-size images

Parameters
  • input_dim (int, optional) – the dimension of the final feature vector. Defaults to 15.

  • n_classes (int, optional) – the number of classes. Defaults to 2.

  • name (str, optional) – the classifier name. Defaults to “c”.

  • hidden (tuple, optional) – the hidden layer sizes. Defaults to ().

  • activation_fn (callable, optional) – the activation function. Defaults to nn.ReLU.

forward(input_data)
extra_repr()
n_classes()
training: bool
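
A minimal usage sketch (the feature and hidden-layer sizes below are illustrative, not prescribed by the module):

>>> import torch
>>> import torch.nn as nn
>>> from kale.predict.class_domain_nets import SoftmaxNet
>>> clf = SoftmaxNet(input_dim=15, n_classes=2, hidden=(32,), activation_fn=nn.ReLU)
>>> features = torch.randn(8, 15)  # a batch of 8 feature vectors
>>> scores = clf(features)  # class scores, one row per sample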
class kale.predict.class_domain_nets.ClassNetSmallImage(input_size=128, n_class=10)

Bases: Module

Regular classifier network for small-size images

Parameters
  • input_size (int, optional) – the dimension of the final feature vector. Defaults to 128.

  • n_class (int, optional) – the number of classes. Defaults to 10.

n_classes()
forward(input)
training: bool
class kale.predict.class_domain_nets.DomainNetSmallImage(input_size=128, bigger_discrim=False)

Bases: Module

Domain classifier network for small-size images

Parameters
  • input_size (int, optional) – the dimension of the final feature vector. Defaults to 128.

  • bigger_discrim (bool, optional) – whether to use deeper network. Defaults to False.

forward(input)
training: bool
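
A hedged sketch of pairing the small-image classifier with the domain discriminator in an adversarial domain-adaptation setup; the random tensors stand in for 128-dimensional features from an upstream feature extractor:

>>> import torch
>>> from kale.predict.class_domain_nets import ClassNetSmallImage, DomainNetSmallImage
>>> source_feat = torch.randn(16, 128)  # stand-in for source-domain features
>>> target_feat = torch.randn(16, 128)  # stand-in for target-domain features
>>> classifier = ClassNetSmallImage(input_size=128, n_class=10)
>>> discriminator = DomainNetSmallImage(input_size=128, bigger_discrim=True)
>>> class_scores = classifier(source_feat)  # label predictions on source features
>>> domain_scores = discriminator(torch.cat([source_feat, target_feat]))  # source-vs-target scores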
class kale.predict.class_domain_nets.ClassNetVideo(input_size=512, n_channel=100, dropout_keep_prob=0.5, n_class=8)

Bases: Module

Regular classifier network for video input.

Parameters
  • input_size (int, optional) – the dimension of the final feature vector. Defaults to 512.

  • n_channel (int, optional) – the number of channels for the Linear and BN layers. Defaults to 100.

  • dropout_keep_prob (float, optional) – the dropout probability for keeping the parameters. Defaults to 0.5.

  • n_class (int, optional) – the number of classes. Defaults to 8.

n_classes()
forward(input)
training: bool
class kale.predict.class_domain_nets.ClassNetVideoConv(input_size=1024, n_class=8)

Bases: Module

Classifier network for video input, following MMSADA.

Parameters
  • input_size (int, optional) – the dimension of the final feature vector. Defaults to 1024.

  • n_class (int, optional) – the number of classes. Defaults to 8.

References

Munro, Jonathan, and Dima Damen. “Multi-modal domain adaptation for fine-grained action recognition.” In CVPR, pp. 122–132. 2020.

forward(input)
training: bool
class kale.predict.class_domain_nets.DomainNetVideo(input_size=128, n_channel=100)

Bases: Module

Regular domain classifier network for video input.

Parameters
  • input_size (int, optional) – the dimension of the final feature vector. Defaults to 128.

  • n_channel (int, optional) – the number of channels for the Linear and BN layers. Defaults to 100.

forward(input)
training: bool
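
A similar sketch for the video networks, assuming 512-dimensional clip-level feature vectors (sizes are illustrative):

>>> import torch
>>> from kale.predict.class_domain_nets import ClassNetVideo, DomainNetVideo
>>> video_feat = torch.randn(4, 512)  # a batch of 4 clip-level feature vectors
>>> video_classifier = ClassNetVideo(input_size=512, n_class=8)
>>> video_discriminator = DomainNetVideo(input_size=512, n_channel=100)
>>> class_scores = video_classifier(video_feat)  # action-class predictions
>>> domain_scores = video_discriminator(video_feat)  # source-vs-target predictions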

kale.predict.isonet module

The ISONet module, which is based on the ResNet module, from https://github.com/HaozhiQi/ISONet/blob/master/isonet/models/isonet.py (based on https://github.com/facebookresearch/pycls/blob/master/pycls/models/resnet.py)

kale.predict.isonet.get_trans_fun(name)

Retrieves the transformation function by name.

class kale.predict.isonet.SReLU(nc)

Bases: Module

Shifted ReLU

forward(x)
training: bool
class kale.predict.isonet.ResHead(w_in, net_params)

Bases: Module

ResNet head.

forward(x)
training: bool
class kale.predict.isonet.BasicTransform(w_in, w_out, stride, has_bn, use_srelu, w_b=None, num_gs=1)

Bases: Module

Basic transformation: 3x3, 3x3

forward(x)
training: bool
class kale.predict.isonet.BottleneckTransform(w_in, w_out, stride, has_bn, use_srelu, w_b, num_gs)

Bases: Module

Bottleneck transformation: 1x1, 3x3, 1x1, only for very deep networks

forward(x)
training: bool
class kale.predict.isonet.ResBlock(w_in, w_out, stride, trans_fun, has_bn, has_st, use_srelu, w_b=None, num_gs=1)

Bases: Module

Residual block: x + F(x)

forward(x)
training: bool
class kale.predict.isonet.ResStage(w_in, w_out, stride, net_params, d, w_b=None, num_gs=1)

Bases: Module

Stage of ResNet.

forward(x)
training: bool
class kale.predict.isonet.ResStem(w_in, w_out, net_params, kernelsize=3, stride=1, padding=1, use_maxpool=False, poolksize=3, poolstride=2, poolpadding=1)

Bases: Module

Stem of ResNet.

forward(x)
training: bool
class kale.predict.isonet.ISONet(net_params)

Bases: Module

ISONet, a modified ResNet model.

forward(x)
ortho(device)

Regularizes the convolution kernels to be (near) orthogonal during training. This is called in Trainer.loss of the isonet example.

ortho_conv(m, device)

Regularizes the convolution kernels of a single module to be (near) orthogonal during training.

Parameters

m (nn.Module) – the module whose convolution kernels are regularized.

training: bool
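
For intuition, a conceptual sketch of an orthogonality penalty on a convolution kernel, in the spirit of what ortho / ortho_conv compute (the helper below is illustrative; the exact formulation used by ISONet may differ):

>>> import torch
>>> import torch.nn as nn
>>> def conv_ortho_penalty(conv, device):
...     # flatten the kernel to (out_channels, in_channels * k * k) and penalize ||W W^T - I||^2
...     w = conv.weight.view(conv.out_channels, -1)
...     identity = torch.eye(conv.out_channels, device=device)
...     return ((w @ w.t() - identity) ** 2).sum()
>>> conv = nn.Conv2d(16, 32, kernel_size=3)
>>> penalty = conv_ortho_penalty(conv, torch.device("cpu"))  # added to the task loss with a small coefficient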

kale.predict.losses module

Commonly used losses, from domain adaptation package https://github.com/criteo-research/pytorch-ada/blob/master/adalib/ada/models/losses.py

kale.predict.losses.cross_entropy_logits(linear_output, label, weights=None)

Computes cross entropy with logits

Examples

See DANN, WDGRL, and MMD trainers in kale.pipeline.domain_adapter
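
A minimal call sketch (assuming the returned value behaves as a standard scalar loss tensor):

>>> import torch
>>> from kale.predict.losses import cross_entropy_logits
>>> logits = torch.randn(8, 3)  # unnormalized class scores for 8 samples, 3 classes
>>> labels = torch.randint(0, 3, (8,))  # ground-truth class indices
>>> loss = cross_entropy_logits(logits, labels)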

kale.predict.losses.topk_accuracy(output, target, topk=(1,))

Computes the top-k accuracy for the specified values of k.

Parameters
  • output (Tensor) – Generated predictions. Shape: (batch_size, class_count).

  • target (Tensor) – Ground truth. Shape: (batch_size)

  • topk (tuple(int)) – Compute accuracy at top-k for the values of k specified in this parameter.

Returns

A list of tensors with the same length as topk. Each tensor holds boolean values indicating whether each prediction is within the top k for the corresponding value of k: True means the correct class is among the top k predictions, False means it is not. Each tensor has shape (batch_size,), i.e. one entry per prediction.

Return type

list(Tensor)

Examples

>>> output = torch.tensor(([0.3, 0.2, 0.1], [0.3, 0.2, 0.1]))
>>> target = torch.tensor((0, 1))
>>> top1, top2 = topk_accuracy(output, target, topk=(1, 2)) # get the boolean value
>>> top1_value = top1.double().mean() # get the top 1 accuracy score
>>> top2_value = top2.double().mean() # get the top 2 accuracy score
kale.predict.losses.multitask_topk_accuracy(output, target, topk=(1,))

Computes the top-k accuracy for the specified values of k for multitask input.

Parameters
  • output (tuple(Tensor)) – A tuple of generated predictions. Each tensor is of shape [batch_size, class_count]; class_count can vary on a per-task basis, i.e. output[i].shape[1] can differ from output[j].shape[1].

  • target (tuple(Tensor)) – A tuple of ground truth tensors. Each tensor is of shape [batch_size].

  • topk (tuple(int)) – Compute accuracy at top-k for the values of k specified in this parameter.

Returns

A list of tensors with the same length as topk. Each tensor holds boolean values indicating whether the multitask predictions are within the top k for the corresponding value of k: True means the predictions for all tasks rank within the top k, False means they do not. Each tensor has shape (batch_size,), i.e. one entry per prediction.

Return type

list(Tensor)

Examples

>>> first_output = torch.tensor(([0.3, 0.2, 0.1], [0.3, 0.2, 0.1]))
>>> first_target = torch.tensor((0, 2))
>>> second_output = torch.tensor(([0.2, 0.1], [0.2, 0.1]))
>>> second_target = torch.tensor((0, 1))
>>> output = (first_output, second_output)
>>> target = (first_target, second_target)
>>> top1, top2 = multitask_topk_accuracy(output, target, topk=(1, 2)) # get the boolean value
>>> top1_value = top1.double().mean() # get the top 1 accuracy score
>>> top2_value = top2.double().mean() # get the top 2 accuracy score

kale.predict.losses.entropy_logits(linear_output)

Computes entropy logits in CDAN with entropy conditioning (CDAN+E)

Examples

See CDANTrainer in kale.pipeline.domain_adapter

kale.predict.losses.entropy_logits_loss(linear_output)

Computes entropy logits loss in semi-supervised or few-shot domain adaptation

Examples

See FewShotDANNTrainer in kale.pipeline.domain_adapter

kale.predict.losses.gradient_penalty(critic, h_s, h_t)

Computes gradient penalty in Wasserstein distance guided representation learning

Examples

See WDGRLTrainer and WDGRLTrainerMod in kale.pipeline.domain_adapter
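
A hedged sketch of a call, with a hypothetical critic network (in practice the critic comes from the WDGRL trainer configuration):

>>> import torch
>>> import torch.nn as nn
>>> from kale.predict.losses import gradient_penalty
>>> critic = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))  # hypothetical critic
>>> h_s = torch.randn(16, 128)  # source-domain features
>>> h_t = torch.randn(16, 128)  # target-domain features
>>> penalty = gradient_penalty(critic, h_s, h_t)  # added to the critic loss during training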

kale.predict.losses.gaussian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None)

Code from XLearn: computes the full kernel matrix, which is less than optimal since we don’t use all of it with the linear MMD estimate.

Examples

See DANTrainer and JANTrainer in kale.pipeline.domain_adapter

kale.predict.losses.compute_mmd_loss(kernel_values, batch_size)

Computes the Maximum Mean Discrepancy (MMD) between domains.

Examples

See DANTrainer and JANTrainer in kale.pipeline.domain_adapter
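
The two functions are typically used together; a minimal sketch, assuming equally sized source and target batches of feature vectors:

>>> import torch
>>> from kale.predict.losses import compute_mmd_loss, gaussian_kernel
>>> batch_size = 16
>>> source = torch.randn(batch_size, 128)  # source-domain features
>>> target = torch.randn(batch_size, 128)  # target-domain features
>>> kernels = gaussian_kernel(source, target, kernel_mul=2.0, kernel_num=5)
>>> mmd = compute_mmd_loss(kernels, batch_size)  # scalar MMD loss between domains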

kale.predict.losses.hsic(kx, ky, device)

Performs an independence test with the Hilbert-Schmidt Independence Criterion (HSIC) between two sets of variables x and y.

Parameters
  • kx (2-D tensor) – kernel matrix of x, shape (n_samples, n_samples)

  • ky (2-D tensor) – kernel matrix of y, shape (n_samples, n_samples)

  • device (torch.device) – the desired device of returned tensor

Returns

Independence test score >= 0

Return type

torch.Tensor

References

[1] Gretton, Arthur, Bousquet, Olivier, Smola, Alex, and Schölkopf, Bernhard. Measuring Statistical Dependence with Hilbert-Schmidt Norms. In Algorithmic Learning Theory (ALT), pp. 63–77. 2005.

[2] Gretton, Arthur, Fukumizu, Kenji, Teo, Choon H., Song, Le, Schölkopf, Bernhard, and Smola, Alex J. A Kernel Statistical Test of Independence. In Advances in Neural Information Processing Systems, pp. 585–592. 2008.
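
A small sketch, using a hand-rolled Gaussian (RBF) kernel to build the required kernel matrices (the kernel helper below is illustrative, not part of kale):

>>> import torch
>>> from kale.predict.losses import hsic
>>> def rbf_kernel(x, sigma=1.0):
...     # illustrative RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
...     sq_dists = torch.cdist(x, x) ** 2
...     return torch.exp(-sq_dists / (2 * sigma ** 2))
>>> x = torch.randn(50, 10)
>>> y = torch.randn(50, 5)
>>> score = hsic(rbf_kernel(x), rbf_kernel(y), torch.device("cpu"))  # larger scores suggest stronger dependence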

kale.predict.losses.euclidean(x1, x2)

Computes the Euclidean distance between two sets of variables.

Parameters
  • x1 (torch.Tensor) – variables set 1

  • x2 (torch.Tensor) – variables set 2

Returns

Euclidean distance

Return type

torch.Tensor

kale.predict.decode module

class kale.predict.decode.MLPDecoder(in_dim, hidden_dim, out_dim, dropout_rate=0.1)

Bases: Module

The MLP decoder module, which comprises four fully connected layers. It is a common decoder for predicting drug-target interactions from encoded drug and target features.

Parameters
  • in_dim (int) – Dimension of input feature.

  • hidden_dim (int) – Dimension of hidden layers.

  • out_dim (int) – Dimension of output layer.

  • dropout_rate (float) – Dropout rate during training. Defaults to 0.1.

forward(x)
training: bool
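
A minimal usage sketch (the dimensions below are illustrative; out_dim=1 corresponds to a single interaction score per drug-target pair):

>>> import torch
>>> from kale.predict.decode import MLPDecoder
>>> decoder = MLPDecoder(in_dim=256, hidden_dim=512, out_dim=1, dropout_rate=0.1)
>>> encoding = torch.randn(32, 256)  # a batch of 32 joint drug-target encodings
>>> scores = decoder(encoding)  # predicted interaction scores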

Module contents