vlkit.pytorch package
Submodules
vlkit.pytorch.loss module
- class vlkit.pytorch.loss.FLoss(beta=0.3)
Bases: torch.nn.modules.module.Module
Code accompanying the paper https://arxiv.org/abs/1805.07567
- Parameters
beta – the beta parameter of the F-measure
- forward(prediction, target, weight=None)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
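A minimal usage sketch; the 4-D shapes and the binary target are assumptions, chosen because the paper applies this loss to saliency maps where prediction holds per-pixel foreground probabilities:

    import torch
    from vlkit.pytorch.loss import FLoss

    criterion = FLoss(beta=0.3)

    # prediction: probabilities in [0, 1]; target: binary ground-truth mask
    # (the shapes below are illustrative assumptions)
    prediction = torch.rand(4, 1, 64, 64, requires_grad=True)
    target = (torch.rand(4, 1, 64, 64) > 0.5).float()

    loss = criterion(prediction, target)  # call the instance, not forward(), so hooks run
    loss.backward()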
vlkit.pytorch.wasserstein module
- class vlkit.pytorch.wasserstein.WassersteinLoss(gm, reg, max_iter, eps=1e-06)
Bases: torch.nn.modules.module.Module
- forward(prediction, target)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
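A speculative usage sketch: the constructor arguments are documented only by name, so treating gm as a K x K ground-metric (cost) matrix between histogram bins, and prediction/target as batches of normalized histograms, are assumptions here:

    import torch
    from vlkit.pytorch.wasserstein import WassersteinLoss

    K = 10  # number of histogram bins (an assumption for illustration)
    # gm: presumed K x K ground-metric (cost) matrix between bin positions
    bins = torch.arange(K, dtype=torch.float32)
    gm = (bins[:, None] - bins[None, :]).abs()

    criterion = WassersteinLoss(gm, reg=0.1, max_iter=100)

    # prediction and target: batches of normalized histograms (rows sum to 1)
    prediction = torch.softmax(torch.randn(8, K, requires_grad=True), dim=1)
    target = torch.softmax(torch.randn(8, K), dim=1)

    loss = criterion(prediction, target)
    loss.backward()

Reading reg as the entropic-regularization weight and max_iter as the cap on Sinkhorn iterations follows standard entropic optimal-transport solvers and the numItermax parameter of WassersteinLossFunction.forward() below; both readings are inferences, not documented facts.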
- class vlkit.pytorch.wasserstein.WassersteinLossFunction(*args, **kwargs)
Bases: torch.autograd.function.Function
- static backward(ctx, grad_output)
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring gradients, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- static forward(ctx, prediction, label, M, reg, numItermax=100, eps=1e-06)
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store arbitrary data that can then be retrieved during the backward pass.
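The ctx contract described above (one returned gradient per forward() input, None for non-tensor inputs, ctx.save_for_backward plus ctx.needs_input_grad) is easiest to see in a toy custom Function. This is a generic illustration of the pattern, not the Sinkhorn computation that WassersteinLossFunction actually performs:

    import torch

    class ScaledSquare(torch.autograd.Function):
        """Toy example: y = alpha * x**2 with a hand-written backward."""

        @staticmethod
        def forward(ctx, x, alpha):
            ctx.save_for_backward(x)  # stash tensors needed in backward
            ctx.alpha = alpha         # non-tensor data can live on ctx directly
            return alpha * x * x

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            grad_x = None
            if ctx.needs_input_grad[0]:  # only compute gradients that are needed
                grad_x = grad_output * 2 * ctx.alpha * x
            # one return value per forward() input; alpha is not a tensor, so None
            return grad_x, None

    x = torch.randn(5, requires_grad=True)
    y = ScaledSquare.apply(x, 3.0)
    y.sum().backward()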
Module contents
- class vlkit.pytorch.AverageMeter
Bases: object
Computes and stores the average and current value.
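A usage sketch; the update(val, n) signature and the val/avg attributes follow the widespread AverageMeter convention and are assumptions not confirmed by this page:

    from vlkit.pytorch import AverageMeter

    losses = AverageMeter()
    for step in range(100):
        batch_loss = 0.5 / (step + 1)    # stand-in for a real per-batch loss
        losses.update(batch_loss, n=32)  # n: number of samples in the batch
    print(losses.val, losses.avg)        # latest value and running average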