vlkit.ops package
Submodules
vlkit.ops.blocks module
vlkit.ops.conv module
vlkit.ops.distributed module
- class vlkit.ops.distributed.AllGather(*args, **kwargs)[source]
Bases: torch.autograd.function.Function
all_gather with gradient back-propagation.
- static backward(ctx, *grad_list)[source]
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring gradients, you can pass None as its gradient.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- static forward(ctx, tensor_list, tensor)[source]
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store arbitrary data that can be then retrieved during the backward pass.
- vlkit.ops.distributed.all_gather()
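A plain torch.distributed.all_gather returns tensors that are detached from the autograd graph; wrapping it in an autograd.Function, as AllGather does, restores the gradient path. A minimal single-process sketch of the usual mechanics (assumed behaviour, not vlkit's exact code; gloo backend, world size 1 so it runs without launching workers):

```python
import tempfile
import torch
import torch.distributed as dist

class AllGatherSketch(torch.autograd.Function):
    """all_gather whose backward routes each rank's slice of the output
    gradient back to the local input (a sketch; vlkit's AllGather may
    differ in detail, e.g. it takes a tensor_list argument)."""

    @staticmethod
    def forward(ctx, tensor):
        ctx.batch = tensor.shape[0]
        world = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(world, tensor)
        return torch.cat(world, dim=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Each rank keeps only the gradient of its own slice of the output.
        start = dist.get_rank() * ctx.batch
        return grad_output[start:start + ctx.batch]

# Single-process "cluster" so the sketch is runnable as-is.
store_file = tempfile.NamedTemporaryFile(delete=False)
dist.init_process_group("gloo", init_method=f"file://{store_file.name}",
                        rank=0, world_size=1)
x = torch.ones(2, 3, requires_grad=True)
gathered = AllGatherSketch.apply(x)
gathered.sum().backward()  # gradients flow back to the local input
```

This pattern is common in contrastive-learning code, where features gathered from all ranks must still contribute gradients to the local backward pass.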
vlkit.ops.divergence module
- vlkit.ops.divergence.js_divergence(x, y)[source]
The Jensen–Shannon divergence. Inputs follow the same convention as kl_divergence.
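The Jensen–Shannon divergence is the symmetrized, smoothed form of the KL divergence: each distribution is compared against their average. A NumPy sketch of the standard definition (the exact input conventions of vlkit's kl_divergence are assumed, not confirmed):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for discrete distributions along the last axis.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def js_divergence(p, q):
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

Unlike KL, the result is symmetric in its arguments and bounded above by log 2.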
vlkit.ops.drop module
- class vlkit.ops.drop.DropPath(p=0)[source]
Bases: torch.nn.modules.module.Module
Randomly drop paths (zero the output) per sample.
p: probability of dropping a sample
- extra_repr()[source]
Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
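DropPath implements stochastic depth: during training, a whole sample's output is zeroed with probability p, and survivors are rescaled so the expected activation is unchanged. A minimal sketch of that behaviour (assumed from the docstring above, not vlkit's exact code):

```python
import torch
import torch.nn as nn

class DropPathSketch(nn.Module):
    """Stochastic-depth sketch: zero a whole sample with probability p
    and rescale survivors by 1 / (1 - p)."""

    def __init__(self, p=0.0):
        super().__init__()
        self.p = p

    def forward(self, x):
        if not self.training or self.p == 0:
            return x
        keep = 1.0 - self.p
        # One Bernoulli draw per sample, broadcast over the remaining dims.
        shape = (x.shape[0],) + (1,) * (x.dim() - 1)
        mask = torch.bernoulli(torch.full(shape, keep, device=x.device))
        return x * mask / keep
```

In eval mode the module is the identity, which is why the rescaling by 1/(1-p) is applied at training time rather than at inference.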
vlkit.ops.dual_bn module
vlkit.ops.functional module
vlkit.ops.nn module
- class vlkit.ops.nn.ArcFace(in_features, out_features, s=32, m=0.5, ada_m=False, warmup_iters=-1, return_m=False)[source]
Bases: torch.nn.modules.module.Module
ArcFace (https://arxiv.org/pdf/1801.07698)
- extra_repr()[source]
Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input, label=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
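The heart of ArcFace is its additive angular margin: the target-class cosine similarity cos(θ) is replaced by cos(θ + m) before rescaling all logits by s, which tightens the decision boundary. A NumPy sketch of that margin step (following the linked paper; the function name and exact handling of ada_m/warmup_iters in vlkit are assumptions):

```python
import numpy as np

def arcface_logits(cos_theta, labels, s=32.0, m=0.5):
    """Apply the additive angular margin m to the target class of each row
    of cosine similarities, then scale by s (sketch of the ArcFace paper)."""
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    out = np.array(cos_theta, dtype=float, copy=True)
    rows = np.arange(len(labels))
    # cos(theta + m) < cos(theta), so the target logit is always penalized.
    out[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * out
```

Since cos is decreasing on [0, π], adding m always lowers the target logit, forcing the network to learn embeddings whose target angle is at least m smaller than any competing class.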
- vlkit.ops.nn.deconv_upsample(channels, stride, fixed=True)[source]
channels: number of input/output channels. stride: upsampling factor. fixed: whether to freeze the deconv parameters (default: True).
- vlkit.ops.nn.upsample_filter(size)[source]
Make a 2D bilinear kernel suitable for upsampling at the given (h, w) size. Reference: https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/surgery.py
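The bilinear kernel used to initialize such deconvolutions is an outer product of two triangular 1D profiles centered on the kernel. A sketch for a square kernel, following the recipe in the FCN surgery.py reference (the square-size simplification is an assumption; the documented function accepts an (h, w) size):

```python
import numpy as np

def bilinear_filter(size):
    """2D bilinear interpolation kernel of shape (size, size)."""
    factor = (size + 1) // 2
    # Kernel center: on a pixel for odd sizes, between pixels for even sizes.
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    # Separable triangle profile: weight falls off linearly from the center.
    return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
```

Initializing a stride-s transposed convolution with this kernel makes it compute exact bilinear interpolation, which is why deconv_upsample with fixed=True can serve as a parameter-free upsampling layer.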
vlkit.ops.non_local module
- class vlkit.ops.non_local.NonLocal(in_chs, hidden_chs=None, return_affinity=False)[source]
Bases: torch.nn.modules.module.Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
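A non-local block projects the input into query/key/value spaces, computes a softmax-normalized affinity between every pair of positions, and aggregates values with it; return_affinity presumably exposes that matrix. The core computation, sketched in NumPy on flattened (n, c) features with hypothetical projection matrices (names and shapes are illustrative, not vlkit's internals):

```python
import numpy as np

def nonlocal_sketch(x, w_theta, w_phi, w_g):
    """Embedded-Gaussian non-local operation on (n, c) features x.
    w_theta, w_phi, w_g are hypothetical (c, h) projection matrices."""
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    logits = theta @ phi.T                       # (n, n) pairwise similarity
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    affinity = e / e.sum(axis=1, keepdims=True)  # softmax over positions
    return affinity @ g, affinity
```

Each output position is thus a weighted sum over all positions, which is what lets the block capture long-range dependencies that stacked local convolutions only approximate.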
vlkit.ops.scale_grad module
- class vlkit.ops.scale_grad.ScaleGrad(scale)[source]
Bases: torch.nn.modules.module.Module
Scale the gradient. This module can be used to invert the gradient (e.g. scale=-1) or block it (e.g. scale=0).
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class vlkit.ops.scale_grad.ScaleGradFunc(*args, **kwargs)[source]
Bases: torch.autograd.function.Function
Scale the gradient.
- static backward(ctx, grad_output)[source]
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring gradients, you can pass None as its gradient.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- static forward(ctx, x, scale=1)[source]
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store arbitrary data that can be then retrieved during the backward pass.
- vlkit.ops.scale_grad.scale_grad()
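Gradient scaling is identity in the forward pass with the gradient multiplied by a constant in the backward pass; scale=-1 gives the gradient-reversal trick used in adversarial training, and scale=0 blocks the gradient entirely. A minimal autograd.Function sketch (the same signature as the documented forward above, though vlkit's exact code may differ):

```python
import torch

class ScaleGradSketch(torch.autograd.Function):
    """Identity in forward; gradient multiplied by `scale` in backward."""

    @staticmethod
    def forward(ctx, x, scale=1):
        ctx.scale = scale
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Return one gradient per forward input; the non-tensor `scale`
        # argument gets None.
        return grad_output * ctx.scale, None
```

Usage: y = ScaleGradSketch.apply(x, -1.0) leaves the forward values untouched but flips the sign of every gradient flowing back through y.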