smoothness: Backwards differences

smoothness
- hypercoil.loss.smoothness(X: Tensor, *, n: int = 1, pad_value: float | None = None, axis: int = -1, key: PRNGKey | None = None) → Tensor [source]
Smoothness score function.
This loss penalises large or sudden changes in the input tensor. It is currently a thin wrapper around jax.numpy.diff.

Warning

This function returns both positive and negative values, and so should probably not be used with a scalarisation map like mean_scalarise or sum_scalarise. Instead, maps like meansq_scalarise or vnorm_scalarise with either the p=1 or p=inf options might be more appropriate.

- Parameters:
- X: Tensor
Input tensor.
- n: int, optional (default: 1)
Number of times to differentiate using the backwards differences method.
- axis: int, optional (default: -1)
Axis defining the slice of the input tensor over which differences are computed.
- pad_value: float, optional (default: None)
Argument to jnp.diff: value to prepend to the input along the specified axis before computing the difference.
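Example (an illustrative sketch, not part of the documented API: the sample input, the assumption that the default pad_value=None leaves the differenced axis one element shorter, and the final comparison against jnp.diff all follow from the "thin wrapper around jax.numpy.diff" description above):

>>> import jax.numpy as jnp
>>> from hypercoil.loss import smoothness
>>> X = jnp.asarray([
...     [0.0, 0.1, 0.2, 0.3, 0.4],   # smooth ramp: small differences
...     [0.0, 1.0, 0.0, 1.0, 0.0],   # oscillation: large, sign-alternating differences
...     [0.0, 0.0, 5.0, 5.0, 5.0],   # a single abrupt jump
... ])
>>> score = smoothness(X, n=1, axis=-1)
>>> score.shape                        # one backwards difference per adjacent pair
(3, 4)
>>> # Signed differences can cancel under a plain mean, hence the warning above:
>>> float(score[1].mean())
0.0
>>> float(jnp.abs(score[1]).sum())     # an L1-style aggregation reflects the true roughness
4.0
>>> # If the wrapper forwards directly to jnp.diff, the two agree:
>>> bool(jnp.allclose(score, jnp.diff(X, n=1, axis=-1)))
True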
SmoothnessLoss
- class hypercoil.loss.SmoothnessLoss(nu: float = 1.0, name: str | None = None, *, n: int = 1, pad_value: float | None = None, axis: int = -1, scalarisation: Callable | None = None, key: 'jax.random.PRNGKey' | None = None)[source]
Smoothness loss function.
This loss penalises large or sudden changes in the input tensor. It is currently a thin wrapper around jax.numpy.diff.

Warning

This function returns both positive and negative values, and so should probably not be used with a scalarisation map like mean_scalarise or sum_scalarise. Instead, maps like meansq_scalarise or vnorm_scalarise with either the p=1 or p=inf options might be more appropriate.

- Parameters:
- name: str
Designated name of the loss function. It is not required that this be specified, but it is recommended to ensure that the loss function can be identified in the context of reporting utilities. If not explicitly specified, the name will be inferred from the class name and the name of the scoring function.
- nu: float
Loss strength multiplier. This is a scalar multiplier that is applied to the loss value before it is returned. This can be used to modulate the relative contributions of different loss functions to the overall loss value. It can also be used to implement a schedule for the loss function, by dynamically adjusting the multiplier over the course of training.
- n: int, optional (default: 1)
Number of times to differentiate using the backwards differences method.
- axis: int, optional (default: -1)
Axis defining the slice of the input tensor over which differences are computed.
- pad_value: float, optional (default: None)
Argument to jnp.diff: value to prepend to the input along the specified axis before computing the difference.
- scalarisation: Callable
The scalarisation function used to aggregate the values returned by the scoring function. This function should take a single argument, which is a tensor of arbitrary shape, and return a single scalar value. By default, the L1 norm scalarisation is used.
Methods

__call__(X, *[, key])
Call self as a function.
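Example (an illustrative sketch; the name 'TemporalSmoothness', the random input, and the reliance on the default L1-norm scalarisation are assumptions for demonstration, not prescribed usage):

>>> import jax
>>> from hypercoil.loss import SmoothnessLoss
>>> # Penalise roughness along the last axis; nu scales this term's contribution.
>>> loss = SmoothnessLoss(nu=0.1, name='TemporalSmoothness', n=1, axis=-1)
>>> key = jax.random.PRNGKey(0)
>>> X = jax.random.normal(key, (8, 100))   # e.g. 8 signals of 100 time points each
>>> value = loss(X, key=key)               # single value: differences aggregated by the
...                                        # default L1-norm scalarisation, scaled by nu

The nu multiplier and the scalarisation map together determine how the raw backwards differences contribute to an overall training objective, so a schedule can be implemented by adjusting nu over the course of training, as noted in the parameter description above.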