pytorch_backend#
- class pyhf.tensor.pytorch_backend.pytorch_backend(**kwargs)[source]#
Bases: object
PyTorch backend for pyhf
Attributes
- name#
- precision#
- dtypemap#
- default_do_grad#
- array_subtype#
The array content type for PyTorch
- array_type#
The array type for PyTorch
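Example (illustrative; not part of the upstream docstring). Once the backend is set, these attributes can be inspected directly; the values shown assume the default configuration:
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> pyhf.tensorlib.name
'pytorch'
>>> pyhf.tensorlib.default_do_grad
True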
Methods
- astensor(tensor_in, dtype='float')[source]#
Convert to a PyTorch Tensor.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> tensor
tensor([[1., 2., 3.],
        [4., 5., 6.]])
>>> type(tensor)
<class 'torch.Tensor'>
- Parameters:
tensor_in (Number or Tensor) – Tensor object
- Returns:
A multi-dimensional matrix containing elements of a single data type.
- Return type:
torch.Tensor
- clip(tensor_in, min_value, max_value)[source]#
Clips (limits) the tensor values to be within a specified min and max.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([-2, -1, 0, 1, 2])
>>> pyhf.tensorlib.clip(a, -1, 1)
tensor([-1., -1., 0., 1., 1.])
- concatenate(sequence, axis=0)[source]#
Join a sequence of arrays along an existing axis.
- Parameters:
sequence – sequence of tensors
axis – dimension along which to concatenate
- Returns:
the concatenated tensor
- Return type:
output
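Example (an illustrative sketch, not taken from the upstream docstring), joining two 1-d tensors along the default axis:
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([1.0, 2.0])
>>> b = pyhf.tensorlib.astensor([3.0, 4.0])
>>> pyhf.tensorlib.concatenate([a, b])
tensor([1., 2., 3., 4.])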
- conditional(predicate, true_callable, false_callable)[source]#
Runs one of two callables depending on the boolean value of a predicate
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> tensorlib = pyhf.tensorlib
>>> a = tensorlib.astensor([4])
>>> b = tensorlib.astensor([5])
>>> tensorlib.conditional((a < b)[0], lambda: a + b, lambda: a - b)
tensor([9.])
- Parameters:
predicate (scalar) – The logical condition that determines which callable to evaluate
true_callable (callable) – The callable that is evaluated when the predicate evaluates to true
false_callable (callable) – The callable that is evaluated when the predicate evaluates to false
- Returns:
The output of the callable that was evaluated
- Return type:
PyTorch Tensor
- einsum(subscripts, *operands)[source]#
This function provides a way of computing multilinear expressions (i.e. sums of products) using the Einstein summation convention.
- Parameters:
subscripts – str, specifies the subscripts for summation
operands – list of array_like, these are the tensors for the operation
- Returns:
the calculation based on the Einstein summation convention
- Return type:
tensor
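Example (an illustrative sketch, not taken from the upstream docstring), using the subscript specification "ij->ji" to transpose a 2-d tensor:
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([[1.0, 2.0], [3.0, 4.0]])
>>> pyhf.tensorlib.einsum("ij->ji", a)
tensor([[1., 3.],
        [2., 4.]])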
- erf(tensor_in)[source]#
The error function of a complex argument.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([-2., -1., 0., 1., 2.])
>>> pyhf.tensorlib.erf(a)
tensor([-0.9953, -0.8427, 0.0000, 0.8427, 0.9953])
- Parameters:
tensor_in (tensor) – The input tensor object
- Returns:
The values of the error function at the given points.
- Return type:
PyTorch Tensor
- erfinv(tensor_in)[source]#
The inverse of the error function of a complex argument.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([-2., -1., 0., 1., 2.])
>>> pyhf.tensorlib.erfinv(pyhf.tensorlib.erf(a))
tensor([-2.0000, -1.0000, 0.0000, 1.0000, 2.0000])
- Parameters:
tensor_in (tensor) – The input tensor object
- Returns:
The values of the inverse of the error function at the given points.
- Return type:
PyTorch Tensor
- normal(x, mu, sigma)[source]#
The probability density function of the Normal distribution evaluated at x given parameters of mean mu and standard deviation sigma.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> pyhf.tensorlib.normal(0.5, 0., 1.)
tensor(0.3521)
>>> values = pyhf.tensorlib.astensor([0.5, 2.0])
>>> means = pyhf.tensorlib.astensor([0., 2.3])
>>> sigmas = pyhf.tensorlib.astensor([1., 0.8])
>>> pyhf.tensorlib.normal(values, means, sigmas)
tensor([0.3521, 0.4648])
- Parameters:
x (tensor or float) – The value at which to evaluate the Normal distribution p.d.f.
mu (tensor or float) – The mean of the Normal distribution
sigma (tensor or float) – The standard deviation of the Normal distribution
- Returns:
Value of Normal(x|mu, sigma)
- Return type:
PyTorch FloatTensor
- normal_cdf(x, mu=0.0, sigma=1.0)[source]#
The cumulative distribution function for the Normal distribution
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> pyhf.tensorlib.normal_cdf(0.8)
tensor(0.7881)
>>> values = pyhf.tensorlib.astensor([0.8, 2.0])
>>> pyhf.tensorlib.normal_cdf(values)
tensor([0.7881, 0.9772])
- normal_dist(mu, sigma)[source]#
The Normal distribution with mean mu and standard deviation sigma.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> means = pyhf.tensorlib.astensor([5, 8])
>>> stds = pyhf.tensorlib.astensor([1, 0.5])
>>> values = pyhf.tensorlib.astensor([4, 9])
>>> normals = pyhf.tensorlib.normal_dist(means, stds)
>>> normals.log_prob(values)
tensor([-1.4189, -2.2258])
- outer(tensor_in_1, tensor_in_2)[source]#
Outer product of the input tensors.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([1.0, 2.0, 3.0])
>>> b = pyhf.tensorlib.astensor([1.0, 2.0, 3.0, 4.0])
>>> pyhf.tensorlib.outer(a, b)
tensor([[ 1.,  2.,  3.,  4.],
        [ 2.,  4.,  6.,  8.],
        [ 3.,  6.,  9., 12.]])
- Parameters:
tensor_in_1 (tensor) – 1-D input tensor.
tensor_in_2 (tensor) – 1-D input tensor.
- Returns:
The outer product.
- Return type:
PyTorch tensor
- percentile(tensor_in, q, axis=None, interpolation='linear')[source]#
Compute the \(q\)-th percentile of the tensor along the specified axis.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([[10, 7, 4], [3, 2, 1]])
>>> pyhf.tensorlib.percentile(a, 50)
tensor(3.5000)
>>> pyhf.tensorlib.percentile(a, 50, axis=1)
tensor([7., 2.])
- Parameters:
tensor_in (tensor) – The tensor containing the data
q (float or tensor) – The \(q\)-th percentile to compute
axis (number or tensor) – The dimensions along which to compute
interpolation (str) – The interpolation method to use when the desired percentile lies between two data points i < j:
'linear': i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j.
'lower': Not yet implemented in PyTorch.
'higher': Not yet implemented in PyTorch.
'midpoint': Not yet implemented in PyTorch.
'nearest': Not yet implemented in PyTorch.
- Returns:
The value of the \(q\)-th percentile of the tensor along the specified axis.
- Return type:
PyTorch tensor
Added in version 0.7.0.
- poisson(n, lam)[source]#
The continuous approximation, using \(n! = \Gamma\left(n+1\right)\), to the probability mass function of the Poisson distribution evaluated at n given the parameter lam.
Note
Though the p.m.f. of the Poisson distribution is not defined for \(\lambda = 0\), the limit as \(\lambda \to 0\) is still defined, which gives a degenerate p.m.f. of
\[\begin{split}\lim_{\lambda \to 0} \,\mathrm{Pois}(n | \lambda) = \left\{\begin{array}{ll} 1, & n = 0,\\ 0, & n > 0 \end{array}\right.\end{split}\]
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> pyhf.tensorlib.poisson(5., 6.)
tensor(0.1606)
>>> values = pyhf.tensorlib.astensor([5., 9.])
>>> rates = pyhf.tensorlib.astensor([6., 8.])
>>> pyhf.tensorlib.poisson(values, rates)
tensor([0.1606, 0.1241])
- Parameters:
n (tensor or float) – The value at which to evaluate the approximation to the Poisson distribution p.m.f. (the observed number of events)
lam (tensor or float) – The mean of the Poisson distribution p.m.f. (the expected number of events)
- Returns:
Value of the continuous approximation to Poisson(n|lam)
- Return type:
PyTorch FloatTensor
- poisson_dist(rate)[source]#
The Poisson distribution with rate parameter rate.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> rates = pyhf.tensorlib.astensor([5, 8])
>>> values = pyhf.tensorlib.astensor([4, 9])
>>> poissons = pyhf.tensorlib.poisson_dist(rates)
>>> poissons.log_prob(values)
tensor([-1.7403, -2.0869])
- Parameters:
rate (tensor or float) – The mean of the Poisson distribution (the expected number of events)
- Returns:
The Poisson distribution class
- Return type:
PyTorch Poisson distribution
- ravel(tensor)[source]#
Return a flattened view of the tensor, not a copy.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> pyhf.tensorlib.ravel(tensor)
tensor([1., 2., 3., 4., 5., 6.])
- Parameters:
tensor (Tensor) – Tensor object
- Returns:
A flattened array.
- Return type:
torch.Tensor
- simple_broadcast(*args)[source]#
Broadcast a sequence of 1 dimensional arrays.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> pyhf.tensorlib.simple_broadcast(
...     pyhf.tensorlib.astensor([1]),
...     pyhf.tensorlib.astensor([2, 3, 4]),
...     pyhf.tensorlib.astensor([5, 6, 7]))
[tensor([1., 1., 1.]), tensor([2., 3., 4.]), tensor([5., 6., 7.])]
- Parameters:
args (Array of Tensors) – Sequence of arrays
- Returns:
The sequence broadcast together.
- Return type:
list of Tensors
- tile(tensor_in, repeats)[source]#
Repeat tensor data along a specific dimension
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> a = pyhf.tensorlib.astensor([[1.0], [2.0]])
>>> pyhf.tensorlib.tile(a, (1, 2))
tensor([[1., 1.],
        [2., 2.]])
- Parameters:
tensor_in (tensor) – The tensor to be repeated
repeats (tensor) – The tuple of multipliers for each dimension
- Returns:
The tensor with repeated axes
- Return type:
PyTorch tensor
- to_numpy(tensor_in)[source]#
Convert the PyTorch tensor to a numpy.ndarray.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> tensor
tensor([[1., 2., 3.],
        [4., 5., 6.]])
>>> numpy_ndarray = pyhf.tensorlib.to_numpy(tensor)
>>> numpy_ndarray
array([[1., 2., 3.],
       [4., 5., 6.]])
>>> type(numpy_ndarray)
<class 'numpy.ndarray'>
- Parameters:
tensor_in (tensor) – The input tensor object.
- Returns:
The tensor converted to a NumPy ndarray.
- Return type:
numpy.ndarray
- transpose(tensor_in)[source]#
Transpose the tensor.
Example
>>> import pyhf
>>> pyhf.set_backend("pytorch")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> tensor
tensor([[1., 2., 3.],
        [4., 5., 6.]])
>>> pyhf.tensorlib.transpose(tensor)
tensor([[1., 4.],
        [2., 5.],
        [3., 6.]])
- Parameters:
tensor_in (tensor) – The input tensor object.
- Returns:
The transpose of the input tensor.
- Return type:
PyTorch FloatTensor
Added in version 0.7.0.