numpy_backend

class pyhf.tensor.numpy_backend.numpy_backend(**kwargs: str)

Bases: Generic[pyhf.tensor.numpy_backend.T]

NumPy backend for pyhf

__init__(**kwargs: str)

Attributes

name
precision
dtypemap: Mapping[FloatIntOrBool, DTypeLike]
default_do_grad: bool
array_type = <class 'numpy.ndarray'>

The array type for numpy

array_subtype = <class 'numpy.number'>

The array content type for numpy
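
Example

A minimal illustration of inspecting these attributes on the active backend; the values shown assume the default 64-bit precision.

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> pyhf.tensorlib.name
'numpy'
>>> pyhf.tensorlib.precision
'64b'
>>> pyhf.tensorlib.array_type
<class 'numpy.ndarray'>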

Methods

_setup() → None

Run any global setups for the numpy lib.

abs(tensor: Tensor[T]) → ArrayLike
astensor(tensor_in: numpy.typing.ArrayLike, dtype: Literal['float', 'int', 'bool'] = 'float') → numpy.typing.ArrayLike

Convert to a NumPy array.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> tensor
array([[1., 2., 3.],
       [4., 5., 6.]])
>>> type(tensor)
<class 'numpy.ndarray'>
Parameters:

tensor_in (Number or Tensor) – Tensor object

Returns:

A multi-dimensional, fixed-size homogeneous array.

Return type:

numpy.ndarray

boolean_mask(tensor: Tensor[T], mask: NDArray[np.bool_]) → ArrayLike
clip(tensor_in: Tensor[T], min_value: np.integer[T] | np.floating[T], max_value: np.integer[T] | np.floating[T]) → ArrayLike

Clips (limits) the tensor values to be within a specified min and max.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([-2, -1, 0, 1, 2])
>>> pyhf.tensorlib.clip(a, -1, 1)
array([-1., -1.,  0.,  1.,  1.])
Parameters:
  • tensor_in (tensor) – The input tensor object

  • min_value (scalar or tensor or None) – The minimum value to be clipped to

  • max_value (scalar or tensor or None) – The maximum value to be clipped to

Returns:

A clipped tensor

Return type:

NumPy ndarray

concatenate(sequence: Tensor[T], axis: None | int = 0) → ArrayLike

Join a sequence of arrays along an existing axis.
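
Example

A small illustration, joining two one-dimensional tensors along the default axis:

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([1.0, 2.0])
>>> b = pyhf.tensorlib.astensor([3.0, 4.0])
>>> pyhf.tensorlib.concatenate([a, b])
array([1., 2., 3., 4.])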

Parameters:
  • sequence – sequence of tensors

  • axis – dimension along which to concatenate

Returns:

the concatenated tensor

Return type:

numpy.ndarray

conditional(predicate: NDArray[np.bool_], true_callable: Callable[[], Tensor[T]], false_callable: Callable[[], Tensor[T]]) → ArrayLike

Run one of two callables, selected by the boolean value of the evaluated predicate.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> tensorlib = pyhf.tensorlib
>>> a = tensorlib.astensor([4])
>>> b = tensorlib.astensor([5])
>>> tensorlib.conditional((a < b)[0], lambda: a + b, lambda: a - b)
array([9.])
Parameters:
  • predicate (scalar) – The logical condition that determines which callable to evaluate

  • true_callable (callable) – The callable that is evaluated when the predicate evaluates to true

  • false_callable (callable) – The callable that is evaluated when the predicate evaluates to false

Returns:

The output of the callable that was evaluated

Return type:

NumPy ndarray

divide(tensor_in_1: Tensor[T], tensor_in_2: Tensor[T]) → ArrayLike
einsum(subscripts: str, *operands: Sequence[Tensor[T]]) → ArrayLike

Evaluates the Einstein summation convention on the operands.

Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way to compute such summations. The best way to understand this function is to try the examples below, which show how many common NumPy functions can be implemented as calls to einsum.
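
Example

A small illustration with arbitrarily chosen values: the subscripts "ij,jk->ik" express a matrix product and "ij->ji" a transpose.

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([[1.0, 2.0], [3.0, 4.0]])
>>> b = pyhf.tensorlib.astensor([[5.0, 6.0], [7.0, 8.0]])
>>> pyhf.tensorlib.einsum("ij,jk->ik", a, b)
array([[19., 22.],
       [43., 50.]])
>>> pyhf.tensorlib.einsum("ij->ji", a)
array([[1., 3.],
       [2., 4.]])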

Parameters:
  • subscripts (str) – Specifies the subscripts for summation

  • operands (list of array_like) – The tensors for the operation

Returns:

the calculation based on the Einstein summation convention

Return type:

tensor

erf(tensor_in: Tensor[T]) → ArrayLike

The error function of a complex argument.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([-2., -1., 0., 1., 2.])
>>> pyhf.tensorlib.erf(a)
array([-0.99532227, -0.84270079,  0.        ,  0.84270079,  0.99532227])
Parameters:

tensor_in (tensor) – The input tensor object

Returns:

The values of the error function at the given points.

Return type:

NumPy ndarray

erfinv(tensor_in: Tensor[T]) → ArrayLike

The inverse of the error function of a complex argument.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([-2., -1., 0., 1., 2.])
>>> pyhf.tensorlib.erfinv(pyhf.tensorlib.erf(a))
array([-2., -1.,  0.,  1.,  2.])
Parameters:

tensor_in (tensor) – The input tensor object

Returns:

The values of the inverse of the error function at the given points.

Return type:

NumPy ndarray

exp(tensor_in: Tensor[T]) → ArrayLike
gather(tensor: Tensor[T], indices: NDArray[np.integer[T]]) → ArrayLike
isfinite(tensor: Tensor[T]) → NDArray[np.bool_]
log(tensor_in: Tensor[T]) → ArrayLike
normal(x: Tensor[T], mu: Tensor[T], sigma: Tensor[T]) → ArrayLike

The probability density function of the Normal distribution, evaluated at x given a mean of mu and a standard deviation of sigma.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> pyhf.tensorlib.normal(0.5, 0., 1.)
0.35206532...
>>> values = pyhf.tensorlib.astensor([0.5, 2.0])
>>> means = pyhf.tensorlib.astensor([0., 2.3])
>>> sigmas = pyhf.tensorlib.astensor([1., 0.8])
>>> pyhf.tensorlib.normal(values, means, sigmas)
array([0.35206533, 0.46481887])
Parameters:
  • x (tensor or float) – The value at which to evaluate the Normal distribution p.d.f.

  • mu (tensor or float) – The mean of the Normal distribution

  • sigma (tensor or float) – The standard deviation of the Normal distribution

Returns:

Value of Normal(x|mu, sigma)

Return type:

NumPy float

normal_cdf(x: Tensor[T], mu: float | Tensor[T] = 0, sigma: float | Tensor[T] = 1) → ArrayLike

The cumulative distribution function of the Normal distribution.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> pyhf.tensorlib.normal_cdf(0.8)
0.78814460...
>>> values = pyhf.tensorlib.astensor([0.8, 2.0])
>>> pyhf.tensorlib.normal_cdf(values)
array([0.7881446 , 0.97724987])
Parameters:
  • x (tensor or float) – The observed value of the random variable to evaluate the CDF for

  • mu (tensor or float) – The mean of the Normal distribution

  • sigma (tensor or float) – The standard deviation of the Normal distribution

Returns:

The CDF

Return type:

NumPy float

normal_dist(mu: Tensor[T], sigma: Tensor[T]) → _BasicNormal

The Normal distribution with mean mu and standard deviation sigma.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> means = pyhf.tensorlib.astensor([5, 8])
>>> stds = pyhf.tensorlib.astensor([1, 0.5])
>>> values = pyhf.tensorlib.astensor([4, 9])
>>> normals = pyhf.tensorlib.normal_dist(means, stds)
>>> normals.log_prob(values)
array([-1.41893853, -2.22579135])
Parameters:
  • mu (tensor or float) – The mean of the Normal distribution

  • sigma (tensor or float) – The standard deviation of the Normal distribution

Returns:

The Normal distribution class

Return type:

Normal distribution

normal_logpdf(x: Tensor[T], mu: Tensor[T], sigma: Tensor[T]) → ArrayLike
ones(shape: Tuple[int, ...], dtype: Literal['float', 'int', 'bool'] = 'float') → numpy.typing.ArrayLike
outer(tensor_in_1: Tensor[T], tensor_in_2: Tensor[T]) → ArrayLike
percentile(tensor_in: Tensor[T], q: Tensor[T], axis: None | Shape = None, interpolation: Literal['linear', 'lower', 'higher', 'midpoint', 'nearest'] = 'linear') → ArrayLike

Compute the \(q\)-th percentile of the tensor along the specified axis.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([[10, 7, 4], [3, 2, 1]])
>>> pyhf.tensorlib.percentile(a, 50)
3.5
>>> pyhf.tensorlib.percentile(a, 50, axis=1)
array([7., 2.])
Parameters:
  • tensor_in (tensor) – The tensor containing the data

  • q (float or tensor) – The \(q\)-th percentile to compute

  • axis (number or tensor) – The dimensions along which to compute

  • interpolation (str) –

    The interpolation method to use when the desired percentile lies between two data points i < j:

    • 'linear': i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j.

    • 'lower': i.

    • 'higher': j.

    • 'midpoint': (i + j) / 2.

    • 'nearest': i or j, whichever is nearest.

Returns:

The value of the \(q\)-th percentile of the tensor along the specified axis.

Return type:

NumPy ndarray

New in version 0.7.0.

poisson(n: Tensor[T], lam: Tensor[T]) → ArrayLike

The continuous approximation, using \(n! = \Gamma\left(n+1\right)\), to the probability mass function of the Poisson distribution evaluated at n given the parameter lam.

Note

Though the p.m.f. of the Poisson distribution is not defined for \(\lambda = 0\), the limit as \(\lambda \to 0\) is still defined, which gives a degenerate p.m.f. of

\[\begin{split}\lim_{\lambda \to 0} \,\mathrm{Pois}(n | \lambda) = \left\{\begin{array}{ll} 1, & n = 0,\\ 0, & n > 0 \end{array}\right.\end{split}\]

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> pyhf.tensorlib.poisson(5., 6.)
0.16062314...
>>> values = pyhf.tensorlib.astensor([5., 9.])
>>> rates = pyhf.tensorlib.astensor([6., 8.])
>>> pyhf.tensorlib.poisson(values, rates)
array([0.16062314, 0.12407692])
Parameters:
  • n (tensor or float) – The value at which to evaluate the approximation to the Poisson distribution p.m.f. (the observed number of events)

  • lam (tensor or float) – The mean of the Poisson distribution p.m.f. (the expected number of events)

Returns:

Value of the continuous approximation to Poisson(n|lam)

Return type:

NumPy float

poisson_dist(rate: Tensor[T]) → _BasicPoisson

The Poisson distribution with rate parameter rate.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> rates = pyhf.tensorlib.astensor([5, 8])
>>> values = pyhf.tensorlib.astensor([4, 9])
>>> poissons = pyhf.tensorlib.poisson_dist(rates)
>>> poissons.log_prob(values)
array([-1.74030218, -2.0868536 ])
Parameters:

rate (tensor or float) – The mean of the Poisson distribution (the expected number of events)

Returns:

The Poisson distribution class

Return type:

Poisson distribution

poisson_logpdf(n: Tensor[T], lam: Tensor[T]) → ArrayLike
power(tensor_in_1: Tensor[T], tensor_in_2: Tensor[T]) → ArrayLike
product(tensor_in: Tensor[T], axis: Shape | None = None) → ArrayLike
ravel(tensor: Tensor[T]) → ArrayLike

Return a flattened view of the tensor; a copy is made only if needed.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> pyhf.tensorlib.ravel(tensor)
array([1., 2., 3., 4., 5., 6.])
Parameters:

tensor (Tensor) – Tensor object

Returns:

A flattened array.

Return type:

numpy.ndarray

reshape(tensor: Tensor[T], newshape: Shape) → ArrayLike
shape(tensor: Tensor[T]) → Shape
simple_broadcast(*args: Sequence[Tensor[T]]) → Sequence[Tensor[T]]

Broadcast a sequence of one-dimensional arrays.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> pyhf.tensorlib.simple_broadcast(
...   pyhf.tensorlib.astensor([1]),
...   pyhf.tensorlib.astensor([2, 3, 4]),
...   pyhf.tensorlib.astensor([5, 6, 7]))
[array([1., 1., 1.]), array([2., 3., 4.]), array([5., 6., 7.])]
Parameters:

args (Array of Tensors) – Sequence of arrays

Returns:

The sequence broadcast together.

Return type:

list of Tensors

sqrt(tensor_in: Tensor[T]) → ArrayLike
stack(sequence: Sequence[Tensor[T]], axis: int = 0) → ArrayLike
sum(tensor_in: Tensor[T], axis: int | None = None) → ArrayLike
tile(tensor_in: Tensor[T], repeats: int | Sequence[int]) → ArrayLike

Repeat tensor data along a specific dimension.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> a = pyhf.tensorlib.astensor([[1.0], [2.0]])
>>> pyhf.tensorlib.tile(a, (1, 2))
array([[1., 1.],
       [2., 2.]])
Parameters:
  • tensor_in (tensor) – The tensor to be repeated

  • repeats (tensor) – The tuple of multipliers for each dimension

Returns:

The tensor with repeated axes

Return type:

NumPy ndarray

to_numpy(tensor_in: Tensor[T]) → ArrayLike

Return the input tensor unchanged, as it is already a numpy.ndarray. This API exists only for pyhf.tensorlib compatibility.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> tensor
array([[1., 2., 3.],
       [4., 5., 6.]])
>>> numpy_ndarray = pyhf.tensorlib.to_numpy(tensor)
>>> numpy_ndarray
array([[1., 2., 3.],
       [4., 5., 6.]])
>>> type(numpy_ndarray)
<class 'numpy.ndarray'>
Parameters:

tensor_in (tensor) – The input tensor object.

Returns:

The tensor converted to a NumPy ndarray.

Return type:

numpy.ndarray

tolist(tensor_in: Tensor[T] | list[T]) → list[T]
transpose(tensor_in: Tensor[T]) → ArrayLike

Transpose the tensor.

Example

>>> import pyhf
>>> pyhf.set_backend("numpy")
>>> tensor = pyhf.tensorlib.astensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> tensor
array([[1., 2., 3.],
       [4., 5., 6.]])
>>> pyhf.tensorlib.transpose(tensor)
array([[1., 4.],
       [2., 5.],
       [3., 6.]])
Parameters:

tensor_in (tensor) – The input tensor object.

Returns:

The transpose of the input tensor.

Return type:

numpy.ndarray

New in version 0.7.0.

where(mask: NDArray[np.bool_], tensor_in_1: Tensor[T], tensor_in_2: Tensor[T]) → ArrayLike
zeros(shape: Tuple[int, ...], dtype: Literal['float', 'int', 'bool'] = 'float') → numpy.typing.ArrayLike