Using Calculators#

One piece of low-level functionality that pyhf provides for statistical fits is the idea of a calculator: an object that evaluates a hypothesis test using either asymptotic formulae or toy-based sampling.

This notebook quickly introduces what these calculators are meant to do and how they are used internally in the code. We’ll set up a simple model for demonstration and then show how the calculators come into play.

[1]:
import numpy as np
import pyhf

np.random.seed(0)
[2]:
model = pyhf.simplemodels.uncorrelated_background([6], [9], [3])
data = [9] + model.config.auxdata

The high-level API#

If the only thing you are interested in is the hypothesis test result you can just run the high-level API to get it:

[3]:
CLs_obs, CLs_exp = pyhf.infer.hypotest(1.0, data, model, return_expected_set=True)
print(f'CLs_obs = {CLs_obs}')
print(f'CLs_exp = {CLs_exp}')
CLs_obs = 0.1677886052335611
CLs_exp = [array(0.0159689), array(0.05465771), array(0.16778861), array(0.41863467), array(0.74964133)]
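
If you also want the tail probabilities that go into \(\mathrm{CL}_\mathrm{s}\), the high-level API can hand them back as well, via the return_tail_probs keyword argument of pyhf.infer.hypotest:

# CL_s together with the tail probabilities [CL_sb, CL_b] it is built from
CLs_obs, tail_probs = pyhf.infer.hypotest(1.0, data, model, return_tail_probs=True)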

The low-level API#

Under the hood, the hypothesis test computes test statistics (such as \(q_\mu, \tilde{q}_\mu\)) and uses calculators in order to assess how likely the computed test statistic value is under various hypotheses. The goal is to provide a consistent API that understands how you wish to perform your hypothesis test.
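
For reference, \(\tilde{q}_\mu\) is the profile likelihood ratio test statistic used for upper limits when the signal strength is bounded below by zero (following the usual asymptotic-formulae conventions):

\[
\tilde{q}_\mu = \begin{cases}
-2\ln\frac{L(\mu,\hat{\hat{\theta}}(\mu))}{L(0,\hat{\hat{\theta}}(0))} & \hat{\mu} < 0 \\
-2\ln\frac{L(\mu,\hat{\hat{\theta}}(\mu))}{L(\hat{\mu},\hat{\theta})} & 0 \leq \hat{\mu} \leq \mu \\
0 & \hat{\mu} > \mu
\end{cases}
\]

where \(\hat{\hat{\theta}}(\mu)\) is the conditional maximum likelihood estimate of the nuisance parameters at fixed \(\mu\).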

Let’s look at the asymptotics calculator and then do the same thing for the toy-based one.

Asymptotics#

First, let’s create the calculator for asymptotics using the \(\tilde{q}_\mu\) test statistic.

[4]:
asymp_calc = pyhf.infer.calculators.AsymptoticCalculator(
    data, model, test_stat='qtilde'
)

Now from this, we want to perform the fit and compute the value of the test statistic from which we can get our \(p\)-values:

[5]:
teststat = asymp_calc.teststatistic(poi_test=1.0)
print(f'qtilde = {teststat}')
qtilde = 0.0

(Note that the asymptotics calculator reports the test statistic relative to its value on the background-only Asimov dataset, effectively \(\sqrt{\tilde{q}_\mu} - \sqrt{\tilde{q}_{\mu,A}}\). For this model and data the observed and Asimov values coincide, so the result is exactly zero, whereas the raw \(\tilde{q}_\mu\) computed in the toy-based section below is not.)

In addition to this, we can ask the calculator for the distributions of the test statistic for the background-only and signal+background hypotheses:

[6]:
sb_dist, b_dist = asymp_calc.distributions(poi_test=1.0)

From these distributions, we can ask for the \(p\)-value of the test statistic and use this to calculate the \(\mathrm{CL}_\mathrm{s}\) — a “modified” \(p\)-value.
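
Spelled out, this is just the ratio of the two tail probabilities:

\[
\mathrm{CL}_\mathrm{s} = \frac{\mathrm{CL}_{s+b}}{\mathrm{CL}_{b}} = \frac{p_{s+b}}{p_{b}}
\]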

[7]:
p_sb = sb_dist.pvalue(teststat)
p_b = b_dist.pvalue(teststat)
p_s = p_sb / p_b

print(f'CL_sb = {p_sb}')
print(f'CL_b = {p_b}')
print(f'CL_s = CL_sb / CL_b = {p_s}')
CL_sb = 0.08389430261678055
CL_b = 0.5
CL_s = CL_sb / CL_b = 0.1677886052335611

In a similar way, we can compute the expected \(\mathrm{CL}_\mathrm{s}\) values as well. We need to get the expected value of the test statistic at each \(\pm\sigma\) band and then ask for the expected \(p\)-value associated with each of those test statistic values.

[8]:
teststat_expected = [b_dist.expected_value(i) for i in [2, 1, 0, -1, -2]]
p_expected = [sb_dist.pvalue(t) / b_dist.pvalue(t) for t in teststat_expected]
p_expected
[8]:
[0.01596890401598493,
 0.05465770873260968,
 0.1677886052335611,
 0.4186346709326618,
 0.7496413276864433]

However, these steps are somewhat tedious and, depending on the calculator chosen, may differ slightly in the details. The calculator API therefore also serves to harmonize the extraction of the observed \(p\)-values:

[9]:
p_sb, p_b, p_s = asymp_calc.pvalues(teststat, sb_dist, b_dist)

print(f'CL_sb = {p_sb}')
print(f'CL_b = {p_b}')
print(f'CL_s = CL_sb / CL_b = {p_s}')
CL_sb = 0.08389430261678055
CL_b = 0.5
CL_s = CL_sb / CL_b = 0.1677886052335611

and the expected \(p\)-values:

[10]:
p_exp_sb, p_exp_b, p_exp_s = asymp_calc.expected_pvalues(sb_dist, b_dist)

print(f'exp. CL_sb = {p_exp_sb}')
print(f'exp. CL_b = {p_exp_b}')
print(f'exp. CL_s = CL_sb / CL_b = {p_exp_s}')
exp. CL_sb = [array(0.00036329), array(0.00867173), array(0.0838943), array(0.35221608), array(0.73258689)]
exp. CL_b = [array(0.02275013), array(0.15865525), array(0.5), array(0.84134475), array(0.97724987)]
exp. CL_s = CL_sb / CL_b = [array(0.0159689), array(0.05465771), array(0.16778861), array(0.41863467), array(0.74964133)]
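
As a quick cross-check, under the asymptotic approximation the expected background-only \(p\)-values are simply standard normal tail probabilities at each \(\pm N\sigma\). A minimal sketch using scipy (which pyhf already depends on):

from scipy.stats import norm

# expected CL_b at +2, +1, 0, -1, -2 sigma is the standard normal
# survival function evaluated at N
expected_clb = [norm.sf(n) for n in [2, 1, 0, -1, -2]]
print(expected_clb)  # compare with exp. CL_b printed above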

Toy-Based#

The calculator API abstracts away most of the differences between the various strategies, so that you get back the same kinds of results regardless of whether you perform asymptotic or toy-based testing. It hopefully delivers a simple but powerful API for you!

Let’s create a toy-based calculator and “throw” 500 toys.

[11]:
toy_calc = pyhf.infer.calculators.ToyCalculator(
    data, model, test_stat='qtilde', ntoys=500
)

Like before, we’ll ask for the test statistic. Unlike the asymptotics case, where we compute the Asimov dataset and perform a series of fits, here we are just evaluating the test statistic for the observed data.

[12]:
teststat = toy_calc.teststatistic(poi_test=1.0)
print(f'qtilde = {teststat}')
qtilde = 1.902590865638981

This is just the observed \(\tilde{q}_\mu\), which we can verify by computing the test statistic directly:
[13]:
inits = model.config.suggested_init()
bounds = model.config.suggested_bounds()
fixeds = model.config.suggested_fixed()
pyhf.infer.test_statistics.qmu_tilde(1.0, data, model, inits, bounds, fixeds)
[13]:
array(1.90259087)

So now the next thing to do is get our distributions. This is where, in the case of toys, we fit each and every single toy that we’ve randomly sampled from our model.

Note, again, that the API for the calculator is the same as in the asymptotics case.

[14]:
sb_dist, b_dist = toy_calc.distributions(poi_test=1.0)
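
Unlike the asymptotic case, these are empirical distributions built from the toy results. If you want to inspect the raw test statistic values, they are stored on pyhf’s EmpiricalDistribution objects (as the samples attribute, assuming a recent pyhf version and the default NumPy backend):

# one test statistic value per toy, so 500 entries in each distribution
print(len(sb_dist.samples), len(b_dist.samples))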

From these distributions, we can ask for the \(p\)-value of the test statistic and use this to calculate the \(\mathrm{CL}_\mathrm{s}\).

[15]:
p_sb, p_b, p_s = toy_calc.pvalues(teststat, sb_dist, b_dist)

print(f'CL_sb = {p_sb}')
print(f'CL_b = {p_b}')
print(f'CL_s = CL_sb / CL_b = {p_s}')
CL_sb = 0.084
CL_b = 0.52
CL_s = CL_sb / CL_b = 0.16153846153846155

In a similar way, we can compute the expected \(\mathrm{CL}_\mathrm{s}\) values as well. We need to get the expected value of the test statistic at each \(\pm\sigma\) band and then ask for the expected \(p\)-value associated with each of those test statistic values.

[16]:
p_exp_sb, p_exp_b, p_exp_s = toy_calc.expected_pvalues(sb_dist, b_dist)

print(f'exp. CL_sb = {p_exp_sb}')
print(f'exp. CL_b = {p_exp_b}')
print(f'exp. CL_s = CL_sb / CL_b = {p_exp_s}')
exp. CL_sb = [array(0.), array(0.008), array(0.084), array(0.318), array(1.)]
exp. CL_b = [array(0.02540926), array(0.17), array(0.52), array(0.846), array(1.)]
exp. CL_s = CL_sb / CL_b = [array(0.), array(0.04594333), array(0.16153846), array(0.37939698), array(1.)]
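
Finally, note that all of the toy-based pieces above can also be driven from the high-level API: pyhf.infer.hypotest accepts a calctype argument and forwards calculator options such as ntoys. A minimal sketch (the toys are re-thrown, so the numbers will fluctuate slightly relative to the values above):

# one-call toy-based hypothesis test via the high-level API
CLs_obs, CLs_exp = pyhf.infer.hypotest(
    1.0, data, model, test_stat='qtilde', calctype='toybased', ntoys=500,
    return_expected_set=True,
)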