{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Using Calculators\n", "\n", "One low-level functionality of `pyhf` when it comes to statistical fits is the idea of a calculator to evaluate with `asymptotics` or `toybased` hypothesis testing.\n", "\n", "This notebook will introduce very quickly what these calculators are meant to do and how they are used internally in the code. We'll set up a simple model for demonstration and then show how the calculators come into play." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pyhf\n", "\n", "np.random.seed(0)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "model = pyhf.simplemodels.uncorrelated_background([6], [9], [3])\n", "data = [9] + model.config.auxdata" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The high-level API\n", "\n", "If the only thing you are interested in is the hypothesis test result you can\n", "just run the high-level API to get it:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CLs_obs = 0.1677886052335611\n", "CLs_exp = [array(0.0159689), array(0.05465771), array(0.16778861), array(0.41863467), array(0.74964133)]\n" ] } ], "source": [ "CLs_obs, CLs_exp = pyhf.infer.hypotest(1.0, data, model, return_expected_set=True)\n", "print(f'CLs_obs = {CLs_obs}')\n", "print(f'CLs_exp = {CLs_exp}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The low-level API\n", "\n", "Under the hood, the hypothesis test computes *test statistics* (such as $q_\\mu, \\tilde{q}_\\mu$) and uses *calculators* in order to \n", "assess how likely the computed test statistic value is under various hypotheses. The goal is to provide a consistent API that understands how you wish to perform your hypothesis test.\n", "\n", "Let's look at the `asymptotics` calculator and then do the same thing for the `toybased`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Asymptotics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's create the calculator for asymptotics using the $\\tilde{q}_\\mu$ test statistic." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "asymp_calc = pyhf.infer.calculators.AsymptoticCalculator(\n", " data, model, test_stat='qtilde'\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now from this, we want to perform the fit and compute the value of the test statistic from which we can get our $p$-values:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "qtilde = 0.0\n" ] } ], "source": [ "teststat = asymp_calc.teststatistic(poi_test=1.0)\n", "print(f'qtilde = {teststat}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to this, we can ask the calculator for the distributions of the test statistic for the background-only and signal+background hypotheses:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "sb_dist, b_dist = asymp_calc.distributions(poi_test=1.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From these distributions, we can ask for the $p$-value of the test statistic and use this to calculate the $\\mathrm{CL}_\\mathrm{s}$ — a \"modified\" $p$-value." 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CL_sb = 0.08389430261678055\n", "CL_b = 0.5\n", "CL_s = CL_sb / CL_b = 0.1677886052335611\n" ] } ], "source": [ "p_sb = sb_dist.pvalue(teststat)\n", "p_b = b_dist.pvalue(teststat)\n", "p_s = p_sb / p_b\n", "\n", "print(f'CL_sb = {p_sb}')\n", "print(f'CL_b = {p_b}')\n", "print(f'CL_s = CL_sb / CL_b = {p_s}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a similar procedure, we can do the same thing for the expected $\\mathrm{CL}_\\mathrm{s}$ values as well. We need to get the expected value of the test statistic at each $\\pm\\sigma$ and then ask for the expected $p$-value associated with each value of the test statistic." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.01596890401598493,\n", " 0.05465770873260968,\n", " 0.1677886052335611,\n", " 0.4186346709326618,\n", " 0.7496413276864433]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "teststat_expected = [b_dist.expected_value(i) for i in [2, 1, 0, -1, -2]]\n", "p_expected = [sb_dist.pvalue(t) / b_dist.pvalue(t) for t in teststat_expected]\n", "p_expected" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, these sorts of steps are somewhat time-consuming and lengthy, and depending on the calculator chosen, may differ a little bit. The calculator API also serves to harmonize the extraction of the observed $p$-values:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CL_sb = 0.08389430261678055\n", "CL_b = 0.5\n", "CL_s = CL_sb / CL_b = 0.1677886052335611\n" ] } ], "source": [ "p_sb, p_b, p_s = asymp_calc.pvalues(teststat, sb_dist, b_dist)\n", "\n", "print(f'CL_sb = {p_sb}')\n", "print(f'CL_b = {p_b}')\n", "print(f'CL_s = CL_sb / CL_b = {p_s}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "and the expected $p$-values:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "exp. CL_sb = [array(0.00036329), array(0.00867173), array(0.0838943), array(0.35221608), array(0.73258689)]\n", "exp. CL_b = [array(0.02275013), array(0.15865525), array(0.5), array(0.84134475), array(0.97724987)]\n", "exp. CL_s = CL_sb / CL_b = [array(0.0159689), array(0.05465771), array(0.16778861), array(0.41863467), array(0.74964133)]\n" ] } ], "source": [ "p_exp_sb, p_exp_b, p_exp_s = asymp_calc.expected_pvalues(sb_dist, b_dist)\n", "\n", "print(f'exp. CL_sb = {p_exp_sb}')\n", "print(f'exp. CL_b = {p_exp_b}')\n", "print(f'exp. CL_s = CL_sb / CL_b = {p_exp_s}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Toy-Based\n", "\n", "The calculator API abstracts away a lot of the differences between various strategies, such that it returns what you want, regardless of whether you choose to perform asymptotics or toy-based testing. It hopefully delivers a simple but powerful API for you!\n", "\n", "Let's create a toy-based calculator and \"throw\" 500 toys." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "toy_calc = pyhf.infer.calculators.ToyCalculator(\n", " data, model, test_stat='qtilde', ntoys=500\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like before, we'll ask for the test statistic. 
Unlike the asymptotics case, where we compute the Asimov dataset and perform a series of fits, here we are just evaluating the test statistic for the observed data." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "qtilde = 1.902590865638981\n" ] } ], "source": [ "teststat = toy_calc.teststatistic(poi_test=1.0)\n", "print(f'qtilde = {teststat}')" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(1.90259087)" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "inits = model.config.suggested_init()\n", "bounds = model.config.suggested_bounds()\n", "fixeds = model.config.suggested_fixed()\n", "pyhf.infer.test_statistics.qmu_tilde(1.0, data, model, inits, bounds, fixeds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So now the next thing to do is get our distributions. This is where, in the case of toys, we fit each and every single toy that we've randomly sampled from our model.\n", "\n", "Note, again, that the API for the calculator is the same as in the asymptotics case." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ " \r" ] } ], "source": [ "sb_dist, b_dist = toy_calc.distributions(poi_test=1.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From these distributions, we can ask for the $p$-value of the test statistic and use this to calculate the $\\mathrm{CL}_\\mathrm{s}$." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CL_sb = 0.084\n", "CL_b = 0.52\n", "CL_s = CL_sb / CL_b = 0.16153846153846155\n" ] } ], "source": [ "p_sb, p_b, p_s = toy_calc.pvalues(teststat, sb_dist, b_dist)\n", "\n", "print(f'CL_sb = {p_sb}')\n", "print(f'CL_b = {p_b}')\n", "print(f'CL_s = CL_sb / CL_b = {p_s}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a similar procedure, we can do the same thing for the expected $\\mathrm{CL}_\\mathrm{s}$ values as well. We need to get the expected value of the test statistic at each $\\pm\\sigma$ and then ask for the expected $p$-value associated with each value of the test statistic." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "exp. CL_sb = [array(0.), array(0.008), array(0.084), array(0.318), array(1.)]\n", "exp. CL_b = [array(0.02540926), array(0.17), array(0.52), array(0.846), array(1.)]\n", "exp. CL_s = CL_sb / CL_b = [array(0.), array(0.04594333), array(0.16153846), array(0.37939698), array(1.)]\n" ] } ], "source": [ "p_exp_sb, p_exp_b, p_exp_s = toy_calc.expected_pvalues(sb_dist, b_dist)\n", "\n", "print(f'exp. CL_sb = {p_exp_sb}')\n", "print(f'exp. CL_b = {p_exp_b}')\n", "print(f'exp. CL_s = CL_sb / CL_b = {p_exp_s}')" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6" } }, "nbformat": 4, "nbformat_minor": 4 }