Basics
You will learn basic usage of iminuit and how to approach standard fitting problems with iminuit.
iminuit is a Python frontend to the Minuit2 library in C++, an integrated software that combines a local minimizer (called MIGRAD) and two error calculators (called HESSE and MINOS). You provide it an analytical function, which accepts one or several parameters, and an initial guess of the parameter values. It will then find a local minimum of this function starting from the initial guess. In that regard, the iminuit minimizer is like other local minimizers, such as those in scipy.optimize.
In addition, iminuit has the ability to compute uncertainty estimates for model parameters. iminuit was designed to solve statistics problems, where uncertainty estimates are an essential part of the result. The two ways of computing uncertainty estimates, HESSE and MINOS, have different advantages and disadvantages.
iminuit is the successor of pyminuit. If you used pyminuit before, you will find iminuit very familiar. An important feature of iminuit (and pyminuit) is that it uses introspection to detect the parameter names of your function. This is very convenient, especially when you work interactively in a Jupyter notebook. It also provides special output routines for Jupyter notebooks to pretty-print the fit results, as you will see below.
[1]:
# basic setup of the notebook
%config InlineBackend.figure_formats = ['svg']
from matplotlib import pyplot as plt
import numpy as np
# everything in iminuit is done through the Minuit object, so we import it
from iminuit import Minuit
# we also need a cost function to fit and import the LeastSquares function
from iminuit.cost import LeastSquares
# display iminuit version
import iminuit
print("iminuit version:", iminuit.__version__)
iminuit version: 2.30.2
Quick start
In this first section, we look at a simple case where a line should be fitted to scattered \((x, y)\) data. A line has two parameters \((\alpha, \beta)\). We go through the full fit, showing all basic steps to get you started quickly. In the following sections we will revisit the steps in more detail.
[2]:
# our line model, unicode parameter names are supported :)
def line(x, α, β):
return α + x * β
# generate random toy data with random offsets in y
rng = np.random.default_rng(1)
data_x = np.linspace(0, 1, 10)
data_yerr = 0.1 # could also be an array
data_y = rng.normal(line(data_x, 1, 2), data_yerr)
To recover the parameters α and β of the line model from this data, we need to minimize a suitable cost function. The cost function must be twice differentiable and have a minimum at the optimal parameters. We use the method of least-squares here, whose cost function computes the sum of squared residuals between the model and the data. The task of iminuit is to find the minimum of that function.
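To make this concrete (standard notation, not specific to iminuit): for data points \((x_i, y_i)\) with uncertainties \(\sigma_i\) and a model \(f(x; \alpha, \beta)\), the least-squares cost is
\[
\chi^2(\alpha, \beta) = \sum_i \frac{\big(y_i - f(x_i; \alpha, \beta)\big)^2}{\sigma_i^2},
\]
and the best-fit parameters are those which minimize \(\chi^2\).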
The iminuit module provides the LeastSquares class to conveniently generate a least-squares cost function. We will revisit how to write one by hand in a later section. Using a built-in cost function comes with some perks, for example, the fit (if data are 1D) is automatically visualized in a Jupyter notebook.
[3]:
least_squares = LeastSquares(data_x, data_y, data_yerr, line)
m = Minuit(least_squares, α=0, β=0) # starting values for α and β
m.migrad() # finds minimum of least_squares function
m.hesse() # accurately computes uncertainties
[3]:
Migrad | |
---|---|
FCN = 3.959 (χ²/ndof = 0.5) | Nfcn = 46 |
EDM = 3.65e-21 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | |||||
1 | β | 2.0 | 0.1 |
α | β | |
---|---|---|
α | 0.00345 | -0.0049 (-0.843) |
β | -0.0049 (-0.843) | 0.00982 |
And that is all for a basic fit. The fit result is immediately visible here in the notebook, since calls to m.migrad() and m.hesse() return the Minuit object, which then automatically renders its state in a Jupyter notebook.
The automatically generated plot of the fitted function is intentionally very basic. You can make a nicer plot by hand with matplotlib.
[4]:
# draw data and fitted line
plt.errorbar(data_x, data_y, data_yerr, fmt="ok", label="data")
plt.plot(data_x, line(data_x, *m.values), label="fit")
# display legend with some fit info
fit_info = [
f"$\\chi^2$/$n_\\mathrm{{dof}}$ = {m.fval:.1f} / {m.ndof:.0f} = {m.fmin.reduced_chi2:.1f}",
]
for p, v, e in zip(m.parameters, m.values, m.errors):
fit_info.append(f"{p} = ${v:.3f} \\pm {e:.3f}$")
plt.legend(title="\n".join(fit_info), frameon=False)
plt.xlabel("x")
plt.ylabel("y");
In the following, we dive into the details step by step: how the Minuit object is initialized, how to run the algorithms, and how to get the results.
iminuit was designed to make it easy to fit cost functions like least_squares(...), where the parameters are individual arguments of the function. There is an alternative function signature that Minuit supports, which is more convenient when you explore models that have a not-yet-defined number of parameters, for example, a polynomial. Here, the parameters are passed as a NumPy array. We will discuss both in the following, but focus on the first.
Initialize the Minuit object
To minimize a function, one has to create an instance of the Minuit class and pass the function and a starting value for each parameter. This does not start the minimization yet; that comes later.
The Minuit object uses introspection to get the number and names of the function parameters automatically, so that they can be initialized with keywords.
[5]:
m = Minuit(least_squares, α=0, β=0)
If we forget a parameter or mistype them, Minuit will raise an error.
[6]:
try:
Minuit(least_squares)
except RuntimeError:
import traceback
traceback.print_exc()
Traceback (most recent call last):
File "/tmp/ipykernel_3163/3025208821.py", line 2, in <module>
Minuit(least_squares)
File "/home/runner/work/iminuit/iminuit/src/iminuit/minuit.py", line 648, in __init__
raise RuntimeError(
RuntimeError: starting value(s) are required for [α β]
[7]:
try:
Minuit(least_squares, a=0, b=0)
except RuntimeError:
import traceback
traceback.print_exc()
Traceback (most recent call last):
File "/tmp/ipykernel_3163/2981874672.py", line 2, in <module>
Minuit(least_squares, a=0, b=0)
File "/home/runner/work/iminuit/iminuit/src/iminuit/minuit.py", line 683, in __init__
self._init_state = _make_init_state(self._pos2var, start, kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/iminuit/iminuit/src/iminuit/minuit.py", line 2611, in _make_init_state
raise RuntimeError(
RuntimeError: a is not one of the parameters [α β]
Initial parameter values
The main algorithm MIGRAD is a local minimizer. It searches for a local minimum by doing a mix of Newton steps and gradient descents from a starting point. If your function has several minima, the minimum found will depend on the starting point. Even if it has only one minimum, iminuit will converge to it faster if you start in the proximity of the minimum.
You can set the starting point using the parameter names as keywords, <name> = <value>.
[8]:
Minuit(least_squares, α=5, β=5) # pass starting values for α and β
[8]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 5.00 | 0.05 | |||||
1 | β | 5.00 | 0.05 |
Alternatively, the starting values can also be passed as positional arguments.
[9]:
Minuit(least_squares, 5, 5) # another way of passing starting values for α and β
[9]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 5.00 | 0.05 | |||||
1 | β | 5.00 | 0.05 |
You can also use iminuit with functions that accept NumPy arrays. This has pros and cons.
Pros
Easy to change number of fitted parameters
Sometimes simpler function body that’s easier to read
Technically this is more efficient, but this is hardly going to be noticeable
Cons
iminuit cannot figure out names for each parameter
To demonstrate this, we use a version of the line model which accepts the parameters as a NumPy array.
[10]:
def line_np(x, par):
return np.polyval(par, x) # for len(par) == 2, this is a line
Calling line_np with a shorter or longer parameter array is easy and will use a polynomial of the corresponding order to predict the behavior of the data.
The built-in cost functions support such a model. For the parameters to be detected properly, you need to pass the starting values in the form of a single sequence of numbers.
[11]:
least_squares_np = LeastSquares(data_x, data_y, data_yerr, line_np)
Minuit(least_squares_np, (5, 5)) # pass starting values as a sequence
[11]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x0 | 5.00 | 0.05 | |||||
1 | x1 | 5.00 | 0.05 |
Any sequence will work for initialization; you can also pass a list or a NumPy array here. iminuit uses the length of the sequence to detect how many parameters the model has. By default, the parameters are named automatically x0 to xN. One can override this with the keyword name, passing a sequence of parameter names.
[12]:
Minuit(least_squares_np, (5, 5), name=("a", "b"))
[12]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | a | 5.00 | 0.05 | |||||
1 | b | 5.00 | 0.05 |
Since least_squares_np works for parameter arrays of any length, one can easily change the number of fitted parameters.
[13]:
# fit a third-order (cubic) polynomial with four parameters
Minuit(least_squares_np, (5, 5, 5, 5))
[13]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x0 | 5.00 | 0.05 | |||||
1 | x1 | 5.00 | 0.05 | |||||
2 | x2 | 5.00 | 0.05 | |||||
3 | x3 | 5.00 | 0.05 |
It is often useful to try different orders of a polynomial model. If the order is too small, the polynomial will not follow the data. If it is too large, it will overfit the data and pick up random fluctuations and not the underlying trend. One can figure out the right order by experimenting or using an algorithm like cross-validation.
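As a rough sketch of such an experiment (a quick comparison of the reduced chi2, not a full cross-validation; it reuses data_x, data_y, data_yerr, line_np, LeastSquares, and Minuit from above):
# compare polynomial models with 2, 3, and 4 parameters via the reduced chi2
for npar in (2, 3, 4):
    m_poly = Minuit(LeastSquares(data_x, data_y, data_yerr, line_np), np.zeros(npar))
    m_poly.migrad()
    print(f"{npar} parameters: reduced chi2 = {m_poly.fmin.reduced_chi2:.2f}")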
Inspecting current parameters
You can check the current parameter values and settings with Minuit.params at any time. It returns a special list of Param objects which pretty-prints in Jupyter and in the terminal.
[14]:
m.params
[14]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 0.0 | 0.1 | |||||
1 | β | 0.0 | 0.1 |
This produces a nice table with numbers rounded according to the rules of the Particle Data Group. The table will be updated once you run the actual minimization. To look at the initial conditions later, use Minuit.init_params. We will come back to the meaning of Hesse Error and Minos Error later.
Minuit.params returns a tuple-like container of Param objects, which are data objects with attributes that one can query. Use repr() to get a detailed representation of the data object.
[15]:
for p in m.params:
print(repr(p), "\n")
Param(number=0, name='α', value=0.0, error=0.1, merror=None, is_const=False, is_fixed=False, lower_limit=None, upper_limit=None)
Param(number=1, name='β', value=0.0, error=0.1, merror=None, is_const=False, is_fixed=False, lower_limit=None, upper_limit=None)
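As noted above, Minuit.init_params keeps the starting configuration even after a fit. A minimal self-contained illustration (it uses a fresh instance, m_demo, so the m used throughout this tutorial is left untouched):
m_demo = Minuit(least_squares, α=0, β=0)
m_demo.migrad()
print(m_demo.init_params)  # still shows the starting configuration
print(m_demo.params)       # shows the fitted parameters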
Parameters with limits
iminuit allows you to set parameter limits. Often a parameter is limited mathematically or physically to a certain range. For example, if your function contains sqrt(x), then \(x\) must be non-negative, \(x \ge 0\). You can set upper-, lower-, or two-sided limits for each parameter individually with the limits property.
Lower limit: use Minuit.limits[<name>] = (<value>, None) or (<value>, float("infinity"))
Upper limit: use Minuit.limits[<name>] = (None, <value>) or (-float("infinity"), <value>)
Two-sided limit: use Minuit.limits[<name>] = (<min_value>, <max_value>)
Remove limits: use Minuit.limits[<name>] = None or (-float("infinity"), float("infinity"))
You can also set limits for several parameters at once with a sequence. To impose the limits \(α \ge 0\) and \(0 \le β \le 10\) in our example, we use:
[16]:
m.limits = [(0, None), (0, 10)]
m.params
[16]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 0.0 | 0.1 | 0 | ||||
1 | β | 0.0 | 0.1 | 0 | 10 |
It is also possible for the cost function to declare limits on its parameters. For this you need the Annotated type, which is available in Python 3.9 or later, and from the package typing-extensions in Python 3.8. The restrictions should be imported from the external package annotated-types. The built-in cost functions propagate such annotations of model parameters.
[17]:
# Annotated and Gt are imported from iminuit.typing here for universal compatibility,
# but users should in general import them from external packages `typing-extensions` and
# `annotated-types` to decouple models from the `iminuit` package
from iminuit.typing import Annotated, Gt
def line_with_positive_slope(x, slope: Annotated[float, Gt(0)], offset):
return slope * x + offset
lsq = LeastSquares(data_x, data_y, data_yerr, line_with_positive_slope)
Minuit(lsq, 1, 0)
[17]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | slope | 1.00 | 0.01 | 0 | ||||
1 | offset | 0.0 | 0.1 |
You can reset a limit that was set automatically by such an annotation by calling minuit_instance.limits["slope"] = None before fitting, if you wish.
Fixing and releasing parameters
Sometimes you have a parameter which you want to set to a fixed value temporarily. Perhaps you have a guess for its value, and you want to see how the other parameters adapt when this parameter is fixed to that value.
Or you have a complex function with many parameters that do not all affect the function at the same scale. Then you can manually help the minimizer to find the minimum faster by first fixing the less important parameters to initial guesses and fit only the important parameters. Once the minimum is found under these conditions, you can release the fixed parameters and optimize all parameters together. Minuit remembers the last state of the minimization and starts from there. The minimization time roughly scales with the square of the number of parameters. Iterated minimization over subspaces of the parameters can reduce that time.
To fix an individual parameter, use minuit_instance.fixed[<name>] = True. In our example, we fix α:
[18]:
m.fixed["α"] = True
m.params
[18]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 0.0 | 0.1 | 0 | yes | |||
1 | β | 0.0 | 0.1 | 0 | 10 |
[19]:
# migrad will not vary α, only β
m.migrad()
[19]:
Migrad | |
---|---|
FCN = 307.5 (χ²/ndof = 34.2) | Nfcn = 41 |
EDM = 1.56e-07 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 0.0 | 0.1 | 0 | yes | |||
1 | β | 3.45 | 0.05 | 0 | 10 |
α | β | |
---|---|---|
α | 0 | 0.0000 |
β | 0.0000 | 0.00284 |
Now we release α, fix β, and minimize again. You can also use the parameter index instead of its name.
[20]:
m.fixed[0] = False
m.fixed[1] = True
m.migrad()
[20]:
Migrad | |
---|---|
FCN = 219.7 (χ²/ndof = 24.4) | Nfcn = 77 |
EDM = 3.91e-08 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 0.296 | 0.032 | 0 | ||||
1 | β | 3.45 | 0.05 | 0 | 10 | yes |
α | β | |
---|---|---|
α | 0.001 | 0e-3 |
β | 0e-3 | 0 |
We could iterate this and would slowly approach the minimum, but that’s silly; instead we release both parameters and run again. The array-like views support broadcasting to enable this shortcut notation:
[21]:
m.fixed = False
m.migrad()
[21]:
Migrad | |
---|---|
FCN = 3.959 (χ²/ndof = 0.5) | Nfcn = 127 |
EDM = 3.18e-07 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | 0 | ||||
1 | β | 2.0 | 0.1 | 0 | 10 |
α | β | |
---|---|---|
α | 0.00345 | -0.0049 (-0.843) |
β | -0.0049 (-0.843) | 0.00982 |
It is also possible to fix a parameter and set a value with one convenient call, using Minuit.fixto.
[22]:
m.fixto("α", 3)
m.params
[22]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 3.00 | 0.06 | 0 | yes | |||
1 | β | 2.0 | 0.1 | 0 | 10 |
Varying starting points for minimization
It is sometimes useful to change the values of some fixed parameters by hand and fit the others or to restart the fit from another starting point. For example, if the cost function has several minima, changing the starting value can be used to find the other minimum.
[23]:
def cost_function_with_two_minima(x):
return x**4 - x**2 + 1
x = np.linspace(-1.5, 1.5)
plt.plot(x, cost_function_with_two_minima(x));
[24]:
# starting at -0.1 gives the left minimum
m = Minuit(cost_function_with_two_minima, x=-0.1)
m.migrad()
print("starting value -0.1, minimum at", m.values["x"])
# changing the starting value to 0.1 gives the right minimum
m.values["x"] = 0.1 # m.values[0] = 0.1 also works
m.migrad()
print("starting value +0.1, minimum at", m.values["x"])
starting value -0.1, minimum at -0.7085906080341975
starting value +0.1, minimum at 0.708796091342642
Advanced: Simplex and Scan minimizers
iminuit also offers two other minimizers which are less powerful than MIGRAD, but may be useful in special cases.
SIMPLEX
The Nelder-Mead method (aka SIMPLEX) is well described on Wikipedia. It is a gradient-free minimization method that usually converges more slowly, but may be more robust. For some problems it can help to start the minimization with SIMPLEX and then finish with MIGRAD. Since the default stopping criterion for SIMPLEX is much more lax than MIGRAD's, either running MIGRAD after SIMPLEX or reducing the tolerance with Minuit.tol is strongly recommended.
[25]:
Minuit(cost_function_with_two_minima, x=10).simplex()
[25]:
Simplex | |
---|---|
FCN = 0.7501 | Nfcn = 23 |
EDM = 0.0176 (Goal: 0.1) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse not run | NO covariance |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x | 0.7 | 0.8 |
Let’s run MIGRAD after SIMPLEX to finish the minimization.
[26]:
Minuit(cost_function_with_two_minima, x=10).simplex().migrad()
[26]:
Migrad | |
---|---|
FCN = 0.75 | Nfcn = 36 |
EDM = 1.12e-08 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x | 0.7 | 0.7 |
x | |
---|---|
x | 0.5 |
This combination used slightly fewer function evaluations and produced a more accurate result than running MIGRAD alone in this case (for another problem this may not be true).
[27]:
Minuit(cost_function_with_two_minima, x=10).migrad()
[27]:
Migrad | |
---|---|
FCN = 0.75 | Nfcn = 38 |
EDM = 4.38e-06 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x | 0.7 | 0.7 |
x | |
---|---|
x | 0.497 |
Scan
Scan is a last resort. It does an N-dimensional grid scan over the parameter space. The number of function evaluations scales like \(n^k\), where \(k\) is the number of parameters and \(n\) the number of steps along one dimension. Using scan for high-dimensional problems is infeasible, but it can be useful in low-dimensional problems and when all but a few parameters are fixed. The scan needs bounds, which are best set with Minuit.limits. The number of scan points is set with the ncall keyword.
[28]:
m = Minuit(cost_function_with_two_minima, x=10)
m.limits = (-10, 10)
m.scan(ncall=50)
[28]:
Scan | |
---|---|
FCN = 0.7657 | Nfcn = 55 |
EDM = 0.0188 (Goal: 0.1) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse not run | NO covariance |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x | -0.6 | 0.9 | -10 | 10 |
The scan brought us into the proximity of the minimum.
In this case, the minimum is considered valid, because the EDM value is smaller than the EDM goal, but the scan may also end up in an invalid minimum, which is also ok. The scan minimizes the cost function using a finite number of steps, regardless of the EDM value (which is only computed after the scan for the minimum).
One should always run MIGRAD or SIMPLEX after a SCAN.
[29]:
m.migrad()
[29]:
Migrad | |
---|---|
FCN = 0.75 | Nfcn = 69 |
EDM = 1.92e-05 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x | -0.7 | 0.7 | -10 | 10 |
x | |
---|---|
x | 0.494 |
Advanced: Errordef
If you do not use one of the cost functions from the iminuit.cost module, you may need to pass an additional parameter to Minuit.
Minuit by default assumes that the function scales like a chi-square function when one of the parameters is moved away from the minimum. If your cost function is constructed as a log-likelihood, it scales differently, and you must indicate that to Minuit with the errordef parameter. Setting this is not needed for the cost functions in iminuit.cost.
The errordef parameter is required to compute correct uncertainties. If you don't care about uncertainty estimates (but why are you using Minuit then?), you can ignore it. Minuit supports two kinds of cost functions, the negative log-likelihood and the least-squares function. Each has a corresponding value for errordef:
0.5 or the constant Minuit.LIKELIHOOD for negative log-likelihood functions
1 or the constant Minuit.LEAST_SQUARES for least-squares functions (the default)
If you want to understand the origin of these numbers, have a look at the study Hesse and Minos, which explains in depth how uncertainties are computed.
For our custom cost function, we could set m.errordef=1 or m.errordef=Minuit.LEAST_SQUARES, which is more readable.
[30]:
# a simple least-squares cost function looks like this...
def custom_least_squares(a, b):
ym = line(data_x, a, b)
z = (data_y - ym) / data_yerr
return np.sum(z**2)
m = Minuit(custom_least_squares, 1, 2)
m.migrad() # standard errordef, correct in this case
[30]:
Migrad | |
---|---|
FCN = 3.959 | Nfcn = 32 |
EDM = 2.46e-22 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | a | 1.02 | 0.06 | |||||
1 | b | 2.0 | 0.1 |
a | b | |
---|---|---|
a | 0.00345 | -0.0049 (-0.843) |
b | -0.0049 (-0.843) | 0.00982 |
[31]:
m.errordef = Minuit.LIKELIHOOD # errordef for negative log-likelihoods, wrong here
m.migrad()
[31]:
Migrad | |
---|---|
FCN = 3.959 | Nfcn = 42 |
EDM = 9.54e-23 (Goal: 0.0001) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | a | 1.02 | 0.04 | |||||
1 | b | 2.00 | 0.07 |
a | b | |
---|---|---|
a | 0.00173 | -0.0025 (-0.843) |
b | -0.0025 (-0.843) | 0.00491 |
The reported errors are now smaller by a factor of sqrt(2) than they should be.
An even better way is to add an attribute called errordef to the cost function. If such an attribute is present, Minuit uses it. Since this cost function has the default scaling, we do not need to set anything, but keep it in mind for negative log-likelihoods.
[32]:
# artificial cost function that scales like a negative log-likelihood
def custom_least_squares_2(a, b):
return 0.5 * custom_least_squares(a, b)
# Instead of calling Minuit.errordef, we assign an errordef attribute to the cost
# function. Minuit will automatically use this value.
custom_least_squares_2.errordef = Minuit.LIKELIHOOD
m = Minuit(custom_least_squares_2, 1, 2)
m.migrad() # uses the correct errordef automatically
[32]:
Migrad | |
---|---|
FCN = 1.98 | Nfcn = 32 |
EDM = 1.27e-24 (Goal: 0.0001) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | a | 1.02 | 0.06 | |||||
1 | b | 2.0 | 0.1 |
a | b | |
---|---|---|
a | 0.00345 | -0.0049 (-0.843) |
b | -0.0049 (-0.843) | 0.00982 |
We get the correct errors. The built-in cost functions from the module iminuit.cost all define the errordef attribute, so you don't need to worry about that.
If the cost function defines the errordef, it should not be necessary to set it to another value, so Minuit warns you if you try to set it.
[33]:
# raises a warning
m.errordef = 1
/home/runner/work/iminuit/iminuit/src/iminuit/minuit.py:139: ErrordefAlreadySetWarning: cost function has an errordef attribute equal to 0.5, you should not override this with Minuit.errordef
warnings.warn(msg, ErrordefAlreadySetWarning)
Advanced: Initial step sizes
Minuit uses a gradient-descent method to find the minimum, and the gradient is computed numerically using finite differences. The initial step size is used to compute the first gradient. A good step size is small compared to the curvature of the function, but large compared to numerical resolution. Using a good step size can slightly accelerate the convergence, but Minuit is not very sensitive to the choice. If you don’t provide a value, iminuit will guess a step size based on a heuristic.
You can set initial step sizes with the errors property, Minuit.errors[<name>] = <step size>. Using an appropriate step size is important when you have a parameter with physical bounds. Varying the initial parameter value by the step size should not move the parameter outside its bounds. For example, a parameter \(x\) with \(x > 0\) and initial value \(0.1\) should not have a step size of \(0.2\).
In our example, we could use an initial step size of \(\Delta α = 0.1\) and \(\Delta β = 0.2\). Setting both can be done conveniently by assigning a sequence:
[34]:
m = Minuit(least_squares, α=5, β=5)
m.errors = (0.1, 0.2) # assigning sequences works
m.params
[34]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 5.0 | 0.1 | |||||
1 | β | 5.0 | 0.2 |
Broadcasting is also supported.
[35]:
m.errors = 0.3 # broadcasting
m.params
[35]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 5.0 | 0.3 | |||||
1 | β | 5.0 | 0.3 |
Only positive step sizes are allowed. Non-positive values are replaced with the heuristic and a warning is emitted.
[36]:
m.errors["β"] = -0.3
m.params
/home/runner/work/iminuit/iminuit/src/iminuit/util.py:177: IMinuitWarning: Assigned errors must be positive. Non-positive values are replaced by a heuristic.
warnings.warn(
[36]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 5.0 | 0.3 | |||||
1 | β | 5.000 | 0.003 |
Advanced: Override parameter name detection
iminuit tries hard to detect the parameter names correctly. It works for a large variety of cases. For example, if you pass a functor instead of a function, it will use the arguments of the __call__ method, automatically skipping self. It even tries to parse the docstring if all else fails.
You can check which parameter names iminuit finds for your function with the describe function.
[37]:
from iminuit import describe
def foo(x, y, z):
pass
assert describe(foo) == ["x", "y", "z"]
class Foo:
def __call__(self, a, b):
pass
assert describe(Foo()) == ["a", "b"]
Sometimes parameter names cannot be determined, for example, when a function accepts a variable number of arguments.
[38]:
def func_varargs(*args): # function with variable number of arguments
return np.sum((np.array(args) - 1) ** 2)
assert describe(func_varargs) == []
describe cannot detect the number and names of the parameters in this case and returns an empty list. If you often work with functions that accept a variable number of arguments, it is better to use a cost function which accepts a parameter array (this was discussed above).
When iminuit cannot detect the arguments, but you know how many there are, or if you simply want to override the names found by iminuit, you can do that with the keyword name, like so:
[39]:
Minuit(func_varargs, name=("a", "b"), a=1, b=2).migrad()
[39]:
Migrad | |
---|---|
FCN = 2.867e-19 | Nfcn = 24 |
EDM = 2.87e-19 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | a | 1 | 1 | |||||
1 | b | 1 | 1 |
a | b | |
---|---|---|
a | 1 | 0 |
b | 0 | 1 |
Alternative interface: iminuit.minimize
Those familiar with scipy may find the minimize function useful. It exactly mimics the function interface of scipy.optimize.minimize, but uses Minuit for the actual minimization. The scipy package must be installed to use it.
[40]:
from iminuit import minimize # has same interface as scipy.optimize.minimize
minimize(least_squares_np, (5, 5))
[40]:
message: Optimization terminated successfully.
success: True
fun: 3.959436273265028
x: [ 1.997e+00 1.024e+00]
hess_inv: ┌────┬─────────────────┐
│ │ x0 x1 │
├────┼─────────────────┤
│ x0 │ 0.00491 -0.0025 │
│ x1 │ -0.0025 0.00173 │
└────┴─────────────────┘
nfev: 32
njev: 0
minuit: ┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 3.959 │ Nfcn = 32 │
│ EDM = 4.78e-22 (Goal: 0.0001) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ Below EDM threshold (goal x 10) │
├──────────────────────────────────┼──────────────────────────────────────┤
│ No parameters at limit │ Below call limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Hesse ok │ Covariance accurate │
└──────────────────────────────────┴──────────────────────────────────────┘
┌───┬──────┬───────────┬───────────┬────────────┬────────────┬─────────┬─────────┬───────┐
│ │ Name │ Value │ Hesse Err │ Minos Err- │ Minos Err+ │ Limit- │ Limit+ │ Fixed │
├───┼──────┼───────────┼───────────┼────────────┼────────────┼─────────┼─────────┼───────┤
│ 0 │ x0 │ 2.00 │ 0.07 │ │ │ │ │ │
│ 1 │ x1 │ 1.02 │ 0.04 │ │ │ │ │ │
└───┴──────┴───────────┴───────────┴────────────┴────────────┴─────────┴─────────┴───────┘
┌────┬─────────────────┐
│ │ x0 x1 │
├────┼─────────────────┤
│ x0 │ 0.00491 -0.0025 │
│ x1 │ -0.0025 0.00173 │
└────┴─────────────────┘
This interface is handy if you want to be able to switch between iminuit and scipy.optimize.minimize, but we recommend the standard interface instead. It is an advantage of Minuit that you can interact with it and manually steer the minimization process. This is not as convenient with a functional interface like minimize.
Investigating the fit status
Calling Minuit.migrad() runs the actual minimization with the MIGRAD algorithm. MIGRAD essentially tries a Newton step and, if that does not produce a smaller function value, a line search along the direction of the gradient. So far, so ordinary. The clever bits in MIGRAD are how various pathological cases are handled.
Let's look again at the output of Minuit.migrad().
[41]:
m = Minuit(least_squares, α=5, β=5)
m.migrad()
[41]:
Migrad | |
---|---|
FCN = 3.959 (χ²/ndof = 0.5) | Nfcn = 30 |
EDM = 1.4e-22 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | |||||
1 | β | 2.0 | 0.1 |
α | β | |
---|---|---|
α | 0.00345 | -0.0049 (-0.843) |
β | -0.0049 (-0.843) | 0.00982 |
The Minuit.migrad method returns the Minuit instance so that one can chain method calls. The instance also pretty-prints the latest state of the minimization.
The first block in this output shows information about the function minimum. This is good for a quick check:
All blocks should be green.
Red means something bad.
Yellow may be bad or not. Be careful.
Let’s see how it looks when the function is bad.
[42]:
m_bad = Minuit(lambda x: 0, x=1) # a constant function has no minimum
m_bad.migrad()
[42]:
Migrad | |
---|---|
FCN = 0 | Nfcn = 216 |
EDM = nan (Goal: 0.0002) | |
INVALID Minimum | ABOVE EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse FAILED | Covariance NOT pos. def. |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | x | 1 | 0 |
Coming back to our previous good example, the info about the function minimum can be directly accessed with Minuit.fmin:
[43]:
m.fmin
[43]:
Migrad | |
---|---|
FCN = 3.959 (χ²/ndof = 0.5) | Nfcn = 30 |
EDM = 1.4e-22 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
[44]:
# print(repr(...)) to see a detailed representation of the data object
print(repr(m.fmin))
<FMin algorithm='Migrad' edm=1.4020311473299171e-22 edm_goal=0.0002 errordef=1.0 fval=3.959436273265028 has_accurate_covar=True has_covariance=True has_made_posdef_covar=False has_parameters_at_limit=False has_posdef_covar=True has_reached_call_limit=False has_valid_parameters=True hesse_failed=False is_above_max_edm=False is_valid=True nfcn=30 ngrad=0 reduced_chi2=0.4949295341581285 time=0.0004715679999662825>
The most important one here is is_valid. If this is false, the fit did not converge and the result is useless. Since this is so often queried, a shortcut is provided with Minuit.valid.
If the fit fails, there is usually a numerical or logical issue.
The fit function is not analytical everywhere in the parameter space or does not have a local minimum (the minimum may be at infinity, the extremum may be a saddle point or maximum). Indicators for this are is_above_max_edm=True, hesse_failed=True, has_posdef_covar=False, or has_made_posdef_covar=True. A non-analytical function is one with a discrete step, for example.
MIGRAD reached the call limit before convergence, so that has_reached_call_limit=True. The number of function calls is given by nfcn, and the call limit can be changed with the keyword argument ncall in the method Minuit.migrad. Note that nfcn can be slightly larger than ncall, because MIGRAD internally only checks this condition after a full iteration, in which several function calls can happen.
MIGRAD detects convergence by a small edm value, the estimated distance to minimum. This is the difference between the current minimum value of the minimized function and the prediction based on the current local quadratic approximation of the function (something that MIGRAD computes as part of its algorithm). If the fit did not converge, is_above_max_edm is true.
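For reference, and assuming the local quadratic model just described: if \(g\) is the gradient and \(H\) the Hessian at the current point, the predicted remaining decrease of the function, and hence the EDM, is approximately \(\tfrac{1}{2}\, g^\top H^{-1} g\).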
If you are interested in parameter uncertainties, you should make sure that:
has_covariance, has_accurate_covar, and has_posdef_covar are true.
has_made_posdef_covar and hesse_failed are false.
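A minimal defensive check after a fit, using the flags just described (a sketch; how you react to a failure depends on your application):
if not m.valid:
    raise RuntimeError(f"fit did not converge:\n{m.fmin}")
if m.fmin.hesse_failed or not (m.fmin.has_accurate_covar and m.fmin.has_posdef_covar):
    print("warning: covariance estimate may be unreliable")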
The second object of interest after the fit is the parameter list, which can be directly accessed with Minuit.params.
[45]:
m.params
[45]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | |||||
1 | β | 2.0 | 0.1 |
[46]:
for p in m.params:
print(repr(p))
Param(number=0, name='α', value=1.0240435955962377, error=0.05877538149803514, merror=None, is_const=False, is_fixed=False, lower_limit=None, upper_limit=None)
Param(number=1, name='β', value=1.996894456425372, error=0.09908673876519222, merror=None, is_const=False, is_fixed=False, lower_limit=None, upper_limit=None)
m.params is a tuple-like container of Param data objects which contain information about the fitted parameters. Important fields are:
number: parameter index.
name: parameter name.
value: value of the parameter at the minimum.
error: uncertainty estimate for the parameter value.
Whether the uncertainty estimate is accurate depends on the correct mathematical modelling of your fitting problem and using the right errordef value for Minuit. What do we mean by correct mathematical modelling? If you look into the function custom_least_squares(a, b), you see that each squared residual is divided by the expected variance of the residual. This is necessary to get accurate uncertainty estimates for the parameters.
Sometimes the expected variance of the residual is not well known. If the cost function to minimize satisfies certain conditions, there is a simple test to check whether the residual variances are ok. One should look at the function value at the minimum, given by Minuit.fmin.fval, and divide it by the so-called degrees of freedom, which is the difference between the number of residuals and the number of fitted parameters, and can be queried with the attribute Minuit.ndof. This is called the reduced chi2; it can be directly queried with Minuit.fmin.reduced_chi2.
The reduced chi2 is available for all built-in binned cost functions and the LeastSquares cost function in iminuit.cost. It cannot be automatically provided for unbinned cost functions, since that requires binning the data, which has to be defined by the user. For unbinned cost functions, you can still compute a reduced chi2 yourself, but it is not possible to do so automatically. Querying Minuit.fmin.reduced_chi2 is safe; it either returns a valid value or nan if the chi2 cannot be computed automatically for the current cost function.
[47]:
f"𝜒²/ndof = {m.fval:.2f} / {m.ndof} = {m.fmin.reduced_chi2:.2f}"
[47]:
'𝜒²/ndof = 3.96 / 8.0 = 0.49'
This value should be around 1, and the more data points one has, the closer it should be to 1. If the value is much larger than 1, then the data variance is underestimated or the model does not describe the data. If the value is much smaller than 1, then the data variance is overestimated (perhaps because of positive correlations between the fluctuations of the data values).
The last block shows the covariance matrix. This is useful to check for large correlations, which are usually a sign of trouble.
[48]:
m.covariance
[48]:
α | β | |
---|---|---|
α | 0.00345 | -0.0049 (-0.843) |
β | -0.0049 (-0.843) | 0.00982 |
We will discuss this matrix in more detail in the next section.
Parameter uncertainties, covariance, and confidence intervals/regions
You saw how to get the uncertainty of each individual parameter and how to access the full covariance matrix of all parameters together, which includes the correlations. Correlations are essential additional information if you want to work with parameter uncertainties seriously.
Minuit offers two ways to compute the parameter uncertainties, Hesse and Minos. Both have pros and cons.
Hesse for covariance and correlation matrices
The Hesse algorithm numerically computes the matrix of second derivatives at the function minimum (called the Hesse matrix) and inverts it. The Hesse matrix is symmetric by construction. In the limit of infinite data samples to fit, the result of this computation converges to the true covariance matrix of the parameters. It is often a good and sometimes even an unbiased estimate for finite samples. The errors obtained from this method are sometimes called parabolic errors, because the Hesse matrix method is exact if the function is a hyperparabola (third and higher-order derivatives are all zero). The errors are also by construction symmetric in the positive and negative directions.
Pros
(Comparably) fast computation.
Provides covariance matrix for error propagation.
Provides symmetric errors which are easy to work with.
Cons
May not have good coverage probability when sample size is small.
The MIGRAD algorithm computes an approximation of the Hesse matrix automatically during minimization. When the default strategy is used, Minuit checks whether this approximation is sufficiently accurate and, if not, recomputes the Hesse matrix automatically.
All this happens inside the C++ Minuit2 code and is a bit opaque, so to be on the safe side, we recommend calling Minuit.hesse explicitly after the minimization if exact errors are important.
[49]:
# let's mess up the current errors a bit so that hesse has something to do
m.errors = (0.16, 0.2)
m.params
[49]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.16 | |||||
1 | β | 2.0 | 0.2 |
[50]:
m.hesse().params # note the change in "Hesse Error"
[50]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | |||||
1 | β | 2.0 | 0.1 |
Covariance and correlation matrix
To see the covariance matrix of the parameters, you do:
[51]:
m.covariance
[51]:
α | β | |
---|---|---|
α | 0.00345 | -0.0049 (-0.843) |
β | -0.0049 (-0.843) | 0.00982 |
The parameters α and β are strongly anti-correlated; the numerical value of the correlation is shown in parentheses. The correlation is also highlighted by the blue color of the off-diagonal elements.
[52]:
print(repr(m.covariance)) # use print(repr(...) to skip pretty printing
[[ 0.00345455 -0.0049091 ]
[-0.0049091 0.00981819]]
To get the correlation matrix, use:
[53]:
m.covariance.correlation() # returns a newly created correlation matrix
[53]:
α | β | |
---|---|---|
α | 1 | -0.8 |
β | -0.8 | 1 |
Nonzero correlation is not necessarily a bad thing, but if you have freedom in redefining the parameters of the fit function, it is good to choose parameters which are not strongly correlated.
Warning: Minuit cannot accurately minimize the function if two parameters are (almost) perfectly (anti-)correlated. It also means that one of the two parameters is superfluous; it doesn't add new information. You should rethink the cost function in this case and try to remove one of the parameters from the fit, either by fixing its value or by expressing it as a function of the other parameters.
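Coming back to the line fit above: one way to reduce the strong correlation between α and β is to reparametrize the model, for example by centering \(x\) at its mean. This is only a sketch (it reuses data_x, data_y, data_yerr, LeastSquares, and Minuit from above; note that α now refers to the model value at the mean of data_x, not to the intercept at \(x = 0\)):
def line_centered(x, α, β):
    return α + β * (x - np.mean(data_x))

m_centered = Minuit(LeastSquares(data_x, data_y, data_yerr, line_centered), α=0, β=0)
m_centered.migrad()
m_centered.covariance.correlation()  # off-diagonal elements are now close to zero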
Both matrices are subclasses of numpy.ndarray, so you can use them everywhere you would use a NumPy array. In addition, these matrices support value access via parameter names:
[54]:
m.covariance["α", "β"]
[54]:
np.float64(-0.004909095238434344)
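The covariance matrix is what you need for linear error propagation. As a sketch (it uses the line fit stored in m above; the Jacobian of \(\alpha + \beta x\) with respect to \((\alpha, \beta)\) is \((1, x)\)), one can draw an uncertainty band around the fitted line:
cov = np.array(m.covariance)
xs = np.linspace(0, 1, 100)
ys = line(xs, *m.values)
jac = np.stack([np.ones_like(xs), xs])            # shape (2, n)
y_var = np.einsum("in,ij,jn->n", jac, cov, jac)   # J^T C J for each x
plt.errorbar(data_x, data_y, data_yerr, fmt="ok", label="data")
plt.plot(xs, ys, label="fit")
plt.fill_between(xs, ys - y_var ** 0.5, ys + y_var ** 0.5, alpha=0.3, label="fit uncertainty")
plt.legend();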
MINOS for non-parabolic minima
Minuit has another algorithm to compute uncertainties: MINOS. It implements the so-called profile likelihood method, where the neighborhood around the function minimum is scanned until the contour is found where the function increases by the value of errordef. The contour defines a confidence region that covers the true parameter point with a certain probability. The probability is exactly known in the limit of infinitely large data samples, but only approximately for the finite case. Please consult a statistics textbook for the mathematical details or look at the tutorial "Error computation with HESSE and MINOS".
Pros
Produces asymmetric errors, which may better visualize the uncertainty in the parameter.
Produces pretty two-dimensional confidence regions for scientific plots.
Cons
Computationally expensive.
Asymmetric errors are difficult to error-propagate, see Barlow 2004.
MINOS is not automatically called during minimization; it needs to be called explicitly afterwards, like so:
[55]:
m.minos()
[55]:
External | |
---|---|
FCN = 3.959 (χ²/ndof = 0.5) | Nfcn = 85 |
EDM = 2.39e-21 (Goal: 0.0002) | |
Valid Minimum | Below EDM threshold (goal x 10) |
No parameters at limit | Below call limit |
Hesse ok | Covariance accurate |
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | -0.06 | 0.06 | |||
1 | β | 2.0 | 0.1 | -0.1 | 0.1 |
α | β | |||
---|---|---|---|---|
Error | -0.06 | 0.06 | -0.1 | 0.1 |
Valid | True | True | True | True |
At Limit | False | False | False | False |
Max FCN | False | False | False | False |
New Min | False | False | False | False |
α | β | |
---|---|---|
α | 0.00345 | -0.0049 (-0.843) |
β | -0.0049 (-0.843) | 0.00982 |
By now you are probably used to seeing green colors, which indicate that Minos ran successfully. Be careful when these are red instead; Minos can fail. The fields in the new Minos table mean the following:
Valid: Whether Minos considers the scan result valid.
At Limit: True if Minos hit a parameter limit before finishing the contour, which would be bad.
Max FCN: True if Minos reached the maximum number of allowed calls before finishing the contour, also bad.
New Min: True if Minos discovered a deeper local minimum in the neighborhood of the current one. Not necessarily bad, but should not happen.
The errors computed by Minos are now also shown in the parameter list.
[56]:
m.params
[56]:
Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed | |
---|---|---|---|---|---|---|---|---|
0 | α | 1.02 | 0.06 | -0.06 | 0.06 | |||
1 | β | 2.0 | 0.1 | -0.1 | 0.1 |
Note: If the absolute values of the Minos errors are very close to the Hesse Error, the function is well approximated by a hyperparabola around the minimum. You can use this as a check instead of explicitly plotting the function around the minimum (for which we provide tools, see below).
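A quick numerical version of this check (a sketch; it assumes m.minos() has been run as above, so that each parameter has a merror attribute):
for p in m.params:
    print(f"{p.name}: hesse = {p.error:.4f}, minos = {p.merror[0]:.4f} / +{p.merror[1]:.4f}")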
Coverage probability of intervals/regions constructed with Hesse and Minos algorithms
It is important to construct confidence intervals (and confidence regions in multiple dimensions) which have a well-defined coverage probability. How confidence intervals are correctly interpreted is explained on Wikipedia, in this Stackoverflow article, and in good introductory textbooks on statistics. The slightly unintuitive interpretation is the price to pay in the frequentist framework of statistics to avoid subjective priors.
Standard one-dimensional confidence intervals should have 68 % coverage probability. As previously mentioned, the coverage probability of the intervals constructed from the uncertainties reported by Hesse and Minos are not necessarily the standard 68 %.
Whether Hesse or Minos produces an interval with a coverage probability closer to the desired level in finite samples depends on the case. There are theoretical results which suggest that Hesse may be slightly better, but we also found special cases where Minos intervals performed better.
Some sources claim that Minos gives better coverage when the cost function is not parabolic around the minimum; that is not generally true, and in fact Hesse intervals may have better coverage.
As a rule of thumb, use Hesse as the default and try both algorithms if accurate coverage probability matters.
Quick access to fit results
You get the main fit results with properties and methods from the Minuit object. We used several of them already. Here is a summary:
[57]:
print(m.values) # array-like view of the parameter values
<ValueView α=1.0240435955962377 β=1.996894456425372>
[58]:
# access values by name or index
print("by name ", m.values["α"])
print("by index", m.values[0])
by name 1.0240435955962377
by index 1.0240435955962377
[59]:
# iterate over values
for key, value in zip(m.parameters, m.values):
print(f"{key} = {value}")
α = 1.0240435955962377
β = 1.996894456425372
[60]:
# slicing works
print(m.values[:1])
[1.0240435955962377]
[61]:
print(m.errors) # array-like view of symmetric uncertainties
<ErrorView α=0.058775400946791456 β=0.09908678011350225>
Minuit.errors supports the same access as Minuit.values.
[62]:
print(m.params) # parameter info (using str(m.params))
┌───┬──────┬───────────┬───────────┬────────────┬────────────┬─────────┬─────────┬───────┐
│ │ Name │ Value │ Hesse Err │ Minos Err- │ Minos Err+ │ Limit- │ Limit+ │ Fixed │
├───┼──────┼───────────┼───────────┼────────────┼────────────┼─────────┼─────────┼───────┤
│ 0 │ α │ 1.02 │ 0.06 │ -0.06 │ 0.06 │ │ │ │
│ 1 │ β │ 2.0 │ 0.1 │ -0.1 │ 0.1 │ │ │ │
└───┴──────┴───────────┴───────────┴────────────┴────────────┴─────────┴─────────┴───────┘
[63]:
print(repr(m.params)) # parameter info (using repr(m.params))
(Param(number=0, name='α', value=1.0240435955962377, error=0.058775400946791456, merror=(-0.058775381364708695, 0.05877538136434314), is_const=False, is_fixed=False, lower_limit=None, upper_limit=None), Param(number=1, name='β', value=1.996894456425372, error=0.09908678011350225, merror=(-0.09908673886165943, 0.09908673886108646), is_const=False, is_fixed=False, lower_limit=None, upper_limit=None))
[64]:
# asymmetric uncertainties (using str(m.merrors))
print(m.merrors)
┌──────────┬───────────────────────┬───────────────────────┐
│ │ α │ β │
├──────────┼───────────┬───────────┼───────────┬───────────┤
│ Error │ -0.06 │ 0.06 │ -0.1 │ 0.1 │
│ Valid │ True │ True │ True │ True │
│ At Limit │ False │ False │ False │ False │
│ Max FCN │ False │ False │ False │ False │
│ New Min │ False │ False │ False │ False │
└──────────┴───────────┴───────────┴───────────┴───────────┘
[65]:
# asymmetric uncertainties (using repr(m.merrors))
print(repr(m.merrors))
<MErrors
<MError number=0 name='α' lower=-0.058775381364708695 upper=0.05877538136434314 is_valid=True lower_valid=True upper_valid=True at_lower_limit=False at_upper_limit=False at_lower_max_fcn=False at_upper_max_fcn=False lower_new_min=False upper_new_min=False nfcn=16 min=1.0240435955962377>,
<MError number=1 name='β' lower=-0.09908673886165943 upper=0.09908673886108646 is_valid=True lower_valid=True upper_valid=True at_lower_limit=False at_upper_limit=False at_lower_max_fcn=False at_upper_max_fcn=False lower_new_min=False upper_new_min=False nfcn=16 min=1.996894456425372>
>
[66]:
print(m.covariance) # covariance matrix computed by Hesse (using str(m.covariance))
┌───┬─────────────────┐
│ │ α β │
├───┼─────────────────┤
│ α │ 0.00345 -0.0049 │
│ β │ -0.0049 0.00982 │
└───┴─────────────────┘
[67]:
print(
repr(m.covariance)
) # covariance matrix computed by Hesse (using repr(m.covariance))
[[ 0.00345455 -0.0049091 ]
[-0.0049091 0.00981819]]
As already mentioned, you can play around with iminuit by assigning new values to m.values and m.errors and then running m.migrad() again. The values will be used as a starting point.
Plotting
iminuit comes with built-in methods to draw the likelihood around the minimum. These can be used to draw confidence regions with a defined confidence level or for debugging the likelihood.
Drawing confidence regions
To get a generic overview, use the method Minuit.draw_mnmatrix. It shows scans over the likelihood where all parameters other than the ones scanned are minimized; in other words, it uses the Minos algorithm. The regions and intervals found in this way correspond to uncertainty intervals. It is also a great way to see whether the likelihood is sane around the minimum.
[68]:
# find the minimum again after messing around with the parameters
m.migrad()
# draw matrix of likelihood contours for all pairs of parameters at 1, 2, 3 sigma
m.draw_mnmatrix();
The diagonal cells show the 1D profile around each parameter. The points where the horizontal lines cross the profile correspond to confidence intervals with confidence level cl (a probability). The off-diagonal cells show confidence regions with confidence level cl. Asymptotically (in large samples), cl is equal to the probability that the region contains the true value. In finite samples, this is usually only approximately so.
For convenience, the drawing functions interpret cl >= 1 as the number of standard deviations, with a confidence level that corresponds to a standard normal distribution:
cl = 1: 68.3 %
cl = 2: 95.4 %
cl = 3: 99.7 %
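These percentages are the two-sided probabilities of a standard normal distribution for 1, 2, and 3 standard deviations; a quick way to reproduce them (it uses scipy, which was already needed above for iminuit.minimize):
from scipy.stats import norm
for n_sigma in (1, 2, 3):
    print(f"cl = {n_sigma}: {2 * norm.cdf(n_sigma) - 1:.1%}")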
Drawing all profiles and regions can be time-consuming. The following commands show how to draw only individual contours or profiles.
[69]:
# draw three confidence regions with 68%, 90%, 99% confidence level
m.draw_mncontour("α", "β", cl=(0.68, 0.9, 0.99));
[70]:
# get individual contours to plot them yourself
pts = m.mncontour("α", "β", cl=0.68, size=20)
x, y = np.transpose(pts)
plt.plot(x, y, "o-");
To make the contour look nicer, you can increase the size parameter, use the interpolated parameter to do cubic spline interpolation, or use the experimental algorithm.
[71]:
# draw original points
plt.plot(x, y, ".", label="size=20")
# draw interpolated points
pts2 = m.mncontour("α", "β", cl=0.68, size=20, interpolated=100)
x2, y2 = np.transpose(pts2)
plt.plot(x2, y2, label="size=20, interpolated")
# actual curve at higher resolution
pts = m.mncontour("α", "β", cl=0.68, size=100)
x3, y3 = np.transpose(pts)
plt.plot(x3, y3, "-", label="size=100")
plt.legend();
[72]:
# with experimental algorithm
pts = m.mncontour("α", "β", cl=0.68, size=50, experimental=True)
x4, y4 = np.transpose(pts)
plt.plot(x4, y4, "-", label="size=50 experimental");
The experimental algorithm takes more time but produces a smoother contour.
To draw the 1D profile, call Minuit.draw_mnprofile.
[73]:
m.draw_mnprofile("α");
[74]:
# or use this to plot the result of the scan yourself
a, fa, ok = m.mnprofile("α")
plt.plot(a, fa);
Likelihood debugging
mnmatrix, mnprofile, and mncontour do Minos scans. If you have trouble with Minos or with the minimization, you should check what the likelihood looks like where you are. The following functions perform no minimization; they just draw the likelihood function as it is at certain coordinates.
[75]:
# draw 1D scan over likelihood, the minimum value is subtracted by default
m.draw_profile("α");
[76]:
# or draw it yourself, the minimum value is not subtracted here
x, y = m.profile("α")
plt.plot(x, y);
[77]:
# draw 2D scan over likelihood
m.draw_contour("α", "β");
[78]:
# or use this to plot the result of the scan yourself
x, y, z = m.contour("α", "β", subtract_min=True)
cs = plt.contour(x, y, z, (1, 2, 3, 4)) # these are not sigmas, just the contour values
plt.clabel(cs);
Interactive fit
In Jupyter notebooks, it is possible to fit a model to data interactively by calling Minuit.interactive. This functionality requires optional extra packages. If they are not there, you will get a friendly error message telling you what you need to install.
[79]:
m.interactive()
[79]:
You can change the parameter values with the sliders. Clicking the "Fit" button runs Minuit.migrad with these as starting values.
Note: If you see this notebook on ReadTheDocs or otherwise statically rendered, changing the sliders won’t change the plot. This requires a running Jupyter kernel.
Interactive fits are useful to find starting values and to debug the fit. The following issues are easy to detect:
Starting values are way off.
You forgot to set limits on some parameters.
Some parameters are strongly correlated.
Your model is not analytical.
Strong correlations are caused when a change to one parameter can be almost perfectly undone by changing one or more other parameters. If the model suddenly jumps when you move the sliders, this may indicate that the model is not analytical, but also note that the sliders have finite resolution and the model curve is only drawn with finite resolution. Set tighter limits on the affected parameter or investigate the root cause with numerical experiments.
Minuit.interactive uses the visualize method on the cost function, if it is available. All built-in cost functions provide this method, but it only works for 1D distributions, since there is no obvious general way to visualize data-model agreement in higher dimensions. You can provide your own visualization, though; see the documentation of Minuit.interactive. This can also be useful to draw the model in more detail, for example, if you want to give different components in an additive model different colors (e.g. signal and background).