Acceleration with Numba
We explore how the computation of cost functions can be dramatically accelerated with numba’s JIT compiler.
The run-time of iminuit is usually dominated by the execution time of the cost function. To get good performance, it is recommended to use array arithmetic and numpy and scipy functions in the body of the cost function. Python loops should be avoided, but if they are unavoidable, Numba can help. Numba can also parallelize numerical calculations to make full use of multi-core CPUs and even do computations on the GPU.
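As a minimal illustration of the idea (a toy sketch with a made-up function name, not part of the fit developed below), a plain Python loop can be compiled and parallelized with the njit decorator and prange:
[ ]:
import numba as nb
import numpy as np

# toy example: numba compiles this explicit loop to machine code and,
# with parallel=True, distributes the iterations over CPU threads
@nb.njit(parallel=True)
def sum_of_squares(a):
    total = 0.0
    for i in nb.prange(len(a)):
        total += a[i] ** 2
    return total

sum_of_squares(np.arange(1e6))  # the first call triggers compilation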
Note: This tutorial shows how one can generate faster pdfs with Numba. Before you start to write your own pdf, please check whether one is already implemented in the numba_stats library. If you have a pdf that is not included there, please consider contributing it to numba_stats.
[ ]:
# !pip install matplotlib numpy numba scipy iminuit
%config InlineBackend.figure_formats = ['svg']
from iminuit import Minuit
import numpy as np
import numba as nb
import math
from scipy.stats import expon, norm
from matplotlib import pyplot as plt
from argparse import Namespace
The standard fit in particle physics is the fit of a peak over some smooth background. We generate a Gaussian peak over exponential background, using scipy.
[ ]:
np.random.seed(1) # fix seed
# true parameters for signal and background
truth = Namespace(n_sig=2000, f_bkg=10, sig=(5.0, 0.5), bkg=(0.0, 4.0))
n_bkg = truth.n_sig * truth.f_bkg
# make a data set
x = np.empty(truth.n_sig + n_bkg)
# fill m variables
x[: truth.n_sig] = norm(*truth.sig).rvs(truth.n_sig)
x[truth.n_sig :] = expon(*truth.bkg).rvs(n_bkg)
# cut a range in x
xrange = np.array((1.0, 9.0))
ma = (xrange[0] < x) & (x < xrange[1])
x = x[ma]
plt.hist(
    (x[truth.n_sig :], x[: truth.n_sig]),
    bins=50,
    stacked=True,
    label=("background", "signal"),
)
plt.xlabel("x")
plt.legend();
[ ]:
# ideal starting values for iminuit
start = np.array((truth.n_sig, n_bkg, truth.sig[0], truth.sig[1], truth.bkg[1]))
# iminuit instance factory, will be called a lot in the benchmarks below
def m_init(fcn):
    m = Minuit(fcn, start, name=("ns", "nb", "mu", "sigma", "lambd"))
    m.limits = ((0, None), (0, None), None, (0, None), (0, None))
    m.errordef = Minuit.LIKELIHOOD
    return m
[ ]:
# extended likelihood (https://doi.org/10.1016/0168-9002(90)91334-8)
# this version uses numpy and scipy and array arithmetic
def nll(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    s = norm(mu, sigma)
    b = expon(0, lambd)
    # normalisation factors are needed for pdfs, since x range is restricted
    sn = s.cdf(xrange)
    bn = b.cdf(xrange)
    sn = sn[1] - sn[0]
    bn = bn[1] - bn[0]
    return (n_sig + n_bkg) - np.sum(
        np.log(s.pdf(x) / sn * n_sig + b.pdf(x) / bn * n_bkg)
    )
nll(start)
np.float64(-103168.78482586428)
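For reference, in the notation of the code above, this cost function evaluates the extended unbinned negative log-likelihood (up to an additive constant that does not depend on the parameters):

$$-\ln L = (n_\mathrm{sig} + n_\mathrm{bkg}) - \sum_i \ln\bigl(n_\mathrm{sig}\, f_\mathrm{sig}(x_i) + n_\mathrm{bkg}\, f_\mathrm{bkg}(x_i)\bigr),$$

where $f_\mathrm{sig}$ and $f_\mathrm{bkg}$ are the signal and background pdfs normalised over the restricted x range (the divisions by sn and bn in the code).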
[ ]:
%%timeit -r 3 -n 1
m = m_init(nll) # setup time is negligible
m.migrad();
327 ms ± 66.2 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
Let’s see whether we can beat that. The code above is already pretty fast, because numpy and scipy routines are fast, and we spend most of the time in those. But these implementations do not parallelize the execution and are not optimised for this particular CPU, unlike numba-jitted functions.
To use numba, in theory we just need to put the njit decorator on top of the function, but often that does not work out of the box. numba understands many numpy functions, but not scipy. We must evaluate the code that uses scipy in 'object mode', which is numba-speak for calling into the Python interpreter.
[ ]:
# first attempt to use numba
@nb.njit(parallel=True)
def nll(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    with nb.objmode(spdf="float64[:]", bpdf="float64[:]", sn="float64", bn="float64"):
        s = norm(mu, sigma)
        b = expon(0, lambd)
        # normalisation factors are needed for pdfs, since x range is restricted
        sn = np.diff(s.cdf(xrange))[0]
        bn = np.diff(b.cdf(xrange))[0]
        spdf = s.pdf(x)
        bpdf = b.pdf(x)
    no = n_sig + n_bkg
    return no - np.sum(np.log(spdf / sn * n_sig + bpdf / bn * n_bkg))
nll(start) # test and warm-up JIT
OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
-103168.78482586429
[ ]:
%%timeit -r 3 -n 1 m = m_init(nll)
m.migrad()
432 ms ± 31.5 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
It is even a bit slower. :( Let’s break the original function down by parts to see why.
[ ]:
# let's time the body of the function
n_sig, n_bkg, mu, sigma, lambd = start
s = norm(mu, sigma)
b = expon(0, lambd)
# normalisation factors are needed for pdfs, since x range is restricted
sn = np.diff(s.cdf(xrange))[0]
bn = np.diff(b.cdf(xrange))[0]
spdf = s.pdf(x)
bpdf = b.pdf(x)
%timeit -r 3 -n 100 norm(*start[2:4]).pdf(x)
%timeit -r 3 -n 500 expon(0, start[4]).pdf(x)
%timeit -r 3 -n 1000 n_sig + n_bkg - np.sum(np.log(spdf / sn * n_sig + bpdf / bn * n_bkg))
1.75 ms ± 24 μs per loop (mean ± std. dev. of 3 runs, 100 loops each)
1.32 ms ± 92.1 μs per loop (mean ± std. dev. of 3 runs, 500 loops each)
154 μs ± 9.82 μs per loop (mean ± std. dev. of 3 runs, 1,000 loops each)
Most of the time is spent in norm and expon, which numba could not accelerate, and the total time is dominated by the slowest part.
This, unfortunately, means we have to do much more manual work to make the function faster, since we have to replace the scipy routines with Python code that numba can accelerate and run in parallel.
[ ]:
# when parallel is enabled, also enable associative math
kwd = {"parallel": True, "fastmath": {"reassoc", "contract", "arcp"}}
@nb.njit(**kwd)
def sum_log(fs, spdf, fb, bpdf):
    return np.sum(np.log(fs * spdf + fb * bpdf))

@nb.njit(**kwd)
def norm_pdf(x, mu, sigma):
    invs = 1.0 / sigma
    z = (x - mu) * invs
    invnorm = 1 / np.sqrt(2 * np.pi) * invs
    return np.exp(-0.5 * z**2) * invnorm

@nb.njit(**kwd)
def nb_erf(x):
    y = np.empty_like(x)
    for i in nb.prange(len(x)):
        y[i] = math.erf(x[i])
    return y

@nb.njit(**kwd)
def norm_cdf(x, mu, sigma):
    invs = 1.0 / (sigma * np.sqrt(2))
    z = (x - mu) * invs
    return 0.5 * (1 + nb_erf(z))

@nb.njit(**kwd)
def expon_pdf(x, lambd):
    inv_lambd = 1.0 / lambd
    return inv_lambd * np.exp(-inv_lambd * x)

@nb.njit(**kwd)
def expon_cdf(x, lambd):
    inv_lambd = 1.0 / lambd
    return 1.0 - np.exp(-inv_lambd * x)

def nll(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    # normalisation factors are needed for pdfs, since x range is restricted
    sn = norm_cdf(xrange, mu, sigma)
    bn = expon_cdf(xrange, lambd)
    sn = sn[1] - sn[0]
    bn = bn[1] - bn[0]
    spdf = norm_pdf(x, mu, sigma)
    bpdf = expon_pdf(x, lambd)
    no = n_sig + n_bkg
    return no - sum_log(n_sig / sn, spdf, n_bkg / bn, bpdf)
nll(start) # test and warm-up JIT
np.float64(-103168.78482586428)
Let’s see how well these versions do:
[ ]:
%timeit -r 5 -n 100 norm_pdf(x, *start[2:4])
%timeit -r 5 -n 500 expon_pdf(x, start[4])
%timeit -r 5 -n 1000 sum_log(n_sig / sn, spdf, n_bkg / bn, bpdf)
The slowest run took 10.86 times longer than the fastest. This could mean that an intermediate result is being cached.
191 μs ± 152 μs per loop (mean ± std. dev. of 5 runs, 100 loops each)
45.6 μs ± 11.4 μs per loop (mean ± std. dev. of 5 runs, 500 loops each)
64.2 μs ± 23.9 μs per loop (mean ± std. dev. of 5 runs, 1,000 loops each)
Only a minor improvement for sum_log, but the pdf calculation was drastically accelerated. Since this was the bottleneck before, we also expect Migrad to finish faster now.
[ ]:
%%timeit -r 3 -n 1
m = m_init(nll) # setup time is negligible
m.migrad();
55.5 ms ± 24.6 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
Success! We managed to get a big speed improvement over the initial code. This is impressive, but it cost us a lot of developer time. This is not always a good trade-off, especially if you consider that library routines are heavily tested, while you always need to test your own code in addition to writing it.
By putting these faster functions into a library, however, we would only have to pay the developer cost once. You can find those in the numba-stats library.
[ ]:
from numba_stats import norm, expon
%timeit -r 5 -n 100 norm.pdf(x, *start[2:4])
%timeit -r 5 -n 500 expon.pdf(x, 0, start[4])
151 μs ± 19.2 μs per loop (mean ± std. dev. of 5 runs, 100 loops each)
185 μs ± 31.9 μs per loop (mean ± std. dev. of 5 runs, 500 loops each)
The implementation of the normal pdf in numba-stats is even faster than our simple implementation here.
Try to compile the functions again with parallel=False to see how much of the speed increase came from the parallelization and how much from the generally optimized code that numba generated for our specific CPU. On my machine, the gain was entirely due to numba.
In general, it is good advice not to add parallel=True automatically, because it comes with the overhead of breaking data into chunks, copying the chunks to the individual CPU cores, and merging everything back together at the end. For large arrays this overhead is negligible, but for small arrays it can be a net loss.
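As a quick way to check this on your machine (a sketch, not part of the original benchmarks; norm_pdf_serial is just a name chosen here), one can compile the same pdf body without parallel=True and compare the timings:
[ ]:
# hypothetical comparison: same pdf body as norm_pdf, compiled without parallelization
@nb.njit(fastmath={"reassoc", "contract", "arcp"})
def norm_pdf_serial(x, mu, sigma):
    invs = 1.0 / sigma
    z = (x - mu) * invs
    invnorm = 1 / np.sqrt(2 * np.pi) * invs
    return np.exp(-0.5 * z**2) * invnorm

norm_pdf_serial(x, *start[2:4])  # warm-up JIT
%timeit -r 5 -n 100 norm_pdf(x, *start[2:4])
%timeit -r 5 -n 100 norm_pdf_serial(x, *start[2:4])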
So why is numba so fast even without parallelization? We can look at the assembly code that it generated.
[ ]:
for signature, code in norm_pdf.inspect_asm().items():
    print(
        f"signature: {signature}\n{'-'*(len(str(signature)) + 11)}\n{code[:1000]}\n[...]"
    )
signature: (Array(float64, 1, 'C', False, aligned=True), float64, float64)
--------------------------------------------------------------------------
.section __TEXT,__text,regular,pure_instructions
.build_version macos, 14, 0
.section __TEXT,__literal8,8byte_literals
.p2align 3
LCPI0_0:
.quad 0x3ff0000000000000
LCPI0_1:
.quad 0x3fd9884533d43651
.section __TEXT,__const
.p2align 5
LCPI0_2:
.quad 0
.quad 8
.quad 8
.quad 8
.section __TEXT,__text,regular,pure_instructions
.globl __ZN8__main__8norm_pdfB3v22B150c8tJTC_2fWQAliW1xhDEoY6EEMEUOEMISPGsAQMVj4QniQ4IXKQEMXwoMGLoQDDVsQR1NHAS2hQ9XgStYw86ABbYse0tXqiUXJBeo6CurJ_2bXklRYnJJSB2ETCRF_2bcnq9cC7QNGJsRqAA_3d_3dE5ArrayIdLi1E1C7mutable7alignedEdd
.p2align 4, 0x90
__ZN8__main__8norm_pdfB3v22B150c8tJTC_2fWQAliW1xhDEoY6EEMEUOEMISPGsAQMVj4QniQ4IXKQEMXwoMGLoQDDVsQR1NHAS2hQ9XgStYw86ABbYse0tXqiUXJBeo6CurJ_2bXklRYnJJSB2ETCRF_2bcnq9cC7QNGJsRqAA_3d_3dE5ArrayIdLi1E1C7mutable7alignedEdd:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
pushq %r15
.cfi_def_cfa_offset 24
pushq %r14
.cfi_def_cfa_offset 32
pushq %r13
.cfi_def_cfa_offset 40
pushq %r12
.cfi_def_cfa_offset 48
pushq %r
[...]
This code section is very long, but the assembly grammar is very simple. Lines starting with . are assembler directives (sections, alignment, constants), SOMETHING: is a jump label for the assembly equivalent of goto, and everything else is an instruction with its name on the left and its arguments on the right.
The interesting commands are the SIMD instructions, which operate on multiple values at once. This is where the speed comes from.
- If you are on the x86 platform, those instructions end with pd and ps.
- On aarch64, they contain a dot . and some letters/numbers afterwards.
[ ]:
from collections import Counter
for signature, code in norm_pdf.inspect_asm().items():
    print(f"signature: {signature}\n{'-'*(len(str(signature)) + 11)}")
    instructions = []
    for line in code.split("\n"):
        instr = line.strip().split("\t")[0]
        if instr.startswith("."):
            continue
        for match in ("add", "sub", "mul", "mov"):
            if match in instr:
                instructions.append(instr)
    c = Counter(instructions)
    print("Instructions")
    for k in sorted(c):
        print(f"{k:10}: {c[k]:5}")
signature: (Array(float64, 1, 'C', False, aligned=True), float64, float64)
--------------------------------------------------------------------------
Instructions
addq : 26
imulq : 10
movabsq : 105
movl : 23
movq : 305
movslq : 2
subq : 8
vmovapd : 17
vmovaps : 7
vmovsd : 28
vmovupd : 34
vmovups : 8
vmulpd : 19
vmulsd : 5
vsubpd : 10
vsubsd : 1
- add: add numbers
- sub: subtract numbers
- mul: multiply numbers
- mov: copy values from memory to CPU registers and back
You can google all the other commands.
There is a lot of repetition in the assembly code, because the optimizer unrolls loops over subsequences to make them faster. Using an unrolled loop only works if the remaining chunk of data is large enough. Since the compiler does not know the length of the incoming array, it generates sections which handle shorter chunks and all the code to select which section to use. Finally, there is some code which does the translation from and to Python objects with corresponding error handling.
We don’t need to write SIMD instructions by hand; the optimizer does it for us, and in a very sophisticated way.