minuit_optimizer#

class pyhf.optimize.opt_minuit.minuit_optimizer(*args, **kwargs)[source]#

Bases: pyhf.optimize.mixins.OptimizerMixin

Optimizer that minimizes via iminuit.Minuit.migrad().

__init__(*args, **kwargs)[source]#

Create iminuit.Minuit optimizer.

Note

errordef should be 1.0 for a least-squares cost function and 0.50 for a negative log-likelihood function — see MINUIT: Function Minimization and Error Analysis Reference Manual Section 7.1: Function normalization and ERROR DEF. This parameter is sometimes called UP in the MINUIT docs. pyhf minimizes the twice negative log-likelihood, -2 ln L, which scales like a least-squares statistic, so the default of 1.0 is the appropriate normalization.

Parameters:
  • errordef (float) – See the MINUIT docs. Default is 1.0.

  • steps (int) – Number of steps used to subdivide the parameter bounds when computing the initial step sizes. Default is 1000.

  • strategy (int) – See iminuit.Minuit.strategy. Default is None, which results in an iminuit.Minuit.strategy of either 0 or 1 from the evaluation of int(not pyhf.tensorlib.default_do_grad).

  • tolerance (float) – Tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
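
A minimal usage sketch follows; the model and data values are purely illustrative. The optimizer is typically activated through pyhf.set_backend(), which accepts an instance of this class (passing the string "minuit" instead selects it with default settings); the constructor arguments above then control the subsequent MIGRAD minimization.

Example

>>> import pyhf
>>> from pyhf.optimize.opt_minuit import minuit_optimizer
>>> pyhf.set_backend("numpy", minuit_optimizer(tolerance=0.1))
>>> model = pyhf.simplemodels.uncorrelated_background(
...     signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
... )
>>> data = [51, 48] + model.config.auxdata
>>> bestfit_pars = pyhf.infer.mle.fit(data, model)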

Attributes

name#
errordef#
steps#
strategy#
tolerance#
maxiter#
verbose#

Methods

_get_minimizer(objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False, par_names=None)[source]#
_minimize(minimizer, func, x0, do_grad=False, bounds=None, fixed_vals=None, options={})[source]#

Same signature as scipy.optimize.minimize().

Note: an additional minuit attribute is injected into the fitresult to give access to the underlying iminuit.Minuit minimizer.

Minimizer Options:
  • maxiter (int): Maximum number of iterations. Default is 100000.

  • strategy (int): See iminuit.Minuit.strategy. Default is to configure it in response to do_grad.

  • tolerance (float): Tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.

Returns:

the fit result

Return type:

fitresult (scipy.optimize.OptimizeResult)
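
Because the fitresult carries the injected minuit attribute, user code can reach the underlying iminuit.Minuit instance after a fit. A sketch, reusing the backend, model, and data from the example above; forwarding extra keyword arguments such as strategy through pyhf.infer.mle.fit() to the minimizer options is assumed here:

Example

>>> bestfit_pars, result = pyhf.infer.mle.fit(
...     data, model, return_result_obj=True, strategy=2
... )
>>> minuit = result.minuit  # the underlying iminuit.Minuit instance
>>> converged = minuit.fmin.is_valid  # MINUIT convergence status
>>> uncertainties = minuit.errors  # per-parameter parabolic errors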