Class Documentation

Mixed-integer convex optimization model.
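
A minimal end-to-end sketch of how the class might be used, based on the methods documented below (the import path is an assumption, not taken from this page):

    from halfspace import Model  # import path assumed

    model = Model()  # minimizes by default
    x = model.add_var(lb=-1, ub=1, name="x")
    y = model.add_var(lb=-1, ub=1, name="y")

    # Add a convex objective term; its gradient is approximated numerically.
    model.add_objective_term(var=[x, y], func=lambda u, v: (u - 0.25) ** 2 + (v + 0.5) ** 2)

    status = model.optimize()
    print(status, model.objective_value, model.var_value(x), model.var_value(y))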

Static Method sum Create a linear expression from a summation.
Method __init__ Optimization model constructor.
Method add_linear_constr Add a linear constraint to the model.
Method add_nonlinear_constr Add a nonlinear constraint to the model.
Method add_objective_term Add an objective term to the model.
Method add_var Add a decision variable to the model.
Method add_var_tensor Add a tensor of decision variables to the model.
Method optimize Optimize the model.
Method reset Reset the model.
Method start.setter Set the starting solution or partial solution, provided as a tuple of (variable, value) pairs.
Method var_by_name Get a variable by name.
Method var_value Get the value of one or more decision variables corresponding to the best solution.
Instance Variable infeasibility_tol The maximum constraint violation permitted for a solution to be considered feasible.
Instance Variable log_freq The frequency with which logs are printed.
Instance Variable max_gap The maximum relative optimality gap allowed before the search is terminated.
Instance Variable max_gap_abs The maximum absolute optimality gap allowed before the search is terminated.
Instance Variable minimize Whether the objective should be minimized. If False, the objective will be maximized; note that in this case the objective must be concave, not convex.
Instance Variable smoothing The smoothing parameter used to update the query point. If None, the query point will not be updated.
Instance Variable solver_name The MIP solver to use. Valid options are 'CBC' and 'GUROBI'. Note that 'GUROBI' requires a license.
Instance Variable step_size The step size used to numerically evaluate gradients using the central finite difference method. Only used when a function for analytically computing the gradient is not provided.
Property best_bound Get the best bound.
Property best_solution Get the best solution (all variables).
Property gap Get the (relative) optimality gap.
Property gap_abs Get the absolute optimality gap.
Property linear_constrs Get the linear constraints of the model.
Property nonlinear_constrs Get the nonlinear constraints of the model.
Property objective_terms Get the objective terms of the model.
Property objective_value Get the objective value of the best solution.
Property search_log Get the search log.
Property start Get the starting solution or partial solution provided.
Property status Get the status of the model.
Static Method _validate_bounds Validate the lower and upper bounds for a variable of the given type.
Method _validate_params Validate the model's parameter settings.
Instance Variable _best_bound Internal storage for the best_bound property.
Instance Variable _best_solution Internal storage for the best_solution property.
Instance Variable _model The underlying python-mip model.
Instance Variable _nonlinear_constrs Internal storage for the nonlinear_constrs property.
Instance Variable _objective_terms Internal storage for the objective_terms property.
Instance Variable _objective_value Internal storage for the objective_value property.
Instance Variable _search_log Internal storage for the search_log property.
Instance Variable _start Internal storage for the start property.
Instance Variable _status Internal storage for the status property.
@staticmethod
def sum(terms: Iterable[Union[mip.Var, mip.LinExpr]]) -> mip.LinExpr: (source)

Create a linear expression from a summation.
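
An illustrative sketch (assumes a model already exists):

    # Sum tensor elements into a single linear expression.
    x = model.add_var_tensor(shape=(3,), lb=0, ub=1, name="x")
    total = Model.sum(x[i] for i in range(3))  # x[0] + x[1] + x[2]
    model.add_linear_constr(total <= 2, name="budget")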

def __init__(self, minimize: bool = True, max_gap: float = 0.0001, max_gap_abs: float = 0.0001, infeasibility_tol: float = 0.0001, step_size: float = 1e-06, smoothing: Optional[float] = 0.5, solver_name: Optional[str] = 'CBC', log_freq: Optional[int] = 1): (source)

Optimization model constructor.

Parameters
    minimize (bool): Value for the minimize attribute.
    max_gap (float): Value for the max_gap attribute. Must be positive.
    max_gap_abs (float): Value for the max_gap_abs attribute. Must be positive.
    infeasibility_tol (float): Value for the infeasibility_tol attribute. Must be positive.
    step_size (float): Value for the step_size attribute. Must be positive.
    smoothing (Optional[float]): Value for the smoothing attribute. If provided, must be in the range (0, 1).
    solver_name (Optional[str]): Value for the solver_name attribute.
    log_freq (Optional[int]): Value for the log_freq attribute.
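
For example, a model could be constructed as follows (values are illustrative; only parameters documented above are used):

    model = Model(
        minimize=True,      # minimize the (convex) objective
        max_gap=1e-4,       # relative optimality gap tolerance
        solver_name="CBC",  # "GUROBI" is also valid but requires a license
        log_freq=10,        # assumed to control how often progress is logged
    )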
def add_linear_constr(self, constraint: mip.LinExpr, name: str = '') -> mip.Constr: (source)

Add a linear constraint to the model.

Parameters
    constraint (mip.LinExpr): The linear constraint.
    name (str): The name of the constraint.
Returns
    mip.Constr: The constraint expression.
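
An illustrative sketch:

    x = model.add_var(lb=0, ub=10, name="x")
    y = model.add_var(lb=0, ub=10, name="y")
    constr = model.add_linear_constr(x + 2 * y <= 10, name="capacity")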
def add_nonlinear_constr(self, var: Var, func: Union[Func, FuncGrad], grad: Optional[Union[Grad, bool]] = None, name: str = '') -> ConvexTerm: (source)

Add a nonlinear constraint to the model.

Parameters
    var (Var): The variable(s) included in the term. This can be provided as a single variable, an iterable of multiple variables, or a variable tensor.
    func (Union[Func, FuncGrad]): A function for computing the term's value. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array.
    grad (Optional[Union[Grad, bool]]): A function for computing the term's gradient. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array. If None, the gradient is approximated numerically using the central finite difference method. If grad is instead a Boolean and is True, then func is assumed to return a tuple where the first element is the function value and the second element is the gradient. This is useful when the gradient is expensive to compute.
    name (str): The name of the constraint.
Returns
    ConvexTerm: The convex term representing the constraint.
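
An illustrative sketch, assuming the constraint is enforced as func(...) <= 0 (the sign convention is not stated on this page):

    # Keep (x, y) inside the unit disc; u**2 + v**2 - 1 is convex.
    x = model.add_var(lb=-1, ub=1, name="x")
    y = model.add_var(lb=-1, ub=1, name="y")
    disc = model.add_nonlinear_constr(
        var=[x, y],
        func=lambda u, v: u**2 + v**2 - 1,  # grad=None: gradient approximated numerically
        name="unit_disc",
    )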
def add_objective_term(self, var: Var, func: Union[Func, FuncGrad], grad: Optional[Union[Grad, bool]] = None, name: str = '') -> ConvexTerm: (source)

Add an objective term to the model.

Parameters
    var (Var): The variable(s) included in the term. This can be provided as a single variable, an iterable of multiple variables, or a variable tensor.
    func (Union[Func, FuncGrad]): A function for computing the term's value. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array.
    grad (Optional[Union[Grad, bool]]): A function for computing the term's gradient. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array. If None, the gradient is approximated numerically using the central finite difference method. If grad is instead a Boolean and is True, then func is assumed to return a tuple where the first element is the function value and the second element is the gradient. This is useful when the gradient is expensive to compute.
    name (str): The name of the term.
Returns
    ConvexTerm: The objective term.
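
A sketch of the grad=True pattern documented above, where func returns a (value, gradient) tuple; because var is a tensor here, func accepts a single array:

    import numpy as np

    x = model.add_var_tensor(shape=(3,), lb=0, ub=1, name="x")

    def func_and_grad(v: np.ndarray):
        # Convex quadratic ||v||^2 and its gradient 2v, returned together.
        return float(v @ v), 2 * v

    term = model.add_objective_term(var=x, func=func_and_grad, grad=True)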
def add_var(self, lb: Optional[float] = None, ub: Optional[float] = None, var_type: str = mip.CONTINUOUS, name: str = '') -> mip.Var: (source)

Add a decision variable to the model.

Parameters
    lb (Optional[float]): The lower bound for the decision variable. Must be finite and less than the upper bound. Cannot be None if var_type is 'C' or 'I'.
    ub (Optional[float]): The upper bound for the decision variable. Must be finite and greater than the lower bound. Cannot be None if var_type is 'C' or 'I'.
    var_type (str): The variable type. Valid options are 'C' (continuous), 'I' (integer) and 'B' (binary).
    name (str): The name of the decision variable.
Returns
    mip.Var: The decision variable.
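
For example:

    x = model.add_var(lb=0.0, ub=5.0, var_type="C", name="x")  # continuous
    n = model.add_var(lb=0, ub=10, var_type="I", name="n")     # integer
    b = model.add_var(var_type="B", name="b")                  # binary; bounds may be omitted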
def add_var_tensor(self, shape: tuple[int, ...], lb: Optional[float] = None, ub: Optional[float] = None, var_type: str = mip.CONTINUOUS, name: str = '') -> mip.LinExprTensor: (source)

Add a tensor of decision variables to the model.

Parameters
    shape (tuple[int, ...]): The shape of the tensor.
    lb (Optional[float]): The lower bound for the decision variables. Must be finite and less than the upper bound. Cannot be None if var_type is 'C' or 'I'.
    ub (Optional[float]): The upper bound for the decision variables. Must be finite and greater than the lower bound. Cannot be None if var_type is 'C' or 'I'.
    var_type (str): The variable type. Valid options are 'C' (continuous), 'I' (integer) and 'B' (binary).
    name (str): The base name for the decision variables in the tensor.
Returns
    mip.LinExprTensor: The tensor of decision variables.
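
For example:

    # A 2x3 tensor of continuous variables.
    x = model.add_var_tensor(shape=(2, 3), lb=0.0, ub=1.0, name="x")
    # Elements can be combined into linear expressions:
    model.add_linear_constr(Model.sum(x[0, j] for j in range(3)) <= 2, name="row_0_budget")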
def optimize(self, max_iters: int = 100, max_iters_no_improvement: Optional[int] = None, max_seconds_per_iter: Optional[float] = None) -> mip.OptimizationStatus: (source)

Optimize the model.

Parameters
    max_iters (int): The maximum number of iterations to run the search for.
    max_iters_no_improvement (Optional[int]): The maximum number of iterations to continue the search without improvement in the objective value, once a feasible solution has been found. If None, the search continues until max_iters is reached, regardless of lack of improvement in the objective value.
    max_seconds_per_iter (Optional[float]): The maximum number of seconds the MIP solver is allowed to run in each iteration. If None, the MIP solver runs until its convergence criteria are met.
Returns
    mip.OptimizationStatus: The status of the search.
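
An illustrative sketch; status values follow mip.OptimizationStatus:

    import mip

    status = model.optimize(max_iters=50, max_seconds_per_iter=10.0)
    if status == mip.OptimizationStatus.OPTIMAL:
        print(model.objective_value, model.gap)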
def reset(self): (source)

Reset the model.

@start.setter
def start(self, value: Start): (source)

Set the starting solution or partial solution, provided as a tuple of (variable, value) pairs.
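
For example (a warm start with values for some or all variables):

    x = model.add_var(lb=0, ub=1, name="x")
    y = model.add_var(lb=0, ub=1, name="y")
    model.start = ((x, 0.5), (y, 1.0))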

def var_by_name(self, name: str) -> mip.Var: (source)

Get a variable by name.

def var_value(self, x: Union[mip.Var, mip.LinExprTensor, str]) -> Union[float, np.ndarray]: (source)

Get the value of one or more decision variables corresponding to the best solution.

Parameters
    x (Union[mip.Var, mip.LinExprTensor, str]): The variable(s) to get the value of. This can be provided as a single variable, a tensor of variables, or the name of a variable.
Returns
    Union[float, np.ndarray]: The value(s) of the variable(s).
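
For example:

    value = model.var_value(x)            # single variable -> float
    values = model.var_value(x_tensor)    # variable tensor -> np.ndarray (x_tensor is illustrative)
    value_by_name = model.var_value("x")  # by name -> float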
infeasibility_tol = (source)

The maximum constraint violation permitted for a solution to be considered feasible.

log_freq = (source)

The frequency with which logs are printed.

max_gap = (source)

The maximum relative optimality gap allowed before the search is terminated.

max_gap_abs = (source)

The maximum absolute optimality gap allowed before the search is terminated.

minimize = (source)

Whether the objective should be minimized. If False, the objective will be maximized; note that in this case the objective must be concave, not convex.

smoothing = (source)

The smoothing parameter used to update the query point. If None, the query point will not be updated.

solver_name = (source)

The MIP solver to use. Valid options are 'CBC' and 'GUROBI'. Note that 'GUROBI' requires a license.

step_size = (source)

The step size used to numerically evaluate gradients using the central finite difference method. Only used when a function for analytically computing the gradient is not provided.

@property
best_bound: float = (source)

Get the best bound.

@property
best_solution: dict[mip.Var, float] = (source)

Get the best solution (all variables).

@property
gap: float = (source)

Get the (relative) optimality gap.

@property
gap_abs: float = (source)

Get the absolute optimality gap.

@property
linear_constrs: mip.ConstrList = (source)

Get the linear constraints of the model.

After the model is optimized, this will include the cuts added to the model.

@property
nonlinear_constrs: list[ConvexTerm] = (source)

Get the nonlinear constraints of the model.

@property
objective_terms: list[ConvexTerm] = (source)

Get the objective terms of the model.

@property
objective_value: float = (source)

Get the objective value of the best solution.

@property
search_log: pd.DataFrame = (source)

Get the search log.

@property
start: Start = (source)

Get the starting solution or partial solution provided.

@property
status: mip.OptimizationStatus = (source)

Get the status of the model.

@staticmethod
def _validate_bounds(lb: float, ub: float, var_type: str) -> tuple[float, float]: (source)

Validate the lower and upper bounds for a variable of the given type, returning them as a (lb, ub) tuple.

def _validate_params(self): (source)

Validate the model's parameter settings.

_best_bound: float = (source)

Internal storage for the best_bound property.

_best_solution: dict[mip.Var, float] = (source)

Internal storage for the best_solution property.

_model: mip.Model = (source)

The underlying python-mip model.

_nonlinear_constrs: list[ConvexTerm] = (source)

Internal storage for the nonlinear_constrs property.

_objective_terms: list[ConvexTerm] = (source)

Internal storage for the objective_terms property.

_objective_value: float = (source)

Internal storage for the objective_value property.

_search_log: list[dict[str, float]] = (source)

Internal storage for the search_log property.

_start: dict[mip.Var, float] = (source)

Internal storage for the start property.

_status: Optional[mip.OptimizationStatus] = (source)

Internal storage for the status property.