class Model:
Constructor: Model(minimize, max_gap, max_gap_abs, infeasibility_tol, ...)
Mixed-integer convex optimization model.
| Kind | Name | Description |
|---|---|---|
| Static Method | sum | Create a linear expression from a summation. |
| Method | __init__ | Optimization model constructor. |
| Method | add_linear_constr | Add a linear constraint to the model. |
| Method | add_nonlinear_constr | Add a nonlinear constraint to the model. |
| Method | add_objective_term | Add an objective term to the model. |
| Method | add_var | Add a decision variable to the model. |
| Method | add_var_tensor | Add a tensor of decision variables to the model. |
| Method | optimize | Optimize the model. |
| Method | reset | Reset the model. |
| Method | start | Set the starting solution or partial solution, provided as a tuple of (variable, value) pairs. |
| Method | var_by_name | Get a variable by name. |
| Method | var_value | Get the value of one or more decision variables corresponding to the best solution. |
| Instance Variable | infeasibility_tol | The maximum constraint violation permitted for a solution to be considered feasible. |
| Instance Variable | log_freq | The frequency with which logs are printed. |
| Instance Variable | max_gap | The maximum relative optimality gap allowed before the search is terminated. |
| Instance Variable | max_gap_abs | The maximum absolute optimality gap allowed before the search is terminated. |
| Instance Variable | minimize | Whether the objective should be minimized. If False, the objective will be maximized; note that in this case the objective must be concave, not convex. |
| Instance Variable | smoothing | The smoothing parameter used to update the query point. If None, the query point will not be updated. |
| Instance Variable | solver_name | The MIP solver to use. Valid options are 'CBC' and 'GUROBI'. Note that 'GUROBI' requires a license. |
| Instance Variable | step_size | The step size used to numerically evaluate gradients using the central finite difference method. Only used when a function for analytically computing the gradient is not provided. |
| Property | best_bound | Get the best bound. |
| Property | best_solution | Get the best solution (all variables). |
| Property | gap | Get the (relative) optimality gap. |
| Property | gap_abs | Get the absolute optimality gap. |
| Property | linear_constrs | Get the linear constraints of the model. |
| Property | nonlinear_constrs | Get the nonlinear constraints of the model. |
| Property | objective_terms | Get the objective terms of the model. |
| Property | objective_value | Get the objective value of the best solution. |
| Property | search_log | Get the search log. |
| Property | start | Get the starting solution or partial solution provided. |
| Property | status | Get the status of the model. |
| Static Method | _validate | Undocumented |
| Method | _validate | Undocumented |
| Instance Variable | _best_bound | Undocumented |
| Instance Variable | _best_solution | Undocumented |
| Instance Variable | _model | Undocumented |
| Instance Variable | _nonlinear_constrs | Undocumented |
| Instance Variable | _objective_terms | Undocumented |
| Instance Variable | _objective_value | Undocumented |
| Instance Variable | _search_log | Undocumented |
| Instance Variable | _start | Undocumented |
| Instance Variable | _status | Undocumented |
def __init__(self, minimize: bool = True, max_gap: float = 0.0001, max_gap_abs: float = 0.0001, infeasibility_tol: float = 0.0001, step_size: float = 1e-06, smoothing: Optional[float] = 0.5, solver_name: Optional[str] = 'CBC', log_freq: Optional[int] = 1):

Optimization model constructor.

| Parameters | |
|---|---|
| minimize: bool | Value for the minimize attribute. |
| max_gap: float | Value for the max_gap attribute. Must be positive. |
| max_gap_abs: float | Value for the max_gap_abs attribute. Must be positive. |
| infeasibility_tol: float | Value for the infeasibility_tol attribute. Must be positive. |
| step_size: float | Value for the step_size attribute. Must be positive. |
| smoothing: Optional[float] | Value for the smoothing attribute. If provided, must be in the range (0, 1). |
| solver_name: Optional[str] | Value for the solver_name attribute. |
| log_freq: Optional[int] | Value for the log_freq attribute. |
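A minimal construction sketch follows. The import path is a placeholder for whichever package defines Model, and the keyword values simply mirror the parameters documented above.

```python
# Hypothetical import path -- substitute the actual package that defines Model.
from mymodels import Model

model = Model(
    minimize=True,          # minimize a convex objective (the default)
    max_gap=1e-3,           # relative optimality gap at which the search stops
    max_gap_abs=1e-3,       # absolute optimality gap at which the search stops
    infeasibility_tol=1e-4, # maximum constraint violation tolerated
    step_size=1e-6,         # finite-difference step for numerical gradients
    smoothing=0.5,          # query-point smoothing, must lie in (0, 1)
    solver_name="CBC",      # or "GUROBI" (requires a license)
    log_freq=1,
)
```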
Add a linear constraint to the model.

| Parameters | |
|---|---|
| constraint: mip.LinExpr | The linear constraint. |
| name: str | The name of the constraint. |

| Returns | |
|---|---|
| mip.Constr | The constraint expression. |
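As a quick illustration (using the add_var and add_linear_constr names reconstructed in the summary table above), a linear budget constraint might be added like this:

```python
x = model.add_var(lb=0, ub=10)                # continuous variable
y = model.add_var(lb=0, ub=10, var_type="I")  # integer variable

# x and y behave like mip.Var objects, so ordinary arithmetic builds a mip.LinExpr.
budget = model.add_linear_constr(2 * x + 3 * y <= 12, name="budget")
```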
def add_nonlinear_constr(self, var: Var, func: Union[Func, FuncGrad], grad: Optional[Union[Grad, bool]] = None, name: str = '') -> ConvexTerm:

Add a nonlinear constraint to the model.

| Parameters | |
|---|---|
| var: Var | The variable(s) included in the term. This can be provided in the form of a single variable, an iterable of multiple variables or a variable tensor. |
| func: Union[Func, FuncGrad] | A function for computing the term's value. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array. |
| grad: Optional[Union[Grad, bool]] | A function for computing the term's gradient. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array. If None, then the gradient is approximated numerically using the central finite difference method. If grad is instead a Boolean and is True, then func is assumed to return a tuple where the first element is the function value and the second element is the gradient. This is useful when the gradient is expensive to compute. |
| name: str | The name of the constraint. |

| Returns | |
|---|---|
| ConvexTerm | The convex term representing the constraint. |
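For illustration (method name as reconstructed above, and assuming the common convention that the term is enforced as func(...) <= 0), a convex disc constraint on the variables x and y from the previous example could look like:

```python
# Constrain (x, y) to a disc of radius 2, i.e. x^2 + y^2 - 4 <= 0
# (assumes the constraint is interpreted as func(...) <= 0).
model.add_nonlinear_constr(
    var=(x, y),
    func=lambda x, y: x**2 + y**2 - 4,
    grad=lambda x, y: (2 * x, 2 * y),  # analytic gradient; omit to use finite differences
    name="disc",
)
```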
def add_objective_term(self, var: Var, func: Union[Func, FuncGrad], grad: Optional[Union[Grad, bool]] = None, name: str = '') -> ConvexTerm:

Add an objective term to the model.

| Parameters | |
|---|---|
| var: Var | The variable(s) included in the term. This can be provided in the form of a single variable, an iterable of multiple variables or a variable tensor. |
| func: Union[Func, FuncGrad] | A function for computing the term's value. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array. |
| grad: Optional[Union[Grad, bool]] | A function for computing the term's gradient. This function should accept one argument for each variable in var. If var is a variable tensor, then the function should accept a single array. If None, then the gradient is approximated numerically using the central finite difference method. If grad is instead a Boolean and is True, then func is assumed to return a tuple where the first element is the function value and the second element is the gradient. This is useful when the gradient is expensive to compute. |
| name: str | The name of the term. |

| Returns | |
|---|---|
| ConvexTerm | The objective term. |
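A sketch of an objective term whose value and gradient are returned together (names as reconstructed above; grad=True signals the combined return documented in the table):

```python
# Convex quadratic term over the variables x and y defined earlier.
# With grad=True, func returns a (value, gradient) tuple in one pass.
def quad_and_grad(x, y):
    value = (x - 1) ** 2 + (y - 2) ** 2
    gradient = (2 * (x - 1), 2 * (y - 2))
    return value, gradient

model.add_objective_term(var=(x, y), func=quad_and_grad, grad=True, name="quad")
```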
def add_var(self, lb: Optional[float] = None, ub: Optional[float] = None, var_type: str = mip.CONTINUOUS, name: str = '') -> mip.Var:

Add a decision variable to the model.

| Parameters | |
|---|---|
| lb: Optional[float] | The lower bound for the decision variable. Must be finite and less than the upper bound. Cannot be None if var_type is 'C' or 'I'. |
| ub: Optional[float] | The upper bound for the decision variable. Must be finite and greater than the lower bound. Cannot be None if var_type is 'C' or 'I'. |
| var_type: str | The variable type. Valid options are 'C' (continuous), 'I' (integer) and 'B' (binary). |
| name: str | The name of the decision variable. |

| Returns | |
|---|---|
| mip.Var | The decision variable. |
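A short sketch of the three variable types; per the table above, bounds are required for continuous and integer variables but not for binary ones:

```python
w = model.add_var(lb=0.0, ub=1.5, var_type="C", name="w")   # continuous
k = model.add_var(lb=0, ub=10, var_type="I", name="k")      # integer
flag = model.add_var(var_type="B", name="flag")             # binary; bounds implied
```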
def add_var_tensor(self, shape: tuple[int, ...], lb: Optional[float] = None, ub: Optional[float] = None, var_type: str = mip.CONTINUOUS, name: str = '') -> mip.LinExprTensor:

Add a tensor of decision variables to the model.

| Parameters | |
|---|---|
| shape: tuple[int, ...] | The shape of the tensor. |
| lb: Optional[float] | The lower bound for the decision variables. Must be finite and less than the upper bound. Cannot be None if var_type is 'C' or 'I'. |
| ub: Optional[float] | The upper bound for the decision variables. Must be finite and greater than the lower bound. Cannot be None if var_type is 'C' or 'I'. |
| var_type: str | The variable type. Valid options are 'C' (continuous), 'I' (integer) and 'B' (binary). |
| name: str | The name of the decision variables. |

| Returns | |
|---|---|
| mip.LinExprTensor | The tensor of decision variables. |
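A sketch combining a variable tensor with a tensor-valued convex term; because the variables are a tensor, the term function receives a single array, as documented above:

```python
import numpy as np

# A 4-element vector of continuous variables.
z = model.add_var_tensor(shape=(4,), lb=0.0, ub=1.0, name="z")

# func takes one array argument and returns the convex value ||z||^2.
model.add_objective_term(var=z, func=lambda arr: np.sum(arr ** 2))
```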
def optimize(self, max_iters: int = 100, max_iters_no_improvement: Optional[int] = None, max_seconds_per_iter: Optional[float] = None) -> mip.OptimizationStatus:

Optimize the model.

| Parameters | |
|---|---|
| max_iters: int | The maximum number of iterations to run the search for. |
| max_iters_no_improvement: Optional[int] | The maximum number of iterations to continue the search without improvement in the objective value, once a feasible solution has been found. If None, then the search will continue until max_iters regardless of lack of improvement in the objective value. |
| max_seconds_per_iter: Optional[float] | The maximum number of seconds the MIP solver is allowed to run for each iteration. If None, then the MIP solver will run until its convergence criteria are met. |

| Returns | |
|---|---|
| mip.OptimizationStatus | The status of the search. |
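A sketch of running the search and inspecting the outcome; the objective_value, gap and gap_abs property names follow the reconstructed summary table above:

```python
status = model.optimize(
    max_iters=50,
    max_iters_no_improvement=10,   # stop after 10 stagnant iterations once feasible
    max_seconds_per_iter=30.0,     # cap each MIP solve at 30 seconds
)

print(status)                    # a mip.OptimizationStatus member
print(model.objective_value)     # objective of the best solution found
print(model.gap, model.gap_abs)  # relative and absolute optimality gaps
```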
def var_value(self, x: Union[mip.Var, mip.LinExprTensor, str]) -> Union[float, np.ndarray]:

Get the value of one or more decision variables corresponding to the best solution.

| Parameters | |
|---|---|
| x: Union[mip.Var, mip.LinExprTensor, str] | The variable(s) to get the value of. This can be provided as a single variable, a tensor of variables or the name of a variable. |

| Returns | |
|---|---|
| Union[float, np.ndarray] | The value(s) of the variable(s). |
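For illustration (method name as reconstructed above), each of the documented input forms is shown with the variables created in the earlier examples:

```python
x_val = model.var_value(x)    # single variable  -> float
z_vals = model.var_value(z)   # variable tensor  -> np.ndarray
w_val = model.var_value("w")  # lookup by name   -> float
```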
Whether the objective should be minimized. If False, the objective will be maximized; note that in this case the objective must be concave, not convex.
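As a small sketch of the maximization case (reusing the reconstructed names and placeholder import from the earlier examples), a concave term such as the negative of a convex quadratic is valid:

```python
# Maximize the concave function -(x - 3)^2 over an integer variable.
max_model = Model(minimize=False)
xi = max_model.add_var(lb=0, ub=10, var_type="I")
max_model.add_objective_term(var=xi, func=lambda x: -(x - 3) ** 2)
max_model.optimize()
print(max_model.var_value(xi))  # the maximizer of the concave term is x = 3
```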
The smoothing parameter used to update the query point. If None, the query point will not be updated.
The step size used to numerically evaluate gradients using the central finite difference method. Only used when a function for analytically computing the gradient is not provided.
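To make the approximation concrete, a central-difference gradient with step h looks like the sketch below; this is the generic numerical recipe, not the library's internal code:

```python
import numpy as np

def central_difference_gradient(f, x: np.ndarray, h: float = 1e-6) -> np.ndarray:
    # Approximates each partial derivative as (f(x + h*e_i) - f(x - h*e_i)) / (2h).
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad
```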
Get the linear constraints of the model.
After the model is optimized, this will include the cuts added to the model.
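A quick way to see the cutting planes accumulated during the search (property name as reconstructed in the summary table above):

```python
# After optimize(), the cuts generated for the nonlinear terms appear
# alongside the user-added linear constraints.
for constr in model.linear_constrs:
    print(constr)
```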