Optimization Status Functions


Optimization status functions provide information about the status, search progress, and results of an optimization. They are helpful for troubleshooting.

OptStatusNum(Opt) and OptStatusText(Opt)

Return the status of the optimization problem «Opt» as an integer status number and a corresponding text message, respectively. It is wise to examine the status before evaluating OptSolution() to avoid an error message. Possible results are shown in the table below, followed by a short usage sketch.

Status Number Status Text
-3 Invalid status.
-2 Ignore status. Used when dummy result code needs to be overridden.
-1 Invalid license status. (License expired, missing, invalid, etc.)
0 Optimal solution has been found.
1 The Solver has converged to the current solution.
2 “No remedies” status. (All remedies failed to find better point.)
3 Iterates limit reached. Indicates an early exit of the algorithm.
4 Optimizing an unbounded objective function.
5 Feasible solution could not be found.
6 Optimization aborted by user. Indicates an early exit of the algorithm.
7 Invalid linear model. Returned when a linearity assumption proves to be incorrect.
8 Bad data set status. Returned when a problem data set is found to be inconsistent.
9 Float error status. (Internal float error.)
10 Time out status. Returned when the maximum allowed time has been exceeded. Indicates an early exit of the algorithm.
11 Memory dearth status. Returned when the system cannot allocate enough memory to perform the optimization.
12 Interpretation error. (Parser, Diagnostics, or Executor error.)
13 Fatal API error. (API not responding.)
14 The Solver has found an integer solution within integer tolerance.
15 Branching and bounding node limit reached. Indicates an early exit of the algorithm.
16 Branching and bounding maximum number of incumbent points reached. Indicates an early exit of the algorithm.
17 Probable global optimum reached. Returned when MSL (Bayesian) global optimality test has been satisfied.
18 Missing bounds status. Returned for EV/MSL Require Bounds when bounds are missing.
19 Bounds conflict status. Indicates <=, >=, = bounds conflict with existing binary or all-different constraints.
20 Bounds inconsistency status. Returned when the lower bound value of a variable is greater than the upper bound value, i.e., lb[i] > ub[i] for some variable bound i.
21 Derivative error. Returned when API_Jacobian has not been able to compute gradients.
22 Cone overlap status. Returned when a variable appears in more than one cone.
999 Exception occurred status. Returned when an exception has been caught by try/catch top-level.
1000 Custom base status. (Base for Solver engine custom results.)
1102 The quadratic constraints are non-convex; the SOCP engine cannot solve this problem.
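
For example, here is a minimal sketch of how the status might be checked before retrieving results. It assumes an optimization node Opt defined with DefineOptimization() and a decision Dec from that problem (both names are placeholders); per the table above, statuses 0 and 14 indicate a usable solution.

If OptStatusNum(Opt) = 0 Or OptStatusNum(Opt) = 14
Then OptSolution(Opt, Dec)   { safe to retrieve the optimal values of Dec }
Else OptStatusText(Opt)      { otherwise report why the search stopped }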

OptInfo(Opt, Item, Decision, Constraint, asRef)

OptInfo() is a versatile function that can reveal any available details of the optimization. The most common information available through OptInfo() can also be viewed by evaluating the DefineOptimization() function and double-clicking the encoded object (e.g., «LP»). This opens a hierarchy of information and clickable reference objects that reveal finer levels of detail.

Parameters

«Opt»

Type: variable

Identifies the node defined using DefineOptimization().

«Item»

Type: text

The characteristic of the optimization you are interested in.

«Decision», «Constraint»

Type: variable, optional

Optional «Decision» and «Constraint» parameters can filter information to be relevant to individual decisions and constraints.

«asRef»

A Boolean value (0 or 1). If True (1), the result will be encoded in a clickable reference object.

The following table describes the information revealed by each «Item»; some items are relevant only to particular problem types and engines (LP, QP, QCP, NLP).

Item Description
"All"
Returns a comprehensive view inside the optimization, listing most items shown here. You can see the same information by double-clicking the Optimization object displayed when the DefineOptimization() function is evaluated: («LP», «NLP», etc.)
"DecisionVector"
Lists all scalar decision variables in a one-dimensional list.
"ConstraintVector"
Lists all scalar constraints in a one-dimensional list.
"Decisions"
Lists the names of each structured decision array in the optimization.
"Constraints"
Lists the names of each structured constraint array in the optimization.
"ObjCoef"
Lists scalar objective linear coefficients.
"Q"
Displays the matrix of coefficients in the quadratic objective matrix.
"Lhs"
Displays the matrix of linear constraint coefficients. You may optionally specify «constraint» to obtain the coefficients for just one constraint corresponding to all scalar decision variables. Or you may specify both «decision» and «constraint» to get the coefficients for one decision and one constraint.
"LhsQ"
Displays the matrix of quadratic constraint coefficients for QCP programs. Dense matrixes may be too large to fit in memory for some large QCPs. You may optionally specify «decision» and/or «constraint» to obtain the quadratic coefficients for just that structured decision and constraint.
"Rhs"
Displays right-hand side coefficients for all scalar constraints. You may optionally specify «constraint» to obtain the coefficients for just one constraint corresponding to all scalar decision variables. Or you may specify both «decision» and «constraint» to get the coefficients for one decision and one constraint. For a linear or quadratic constraint, the RHS will be the constant term. There is no guarantee of the sign, since it depends on how DefineOptimization() re-arranges the constraint when it processes the coefficients. For a non-linear constraint, Rhs will usually be 0. For a range constraint, e.g., a <= f(x) <= b, the far-right constant (b) is returned. It is better to use "ConstraintLb" and "ConstraintUb" for range constraints.
"ConstraintUb"
Upper bound for each scalar constraint. You may optionally specify «constraint» to obtain the values for a single structured constraint.
"ConstraintLb"
Lower bound for each scalar constraint. You may optionally specify «constraint» to obtain the values for a single structured constraint.
"Sense"
Shows the inequality operator for each scalar constraint ('<=', '>=', '=') or 'R' for Range (lb & ub). You may optionally specify «constraint» to obtain the values for a single structured constraint.
"Lb"
Lower bound for each scalar variable. You may optionally specify «decision» to obtain the value for a single decision array.
"Ub"
Upper bound for each scalar variable. You may optionally specify «decision» to obtain the value for a single decision array.
"IntegerType"
The type of all scalar decision variables. Optionally you may specify «decision» to get the integer type(s) for a single decision array. Possible values are: 'Continuous', 'Integer', 'Boolean', 'Grouped Integer', or 'Semi-Continuous'.
"Group"
Applies only to grouped integer variables; returns the group number for each scalar decision variable. You may optionally specify «decision» to obtain the groups for a single decision array.
"Maximize"
Indicates whether the optimization maximizes an objective (’TRUE’) or minimizes an objective (’FALSE’), or whether there is no objective at all for a constraints-only problem (Null)
"Engine"
Indicates the engine chosen by Analytica or by user override.
"Settings"
Displays a list of engine setting names and corresponding setting values.
"Type"
The problem type. This matches the object displayed when DefineOptimization() is evaluated: (’LP’, ’QP’, ’QCP’, ’CQCP’, ’NCQCP’, ’NLP’, ’NSP’)
"Intrinsic Indexes"
Displays a table of indexes of Decision and Constraint arrays that are intrinsic to the optimization.
"Extrinsic Indexes"
Displays a table of indexes of Decision and Constraint arrays that are abstracted, resulting in multiple optimizations. You may optionally specify either «decision» or «constraint» to get the extrinsic indexes for a single array. This option is very useful when an index is abstracted unexpectedly, since the Extrinsic Indexes display can point you to the source of the extra dimension.
"Decision IntrinsicIndexes"
Lists all decision arrays in the optimization, and for each, a set containing the indexes that are intrinsic to the decision variable. You may optionally select a single decision node by specifying «decision».
"Decision ExtrinsicIndexes"
Lists all decision arrays in the optimization, and for each, a set containing the indexes that are extrinsic to the decision variable. You may optionally select a single decision node by specifying «decision».
"Objective Dims"
Lists all dimensions of the Objective that warrant array abstraction, resulting in an array of optimizations. Since Parametric indexes of Decision variables are ignored by the optimization, they are not listed in the OptInfo() result even though they can be seen in the evaluation of the Objective array.
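
As an illustration, here is a sketch of typical OptInfo() queries. It assumes an optimization node Opt defined with DefineOptimization(), plus a decision X and a constraint C from that problem (all placeholder names).

OptInfo(Opt, "Engine")                           { engine selected for the problem }
OptInfo(Opt, "Lhs", Decision: X, Constraint: C)  { linear coefficients of X within constraint C }
OptInfo(Opt, "All", asRef: 1)                    { clickable reference object showing the full hierarchy }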

OptEngineInfo(Engine, Item, asRef)

The OptEngineInfo() function provides information about a specific solver engine, or about the solver engines that are currently installed and ready for use.

Parameters

«Engine»

Type: text

The name of a solver engine. The following are included with Analytica:

  • "Lp/Quadratic"
  • "SOCP Barrier"
  • "GRG Nonlinear"
  • "Evolutionary"

Various add-on engines can be purchased separately. These include:

  • "LSLP"
  • "LSGRG"
  • "LSSQP"
  • "Knitro"
  • "OptQuest"
  • "MOSEK"
  • "XPress"
  • "Gurobi"

The engine parameter can be specified as "All" to obtain the indicated information for every installed engine.

«Item»

Type: text

Item Type Description
"SettingNames" numeric Array of control setting names
"MaxSetting" numeric Upper bounds for setting
"MinSetting" numeric Lower bounds for setting
"Default" numeric Default value for setting
"EngineName" text The engine name (Null without error if engine not installed)
"DLL" text File path to solver engine’s DLL, "" for builtin engines
"TrialPeriod" numeric number of days until Frontline solver trial license expires
"ProblemTypes" boolean A list of the problem types handled by each engine
"MaxVars" numeric Maximum number of decision variables supported by engine
"MaxIntVars" numeric Maximum number of integer variables supported by engine
"MaxConstraints" numeric Maximum number of constraints supported by engine
"MaxVarBounds" numeric Maximum number of variable bounds supported by engine
"Milliseconds" numeric Time spent in computation
"Iterations" numeric Number of iterations engine has performed
"Calls" numeric Number of function evaluations that have occurred
"Jacobians" numeric Number of Jacobian evaluations that have occurred
"Hessians" numeric Number of Hessian evaluations that have occurred

OptShadow(Opt, Constraint, PassNonFeasible)

Note: The OptShadow() function applies only to LP and QP type problems with continuous decision variables and linear constraints.

If a constraint is relaxed, e.g., by increasing the right-hand side, bi, by one unit, how does this impact the objective function? This is referred to as the shadow price, or dual value, of the constraint. A shadow price is valid only for small changes in bi (the actual range for which it is valid can be obtained from the OptRhsSa() function), and is computed by the function

OptShadow(Opt)

where «Opt» is a linear program object returned by DefineOptimization(). The result is indexed by .ConstraintVector. Mathematically, the shadow price is given by this equation.

[math]\displaystyle{ Shadow_i=\frac{\partial Obj}{\partial b_i} }[/math]

This is the partial derivative of the objective function relative to the constraint RHS coefficient.

Warning: Not all linear programming packages use the same convention for the sign of shadow prices. In particular, the sign convention used by Analytica Optimizer differs from the one used by the LINDO package.
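
As a sketch, assuming «Opt» is a linear program returned by DefineOptimization() and a hypothetical RHS increase of 0.5 units per constraint, the shadow prices can be used to estimate the resulting change in the objective (valid only within the range reported by OptRhsSa()):

Var sp := OptShadow(Opt) Do          { shadow prices, indexed by .ConstraintVector }
    sp * 0.5                         { estimated change in the objective for a 0.5-unit RHS increase }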

OptReducedCost(Opt, Decision, PassNonFeasible)

Note: The OptReducedCost() function applies only to LP type problems with continuous decision variables.

How far can a coefficient in the objective function be increased (in a minimization program) or decreased (in a maximization program) before the objective function changes? When a decision variable has a non-zero value in the optimal solution, any change in the objective function coefficient changes the objective value, so for those decision variables the answer would be zero. But for decision variables that are zero, the coefficient can change until that variable eventually enters the basis. This amount is known as the reduced cost (or dual value) of the variable and is returned by the function

OptReducedCost(Opt)

The result is indexed by .DecisionVector.
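
For example, here is a sketch (assuming the same «Opt») that flags the scalar decisions whose objective coefficient has room to move:

Var rc := OptReducedCost(Opt) Do     { reduced costs, indexed by .DecisionVector }
    rc <> 0                          { nonzero entries correspond to decisions held at zero in the optimal solution }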

The shadow price and reduced cost are known as dual values, the shadow price being a dual to the solution in the original (or “primal”) problem, and the reduced cost being a dual to the slack price in the original problem. To each problem in the standard form (see Parts of a Linear Program (LP)) there corresponds a dual linear program given by this.

maximize b1 y1 + b2 y2 + … + bm ym

such that

a11 y1 + a21 y2 + … + am1 ym >= c1
…
a1n y1 + a2n y2 + … + amn ym >= cn

The new variables in this program, y1, y2, …, ym, are the shadow prices, and the slack value of each constraint in this dual program is the reduced cost of the corresponding variable in the primal problem. Note that the variables in the primal problem correspond to constraints in the dual problem, and constraints in the primal problem correspond to decision variables in the dual problem.

OptObjectiveSa(Opt, Decision)

Note: The OptObjectiveSa() function applies only to LP type problems with continuous decision variables.

If we change a coefficient in the objective function, the solution (x1, …,xn) continues to be the optimal solution as long as the coefficient remains within a certain range. Note that the solution point is the same, but the value of the objective function at the optimum is affected. This range can be computed with this function.

OptObjectiveSa(Opt: OptType; Decision: optional)

The first parameter, «Opt», is a linear program defined using DefineOptimization(). When called with only a single parameter, the range is computed for all decision variables, and the result is indexed by the linear program variable index .DecisionVector. If the range for only a single decision variable (or a small subset) is required, the second parameter, «Decision», is used to indicate the decision variable for which the sensitivity is to be computed.

The result returned from OptObjectiveSa() is dimensioned by a local index, .range := ['lower','upper']. Thus, to get the smallest value for each coefficient in the objective that would continue to produce the same solution, you would use an expression like this.

Var sa := OptObjectiveSa(Opt) Do
    sa[.range = 'lower']

When a coefficient can be changed an arbitrary amount without changing the solution basis, the corresponding entry in the result returned by OptObjectiveSa() is -INF for the lower value or +INF for the upper value.

OptRhsSa(Opt, Constraint)

Note: The OptRhsSa() function applies only to LP type problems with continuous decision variables.

The sensitivity of the right-hand side coefficients can be computed using this function.

OptRhsSa(Opt: LpType; Constraint: optional)

This computes the range over which the coefficient in the RHS can vary without changing the basis of the solution. In other words, over the returned range, the set of constraints with zero slack remains the set of constraints with zero slack (i.e., the critical constraints).

The result is indexed by a local index, .range := ['lower', 'upper'], containing the smallest and largest values for the corresponding RHS coefficient. If the optional second parameter is not specified, the range is computed for all constraints and the result is indexed by .ConstraintVector. If the range is needed for only a single coefficient, the second parameter specifies a Constraint node, and the ranges are computed only for the constraints in the designated array.

When a coefficient can be changed an arbitrary amount without changing the solution basis, the corresponding entry in the result returned by OptRhsSa() is -INF for the lower value or +INF for the upper value.
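
For example, mirroring the OptObjectiveSa() sketch above, the largest RHS value for which the current basis (and hence the shadow price) remains valid could be obtained with:

Var sa := OptRhsSa(Opt) Do
    sa[.range = 'upper']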

OptSlack(Opt, Constraint, PassNonFeasible)

When you have a constraint

ai1 x1 + ai2 x2 + … + ain xn <= bi

the slack (or surplus) for that constraint is the positive value that, when added to the LHS, makes both sides equal, that is

ai1 x1 + ai2 x2 + … + ain xn + slacki = bi

The constraints that have zero slack are of particular interest, since they are instrumental in constraining the optimum. If these constraints are relaxed (e.g., by increasing bi), a larger maximum value can be obtained. However, as critical constraints are relaxed, other constraints might become relevant. For the remaining constraints, the non-zero slack indicates how close each one is to becoming critical.

The slack for each constraint is obtained from this function.

OptSlack(Opt)

It takes as input the object returned from DefineOptimization() and returns an array indexed by .ConstraintVector, containing the slack at the optimum for each constraint.
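
For instance, here is a sketch (assuming «Opt» is returned by DefineOptimization()) that flags the critical constraints:

Var s := OptSlack(Opt) Do            { slack values, indexed by .ConstraintVector }
    s = 0                            { True for the binding (critical) constraints }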

OptFindIIS(Opt, newLp)

Note: The OptFindIIS() function applies only to LP type problems with continuous decision variables.

Computes and returns the irreducibly infeasible subset (IIS) of the constraints. This is meaningful when the problem is infeasible, i.e., OptStatusNum(Opt) = 5 (“Feasible solution could not be found”), and is useful for identifying what portions of your constraint formulation make the problem infeasible.

When the optional parameter «newLp» is specified, OptFindIIS() returns a new «LP» object containing only that subset of constraints (still infeasible). The components of this object can be accessed using OptInfo().
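
A sketch of typical usage, checking for infeasibility first (status 5 per the status table above):

If OptStatusNum(Opt) = 5
Then OptFindIIS(Opt)                 { constraints forming an irreducibly infeasible subset }
Else 'The problem has a feasible solution'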

OptWriteIIS(Opt, filename, format)

Writes an irreducibly infeasible subset (IIS) of a linear or quadratic program to a file, including only a subset of constraints that is infeasible, but with the property that if any single constraint is removed, the resulting problem will be feasible. The format is the same as that used by OptWrite().

«Format» values:

  • "LP" (or 1): CPLex LP format
  • "MPS" (or 2): a legacy format used infrequently
  • "LPFML" (or 3): Open Solver Interface[1]

OptRead(filename, DecisionVector, ConstraintVector, format)

Reads a linear or quadratic program definition from file «filename», previously written by OptWrite(), and returns an opaque «LP» or «QP» object. The optional index parameters «DecisionVector» and «ConstraintVector» specify the decisions and constraints for the LP. They must be of the same size as the problem read in. The optional «format» parameter can be "LP" (default), "MPS", or "LPFML" to indicate the type of file being read.

OptWrite(Opt, filename, format)

Writes a text description of a Linear Program (LP) or Quadratic Program (QP) to a file with the specified «filename». Note that if «Opt» is an array of LP problems and «filename» does not share the same dimensions, the file written by OptWrite() contains only the last problem.
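
For example, here is a sketch of a round trip; the file name and the index nodes Dvec and Cvec (sized to match the problem) are placeholders:

OptWrite(Opt, 'MyProblem.lp', 'LP')              { write the problem in CPLEX LP format }
OptRead('MyProblem.lp', Dvec, Cvec, 'LP')        { read it back as a new «LP» object }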

Notes

  1. Also synonymous with "OSI" and "OSIL"
