Enhancements to Optimizer in Analytica 4.0
Licensing
The Frontline solver 7.0 has numerous licensing restrictions that were not previously present. Licensing details are covered at Analytica Optimizer Licensing.
Selecting an external engine
The functions LpDefine, QpDefine, and NlpDefine now have an optional «engine» parameter, which can be used to specify an external add-on engine such as Knitro, Mosek, Xpress, etc. For example:
NlpDefine(..., Engine: "KNITRO")
would use the Knitro engine to solve the non-linear optimization problem. In order to use this feature, the Knitro engine must be properly installed and a license for it acquired. These configurations are described in Using a Solver Add-on Engine.
If the «Engine» parameter is not explicitly specified, LpDefine, QpDefine or NlpDefine will use the internal default solver engine that is deemed most appropriate for the problem. To determine which engine has been selected for a previously defined problem, use:
SolverInfo("engine", lp)
New Functions
- SolverInfo allows you to retrieve the components of a previously-defined LP/QP/NLP, or to obtain information about a solver engine. The function is described at SolverInfo.
Generalized Access to Solver Engine Parameters
Each solver engine in general has its own unique set of parameters that can be used to control its search. For example, the parameters used by Knitro only partially overlap the parameters used by the built-in evolutionary algorithm. To be able to take full advantage of Knitro, you need to be able to access all its parameters. If a third-party engine becomes available after Analytica 4.0 is released, it may have additional parameters.
The solver engine parameters have previously been exposed to the Analytica end-user as optional parameters to LpDefine, QpDefine and NlpDefine. Since those function declarations are fixed, they do not provide a fully flexible mechanism for accessing and setting engine-specific parameters. For this reason, a new method for specifying solver control settings has been added to LpDefine, QpDefine and NlpDefine.
To change a single setting, you identify the parameter and its setting using optional parameters, e.g.:
NlpDefine(..., parameter: "PopulationSize", setting: 50)
Note: LpDefine, QpDefine and NlpDefine all provide optional parameter and setting arguments in the same fashion.
To set multiple settings, you supply one or two arrays; there are two methods for doing this. The first is to specify one array for the «parameter» argument and another for the «setting» argument, such that the two arrays have the same dimensionality. For example:
Index SettingNum := [1, 2, 3]
Variable ParamArr := Table(SettingNum)("MultiStart", "TopoSearch", "PopulationSize")
Variable SettingArr := Table(SettingNum)(1, 1, 10)
Variable The_Nlp := NlpDefine(..., parameter: ParamArr, setting: SettingArr)
With this usage, ParamArr and SettingArr may share any number of indexes; they need not be one-dimensional. All settings (at any coordinate) are used. A special case of this places the parameter names in the index:
Index ParamNames := ["MultiStart", "TopoSearch", "PopulationSize"]
Variable SettingArr := Table(ParamNames)(1, 1, 10)
Variable The_Nlp := NlpDefine(..., parameter: ParamNames, setting: SettingArr)
This arrangement does not work if ParamNames is a self-indexed table (or any other self-indexed array) where the parameter names reside in the self-index values, because the array's cell values, not its index values, are passed in as the parameter names.
When the «setting» parameter is guaranteed to be one-dimensional, and the parameter names are in the index labels, the «parameter» argument can be omitted entirely. In the above example, The_Nlp could equally be defined using:
NlpDefine( ..., setting: SettingArr)
The one caveat with this is that an error will result if SettingArr is not exactly one-dimensional after array abstraction. If SettingArr contains any indexes that array abstraction iterates over (because of other parameters), those dimensions will be sliced out of SettingArr. Therefore, a two- or three-dimensional SettingArr can be provided, as long as multiple instances of optimization problems are being defined via array abstraction. It is best to ensure that the ParamNames dimension does not appear as a dimension of any other parameter; otherwise it too would be abstracted over, and you would end up with a zero-dimensional array after array abstraction (hence an error).
Often the most convenient method for specifying search parameters is to set up a self-indexed table, where the index labels are the parameter names and the cell values are the settings. This is an instance of the previous technique, in which the «setting» parameter is specified but the «parameter» argument is omitted. E.g.:
IndexVals of Parms := ["MultiStart", "TopoSearch", "PopulationSize"]
Definition of Parms := Table(Self)(1, 1, 10)
Variable The_NLP := NlpDefine(..., Setting: Parms)
Again, notice that the «Parameter» argument is not explicitly specified.
To view the current search control settings used in a particular LP, QP, or NLP, the function:
SolverInfo("Setting", lp)
returns a one-dimensional array in which a local index, .Parameter, contains the parameter names, and the array values contain the current settings. In general, the local index may contain a different set of parameters than those provided in the «parameter» argument to LpDefine: there may be additional ones, since in general only a subset is specified explicitly, and any parameters in the «parameter» argument that do not apply to the current engine will not appear in the result from SolverInfo.
To access the default parameter settings for a given solver engine, use:
SolverInfo(Engine: engname, Item: "Setting")
The largest and smallest allowable values for each setting can also be obtained using:
SolverInfo(Engine: engname, Item: ["Setting", "Min Setting", "Max Setting"])
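For example, a minimal sketch that compares the current settings of the NLP defined above against an engine's defaults and allowed ranges (the node name The_Nlp and the engine name "GRG Nonlinear" are illustrative assumptions):

Variable CurSettings := SolverInfo("Setting", The_Nlp)   { current settings of this NLP }
Variable DefSettings := SolverInfo(Engine: "GRG Nonlinear", Item: "Setting")   { engine defaults }
Variable SettingRange := SolverInfo(Engine: "GRG Nonlinear", Item: ["Min Setting", "Max Setting"])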
Descriptions of what the parameter settings do can be found in the Frontline Premium Solver for Excel manual.
LpFindIIS
Two parameters have been added to Function LpFindIIS, which now has the declaration:
LpFindIIS(lp: LP_TYPE; newlp: optional boolean = false; constraintsOnly: optional boolean)
where:
- lp : an LP instance previously defined using LpDefine. LpFindIIS cannot be used with QPs or NLPs.
- newlp: When true, a new LP instance is returned. When false, a subset of the constraints is returned.
- constraintsOnly: When true, the lower and upper variable bounds are not relaxed.
LpFindIIS can now return a new LP instance with fewer constraints or relaxed variable bounds, so as to ensure the problem has a feasible solution. When an LP is returned, you can access the modified variable bounds or constraint set using:
SolverInfo(iis, "lb")
SolverInfo(iis, "ub")
SolverInfo(iis, "Constraints")
When «newlp» is false, «constraintsOnly» is implicitly true, and the function returns a subset of the Constraints index. LpFindIIS(lp, false) is equivalent to:
SolverInfo(LpFindIIS(lp, true, true), "Constraints")
The previous version of LpFindIIS returned only a subset of the Constraints index, which provided no way to return relaxed variable bounds. Therefore, the previous version did not relax variable bounds.
When «newlp» is true, «constraintsOnly» defaults to false. Note: LpFindIIS(lp, true) should always be able to find a feasible problem, while LpFindIIS(lp, false) does not have this guarantee (i.e., it may fail when the variable bounds themselves are inconsistent).
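For example, a minimal sketch (The_Lp is a hypothetical, infeasible LP defined earlier with LpDefine):

Variable iis := LpFindIIS(The_Lp, newlp: true)
Variable iisConstraints := SolverInfo(iis, "Constraints")   { the reduced constraint set }
Variable iisLb := SolverInfo(iis, "lb")   { possibly relaxed lower bounds }
Variable iisUb := SolverInfo(iis, "ub")   { possibly relaxed upper bounds }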
Multiple File Formats
LpWrite, LpRead, and LpWriteIIS now all contain an optional «format» parameter, and can read/write to three different file formats:
LpWrite(lp: LP_Type, filename: text; format: optional text = "LP")
LpWriteIIS(lp: Lp_type, filename: text; format: optional text = "LP")
LpRead(filename: text; Vars, Constraint: optional IndexType; format: optional text = "LP")
Possible file formats are: "LP", "MPS", and "LPFML".
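For example, a sketch that writes a previously defined LP to an MPS file and reads it back (The_Lp, Vars and Constraints are assumed to exist; the file name is arbitrary):

LpWrite(The_Lp, "myProblem.mps", format: "MPS")
Variable RereadLp := LpRead("myProblem.mps", Vars: Vars, Constraint: Constraints, format: "MPS")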
Quadratic Constraints
Previously, a QP could have a quadratic objective function, but its constraints had to be linear. QpDefine now permits quadratic constraints. The built-in SOCP engine in Frontline 7.0 can handle convex quadratic constraints, and the external MOSEK engine add-on can handle these with better performance.
To define quadratic constraints, the quadratic component is specified in the «lhsQ» parameter, indexed by Vars, Vars2 and Constraints. The linear part of the constraint is defined in the lhs parameter (as before) indexed by Vars and Constraints. For example, consider the following problem:
Minimize:    x^2 + y^2 - 2*x
Subject to:  x^2 + 4*x*y + 6*y <= 36
             y^2 + 4*x - 2*y <= 0
where x >= 0, y >= 0
The optimum for this problem is at x=.1286, y=0.3031, objective=-0.1488.
We define indexes as follows:
Index Vars := ['x', 'y']
Index Vars2 := CopyIndex(Vars)
Index Constraints := ['c1', 'c2']
The objective is given by:
Variable c := Table(Vars)(-2, 0)
Variable Q :=
              Vars2
Vars          x      y
  x           1      0
  y           0      1
The left-hand side is given by:
Variable Lhs :=
                    Vars
Constraints       x      y
  c1              0      6
  c2              4     -2
Variable LhsQ :=
Constraints:            c1                  c2
Vars:                 x      y            x      y
Vars2:   x            1      4            0      0
         y            4      0            0      1
Variable Rhs := Table(Constraints)(36, 0)
The problem is then formulated as:
QpDefine(Vars, Vars2, Constraints, c: c, q: Q, lhs: Lhs, lhsQ: LhsQ, rhs: Rhs, sense: '<', lb: 0, maximize: false)
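Assuming the QpDefine call above is the definition of a variable named The_Qp (a hypothetical name), the result can be inspected with functions used elsewhere on this page:

LpStatusText(The_Qp)   { e.g., "Optimum found" }
LpSolution(The_Qp)     { approximately x = 0.1286, y = 0.3031 }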
The SOCP engine may fail when the constraints are not convex. Failure on non-convexity is not guaranteed: Frontline claims to be good at detecting non-convexity, but cannot detect all cases. Frontline's documentation discusses cone optimization programs at length, a large class of optimization programs contained within the set of quadratically constrained problems that are handled.
Grouped Integer Variables
The optimizer now supports a new type of integer constraint -- group exclusive.
A group of N integer variables assigns an integer from 1 to N to each of the N variables, such that the variables are constrained to have unique values. For example, if a group contains {x1, x2, x3}, then x1 = 1, x2 = 2, x3 = 3 is acceptable, as is x1 = 3, x2 = 1, x3 = 2. But x1 = 2, x2 = 3, x3 = 2 is not acceptable, since the values are not unique.
There can be multiple groups. For example, consider this set of constraints:
Group 1: {x1, x2, x3}
Group 2: {x4, x5}
Constraints:
   x1 - x4 = 0
   x3 - x5 = 0
   x1 + 2*x5 = 4
This system has a single feasible solution: {x1=2, x2=3, x3=1, x4=2, x5=1}.
To specify that a variable belongs to a group, its ctype is set to 'G'. If there is more than one group, then the parameter
group : optional numeric[Vars]
must be specified to LpDefine, QpDefine, or NlpDefine. For example:
Variable VarGroups := Table(Vars)(1, 1, 1, 2, 2)
Variable ub := Table(Vars)(3, 3, 3, 2, 2)
Variable lp := LpDefine(Vars, ..., ctype: 'G', group: VarGroups, lb: 1, ub: ub)
The constraints are specified in the usual fashion and are omitted above for conciseness (a fuller sketch appears below). Here we have two variable groups, numbered 1 and 2, with the membership of each variable specified by the «group» parameter.
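As a sketch, here is the example system from above with the constraint coefficients filled in. The parameter names and argument order follow the other LpDefine and QpDefine examples on this page, and the node names are illustrative:

Index Vars := ['x1', 'x2', 'x3', 'x4', 'x5']
Index Constraints := ['c1', 'c2', 'c3']
Variable VarGroups := Table(Vars)(1, 1, 1, 2, 2)
Variable ub := Table(Vars)(3, 3, 3, 2, 2)
Variable LhsCoef := Table(Constraints, Vars)(
   1, 0, 0, -1,  0,    { x1 - x4 = 0 }
   0, 0, 1,  0, -1,    { x3 - x5 = 0 }
   1, 0, 0,  0,  2)    { x1 + 2*x5 = 4 }
Variable Rhs := Table(Constraints)(0, 0, 4)
Variable lp := LpDefine(Vars, Constraints, lhs: LhsCoef, rhs: Rhs, sense: '=', ctype: 'G', group: VarGroups, lb: 1, ub: ub)
LpSolution(lp)   { should return 2, 3, 1, 2, 1 -- the unique feasible solution noted above }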
Trace Files for NLP optimization
When a non-linear optimization performs poorly, you can often figure out why by looking at the path the search took. To make this easier, NlpDefine has a new parameter, «TraceFile», which accepts a file name (a full path, or a path relative to CurrentDataDirectory). When specified, Analytica records into that file the independent decision vector and each explicitly computed item (e.g., objective, lhs, gradient, jacobian, objHessian, lhsHessian) at each search step.
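For example (the file name here is illustrative):

NlpDefine(..., TraceFile: "nlp_search_trace.txt")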
After running the optimization, you can view the tab-separated file in a text editor.
NLP hooks for Array Abstraction
NLPs cannot automatically array abstract over the computed «objective», «lhs», «gradient» or «jacobian» parameters. However, Analytica 4.0 introduces several extensions to NlpDefine that give the model builder additional tools for manually adjusting a model so that it abstracts over explicitly indicated dimensions. Prior to these enhancements, dealing with the array abstraction problem in a large optimized model was particularly difficult (it still is not a piece of cake, but these enhancements are pretty significant).
The underlying problem arises when a spurious dimension appears in a computed value, such as the objective or a constraint lhs value. In Analytica this is not an unusual thing. For example, a modeler or user might explore multiple discount_rates, so at a given X vector we get a different objective for each possible discount rate. Logically this amounts to having multiple separate optimization problems that should each be solved independently. But the NLP optimizer encounters this while it is conducting a single search for a feasible extremum or solution, creating an ambiguity.
To deal with this, you can now tell NlpDefine that you want a separate NLP for each element across a given dimension. This is done using the «Over» parameter, e.g.
NlpDefine(..., Over: discount_rate)
Multiple dimensions may be listed, e.g.
NlpDefine(..., Over: discount_rate, scenario, investment_level)
In addition, if a spurious dimension appears in any of the static parameters to NlpDefine (i.e., any parameter other than «obj», «lhs», «jacobian», «gradient», «hessian», «lhsHessian», «parameter» or «setting»), NlpDefine will recognize this even if the dimension is not listed in the over parameter.
You can list a variable in the «Over» parameter which is not always an index, but may be in certain circumstances. For example, you might do this in anticipation that a user might change a scalar to a list. When it is a scalar, the parameter has no real effect. You can also list a computed value, in which case the dimensions of the computed value are recognized as spurious dimensions. This opens up the possibility of the following trick:
NlpDefine(Var, Constraint, obj: Cost, lhs: Lhs, Over: Cost, Lhs[@Var = 1, @Constraint = 1])
If your model is structured in such a way that you expect your objective and Lhs variables to contain exactly the same dimensionality when evaluated normally as they will contain when evaluated during an iteration of optimization, then this trick identifies the spurious dimensions of the computed parameters by computing the «obj» and «lhs» at the default value of the decision variable (i.e., not set by the optimizer), and filtering out all but the spurious dimensions. Note that we don't want Var or Constraint to be included in the spurious dimensions, since this would result in a separate NLP for each of their elements.
When an index appears in the «Over» parameter or in a static parameter value, a separate NLP instance is created for each index value (or combination of index values when there are multiple such indexes). Each NLP is solved separately, but each is aware of its array abstraction context (its coordinate). Should a spurious dimension be encountered in a computed value, and that spurious dimension is in the array abstraction context, Analytica keeps only the slice corresponding to the current array abstraction context, thus eliminating the spurious dimension.
If you do nothing more than list the spurious dimensions in the «Over» parameter, you should obtain an array-abstractable optimization, but it will be a very inefficient one: Analytica will compute the objective for every possible case, then throw away all those slices but one, and this is repeated within each NLP's optimization. To obtain an efficient optimization, where only the slices of interest are computed at each iteration, the modeler may need to adjust the model so that it is able to compute a single case at a time. To make this work, the NLP must be able to tell the model which case is being computed.
A new parameter, «ContextVar», tells NlpDefine how it should communicate its current array abstraction context to the model. «ContextVar» contains a list of variables, one for each potential spurious index. When an NLP is solved, it sets the values of those variables to indicate which slice is to be computed. Each context variable must abide by certain requirements, but there are a few variations on what is permitted.
Choice pulldowns provide one method that a modeler can use to structure a model so that it is capable of computing a single case at a time. For example, discount_rate could be defined as Choice(Self, 0), where the domain of discount_rate lists the possible discount rates. When the pulldown is set to ALL, the discount_rate dimension appears in downstream results and becomes a spurious dimension for optimization. Edit tables based on discount_rate should be changed to determ tables, so that they can handle the single-selection case. Structured in this way, discount_rate can serve as its own «ContextVar»:
NlpDefine(..., Over:discount_rate, ContextVar: discount_rate)
When each NLP instance is solved, it will effectively change the Choice to the discount_rate that its problem instance corresponds to.
A second way to structure a model to compute a single case along a dimension is to use a separate variable to indicate which value is to be processed. The dimension itself could be defined by a normal Index variable, I, while the model contains explicit subscripting operations such as A[I = ci], arranged so that by the time obj, lhs, etc. are computed, the spurious index no longer appears. A model user still has the option of setting ci to a list of values, but while the optimizer is running, it temporarily resets ci to the index element corresponding to the NLP instance's array abstraction context. When a model is structured in this fashion, you must set the domain of the variable ci to an Index domain, where the index used is the actual spurious index variable. The pieces here are:
Index I := ['a', 'b', 'c']
Variable ci := 'a'   { perhaps -- could be anything }
Variable theNlp := NlpDefine(..., over: I, ContextVar: ci)
In general, a context variable must satisfy the following requirements:
- It must have an explicit domain - either list, list of labels, or index.
- It should either be one of the indexes that the NlpDefine statement iterates over, or its index-domain should be one of those indexes. (Note: If both of these cases hold, the first takes precedence).
The second case is not a hard requirement: if it is violated, the context variable has no effect and is not changed during the NLP optimization; however, in that case a warning is issued (unless NlpDefine is inside an IgnoreWarnings call, the Show Result Warnings preference is turned off, or the indicated index is Run). The Run exception exists because the Run index may be spurious in Sample mode evaluation but not present in Mid mode evaluation, so a variable having Run as its domain index might legitimately be listed in «ContextVar»; the warning is therefore suppressed when a context variable uses Run for its index-valued domain.
You may list any number of context variables in NlpDefine, although the number listed will generally be small, since these correspond to spurious dimensions. Because you get a separate optimization problem for every combination of elements of the spurious dimensions, you will want to keep the number of spurious dimensions to a minimum.
Finally, a number of limitations deserve mention. These enhancements do not make NlpDefine Dynamic-aware, and do not help in that respect. The «ContextVar» enhancements cannot be applied to indexes containing duplicate elements. The work of structuring a model so that it can compute a single instance over any potential spurious index is not handled automatically by Analytica; it is the responsibility of the model builder. A context variable should never be downstream of another context variable for the same optimization, or downstream of the decision variable for the optimization (because as these are set by the optimizer, previously set context variables may get invalidated, leading to rather unpredictable results).
Optional Objective / Constraints to NlpDefine
Every parameter of NlpDefine except «x» is now optional. This makes it less cumbersome to define simple non-linear optimizations, such as optimizations with no constraints (no Constraints index, lhs, rhs, or sense required), optimizations with only one constraint (no Constraints index required), optimizations involving a scalar decision variable (no Vars index required), and finding feasible solutions to systems of equations (no objective needs to be specified).
An error is raised if neither a constraint nor an objective is specified. If the «lhs» parameter is omitted but the Constraints index is not, the Constraints index must be zero-length or an error is raised.
A constant objective is treated the same as an unspecified objective. If no objective is specified, or a constant is provided, then Analytica uses Frontline's "Find Feasible Solution" method rather than Find Maximum or Find Minimum; the «maximize» parameter is also ignored in that case.
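For example, a minimal sketch of a feasibility problem with no objective: find (x, y) with x*y = 6 and x + y = 5. All names are hypothetical, and the parameter names follow the other NlpDefine examples on this page:

Index Fvars := ['x', 'y']
Index Fcons := ['c1', 'c2']
Variable v := Table(Fvars)(1, 1)   { starting point for the search }
Variable FeasLhs := Table(Fcons)(v[Fvars = 'x'] * v[Fvars = 'y'], Sum(v, Fvars))
Variable feasNlp := NlpDefine(Vars: Fvars, Constraints: Fcons, x: v, lhs: FeasLhs, sense: '=', rhs: Table(Fcons)(6, 5))
LpSolution(feasNlp)   { any feasible point, e.g. x = 2, y = 3 }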
When no Vars index is specified, the decision variable is taken to be a scalar. The results of all functions that would normally return an array indexed by the Vars index will not have a Vars index. For example, LpSolution returns a scalar.
Simple Goal Seek
A simple goal seek finds a scalar value of the decision variable that satisfies a single constraint. Since the decision variable is a scalar and there is only one constraint, neither index is required. This also means that logic using the decision variable (in lhs) does not have to slice over the Vars index. As an example, the following finds the integer scalar value for which f(x) = g:
var nlp := NlpDefine(x: x, lhs: f(x), sense: '=', rhs: g, ctype: 'I');
if LpStatusNum(nlp) <> 0 Then Error(LpStatusText(nlp));
LpSolution(nlp)
An optimization with only one constraint, with or without an objective, can omit the Constraints index as in this example. The rhs should then be a scalar, and the lhs expression should evaluate to a scalar.
Optimization without Constraints
To find the minimum of a scalar function f(x) with no constraints, only x and an objective need to be specified:
NlpDefine(x: x, obj: f(x))
For a decision vector, at least a Vars index, a decision variable, and an objective must be specified, e.g.:
NlpDefine(Vars: project, x: investment, obj: loss)
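For instance, a minimal runnable sketch using those parameter names (the target values and the quadratic loss are purely illustrative):

Index project := ['A', 'B', 'C']
Variable target := Table(project)(10, 20, 30)
Variable investment := Table(project)(0, 0, 0)   { initial decision vector }
Variable loss := Sum((investment - target)^2, project)
Variable theNlp := NlpDefine(Vars: project, x: investment, obj: loss)
LpSolution(theNlp)   { expected to be approximately 10, 20, 30 }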
Hessians
If you can explicitly (analytically) compute the first or second derivatives of the objective and left-hand side constraint functions, expressions for these can optionally be provided to the optimizer. When these are available, the optimizer can evaluate these expressions rather than estimating the derivatives through finite differencing. This can speed up convergence by reducing the number of evaluations, and by providing a "cleaner" computation: derivatives, and especially second derivatives, can be quite inaccurate (noisy) when computed through finite differencing.
The first derivative of the objective is called the gradient and is specified in the optional «Gradient» parameter. It should compute a value dimensioned by the Vars index. The first derivative of the left-hand side constraints is the Jacobian; it should be dimensioned by the Constraints and Vars indexes. If one is provided, both must be (unless there is no objective, or there are zero constraints).
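As a sketch, here is an unconstrained problem with an analytic gradient (no Jacobian is needed because there are no constraints; all names are illustrative):

Index Dvars := ['x', 'y']
Variable xv := Table(Dvars)(1, 1)   { initial decision vector }
Variable Obj := Sum(xv^2, Dvars) - 2*xv[Dvars = 'x']   { x^2 + y^2 - 2x }
Variable Grad := 2*xv - Table(Dvars)(2, 0)   { (2x - 2, 2y), dimensioned by Dvars }
Variable gradNlp := NlpDefine(Vars: Dvars, x: xv, obj: Obj, Gradient: Grad)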
Analytica 4.0 now allows expressions for the Hessian (second derivative of the objective) and LhsHessian (second derivative of the constraint left-hand sides) to be specified using the optional parameters «Hessian» and «LhsHessian». «Hessian» should compute a value dimensioned by Vars, and «LhsHessian» should have the same dimensionality as the Jacobian (Constraints x Vars). The Hessian gives the optimizer information about the curvature.
The GRG Nonlinear solver is most likely to make good use of derivatives. To ensure these are used well, set the ObjNl and LhsNl to "N" (smooth non-linear).
The built-in GRG Nonlinear solver will evaluate Gradient and Jacobian expressions, but not Hessians. The Knitro engine (an optional add-on) can make use of Hessian information. The Quadratic and SOCP Barrier engines use Hessian information, but these are based on the supplied matrices of coefficients, not NlpDefine. (Question: If a quadratic problem is defined with NlpDefine, can Engine:SOCP Barrier be used? If so then the Hessians would be used. I haven't tried it yet.)
LpStatusNum Changed
The status codes returned by the Frontline engine have changed. Therefore, the return value from LpStatusNum() is different in Analytica 4.0 (with Frontline 7.0) than it was in Analytica 3.x (with Frontline 5.5). There is not a one-to-one correspondence between the old numeric values and the new ones. For example, in Analytica 3.x, "Optimum found" and "MIP Optimum found" had separate status codes, while in 4.0 there is a single "Optimum found".
The exact text returned from LpStatusText is also different.
These changes imply that code that relied on specific LpStatusNums, or specific text comparisons with LpStatusText, is not backward compatible.