Piecewise-Linear (PWL) Relationships

Revision as of 17:42, 24 May 2016 by Bbecane (talk | contribs)

DefineOptimization() has the capability to incorporate piecewise-linear relationships between two scalar quantities into a linear program (or quadratic program or quadratically-constrained program). This occurs automatically and transparently when you use the LinearInterp function within your model. You can thus have PWL constraints and a PWL objective, and still have an LP.

To do this, the «d» and «r» parameters in the LinearInterp(d, r, x, i) call cannot depend on any of your decision variables, but «x» can depend on any of your decision variables, even on multiple decision variables.

To incorporate a PWL relationship, DefineOptimization() introduces several auxiliary decision variables and constraints into the formulation. Some of these are binary variables, so your LP will become an integer LP if it isn’t already. This all happens transparently, so you generally don’t need to be aware that these auxiliary variables and constraints are present; however, you can peek at them by double clicking on the «LP» object returned by DefineOptimization, and then looking at Scalar Decision Info and Scalar Constraint Info.
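The kind of encoding involved can be sketched in plain Python. The following illustrates the standard convex-combination ("lambda") formulation commonly used to embed PWL terms in a MIP; it is an illustration of the idea, not Analytica's actual internal formulation. In a real MIP the `z` segment indicators are binary decision variables and the `lam` weights are continuous decision variables chosen jointly by the solver; here we simply pick the active segment directly.

```python
def pwl_lambda(d, r, x):
    """Emulate the convex-combination ("lambda") PWL encoding.

    In a MIP formulation, one binary z[k] per segment selects the
    active piece, and the lam[i] weights are continuous variables
    restricted (SOS2-style) to two adjacent breakpoints.
    """
    for k in range(len(d) - 1):
        if d[k] <= x <= d[k + 1]:
            # binary segment indicators: exactly one segment is active
            z = [int(j == k) for j in range(len(d) - 1)]
            assert sum(z) == 1
            # weights on the two breakpoints of the active segment
            lam = [0.0] * len(d)
            t = (x - d[k]) / (d[k + 1] - d[k])
            lam[k], lam[k + 1] = 1.0 - t, t
            # the encoding recovers x as a convex combination of d
            assert abs(sum(li * di for li, di in zip(lam, d)) - x) < 1e-9
            # ... and the PWL value as the same combination of r
            return sum(li * ri for li, ri in zip(lam, r))
    raise ValueError("x outside breakpoint range")
```

Because the `z` indicators are binary, each added PWL relationship turns the LP into a mixed-integer LP, which is why the solver's search space grows.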


The “Vacation plan with PWL tax” example model, found in the “Optimizer Examples” folder, illustrates the use of LinearInterp() within an LP. This model calculates PWL income tax, where tax rates increase with tax bracket. The PWL relationship is encoded in the Total_tax variable.
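The bracketed-tax idea can be mimicked outside Analytica. The Python sketch below uses hypothetical bracket boundaries and rates (not the ones in the example model): the bracket lower bounds play the role of «d», and the cumulative tax owed at each bound plays the role of «r», so total tax is a PWL function of income exactly as LinearInterp would represent it.

```python
def income_tax(brackets, rates, income):
    """PWL income tax: brackets are lower bounds, rates are marginal
    rates per bracket (the top bracket is open-ended)."""
    # cumulative tax owed at each bracket boundary -- the «r» table
    cum = [0.0]
    for i in range(1, len(brackets)):
        cum.append(cum[-1] + rates[i - 1] * (brackets[i] - brackets[i - 1]))
    # find the bracket containing income, then apply its marginal rate
    for i in range(len(brackets) - 1, -1, -1):
        if income >= brackets[i]:
            return cum[i] + rates[i] * (income - brackets[i])
```

With brackets `[0, 10000, 40000]` and marginal rates `[0.10, 0.25, 0.40]`, an income of 50000 is taxed 1000 in the first bracket, 7500 in the second, and 4000 above 40000, for 12500 total.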


Any continuous non-linear relationship between scalar quantities can be approximated by a PWL function, so this feature provides one way to keep your problem an LP even when non-linearities are present. However, the auxiliary binary decision variables introduce combinatorics into the search space, so the time required to solve the LP often increases dramatically when PWL relationships are introduced. Hence, PWL constraints are not a panacea for non-linearities. Even so, it is often still preferable to keep your model as an LP rather than resorting to an NLP, due to the advantages of LPs: they array-abstract more nicely, and when a solution is found, you can be sure it is the global optimum.
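To make the approximation trade-off concrete, here is a small Python sketch (standing in for the Analytica tables you would pass to LinearInterp) that tabulates breakpoints for a smooth non-linear function and measures how far the PWL interpolant strays from it. For a smooth function, doubling the segment count roughly quarters the worst-case error, at the cost of more auxiliary binaries.

```python
import math

def pwl_tables(f, lo, hi, n):
    """Build «d»/«r» breakpoint tables so a PWL interpolant can stand in for f."""
    d = [lo + (hi - lo) * k / n for k in range(n + 1)]
    return d, [f(v) for v in d]

def interp(d, r, x):
    """Plain linear interpolation between adjacent breakpoints."""
    k = next(i for i in range(len(d) - 1) if d[i] <= x <= d[i + 1])
    t = (x - d[k]) / (d[k + 1] - d[k])
    return (1.0 - t) * r[k] + t * r[k + 1]

def max_error(n):
    """Worst sampled deviation of the n-segment PWL fit to exp on [0, 2]."""
    d, r = pwl_tables(math.exp, 0.0, 2.0, n)
    return max(abs(interp(d, r, x / 100) - math.exp(x / 100))
               for x in range(201))
```

Eight segments already hold the error on exp over [0, 2] below about 0.06, and sixteen segments bring it under 0.02.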


In general, LinearInterp(d, r, x, i) handles both interpolation (where «x» is within the range of «d» values) and extrapolation (where «x» is smaller than the smallest value in «d», or larger than the largest value in «d»). Supporting extrapolation adds complexity to the formulation that is often not needed. If you can guarantee that, at the solution, the value of «x» will be within the range of «d», then you can disallow extrapolation by specifying the optional «extrapolationMethod» parameter for LinearInterp(..), giving it a value of either 2, 3 or 5.

LinearInterp(d, r, x, i, extrapolationMethod: 2)

Methods 2, 3 and 5 are identical from the optimizer's perspective, but differ in how the function evaluates outside of a linear optimization context. Method 2 returns null rather than extrapolating; method 3 returns the value of the nearest endpoint; and method 5 extrapolates using the slope of the first or last segment.
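The three out-of-range behaviors can be illustrated with a hypothetical Python re-implementation of the evaluation semantics (this is a sketch of the behavior described above, not Analytica's actual code):

```python
def linear_interp(d, r, x, extrapolation_method):
    """Sketch of LinearInterp's evaluation outside an optimization:
       method 2 -> None (null); method 3 -> nearest endpoint value;
       method 5 -> extend the first or last segment's slope."""
    if x < d[0] or x > d[-1]:
        if extrapolation_method == 2:
            return None                      # refuse to extrapolate
        if extrapolation_method == 3:
            return r[0] if x < d[0] else r[-1]  # clamp to nearest point
        i = 0 if x < d[0] else len(d) - 2    # method 5: end segment's slope
    else:
        i = next(k for k in range(len(d) - 1) if d[k] <= x <= d[k + 1])
    slope = (r[i + 1] - r[i]) / (d[i + 1] - d[i])
    return r[i] + slope * (x - d[i])
```

For example, with `d = [0, 1, 2]` and `r = [0, 10, 15]`, evaluating at `x = 3` yields `None` under method 2, `15` under method 3, and `20` under method 5.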

When you use «extrapolationMethod» 2, 3, or 5, the formulation adds an implicit constraint that the «x» value must lie within the range of «d». If your assumption that «x» stays within this range is wrong, you may end up making your problem infeasible. Disallowing extrapolation reduces the combinatorics somewhat, and the formulation is usually better conditioned numerically, so it may help to reduce solve times in some cases.

See Also

