Improving Computational Efficiency of NLPs

Revision as of 18:03, 20 November 2015 by Jhernandez3


Module 4 of the Airline NLP was the first example in which we used an extrinsic index (Run) with an NLP. Since this was a very simple example with only ten samples, the result appeared almost instantly. Improving computational efficiency in this case would matter only to the world’s most ambitious competitive coffee drinking champion. But real-world NLPs can be very demanding on processing cycles. The dirty secret of the Module 4 example is that it performed about ten times as many calculations as it needed to. Since Module 4 is intended as a proxy for larger models, we need to fix this inefficiency.

Remember that NLP search algorithms are iterative. The optimizer repeatedly sets new Decision values and re-evaluates every downstream variable, all the way to the Objective value.

[Image: Nlp 1.png]

The influence diagram makes it easy to identify variables that are evaluated repeatedly during the search. These include Annual_Capacity, Seats_Sold, Demand, and of course the Objective: Profit.

If any of these variables contains an extrinsic index (such as Run in this example), Analytica computes the entire array on each iteration, including slices for every element of Run whether they are needed or not. But Run is an extrinsic index in this version of the Airline model, which means that each element of Run gets its own separate optimization. Within a given optimization, the optimizer cares about only one Run element, even though entire arrays are being evaluated repeatedly. The superfluous slices are discarded by the optimizer, and the next iteration begins.
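To make this concrete, here is a minimal sketch of how an extrinsic index like Run might enter a model. The specific definitions and values below are illustrative assumptions, not the actual Module 4 formulas:

   Index Run := 1..10   { extrinsic index: ten independent cases }

   { Hypothetical per-case inputs: each Run element carries its own assumptions }
   Variable Base_Demand := 400 + 20*Run
   Variable Elasticity := -1.5 - 0.05*Run

Because Base_Demand and Elasticity are indexed by Run, every variable computed from them also carries the Run dimension unless context is set.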

The optional SetContext parameter of DefineOptimization() allows you to identify nodes for which only a single element of the extrinsic index applies to a given optimization. This avoids the inefficiency described above.

[Image: Nlp 2.png]

To identify the best context variables, let’s look at the same influence diagram in a different light. Nodes have been re-labeled here to show the extrinsic indexes present or “scalar” if none. Iterated quantities (i.e. quantities downstream of Decisions) are colored green.

There are a few basic principles of context setting:

  • During optimization, a context variable does not propagate extrinsic indexes to downstream variables.
  • Avoid choosing context variables that are downstream of Decisions. Such variables are re-evaluated on every iteration, so setting context on them is only partially effective at improving efficiency.
  • Context variables should be as close to the optimization as possible without being downstream of Decisions.
  • Include only as many context variables as necessary to prevent extrinsic indexes from propagating to iterated quantities.


In this example, setting context on Demand would eliminate the Run index from Seats Sold and Profit. But Demand is downstream of a Decision and is therefore NOT the most suitable candidate.

[Image: Nlp 3.png]

Base Demand and Elasticity are the right choices: they are evaluated only once, and together they eliminate Run from the rest of the model during optimization.
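To see why this works, suppose Demand uses a constant-elasticity form (the formula and the Base_Fare constant below are illustrative assumptions, not necessarily the Module 4 definition). With context set on Base_Demand and Elasticity, each optimization sees only a single Run slice of those inputs, so Demand and everything downstream of it evaluate as scalar over Run:

   { Hypothetical constant-elasticity demand curve; Base_Fare is an assumed constant }
   Variable Demand := Base_Demand * (Fare / Base_Fare)^Elasticity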

The diagram above shows how the chosen context variables prevent the extrinsic index from being propagated to iterated variables. Iterated variables end up being scalar in terms of their extrinsic dimensions in the context of a single optimization. Outside the optimization, the dimensions of these arrays don’t change.

The following definition will dramatically improve the performance of the Airline NLP Module 4 example:

  Variable Opt := DefineOptimization(
     Decisions: Number_of_Planes, Fare,
     Maximize: Profit,
     SetContext: [Base_Demand, Elasticity])
Note: In this example, Run was merely an example of an extrinsic index. The SetContext parameter is important for all NLPs that use extrinsic indexes, whether they contain uncertain quantities or not.