Reverse Dynamic

<breadcrumbs>Analytica User Guide > Dynamic Simulation > {{PAGENAME}}</breadcrumbs>
In some dynamic programming algorithms, you start with a known final value, then compute the value at each time point as a function of future values. This dynamic-in-reverse can be accomplished using [[Dynamic]] by specifying the recurrence as the first parameter to [[Dynamic]], followed by the final value(s), and then specifying <code>reverse: true</code>.
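For intuition, the same backward fill can be sketched outside Analytica. The following is a minimal Python version of a reverse recurrence that discounts a known final value back through time; the variable names and numbers here are hypothetical and are not part of the example model:

```python
# Backward fill: start from a known final value and compute each earlier
# time point from the one after it, as Dynamic(..., reverse: true) does.
final_value = 100.0   # hypothetical known value at the last time point
rate = 0.05           # hypothetical per-period discount rate
n_times = 5

values = [0.0] * n_times
values[-1] = final_value                     # known final slice
for t in range(n_times - 2, -1, -1):         # walk Time backward
    values[t] = values[t + 1] / (1 + rate)   # value today from value tomorrow
```

In Analytica the loop is implicit: <code>Dynamic</code> with <code>reverse: true</code> evaluates the last <code>Time</code> slice from the final value(s) and each earlier slice from the recurrence.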
The example model <code>Optimal Path Dynamic Programming.ana</code> computes an optimal path over a finite horizon. There is a final payout in the last time period, as a function of the final state, and an action cost (a function of action and state) at each intermediate step. [https://en.wikipedia.org/wiki/Dynamic_programming Dynamic programming] is used to find the optimal policy and the utility at each <code>State</code> x <code>Action</code> x <code>Time</code> point.
 Decision Best_Action := [[ArgMax]](Sxa_utility, Action)
 Objective Sxa_utility :=
     [[Dynamic]](
         Sxa_utility[Time + 1][Action = Best_action[Time + 1]]
             [State = Transition] - Action_cost,
         Final_payout,
         reverse: true)
Notice the use of <code>[Time + 1]</code> rather than the <code>[Time - 1]</code> that is commonly used in forward [[Dynamic]] loops.
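For readers who want to trace the recurrence by hand, here is a Python sketch of the same backward induction. The state space, transition table, costs, and payouts below are invented for illustration; only the structure mirrors the Analytica definitions above:

```python
# Hypothetical problem data (not from the example model).
n_states, n_actions, n_times = 3, 2, 4

# transition[s][a]: next state reached from state s under action a
transition = [[1, 2],
              [0, 2],
              [2, 1]]

# action_cost[s][a]: cost of taking action a in state s
action_cost = [[1.0, 2.0],
               [0.5, 1.5],
               [2.0, 0.0]]

# final_payout[s]: payout for ending in state s
final_payout = [0.0, 5.0, 10.0]

# utility[t][s][a] plays the role of Sxa_utility; best_action[t][s] of Best_Action.
utility = [[[0.0] * n_actions for _ in range(n_states)] for _ in range(n_times)]
best_action = [[0] * n_states for _ in range(n_times)]

# Last time slice: every action receives the final payout of its state.
for s in range(n_states):
    for a in range(n_actions):
        utility[-1][s][a] = final_payout[s]
    best_action[-1][s] = max(range(n_actions), key=lambda a: utility[-1][s][a])

# Reverse recurrence: utility now = next slice's utility at the successor
# state under its best action (the [Time + 1] lookups), minus today's cost.
for t in range(n_times - 2, -1, -1):
    for s in range(n_states):
        for a in range(n_actions):
            s_next = transition[s][a]
            a_next = best_action[t + 1][s_next]
            utility[t][s][a] = utility[t + 1][s_next][a_next] - action_cost[s][a]
        best_action[t][s] = max(range(n_actions), key=lambda a: utility[t][s][a])
```

The explicit backward loop makes visible what <code>reverse: true</code> does implicitly: the <code>Time</code> dimension is filled from the last slice toward the first, with each slice reading from the one after it.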
== See Also ==
* [[Dynamic]]
<footer>Dynamic on non-Time Indexes / {{PAGENAME}} / Integration with data and applications</footer>
Latest revision as of 23:24, 7 August 2017