Example Models and Libraries - Table

|-
 
|[[media:Solar Panel Analysis.ana|Solar Panel Analysis.ana]]
|renewable energy, photovoltaics, tax credits
|<div style="text-align: left;">This model explores whether it would be cost-effective to install solar panels on the roof of a house in San Jose, California.</div>
 
 
|net present value, internal rate of return, agile modeling
|[[Solar Panel Analysis]]
|-
 
|[[Media:Items_within_budget.ana|Items within budget.ana]]
|
|<div style="text-align: left;">Given a set of items, each with a priority and a cost, the Items_within_budget function selects the highest-priority items that fit within a fixed budget.</div>
 
|
|[[Items within Budget function]]
|-
 
|[[Media:Grant_exclusion.ANA|Grant exclusion.ana]]
|business analysis
|<div style="text-align: left;">This model tests a hypothesis about the distribution of an attribute of the marginal rejectee of a grant program, given the relevance of that attribute to award of the grant.</div>
 
|
|[[Grant Exclusion Model]]
|-
 
|[[Media:Project Priorities 5 0.ana|Project Priorities 5 0.ana]]
|business models
|<div style="text-align: left;">Evaluates a set of R&D projects with uncertain R&D costs and revenues, uses multiattribute analysis to compare projects, and generates the best portfolio given an R&D budget.</div>
 
|cost analysis, net present value (NPV), uncertainty analysis
|[[Project Planner]]
|-
 
|[[Media:Steel and aluminum tariff model.ana|Steel and aluminum tariff model.ana]]
|
|<div style="text-align: left;">Estimates the net impact of the 2018 import tariffs on steel and aluminum on the US trade deficit.</div>
 
|
|[[Steel and Aluminum import tariff impact on US trade deficit]]
|-
 
|[[Media: Tax bracket interpolation 2021.ana|Tax bracket interpolation 2021.ana]]
|
|<div style="text-align: left;">Computes the amount of tax due from taxable income for a 2017 US Federal tax return. To match the IRS's numbers exactly, it is necessary to process tax brackets correctly as well as implement a complex mix of rounding rules that reproduce the 12 pages of table lookups from the Form 1040 instructions. This model is showcased in a blog article, [http://Lumina.com/blog/how-to-simplify-the-irs-tax-tables How to simplify the IRS Tax Tables].</div>
 
|
|[[Tax bracket interpolation]]
|-
 
|[[Media:Feasible_Sampler.ana|Feasible Sampler.ana]]
|feasibility
|<div style="text-align: left;">You have a bunch of chance variables, each with a probability distribution. Their joint sample, however, contains some combinations of points that are (for one reason or another) physically impossible. We'll call those infeasible points. You'd like to eliminate those points from the sample and keep only the feasible points. <br /><br />
This module implements a button that will sample a collection of chance variables, then reset the sample size and keep only those sample points that are "feasible". <br /><br />
Obviously, this approach will work best when most of your samples are feasible. If you can handle the "infeasible" points in your model directly, by conditioning certain chance variables on others, that is far preferable. But there are some cases where this solution (although a bit of a kludge) is more convenient. <br /><br />
The instructions for how to use this are in the module description field.</div>
 
|statistics, sampling, importance sampling, Monte Carlo simulation
|[[Sampling from only feasible points]]
|-
 
|[[media:Cross-validation example.ana|Cross-validation example.ana]]
|
|<div style="text-align: left;">When fitting a function to data, if you have too many free parameters relative to the number of points in your data set, you may "overfit" the data.  When this happens, the fit to your training data may be very good, but the fit to new data points (beyond those used for training) may be very poor.<br /><br />
Cross-validation is a common technique to deal with this problem: We set aside a fraction of the available data as a cross-validation set.  Then we begin by fitting very simple functions to the data (with few free parameters), successively increasing the number of free parameters, and seeing how the predictive performance changes on the cross-validation set.  It is typical to see improvement on the cross-validation set for a while, followed by a deterioration of predictive performance once overfitting starts occurring.  <br /><br />
This example model successively fits a non-linear kernel function to the residual error, and uses cross-validation to determine how many kernel functions should be used.<br /><br />
Requires Analytica Optimizer: The kernel fitting function (Kern_Fit) uses [[NlpDefine]].</div>
 
|cross-validation, overfitting, non-linear kernel functions
|[[Cross-Validation / Fitting Kernel Functions to Data]]
|-
 
|[[media:Bootstrapping.ana|Bootstrapping.ana]]
|
|<div style="text-align: left;">Bootstrapping is a technique from statistics for estimating the sampling error present in a statistical estimator.  The simplest version estimates sampling error by resampling the original data.  This model demonstrates how to do this in Analytica.</div>
 
|bootstrapping, sampling error, re-sampling
|[[Statistical Bootstrapping]]
|-
 
|[[media:Kernel_Density_Estimation.ana|Kernel Density Estimation.ana]]
|
|<div style="text-align: left;">This example demonstrates a very simple fixed-width kernel density estimator to estimate a "smooth" probability density.  The built-in PDF function in Analytica often has a choppy appearance due to the nature of histogramming -- it sets up a set of bins and counts how many points land in each bin.  A kernel density estimator smooths this out, producing a less choppy PDF plot.<br /><br />
This smoothing is built into [[Analytica 4.4]].  You can select [[Kernel Density Smoothing|smoothing]] from the [[Uncertainty Setup dialog]].</div>
 
|kernel density estimation, kernel density smoothing
|[[Smooth PDF plots using Kernel Density Estimation]]
|-
 
|[[media:Output and input columns.ana|Output and input columns.ana]]
|
|<div style="text-align: left;">Presents an input table to a user, where one column is populated with computed output data and the other column with checkboxes for the user to select.  Although the '''Output Data''' column isn't read-only, as would be desired, a [[Check Attribute]] has been configured to complain if the user tries to change values in that column.  The model that uses these inputs would ignore any changes made to data in the '''Output Data''' column.<br /><br />
Populating the '''Output Data''' column requires the user to press a button, which runs a button script to populate that column.  This button is presented on the top-level panel.  If you change the input value, the output data will change, and the button then needs to be pressed to refresh the output data column.</div>
 
|data analysis
|[[Output and Input Columns in Same Table]]
|-
 
|[[media:Platform 2018b.ana|Platform2018b.ana]]
|offshore platforms, oil and gas, stakeholders, rigs to reefs, decision support
|<div style="text-align: left;">Too many environmental issues cause bitter public controversy. The question of how to decommission California's 27 offshore oil platforms started out as a typical example. But remarkably, after careful analysis a single option, "[http://lumina.com/case-studies/energy-and-power/a-win-win-solution-for-californias-offshore-oil-rigs rigs to reefs]", obtained the support of almost all stakeholders, including oil companies and environmentalists. A law to enable this option was passed by the California State house almost unanimously, and signed by Governor Arnold Schwarzenegger.</div>
 
|decision analysis, multi-attribute, sensitivity analysis
|[[From Controversy to Consensus: California's offshore oil platforms]]
|-
 
|[[media:Comparing retirement account types.ana|Comparing retirement account types.ana]] or [[media:Comparing retirement account types without sensitivity.ana|Free 101 Compatible Version]]
|401(k), IRA, retirement account, decision analysis, uncertainty
|<div style="text-align: left;">Will you end up with a bigger nest egg at retirement with a 401(k), traditional IRA, Roth IRA or a normal non-tax-advantaged brokerage account? For example, comparing a Roth IRA to a normal brokerage, intermediate capital gains compound in the Roth, but eventually you pay taxes on those gains at your income tax rate at retirement, whereas in the brokerage you pay capital gains taxes on the gains, which is likely a lower tax rate. So does the compounding outweigh the tax rate difference? What effect do the higher account maintenance fees in a 401(k) account have? How sensitive are these conclusions to the various input estimates? The answers to all these questions depend on your own situation, and may differ for someone else. Explore these questions with this model.</div>
 
|[[MultiTable]]s, sensitivity analysis
|[[Retirement plan type comparison]]
|-
 
|[[media:Plane catching with UI 2020.ANA|Plane catching with UI 2020.ANA]]
|
|<div style="text-align: left;">A simple decision analysis model of a familiar decision: What time should I leave my home to catch an early morning plane departure?  I am uncertain about the time to drive to the airport, walk from parking to gate (including security), and time needed at the departure gate. It also illustrates the Expected Value of Including Uncertainty (EVIU) -- the value of considering uncertainty explicitly in your decision making compared to ignoring it and assuming that all uncertain quantities are fixed at the median estimate.<br /><br />
Details at [[Catching a plane example and EVIU]]. Includes downloadable model, slides, and video.</div>
 
|decision theory, decision analysis, uncertainty, Monte Carlo simulation, value of information, EVPI, EVIU
|[[Plane Catching Decision with Expected Value of Including Uncertainty]]
|-
 
|[[media:Marginal Analysis for Control of SO2 Emissions.ana|Marginal Analysis for Control of SO2 Emissions.ana]]
|environmental engineering
|<div style="text-align: left;">Acid rain in the eastern US and Canada is caused by sulfur dioxide emitted primarily by coal-burning electric-generating plants in the Midwestern U.S.  This model demonstrates a marginal analysis, a.k.a. benefit/cost analysis, to determine the policy alternative that leads to the most economically efficient level of cleanup.</div>
 
|cost-benefit analysis, marginal analysis
|[[Marginal Analysis for Control of SO2 emissions]]
|-
 
|[[Media:Donor_Presenter_Dashboard_II.ANA|Donor-Presenter Dashboard.ana]]
|
|<div style="text-align: left;">This model implements a continuous-time Markov chain in Analytica's discrete-time dynamic simulation environment.  It supports immigration to, and emigration from, every node.<br /><br />
It can be used by an arts organization to probabilistically forecast future audience evolution, in both the short and the long (steady state) term.  It also allows for uncertainty in the input parameters.</div>
 
|dynamic models, Markov processes
|[[Donor/Presenter Dashboard]]
|-
 
|[[media:Photosynthesis regulation.ANA|Photosynthesis Regulation.ana]] - main regulation pathways<br />[[media:Photosystem.ana | Photosystem.ana]] - rough sketch of genetic regulation
|photosynthesis
|<div style="text-align: left;">A model of how photosynthesis is regulated inside a cyanobacterium.  As light exposure varies over time (and you can experiment with various light intensity waveforms), it simulates the concentration levels of key transport molecules along the chain, through the PSII complex, plasto-quinone pool, PSI complex, down to metabolic oxidation.  The dynamic response to light levels, or changes in light levels, over time becomes evident, and the impact of changes to metabolic demand can also be observed.  In the graph of fluorescence above, we can see an indicator of how much energy is being absorbed, in three different cases (different light intensities).  In the two higher intensity cases, photoinhibition is observed -- a protective mechanism of the cell that engages when more energy is coming in than can be utilized by the cell.  Excess incoming energy, in the absence of photoinhibition, causes damage, particularly to the PSII complex.<br /><br />
This model uses node shapes for a different purpose than is normally seen in decision analysis models.  In this model, ovals, instead of depicting chance variables, depict chemical reactions, where the value depicts the reaction rate, and rounded rectangles depict chemical concentrations.<br /><br />
Two models are attached.  The first is a bit cleaner, and focused on the core transport chain, as described above.  The second is less developed, but is focused more on genetic regulation processes.</div>
 
|dynamic models
|[[Regulation of Photosynthesis]]
|-
 
|[[media:Time-series-reindexing.ana|Time-series-reindexing.ana]]
|
|<div style="text-align: left;">This model contains some examples of time-series re-indexing.  It is intended to demonstrate some of these basic techniques.<br /><br />
In this example, actual measurements were collected at non-uniform time increments.  Before analyzing these, we map them to a uniformly spaced time index (<code>Week</code>), occurring on Monday of each week.  The mapping is done using interpolation.  The evenly-spaced data is then used to forecast future behavior.  We first forecast over an index containing only future time points (<code>Future_weeks</code>), using a log-normal process model based on the historical weekly change.  We then combine the historical data with the forecast on a common index (<code>Week</code>).  A prob-bands graph of the weekly_data result shows the range of uncertainty projected by the process model (you'll notice the uncertainty exists only for future forecasted values, not historical ones).</div>
 
|dynamic models, forecasting, time-series re-indexing
|[[Time-series re-indexing]]
|-
 
|[[Media:PostCompression.ana|Post Compression Model]]
|
|<div style="text-align: left;">A calculator for computing the maximum load that can be handled by a Douglas Fir - Larch post of a given size, grade, and composition in a construction setting.</div>
 
|
|[[Timber Post Compression Load Capacity]]
|-
 
|[[media:Compression_Post_Load_Capacity.ana|Compression Post Load Capacity.ana]]
|
|<div style="text-align: left;">Computes the load that a Douglas-Fir Larch post can support in compression.  Works for different timber types and grades and post sizes.</div>
 
|compression analysis
|[[Compression Post Load Calculator]]
|-
 
|[[media:Daylighting analyzer.ana|Daylighting analyzer.ana]]
|engineering
|<div style="text-align: left;">A demonstration showing how to analyze lifecycle costs and savings from daylighting options in building design.<br /><br />
Analysis based on the Nomograph Cost/Benefit Tool for Daylighting, adapted from S.E. Selkowitz and M. Gabel. 1984. "LBL Daylighting Nomographs," LBL-13534, Lawrence Berkeley Laboratory, Berkeley CA, 94704. (510) 486-6845.</div>
 
|cost-benefit analysis
|[[Daylighting Options in Building Design]]
|-
 
|[[Media:California_Power_Plants.ANA|California Power Plants.ana ]]
|power plants
|<div style="text-align: left;">An example showing how to use Choice menus and Checkboxes inside an Edit table. It also shows how to use the Cell default attribute to supply default values (including Choice menus and Checkboxes with default selections), specified in "Default Plant Data", when the user creates a new row in the Edit table.  The model demonstrates the use of [[Choice|choice pulldowns]] in edit tables, and is created during a mini-tutorial on [[Inserting Choice Controls in Edit Table Cells]] elsewhere on this Wiki.</div>
 
|edit table, choice menu, pulldown menu, checkbox
|[[California Power Plants]]
|-
 
|Requires Analytica Optimizer<br />[[media:Electrical Transmission.ana|Electrical Transmission.ana]]
|electrical engineering, power generation and transmission
|<div style="text-align: left;">This model of an electrical network minimizes total cost of generation and transmission.  Each node in the network has power generators and consumers (demand).  Nodes are connected by transmission links. Each link has a maximum capacity in Watts and an admittance (the real part of impedance is assumed to be zero).  Each generator has a min and max power and a marginal cost in $/KWh.  The model uses a linear program to determine how much power each generator should produce so as to minimize total cost of generation and transmission, while satisfying demand and remaining within link constraints.</div>
 
|
 
|
 
|[[Electrical Generation and Transmission]]
 
|[[Electrical Generation and Transmission]]
Line 220: Line 219:
 
|[[media:Time of use pricing.ana|Time of use pricing.ana]] & [[media:MECOLS0620.xlsx|MECOLS0620.xlsx]]<br />(both files needed)
 
|[[media:Time of use pricing.ana|Time of use pricing.ana]] & [[media:MECOLS0620.xlsx|MECOLS0620.xlsx]]<br />(both files needed)
 
|reading from spreadsheets, time-of-use pricing, electricity pricing
 
|reading from spreadsheets, time-of-use pricing, electricity pricing
|Electricity demand and generation is not constant, varying by time of day and season. For example, solar panels generate only when the sun is out, and demand drops in the wee morning hours when most people are sleeping. Time-of-use pricing is a rate tariff model used by utility companies that charges more during times when demand tends to exceed supply. This model imports actual usage data from a spreadsheet obtained from [https://www9.nationalgridus.com/energysupply/load_estimate.asp NationalGridUS.com] of historic average customer usage, uses that to project average future demand, and then calculates the time-of-use component of PG&E's [https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_SCHEDS_E-TOU-C.pdf TOU-C] and [https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_SCHEDS_E-TOU-D.pdf TOU-D] tariffs. (Note: The historical data came from Massachusetts and the rate plan is from California, but both are used only as examples.)  Developed during a User Group Webinar on 30-Sep-2020, which you can watch as well to see it built.<br />'''Video''': [http://webinararchive.analytica.com/2020-Sep-30%20Time%20of%20use%20pricing.mp4 Time of use pricing.mp4]
+
|<div style="text-align: left;">Electricity demand and generation is not constant, varying by time of day and season. For example, solar panels generate only when the sun is out, and demand drops in the wee morning hours when most people are sleeping. Time-of-use pricing is a rate tariff model used by utility companies that charges more during times when demand tends to exceed supply. This model imports actual usage data from a spreadsheet obtained from [https://www9.nationalgridus.com/energysupply/load_estimate.asp NationalGridUS.com] of historic average customer usage, uses that to project average future demand, and then calculates the time-of-use component of PG&E's [https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_SCHEDS_E-TOU-C.pdf TOU-C] and [https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_SCHEDS_E-TOU-D.pdf TOU-D] tariffs. (Note: The historical data came from Massachusetts and the rate plan is from California, but both are used only as examples.)  Developed during a User Group Webinar on 30-Sep-2020, which you can watch as well to see it built.<br />'''Video''': [http://webinararchive.analytica.com/2020-Sep-30%20Time%20of%20use%20pricing.mp4 Time of use pricing.mp4]</div>
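The time-of-use component of such a bill can be sketched as: apply a higher rate to usage falling inside the peak window. The peak hours and rates below are hypothetical placeholders, not PG&E's actual TOU-C or TOU-D values:

```python
import numpy as np

hours = np.arange(24)                 # hour of day
usage_kwh = np.full(24, 1.0)          # flat 1 kWh per hour load (illustrative)
peak = (hours >= 16) & (hours < 21)   # 4pm-9pm peak window (an assumption)
rate = np.where(peak, 0.40, 0.25)     # $/kWh; hypothetical peak/off-peak rates
daily_cost = float((usage_kwh * rate).sum())
# 5 peak hours at $0.40 + 19 off-peak hours at $0.25 = $6.75
```

The same vectorized pattern extends to seasonal rates by adding a month dimension to the `rate` array.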
 
|
 
|
 
|[[Time of Use pricing]]
 
|[[Time of Use pricing]]
Line 228: Line 227:
 
|[[media:color_map.ana|Color map.ana]]
 
|[[media:color_map.ana|Color map.ana]]
 
|
 
|
|A model which highlights [[Cell Format Expression|Cell Formatting]] and [[Computed cell formats|Computed Cell Formats]]. Model result is a 'color map' wherein the cell fill color is computed based on three input variables (R, G, and B), the computed color is displayed in hexadecimal, and the font color of the hexadecimal color is determined by the cell fill color.
+
|<div style="text-align: left;">A model which highlights [[Cell Format Expression|Cell Formatting]] and [[Computed cell formats|Computed Cell Formats]]. Model result is a 'color map' wherein the cell fill color is computed based on three input variables (R, G, and B), the computed color is displayed in hexadecimal, and the font color of the hexadecimal color is determined by the cell fill color.</div>
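The two computations the model displays in each cell can be sketched as follows; the luminance heuristic for choosing the font color is a common convention and an assumption here, not necessarily the model's exact rule:

```python
def to_hex(r, g, b):
    """Hexadecimal display string for an RGB color (channels 0-255)."""
    return f"#{r:02X}{g:02X}{b:02X}"

def font_color(r, g, b):
    """Pick black or white text for readability against the fill color,
    using a perceived-luminance heuristic (an illustrative assumption)."""
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    return "#000000" if luminance > 128 else "#FFFFFF"
```

For example, `to_hex(255, 0, 128)` gives `"#FF0080"`, and a dark fill color yields white text.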
 
|computed cell formatting
 
|computed cell formatting
 
|[[Color Map]]
 
|[[Color Map]]
Line 236: Line 235:
 
|[[Media:World cup.ana|World cup.ana]]
 
|[[Media:World cup.ana|World cup.ana]]
 
|
 
|
|On July 15, 2018, France beat Croatia 4-2 in the final game of the World Cup to become world champions. But how much of that can be attributed to France being the better team, versus random chance? This model accompanies my blog article, [http://lumina.com/blog/world-cup-soccer.-how-much-does-randomness-determine-the-winner World Cup Soccer. How much does randomness determine the winner?], where I explore this question and use the example to demonstrate the [[Poisson|Poisson distribution]].
+
|<div style="text-align: left;">On July 15, 2018, France beat Croatia 4-2 in the final game of the World Cup to become world champions. But how much of that can be attributed to France being the better team, versus random chance? This model accompanies my blog article, [http://lumina.com/blog/world-cup-soccer.-how-much-does-randomness-determine-the-winner World Cup Soccer. How much does randomness determine the winner?], where I explore this question and use the example to demonstrate the [[Poisson|Poisson distribution]].</div>
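The core of the analysis can be sketched by treating each team's goal count as an independent Poisson draw; the goal means here are illustrative, not the article's fitted values:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

def match_outcome_probs(lam_a, lam_b, max_goals=25):
    """Probability that team A wins, and that the match is drawn,
    when each team's goal count is an independent Poisson draw."""
    p_win = p_draw = 0.0
    for a in range(max_goals):
        for b in range(max_goals):
            p = poisson_pmf(a, lam_a) * poisson_pmf(b, lam_b)
            if a > b:
                p_win += p
            elif a == b:
                p_draw += p
    return p_win, p_draw
```

Even with equal scoring rates, each side wins well under half the time, so a "worse" team winning a single match needs no special explanation.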
 
|
 
|
 
|[[2018 World Cup Soccer final]]
 
|[[2018 World Cup Soccer final]]
Line 244: Line 243:
 
|[http://AnalyticaOnline.com/Lonnie/resnet18.zip resnet18.zip]
 
|[http://AnalyticaOnline.com/Lonnie/resnet18.zip resnet18.zip]
 
|residual network, deep residual learning, image recognition
 
|residual network, deep residual learning, image recognition
|Show it an image, and it tries to recognize what it is an image of, classifying it among 1000 possible categories. It uses an 18-layer residual network. This model is described and demonstrated in a video in the blog article [http://lumina.com/blog/an-analytica-model-that-recognizes-images An Analytica model that recognizes images].
+
|<div style="text-align: left;">Show it an image, and it tries to recognize what it is an image of, classifying it among 1000 possible categories. It uses an 18-layer residual network. This model is described and demonstrated in a video in the blog article [http://lumina.com/blog/an-analytica-model-that-recognizes-images An Analytica model that recognizes images].</div>
 
|
 
|
 
|[[Image recognition]]
 
|[[Image recognition]]
Line 252: Line 251:
 
|[[Media:Month to quarter.ana|Month to quarter.ana]]
 
|[[Media:Month to quarter.ana|Month to quarter.ana]]
 
|
 
|
|The model shows how to transform an array from a finer-grain index (e.g., Month) onto a coarser index (e.g., Quarter).  We generally refer to this as [[Aggregate|aggregation]].  The model illustrates the direct use of [[Aggregate]], as well as a method to do this used before Aggregate was added to Analytica in release 4.2.
+
|<div style="text-align: left;">The model shows how to transform an array from a finer-grain index (e.g., Month) onto a coarser index (e.g., Quarter).  We generally refer to this as [[Aggregate|aggregation]].  The model illustrates the direct use of [[Aggregate]], as well as a method to do this used before Aggregate was added to Analytica in release 4.2.</div>
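The month-to-quarter sum-aggregation the model performs can be sketched in a few lines; this is an illustrative numpy version, not Analytica's [[Aggregate]] function itself:

```python
import numpy as np

monthly = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8])  # a value per month
month = np.arange(1, 13)            # finer-grain index: Month
quarter_of = (month - 1) // 3 + 1   # the mapping from Month to Quarter
# sum the monthly values that map onto each quarter
quarterly = np.array([monthly[quarter_of == q].sum() for q in range(1, 5)])
# quarterly -> [8, 15, 13, 16]
```

Replacing `sum` with `mean` or `max` gives the other common aggregation modes.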
 
|aggregation, level of detail, days, weeks, months, quarters, years
 
|aggregation, level of detail, days, weeks, months, quarters, years
 
|[[Transforming Dimensions by transform matrix, month to quarter]]
 
|[[Transforming Dimensions by transform matrix, month to quarter]]
Line 260: Line 259:
 
|[[Media:Convolution.ana|Convolution.ana]]
 
|[[Media:Convolution.ana|Convolution.ana]]
 
|
 
|
|Convolution is used mostly for signal and systems analysis. It is a way to combine two time series.  This model contains function Convolve(Y, Z, T, I), that computes the convolution of two time series.  The model contains several examples of convolved functions.<br /><br />
+
|<div style="text-align: left;">Convolution is used mostly for signal and systems analysis. It is a way to combine two time series.  This model contains function Convolve(Y, Z, T, I), that computes the convolution of two time series.  The model contains several examples of convolved functions.<br /><br />
 
A time series is a set of points, <code>(Y, T)</code>, where <code>T</code> is the ascending X-axis, and the set of points is indexed by <code>I</code>. The values of <code>T</code> do not have to be equally spaced. The function treats <code>Y</code> and <code>Z</code> as being equal to 0 outside the range of <code>T</code>. The two time series here are the set of points <code>(Y, T)</code> and the set of points <code>(Z, T)</code>, where both sets of points are indexed by <code>I</code>.<br /><br />
 
A time series is a set of points, <code>(Y, T)</code>, where <code>T</code> is the ascending X-axis, and the set of points is indexed by <code>I</code>. The values of <code>T</code> do not have to be equally spaced. The function treats <code>Y</code> and <code>Z</code> as being equal to 0 outside the range of <code>T</code>. The two time series here are the set of points <code>(Y, T)</code> and the set of points <code>(Z, T)</code>, where both sets of points are indexed by <code>I</code>.<br /><br />
 
The mathematical definition of the convolution of two time series is the function given by:
 
The mathematical definition of the convolution of two time series is the function given by:
  
:<math>h(t) = \int y(u) z(t-u) du</math>
+
:<math>h(t) = \int y(u) z(t-u) du</math></div>
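The integral above can be approximated numerically for irregularly spaced samples; this is an illustrative sketch, not the model's own Convolve(Y, Z, T, I) function:

```python
import numpy as np

def convolve_series(y, z, t, t_out):
    """Approximate h(t) = integral of y(u)*z(t-u) du for two time series
    (y, t) and (z, t). The points of t need not be equally spaced, and
    y and z are treated as 0 outside the range of t."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    h = []
    for ti in t_out:
        # z(ti - u) sampled at the points u = t; zero outside the range of t
        z_shift = np.interp(ti - t, t, z, left=0.0, right=0.0)
        integrand = y * z_shift
        # trapezoidal rule over the (possibly irregular) grid t
        h.append(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t)))
    return np.array(h)
```

Convolving a unit rectangle on [0, 1] with itself yields the familiar triangle: the result rises linearly from 0 at t = 0 to 1 at t = 1.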
 
|signal analysis, systems analysis
 
|signal analysis, systems analysis
 
|[[Convolution]]
 
|[[Convolution]]
Line 272: Line 271:
 
|[[media:Dependency_Tracker.ANA | Dependency Tracker.ana]]
 
|[[media:Dependency_Tracker.ANA | Dependency Tracker.ana]]
 
|
 
|
|This module tracks dependencies through your model, updating the visual appearance of nodes so that you can quickly visualize the paths by which one variable influences another.  You can also use it to provide a visual indication of which nodes are downstream (or upstream) from an indicated variable.<br /><br />
+
|<div style="text-align: left;">This module tracks dependencies through your model, updating the visual appearance of nodes so that you can quickly visualize the paths by which one variable influences another.  You can also use it to provide a visual indication of which nodes are downstream (or upstream) from an indicated variable.<br /><br />
 
The module contains button scripts that change the bevel appearance of nodes in your model.  To see how Variable <code>X</code> influences Variable <code>Y</code>, the script will bevel the nodes for all variables that are influenced by <code>X</code> and influence <code>Y</code>.  Alternatively, you can bevel all nodes that are influenced by <code>X</code>, or you can bevel all nodes that influence <code>Y</code>.<br /><br />
 
The module contains button scripts that change the bevel appearance of nodes in your model.  To see how Variable <code>X</code> influences Variable <code>Y</code>, the script will bevel the nodes for all variables that are influenced by <code>X</code> and influence <code>Y</code>.  Alternatively, you can bevel all nodes that are influenced by <code>X</code>, or you can bevel all nodes that influence <code>Y</code>.<br /><br />
In the image above, the path from <code>dp_ex_2</code> through <code>dp_ex_4</code> has been highlighted using the bevel style of the nodes.  (The result of pressing the "Bevel all from Ancestor to Descendant" button).
+
In the image above, the path from <code>dp_ex_2</code> through <code>dp_ex_4</code> has been highlighted using the bevel style of the nodes.  (The result of pressing the "Bevel all from Ancestor to Descendant" button).</div>
 
|dependency analysis
 
|dependency analysis
 
|[[Dependency Tracker Module]]
 
|[[Dependency Tracker Module]]
Line 282: Line 281:
 
|[[media:French-English.ana|French-English.ana]]
 
|[[media:French-English.ana|French-English.ana]]
 
|multi-lingual models
 
|multi-lingual models
|Maintains a single influence diagram with Title and Description attributes in both English and French.  With the change of a pull-down, the influence diagram and all object descriptions are instantly reflected in the language of choice.<br /><br />
+
|<div style="text-align: left;">Maintains a single influence diagram with Title and Description attributes in both English and French.  With the change of a pull-down, the influence diagram and all object descriptions are instantly reflected in the language of choice.<br /><br />
If you change a title or description while viewing English, and then change to French, the change you made will become the English-language version of the description.  Similarly if you make a change while viewing French.
+
If you change a title or description while viewing English, and then change to French, the change you made will become the English-language version of the description.  Similarly if you make a change while viewing French.</div>
 
|
 
|
 
|[[Multi-lingual Influence Diagram]]
 
|[[Multi-lingual Influence Diagram]]
Line 291: Line 290:
 
|[[media:Parsing XML example.ana|Parsing XML example.ana]]
 
|[[media:Parsing XML example.ana|Parsing XML example.ana]]
 
|data extraction, xml, DOM parsing
 
|data extraction, xml, DOM parsing
|Suppose you receive data in an XML format that you want to read into your model. This example demonstrates two methods for extracting data: using a full XML DOM parser, or using regular expressions. The first method fully parses the XML structure; the second simply finds the data of interest by matching patterns, which can be easier for very simple data structures (as is often the case).
+
|<div style="text-align: left;">Suppose you receive data in an XML format that you want to read into your model. This example demonstrates two methods for extracting data: using a full XML DOM parser, or using regular expressions. The first method fully parses the XML structure; the second simply finds the data of interest by matching patterns, which can be easier for very simple data structures (as is often the case).</div>
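The two approaches can be contrasted in a few lines; the XML snippet here is made up to stand in for the model's data file:

```python
import re
import xml.etree.ElementTree as ET

xml_text = "<readings><r t='1' v='3.5'/><r t='2' v='4.0'/></readings>"

# Method 1: full DOM-style parse of the document tree
values_dom = [float(r.get("v")) for r in ET.fromstring(xml_text).iter("r")]

# Method 2: regular-expression pattern match; simpler when the structure
# is flat and regular, but fragile if the format varies
values_re = [float(m) for m in re.findall(r"v='([\d.]+)'", xml_text)]
```

Both methods recover the same list of values for this simple structure; the DOM approach is the safer choice when attributes can be reordered or nested.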
 
|
 
|
 
|[[Extracting Data from an XML file]]
 
|[[Extracting Data from an XML file]]
Line 299: Line 298:
 
|[[media:Vector Math.ana|Vector Math.ana]]
 
|[[media:Vector Math.ana|Vector Math.ana]]
 
|
 
|
|Functions used for computing geospatial coordinates and distances. Includes:<br />
+
|<div style="text-align: left;">Functions used for computing geospatial coordinates and distances. Includes:<br />
 
* A cross product of vectors function
 
* A cross product of vectors function
 
* Functions to convert between spherical and Cartesian coordinates in 3-D
 
* Functions to convert between spherical and Cartesian coordinates in 3-D
 
* Functions to compute bearings from one latitude-longitude point to another
 
* Functions to compute bearings from one latitude-longitude point to another
 
* Functions for finding distance between two latitude-longitude points along the great circle.
 
* Functions for finding distance between two latitude-longitude points along the great circle.
* Functions for finding the intersection of two great circles
+
* Functions for finding the intersection of two great circles</div>
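The great-circle distance calculation can be sketched with the haversine formula, one standard formulation; the library's own functions may use an equivalent vector-based method:

```python
from math import radians, sin, cos, asin, sqrt, pi

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two latitude-longitude points,
    via the haversine formula on a spherical Earth of the given radius."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))
```

A quarter of the equator, from (0, 0) to (0, 90), comes out to radius times pi/2, roughly 10,008 km.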
 
|geospatial analysis, GIS, vector analysis
 
|geospatial analysis, GIS, vector analysis
 
|[[Vector Math]]
 
|[[Vector Math]]
Line 312: Line 311:
 
|[[media:Total Allowable Removal model with Optimizer.ana | Total Allowable w Optimizer.ana]] or<br />[[media:Total Allowable Removal model w StepInterp.ana|Total Allowable w StepInterp.ana]] for those without Optimizer
 
|[[media:Total Allowable Removal model with Optimizer.ana | Total Allowable w Optimizer.ana]] or<br />[[media:Total Allowable Removal model w StepInterp.ana|Total Allowable w StepInterp.ana]] for those without Optimizer
 
|
 
|
|The problem applies to any population of fish or animal whose dynamics are poorly known but can be summarized in a simple model:<br /><br />
+
|<div style="text-align: left;">The problem applies to any population of fish or animal whose dynamics are poorly known but can be summarized in a simple model:<br /><br />
 
:<code>N_(t+1) = N_t*Lambda - landed catch*(1 + loss rate)</code>
 
:<code>N_(t+1) = N_t*Lambda - landed catch*(1 + loss rate)</code>
  
 
where «N_t» is the population size (number of individuals) at time ''t'', «N_t+1» is the population size at time ''t + 1'', «Lambda» is the intrinsic rate of increase and the «loss rate» is the percentage of fish or animals killed but not retrieved relative to the «landed catch», or catch secured.<br /><br />
 
where «N_t» is the population size (number of individuals) at time ''t'', «N_t+1» is the population size at time ''t + 1'', «Lambda» is the intrinsic rate of increase and the «loss rate» is the percentage of fish or animals killed but not retrieved relative to the «landed catch», or catch secured.<br /><br />
 
The question here is to determine how many fish or animals can be caught (landed) annually so that the probability of the population declining X%  in Y years (decline threshold) is less than Z% (risk tolerance).  <br /><br />
 
The question here is to determine how many fish or animals can be caught (landed) annually so that the probability of the population declining X%  in Y years (decline threshold) is less than Z% (risk tolerance).  <br /><br />
Two models are available for download.  One uses the Optimizer ([[NlpDefine]]) to find the maximum landed catch at the risk tolerance level for the given decline threshold. The other (for those using a version of Analytica without Optimizer) uses [[StepInterp]] in an iterative way to get that maximum landed catch.
+
Two models are available for download.  One uses the Optimizer ([[NlpDefine]]) to find the maximum landed catch at the risk tolerance level for the given decline threshold. The other (for those using a version of Analytica without Optimizer) uses [[StepInterp]] in an iterative way to get that maximum landed catch.</div>
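The decline-probability question can be sketched by Monte Carlo simulation of the recurrence above. Drawing «Lambda» from a normal distribution each year is an illustrative assumption here, not the model's calibrated uncertainty treatment:

```python
import numpy as np

rng = np.random.default_rng(0)

def decline_probability(n0, landed_catch, years=10, decline=0.30,
                        lambda_mean=1.04, lambda_sd=0.05,
                        loss_rate=0.20, n_trials=20_000):
    """Monte Carlo estimate of the probability that the population ever
    falls by `decline` within `years`, under
        N_(t+1) = N_t*Lambda - landed_catch*(1 + loss_rate).
    All parameter values are illustrative placeholders."""
    n = np.full(n_trials, float(n0))
    declined = np.zeros(n_trials, dtype=bool)
    for _ in range(years):
        lam = rng.normal(lambda_mean, lambda_sd, n_trials)
        n = np.maximum(n * lam - landed_catch * (1 + loss_rate), 0.0)
        declined |= n <= n0 * (1 - decline)
    return declined.mean()
```

Sweeping `landed_catch` upward and finding the largest value whose decline probability stays below the risk tolerance Z mirrors what the Optimizer and StepInterp versions each do.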
 
|population analysis, dynamic models, optimization analysis
 
|population analysis, dynamic models, optimization analysis
 
|[[Total Allowable Harvest]]
 
|[[Total Allowable Harvest]]
Line 325: Line 324:
 
|[[media:Cereal Formulation1.ana|Cereal Formulation1.ana]]
 
|[[media:Cereal Formulation1.ana|Cereal Formulation1.ana]]
 
|product formulation, cereal formulation
 
|product formulation, cereal formulation
|A cereal formulation model<br /><br />
+
|<div style="text-align: left;">A cereal formulation model<br /><br />
 
A discrete mixed integer model that chooses product formulations to minimize total ingredient costs.  This could be an NSP but it uses two methods to linearize:
 
A discrete mixed integer model that chooses product formulations to minimize total ingredient costs.  This could be an NSP but it uses two methods to linearize:
 
1) Decision variable is constructed as a constrained Boolean array
 
1) Decision variable is constructed as a constrained Boolean array
2) Prices are defined as piecewise linear curves
+
2) Prices are defined as piecewise linear curves</div>
 
|
 
|
 
|[[Linearizing a discrete NSP]]
 
|[[Linearizing a discrete NSP]]
Line 336: Line 335:
 
|[[media:Neural-Network.ana|Neural Network.ana]]
 
|[[media:Neural-Network.ana|Neural Network.ana]]
 
|feed-forward neural networks
 
|feed-forward neural networks
|A feed-forward neural network can be trained (fit to training data) using the Analytica Optimizer.  This is essentially an example of non-linear regression.  This model contains four sample data sets, and is set up to train a 2-layer feedforward sigmoid network to "learn" the concept represented by the data set(s), and then test how well it does across examples not appearing in the training set.
+
|<div style="text-align: left;">A feed-forward neural network can be trained (fit to training data) using the Analytica Optimizer.  This is essentially an example of non-linear regression.  This model contains four sample data sets, and is set up to train a 2-layer feedforward sigmoid network to "learn" the concept represented by the data set(s), and then test how well it does across examples not appearing in the training set.
  
Developed during the Analytica User Group Webinar of 21-Apr-2011 -- see the [[Analytica_User_Group/Past_Topics#Neural_Networks|webinar recording]].
+
Developed during the Analytica User Group Webinar of 21-Apr-2011 -- see the [[Analytica_User_Group/Past_Topics#Neural_Networks|webinar recording]].</div>
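The network being fit can be sketched as a plain forward pass; training then amounts to searching over the weights to minimize prediction error on the training set, which is why a general-purpose optimizer can do the fitting. The parameter search itself is omitted here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(x, w1, b1, w2, b2):
    """2-layer feedforward sigmoid network: inputs x (rows are examples)
    pass through a hidden sigmoid layer, then a sigmoid output layer.
    Shapes: x (n, d), w1 (d, h), b1 (h,), w2 (h, k), b2 (k,)."""
    hidden = sigmoid(x @ w1 + b1)   # hidden layer activations
    return sigmoid(hidden @ w2 + b2)
```

With all weights at zero every unit outputs sigmoid(0) = 0.5, a useful sanity check before handing the weights to an optimizer.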
 
|optimization analysis
 
|optimization analysis
 
|[[Neural Network]]
 
|[[Neural Network]]
Line 346: Line 345:
 
|[[media:Earthquake expenses.ana|Earthquake expenses.ana]]
 
|[[media:Earthquake expenses.ana|Earthquake expenses.ana]]
 
|
 
|
|An example of risk analysis with time-dependence and costs shifted over time.<br /><br />
+
|<div style="text-align: left;">An example of risk analysis with time-dependence and costs shifted over time.<br /><br />
 
Certain organizations (insurance companies, large companies, governments) incur expenses following earthquakes.  This simplified demo model can be used to answer questions such as:<br />
 
Certain organizations (insurance companies, large companies, governments) incur expenses following earthquakes.  This simplified demo model can be used to answer questions such as:<br />
 
* What is the probability of more than one quake in a specific 10-year period?
 
* What is the probability of more than one quake in a specific 10-year period?
Line 354: Line 353:
 
* Earthquakes are Poisson events with mean rate of once every 10 years.
 
* Earthquakes are Poisson events with mean rate of once every 10 years.
 
* Damage caused by such a quake is lognormally distributed, with mean $10M and stddev of $6M.
 
* Damage caused by such a quake is lognormally distributed, with mean $10M and stddev of $6M.
* Cost of damage gets incurred over the period of a year from the date of the quake as equipment is replaced and buildings are repaired over time:  20% in 1st quarter after quake, 50% in 2nd quarter, 20% in 3rd quarter, 10% in 4th quarter.
+
* Cost of damage gets incurred over the period of a year from the date of the quake as equipment is replaced and buildings are repaired over time:  20% in 1st quarter after quake, 50% in 2nd quarter, 20% in 3rd quarter, 10% in 4th quarter.</div>
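Under the stated Poisson assumption, the first question above has a closed-form answer worth checking the model against:

```python
from math import exp

rate_per_year = 1 / 10    # mean rate: one quake per 10 years (as stated above)
lam = rate_per_year * 10  # expected number of quakes in a 10-year window

# P(N > 1) = 1 - P(N = 0) - P(N = 1) for a Poisson count with mean lam
p_more_than_one = 1 - exp(-lam) - lam * exp(-lam)
# about 0.264: roughly a 26% chance of more than one quake in the decade
```

The simulated frequency in the model should converge to this value as the sample size grows.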
 
|risk analysis, cost analysis
 
|risk analysis, cost analysis
 
|[[Earthquake Expenses]]
 
|[[Earthquake Expenses]]
Line 361: Line 360:
 
|'''Best used with Analytica Optimizer'''<br />[[media:Loan policy selection.ANA|Loan policy selection.ana]]
 
|'''Best used with Analytica Optimizer'''<br />[[media:Loan policy selection.ANA|Loan policy selection.ana]]
 
|creditworthiness, credit rating, default risk
 
|creditworthiness, credit rating, default risk
|A lender has a large pool of money to loan, but needs to decide what credit rating threshold to require and what interest rate (above prime) to charge.  The optimal value is determined by market forces (competing lenders) and by the probability that the borrower defaults on the loan, which is a function of the economy and borrower's credit rating.  The model can be used without the Analytica optimizer, in which case you can explore the decision space manually or use a parametric analysis to find the near optimal solution.  Those with Analytica Optimizer can find the optimal solution (more quickly) using an [[NlpDefine|NLP]] search.
+
|<div style="text-align: left;">A lender has a large pool of money to loan, but needs to decide what credit rating threshold to require and what interest rate (above prime) to charge.  The optimal value is determined by market forces (competing lenders) and by the probability that the borrower defaults on the loan, which is a function of the economy and borrower's credit rating.  The model can be used without the Analytica optimizer, in which case you can explore the decision space manually or use a parametric analysis to find the near optimal solution.  Those with Analytica Optimizer can find the optimal solution (more quickly) using an [[NlpDefine|NLP]] search.</div>
 
|risk analysis
 
|risk analysis
 
|[[Loan Policy Selection]]
 
|[[Loan Policy Selection]]
Line 369: Line 368:
 
|[[media:Hubbard and Seiersen cyberrisk.ana|Hubbard_and_Seiersen_cyberrisk.ana]]
 
|[[media:Hubbard and Seiersen cyberrisk.ana|Hubbard_and_Seiersen_cyberrisk.ana]]
 
|cybersecurity risk
 
|cybersecurity risk
|The model simulates loss exceedance curves for a set of cybersecurity events, the likelihood and probabilistic monetary impact of which have been characterized by system experts. The goal of the model is to assess the impact of mitigation measures, by comparing the residual risk curve to the inherent risk curve (defined as risk without any mitigation measures) and to the risk tolerance curve. This is a translation of a model built by Douglas Hubbard and Richard Seiersen which they describe in their book [https://www.howtomeasureanything.com/cybersecurity/about-the-book/ How to Measure Anything in Cybersecurity Risk], and which they make available [https://www.howtomeasureanything.com/cybersecurity/ here].
+
|<div style="text-align: left;">The model simulates loss exceedance curves for a set of cybersecurity events, the likelihood and probabilistic monetary impact of which have been characterized by system experts. The goal of the model is to assess the impact of mitigation measures, by comparing the residual risk curve to the inherent risk curve (defined as risk without any mitigation measures) and to the risk tolerance curve. This is a translation of a model built by Douglas Hubbard and Richard Seiersen which they describe in their book [https://www.howtomeasureanything.com/cybersecurity/about-the-book/ How to Measure Anything in Cybersecurity Risk], and which they make available [https://www.howtomeasureanything.com/cybersecurity/ here].</div>
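The loss exceedance curve simulation can be sketched as follows. Each event occurs with its annual probability and, if it occurs, costs a lognormally distributed amount; the parameterization by median and log-sd mirrors that style of input in spirit, and the specific numbers are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_exceedance(event_probs, medians, sigmas, n_trials=50_000):
    """Simulate total annual loss over independent events and return a
    function x -> P(total annual loss > x), i.e. one point lookup on the
    loss exceedance curve. Inputs are illustrative placeholders."""
    p = np.asarray(event_probs, dtype=float)
    mu = np.log(np.asarray(medians, dtype=float))   # lognormal median -> mu
    sig = np.asarray(sigmas, dtype=float)
    occurs = rng.random((n_trials, p.size)) < p     # which events happen
    impacts = rng.lognormal(mu, sig, (n_trials, p.size))
    total = (occurs * impacts).sum(axis=1)          # total loss per trial year
    return lambda x: float((total > x).mean())
```

Running the same simulation with and without mitigation-adjusted probabilities gives the residual and inherent risk curves to compare against the tolerance curve.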
 
|loss exceedance curve, simulation
 
|loss exceedance curve, simulation
 
|[[Inherent and Residual Risk Simulation]]
 
|[[Inherent and Residual Risk Simulation]]
Line 377: Line 376:
 
|[[media:Red State Blue State plot.ana]]
 
|[[media:Red State Blue State plot.ana]]
 
|map, states
 
|map, states
|This example contains the shape outlines for each of the 50 US states, along with a graph that uses color to depict something that varies by state (historical political party leaning). You may find the shape data useful for your own plots. In addition, it demonstrates the polygon fill feature that is new in [[Analytica 5.2]].
+
|<div style="text-align: left;">This example contains the shape outlines for each of the 50 US states, along with a graph that uses color to depict something that varies by state (historical political party leaning). You may find the shape data useful for your own plots. In addition, it demonstrates the polygon fill feature that is new in [[Analytica 5.2]].</div>
 
|graphing
 
|graphing
 
|[[Red or blue state]]
 
|[[Red or blue state]]
Line 385: Line 384:
 
|[[media:COVID Model 2020--03-25.ana|COVID Model 2020--03-25.ana]]
 
|[[media:COVID Model 2020--03-25.ana|COVID Model 2020--03-25.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic
 
|covid, covid-19, coronavirus, corona, epidemic
|A systems dynamics style SICR model of the COVID-19 outbreak within the state of Colorado. It simulates the progression of the outbreak into the future, examining the expected demand for ventilators (compared to the number available), forecasts the number of sick and the number of deaths, and also the risk reduction that a "lock down" achieves based on the date of the start of the lock down and the amount of reduction in social interaction. [https://lumidyne-test-site.webflow.io/the-energy-modeler/covid-19 A Lumidyne blog article] describes the model and conclusions drawn from it.
+
|<div style="text-align: left;">A systems dynamics style SICR model of the COVID-19 outbreak within the state of Colorado. It simulates the progression of the outbreak into the future, examining the expected demand for ventilators (compared to the number available), forecasts the number of sick and the number of deaths, and also the risk reduction that a "lock down" achieves based on the date of the start of the lock down and the amount of reduction in social interaction. [https://lumidyne-test-site.webflow.io/the-energy-modeler/covid-19 A Lumidyne blog article] describes the model and conclusions drawn from it.</div>
 
|
 
|
 
|[[COVID-19 State Simulator, a Systems Dynamics approach]]
 
|[[COVID-19 State Simulator, a Systems Dynamics approach]]
Line 393: Line 392:
 
|[[Media:Corona Markov.ana|Corona Markov.ana]]
 
|[[Media:Corona Markov.ana|Corona Markov.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic
 
|covid, covid-19, coronavirus, corona, epidemic
|Used to explore the progression of the COVID-19 coronavirus epidemic in the US, and to explore the effects of different levels of social isolation. It also includes sensitivity analyses. [https://analytica.com/how-social-isolation-impacts-covid-19-spread-in-us-a-markov-model-approach/ A blog article] showcases this model.
+
|<div style="text-align: left;">Used to explore the progression of the COVID-19 coronavirus epidemic in the US, and to explore the effects of different levels of social isolation. It also includes sensitivity analyses. [https://analytica.com/how-social-isolation-impacts-covid-19-spread-in-us-a-markov-model-approach/ A blog article] showcases this model.</div>
 
|
 
|
 
|[[How social isolation impacts COVID-19 spread in the US - A Markov model approach]]
 
|[[How social isolation impacts COVID-19 spread in the US - A Markov model approach]]
Line 401: Line 400:
 
|[[Media:Modelo Epidemiologoco para el Covid-19 con cuarentena.ana|Modelo Epidemiológoco para el Covid-19 con cuarentena.ana]]
 
|[[Media:Modelo Epidemiologoco para el Covid-19 con cuarentena.ana|Modelo Epidemiológoco para el Covid-19 con cuarentena.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic
 
|covid, covid-19, coronavirus, corona, epidemic
|A Markov chain model of the projected impact of the COVID-19 coronavirus disease in Perú, and of the impact of social isolation. See the article [https://www.linkedin.com/posts/jorgemuroarbulu_este-estudio-est%C3%A1-adaptado-a-la-realidad-activity-6650119971621912576-yRbh/ Aislamiento Social y Propagación COVID-19] for details.<br /><br />
+
|<div style="text-align: left;">A Markov chain model of the projected impact of the COVID-19 coronavirus disease in Perú, and of the impact of social isolation. See the article [https://www.linkedin.com/posts/jorgemuroarbulu_este-estudio-est%C3%A1-adaptado-a-la-realidad-activity-6650119971621912576-yRbh/ Aislamiento Social y Propagación COVID-19] for details.<br /><br />
An adaptation and extension of Robert D. Brown's Markov Model (the previous example) to the country of Perú, translated into Spanish.
+
An adaptation and extension of Robert D. Brown's Markov Model (the previous example) to the country of Perú, translated into Spanish.</div>
 
|
 
|
 
|[[Epidemiological model of COVID-19 for Perú, en español]]
 
|[[Epidemiological model of COVID-19 for Perú, en español]]
Line 410: Line 409:
 
|[[media:COVID-19_Triangle_Suppression.ana|COVID-19 Triangle Suppression.ana]]
 
|[[media:COVID-19_Triangle_Suppression.ana|COVID-19 Triangle Suppression.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic
 
|covid, covid-19, coronavirus, corona, epidemic
|A novel approach to modeling the progression of the COVID-19 pandemic in the US, and understanding the amount of time that is required for lock down measures when a suppression strategy is adopted. This model is featured in the blog article [https://lumina.com/forecast-update-us-deaths-from-covid-19-coronavirus-in-2020/ Suppression strategy and update forecast for US deaths from COVID-19 Coronavirus in 2020] on the Analytica blog.
+
|<div style="text-align: left;">A novel approach to modeling the progression of the COVID-19 pandemic in the US, and understanding the amount of time that is required for lock down measures when a suppression strategy is adopted. This model is featured in the blog article [https://lumina.com/forecast-update-us-deaths-from-covid-19-coronavirus-in-2020/ Suppression strategy and update forecast for US deaths from COVID-19 Coronavirus in 2020] on the Analytica blog.</div>
 
|
 
|
 
|[[A Triangle Suppression model of COVID-19]]
 
|[[A Triangle Suppression model of COVID-19]]
Line 418: Line 417:
 
|[[Media:Simple COVID-19.ana|Simple COVID-19.ana]]
 
|[[Media:Simple COVID-19.ana|Simple COVID-19.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic
 
|covid, covid-19, coronavirus, corona, epidemic
|Used to explore possible COVID-19 Coronavirus scenarios from the beginning of March, 2020 through the end of 2020 in the US. The US is modeled as a closed system, in which people are classified as being in one of the progressive stages: Susceptible, Incubating, Contagious or Recovered. Deaths occur only from the Contagious stage. There is no compartmentalization such as by age or geography.
+
|<div style="text-align: left;">Used to explore possible COVID-19 Coronavirus scenarios from the beginning of March, 2020 through the end of 2020 in the US. The US is modeled as a closed system, in which people are classified as being in one of the progressive stages: Susceptible, Incubating, Contagious or Recovered. Deaths occur only from the Contagious stage. There is no compartmentalization such as by age or geography.</div>
 
|
 
|
 
|[[COVID-19 Coronavirus SICR progression for 2020]]
 
|[[COVID-19 Coronavirus SICR progression for 2020]]
Line 426: Line 425:
 
|[[Media:US COVID-19 Data.ana|US COVID-19 Data.ana]]
 
|[[Media:US COVID-19 Data.ana|US COVID-19 Data.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic, death, infection
 
|covid, covid-19, coronavirus, corona, epidemic, death, infection
|The [https://github.com/nytimes/covid-19-data New York Times has made data available] to researchers on the number of reported cases and deaths in each US county, and state-wide, on each day of the pandemic. This model reads in these files and transforms them into a form that is convenient to work with in Analytica. <br /><br />
+
|<div style="text-align: left;">The [https://github.com/nytimes/covid-19-data New York Times has made data available] to researchers on the number of reported cases and deaths in each US county, and state-wide, on each day of the pandemic. This model reads in these files and transforms them into a form that is convenient to work with in Analytica. <br /><br />
'''Requires''': You'll need to install GIT and then clone the NYT repository. The Description of the model gives instructions for getting set up.  You'll also need to have the Analytica Enterprise or Optimizer edition.
+
'''Requires''': You'll need to install GIT and then clone the NYT repository. The Description of the model gives instructions for getting set up.  You'll also need to have the Analytica Enterprise or Optimizer edition.</div>
 
|
 
|
 
|[[COVID-19 Case and Death data for US states and counties]]
 
|[[COVID-19 Case and Death data for US states and counties]]
Line 435: Line 434:
 
|[[Media:Voluntary vs mandatory testing.ana|Voluntary vs mandatory testing.ana]]
 
|[[Media:Voluntary vs mandatory testing.ana|Voluntary vs mandatory testing.ana]]
 
|covid, covid-19, coronavirus, corona, epidemic
 
|covid, covid-19, coronavirus, corona, epidemic
|A Navy wants to compare two COVID-19 testing policies. In the first, all crew members must take a COVID-19 test before boarding a ship, and those with a positive test cannot board. In the second policy, testing is encouraged but voluntary -- each sailor has an option of being tested before boarding. This model computes the rate of infection among those allowed to board under the two scenarios, based on prevalence rates, test accuracies and voluntary testing rates. It also examines the probability of achieving zero infections on board, and the sensitivity of the results to input parameter estimates.  [https://analytica.com/voluntary-vs-mandatory-testing-for-naval-crew-selection This model is described in a blog posting].
+
|<div style="text-align: left;">A Navy wants to compare two COVID-19 testing policies. In the first, all crew members must take a COVID-19 test before boarding a ship, and those with a positive test cannot board. In the second policy, testing is encouraged but voluntary -- each sailor has an option of being tested before boarding. This model computes the rate of infection among those allowed to board under the two scenarios, based on prevalence rates, test accuracies and voluntary testing rates. It also examines the probability of achieving zero infections on board, and the sensitivity of the results to input parameter estimates.  [https://analytica.com/voluntary-vs-mandatory-testing-for-naval-crew-selection This model is described in a blog posting].</div>
 
|
 
|
 
|[[Mandatory vs Voluntary testing policies]]
 
|[[Mandatory vs Voluntary testing policies]]

Revision as of 00:43, 17 January 2023


This page lists example models and libraries. You can download them from here or (in some cases) link to a page with more details. Feel free to include and upload your own models and libraries.

Model/Library Download Domain Description Methods For more
Model Marginal abatement home heating.ana carbon price, energy efficiency, climate policy
This model, along with the accompanying blog article, shows how to set up a Marginal Abatement graph in Analytica.
graph methods, optimal allocation, budget constraint Marginal Abatement Graph
Model Solar Panel Analysis.ana renewable energy, photovoltaics, tax credits
This model explores whether it would be cost-effective to install solar panels on the roof of a house in San Jose, California.
net present value, internal rate of return, agile modeling Solar Panel Analysis
Model Items within budget.ana
Given a set of items, with a priority and a cost for each, the Items_within_budget function selects the highest-priority items that fit within a fixed budget.
Items within Budget function
Model Grant exclusion.ana business analysis
This model tests a hypothesis about the distribution of an attribute of the marginal rejectee of a grant program, given the relevance of that attribute to award of the grant.
Grant Exclusion Model
Model Project Priorities 5 0.ana business models
Evaluates a set of R&D projects, including uncertain R&D costs/revenues, uses multiattribute analysis to compare projects & generates the best portfolio given an R&D budget.
cost analysis, net present value (NPV), uncertainty analysis Project Planner
Model Steel and aluminum tariff model.ana
Estimate of the net impact of the 2018 import tariffs on steel and aluminum on the US trade deficit.
Steel and Aluminum import tariff impact on US trade deficit
Model Tax bracket interpolation 2021.ana
Computes the amount of tax due from taxable income for a 2017 US Federal tax return. To match the IRS's numbers exactly, it is necessary to process tax brackets correctly as well as implement a complex mix of rounding rules that reproduce the 12 pages of table lookups from the Form 1040 instructions. This model is showcased in a blog article, How to simplify the IRS Tax Tables.
Tax bracket interpolation
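The bracket arithmetic behind such a calculation is easy to sketch: each slice of income is taxed at its own bracket's rate. A minimal Python illustration with invented brackets, not the 2017 IRS schedule or the model's rounding rules:

```python
def tax_due(income, brackets):
    """Progressive tax: tax each slice of income at its bracket's rate.

    `brackets` is an ascending list of (upper_cap, rate) pairs; the
    caps and rates below are illustrative, not the real IRS schedule.
    """
    tax, prev_cap = 0.0, 0
    for cap, rate in brackets:
        if income <= prev_cap:
            break
        # Tax only the slice of income falling inside this bracket.
        tax += (min(income, cap) - prev_cap) * rate
        prev_cap = cap
    return tax

brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]
# 50,000 of income: 10k @ 10% + 30k @ 20% + 10k @ 30% = 10,000
total = tax_due(50_000, brackets)
```

The model's extra complexity comes from reproducing the IRS rounding rules on top of this basic interpolation.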
Model Feasible Sampler.ana feasibility
You have a bunch of chance variables, each with a probability distribution. Their joint sample, however, contains some combinations of points that are (for one reason or another) physically impossible. We'll call those infeasible points. You'd like to eliminate those points from the sample and keep only the feasible points.

This module implements a button that will sample a collection of chance variables, then reset the sample size and keep only those sample points that are "feasible".

Obviously, this approach will work best when most of your samples are feasible. If you can handle the "infeasible" points in your model directly, by conditioning certain chance variables on others, that is far preferable. But there are some cases where this solution (although a bit of a kludge) is more convenient.

The instructions for how to use this are in the module description field.
statistics, sampling, importance sampling, Monte Carlo simulation Sampling from only feasible points
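The filter-and-keep idea is simple to sketch outside Analytica. A minimal Python version with an invented feasibility test (the helper names here are illustrative, not from the module):

```python
import random

def feasible_sample(n, draw, is_feasible):
    """Draw a joint Monte Carlo sample of size n, then keep only the
    sample points that pass the feasibility test."""
    points = [draw() for _ in range(n)]
    return [p for p in points if is_feasible(p)]

rng = random.Random(1)
# Two uniform chance variables; call a point infeasible when x + y > 1.5.
kept = feasible_sample(
    1000,
    draw=lambda: {"x": rng.random(), "y": rng.random()},
    is_feasible=lambda p: p["x"] + p["y"] <= 1.5,
)
```

As the module's description warns, this works well only when most points are feasible — here roughly 7/8 of the unit square survives the filter, so little of the sample is wasted.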
Model Cross-validation example.ana
When fitting a function to data, if you have too many free parameters relative to the number of points in your data set, you may "overfit" the data. When this happens, the fit to your training data may be very good, but the fit to new data points (beyond those used for training) may be very poor.

Cross-validation is a common technique to deal with this problem: We set aside a fraction of the available data as a cross-validation set. Then we begin by fitting very simple functions to the data (with few free parameters), successively increasing the number of free parameters, and seeing how the predictive performance changes on the cross-validation set. It is typical to see improvement on the cross-validation set for a while, followed by a deterioration of predictive performance on the cross-validation set once overfitting starts occurring.

This example model successively fits a non-linear kernel function to the residual error, and uses cross-validation to determine how many kernel functions should be used.

Requires Analytica Optimizer: The kernel fitting function (Kern_Fit) uses NlpDefine.
cross-validation, overfitting, non-linear kernel functions Cross-Validation / Fitting Kernel Functions to Data
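The technique can be sketched in ordinary Python: fit models of increasing flexibility on a training set and score them on held-out points. Here a 1-nearest-neighbor "memorizer" stands in for an overfit model and a least-squares line for a well-sized one; the data and models are invented for illustration, not taken from the example model:

```python
import random
import statistics

def fit_line(xs, ys):
    """Closed-form least-squares line: few parameters, hard to overfit."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_nearest(xs, ys):
    """1-nearest-neighbor: memorizes the training data (overfits)."""
    pts = list(zip(xs, ys))
    return lambda x: min(pts, key=lambda p: abs(p[0] - x))[1]

def mse(model, xs, ys):
    return statistics.fmean((model(x) - y) ** 2 for x, y in zip(xs, ys))

rng = random.Random(0)
xs = [i / 2 for i in range(40)]
ys = [2 * x + 1 + rng.gauss(0, 0.3) for x in xs]
# Hold out every other point as the cross-validation set.
tr_x, tr_y = xs[0::2], ys[0::2]
cv_x, cv_y = xs[1::2], ys[1::2]

line, nearest = fit_line(tr_x, tr_y), fit_nearest(tr_x, tr_y)
```

The memorizer scores a perfect 0 on its own training data but worse than the line on the held-out points — exactly the deterioration that cross-validation is meant to catch.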
Model Bootstrapping.ana
Bootstrapping is a technique from statistics for estimating the sampling error present in a statistical estimator. The simplest version estimates sampling error by resampling the original data. This model demonstrates how to do this in Analytica.
bootstrapping, sampling error, re-sampling Statistical Bootstrapping
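The resampling loop itself is tiny. A minimal Python sketch of the same idea (the sample data and sizes are invented):

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=42):
    """Estimate the standard error of `estimator` by re-computing it
    on resamples of `data` drawn with replacement."""
    rng = random.Random(seed)
    estimates = [
        estimator([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    ]
    return statistics.stdev(estimates)

rng = random.Random(7)
data = [rng.gauss(10, 2) for _ in range(100)]
se = bootstrap_se(data, statistics.fmean)
# Theory says the SE of the mean here is about 2 / sqrt(100) = 0.2.
```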
Model Kernel Density Estimation.ana
This example demonstrates a very simple fixed-width kernel density estimator to estimate a "smooth" probability density. The built-in PDF function in Analytica often has a choppy appearance due to the nature of histogramming -- it sets up a set of bins and counts how many points land in each bin. A kernel density estimator smooths this out, producing a less choppy PDF plot.

This smoothing is built into Analytica 4.4. You can select smoothing from the Uncertainty Setup dialog.
kernel density estimation, kernel density smoothing Smooth PDF plots using Kernel Density Estimation
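A fixed-width Gaussian kernel estimator is just an average of bumps centered on the sample points. A minimal sketch of the idea (bandwidth and data invented):

```python
import math

def kde(points, h):
    """Fixed-width Gaussian kernel density estimator: place a Gaussian
    of width h on each sample point and average them."""
    norm = 1.0 / (len(points) * h * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points)
    return density

pdf = kde([1.0, 1.1, 1.2, 3.0, 3.1], h=0.3)
```

Unlike a binned histogram, the resulting curve is smooth, and it still integrates to 1.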
Model Output and input columns.ana
Presents an input table to a user, where one column is populated with computed output data and the other column contains checkboxes for the user to select. Although the Output Data column isn't read-only, as would be desired, a Check Attribute has been configured to complain if the user tries to change values in that column. The model that uses these inputs would ignore any changes made to data in the Output Data column.

Populating the Output Data column requires the user to press a button, which runs a button script to populate that column. This button is presented on the top-level panel. If you change the input value, the output data will change, and then the button needs to be pressed to refresh the output data column.
data analysis Output and Input Columns in Same Table
Model Platform2018b.ana offshore platforms, oil and gas, stakeholders, rigs to reefs, decision support
Too many environmental issues cause bitter public controversy. The question of how to decommission California's 27 offshore oil platforms started out as a typical example. But remarkably, after careful analysis a single option, "rigs to reefs", obtained the support of almost all stakeholders, including oil companies and environmentalists. A law to enable this option was passed by the California State house almost unanimously, and signed by Governor Arnold Schwarzenegger.
decision analysis, multi-attribute, sensitivity analysis From Controversy to Consensus: California's offshore oil platforms
Model Comparing retirement account types.ana or Free 101 Compatible Version 401(k), IRA, retirement account, decision analysis, uncertainty
Will you end up with a bigger nest egg at retirement with a 401(k), traditional IRA, Roth IRA or a normal non-tax-advantaged brokerage account? For example, comparing a Roth IRA to a normal brokerage, intermediate capital gains compound in the Roth, but eventually you pay taxes on those gains at your income tax rate at retirement, whereas in the brokerage you pay capital gains taxes on the gains, which is likely a lower tax rate. So does the compounding outweigh the tax rate difference? What effect do the higher account maintenance fees in a 401(k) account have? How sensitive are these conclusions to the various input estimates? The answers to all these questions depend on your own situation, and may differ for someone else. Explore these questions with this model.
MultiTables, sensitivity analysis Retirement plan type comparison
Model Plane catching with UI 2020.ANA
A simple decision analysis model of a familiar decision: What time should I leave home to catch an early morning plane departure? I am uncertain about the time to drive to the airport, walk from parking to gate (including security), and time needed at the departure gate. It also illustrates the Expected Value of Including Uncertainty (EVIU) -- the value of considering uncertainty explicitly in your decision making compared to ignoring it and assuming that all uncertain quantities are fixed at the median estimate.

Details at Catching a plane example and EVIU. Includes downloadable model, slides, and video.
decision theory, decision analysis, uncertainty, Monte Carlo simulation, value of information, EVPI, EVIU Plane Catching Decision with Expected Value of Including Uncertainty
Model Marginal Analysis for Control of SO2 Emissions.ana environmental engineering
Acid rain in the eastern US and Canada is caused by sulfur dioxide, emitted primarily by coal-burning electric-generating plants in the Midwestern U.S. This model demonstrates a marginal analysis, a.k.a. benefit/cost analysis, to determine the policy alternative that leads to the most economically efficient level of cleanup.
cost-benefit analysis, marginal analysis Marginal Analysis for Control of SO2 emissions
Model Donor-Presenter Dashboard.ana
This model implements a continuous-time Markov chain in Analytica's discrete-time dynamic simulation environment. It supports immigration to, and emigration from, every node.

It can be used by an arts organization to probabilistically forecast future audience evolution, in both the short and the long (steady state) term. It also allows for uncertainty in the input parameters.
dynamic models, Markov processes Donor/Presenter Dashboard
Model Photosynthesis Regulation.ana - main regulation pathways
Photosystem.ana - rough sketch of genetic regulation
photosynthesis
A model of how photosynthesis is regulated inside a cyanobacterium. As light exposure varies over time (and you can experiment with various light intensity waveforms), it simulates the concentration levels of key transport molecules along the chain, through the PSII complex, plasto-quinone pool, PSI complex, down to metabolic oxidation. The dynamic response to light levels, or changes in light levels, over time becomes evident, and the impact of changes to metabolic demand can also be observed. In the graph of fluorescence above, we can see an indicator of how much energy is being absorbed, in three different cases (different light intensities). In the two higher intensity cases, photoinhibition is observed -- a protective mechanism of the cell that engages when more energy is coming in than can be utilized by the cell. Excess incoming energy, in the absence of photoinhibition, causes damage, particularly to the PSII complex.

This model uses node shapes for a different purpose than is normally seen in decision analysis models. In this model, ovals, instead of depicting chance variables, depict chemical reactions, where the value depicts the reaction rate, and rounded rectangles depict chemical concentrations.

Two models are attached. The first is a bit cleaner, and focused on the core transport chain, as described above. The second is less developed, but is focused more on genetic regulation processes.
dynamic models Regulation of Photosynthesis
Model Time-series-reindexing.ana
This model contains some examples of time-series re-indexing. It is intended to demonstrate some of these basic techniques. In this example, actual measurements were collected at non-uniform time increments. Before analyzing these, we map these to a uniformly spaced time index (Week), occurring on Monday of each week. The mapping is done using an interpolation. The evenly-spaced data is then used to forecast future behavior. We first forecast over an index containing only future time points (Future_weeks), using a log-normal process model based on the historical weekly change. We then combine the historical data with the forecast on a common index (Week). A prob-bands graph of the weekly_data result shows the range of uncertainty projected by the process model (you'll notice the uncertainty exists only for future forecasted values, not historical ones).
dynamic models, forecasting, time-series re-indexing Time-series re-indexing
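The mapping step — irregular measurement times interpolated onto a uniform weekly grid — looks like this in plain Python (the days and values are invented):

```python
from bisect import bisect_left

def interp(t, ts, ys):
    """Linear interpolation of irregular points (ts, ys) at time t,
    holding the end values flat outside the measured range."""
    if t <= ts[0]:
        return ys[0]
    if t >= ts[-1]:
        return ys[-1]
    i = bisect_left(ts, t)
    frac = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Measurements taken on irregular days, re-indexed to a uniform
# weekly grid (every 7 days, like the model's Monday-based Week index).
days = [0, 3, 10, 18, 25]
vals = [1.0, 2.0, 4.0, 3.0, 5.0]
weekly = [interp(d, days, vals) for d in range(0, 28, 7)]
```

Once the data sits on the uniform index, forecasting and combining with future indexes proceed as the model describes.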
Model Post Compression Model
Here is a calculator for computing the maximum load that can be handled by a Douglas Fir - Larch post of a given size, grade, and composition in a construction setting.
Timber Post Compression Load Capacity
Model Compression Post Load Capacity.ana
Computes the load that a Douglas-Fir Larch post can support in compression. Works for different timber types and grades and post sizes.
compression analysis Compression Post Load Calculator
Model Daylighting analyzer.ana engineering
A demonstration showing how to analyze lifecycle costs and savings from daylighting options in building design.

Analysis based on Nomograph Cost/Benefit Tool for Daylighting. adapted from S.E. Selkowitz and M. Gabel. 1984. "LBL Daylighting Nomographs," LBL-13534, Lawrence Berkeley Laboratory, Berkeley CA, 94704. (510) 486-6845.
cost-benefits analysis Daylighting Options in Building Design
Model California Power Plants.ana power plants
An example showing how to use Choice menus and Checkboxes inside an Edit table. It also shows how to use the Cell default attribute to specify default values (including Choice menus and Checkboxes with default selections), specified in "Default Plant Data", to be used when the user creates a new row in the Edit table. The model is created during a mini-tutorial on Inserting Choice Controls in Edit Table Cells elsewhere on this Wiki.
edit table, choice menu, pulldown menu, checkbox California Power Plants
Model Requires Analytica Optimizer
Electrical Transmission.ana
electrical engineering, power generation and transmission
This model of an electrical network minimizes total cost of generation and transmission. Each node in the network has power generators and consumers (demand). Nodes are connected by transmission links. Each link has a maximum capacity in Watts and an admittance (the real part of impedance is assumed to be zero). Each generator has a min and max power and a marginal cost in $/KWh. The model uses a linear program to determine how much power each generator should produce so as to minimize total cost of generation and transmission, while satisfying demand and remaining within link constraints.
Electrical Generation and Transmission
Model Time of use pricing.ana & MECOLS0620.xlsx
(both files needed)
reading from spreadsheets, time-of-use pricing, electricity pricing
Electricity demand and generation are not constant, varying by time of day and season. For example, solar panels generate only when the sun is out, and demand drops in the wee morning hours when most people are sleeping. Time-of-use pricing is a rate tariff model used by utility companies that charges more during times when demand tends to exceed supply. This model imports actual usage data from a spreadsheet obtained from NationalGridUS.com of historic average customer usage, uses that to project average future demand, and then calculates the time-of-use component of PG&E's TOU-C and TOU-D tariffs. (Note: The historical data came from Massachusetts, the rate plan is from California, but these are used as examples). Developed during a User Group Webinar on 30-Sep-2020, which you can watch as well to see it built.
Video: Time of use pricing.mp4
Time of Use pricing
Model Color map.ana
A model which highlights Cell Formatting and Computed Cell Formats. Model result is a 'color map' wherein the cell fill color is computed based on three input variables (R, G, and B), the computed color is displayed in hexadecimal, and the font color of the hexadecimal color is determined by the cell fill color.
computed cell formatting Color Map
Model World cup.ana
On July 15, 2018, France beat Croatia 4-2 in the final game of the World Cup to become world champions. But how much of that can be attributed to France being the better team versus random chance? This model accompanies my blog article, World Cup Soccer. How much does randomness determine the winner?, where I explore this question and use the example to demonstrate the Poisson distribution.
2018 World Cup Soccer final
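The Poisson argument can be checked directly: give each team an average goal rate and sum over score lines. The rates below are invented for illustration, not fitted to the 2018 teams:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def p_win(lam_a, lam_b, max_goals=20):
    """P(team A outscores team B) when each team's goals are
    independent Poisson counts with means lam_a and lam_b."""
    return sum(
        poisson_pmf(a, lam_a) * poisson_pmf(b, lam_b)
        for a in range(max_goals + 1)
        for b in range(a)          # b < a: team A wins outright
    )

# Even a clearly better team (1.6 vs 1.1 goals per game) fails to win
# outright surprisingly often -- draws and upsets soak up much of the
# probability, which is the point of the blog article.
p_better = p_win(1.6, 1.1)
```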
Model resnet18.zip residual network, deep residual learning, image recognition
Show it an image, and it tries to recognize what it is an image of, classifying it among 1000 possible categories. It uses an 18-layer residual network. This model is described and demonstrated in a video in the blog article An Analytica model that recognizes images.
Image recognition
Model Month to quarter.ana
The model shows how to transform an array from a finer-grain index (e.g., Month) onto a coarser index (e.g., Quarter). We generally refer to this as aggregation. The model illustrates the direct use of Aggregate, as well as a method to do this used before Aggregate was added to Analytica in release 4.2.
aggregation, level of detail, days, weeks, months, quarters, years Transforming Dimensions by transform matrix, month to quarter
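The core of the transform is a sum over a many-to-one index mapping, which Analytica's Aggregate expresses declaratively. A minimal Python sketch of the same operation:

```python
def aggregate(values, mapping):
    """Sum values from a fine index onto a coarser one; mapping[i]
    names the coarse bucket that fine position i belongs to."""
    out = {}
    for v, bucket in zip(values, mapping):
        out[bucket] = out.get(bucket, 0) + v
    return out

monthly = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]
# Months 0-2 fall in Q1, months 3-5 in Q2, and so on.
month_to_quarter = [f"Q{m // 3 + 1}" for m in range(12)]
quarterly = aggregate(monthly, month_to_quarter)
```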
Model Convolution.ana
Convolution is used mostly for signal and systems analysis. It is a way to combine two time series. This model contains function Convolve(Y, Z, T, I), that computes the convolution of two time series. The model contains several examples of convolved functions.

A time series is a set of points, (Y, T), where T is the ascending X-axis, and the set of points is indexed by I. The values of T do not have to be equally spaced. The function treats Y and Z as being equal to 0 outside the range of T. The two time series here are the set of points (Y, T) and the set of points (Z, T), where both sets of points are indexed by I.

The mathematical definition of the convolution of two time series is the function given by:

[math]\displaystyle{ h(t) = \int y(u) z(t-u) du }[/math]
signal analysis, systems analysis Convolution
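For uniformly spaced samples the integral reduces to a weighted sum of shifted products. A discrete sketch (the model's Convolve handles unequally spaced T values; this simplified version assumes a fixed step dt):

```python
def convolve(y, z, dt):
    """Discrete approximation of h(t) = integral of y(u) z(t-u) du for
    two series sampled every dt, treated as zero outside their range."""
    h = [0.0] * (len(y) + len(z) - 1)
    for i, yi in enumerate(y):
        for j, zj in enumerate(z):
            h[i + j] += yi * zj * dt
    return h

# Convolving a unit-area box with itself yields a symmetric triangle.
box = [1.0] * 4
tri = convolve(box, box, dt=0.25)
```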
Model Dependency Tracker.ana
This module tracks dependencies through your model, updating the visual appearance of nodes so that you can quickly visualize the paths by which one variable influences another. You can also use it to provide a visual indication of which nodes are downstream (or upstream) from an indicated variable.

The module contains button scripts that change the bevel appearance of nodes in your model. To see how Variable X influences Variable Y, the script will bevel the nodes for all variables that are influenced by X and influence Y. Alternatively, you can bevel all nodes that are influenced by X, or you can bevel all nodes that influence Y.

In the image above, the path from dp_ex_2 through dp_ex_4 has been highlighted using the bevel style of the nodes. (The result of pressing the "Bevel all from Ancestor to Descendant" button).
dependency analysis Dependency Tracker Module
Model French-English.ana multi-lingual models
Maintains a single influence diagram with Title and Description attributes in both English and French. With the change of a pull-down, the influence diagram and all object descriptions are instantly reflected in the language of choice.

If you change a title or description while viewing English, and then change to French, the change you made will become the English-language version of the description. Similarly if you make a change while viewing French.
Multi-lingual Influence Diagram
Model Parsing XML example.ana data extraction, xml, DOM parsing
Suppose you receive data in an XML format that you want to read into your model. This example demonstrates two methods for extracting data: Using a full XML DOM parser, or using regular expressions. The first method fully parses the XML structure, the second simply finds the data of interest by matching patterns, which can be easier for very simple data structures (as is often the case).
Extracting Data from an XML file
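Both extraction styles fit in a few lines of Python; the sample document here is invented:

```python
import re
import xml.etree.ElementTree as ET

xml_text = "<data><point t='1' y='2.5'/><point t='2' y='3.5'/></data>"

# Method 1: a full DOM/tree parse -- robust to attribute order,
# whitespace, escaping, and nesting.
root = ET.fromstring(xml_text)
dom_vals = [float(p.get("y")) for p in root.iter("point")]

# Method 2: regular expressions -- less machinery, but only safe when
# the structure is as simple and regular as it is here.
re_vals = [float(m) for m in re.findall(r"y='([\d.]+)'", xml_text)]
```

The two methods agree on simple input; the DOM parse is the one to trust as the structure gets richer.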
Model Vector Math.ana
Functions used for computing geospatial coordinates and distances. Includes:
  • A cross product of vectors function
  • Functions to conversion between spherical and Cartesian coordinates in 3-D
  • Functions to compute bearings from one latitude-longitude point to another
  • Functions for finding distance between two latitude-longitude points along the great circle.
  • Functions for finding the intersection of two great circles
geospatial analysis, GIS, vector analysis Vector Math
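The great-circle distance, for example, follows from the haversine formula. A sketch of the computation (the library's own functions may differ in naming and conventions):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two latitude/longitude points
    (in degrees) via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# A quarter of the way around the equator is about 10,008 km.
d = great_circle_km(0, 0, 0, 90)
```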
Model Total Allowable w Optimizer.ana or
Total Allowable w StepInterp.ana for those without Optimizer
The problem applies to any population of fish or animal whose dynamics are poorly known but can be summarized in a simple model:

N_{t+1} = N_t*Lambda - landed catch*(1 + loss rate)

where «N_t» is the population size (number of individuals) at time t, «N_t+1» is the population size at time t + 1, «Lambda» is the intrinsic rate of increase and the «loss rate» is the percentage of fish or animals killed but not retrieved relative to the «landed catch», or catch secured.

The question here is to determine how many fish or animals can be caught (landed) annually so that the probability of the population declining X% in Y years (decline threshold) is less than Z% (risk tolerance).

Two models are available for download. One uses the Optimizer (NlpDefine) to find the maximum landed catch at the risk tolerance level for the given decline threshold. The other (for those using a version of Analytica without Optimizer) uses StepInterp in an iterative way to get that maximum landed catch.
population analysis, dynamic models, optimization analysis Total Allowable Harvest
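The question the models answer can be sketched as a Monte Carlo sweep over the recurrence above. All parameter values in this Python sketch are invented for illustration:

```python
import random

def p_decline(n0, landed, lam_mean, lam_sd, loss_rate,
              years, threshold, trials=2000, seed=3):
    """Probability that the population drops more than `threshold`
    (a fraction of n0) within `years`, simulating
    N_{t+1} = N_t * Lambda - landed * (1 + loss_rate)
    with an uncertain Lambda each year."""
    rng = random.Random(seed)
    floor = (1 - threshold) * n0
    hits = 0
    for _ in range(trials):
        n = n0
        for _ in range(years):
            lam = rng.gauss(lam_mean, lam_sd)
            n = n * lam - landed * (1 + loss_rate)
            if n < floor:
                hits += 1
                break
    return hits / trials

# Sweep the landed catch to find where the risk of a 30% decline in
# 20 years crosses the risk tolerance.
low_catch = p_decline(10_000, 200, 1.05, 0.05, 0.10, 20, 0.30)
high_catch = p_decline(10_000, 600, 1.05, 0.05, 0.10, 20, 0.30)
```

The Optimizer version searches for the largest landed catch whose decline probability stays under the tolerance; the StepInterp version does the same sweep iteratively.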
Model Cereal Formulation1.ana product formulation, cereal formulation
A cereal formulation model

A discrete mixed integer model that chooses product formulations to minimize total ingredient costs. This could be an NSP, but it uses two methods to linearize:

  • The decision variable is constructed as a constrained Boolean array
  • Prices are defined as piecewise linear curves
Linearizing a discrete NSP
Model Neural Network.ana feed-forward neural networks
A feed-forward neural network can be trained (fit to training data) using the Analytica Optimizer. This is essentially an example of non-linear regression. This model contains four sample data sets, and is set up to train a 2-layer feedforward sigmoid network to "learn" the concept represented by the data set(s), and then test how well it does across examples not appearing in the training set. Developed during the Analytica User Group Webinar of 21-Apr-2011 -- see the webinar recording.
optimization analysis Neural Network
Model Earthquake expenses.ana
An example of risk analysis with time-dependence and costs shifted over time.

Certain organizations (insurance companies, large companies, governments) incur expenses following earthquakes. This simplified demo model can be used to answer questions such as:

  • What is the probability of more than one quake in a specific 10 year period.
  • What is the probability that in my time window my costs exceed $X?


Assumptions in this model:

  • Earthquakes are Poisson events with mean rate of once every 10 years.
  • Damage caused by such a quake is lognormally distributed, with mean $10M and stddev of $6M.
  • Cost of damage gets incurred over the period of a year from the date of the quake as equipment is replaced and buildings are repaired over time: 20% in 1st quarter after quake, 50% in 2nd quarter, 20% in 3rd quarter, 10% in 4th quarter.
risk analysis, cost analysis Earthquake Expenses
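The assumptions above can be simulated directly. A Python sketch (a Monte Carlo stand-in for the model, with the quarterly quake probability approximated as rate/4):

```python
import math
import random

def simulate_totals(years=10, quakes_per_year=0.1, trials=4000, seed=11):
    """Total earthquake expense over a window: Poisson quakes (mean one
    per 10 years), lognormal damage (mean $10M, sd $6M), spread
    20/50/20/10% over the four quarters after each quake."""
    # Lognormal mu/sigma from the desired mean m and std dev s.
    m, s = 10e6, 6e6
    sigma = math.sqrt(math.log(1 + (s / m) ** 2))
    mu = math.log(m) - sigma ** 2 / 2
    spread = [0.20, 0.50, 0.20, 0.10]
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        quarterly = [0.0] * (4 * years + 4)
        for q in range(4 * years):
            # For a small rate, P(a quake this quarter) is about rate/4.
            if rng.random() < quakes_per_year / 4:
                damage = rng.lognormvariate(mu, sigma)
                for k, frac in enumerate(spread):
                    quarterly[q + k] += frac * damage
        totals.append(sum(quarterly))
    return totals

totals = simulate_totals()
```

With one quake expected per 10-year window, the mean total expense lands near $10M, and questions like "what is the chance my costs exceed $X?" become simple counts over the simulated totals.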
Best used with Analytica Optimizer
Loan policy selection.ana
creditworthiness, credit rating, default risk
A lender has a large pool of money to loan, but needs to decide what credit rating threshold to require and what interest rate (above prime) to charge. The optimal value is determined by market forces (competing lenders) and by the probability that the borrower defaults on the loan, which is a function of the economy and borrower's credit rating. The model can be used without the Analytica optimizer, in which case you can explore the decision space manually or use a parametric analysis to find the near optimal solution. Those with Analytica Optimizer can find the optimal solution (more quickly) using an NLP search.
risk analysis Loan Policy Selection
Model Hubbard_and_Seiersen_cyberrisk.ana cybersecurity risk
The model simulates loss exceedance curves for a set of cybersecurity events, the likelihood and probabilistic monetary impact of which have been characterized by system experts. The goal of the model is to assess the impact of mitigation measures, by comparing the residual risk curve to the inherent risk curve (defined as risk without any mitigation measures) and to the risk tolerance curve. This is a translation of a model built by Douglas Hubbard and Richard Seiersen which they describe in their book How to Measure Anything in Cybersecurity Risk, and which they make available here.
loss exceedance curve, simulation Inherent and Residual Risk Simulation
Model media:Red State Blue State plot.ana map, states
This example contains the shape outlines for each of the 50 US states, along with a graph that uses color to depict something that varies by state (historical political party leaning). You may find the shape data useful for your own plots. In addition, it demonstrates the polygon fill feature that is new in Analytica 5.2.
graphing Red or blue state
Model COVID Model 2020--03-25.ana covid, covid-19, coronavirus, corona, epidemic
A systems dynamics style SICR model of the COVID-19 outbreak within the state of Colorado. It simulates the progression of the outbreak into the future, examining the expected impact on ventilator demand (compared to levels available), forecasts the number of sick and number of deaths, and also the risk reduction that a "lock down" has based on the date of the start of the lock down and the amount of reduction in social interaction. A Lumidyne blog article describes the model and conclusions ascertained from it.
COVID-19 State Simulator, a Systems Dynamics approach
Model Corona Markov.ana covid, covid-19, coronavirus, corona, epidemic
Used to explore the progression of the COVID-19 coronavirus epidemic in the US, and to explore the effects of different levels of social isolation. It also includes sensitivity analyses. A blog article showcases this model.
How social isolation impacts COVID-19 spread in the US - A Markov model approach
Model Modelo Epidemiológoco para el Covid-19 con cuarentena.ana covid, covid-19, coronavirus, corona, epidemic
A Markov chain model of the projected impact of the coronavirus disease COVID-19 in Perú, and of the impact of social isolation. See the article Aislamiento Social y Propagación COVID-19 for details.

An adaptation and extension of Robert D. Brown's Markov Model (the previous example) to the country of Perú, translated into Spanish.
Epidemiological model of COVID-19 for Perú, en español
|-
|Model
|[[Media:COVID-19 Triangle Suppression.ana|COVID-19 Triangle Suppression.ana]]
|covid, covid-19, coronavirus, corona, epidemic
|<div style="text-align: left;">A novel approach to modeling the progression of the COVID-19 pandemic in the US, and understanding the amount of time that is required for lock down measures when a suppression strategy is adopted. This model is featured in the blog article Suppression strategy and update forecast for US deaths from COVID-19 Coronavirus in 2020 on the Analytica blog.</div>
|
|[[A Triangle Suppression model of COVID-19]]
|-
|Model
|[[Media:Simple COVID-19.ana|Simple COVID-19.ana]]
|covid, covid-19, coronavirus, corona, epidemic
|<div style="text-align: left;">Used to explore possible COVID-19 Coronavirus scenarios from the beginning of March, 2020 through the end of 2020 in the US. The US is modeled as a closed system, with people classified as being in one of the progressive stages: Susceptible, Incubating, Contagious or Recovered. Deaths occur only from the Contagious stage. There is no compartmentalization such as by age or geography.</div>
|
|[[COVID-19 Coronavirus SICR progression for 2020]]
|-
|Model
|[[Media:US COVID-19 Data.ana|US COVID-19 Data.ana]]
|covid, covid-19, coronavirus, corona, epidemic, death, infection
|<div style="text-align: left;">The New York Times has made data available to researchers on the number of reported cases and deaths in each US county, and state-wide, on each day of the pandemic. This model reads in these files and transforms them into a form that is convenient to work with in Analytica.

Requires: You'll need to install Git and then clone the NYT repository. The Description of the model gives instructions for getting set up. You'll also need to have the Analytica Enterprise or Optimizer edition.</div>
|
|[[COVID-19 Case and Death data for US states and counties]]
|-
|Model
|[[Media:Voluntary vs mandatory testing.ana|Voluntary vs mandatory testing.ana]]
|covid, covid-19, coronavirus, corona, epidemic
|<div style="text-align: left;">A navy wants to compare two COVID-19 testing policies. In the first, all crew members must take a COVID-19 test before boarding a ship, and those with a positive test cannot board. In the second policy, testing is encouraged but voluntary: each sailor has the option of being tested before boarding. This model computes the rate of infection among those allowed to board under the two scenarios, based on prevalence rates, test accuracies and voluntary testing rates. It also examines the probability of achieving zero infections on board, and the sensitivity of the results to input parameter estimates. This model is described in a blog posting.</div>
|
|[[Mandatory vs Voluntary testing policies]]
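The comparison in the Voluntary vs mandatory testing model above reduces to a small conditional-probability calculation. The following is a minimal sketch of that calculation, not the Analytica model itself; all parameter values are illustrative assumptions chosen for the example.

```python
# Hedged sketch: infection rate among sailors allowed to board under a
# mandatory vs. a voluntary COVID-19 testing policy. Parameter values are
# illustrative assumptions, not taken from the Analytica model.

prevalence = 0.01    # fraction of the crew infected
sensitivity = 0.85   # P(positive test | infected)
specificity = 0.97   # P(negative test | not infected)
uptake = 0.60        # fraction who choose to be tested under the voluntary policy

# Mandatory policy: everyone is tested, and only negatives may board.
infected_negatives = prevalence * (1 - sensitivity)   # false negatives who board
healthy_negatives = (1 - prevalence) * specificity    # true negatives who board
mandatory_infection_rate = infected_negatives / (infected_negatives + healthy_negatives)

# Voluntary policy: untested sailors board unscreened; tested sailors
# board only on a negative result.
boarders = (1 - uptake) + uptake * (infected_negatives + healthy_negatives)
infected_boarders = prevalence * (1 - uptake) + uptake * infected_negatives
voluntary_infection_rate = infected_boarders / boarders

print(f"Mandatory: {mandatory_infection_rate:.4%}  Voluntary: {voluntary_infection_rate:.4%}")
```

With any imperfect voluntary uptake, the untested sailors carry the full population prevalence on board, so the voluntary policy's boarding infection rate exceeds the mandatory one; the gap shrinks as uptake approaches 100%.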