Example Models
The Wiki pages here provide a repository for Analytica models and libraries. Supplementary material may be included here describing the model, its usage, etc. Models or libraries may be contributed because they are useful for particular applications, provide a starting point for certain modeling tasks, demonstrate an Analytica concept, etc.
Several dozen models are included with the Analytica distribution and are installed on your machine along with Analytica. These models are not yet on the Wiki, but may be added in the future; as they are updated, more recent versions will be made available here.
Analytica users may also contribute their own models and examples here. For instructions on how to upload your own contributions, see Uploading Example Models.
Timber Post Compression Load Capacity
Author: Lonnie Chrisman
Here is a calculator for computing the maximum load that can be handled by a Douglas Fir-Larch post of a given size, grade, and composition in a construction setting: Post Compression Model
Transforming Dimensions by transform matrix, month to qtr
Author: Lonnie Chrisman
Model: Month to quarter.ana
This model shows how to transform an array from a finer-grained index (e.g., Month) onto a coarser one (e.g., Quarter), a transformation we generally refer to as aggregation. The example was introduced when Analytica 4.1 was the current release; since the addition of the Aggregate function in Analytica 4.2, the transformation has become very straightforward. The model has been updated to reflect both methods -- the direct use of Aggregate in Analytica 4.2, as well as the earlier transform-matrix approach.
For additional reference material, a webinar exists on using the Aggregate function.
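For readers who just want the gist of the transform-matrix method, here is a minimal sketch in Python (rather than Analytica); the variable names and numbers are purely illustrative. Monthly values are summed into quarters by multiplying by a 0/1 mapping matrix, which is essentially what Aggregate automates.

```python
import numpy as np

months = np.arange(12)                                    # index Month: 0..11
revenue = np.random.default_rng(0).uniform(80, 120, 12)   # monthly values

# transform matrix M[q, m] = 1 when month m belongs to quarter q
M = np.zeros((4, 12))
M[months // 3, months] = 1

quarterly = M @ revenue   # aggregate Month -> Quarter by summing
print(quarterly)
```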
Items within Budget function
Author: Max Henrion
Given a set of items, with a priority and a cost for each, the function Items_within_budget selects the highest-priority items that fit within a fixed budget. The function is available from: Items within budget
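As a rough illustration of the idea -- not necessarily how the function itself is implemented, e.g., its tie-breaking may differ -- a greedy selection by priority can be sketched in Python as follows:

```python
def items_within_budget(items, budget):
    """Pick the highest-priority items whose total cost fits in the budget.
    `items` is a list of (name, priority, cost) tuples; all data illustrative."""
    chosen, spent = [], 0.0
    for name, priority, cost in sorted(items, key=lambda it: -it[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(items_within_budget([("a", 3, 40), ("b", 2, 30), ("c", 1, 50)], 75))
# -> ['a', 'b']: item "c" is skipped because it would exceed the budget
```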
Convolution
Author: Lonnie Chrisman
The model Convolution contains a function, Convolve(Y,Z,T,I), that computes the convolution of two time series.
A time series is a set of points (Y, T), where T is the ascending time axis and the set of points is indexed by I. The values of T do not have to be equally spaced. The function treats Y and Z as equal to 0 outside the range of T. The two time series here are the sets of points (Y, T) and (Z, T), both indexed by I.
The model contains a couple of examples of convolved functions.
The mathematical definition of the convolution of two time series is the function given by:
h(t) = Integral y(u) z(t-u) du
Convolution is used predominantly in signal and systems analysis.
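Here is a minimal numerical sketch of the same computation in Python, under the assumptions stated above (ascending, possibly unevenly spaced T; zero outside the sampled range). The names mirror Convolve(Y, Z, T, I) but are otherwise illustrative:

```python
import numpy as np

def convolve_series(y, z, t):
    """Convolve two time series sampled at (possibly unevenly spaced) times t.
    Both series are treated as zero outside the sampled range."""
    def z_at(u):
        # linear interpolation of z, zero outside the range of t
        return np.interp(u, t, z, left=0.0, right=0.0)

    h = np.empty_like(t, dtype=float)
    for k, tk in enumerate(t):
        integrand = y * z_at(tk - t)    # y(u) * z(t_k - u) on the u = t grid
        h[k] = np.trapz(integrand, t)   # trapezoidal rule handles uneven spacing
    return h
```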
Sampling from only feasible points
Author: Lonnie Chrisman
Consider this scenario: you have a number of chance variables, each defined by a distribution. The joint sample generated, however, contains some combinations of points that are (for one reason or another) physically impossible. We'll call those infeasible points. You'd like to eliminate those points from the sample and keep only the feasible ones.
The module Feasible Sampler implements a button that will sample a collection of chance variables, then reset the sample size and keep only those sample points that are "feasible".
Obviously, this approach will work best when most of your samples are feasible. If you can handle the "infeasible" points in your model directly, by conditioning certain chance variables on others, that is far preferable. But there are certainly cases where this solution (although a bit of a kludge) is more readily usable.
The instructions for how to use this are in the module description field.
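The underlying idea is plain rejection sampling. A minimal Python sketch, with a made-up feasibility condition, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                       # initial sample size
x = rng.normal(10, 3, n)         # hypothetical chance variables
y = rng.normal(5, 2, n)

feasible = x > y                 # hypothetical feasibility condition
x, y = x[feasible], y[feasible]  # keep only feasible joint points; the
                                 # effective sample size shrinks accordingly
print(len(x), "feasible points retained")
```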
Grant Exclusion Model
This model tests a hypothesis about the distribution of an attribute of the marginal rejectee of a grant program, given the relevance of that attribute to the award of the grant. An organization could use it to decide whether to fiscally sponsor another organization that will use that sponsorship to apply for grants, by looking at the effect on the overall pool of grant recipients.
Donor/Presenter Dashboard
This model implements a continuous-time Markov chain in Analytica's discrete-time dynamic simulation environment. It supports immigration to, and emigration from, every node.
It can be used by an arts organization to probabilistically forecast future audience evolution, in both the short and the long (steady state) term. It also allows for uncertainty in the input parameters.
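A continuous-time Markov chain can be stepped on a discrete time grid by converting its rate matrix into a one-step transition matrix. The sketch below (in Python, with a made-up 3-state rate matrix and with the immigration/emigration terms omitted) illustrates the core idea:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical rate matrix Q for 3 audience states (rows sum to 0).
Q = np.array([[-0.30,  0.25,  0.05],
              [ 0.10, -0.20,  0.10],
              [ 0.00,  0.05, -0.05]])
dt = 1.0               # one discrete simulation step
P = expm(Q * dt)       # exact one-step transition probabilities

p = np.array([1.0, 0.0, 0.0])   # initial audience distribution
for _ in range(100):
    p = p @ P                   # discrete-time update of the CTMC
print(p)                        # approaches the steady-state distribution
```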
Project Planner
Download: Project Priorities 5 0.ana
A demo model that shows how to:
- Evaluate a set of R&D projects, including uncertain R&D costs, and uncertain revenues if a project leads to the release of a commercial product.
- Use multiattribute analysis to compare projects, including a hard attribute -- expected net present value -- and soft attributes -- strategic fit, staff development, and public good will.
- Compare cost, NPV, and multiattribute value for a selected portfolio of projects.
- Generate the best portfolio (by ratio of NPV or multiattribute merit to cost) given an R&D budget, as sketched below.
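The portfolio-generation step can be approximated by a greedy heuristic: rank projects by merit-to-cost ratio and add them until the budget runs out. A Python sketch (names and data are illustrative, and the model's actual method may differ):

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    cost: float
    merit: float    # NPV or multiattribute value

def best_portfolio(projects, budget):
    """Greedy selection by merit-to-cost ratio."""
    chosen, spent = [], 0.0
    for p in sorted(projects, key=lambda p: p.merit / p.cost, reverse=True):
        if spent + p.cost <= budget:
            chosen.append(p.name)
            spent += p.cost
    return chosen

print(best_portfolio([Project("A", 2.0, 6.0),
                      Project("B", 1.0, 2.5),
                      Project("C", 3.0, 4.0)], budget=3.5))
# -> ['A', 'B']
```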
California Power Plants
A model that demonstrates the use of choice pulldowns in edit tables. The model is created during a mini-tutorial on Inserting_Choice_Controls_in_Edit_Table_Cells elsewhere on this Wiki.
Media:California_Power_Plants.ANA
Dependency Tracker Module
This module tracks dependencies through your model, updating the visual appearance of nodes so that you can quickly visualize the paths by which one variable influences another. You can also use it to provide a visual indication of which nodes are downstream (or upstream) from an indicated variable.
The module contains button scripts that change the bevel appearance of nodes in your model. To see how Variable X influences Variable Y, the script will bevel the nodes for all variables that are influenced by X and influence Y. Alternatively, you can bevel all nodes that are influenced by X, or you can bevel all nodes that influence Y.
In the image above, the path from dp_ex_2 through dp_ex_4 has been highlighted using the bevel style of the nodes (the result of pressing the "Bevel all from Ancestor to Descendant" button).
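Conceptually, the nodes to bevel are those reachable forward from the ancestor and backward from the descendant. A small Python sketch of that graph computation (the dependency graph here is made up):

```python
def reachable(graph, start):
    """All nodes reachable from `start` in a directed dependency graph."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# edges point from a variable to the variables it directly influences
graph = {"X": ["A"], "A": ["B"], "B": ["Y"], "C": ["Y"]}
reverse = {}
for src, dsts in graph.items():
    for d in dsts:
        reverse.setdefault(d, []).append(src)

# nodes on some path from X to Y: influenced by X and influencing Y
on_path = (reachable(graph, "X") & reachable(reverse, "Y")) | {"X", "Y"}
print(sorted(on_path))   # -> ['A', 'B', 'X', 'Y']
```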
Total Allowable Harvest
The problem applies to any population of fish or animals whose dynamics are poorly known but can be summarized in a simple model:
N_{t+1} = N_t * Lambda - landed catch * (1 + loss rate)
where N_t is the population size (number of individuals) at time t, N_{t+1} is the population size at time t+1, Lambda is the intrinsic rate of increase, and the loss rate is the percentage of fish or animals killed but not retrieved, relative to the landed (secured) catch.
The question here is to determine how many fish or animals can be caught (landed) annually so that the probability of the population declining X% in Y years (decline threshold) is less than Z% (risk tolerance).
Two models are available for download. One uses the Optimizer (NlpDefine) to find the maximum landed catch at the risk-tolerance level for the given decline threshold. The other (for those using a version of Analytica without the Optimizer) uses StepInterp iteratively to arrive at that maximum landed catch.
Models contributed by Pierre Richard
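The essence of the iterative approach can be sketched in Python: simulate the population model under uncertainty, estimate the decline probability for each candidate catch, and take the largest catch whose risk stays below tolerance. Every parameter value below is illustrative, not from the models:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_decline(catch, n0=10_000, lam_mean=1.05, lam_sd=0.1,
              loss_rate=0.2, years=10, threshold=0.3, trials=5_000):
    """Monte Carlo estimate of P(population falls `threshold` below its
    starting size within `years`) for a given annual landed catch."""
    n = np.full(trials, float(n0))
    for _ in range(years):
        lam = rng.normal(lam_mean, lam_sd, trials)   # uncertain growth rate
        n = np.maximum(n * lam - catch * (1 + loss_rate), 0.0)
    return np.mean(n < n0 * (1 - threshold))

# largest catch whose decline risk stays under Z = 10%
for catch in range(0, 1000, 50):
    if p_decline(catch) > 0.10:
        print("max allowable landed catch is roughly", catch - 50)
        break
```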
Earthquake Expenses
An example of risk analysis with time-dependence and costs shifted over time.
Certain organizations (insurance companies, large companies, governments) incur expenses following earthquakes. This simplified demo model can be used to answer questions such as:
- What is the probability of more than one quake in a specific 10-year period?
- What is the probability that in my time window my costs exceed $X?
Assumptions in this model:
- Earthquakes are Poisson events with a mean rate of once every 10 years.
- Damage caused by such a quake is lognormally distributed, with a mean of $10M and a standard deviation of $6M.
- The cost of damage is incurred over the year following the quake, as equipment is replaced and buildings are repaired: 20% in the 1st quarter after the quake, 50% in the 2nd, 20% in the 3rd, and 10% in the 4th.
Model file: Earthquake expenses.ana
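Two of the assumptions above are easy to reproduce outside the model. A Python sketch, using only the rate and moments stated above:

```python
import numpy as np

rng = np.random.default_rng(2)
trials, years, rate = 100_000, 10, 1 / 10    # one quake per 10 years on average

quakes = rng.poisson(rate * years, trials)
print("P(more than one quake in 10 years):", np.mean(quakes > 1))
# analytic check: 1 - 2/e = 0.2642...

# lognormal damage with mean $10M and sd $6M (moment-matched parameters)
m, s = 10e6, 6e6
sigma2 = np.log(1 + (s / m) ** 2)
mu = np.log(m) - sigma2 / 2
damage = rng.lognormal(mu, np.sqrt(sigma2), trials)
print("simulated mean/sd:", damage.mean(), damage.std())
```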
Regulation of Photosynthesis
Author: Lonnie Chrisman, Ph.D.
A model of how photosynthesis is regulated inside a cyanobacterium. As light exposure varies over time (you can experiment with various light-intensity waveforms), it simulates the concentration levels of key transport molecules along the chain -- through the PSII complex, the plastoquinone pool, and the PSI complex, down to metabolic oxidation. The dynamic response to light levels, or to changes in light levels, over time becomes evident, and the impact of changes to metabolic demand can also be observed. In the graph of fluorescence above, we can see an indicator of how much energy is being absorbed in three different cases (different light intensities). In the two higher-intensity cases, photoinhibition is observed -- a protective mechanism of the cell that engages when more energy is coming in than the cell can utilize. Excess incoming energy, in the absence of photoinhibition, causes damage, particularly to the PSII complex.
This model uses node shapes for a different purpose than is normally seen in decision-analysis models: ovals, instead of depicting chance variables, depict chemical reactions (where the value is the reaction rate), and rounded rectangles depict chemical concentrations.
Two models are attached. The first is a bit cleaner, and focused on the core transport chain, as described above. The second is less developed, but is focused more on genetic regulation processes.
- Photosynthesis Regulation.ana - main regulation pathways
- Photosystem.ana - rough sketch of genetic regulation.
Cross-Validation / Fitting Kernel Functions to Data
Author: Lonnie Chrisman, Ph.D., Lumina Decision Systems
Model: Cross-validation example.ana
When fitting a function to data, if you have too many free parameters relative to the number of points in your data set, you may "overfit" the data. When this happens, the fit to your training data may be very good, but the fit to new data points (beyond those used for training) may be very poor.
Cross-validation is a common technique for dealing with this problem. We set aside a fraction of the available data as a cross-validation set. Then we begin by fitting very simple functions to the data (with very few free parameters), successively increasing the number of free parameters and observing how predictive performance changes on the cross-validation set. It is typical to see improvement on the cross-validation set for a while, followed by a deterioration of predictive performance once overfitting starts occurring.
This example model successively fits a non-linear kernel function to the residual error, and uses cross-validation to determine how many kernel functions should be used.
Requires Analytica Optimizer: The kernel fitting function (Kern_Fit) uses NlpDefine.
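To make the procedure concrete, here is a Python sketch that swaps the model's kernel functions for simple polynomials (the kernel fitting itself requires the Optimizer): as the number of free parameters grows, validation error typically falls and then rises again.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # synthetic data

train = rng.random(x.size) < 0.7      # hold out ~30% as the validation set
for degree in range(1, 10):           # increasing number of free parameters
    coeffs = np.polyfit(x[train], y[train], degree)
    resid = y[~train] - np.polyval(coeffs, x[~train])
    print(degree, np.sqrt(np.mean(resid ** 2)))   # validation RMSE
```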
Statistical Bootstrapping
Model: Bootstrapping.ana
Bootstrapping is a technique from statistics for estimating the sampling error present in a statistical estimator. The simplest version estimates sampling error by resampling the original data. This model demonstrates how this is accomplished in Analytica.
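The core of the technique fits in a few lines. A minimal Python sketch (the data and the estimator -- here, the sample mean -- are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.exponential(2.0, 50)      # stand-in for the original data set

# resample the data with replacement, recomputing the estimator each time
boots = rng.choice(data, size=(2_000, data.size), replace=True)
means = boots.mean(axis=1)
print("bootstrap standard error of the mean:", means.std(ddof=1))
```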
Compression Post Load Calculator
Author: Lonnie Chrisman
Model: Compression_Post_Load_Capacity.ana
Computes the load that a Douglas Fir-Larch post can support in compression. It works for different timber types, grades, and post sizes.
Daylighting Options in Building Design
Author: Max Henrion
Model: Daylighting analyzer.ana
A demonstration showing how to analyze lifecycle costs and savings from daylighting options in building design.
The analysis is based on the Nomograph Cost/Benefit Tool for Daylighting, adapted from S.E. Selkowitz and M. Gabel, 1984, "LBL Daylighting Nomographs," LBL-13534, Lawrence Berkeley Laboratory, Berkeley, CA 94704, (510) 486-6845.
Plane Catching Decision with EVIU
Author: Max Henrion
Model: Plane catching decision with EVIU.ana
A simple model to assess what time I should leave home to catch a plane, with uncertain driving time, uncertain time to walk from parking to the gate (including security), and uncertainty in how long I need to be at the gate ahead of the scheduled departure time. It uses a loss model denominated in minutes: I value each extra minute snoozing in bed, and I set the loss from missing the plane at 400 of those minutes.
It illustrates EVIU (the expected value of including uncertainty), i.e., the difference in expected value between a decision chosen to minimize expected loss and a decision chosen to minimize time while ignoring uncertainty (fixing each distribution at its mid value). For more details, see "The Value of Knowing How Little You Know," Max Henrion, PhD dissertation, Carnegie Mellon University, 1982.
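The EVIU calculation is straightforward to mimic. In this Python sketch, every distribution and constant is invented for illustration; only the 400-minute miss penalty comes from the description above:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
drive = rng.lognormal(np.log(40), 0.3, n)   # minutes of driving (illustrative)
walk = rng.lognormal(np.log(15), 0.2, n)    # parking to gate, incl. security
gate_buffer = 20                            # minutes needed at the gate
miss_penalty = 400                          # missing the plane = 400 snooze-minutes

def expected_loss(leave_ahead):
    """Average loss (lost snooze time plus miss penalty) when leaving
    `leave_ahead` minutes before scheduled departure."""
    missed = drive + walk + gate_buffer > leave_ahead
    return leave_ahead + miss_penalty * missed.mean()

candidates = np.arange(60, 181)
losses = np.array([expected_loss(t) for t in candidates])
best = candidates[losses.argmin()]          # decision including uncertainty

naive = np.median(drive) + np.median(walk) + gate_buffer   # ignores uncertainty
eviu = expected_loss(naive) - losses.min()
print("optimal lead time:", best, "min; EVIU:", round(eviu, 1), "min")
```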
Marginal Analysis for Control of SO2 emissions
Author: Surya Swamy
Model: Marginal Analysis for Control of SO2 Emissions.ana
Acid rain in the eastern US and Canada is caused largely by sulfur dioxide, which is emitted primarily by coal-burning electric-generating plants in the midwestern US. This model demonstrates a marginal analysis (a.k.a. benefit/cost analysis) to determine the policy alternative that leads to the most economically efficient level of cleanup.
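The principle behind such an analysis is that cleanup is economically efficient up to the point where the marginal cost of further abatement equals its marginal benefit. A Python sketch with invented curves:

```python
import numpy as np

q = np.linspace(0, 10, 1_000)             # million tons of SO2 removed
marginal_cost = 0.5 * np.exp(0.4 * q)     # rises as cheap reductions run out
marginal_benefit = 8.0 - 0.6 * q          # falls as the air gets cleaner

# most efficient cleanup level: marginal benefit equals marginal cost
i = int(np.argmin(np.abs(marginal_benefit - marginal_cost)))
print("efficient abatement level: about", round(q[i], 2), "million tons")
```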
Electrical Generation and Transmission
Author: Lonnie Chrisman
Model: Electrical Transmission.ana
Requires Analytica Optimizer
This is a simple model of an electrical distribution network. At each node in the network there are power generators and power consumers (demand). Nodes are connected by branches (i.e., transmission lines), where each branch has a given admittance (the real part of the impedance is assumed to be zero) and a maximum capacity in Watts. Each power generator has a minimum and maximum generation capability and a given marginal rate per kilowatt-hour. The model uses a linear program to determine how much power each generator should produce so as to minimize total production cost, while satisfying demand and remaining within branch-capacity constraints.
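Stripped of the network (admittances and branch limits omitted), the dispatch problem is a small linear program. A Python sketch with made-up generators:

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 35.0, 50.0])     # $/MWh marginal rate per generator
g_min = np.array([0.0, 0.0, 0.0])       # minimum generation (MW)
g_max = np.array([100.0, 80.0, 120.0])  # maximum generation (MW)
demand = 180.0                          # total demand (MW)

res = linprog(c=cost,
              A_eq=[[1, 1, 1]], b_eq=[demand],   # generation meets demand
              bounds=list(zip(g_min, g_max)))
print(res.x)   # cheapest feasible dispatch: [100, 80, 0]
```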
Loan Policy Selection
Author: Lonnie Chrisman
Model: Loan policy selection.ANA
Best used with Analytica Optimizer
A lender has a large pool of money to loan, but needs to decide what credit-rating threshold to require and what interest rate (above prime) to charge. The optimal choice is determined by market forces (competing lenders) and by the probability that the borrower defaults on the loan, which is a function of the economy and of the borrower's credit rating. The model can be used without Analytica Optimizer, in which case you can explore the decision space manually or use a parametric analysis to find a near-optimal solution. Those with Analytica Optimizer can find the optimal solution (more quickly) using an NLP search.
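Parametric exploration of a two-decision space amounts to evaluating an objective over a grid. The Python sketch below uses a completely invented profit surface just to show the mechanics:

```python
import numpy as np

rates = np.linspace(0.01, 0.08, 30)     # interest margin above prime
cutoffs = np.linspace(550, 750, 30)     # minimum credit rating required

def expected_profit(rate, cutoff):
    # placeholder surface; the real model derives profit from competition
    # and default probabilities
    volume = np.maximum(0.0, 1.0 - 10 * rate) * (800 - cutoff) / 250
    default = 0.01 + 0.2 * rate - 0.00002 * (cutoff - 550)
    return volume * (rate - default)

R, C = np.meshgrid(rates, cutoffs)
best = np.unravel_index(np.argmax(expected_profit(R, C)), R.shape)
print("near-optimal rate:", R[best], "and cutoff:", C[best])
```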
Time-series re-indexing
Author: Lonnie Chrisman
Model: Time-series-reindexing.ana
This model demonstrates some basic techniques of time-series re-indexing.
In this example, actual measurements were collected at non-uniform time increments. Before analyzing them, we map them to a uniformly spaced time index (Week), falling on the Monday of each week. The mapping is done using interpolation. The evenly spaced data is then used to forecast future behavior. We first forecast over an index containing only future time points (Future_weeks), using a log-normal process model based on the historical weekly change. We then combine the historical data with the forecast on a common index (Week). A probability-bands graph of the weekly_data result shows the range of uncertainty projected by the process model (you'll notice that uncertainty exists only for future forecasted values, not historical ones).
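The first step -- interpolating irregular observations onto a Monday-of-each-week grid -- looks like this in Python with pandas (dates and values are made up):

```python
import pandas as pd

# irregularly timed measurements
obs = pd.Series([10.0, 12.5, 11.8, 14.2],
                index=pd.to_datetime(["2024-01-03", "2024-01-10",
                                      "2024-01-26", "2024-02-07"]))

weeks = pd.date_range("2024-01-08", "2024-02-05", freq="W-MON")
weekly = (obs.reindex(obs.index.union(weeks))   # merge the two time grids
             .interpolate(method="time")        # time-weighted interpolation
             .reindex(weeks))                   # keep only the Monday points
print(weekly)
```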
Multi-lingual Influence Diagram
Author: D. Rice, Lumina Decision Systems.
Model: French-English.ana
Maintains a single influence diagram with Title and Description attributes in both English and French. With the change of a pull-down, the influence diagram and all object descriptions are instantly reflected in the language of choice.
If you change a title or description while viewing English, and then change to French, the change you made will become the English-language version of the description. Similarly if you make a change while viewing French.
Smooth PDF plots using Kernel Density Estimation
Author: D. Rice, Lumina Decision Systems
Model: Kernel_Density_Estimation.ana
This example demonstrates a very simple fixed-width kernel density estimator to estimate a "smooth" probability density. The built-in PDF function in Analytica often has a choppy appearance due to the nature of histogramming -- it sets up a set of bins and counts how many points land in each bin. A kernel density estimator smooths this out, producing a less choppy PDF plot.
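A fixed-width Gaussian kernel estimator is only a few lines in Python; this sketch mirrors the idea (the bandwidth and data are illustrative):

```python
import numpy as np

def kde(samples, grid, width):
    """Fixed-width Gaussian kernel density estimate over `grid`."""
    u = (grid[:, None] - samples[None, :]) / width
    return np.exp(-0.5 * u**2).sum(axis=1) / (samples.size * width * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
samples = rng.normal(0, 1, 500)
grid = np.linspace(-4, 4, 200)
density = kde(samples, grid, width=0.3)   # smooth alternative to a histogram PDF
```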
Output and Input Columns in Same Table
Author: D. Rice, Lumina Decision Systems
Model: Output and input columns.ana
Presents an input table to the user in which one column is populated with computed output data and the other column contains checkboxes for the user to select. Although the Output Data column isn't read-only, as would be desired, a Check attribute has been configured to complain if the user tries to change values in that column. The model that uses these inputs ignores any changes made to data in the Output Data column.
Populating the Output Data column requires pressing a button, which runs a button script to fill in that column. This button is presented on the top-level panel. If you change the input values, the output data will change, and the button must be pressed again to refresh the Output Data column.
Linearizing a discrete NSP
Author: P. Sanford, Lumina Decision Systems
Model: Cereal Formulation1.ana
A cereal-formulation model: a discrete mixed-integer model that chooses product formulations to minimize total ingredient cost. The problem would otherwise be an NSP (non-smooth problem), but two methods are used to linearize it: 1) the decision variable is constructed as a constrained Boolean array, and 2) prices are defined as piecewise-linear curves. A sketch of the Boolean-array idea follows.
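To show what "decision variable as a constrained Boolean array" means in miniature, here is a Python sketch using SciPy's mixed-integer LP solver; the costs and the pick-exactly-one constraint are invented stand-ins for the model's formulation choices:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cost = np.array([3.0, 2.4, 2.9, 3.5])               # cost of each candidate
pick_one = LinearConstraint(np.ones((1, 4)), 1, 1)   # exactly one selected
res = milp(c=cost, constraints=pick_one,
           integrality=np.ones(4),                   # all variables Boolean
           bounds=Bounds(0, 1))
print(res.x)   # selection vector: picks the $2.40 formulation
```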
Neural Network
Author: Lonnie Chrisman, Lumina Decision Systems
Model: Neural-Network.ana
A feed-forward neural network can be trained (fit to training data) using the Analytica Optimizer. This is essentially an example of non-linear regression. This model contains four sample data sets, and is set up to train a 2-layer feedforward sigmoid network to "learn" the concept represented by the data set(s), and then test how well it does across examples not appearing in the training set.
Developed during the Analytica User Group Webinar of 21-Apr-2011 -- see the webinar recording.
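For orientation, the same kind of fit can be mimicked with a generic optimizer in Python (standing in for NlpDefine); the data set, network size, and training details below are all illustrative, and a fit like this may need a few random restarts:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy XOR-like concept

hidden = 4
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def predict(w, X):
    # unpack a flat weight vector into a 2-layer sigmoid network
    W1 = w[:2 * hidden].reshape(2, hidden)
    b1 = w[2 * hidden:3 * hidden]
    W2 = w[3 * hidden:4 * hidden]
    b2 = w[-1]
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

def loss(w):
    return np.mean((predict(w, X) - y) ** 2)   # squared error, as in NLP fitting

res = minimize(loss, rng.normal(0, 1, 4 * hidden + 1), method="BFGS")
print("training accuracy:", np.mean((predict(res.x, X) > 0.5) == y))
```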