Memory usage and management
Overview
Some models may require large amounts of memory, especially when they contain arrays with many dimensions (Indexes). When the memory used by your model approaches the available memory on your computer, you may see an error saying that there is insufficient memory to carry out the computation, or you may experience slow performance as the process starts using virtual memory. This page explains how to monitor memory usage. It also suggests ways to work around memory bottlenecks, or modify your model to reduce memory usage.
Viewing memory usage
You can monitor the total amount of memory in use by your Analytica process by using the Memory Usage dialog. See also Memory usage.
Memory limits and memory types
Roughly speaking, there are two memory limits on your computer. The amount of RAM (random access memory) is the starting point. The maximum available memory adds virtual memory (VM), which means paging some data out to a page file on disk. If your model needs more than the maximum available memory (RAM plus page file, less the space used by other processes, including the Windows OS), Analytica reports an insufficient memory error.
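For example (numbers purely illustrative): a machine with 16 GB of RAM and a 32 GB page file has a ceiling of roughly 48 GB, and the amount actually available to your model is that ceiling minus whatever Windows and other running processes are already using.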
When the memory your model needs exceeds the available RAM and it starts to use VM, performance can slow dramatically. This slowdown occurs because the operating system has to swap memory between RAM and a hard disk, and disk is much slower than RAM. The degree of slowdown depends on how localized the memory demands are. If a single array's storage space exceeds RAM, the slowdown may be intolerable. If every array is much smaller than RAM and the total usage exceeds RAM only because the model has a lot of variables, the model might not experience much slowdown.
Hardware cures for memory limits
Nowadays many computers, including laptops, have an SSD (solid-state drive), which is much faster than a hard disk, though not as fast as RAM. Using an SSD can greatly alleviate the slowdown from use of VM, so adding one to your computer can sometimes be an easy fix for a memory-intensive model that has slowed down. Or you can simply add more RAM, which gets ever cheaper.
Configuring virtual memory
If you need more memory than the amount of RAM on your computer, you can increase your virtual memory (VM). Windows lets you adjust the size of its page file -- i.e. the memory available to your process beyond RAM. You can configure this from My Computer → Properties → Advanced system settings → Performance → Settings → Advanced → Virtual Memory (Change)
Windows tends to freeze for several minutes when it needs to increase the size of its page file, so you should configure a custom initial size large enough for your large computations. If you have (or add) a solid-state drive (SSD), make sure to locate your page file on the SSD.
These virtual memory settings illustrate the recommended configuration. On this computer, the paging file has been located on the solid-state drive E: for maximum speed; to ensure it uses the SSD, C: is set to No paging file. A Custom size is specified for E: -- this is important. We strongly recommend against using the System managed size option (unless you enjoy having Windows freeze for several minutes). Set the Initial size to something large enough to accommodate the largest memory usage you might ever encounter. In this example the Maximum size has been set a bit larger, but in reality there is little benefit to setting the Maximum size to anything larger than the Initial size, since you should never exceed the initial size.
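For example (numbers purely illustrative): if the largest computation you ever expect to run peaks at around 40 GB of memory, set the Initial size to 40 GB; per the advice above, setting the Maximum size to the same 40 GB is then sufficient, since a larger maximum adds nothing if the initial size already covers your worst case.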
How to reduce memory used by your model
There are several ways to reduce memory use by a model, described below. But before you do, we strongly recommend starting out by using the Performance Profiler library to find out which variables are using most of the memory. This is a feature available in Analytica Enterprise (and above). The Profiler shows how much memory (and computation) is used by each variable. Often you will find that just a few variables consume the bulk of memory. Then you can focus your efforts to reduce memory usage on just those variables.
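For a quick spot check on individual suspects, without opening the Profiler, you can also evaluate the MemoryInUseBy function (listed under See Also). A minimal sketch, where Big_array is a hypothetical identifier standing in for a variable in your own model:

MemoryInUseBy(Big_array)  { bytes used to store the computed value of Big_array }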
Reduce dimensionality
Excessive memory usage is usually the result of arrays with too many dimensions (Indexes). Analytica makes it very easy to add indexes to your model, but their effect on space and time is multiplicative when they appear in the same array. Finding the right trade-off between dimensionality, level of detail, and resource usage (memory and computation time) is an inherent aspect of model building.
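For example (sizes purely illustrative, assuming very roughly 10 bytes per numeric cell): a single array indexed by 100 products, 50 regions, 120 months and a sample size of 1000 holds 100 × 50 × 120 × 1000 = 600 million cells, on the order of 6 GB for just that one variable. Dropping any one of those indexes shrinks the array by the corresponding factor.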
When reducing dimensionality, focus on individual arrays that consume a lot of space as a result of high dimensionality. Can you reformulate your problem using a lower-dimensional array?
Selective parametric analysis
Parametric analysis refers to the practice of inserting an extra index into a model input in order to explore how outputs vary across a range of values for that input. Analytica makes it easy to do this for several inputs simultaneously, but doing so adds an extra dimension for each parametric variable. Selective parametric analysis means selecting just a few inputs for parametric analysis at a time, leaving the other inputs at a single point. You can repeat the process, selecting a different subset of inputs for parametric analysis. First use a Tornado chart, which varies just one uncertain input at a time, to identify the most sensitive variables deserving of parametric analysis. You can facilitate selective parametric analysis by using a Choice menu input control for each potential parametric input. See Creating a choice menu.
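A minimal sketch of this setup, following the Creating a choice menu pattern (the identifiers and values here are hypothetical):

Index Rate_options := [0.03, 0.05, 0.07]
Variable Discount_rate := Choice(Rate_options, 2)

The menu lets you fix Discount_rate at a single rate while you study other inputs. When you do want the parametric sweep over Rate_options, select all of the values (see Creating a choice menu for how to offer an All option), so the extra dimension is present only while you need it.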
Configuring caching
Whenever Analytica computes the result of a variable, it stores the result in memory. If the value is requested later by another variable, or if the user views the result, Analytica simply returns the previously computed result without having to recompute it. This is referred to as caching.
It may be unnecessary to cache many of the intermediate variables within your model, or to keep the cached value once all the children of that variable have been computed. You can configure how cached results are retained by setting the CachingMethod attribute for each variable[1]. You can configure a variable to always cache its results, never cache, never cache array values, or release cached results after all children are fully computed. To control the CachingMethod, you must first make the attribute visible as described in Managing attributes. Note that you should always cache results for any variable containing a random or distribution function. There are some other limitations and interactions to be aware of when managing caching policies, as described further in Controlling When Result Values Are Cached on the Analytica Wiki.
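As an illustration of where a non-default caching policy pays off (the identifiers and indexes here are hypothetical):

Variable Intermediate_cube := Unit_sales * Unit_price  { large: indexed by Product, Region and Run }
Variable Final_summary := Mean(Intermediate_cube, Run)  { small result }

Once Final_summary has been computed, the cached value of Intermediate_cube serves no further purpose, so giving Intermediate_cube the option that releases cached results after all children are fully computed frees that storage without changing any result. Final_summary itself, and any variable whose definition calls a random or distribution function, should be left to always cache.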
Looping over the model
Analytica’s array abstraction computes the entire model in one pass. Because all intermediate variables are normally cached, this means that the results for all scenarios, across all indexes, are stored in memory for all variables.
An alternative is to loop over key dimensions, computing the entire model for a single case at a time, and collecting only the results for the final output variable(s) in each case. This consumes only the amount of memory required to compute the entire model for a single scenario, plus the memory for the final results.
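For example (sizes purely illustrative): if the model's intermediate arrays occupy about 2 GB when computed for one scenario, and there are 100 scenarios, computing everything in one pass needs on the order of 200 GB, whereas looping needs roughly the 2 GB working set plus whatever the collected final results occupy.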
The basic technique is illustrated by the following example. Suppose your model has an input X indexed by I, and an output Y. We also assume that no intermediate steps in the model operate over the index I. Then Y can be computed by looping over I using:

For xi[ ] := X do WhatIf(Y, X, xi)
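In this sketch, the empty brackets declare xi to be atomic, so the For loop steps through the values of X one element of I at a time; each WhatIf(Y, X, xi) call evaluates Y with X temporarily replaced by that single value; and the loop reassembles the per-element results along I. The answer should match the ordinary one-pass computation, but only one slice of the model's intermediate results is held in memory at a time.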
This simple example is illustrative. As you apply this to complex models involving multiple inputs, outputs and looping dimensions, a greater level of sophistication becomes necessary. The article Looping over a model on the Analytica Wiki covers this topic in greater detail.
Large sample library
Monte Carlo simulations with a very large sample size may also lead to insufficient memory. The Large sample library runs a Monte Carlo simulation in small batches, collecting the entire Monte Carlo sample for a selected subset of output variables. The Large Sample Library User Guide is found on the Analytica Wiki, where the library itself is also available for download.
CompressMemoryUsedBy(A)
In some cases, the CompressMemoryUsedBy() function can reduce the amount of memory consumed by an array A. The logical contents of A remain unchanged, so you will not see any difference, other than possibly a drop in memory usage.

Analytica’s internal representation of arrays can accommodate certain forms of sparseness, and when such patterns of sparseness occur within A, CompressMemoryUsedBy(A) condenses the internal representation to take advantage of them. Two forms of sparseness may occur: shared subarrays and constant subvectors. For more details on these forms of sparseness, see CompressMemoryUsedBy on the Analytica Wiki.
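A minimal usage sketch (Big_table is a hypothetical identifier for a large array in your model):

CompressMemoryUsedBy(Big_table)

Comparing MemoryInUseBy(Big_table) before and after the call shows how much, if anything, was saved; if the array contains neither form of sparseness, the call simply leaves it unchanged.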
Notes
- ↑ Use of CachingMethod requires Analytica Enterprise.
See Also
- Memory usage
- Memory Usage Dialog
- How To Access More Memory
- Managing Memory and CPU Time for large models
- The Large Sample Library User Guide
- Performance Profiler library
- Controlling When Result Values Are Cached
- CachingMethod
- MemoryInUseBy
- CompressMemoryUsedBy
- Working Set Size
- GetProcessInfo
- TestHeapConsistency
- Clock