Managing Memory and CPU Time for Large Models
With small and medium-sized models, you don't usually have to worry about memory or waiting for long computations. Analytica computes quickly and manages memory for you. With large models, your computer may run out of memory, and/or you may find yourself waiting for the model to compute. This section explains how Analytica uses memory, and suggests ways to make your models faster or more memory-efficient.
How much memory can Analytica 32-bit use?
Any 32-bit process is limited to a 4GB address space (2^32 = 4G; GB = gigabytes). On Windows, a 32-bit process is limited to 4GB, 3GB, or 2GB of memory space, depending on which edition of Windows you are running and how you have it configured.
If you are running a 64-bit edition of Windows, then 32-bit Analytica can use up to 4GB of memory space.
If you are running a 32-bit edition of Windows, then 32-bit Analytica can use up to 3GB of memory space, provided you have the /3GB setting enabled in your boot.ini (see How To Access More Memory), or up to 2GB if you don't have this option set. The option is not set when Windows is first installed, so you should set it.
If your computer has less than 4GB of RAM (Random Access Memory), or if other processes are consuming part of that memory, then Windows can supplement the actual RAM with virtual memory, but only up to the 2GB, 3GB, or 4GB address-space limit above. Virtual memory operates by swapping pages between RAM and the disk drive. Since it is much slower to access a hard disk than RAM, Analytica may slow down significantly when it starts to use virtual memory. Since RAM is relatively cheap these days, we recommend that you install at least 4GB of RAM in your 32-bit computer if you plan to run large Analytica models. There is some benefit in installing more than 4GB, since other applications and the operating system can make use of it, even if a single Analytica process cannot.
How does Analytica 64-bit help?
A 64-bit computer with a 64-bit Windows operating system lets Analytica, like other applications, access a much larger amount of memory, with the exact limit depending on the version of Windows. For faster running of large models, you should install at least 8GB of RAM, or more if Windows supports it. If the amount of memory used by Analytica exceeds the amount of RAM, it will use virtual memory to swap content to the hard disk.
Once your model's memory usage exceeds the amount of RAM, it may slow down as it swaps virtual memory to and from the hard disk. For some models the slowdown is hard to perceive, while for others it can be intolerable. The amount of slowdown depends on how localized the computations are. When individual arrays are gigantic and consume more than about 1/3 of the available RAM, then an operation as simple as A+B may require the entire contents of RAM to be copied back and forth between disk and memory multiple times, which can slow things down by 3 or 4 orders of magnitude. This happens because the arrays A, B, and A+B cannot all fit in memory at the same time, a phenomenon known as "thrashing". Hence, a good heuristic is that the amount of RAM should be at least three times the space required by the largest array in the model. At the other end of the spectrum, a huge model with tens of thousands of variables, but where individual arrays require far less memory than the available RAM, will usually evaluate fast even when the total memory usage is tens of times greater than your available RAM.
One simple way to obtain a further speed advantage on large models is to install a solid-state drive (SSD) and configure your virtual memory settings to place the page file on the SSD.
You should also configure your virtual memory settings to preallocate your virtual memory, using a large minimum page file size. Do not "allow system to manage" the virtual memory. We have observed that when Windows needs to increase its page file size, it freezes the entire system (all applications, mouse and everything) for many minutes. During that time, your computer is entirely unresponsive, and you may even conclude that it has crashed. The way to avoid that is to configure the minimum page file size to be large enough to accommodate any memory your models might ever use, so that Windows never needs to expand its page file.
How much memory does Analytica use for an array?
Analytica uses double precision to represent each number, using 8 bytes plus 4 bytes of overhead, for a total of 12 bytes per number. So a 2D array A with index I of size 100 and J of size 1000 uses about 100 x 1000 x 12 = 1,200,000 bytes = 1.2 megabytes of memory. (Approximately: if you want to be inordinately precise, a megabyte is 1024 x 1024 = 1,048,576 bytes, and there is an overhead of 40 bytes for the first dimension of an array and 38 bytes for each element of the first dimension.)
Analytica represents an uncertain number as a random sample of numbers, with Index Run as an extra dimension of length SampleSize. If array A is uncertain and the Sample size is 200, it uses about 1.2MB x 200 = 240MB to store the probabilistic value.
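For instance, here is a minimal sketch of this example (the use of Uniform and its Over parameter is just one way to make A an uncertain array indexed by I and J):
Index I := 1..100
Index J := 1..1000
Variable A := Uniform(0, 1, Over: I, J)  { an uncertain value, indexed by I and J }
The mid value of A needs about 100 x 1000 x 12 bytes ≈ 1.2 MB, and with SampleSize set to 200, the sample adds the Run dimension, needing about 1.2 MB x 200 ≈ 240 MB.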
The above sizes are worst case. Analytica uses an efficient representation for sparse arrays -- e.g., arrays in which most values are zero. If an array is a copy of all or part of another array, the arrays may share the slices they have in common, which can also save a lot of memory.
How can I measure how much time and memory is used by each variable?
Analytica Enterprise (and higher) contains a function MemoryInUseBy(v) that returns the number of bytes used by the cached Mid and ProbValue of variable v. (If v hasn't yet been evaluated, it doesn't cause it to be evaluated; it just returns 24 bytes, the space allocated for an unevaluated quantity.) It also provides two read-only Attributes that apply to variables and User-defined Functions:
The Attribute EvaluationTime gives the time in seconds to evaluate the variable, not including its inputs. EvaluationTimeAll gives the time, including time to evaluate all its inputs (and their inputs, and so on), since the last call to ResetElapsedTimings(), which resets all these attributes back to zero.
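For example, assuming a variable A that has already been evaluated (the names here are hypothetical, and this sketch assumes Analytica's usual attrib of obj syntax for reading attributes in expressions):
MemoryInUseBy(A)        { bytes used by A's cached Mid and ProbValue }
EvaluationTime of A     { seconds to evaluate A itself, excluding its inputs }
EvaluationTimeAll of A  { seconds including the time to evaluate all of A's inputs }
ResetElapsedTimings()   { resets the timing attributes back to zero }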
Analytica Enterprise includes the Profiler Library, which lists the memory used and evaluation time for every variable in the model. You can sort the results in descending order to see the variables that use the most memory or evaluation time at the top.
Can I control which variable results are stored?
The CachingMethod attribute controls whether the computed results of calculations are stored (cached), and whether the cached values are released once all children are fully computed. It is usually a bad idea to turn caching off for output variables and chance variables, and you should be very careful about turning it off for variables in dynamic loops. See CachingMethod for details.
What happens when it runs out of memory?
When Analytica reaches the limits of memory, it aborts any pending but incomplete computations, and displays a message saying so. In some cases, it can take a while to release the memory blocks used by the aborted computation, during which time Analytica may be unresponsive. The time required for this rollback depends primarily on how much of the existing memory space has been swapped out to virtual memory, since it has to be swapped back into RAM just so it can be released. After aborting the computation and releasing the memory used for intermediate computations, a large amount of memory may again become available. However, some results may have completed their computation, and their values will remain cached.
A variable can use more memory to evaluate than to store
Consider variable X:
Index I := 1..1000
Index J := 1..2000
Variable X := Sum(I + J, J)
X is a 1-dimensional array, indexed by I, but not by J, because its definition sums over J. It needs only 1000 x 12 = 12 KB to store the value of X. But, during evaluation, it computes I + J, which is indexed by I and J, and so needs 1000 x 2000 x 12 = 24 MB temporarily during the computation. To reduce memory usage, you could modify the definition of X:
X := FOR i1 := I DO Sum(i1 + J, J)
The FOR loop iterates with i1 set to a scalar (single number), successively taking each element of Index I. In each iteration, i1 + J generates a 1D array, needing only 1 x 2000 x 12 = 24 KB of memory. It then sums the 1D array over J to return a single number as the value of the loop body for each iteration. The result is indexed by the iteration index I. It returns the same result as the first definition of X, but now uses a maximum of only about 12 KB + 24 KB = 36 KB during evaluation.
Dot product, Sum(A*B, I) is memory-efficient
For the special case of summing a product of two variables, A and B, over a common index I, Analytica automatically uses the method above. Thus, it does not compute A*B as an intermediate array, which might be very large. It uses no more memory than is needed for the result, which is not indexed by I.
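For example (with hypothetical names and placeholder definitions), suppose Price and Quantity are both indexed by Product:
Index Product := 1..100K
Variable Price := 10/Product    { placeholder values, indexed by Product }
Variable Quantity := Product    { placeholder values, indexed by Product }
Variable Revenue := Sum(Price * Quantity, Product)
Analytica computes Revenue without ever materializing the full Price * Quantity array, so evaluation needs little more memory than the scalar result itself.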
Selective Parametric Analyses
You may run into memory issues when you perform a parametric analysis on many inputs simultaneously. In a parametric analysis, you introduce a dimension into one or more inputs in order to study how the output varies as these inputs vary. For example, you may change a scalar input to a list of values, thereby introducing a new dimension into your results.
The time and memory complexity of a parametric analysis is multiplicative in the sizes of the dimensions introduced. So, if you perform a parametric analysis on 5 input variables simultaneously, using 3 possible values for each, you multiply the complexity by 3^5 = 243. Using 10 possible values for each would increase complexity by a factor of 10^5 = 100,000.
A nice way to avoid this problem is to perform Selective Parametric Analysis. Here you choose a small subset of inputs to vary parametrically for each run. As you explore your model, you may change which subset you are varying and recompute your results. By using a combination of Choice pulldowns for inputs and DetermTables, you can configure your model to make selective parametric analysis very convenient. The technique for setting this up is described at Selective Parametric Analysis.
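As a rough sketch of the idea (the names and values here are hypothetical), a scalar input such as
Variable Rate := 0.1
can be temporarily redefined as a list for a parametric run:
Variable Rate := [0.05, 0.1, 0.15]
In a selective parametric setup, you keep both alternatives in the model and use a Choice pulldown (typically combined with a DetermTable) to switch each input between its base value and its parametric list, so that only the inputs currently selected contribute extra dimensions. The full technique is described at Selective Parametric Analysis.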
Looping over the model
If your model does not inherently operate over an index I (for example, a scenario or parametric index), and you only need the results for a small number of output variables, you can reduce overall memory usage by using horizontal array abstraction. Normally, Analytica computes models using vertical array abstraction, in which each intermediate variable is computed entirely, across all dimensions, before moving on to the next child variable. The idea of horizontal abstraction is to loop over your model, i.e., to compute all variables for one scenario only, then go back and compute all variables for the next scenario, and so on. When a model does not operate over the index (or indexes) in question, the two types of array abstraction are equivalent. Horizontal abstraction saves memory if you only keep the results for a single output, or a small set of outputs, so that intermediate values aren't cached for every scenario.
This technique can also be used for large-scale Monte Carlo sampling.
Horizontal abstraction requires you to explicitly specify a looping construct in your model. Details on how to implement this are described at Looping over a model.
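As a rough sketch (with hypothetical names), assume a scalar input Scenario_num and a final output Net_value, and use WhatIf to recompute the model for one scenario at a time:
Index Scenario := 1..100
Variable Looped_result := FOR n := Scenario DO WhatIf(Net_value, Scenario_num, n)
Each iteration evaluates the model with Scenario_num temporarily set to a single scenario, so intermediate variables hold values for only one scenario at a time, while Looped_result collects the output across all scenarios.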