Managing Memory and CPU Time for Large Models

With small and medium-sized models, you don't usually have to worry about memory or waiting for long computations. Analytica works fast and handles these things for you. With large models, your computer may run out of memory, or you may find yourself waiting around for the model to compute. This section explains how Analytica uses memory, and suggests ways to make your models faster or more memory-efficient.

What to do when your model is slow

Sometimes you may find that a model takes a long time to compute. Or it may simply run out of memory and show a warning message. The first thing to do is to figure out why. You can open the Memory Usage Dialog from the Windows menu to see how much memory Analytica is using, including current and peak usage of RAM and virtual memory (VM). See the next section to find out which variables or functions are taking the most memory. In brief, here are some things you can do:

  • Increase the size of the page file used for Virtual Memory (VM).
  • Use a computer with a solid-state drive (SSD) to hold the VM page file. SSDs have radically faster read and write speeds than conventional hard drives.
  • Use a computer with more RAM, or add more RAM to your computer. RAM is remarkably cheap these days.
  • Use the Performance Profiler library to identify which variables or functions use the most memory or computation time. Then figure out how to make them more efficient.
  • If your model uses Monte Carlo simulation for uncertainty, you can reduce the SampleSize or use the Large_Sample_Library.

See below for more on these.

How can I see how much time and memory is used by each variable?

Analytica Enterprise and above include the Performance Profiler library, which shows the memory used and evaluation time for every variable and function in the model. It lets you sort the results in descending order, so that the variables or functions using the most memory or evaluation time appear at the top. This is extremely useful when you are trying to figure out which parts of a model use the most memory or computation. Usually, only a few take the lion's share, so you know where to focus your attention if you want to reduce memory usage or computation.

The Profiler uses a function MemoryInUseBy(v) that returns the number of bytes used by the cached Mid and ProbValue of variable v. (If v hasn't yet been evaluated, MemoryInUseBy doesn't cause it to be evaluated; it just returns 24 bytes, the amount allocated for an unevaluated quantity.)
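For example (a minimal sketch, where Revenue stands in for any variable in your model):

MemoryInUseBy(Revenue)   { bytes held by Revenue's cached Mid and ProbValue; returns 24 if Revenue hasn't yet been evaluated }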

It also uses two read-only Attributes that apply to variables and User-Defined Functions. The Attribute EvaluationTime gives the time in seconds to evaluate the variable itself, not including its inputs. EvaluationTimeAll gives the time including the time to evaluate all of its inputs (and their inputs, and so on). Both are measured since the last call to ResetElapsedTimings(), which resets all these attributes back to zero. The function MemoryInUseBy and the attributes EvaluationTime and EvaluationTimeAll are available only in Analytica Enterprise and above.
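For instance, to time one variable from a clean slate (a sketch — Npv is a hypothetical variable, and it assumes the standard «attrib» of «obj» syntax for reading attributes):

ResetElapsedTimings();               { zero all the timing attributes }
Var r := Mid(Npv);                   { force Npv and its inputs to evaluate }
[EvaluationTime of Npv, EvaluationTimeAll of Npv]   { own time; time including inputs }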


How much memory does Analytica use for an array?

Analytica uses double-precision to represent each number, using 8 bytes plus 4 bytes overhead, for a total of 12 bytes per number. So, a 2D array A with indexes of size 100 and 1000 uses approximately 100 x 1000 x 12 = 1,200,000 bytes = 1.2 Megabytes of memory. (If you want to be inordinately precise, a Megabyte is 1024x1024 = 1,048,576 bytes, and there is an overhead of 40 bytes for the first dimension of an array and 38 bytes for each element of the first dimension.)
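You can check this estimate against what Analytica actually allocates using MemoryInUseBy (Enterprise and above). A sketch, with arbitrary identifiers:

Index Rows := 1..100
Index Cols := 1..1000
Variable TestArray := Rows + Cols    { a 100 x 1000 array of numbers }
{ After viewing TestArray's result, so that it gets evaluated: }
MemoryInUseBy(TestArray)             { ≈ 100 x 1000 x 12 = 1.2M bytes, plus small overhead }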

Analytica represents an uncertain number as a random sample of numbers, with Index Run as an extra dimension of length equal to SampleSize. If array A is uncertain and the Sample size is 1000, it uses about 1.2MB x 1000 = 1.2 GB to store the probabilistic value.

The above sizes are worst case. Analytica uses an efficient representation for sparse arrays -- e.g. when most of its values are zero. If an array is a copy of all or part of another array, subarrays may share the slices they have in common, which can also save a lot of memory.

Can I control which variable results are stored?

Normally Analytica stores (caches) the computed value of each variable, which makes interaction faster but uses more memory. You can reduce memory usage by controlling this behavior with the CachingMethod attribute. It controls whether the computed results of calculations are stored (cached), and whether cached values are released once all children are fully computed. It is usually a bad idea to turn caching off for output variables and chance variables, and you should be very careful about turning it off for variables in dynamic loops. See CachingMethod for details.

What happens when Analytica runs out of memory?

When Analytica reaches the limits of memory, it aborts any incomplete computations and displays a warning message. Sometimes it can take a while to release the memory blocks used by the aborted computation, during which time Analytica may be unresponsive. The time required for this rollback depends mainly on how much of the memory space has been swapped out to virtual memory, since it must be swapped back into RAM just to be released. After aborting the computation and releasing the memory used for intermediate computations, the amount of available memory may return to a large number. However, some variables may have completed their computation, and their values remain cached.

Using Virtual Memory

Virtual Memory (VM) lets Analytica (or any application) use more memory than is available in RAM (Random Access Memory). Once your model uses more memory than is available in RAM (some of which may be used by Windows and other applications), it starts swapping memory out to Virtual Memory, which is allocated on the hard disk. Often a model slows down after it starts to use VM, because reading and writing to a disk is much slower than RAM. Sometimes the slowdown is slight, and sometimes it may become intolerable. The amount of slowdown depends on how localized the computations are. When individual arrays are gigantic and consume more than about 1/3 of the available RAM, an operation as simple as A + B may require the entire contents of RAM to be copied back and forth between disk and memory multiple times, which can slow things down by 3 or 4 orders of magnitude. This is because the arrays A, B, and A + B cannot all fit in memory at the same time. This effect is known as "thrashing". Hence, a good heuristic is that you should have enough RAM for at least three times the space required by the largest array in the model. On the other end of the spectrum, a huge model with tens of thousands of variables, but where individual arrays require far less memory than the available RAM, will usually evaluate fast even when the total memory usage is tens of times greater than your available RAM.

One simple way to speed up large models that use VM is to install a solid-state drive (SSD) and configure your virtual memory settings to store the page file on the SSD. Many newer computers already come with an SSD.

You should configure your VM settings to preallocate virtual memory with a large minimum page file size, say 4 to 10 times the RAM you are using. For large models, we encourage you to do this yourself rather than use the Windows defaults. Often, when Windows needs to increase its page file size, it freezes the entire system (all applications, the mouse, everything) for many minutes. During that time, your computer is entirely unresponsive, and you may even conclude that it has crashed. If you configure the minimum page file size to be large enough to accommodate any memory your models might ever use, Windows will never need to expand its page file.

A variable can use more memory to evaluate than store

Consider variable X:

Index I := 1..1000
Index J := 1..2000
Variable X := Sum(I + J, J)

X is a 1-dimensional array, indexed by I, but not by J, because its definition sums over J. It needs only 1000 x 12 = 12 KB to store the value of X. But, during evaluation, it computes I + J, which is indexed by I and J, and so needs 1000 x 2000 x 12 = 12 MB temporarily during the computation. To reduce memory usage, you could modify the definition of X:

X := FOR i1 := I DO Sum(i1 + J, J)

The FOR loop iterates with i1 set successively to each element of Index I, as a scalar (single number). In each iteration, i1 + J generates a 1D array, needing only 1 x 2000 x 12 = 24 KB of memory. It then sums the 1D array over J to return a single number as the value of the loop body for that iteration. The result is indexed by the iteration index I. It returns the same result as the first definition of X, but now uses a maximum of only about 12 KB + 24 KB = 36 KB during evaluation.

Dot product, Sum(A*B, I) is memory-efficient

For the special case of summing the product of two arrays, A and B, over a common index I, Analytica automatically uses the method above. Thus, it does not compute A*B as an intermediate array, which might be very large, and it uses no more memory than needed for the result, which is not indexed by I.
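For example, in this sketch (arbitrary names), the definition of Dot never allocates the 12 MB intermediate array A*B, even though A and B each occupy about 12 MB:

Index K := 1..1000000
Variable A := K/1000000
Variable B := 1 - A
Variable Dot := Sum(A*B, K)   { accumulated element-by-element over K; no A*B array is materialized }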

Selective Parametric Analysis

You may run into memory issues when you perform a parametric analysis on many inputs simultaneously. In a parametric analysis, you introduce a dimension into one or more inputs in order to study how the output varies as these inputs vary. For example, you may change a scalar input to a list of values, thereby introducing a new dimension into your results.

The time and memory complexity of a parametric analysis is multiplicative in the number of dimensions introduced. So, if you perform a parametric analysis on 5 input variables simultaneously, using 3 possible values for each, you multiply the complexity by 3^5 = 243. Using 10 possible values for each would increase complexity by a factor of 10^5 = 100,000.

A nice way to avoid this problem is to perform Selective Parametric Analysis. Here you choose a small subset of inputs to vary parametrically for each run. As you explore your model, you can change which subset you are varying and recompute your results. By using a combination of Choice pulldowns for inputs and DetermTables, as in the sketch below, you can configure your model to make selective parametric analysis very convenient. The technique for setting this up is described at Selective Parametric Analysis.
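A rough sketch of the pattern (all identifiers are hypothetical, and the textual DetermTable form shown is an assumption — see Selective Parametric Analysis for the exact setup):

Variable Vary_price := Choice(Self, 1)    { domain: ['No', 'Yes'] }
Variable Price := DetermTable(Vary_price)(
   100,                      { 'No': the usual scalar value }
   Sequence(80, 120, 10))    { 'Yes': a parametric list 80, 90, ..., 120 }

When Vary_price is set to 'No', Price stays scalar and adds no dimension; switching it to 'Yes' introduces the parametric dimension for just that one input.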

Looping over the model

If your model does not operate over a given index (call it I), and you only need the results for a small number of output variables, you can reduce overall memory usage by using horizontal array abstraction. Normally, Analytica computes models using vertical array abstraction, in which each intermediate variable is computed entirely, across all dimensions, before moving on to the next variable. The idea of horizontal abstraction is to loop over your model: compute every variable for one scenario only, then go back and compute every variable for the next scenario, and so on. When a model does not operate over the index (or indexes) in question, the two types of array abstraction give equivalent results. Horizontal abstraction saves memory when you keep the results for only a single output, or a small set of outputs, so that intermediate values aren't cached for every scenario.

This technique can also be used for large-scale Monte Carlo sampling.

Horizontal abstraction requires you to explicitly specify a looping construct in your model. Details on how to implement this are described at Looping over a model.
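As a minimal sketch of the pattern (hypothetical names — the single-scenario model is wrapped in a User-Defined Function so that its intermediate values are scalars):

Index Scenario := 1..100
Function One_run(x : Number) :=
   Var a := x^2;
   Var b := Sqrt(x);
   a + b
Variable Results := For s := Scenario Do One_run(s)

Each iteration passes a single scalar s through the model, so the intermediates a and b are discarded after each pass; only Results, indexed by Scenario, is cached.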

Analytica 32-bit vs 64-bit

Nowadays almost all Windows computers use 64-bit words, and almost all applications, including Analytica, do likewise. We no longer offer a 32-bit version of Analytica. In earlier days, many Windows systems used 32-bit words, as did Analytica. Any 32-bit process is limited to a 4 GB address space (2^32 bytes = 4 GB, where GB = gigabytes). On Windows, a 32-bit process is limited to 4 GB, 3 GB, or 2 GB of memory space, depending on which edition of Windows you are running and how you have it configured. A 64-bit computer with a 64-bit Windows operating system lets Analytica, like other applications, access a much larger amount of memory, depending on the exact version of Windows. Today, with the low cost of RAM and with most computers having at least 8 GB of RAM, 32-bit software is quite limited, and for most of us 32-bit machines are just a memory (sorry ;-).

