Managing Memory and CPU Time for Large Models
With small and medium-sized models, you don't usually have to worry about memory or waiting for long computations -- Analytica handles these for you. But larger models may run out of RAM, or may take an unreasonably long time to compute. This section explains how Analytica uses memory, and suggests ways to make a model faster or more memory-efficient.
How much memory does Analytica use for an array?
It uses double precision to represent each number, using 8 bytes plus 4 bytes of overhead, for a total of 12 bytes per number. So a 2D array A with index I of size 100 and index J of size 1000 uses about 100 x 1000 x 12 = 1,200,000 bytes = 1.2 megabytes of memory. (Approximately -- strictly, a megabyte is 1024 x 1024 = 1,048,576 bytes, and there is some overhead for each array: 40 bytes for the first dimension and 38 bytes for each element of the first dimension.)
It represents an uncertain number as a random sample of numbers, with index Run as an extra dimension of length Sample size. If array A is uncertain and the Sample size is 200, it uses about 1.2 MB x 200 = 240 MB to store the probabilistic value.
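To make this arithmetic concrete, here is a small sketch in Analytica syntax. The identifiers are illustrative, and the sizes shown are the approximate figures derived above:

```
Index I := 1..100
Index J := 1..1000
Variable A := I + J
  { a 100 x 1000 array: about 100 * 1000 * 12 = 1.2 MB }
Chance A2 := Normal(I + J, 1)
  { with Sample size 200, the sample value needs about 200 * 1.2 MB = 240 MB }
```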
The sizes above are worst case. Analytica uses a highly efficient representation for sparse arrays -- e.g., when most of the values are zero. If an array is a copy of all or part of another array, the two arrays may share the slices they have in common, which can save a lot of memory.
How can I measure how much time and memory is used by each variable?
Analytica Enterprise provides a function MemoryInUseBy(v), which returns the number of bytes used by the cached mid and probabilistic values of variable v. (If v hasn't yet been evaluated, it doesn't cause it to be evaluated; it just returns 24 bytes, the amount allocated for an unevaluated quantity.) It also provides two read-only attributes that apply to variables and User-Defined Functions:
The attribute EvaluationTime gives the time in seconds taken to evaluate the variable, not including the time to evaluate its inputs. EvaluationTimeAll gives the time including all its inputs (and their inputs, and so on). Both count time since the last call to ResetElapsedTimings, which resets these attributes back to zero.
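For example, after evaluating a variable X, you might inspect these quantities as follows (a sketch; X stands for any variable in your model):

```
MemoryInUseBy(X)        { bytes used by X's cached mid and probabilistic values }
EvaluationTime of X     { seconds to evaluate X itself, excluding its inputs }
EvaluationTimeAll of X  { seconds including all of X's inputs, recursively }
ResetElapsedTimings()   { resets the timing attributes back to zero }
```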
Analytica Enterprise includes the Profiler Library, which lists the memory used and evaluation time for every variable in the model. You can order the results in descending order to see the variables that use the most memory or most evaluation time at the top.
What happens when it runs out of memory?
A variable can use more memory to evaluate than to store its result
Consider variable X:
Index I := 1..1000
Index J := 1..2000
Variable X := Sum(I + J, J)
X is a 1-dimensional array, indexed by I but not by J, because its definition sums over J. It needs only 1000 x 12 = 12 KB to store the value of X. But during evaluation it computes I + J, which is indexed by both I and J, and so needs 1000 x 2000 x 12 = 24 MB temporarily during the computation. To reduce memory usage, you could modify the definition of X:
X := FOR i1 := I DO Sum(i1 + J, J)
The FOR loop iterates with i1 set to a scalar (a single number), successively taking each element of index I. In each iteration, i1 + J generates a 1D array, needing only 1 x 2000 x 12 = 24 KB of memory. It then sums that 1D array over J to return a single number as the value of the loop body for each iteration. The result is indexed by the iteration index I. It returns the same result as the first definition of X, but now uses a maximum of only about 12 KB + 24 KB = 36 KB during evaluation.
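The same looping pattern works for other array-reducing functions, not just Sum. For instance (an illustrative sketch using the indexes above), to find the largest value of i1 * J for each element of I without materializing the full 2D product:

```
Variable Y := For i1 := I Do Max(i1 * J, J)
```

Each iteration builds only a 1D array of length 2000, so peak memory stays small, just as in the Sum example.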
Dot product, Sum(A * B, I) is efficient
For the special case of summing the product of two variables, A and B, over a common index I, Analytica automatically uses the method above, so it does NOT compute A * B as an intermediate array, and uses no more memory than needed for the result.
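For example (illustrative identifiers), this dot product benefits from that optimization, so the elementwise product is never created as an intermediate array:

```
Index K := 1..100000
Variable A := K^2
Variable B := 1/K
Variable DotProduct := Sum(A * B, K)
  { evaluated without building a 100,000-element temporary for A * B }
```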