# ChangeArraySparsity

*New to Analytica 5.2*

**Experimental feature**

Sparse arrays are an experimental, hidden, and not officially supported feature present in Analytica 5.2. This function is likewise not officially exposed, but is present. It is useful for designing tests of sparse array algorithms as they are developed.

## ChangeArraySparsity( a, type*, defaultValue, flags* )

Returns a new array having the same logical information and dimensionality as «a», but where the internal representation has been converted to the specified «type». It can be used to convert a dense array into a sparse array, or a sparse array into a dense array.

**«a»**: The array to be converted.

**«type»**: The internal type of representation to convert to.

`'Full'`

: A full tuple-tree representation. This is the conventional internal representation that has historically been used by Analytica.

`'Sparse'`

: A sparse multi-dimensional array representation. In this representation, a default value is listed once and then the non-default values are listed along with their positions.

**«defaultValue»**: When converting to the `'Sparse'` type, you can supply this value (it must be atomic) to specify what the default value at the atomic level should be. The most common choices are 0 or Null. When this is omitted, the default is picked based on frequency of occurrence (which may select something other than 0 or Null). This parameter is ignored when «type» is not `'Sparse'`.

**«flags»**: A bit field, with the following bits recognized:

- 1 = Disallow const tuples. This applies to both `'Full'` and `'Sparse'` representations.
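The automatic selection described for «defaultValue» can be thought of as picking the most frequently occurring atomic value. A minimal illustration in Python of that idea (a hypothetical `pick_default` helper, not Analytica's actual implementation):

```python
from collections import Counter

def pick_default(atoms):
    """Pick the most frequent atomic value as the sparse default.

    Hypothetical sketch of how an automatic default might be chosen
    when «defaultValue» is omitted; not Analytica's actual code.
    """
    value, _count = Counter(atoms).most_common(1)[0]
    return value

# 0 occurs most often here, so it would become the default:
print(pick_default([0, 0, 0, 7, 0, None]))  # prints 0
```

Note that when the most common value is something other than 0 or Null (say, a repeated text value), this rule would select it, matching the caveat above.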

## Background

The internal representation used by Analytica to represent arrays and data is something a user of Analytica doesn't see, nor needs to care about in most cases. This function alters that internal representation. The net result is (or should be) something you can't really see, in that it will appear to be exactly the same array. But internally, the representation may change, which may be observable in terms of performance, where you might see differences in the amount of memory required to complete a computation, or you might see some computations complete faster or slower.

Conventionally, Analytica's multidimensional arrays are internally represented in a tree form, which we call a *tuple tree*. A 5-dimensional array is a tuple (i.e., a vector) of 4-dimensional subarrays, each 4-dimensional subarray is a tuple of 3-dimensional arrays, and so on, until eventually the 1-dimensional subarray is represented as a tuple of atoms. Each atom may be a number, text, Null, a reference, a handle, or any of a couple dozen other esoteric data types.

The conventional tuple tree supports one type of sparsity, which we call *const sparsity*. When any tuple in the tree is constant, meaning every element contains the same subtree, then only one copy of the subtree is saved. This reduces the space required to store the subarray by a factor of N, and also speeds many computations by a factor of N.
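The const-sparsity idea can be sketched in a few lines of Python: when every slot of a tuple holds the same subtree, store that subtree once alongside the length. This is a conceptual illustration only, not Analytica's internal data structure:

```python
def compress_const(tup):
    """Represent a constant tuple by one shared copy of its element.

    Sketch of 'const sparsity': if every slot holds the same value,
    store it once together with the length; otherwise keep a full tuple.
    """
    first = tup[0]
    if all(x == first for x in tup[1:]):
        return ("const", len(tup), first)   # one copy, O(1) space
    return ("full", tuple(tup))             # fall back to a full tuple

def total(rep):
    """Sum the logical elements; a const tuple sums in O(1)."""
    if rep[0] == "const":
        _, n, v = rep
        return n * v
    return sum(rep[1])

rep = compress_const([2.5] * 1_000_000)
print(rep)         # ('const', 1000000, 2.5): one stored copy for N slots
print(total(rep))  # 2500000.0, computed without iterating over N values
```

This is where the factor-of-N savings in both storage and computation comes from.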

The newer and experimental `'Sparse'` representation adds support for more general sparsity. Consider a vector containing 9,999 zeros and one non-zero number, say in the 5,234th spot. The conventional tuple tree representation needs to store all 10,000 values, and when computing a scalar function, needs to iterate over all 10,000 values. It cannot reduce it to a const tuple because that single non-zero number means it isn't const. In the `'Sparse'` representation, it would store the default value once, in this case 0, and then save only the non-default values that occur along with their positions, in this case a 1 in position 5,234. A scalar function evaluation would process only 2 values. The overall representation is still a tuple tree, but with fully sparse tuples at each level.
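The 10,000-element example above can be sketched as a default value plus a position-to-value map, with a scalar function touching only the non-default entries plus the default once. Again, a conceptual sketch, not Analytica's internal representation:

```python
def map_sparse(f, default, entries):
    """Apply a scalar function f over a sparse vector.

    The vector is `default` everywhere except at the positions in
    `entries` (position -> value). Only len(entries) + 1 values are
    touched, versus the full logical length for a dense vector.
    """
    new_default = f(default)
    new_entries = {pos: f(v) for pos, v in entries.items()}
    return new_default, new_entries

# Logical vector: 10,000 elements, all 0 except a 1 in position 5234.
default, entries = map_sparse(lambda x: x + 10, 0, {5234: 1})
print(default, entries)  # prints: 10 {5234: 11}
```

Here `f` is evaluated twice rather than 10,000 times, which is the source of the speedup for scalar operations on sparse data.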

A single array can mix Full and Sparse tuple trees. The result of ChangeArraySparsity does not mix them, with the exception of const-sparseness: by default, constant tuples at any level will be represented using the conventional const tuple. The «flags»=1 option disallows this, in which case every node in the tree is either a non-const full tuple when «type»=`'Full'` or a fully sparse tuple when «type»=`'Sparse'`. A const tuple is logically a special case of a fully sparse tuple -- specifically, it is logically equivalent to a fully sparse tuple with a default value and zero non-default values listed. But internally they are not the same: the const tuple is slightly more efficient in both space and time, and invokes code pathways that have 20+ years of usage and are hence not considered "experimental".
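The logical equivalence of a const tuple and a fully sparse tuple with no listed values can be checked directly by densifying both forms; a small Python sketch under the same toy representation as above:

```python
def expand_sparse(default, entries, n):
    """Densify a sparse tuple of length n: default everywhere,
    overridden at the positions listed in entries."""
    out = [default] * n
    for pos, v in entries.items():
        out[pos] = v
    return out

# A const tuple (the value 7 repeated 5 times) is logically the same as
# a fully sparse tuple with default 7 and an empty entry list:
print(expand_sparse(7, {}, 5))       # prints [7, 7, 7, 7, 7]
print(expand_sparse(0, {2: 9}, 4))   # prints [0, 0, 9, 0]
```

The difference between the two is purely internal: the const form skips the (empty) entry bookkeeping entirely.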
