What's new in Analytica 5.2?
For most users, the most valuable new features are about graphing:
- Zoom into graphs: Press the mouse button within a graph and drag the cursor to select a rectangle and get a close-up view of that part of the graph. You can do this horizontally to zoom into part of the X axis, vertically to zoom the Y axis, or both together.
- Hide elements on a graph: Further enhances a popular feature introduced in Analytica 5.1 that gives you the option to hide some lines on a graph. Click an element in the key to toggle whether to hide or show the corresponding line. If there are many key items, and you want to show just one or a few, you can now select Turn off all key items from the right-click menu in the Key. Then you can click just those few key items you want to show.
- Combine symbols and lines: You can show some data sets as symbols and others as a continuous line in the same graph -- for example, to show a regression line fitted to data points. In Graph setup / Chart type, select the second Line style which shows lines and symbols. In the Graph, you can now click each element in the Key to display line only, symbol only, both line and symbol, or neither (to hide that data) by successive clicks. To save these options for future views of the graph, select Save key item visibility from the right-click menu.
- Show and fill regions on a map: Given a series of points with longitude and latitude, you can show them as regions. Select the longitude and latitude values as the X and Y values using an XY plot. The Graph setup / Chart type then shows a new Fill type option, which lets you fill in each polygon with a color.
- OnGraphDraw attribute: Lets you draw points, lines, areas, and text on a graph to annotate it, for example to point out a maximum, minimum value, add labels to points, or even show points on a Google map.
- Multiple results from a function: A function can now return multiple results of different dimensions. You can assign multiple results to multiple local variables, for example:
Local (txt, filename) := ReadTextFile('Import file.txt', Dialog: True);
ReadTextFile now returns the file name as its second result, in addition to the actual text contents of the file as its first result. This is useful if the user may select a different file from the default one specified as the first parameter. Several other built-in functions that access files also now return the file name as a second result. You can also define user-defined functions that return multiple results, using this construct, usually as the last expression in the definition:
MultiResult( v1, v2, v3)
- Extended syntax for locals and assignment: As shown above, you can declare multiple local variables at once. You can also assign to multiple local variables outside a declaration:
(txt, filename) := ReadTextFile('Import file.txt', Dialog: True);
You can declare multiple locals together as:
Local x := 1, y := 2, z := 3;
See below for more on these and other new features.
When you first show a result as a graph, it automatically sets the ranges for the vertical and horizontal axes so that all data points are visible (ignoring values containing INF, NAN, or Null). Now, you can zoom in to show just part of the graph by dragging the cursor. Drag the cursor horizontally from a low to high value to zoom in along the horizontal axis. Drag it vertically to zoom along the vertical axis. Drag it diagonally to create a rectangle to zoom in on both axes. It records the zoomed min and max values in the Axis ranges tab of Graph setup as if you had set them manually.
- Drag the mouse horizontally to zoom to a manually-scaled x-axis interval:
- Drag the mouse diagonally to zoom to manually-scaled intervals on both axes:
- When you move the mouse over a zoomed graph, autoscale hover icon buttons appear. Click on one to auto-scale that axis.
- Drag vertically to zoom to a manually-scaled y-axis interval.
- Zooming has the same effect as changing the manual scale range in Graph Setup. In fact, it records your selected end points there.
Key item visibility states
Analytica 5.2 further enhances the popular feature (introduced in Analytica 5.1) that lets you click on each item in Graph Key to show or hide the corresponding line or bar.
Turn on all, Turn off all
Right-click on a graph Key to see menu options Turn on all key items and Turn off all key items. Use these when you want to show or hide almost all of them. They save you many clicks when there are a lot of key items.
Show lines and symbols
You can now show a graph with some data as symbols only, some as lines without symbols, and some lines and symbols. This is very useful when you want to show data points and a line fitted to those points on the same graph.
To achieve this, select the Line+Symbol plot style:
Initially, every data series (key item) has lines and symbols (as before). Click on each Key item to toggle it from line+symbol to line-only to symbol-only, and back to line+symbol. So with just a few clicks, you can select how to display each series.
What if your graph has multiple keys, such as a line key and a symbol size key, each depicting different information? You can then toggle each element on each key separately through three states: On to Off to Partial, and back to On. In the Off state, it hides both lines and symbols. In the Partial state on the Line key, it hides the line but shows the symbols. In the Symbol key, when an item is Partial, it shows the line but not the symbol.
In the above graph, the 2 points corresponding to Least would appear at x=2 in the fully-on graph, but fit_err=Least has been clicked once, putting it into the off state. As a result, there is a gap in the lines at x=2. In the symbol size key (x_cat), the x_cat=8 item is in the partial state. The points corresponding to x_cat=8 appear at x=8. In this case the lines pass through the points, but the symbols are hidden.
When fit_graph=y in the line key is off, all the red "ink" on the graph disappears. But when it is in the partial state, the line disappears but the symbols continue to show, as illustrated here:
Save visibility states
Changes to key item visibility reset by default when you close and re-open a graph. If you want your current visibility state to be saved as the starting point next time the graph is opened, select Save key item visibilities from the right-mouse menu.
Polygon fill plots
When you plot a curve in an X-Y plot, there is now a polygon fill option that fills the interior of the polygon(s). For example, starting with this data, already plotted as Latitude vs Longitude
you can use the new Polygon fill type (the Fill type option is new).
The difference between alternate and solid arises when a curve crosses itself. In this example with a state outline, that doesn't happen so they are equivalent. Polygon fill closes each curve segment (null data starts a new segment) and fills the interior.
You can use the OnGraphDraw attribute to adorn or customize your graphs in various ways. You can enter an expression that gets evaluated at any of 4 phases during the graph rendering. A Canvas is provided that you can draw on using the canvas drawing functions.
Using OnGraphDraw to draw on the canvas after the graph has been rendered, you can annotate points on a plot such as the maximum or minimum, label a threshold level, or draw error bars or labels for individual points.
By drawing after the axes have been scaled, but before the data has been plotted, you can draw under the data, or replace the rendering of the data entirely, such as by replacing probability bands with Tukey bars that your code renders.
You can also draw a Google map under the data (Analytica Enterprise is required to support the functions necessary for downloading a Google map).
The Google map also requires running OnGraphDraw after the graph is laid out, but before the axes are drawn, at which point it can register the actual latitude/longitude axis scale to match the downloaded map.
Finally, you can draw before anything has been drawn to completely replace the graph with a totally customized data depiction, such as a pie chart, dendritic tree, etc.
- To use OnGraphDraw, go to the Attributes dialog and enable it for Variables, so that it appears in the Object window.
- In the Object window for the variable of interest, the OnGraphDraw attribute appears, along with a series of check boxes where you can select at which phases of drawing you want your code to be called.
- Several local variables are provided to OnGraphDraw, among them continue and roleChanges. These supply the canvas, information about the graph itself and its layout, the pivot (roles) of the graph, and which phase of rendering is being processed. You can set continue to tell it not to draw further, or set roleChanges to make actual changes (such as registering the actual latitude bounds of a Google map).
- The new GraphToCanvasCoord function makes it easy to find the pixel location for a data point, based on the data's own units.
See OnGraphDraw for instructions on how to use this.
Specific adornments or novel plot types can be encapsulated as User-Defined Functions and bundled in libraries, so that you need only add a function call in OnGraphDraw and check the appropriate phase check boxes. We have not released any such library yet, but may post libraries like this on the Analytica Wiki and Analytica blog in the future, so stay tuned.
- Bug fix impacting marginal abatement graphs
- Changing the symbol size role is now less likely to swap other roles, which was confusing
Expression language & Engine
Declaring local identifiers
Omission of initial value
When declaring a local, you can now omit its initial value, so that
Local x;
is equivalent to
Local x := Null;
Multiple locals in one declaration
You can declare multiple local identifiers in the same Local declaration. For example,
Local a, b := 5, c[ J ], d[ ] := Va1;
This is then equivalent to four separate declarations as follows:
Local a := Null;
Local b := 5;
Local c[ J ] := Null;
Local d[ ] := Va1;
Capture of multiple return values
Functions can now return multiple values, each with different dimensionality. This is covered below in #Multiple return values. These values can be captured by Local (or the other declaration constructs like For, etc.) by placing the declared identifiers inside parentheses, as illustrated here:
Local ( u, w, v ) := SingularValueDecomp( a, I, J, J2 );
where the function SingularValueDecomp returns three matrices, each with a different dimensionality. With dimensional restrictions, this could also be declared by listing each local's allowed indexes in brackets after its identifier.
Note that SingularValueDecomp is called a single time, and returns three separate values. This function, as well as EigenDecomp, behaves differently when multiple return values are captured compared to when only the main value is used, for backward compatibility with their legacy behavior. Previously, they returned a data structure with 3 references to the 3 matrices, which required some work to unpack. They illustrate that the capability to return multiple values isn't entirely new, but is now far more convenient. These two functions are the only functions that change their behavior. For all other functions, when only the main value is used, the secondary return values are simply dropped (and are usually not computed in the first place). For example, in addition to reading the contents of a file, you can also capture the file name selected by the user using
Local ( txt, filename ) := ReadTextFile( 'Import file.txt', Dialog: True );
but you don't need to capture filename if you don't need it, and can simply use
Local txt := ReadTextFile( 'Import file.txt', Dialog: True );
As with previous releases, you can declare the indexes that the value named by a local identifier is allowed to have using brackets, such as
Local x[I, J, K] := Z;
When declared in this fashion, any expressions you include in the body (i.e., the lexical scope of these local identifiers) can be treated as if the value does not contain any indexes not listed. The new keyword "reduced" also restricts the dimensionality of a local variable, but in a somewhat different way. The same qualifier exists for function parameter declarations and in fact works in exactly the same way here.
Local ( a[ ], b reduced ) := _( X, Y );
With this declaration, a might need to be iterated over the indexes of X, while b will have the indexes of Y that are not also indexes of X. In addition, the slice of X or Y named by a or b is coordinated: when a names X[J=3], b also names Y[J=3]. This leads to some convenient new iteration constructs. For example, you may have seen expressions such as this one:
For n := @I Do (
  Local ii := I[@I = n];
  Local x_i := X[@I = n];
  …
)
The looped body code makes use of the index position (n), the index label (ii), and the array slice (x_i), and so it has to extract two of these items from the loop variable. This can now be condensed to a single looping structure,
- For (n, ii, x_i reduced) := _(@I, I, X) Do ( …
Note: In many cases, the word Local can be used in place of For here. Note that n names each position of I, ii names each label of I, and x_i names the slice of X corresponding to X[@I=n]. This is similar to calling a function as F(@I, I, X), where F has parameters declared as
- Function F(n,i :  ; x_i : reduced)
Iterating over repeated parameters (Local xi := repeated x Do)
When a parameter of a User-Defined Function is declared as repeated, e.g.,
( x : ... )
or
(x : repeated)
the "correct" way to iterate over the supplied parameters in the Definition of the UDF is now
Local xi := repeated x;
The repeated keyword is recognized at this position. So, for example, if someone calls this function as, say, F(a, b, c), where a, b and c each have different dimensions,
xi will alias each of these values in turn. A couple of syntactic variations are to use For in place of Local, or to include parentheses around the parameter name, e.g.,
For xi := repeated(x) Do ...
As before, you can also access these using, e.g., Slice(x, 2) for the second repeated value. It is best, however, not to treat the repeated "dimension" as you would the implicit dimension; otherwise you may end up with the union of the incoming dimensions.
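As an illustrative sketch (the function name and body here are hypothetical), a UDF with a repeated parameter can visit each supplied argument in turn:

```
Function CountNonNull( x : ... )
Definition:
    Local n := 0;
    For xi := repeated x Do (
        If xi <> Null Then n := n + 1   { xi aliases each supplied argument in turn }
    );
    n
```

A call such as CountNonNull( 3, Null, 'abc' ) would visit three arguments, counting the two that are not null.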
Preservation of local name instead of local1, local2
Prior to this release, the parse tree (an internal data structure) for an Analytica expression did not remember local identifiers. Instead, it numbered the locals, and so every once in a while you might encounter a situation where the identity of a local is extracted from the parse tree and appears as local1, local2, etc. Analytica 5.2 now annotates the parse tree with the local identifier, so that depictions of the parse use the original names. For example, suppose
Va1 is defined as
You can see the difference here:
Multiple return values
Functions can now return multiple values, each with potentially different indexes, not just a single value or array as previously. An expression calling the function can use only the first returned value, or it can use some or all of the values. Several built-in functions have been enhanced to return additional information in the second and third return values.
Built-in functions that return multiple values
These built-in functions read information from a file and return its contents in one way or another as the main value. Each has now been enhanced to return the name of the file that was opened as a second return value. Since in each case the user might select a file from a file selector dialog, you would otherwise not know which file was actually read. The primary return value is the same as in previous releases; the second return value is new.
- ReadTextFile: Returns ( file_contents, filename )
- SpreadsheetOpen: Returns ( workbook, filename )
- ReadExportFile: Returns ( array, filename )
- ReadBinaryFile: Returns (data, filename)
- ReadImageFile: Returns (image, filename)
Note: the Write functions don't yet return filenames in this fashion.
- CanvasDrawText returns the coordinates of the bounding box of the text.
- Regression will return the bias coefficient as a second return value without having it included in the basis.
- MantissaAndExponent returns the mantissa and exponents as separate return values.
- MsgBox returns the check box state as a second return value.
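For instance, a sketch using MantissaAndExponent (assuming the usual frexp-style convention where x = mantissa * 2^exponent; check the function reference for the exact convention):

```
Local ( m, e ) := MantissaAndExponent( 12 );
{ m * 2^e reconstructs 12; under the frexp convention, m = 0.75 and e = 4 }
```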
The function SingularValueDecomp computes three matrices, each with a different dimensionality. Formerly, these were returned as a vector of 3 references. You would then usually unpack this, extracting the three items. With the ability for it to return three return values, the call is more convenient, namely:
(u, w, v) := SingularValueDecomp( a, I, J, J2)
eliminating the ugly unpacking code. To ensure backward compatibility with code that used the 3-reference data structure, the function detects whether you are capturing multiple return values, and returns the 3-reference structure when you are not.
A similar enhancement applies to the EigenDecomp function, which returns a vector of eigen values and 2-D matrix of eigen vectors. Formerly, these were bundled using a reference for the eigen vectors, but with multiple return values it is convenient to immediately separate these, e.g.,
(eigenVals, eigenVecs) := EigenDecomp(a, I, J)
EigenDecomp also retains its legacy behavior when you are not capturing multiple return values.
The Regression function automatically adds a bias term to the basis and returns the bias coefficient as the second return value when you capture it. This means you can now find b for a simple y = m*x + b regression over a scalar x using just
Local ( m, b ) := Regression( y, x, I );
You can include a checkbox on a message box displayed by MsgBox(). When you do so, the state of the checkbox is returned as a second return value.
Capturing multiple return values
When a function returns multiple return values, you have to capture these values. There are two ways of doing this: In a local declaration, or via the assignment operator (:=).
Local( x, y, z ) := FuncWithMultiple( );
(a, b, c) := FuncWithMultiple( );
When assignment is used, each destination (a, b and c) can be anything that could appear on the left-hand side of a normal assignment operator. This includes local identifiers, global variables (in contexts where a side-effect is legal), a slice of a local variable, an attribute of an object, etc.
Although the local declaration example here uses Local..Do, you can also use other local declaration constructs in the same way, including For and LocalAlias, as well as the legacy Var..Do, Using..Do, MetaVar..Do, etc.
When you capture a return value that isn't actually returned by the function, it is equivalent to capturing null. For example:
( x, y ) := Sqrt(25)
sets x to 5 and y to Null.
Returning multiple values from a UDF
To return multiple values from your own UDFs, simply return
MultiResult( v1, v2, v3, v4 )
where v1, v2, … are the expressions whose results are to be returned. Be aware that any of these values not captured by the caller will not be evaluated. You can use that to your advantage by placing time-consuming code not shared with v1 in the expression for v2, so that when the caller uses only v1, no wasted computation ensues.
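For example, here is a small hypothetical UDF that returns both the mean and the standard deviation of an array over an index; the SDeviation expression is evaluated only when the caller captures the second value:

```
Function MeanAndSD( x : Array[ i ] ; i : Index )
Definition:
    MultiResult( Mean( x, i ), SDeviation( x, i ) )
```

Then Local ( mu, sd ) := MeanAndSD( X, I ) computes both results, while Local mu := MeanAndSD( X, I ) skips the SDeviation computation entirely.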
There is a synonymous syntax, _( v1, v2, v3, v4 ), that can be used. When returning values from a UDF, we feel it is clearer to call MultiResult. However, the underscore function is convenient when using a multiple assignment directly, such as
Local (key, val) := _( dict.Key, dict );
This example combines _( ) with Repeated parameter forwarding to accomplish something that would otherwise have taken four lines of code.
Emergent iteration constructs
Some new convenient forms of iteration constructs emerge from the introduction of multiple return values. For example, when iterating over an index, you can simultaneously grab the index position and the index labels in the loop declaration.
- For ( pos, label ) := _( @J, J ) Do …
A 1-D array with an index containing labels is called an associative array in other programming languages. The index values are the keys and the array values are the values. Your iteration can conveniently name both the current key and the current value.
- For ( key, val reduced ) := _( I, a ) Do …
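As a small hypothetical sketch (the index and variable names are made up), this form makes it easy to build, say, a text summary of an associative array:

```
Index I := [ 'a', 'b', 'c' ];
Variable A := Array( I, [ 1, 2, 3 ] );
Local txt := '';
For ( key, val reduced ) := _( I, A ) Do (
    txt := txt & key & ' = ' & val & '; '   { one 'key = value' entry per index label }
);
txt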
- (Definition of X := array) -- setting cell expressions
- new ParseExpression() function
- Capturing multiple values
Added finer-grained control over which multi-threaded algorithms are enabled or disabled. This is so that if you encounter a multi-threading-related problem with one particular algorithm, you can disable it without having to sacrifice all multi-threading.
- DisableMultithreaded: New system variable that contains the flags.
- SetEvaluationFlag('multithreaded'): Function for turning multithreading on or off within a single expression.
You should not rely on experimental features while they are still experimental. They are subject to change or even cancellation in future releases, and are less thoroughly tested.
Analytica has long had support for a certain form of sparsity, which we call constant sparsity (or just const sparsity for short). When the values along a slice of an array don't vary (are constant), Analytica is often able to store the single value only, and also avoid an iteration during computations. However, many sparse arrays can't take advantage of const sparseness. We've been experimenting with fully sparse multi-dimensional arrays. With these, it is possible to have multidimensional cubes with large numbers of dimensions and immense numbers of distinct coordinates, as long as the number of cells with actual data is relatively small. Such sparsity is often seen in MdTable relational-to-array transformations.
You can use the same Analytica operators and functions on sparse arrays as you would on any other Analytica arrays. When possible, Analytica will attempt to use sparse algorithms that process only the values that are present and produce a sparse array, maintaining fast, low-memory computation. With this we have been able to demonstrate some real-life computations on sparse BI hypercubes that were not previously possible (and which are not realistically possible with a relational table representation).
Some operations produce non-sparse results, even when applied to sparse arrays. Cumulate applied to an array with a non-zero default value is one obvious example. This is a serious "gotcha", because if you are manipulating arrays with quadrillions of cells, one application of a function like that and you've just created a multi-petabyte array.
At present, only a subset of operations and functions that could have sparse-array-aware algorithms actually do; however, many of the ones that are present are the most common operations. But this incompleteness is one reason this is classified as an experimental feature.
In addition, the sparse algorithms are at this point quite new and not battle-tested, so it is not unlikely that bugs are lurking, including bugs that might produce incorrect results. One of the reasons we are including this as an experimental feature is to promote testing of the algorithms.
Using sparse arrays
The use of sparse arrays is enabled by setting the system variable EnableSparse to 1. Or, you can enable the production of a sparse array from a specific expression by using
SetEvaluationFlag( 'sparse', true, «expr» ).
MdTable is able to return a sparse array from a relational table, provided EnableSparse is set, or the sparse evaluation flag is on, or you set the optional «sparse» parameter to true. From there, the sparseness will propagate as you slice, add, and so on.
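To sketch the three ways of getting a sparse result from MdTable (the table and index names here are hypothetical):

```
EnableSparse := 1                                            { 1. enable sparse arrays globally }
SetEvaluationFlag( 'sparse', true,
    MdTable( SalesRecords, RowIx, ColIx, ValIx ) )           { 2. enable for one expression }
MdTable( SalesRecords, RowIx, ColIx, ValIx, sparse: true )   { 3. the new «sparse» parameter }
```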
The results of comparison operations such as a = b are often sparse, but these will only return a sparse result when the evaluation context allows it. When they produce dense arrays, and those dense arrays are combined with sparse arrays, much of the sparsity is often lost.
Functions with multiple return values
The following built-in functions now return multiple results: ReadTextFile, ReadImageFile, ReadBinaryFile, SpreadsheetOpen, ReadExportFile, SingularValueDecomp, EigenDecomp, MsgBox, MantissaAndExponent, CanvasDrawText and Regression.
See #Multiple return values above.
New optional parameters to existing functions
- Added an optional parameter named «initially» to ComputedBy. This provides an initial value for the parent variable, and the parent variable retains any value assigned to it from within the definition (but only when the value is a simple atom). For example, if you want to remember which file a user selected, you could use:
Variable filename := ComputedBy(wb,"")
Variable wb := Local tmp; (tmp,filename) := SpreadsheetOpen(filename); tmp
- Added an optional Boolean parameter, «w1D», to SingularValueDecomp. When false or omitted, the resulting W is a square diagonal matrix. When set to true, the returned W is a vector (the diagonal).
- Added an optional «except» parameter to IndexesOf, which accepts any number of index identifiers; the result then includes every index of the array except those listed.
- Added a «first» parameter to ArgMin, ArgMax, SubIndex, and PositionInIndex. When omitted or false, in the event of a tie these return the last occurrence. When set to true, they return the position of the first occurrence of the tie.
- The basis index, K, is now optional for the Regression function. This is for convenience when performing a 1-D regression over a scalar x.
- Added an optional «measureOnly» boolean parameter to CanvasDrawText.
For sparse arrays
- Added an optional boolean parameter «sparse» to MdTable. Experimental. See #Sparse arrays.
- Added an optional Boolean «sparseCount» parameter to Size. When true, it counts the actual number of values (including default values) in the sparse array.
Analytic distribution functions
The analytic probability functions for all built-in distributions are now also built-in functions. For example, corresponding to the Triangular distribution function there are also the DensTriangular, CumTriangular and CumTriangularInv functions. Previously, to use these functions you had to add the Distribution Densities Library to your model.
The general naming pattern for these functions is (where «dist» is the name of the distribution):
Dens«dist»: The probability density function for a continuous distribution. Returns the density at «x».
Prob«dist»: The discrete probability function for a discrete distribution. Returns the probability of «x».
Cum«dist»: The cumulative probability function, also known as the cumulative distribution function (CDF). Returns the probability of being less than or equal to «x».
Cum«dist»Inv: The inverse cumulative probability function, also called the quantile function. Returns the «p»th fractile/percentile/quantile.
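A few illustrative calls following this pattern (parameter orders mirror the corresponding distribution functions; exact signatures should be checked against the function reference):

```
DensTriangular( 3, 1, 2, 6 )    { density at x = 3 of Triangular(min: 1, mode: 2, max: 6) }
ProbBinomial( 2, 10, 0.5 )      { probability of exactly 2 successes in Binomial(n: 10, p: 0.5) }
CumNormal( 0 )                  { probability that a standard normal is <= 0, i.e., 0.5 }
CumNormalInv( 0.95 )            { the 95th percentile of the standard normal }
```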
A few of these analytic functions were already built in previously, but all of the following were added:
- DensBeta, DensChiSquared, DensCumDist, DensExponential, DensFDist, DensGamma, DensLogistic, DensProbDist, DensStudentT, DensTriangular, DensWeibull.
- ProbBernoulli, ProbBinomial, ProbGeometric, ProbHyperGeometric, ProbNegativeBinomial, ProbPoisson, ProbUniform
- CumBernoulli, CumBeta, CumChiSquared, CumCumDist, CumExponential, CumFDist, CumGamma, CumGeometric, CumHyperGeometric, CumLogistic, CumNegativeBinomial, CumProbDist, CumStudentT, CumTriangular, CumUniform, CumWeibull.
- CumBernoulliInv, CumBetaInv, CumChiSquaredInv, CumCumDistInv, CumExponentialInv, CumFDistInv, CumGammaInv, CumGeometricInv, CumHyperGeometricInv, CumLogisticInv, CumNegativeBinomInv, CumProbDistInv, CumStudentTInv, CumTriangularInv, CumUniformInv, CumWeibullInv.
New built-in functions
- The preceding section covered the many newly built-in analytic distribution functions.
- The F-distribution was added as a built-in function (functions FDist, DensFDist, CumFDist and CumFDistInv).
- ParseExpression: Returns a parse tree for an Analytica expression. When this is assigned to a global variable, the edit table cells are Analytica expressions.
- ChangeArraySparsity: (experimental) converts between a sparse and standard multi-dimensional array representation.
- GraphToCanvasCoord: For use in the OnGraphDraw expression, it maps from a data value to a pixel coordinate. See #OnGraphDraw on this page.
- MantissaAndExponent(x): Returns the mantissa and base-2 exponent of a floating point number.
Enhancements to existing functions
- MakeJSON handles the encoding of multidimensional arrays better, with better control over nesting orders and ability to map some indexes to JSON objects and others to JSON arrays.
- Added the «except» parameter to the IndexesOf function.
- Added the «first» parameter to ArgMin, ArgMax, SubIndex and PositionInIndex
- CellOnClick allows local variables in expression and supplies several new local variables to the expression. See below.
- When the probabilities in a ProbTable don't add to 1, the Mid-value is now determined from the normalized version, and hence may differ from the mid-value in pre-5.2 releases. The Sample-value is unaffected, since it was already determined from the normalized probabilities.
- Optional parameters added to MsgBox allow you to include one or two URL links and a checkbox input on the dialog.
- An optional parameter «getFrom» was added to NumberToText. Specify a handle to use the stored number format from an indicated object.
New System Variables
The following system variables are new:
Libraries and Example Models
- New library (for Analytica Enterprise users): "Google Maps from OnGraphDraw". Makes it easy to plot data over a Google Map via the new OnGraphDraw attribute.
- New example model (for Analytica Enterprise users): "CSV read and Google Maps plot.ana", in the Data Analysis folder. Shows how to use the Google Maps plotting, how to download CSV data from a web source with fail-over to a second URL source if the first fails, and how to parse the CSV file.
File saving and loading
- The save author and save date are no longer written as part of the model file. These were inconvenient when tracking a model in a source control system (like git or svn) because they changed every time, and collided every time when merges were required. When a model file is read that does not have the save date, the SaveDate attribute is now set to the file system's last-modified time stamp.
- The «expr» inside a CellOnClick now has access to the coordinates of the cell that was clicked. From «expr», evaluating any of the index identifiers in a value context returns the coordinate of that index at the clicked cell.
- All the special local variables available in the Cell Format Expression attribute in general are now available inside «expr» when it is run, plus two more locals can be used:
- TotalIndexes: A list of handles containing all indexes that are being summed over for the clicked cell.
- comparisonColumn: When a cell is in a comparison variable column of a result table, this is the exogenous comparison variable or expression (i.e., it is either a handle or a parsed expression). If the click was not in a comparison column (the more normal case), this is null.
- Assignment can now occur directly from inside the «expr» of CellOnClick. Previously, you had to do it from a UDF.
- When a cell contains a handle or reference, the default behavior is to hyperlink when the user double-clicks, but a CellOnClick handler overrides that. Now, if the CellOnClick expression returns false (0), the default behavior will also execute.
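Putting these pieces together, a hypothetical CellOnClick expression (Row and Col stand for whatever index identifiers the table actually uses) might report the clicked coordinates and still allow the default behavior:

```
MsgBox( 'You clicked the cell at ' & Row & ', ' & Col );
0   { returning false (0) lets the default double-click behavior also execute }
```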
- Paste XML Spreadsheet format
- Copy/paste from Excel to Analytica or from Analytica to Excel preserves data types and displayed values as faithfully as possible
- Copy/paste from one instance of Analytica to another instance is similarly faithful
- Paste special offers XML Spreadsheet format as the default option
- Paste special unlinked supports user choice of clipboard format used
- XML Spreadsheet format offers special support for row headers and column headers on copy/paste from Analytica to Analytica
- The Paste menu option is enabled in browse mode when the focused object could accept a paste if you were in edit mode. When you attempt a paste or Ctrl+V, it then asks whether you want to change to edit mode. This addresses a confusing "why isn't paste working?" case, where you forgot you were in browse mode.
- Several enhancements to the Indexes dialog make it easier to select indexes when the list is very long.
- The dialog is larger, so more indexes are visible in the panes.
- You can now view index identifiers by pressing Ctrl+Y
- You can type the first few characters to quickly jump to the index that starts with those characters.
- The Up, Down, Page Up, and Page Down keys can be used to scroll the index pane.
- The Left or Right keys can be used to move the selected index or indexes from one pane to the other (same as pressing the >> or << button).
- Use the mouse wheel to scroll the Indexes pane.
- More recent files are recorded by default (was 6, now 11).
- Added the Remove quotes option for lists
- Added the Remove quotes and Add quotes dropdown menu options for description and definition fields in object windows
- These operations support undo and redo
- These operations are performed only on the selected area in these input windows
- Add quotes is useful for the definition field when a paste operation puts data there that is meant to be a string but does not initially have quotes; it escapes internal quotes as part of the operation
- Comparison Tolerance appears on the Definition menu.
- Undo for Uncertainty options... changes
- The preferred declaration constructs (e.g., Local..Do) appear on Definition menu, and not deprecated ones (e.g., Var..Do).
- When you exceed the number of characters allowed in the identifier or units field, it now displays the error in a bubble instead of a primitive model alert dialog.
- The add-on OptQuest engine is now available from Analytica 64-bit. Previously this engine was only available in 32-bit.
- Some pages of the wiki have a release bar at the top and display different content depending on which release number is selected. The release bar shows a row of buttons, one per release number.
- Links from Analytica into the wiki now include the release number in the URL, so that the Wiki can automatically select the same release number that you are using. Since this is new to 5.2, release 4.6, 5.0 or 5.1 won't auto-select the release, but it will enable future releases to show the correct version-specific pages.
- Changed the CATable::GraphWithStoredPivot property to default to true.
- Method for installation of ADE without running installer. Used to create a docker container image.
- The binary for AdeTest is now compiled for .NET 4.6. The project file is saved for Visual Studio 2018.
Analytica 5.2 is being released only in 64-bit. The 32-bit edition has been retired.
- Fixed a bug where RLM Server name didn't stick in licensing dialog. This caused problems for users of floating licenses and required them to manually set a registry setting to get around it.
- Changed beta-build licensing. Formerly a separate beta testing license was required. Now any active subscription license (i.e., expiring 5.x license) is sufficient. Thus, we eliminated lots of complex code for automatically acquiring and updating beta test licenses during the beta testing period.
- The Up and Down arrow keys recall history, like Ctrl+Up and Ctrl+Down already do.