What's new in Analytica 4.2?
This page outlines new features and other improvements in Analytica 4.2 added since the Analytica 4.1.2 release. It does not mention most performance improvements and bug fixes.
License Manager
Analytica 4.2 uses a general license manager (from Reprise), providing some important new license options:
- Single-user license: Same as we've always offered -- each license is for one end user, who can install Analytica on up to two computers -- e.g. office and home.
- Floating license: Lets any person in your organization use Analytica, but no more people at the same time than the number of floating licenses you have. So if you have five floating licenses, any five people can use Analytica simultaneously. Anyone in your organization can install and run Analytica using a floating license without having to purchase an additional license. "License Roaming" means that a user can check out a floating license onto their laptop computer to use off-site.
- Centralized License Management: Organizations with multiple Analytica users have the option of managing all licenses -- whether single-user or floating -- on a central Reprise License Server. For organizations with many licenses, this makes it much easier to track how many and what kind of licenses they have and who is using them.
- Multiple licenses: You can now maintain multiple licenses. For example, it is possible to have Enterprise and Power Player on the same installation, and easy to switch between them. This would be most common in an organization with floating licenses, where there could be a floating license for each edition. Everyone has a Free Player license, which you can switch to easily (from the Update License... dialog) if you want to experience how your model works in Player.
- Model Licensing: You can specify a license for a given model, so that people can only run the model when they have a valid license for that model. These models use the same Reprise licenses (as found in *.lic files) that Analytica itself uses. One catch here, however, is that these model licenses must be digitally signed by Lumina. See Model Licensing and AnalyticaLicenseInfo.
See What License do I Purchase?.
64-bit Editions
- 64-bit support breaks through the 32-bit memory barrier, allowing your models to access up to 128GB of memory -- instead of the maximum of 3GB available for 32-bit Windows applications. The actual memory cap depends on which 64-bit Windows OS you have and how much free disk space you have available for virtual memory. Performance will slow once you exceed available RAM.
- Analytica 4.2 Enterprise, Optimizer, Power Player, and ADE are now available in 64-bit editions. You will need to be running a 64-bit Windows operating system, such as XP 64-bit, Vista 64-bit, Windows 7, or Windows Server 64-bit. You cannot run a 64-bit edition on a standard 32-bit Windows operating system, even if you have 64-bit hardware.
- Note: To use the Microsoft Jet database drivers for reading data through ODBC (including the Excel database driver, Access driver, and text file driver) from Analytica 64-bit, you will need the MS Office 2010 64-bit edition (or later). Prior to Office 2010, 64-bit ODBC drivers for JET were unavailable. Analytica 64-bit ODBC works fine with other 64-bit ODBC drivers, such as for Oracle or MySQL. If you want your model to read from or write to Microsoft Excel, we suggest using the new Spreadsheet access functions. For CSV files, use the Flat File Library to read from them directly. For reading from Microsoft Access databases if you don't have MS Access 64-bit, see Querying Access database from Analytica 64.
Installer
- Anonymous install of Free Player: You no longer need a license code to run Analytica Player -- and you don't need to register on a web form to get a code emailed to you.
- Install from non-admin account: You can now install Analytica from a non-admin account -- in fact, it is now recommended to install it from the account of the person who will use it.
- License: You have three options for license type: Individual license, Reprise license manager (RLM) server, or Free Player.
- Activation: Licenses are now activated, making them specific to your machine and user ID. The installer and Analytica itself can perform the activation automatically if you are connected to the internet. The result of the activation is the download of a *.lic file (usually Analytica.lic) containing a signed textual license that allows the software to run.
- User Registration: The installer asks for information that helps us associate your license with your support contract and Analytica Wiki account. The information allows us to set up and administer your Wiki account more seamlessly (and soon, we expect, in an automated fashion), and also allows Analytica to log into the Wiki automatically when you access it from Analytica.
- Alerts when new updates are available: If you use Analytica while connected to the internet, every couple of days it automatically checks whether a newer release or patch release is available, and alerts you when one is.
Linked Modules and Libraries
- Hiding and Browse-only settings can now be applied separately to linked module and linked library files. So, you can distribute a library or module that is locked as browse-only or with hidden definitions, which users can include in unlocked models using "Add Module... (link)". The distributed module remains protected, even within a parent model that is unprotected. As before, Browse-only means users can see everything but can only change variables with input nodes; Hiding means that selected definitions are invisible. In either case, you need the Enterprise edition to save a model as obfuscated (encrypted) to enforce such protection.
- The Save A Copy In... option now contains an option to Save everything in one file by embedding linked modules.
- When a linked module (or linked library) is inside a model, and changes to that linked module are saved, only the User-Defined Attributes that are used within the linked module are saved in the module file. Previously all User-Defined Attributes were saved with all model and module files, which caused User-Defined Attributes to accumulate and propagate as linked modules were shared among models.
Expression Language
Local Variables
- When you declare the dimensionality of a local variable, such as the following:
Var x[I, J] := «expr» Do «body»
- and then assign a value to x within «body», Analytica now detects and warns you if you are assigning a value with dimensions other than those declared for x.
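- For example, a minimal sketch (local names hypothetical) of code that would now produce this warning:
( Index I := 1..3;
  Index J := 1..4;
  Var x[I] := 0;
  x := J^2 )  { warns: the assigned value is dimensioned by J, not the declared index I }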
- There are a couple of new types of local variables -- local meta variables and local alias variables. These provide cleaner and more consistent semantics for their treatment of Handles. When you are dealing only with numbers, text, and other non-Handles, there is no salient distinction between them. But when creating meta-inference algorithms, it is far better to use local variables defined by MetaVar..Do or LocalAlias..Do (there is also MetaIndex..Do, which was already present in 4.1). Meta-variables consistently treat their values (when handles) as handle objects, without acting as an alias, while alias locals consistently act as aliases. Compare this to locals defined via Var..Do, which exhibit a hybrid semantics that some people have found to be a source of confusion when writing algorithms that perform meta-inference.
- Slice Assignment allows assignment to matrices (with multiple slice/subscript indexes), as well as abstraction across the slice/subscript value. When abstracting, you are essentially executing multiple assignments in one swoop.
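- For instance, a hedged sketch of a matrix-style assignment and an abstracted assignment (names hypothetical):
( Index Row := 1..3;
  Index Col := 1..3;
  Var m := Array(Row, Col, 0);
  m[Row = 2, Col = 3] := 99;    { one cell, using two subscript indexes at once }
  m[Row = 1..2, Col = 1] := 7;  { abstracts over the subscript value -- two assignments in one swoop }
  m )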
- Assignment of a list value to a local variable that happens to be a local index variable now creates a new index variable with the same name, having the list as its new index value. With this change, the local variable continues acting like a local index. Previously, such an assignment caused the local variable to convert from being a local index to a local variable holding an unindexed list. Quite a few people encountered this oddity in previous releases (based on questions to support), and this change produces the intuitive result.
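- For instance (a minimal sketch):
( Index L := 1..3;
  L := 1..5;     { L remains a local index, now with five elements }
  Sum(1, L) )    { → 5 }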
- Assignment to a local variable that was declared as the parameter to a function using the Variable qualifier now changes the value of the variable passed in (which of course, requires the function to be invoked from a button script). Previously, the local variable was simply changed to a new value, without impacting the original variable. Hence, with the new behavior, the parameter acts as a true alias. To relate to the earlier bullet, a function parameter qualified as Variable acts the same as a local variable defined via LocalAlias..Do. Example:
Function ChangeIt(x: Variable)
Definition: x := x + 1

Variable Va1 := 5

Button Inc
Script: ChangeIt(Va1)
- In this example, pressing the button causes Va1 to become 6. In Analytica 4.1, it would not have changed Va1, but simply caused the local variable x to have the value 6 (and thus cease to be an alias of Va1). Note that in rare cases, this change could impact the behavior of existing models, but only in cases where you were doing something rather non-kosher to start with.
- The handling of list expressions, and how an implicit dimension promotes to become a self-index, has been rearchitected. Analytica's treatment of list expressions has been a weak point of its expression language since the early days, which this should remedy. When you use a list within an expression, such as [Cos(x), Sin(x)], it introduces an implicit dimension. If the implicit dimension makes it to the final result, Analytica promotes it to become a self-index, which is essentially a full-fledged index that can be named using the variable's identifier. At the time of promotion, Analytica must deduce the index values for the dimension. In the [Cos(x), Sin(x)] example, the index value contains the expressions Cos(x) and Sin(x). Previously, Analytica deduced these index values by analyzing the parsed definition and trying to figure out which lists within the definition had introduced the implicit dimension. This was error prone, and it sometimes got it wrong, causing weirdness (such as an index that was a different length from the array indexed by it). Over the years, the algorithm improved so that this occurred less often, but it still did occur, and was often a reason to avoid using an explicit list inside an expression. In 4.2, the identification of index values is done in a totally different fashion that does not require analyzing the parsed definition, but instead carries the null-dimension's index values through the computation, which avoids the mistakes made by the earlier approach.
- The coerce text qualifier will now coerce a handle to text (returning the identifier as the text).
- You can declare a local variable to allow the implicit dimension using the syntax:
Var x[Null] := ...
- Using Null where an index is expected is now interpreted as the implicit (aka null-) dimension.
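- For example, a small sketch that combines this with the Size enhancement described under Array/List Functions below:
( Var x[Null] := [10, 20, 30];  { x is allowed to carry the implicit dimension }
  Size(x, ListLen: true) )      { → 3, the length of the implicit dimension }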
Expression Evaluation
- A result is now returned for several cases of x^y, where x is negative and y is not an odd integer and the result is real-valued. For example, (-27)^(1/3) → -3. In Analytica 4.1, such cases always resulted in NaN. The theory and scope of these cases is described at Exponentiation of negative numbers.
- You can tell the reference operator to swallow the implicit dimension using Null where an index is expected. For example:
\[I, J, Null]A
- consumes the three dimensions, I, J, and the implicit dimension, leaving any additional dimensions outside of the reference.
- The evaluation of Dynamic recurrences has been reworked in a couple fundamental ways.
- User-Defined Functions can now be part of a dynamic loop.
 - When a UDF is part of a dynamic loop, and is called during the evaluation of that loop from a variable in the loop, the expressions Time and @Time, when evaluated from a value context, evaluate to a single atomic value -- the current dynamic time point. (The same holds for other non-[Time] dynamic indexes.)
 - Suppose a UDF belongs to a dynamic loop, and the identifier X appears in the definition of the UDF (in a value context). If X belongs to the same dynamic loop as the UDF, then X is evaluated in the dynamic context, so that the result of evaluating X is X[Time = Time] -- i.e., the value sliced along the Time index at the current dynamic time being evaluated. On the other hand, when X is not part of the dynamic loop that contains the UDF, the dynamic context is dropped when evaluating X, so the result might be indexed by Time.
 - Note: In a UDF inside a dynamic loop, there may be times when you want only the X[Time = Time] slice of a variable that is outside the loop. In that case, write X[Time = Time] explicitly.
- You can use Dynamic[T](...) to define a dynamic recurrence over an index other than Time. For example:
Dynamic[Year](5.6, Self[Year - 1] * (1 + inflation_rate))
- (value of obj) -- Now returns the cached (mid) value of obj at the moment this is evaluated, even if it is not fully computed. It no longer forces the full evaluation of the result before returning the value. For the equivalent of the former functionality, use Mid(obj). Likewise, (probValue of obj) returns the currently cached sample value without forcing evaluation -- use Sample(obj) for the equivalent of the former functionality.
New Functions and Function Enhancements
Regular Expressions
- Regular Expressions are a powerful pattern-matching language in their own right that can greatly simplify tasks such as parsing data files. Analytica 4.2 now includes the power of the Perl Compatible Regular Expression (PCRE) library, (c)2008 University of Cambridge.
- FindInText(...,re: true) -- treats the pattern as a regular expression. You can also have named patterns, and return numerous different things, such as the position, length, matching text, etc., for each pattern.
- FindInText(..., repeat: true) -- returns all matches, rather than just the first one. It creates a local index, or you can supply the index for the results in the optional parameter «repeatIndex».
- SplitText(..., re: true) -- interprets the separator as a regular expression, making it possible to split text in a flexible way when your input data has some variation, as often occurs when parsing formats like XML or HTML.
- TextReplace: Interprets pattern as a regular expression, and then allows the matching text for subpatterns to be substituted into the result at matching positions.
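- A few hedged illustrations of these options (results sketched from the descriptions above; exact defaults may differ):
FindInText('\d+', 'abc 123 def 45', re: true)                { position of the first numeric match }
FindInText('\d+', 'abc 123 def 45', re: true, repeat: true)  { all matches, along a new local index }
SplitText('a, b,c', ',\s*', re: true)                        { → ['a', 'b', 'c'] }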
Sorting
- Sort(A, I) -- A new convenience function that returns the sorted array directly, as opposed to SortIndex, which returns the index ordering for the sort. Supports multi-key sorts, ascending/descending sorts on each key, and case-sensitivity options.
- SortIndex: Enhanced to support multi-key sorts, ascending/descending options (on each key), and a case-insensitivity option for textual sorts.
- SortIndex(..., position: true) -- the positional dual to SortIndex. Normally SortIndex returns the elements from the index. With the optional position parameter set to true, it returns the positions.
- Rank: Supports multi-key ranking and case-insensitive comparison for textual values.
- Rank: Enhanced to support another rank type: unique rank (set parameter rankType: Null). Unique rank ensures that every element has a different rank, so that ties are broken arbitrarily. In some cases, this allows you to keep all elements when using the result of Rank as a mapping.
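- For example, a minimal sketch (the caseInsensitive parameter name here is an assumption):
( Index I := 1..4;
  Var A := Array(I, ['pear', 'Apple', 'fig', 'apple']);
  Sort(A, I, caseInsensitive: true) )  { the sorted array directly, ignoring case }
SortIndex(A, I, position: true)        { the same ordering, but as positions rather than elements of I }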
Array/List Functions
- Concat: The array concatenation function has been improved to be much easier to use. It can be reliably composed to concatenate more than two arrays, e.g.,
Concat(Concat(A, B, In1, In2), C, In3)
- and you can omit the final K index to let it create a local result index.
- ConcatRows: Now a built-in function. Also, the result index is optional; if omitted, a local index is created. Composes well with Concat.
- Sequence(M, N, strict: true): An option for a "strict" sequence. The Sequence function has always generated a forward sequence when M<N, and a reverse sequence when M>N. In many cases, one wants a strict sequence, where if M>N, an empty list results. In a strict sequence, such as:
For i:= Sequence(5, 4, strict: true) Do...
- no iterations would occur. In a non-strict sequence, the iterations with i=5 would be followed by i=4.
- Subset(..., position: true): The positional dual to Subset. Normally Subset returns the elements of the index, identifying which items satisfy the condition. With the optional position parameter set to true, it returns the positions of the elements. See Associative vs. Positional Indexing for more on this distinction. This enhancement, along with the same for SortIndex, means that all relevant built-in functions that return index values now have both associative and positional variations.
- MdTable accepts 1-D statistical functions for the aggregation, like Mean, SDeviation, Variance, Kurtosis, etc.
- Aggregate -- new function for reindexing from a fine-grained index (e.g., Months) to a coarse index (e.g., Years).
- Size(A, ListLen: true) -- returns the length of the implicit dimension of A. In Analytica 4.1, it was quite challenging to determine the length of the implicit dimension. The length of explicit dimensions was easy to obtain since we could refer to them by name -- for example, Size(IndexValue(I)) or Sum(1, I) gives the length of the index I. But when A has an implicit dimension, there was no explicit way to obtain its length. Also, there was no reliable mechanism for obtaining the number of repeated parameters in a function. This extension to the Size function remedies those gaps.
- IsNull(x): Returns 1 when x is exactly Null, 0 otherwise. It does not array abstract, so it is not equivalent to x = Null. If x is an array, even an array of Nulls, it returns 0. This is useful when checking whether an attribute is set, which is otherwise awkward to do.
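- For example:
IsNull(Null)          { → 1 }
IsNull([Null, Null])  { → 0 -- an array, even one containing only Nulls, is not exactly Null }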
- Functions Integrate, Area and Normalize now compute the integrals in a meaningful way when the points passed in are not already sorted along x. They also allow the arrays to contain null values. The parameter names for Area were changed to be more consistent.
Time/Date Functions
- Date Sequences: Sequence can now be used to generate a sequence of dates from a given starting date, extending as far as but no further than a given stop date. The optional increment can be in any of the standard date units, such as "M" for month:
Sequence(start, stop, dateUnit: "M")
- MakeDate allows the month to extend past 12, as a convenience for creating a month sequence, e.g.:
MakeDate(year, 1..36, "M")
- MakeTime previously ignored fractional seconds, but now will include them. E.g., MakeTime(12,15,33.34)
- YearFrac: Gives the fraction of a year elapsed between two dates, according to various accounting year/date bases. Many financial functions found in Excel are based on this difficult-to-replicate function. Analytica's YearFrac replicates Excel's.
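- For example (a minimal sketch; the final argument of YearFrac follows Excel's basis convention, where 0 denotes US 30/360):
Sequence(MakeDate(2009, 1, 1), MakeDate(2009, 12, 31), dateUnit: "M")  { the 12 month-start dates of 2009 }
YearFrac(MakeDate(2009, 1, 1), MakeDate(2009, 7, 1), 0)                { → 0.5 on a 30/360 basis }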
Excel Integration
- Functions to read directly from Excel Spreadsheets: SpreadsheetOpen, SpreadsheetCell, SpreadsheetRange. These were present in 4.1 (with slightly different names), but were labelled "experimental".
- Functions to write directly to Excel spreadsheets: SpreadsheetSetCell, SpreadsheetSetRange, SpreadsheetSave.
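- A minimal round-trip sketch (the file path, sheet name, and exact parameter order are assumptions -- check the function reference):
Var wb := SpreadsheetOpen('C:\Models\inputs.xlsx');
Var unitCost := SpreadsheetCell(wb, 'Sheet1', 'B', 2);     { read cell B2 }
SpreadsheetSetCell(wb, 'Sheet1', 'C', 2, unitCost * 1.1);  { write an adjusted value into C2 }
SpreadsheetSave(wb)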
Table Functions
- DetermTable: No longer evaluates the cells that aren't actually selected. Thus, if there are some options that require substantial computational resources, or that might result in errors, as long as they aren't selected by the discrete variables of the DetermTable, they don't get computed.
- When you paste data into an edit table containing and displaying Choice pulldowns, the cell remains a choice and selects the pasted value. Previously the choice pulldown would be replaced by the pasted value and would no longer be a choice cell.
- Access to unselected DetermTable and ProbTable values. Using the SubTable function with optional parameters, it is now possible to access the full table of computed values for a DetermTable or ProbTable, before any selection has taken place. In the case of a ProbTable, you can access the actual probability numbers (after they've been computed, since they might be defined by expressions) before selection based on parent discrete variables occurs, or you can access the sample prior to selection. This ability opens up several meta-inferential possibilities.
Database
- DbQuery(connection, sql, key): An optional key parameter gives you control over what values are used for the row index. Instead of having to use 1..N for the row index returned by DbQuery, key can identify a column of your result data, and the values in that column become the index values.
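- For example (connection string and table hypothetical):
Index PartId := DbQuery('DSN=myDb', 'SELECT id, name FROM parts', key: 'id')
Variable PartName := DbTable(PartId, 'name')  { a column indexed by the id values rather than by 1..N }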
- ReadFromUrl: A new function that downloads content directly from HTTP, HTTPS, FTP or GOPHER servers.
- The previously required empty parameters to SqlDriverInfo and DbTableNames are now optional, and can be omitted the way optional parameters are omitted in other Analytica functions.
- New ReadImageFile function.
Financial
- MIrr, XMIrr: New functions for computing the modified internal rate of return. Although the internal rate of return is a standard metric for evaluating cash flows over time, it is seriously flawed for that use. The modified IRR remedies many of the problems inherent to Irr, while still providing an intuitive rate-of-return measure. See the IRR and MIRR webinars, especially Part 3 (IRR.wmv), which covers the downsides of Irr and how MIrr is used.
- Npv has a new optional parameter, offset. Using offset: 0 treats the first value as occurring in the zeroth period, as is usually done in cash flow models.
- Npv will now handle a non-constant discount rate (i.e., time-varying discount rate).
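- For example (a minimal sketch):
Index Yr := 2010..2014
Variable Flows := Array(Yr, [-100, 30, 40, 50, 60])
Npv(8%, Flows, Yr, offset: 0)  { the -100 falls in period zero and is not discounted; the rate could also vary along Yr }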
Hypothetical evaluation functions
- The hypothetical evaluation functions, such as WhatIf, WhatIfAll, Dydx, Elasticity, and NlpDefine, operate by substituting one or more values into input variables and evaluating downstream output variables. In 4.1 and earlier, this had the effect of invalidating all previously computed results downstream of the input variable(s). It also made it virtually impossible to have two hypotheticals with a shared input computed at the same time (each time one was computed, the other became invalidated). In 4.2, these functions remember and restore all previously computed results, so that after their computation completes, previously computed values are still intact. This allows the computation of hypotheticals with a common parent to be simultaneously complete and cached. It does require more memory during the computation of the hypothetical than previously (since the original computed values are retained), so an optional parameter called preserve has been added to all these functions, which can be set to false to obtain the previous behavior.
- Dydx and Elasticity can now be meaningfully applied in sample mode to x variables that are defined by a probability distribution. Previously, if the x variable was defined as a probability distribution, the result of these functions was meaningless. This is because it would resample after the delta was applied -- for example, Dydx would evaluate x and x+Δ, but each evaluation in sample mode generated an independent sample so the differencing resulted in garbage. Dydx and Elasticity now work with the original sample stored in the probValue attribute, so that the uncertain derivative is meaningful.
- Given a handle to a function object, the function Evaluate(..) can now be used to call that function, supplying parameters as necessary.
Math Functions
- Round(x, digits): For digits less than or equal to 6, now guarantees exact rounding. The result is exact, even in cases where the resulting number cannot be represented exactly as a 64-bit IEEE binary floating point number (known as a double).
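- For instance:
Round(0.1 + 0.2, 1)        { → 0.3, even though the floating-point sum is 0.30000000000000004... }
Round(0.1 + 0.2, 1) = 0.3  { → 1 (true) }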
User Interface
List/Index editing
- Drag/drop re-ordering of list/index elements: If you want to change the ordering of elements in list or list-of-label definitions, simply drag and drop the element(s) in the list view to a new position. All downstream edit tables automatically shift their rows to the new positions.
- List entries and index definitions no longer regenerate their definitions. If you type expressions for list elements, like 1+2*3, these aren't converted to (1+(2*3)) as previously occurred.
- If you first set the metaOnly attribute to 1, then you can enter identifiers of any object into a list, including identifiers that would normally not result in valid expressions. These include function identifiers (without any parens following), module identifiers, picture, button, text node identifiers, etc.
Tables
- Edit tables and index lists now allow and support multi-line cells.
- New lines are used only when a new-line character, Chr(13), appears in the cell.
- When typing, ALT-Enter can be used to insert a new line within a cell.
- The internal representation for table definitions has been massively reworked for greater efficiency with very large edit tables (e.g., edit tables with a million-plus cells). These efficiencies result:
- Changing a single cell in an extremely large table is now fast.
- Copy/pasting huge amounts of data into a table, e.g., from Excel, is 1 to 2 orders of magnitude faster.
- Assignment of a large array value to a variable (from a button script), which creates an edit table, is much faster.
- Double clicking on a handle displayed in a result table now opens the diagram containing that object and selects the indicated object. In 4.1 this would open the object window. A new preference attribute, Att_hyperlinkPref, can be used to alter this behavior, e.g., if you want the object window to open instead, or you want handles to modules to open the module's diagram, rather than to select the module node.
- There is a new typescript command, MakeChangesInTable, that can be used to make incremental changes to an edit table, DetermTable, ProbTable or IntraTable. It isn't really intended for end-user use -- it is used internally by crash recovery so that when you change a few cells in a huge edit table, only those changes are logged to the incremental-change log, rather than writing out the entire definition. Nevertheless, someone will likely find MakeChangesInTable useful.
- In the Index Chooser dialog (accessed by clicking the icon in the upper right corner of an edit table), when adding a new index to a table there is now an option to extend the existing value to all slices across the new index (which ensures that the new value is equivalent to the original), or to use the existing value only for the first slice of the table.
- It is now possible to include Checkbox controls in the cells of an edit table.
Diagram
- Two new attributes, TemplateInput and TemplateOutput, can be defined for a module node. You would usually set these to variables that reside within the module. By utilizing them, you can essentially encapsulate a sub-model and turn the module into a template module, which you can then duplicate and connect up with other models.
- The variable listed in TemplateInput acts as the input for the module (template). When you draw an arrow to the module node, it is equivalent to drawing the arrow to the input variable. By defining your input variable as a list or as a function that accepts a repeated parameter, you can graphically add inputs to the module.
- The variable listed in TemplateOutput acts as the module's (module template's) output. Drawing an arrow from the module node to another node is equivalent to drawing the arrow from the output variable to the other node. Also, the result button becomes active when the module is selected, and shows the result of the output variable.
- Model building by mouse describes an approach to building models in an entirely graphical fashion, exclusively by copying nodes and drawing arrows. This feature with TemplateInput and TemplateOutput substantially expands that capability by allowing new "components" to be encapsulated as module templates.
- When you draw an arrow from an index to a variable currently defined as an atomic number or text value, Analytica now asks whether you want to convert the variable to an edit table with the given index. You also have the option of using the existing scalar value for all cells (so that the new definition is equivalent to the original scalar definition), or setting only the first cell to that value.
- When you draw an arrow from an index to an edit table, the question about whether you want to add the index as an index of the table lets you select whether to use the existing value for all slices, or for only the first slice.
- When you enter a distribution into the definition of a variable, Analytica will transform the variable node into a chance node. Analytica 4.1 would also do this when you used the Random function with a distribution specified, such as Random(Normal(0,1)). In Analytica 4.2, Random no longer causes the transformation to a chance node (i.e., random is not the same as uncertain).
- Several subtle changes to selected input and output controls in browse mode, which make input/output forms behave more like a Windows application. For example, when selected in browse mode, the input control or button is selected, rather than the entire node. Tab jumps directly into the input field, and if an input field contains a quoted string, the portion between quotes is automatically selected.
- By defining a variable as Checkbox(0), you can create a checkbox input control on a form.
- A list input control now opens the object window. Previously it jumped to the diagram where the original list variable resided and opened that diagram's attribute panel.
Results
- When selecting which probability bands to show in a Bands result view, we've added 2.5% and 97.5% percentiles to the list of options.
Graphs
- Axis range settings are now stored for each separate column along a comparison index.
- There is a new option for stacked line graphs, including stacked line filled area graphs.
- You can change the color of a data series by right clicking on a data point on the graph and selecting Change Series Color... from the right-mouse menu.
- When using a categorical comparison variable for the independent (X) axis role (the vertical axis when Swap XY is on, the horizontal axis otherwise), the labels now retain the original ordering (along the common index) as long as the comparison variable does not contain a graph-key index (such as color) for the slice of the result being graphed. Under this condition, each position along the running index has a unique X-axis label, so the data can be plotted in its original order with a unique label at each X-axis tic mark. This change gives you more control over the actual sort order of the categorical axis labels. It also makes it possible to create a graph where the x-axis labels change as you toggle through each slicer value.
- Note (these remaining cases are unchanged from 4.1): If the comparison variable used for the X-axis is categorical, but contains an index found in the key of the graph (such as the clustering index for a bar graph), then the axis labels are sorted (or the domain ordering of the labels is used if it has one, which also provides a way to control the sort order). In this latter case, there may be many different labels for a given position along the running index, so a fixed ordering allows the x-axis to be labelled unambiguously. Also, if the X-axis dimension is continuous, the axis is ordered numerically, rather than in the order of the data.
Misc
- There is a new Memory Usage Dialog, with both a terse and an expanded view, showing more information about the virtual memory and RAM in use by Analytica.
- A feature on the expanded view of the Memory Usage Dialog allows you to watch which variable, and which dynamic context is currently being evaluated. Turning this on slows down evaluation speed dramatically, but may provide insight.
- The Script attribute now appears on the attribute pane for a variable node defined as a Choice pulldown, and on the object window when the Script attribute is checked in the Attributes... dialog, and the variable is defined as a Choice pulldown. The script can contain arbitrary typescript that will be executed whenever the pulldown selection is changed.
- A new function parameter qualifier, recommended, can be used as an equivalent to optional for evaluation purposes. However, the user interface uses this to decide whether to include the parameter when pasting in a function template, either by selecting the function from the Definition menu, or via the Object Finder.
- We've eliminated the annoyance in which Excel would throw up a message box saying "The picture is too large and will be truncated" whenever you attempted to copy a large range of cells from Excel while Analytica was running.
- If you create a recursive User-Defined Function without marking the recursive attribute explicitly, when you enter the definition the error/warning message now asks whether you intended it to be recursive, and marks the attribute for you if so, eliminating the hassle of setting the recursive attribute to 1 yourself.
- When an evaluation error occurs and you press [Edit Definition], Analytica will now usually position the cursor in the sub-expression that caused the error.
- Analytica 4.2 contains a capability to show a more information hyperlink with error messages. Clicking on the hyperlink jumps to a page on the Analytica Wiki dedicated to that specific error, where detailed information and hints about the error can reside. This feature is off by default in release 4.2 (because there was not much content on error messages when 4.2 was released), and was enabled in Analytica 4.3 and later.
Example Models and Libraries
- A new and improved Foxes and Hares example model. This is the model used in the Tutorial to introduce Dynamic.
Memory Management
- Caching Control: You can now control how and when result values are cached. It is possible to configure particular variables to always cache their values (the default), never cache, or release their cached values once all children are fully computed.
- Deeper nesting: The number of nested evaluations, formerly limited to 256, is now limited by available stack space. This primarily impacts recursive functions, which can now recurse much deeper than before. In the x64 edition, the number of recursions will be greater than in x86. The number of recursions depends somewhat on the specific expressions being processed, but will often be 10 to 50 times deeper than previously possible before stack space is exhausted.
- The internal representation of edit tables (and DetermTables, ProbTables, and IntraTables) has been changed, which allows greater efficiency with extremely large edit tables, as well as faster assignment to global variables (from button scripts).
- A substantially expanded Memory Usage Dialog provides more information on RAM and virtual memory usage.
- The hypothetical-style functions (WhatIf, Elasticity, Dydx, WhatIfAll, NlpDefine) now restore the original computed values that were present prior to evaluation. If the extra space required to preserve those values puts you over the available memory, this functionality can be suppressed with an optional parameter.
- There are several new options to GetProcessInfo for accessing system memory statistics.
- Management of Analytica's working set size (in Windows 64-bit or Windows Server) can help keep other applications responsive even when a memory-intensive Analytica model is evaluating. Some registry settings give you control of this. See Working Set Size.
Optimizer
- The solver add-on engines (sold by Frontline at extra cost) can be used with 4.2. We now have these engines in-house and have tested them with Optimizer. These include: Frontline Large-scale GRG (LSGRG), Large-scale LP (LSLP), LSSQP, XPress, MOSEK, OptQuest, and Knitro. Some of these engines are known for their high performance (e.g., XPress is one of the best for LP/MIP, and Knitro is considered king of NLP), and handle far more decision variables and constraints.
Player
- Player 4.2 allows data to be pasted from an external application into input variables and input tables.
- The nag dialog shown when exiting Player now contains a "Don't show this again" checkbox.
- There is now a CD Player -- an edition that can be burned to a CD, along with your model, and given to your clients to use without any installation. The CD Player edition does not read or write the system registry and makes no changes to the file system, so your model is fully contained on the CD.
Esoteric
- Two new typescript commands: SkipOverNextLines and MakeChangesInTable.
- Several new options to the AnalyticaLicenseInfo function for accessing the components of the current Reprise-style license, and for checking out a new license from an Analytica expression.
- When saving models in standard model format, lines are no longer limited to 70 characters. In the early days, before email attachments were the norm, model files were sent as text in the body of email messages, and had to be limited to 70 characters to avoid line breaks inserted by email readers. Since that "feature" is now obsolete, the auto-break with double-tilde continuation in model files is turned off. Model file lines may now be arbitrarily long, making them easier to interpret by humans and by programs you might write to parse them.
- The Sys_PreLoadScript attribute can be used to specify typescript that is executed just prior to loading the object it is associated with into memory.
Changes that could "break" existing models
The vast majority of models created with Analytica 4.1 and earlier should load and evaluate in Analytica 4.2 with no changes required in the model, nor changes in the computed result. However, with any new build, there is always a possibility that your model was taking advantage of a bug that was fixed, or using a feature that has been impacted by an enhancement. This section attempts to highlight those enhancements that could potentially impact some existing models.
- DetermTable and ProbTable optimizations: New optimizations in these functions avoid evaluating cells that aren't selected. For DetermTables containing distributions, and for ProbTables, this means that in some cases fewer random numbers will be generated during evaluation, so that the random number sequence used in Monte Carlo simulation could differ from what it is when your model is evaluated in Analytica 4.1. In both cases, the results would be valid samples, but with slightly different sampling error.
- Assignment to Variable parameter in a User-Defined Function. If you have created a user defined function that assigns to a parameter variable that is declared as type Variable, the semantics has changed. The example from above, repeated here, demonstrates. In this example, an assignment is made to x, a local parameter declared as type Variable:
Function ChangeIt(x: Variable)
Definition: x := x + 1
- High-memory WhatIf evaluations. If you are using WhatIf for large-scale parametric sampling, where one evaluation barely fits within available memory, it is possible that the preservation of previously computed values could prevent your computation from fitting within available memory. If this happens, you can modify your call to WhatIf by adding the parameter: preserve:false. (The same holds for WhatIfAll, Dydx, Elasticity, and NlpDefine, although this is less likely to appear for these).
- (value of x) or (probValue of x): If you access the value or probValue attributes -- in an expression, typescript, button script, or in ADE via CAEngine::SendCommand() -- the functionality has changed. These now return whatever cached value is currently there, without forcing evaluation; formerly they would force x to evaluate before returning the value. If you relied on the forced evaluation, replace these with Mid(x) and Sample(x) respectively.
- The parameter names to Area were changed -- previously Area(r, i, x1, x2, j), now Area(y, x, x1, x2, i). If you used a named-parameter syntax in an existing model when calling Area, you'll get an easily corrected syntax error. (extremely rare)
Forward compatibility
This refers to the ability to load models built in 4.2 into earlier releases of Analytica, such as Analytica 4.1.
Models stored from 4.2 in standard Analytica file format can be read into earlier releases of Analytica. A warning will appear, correctly noting that the model was created in a newer release. Any new 4.2 functionality that is utilized by the model will, of course, not work in the earlier release. Also, because a great many bugs were fixed in 4.2, even if you avoid new 4.2 enhancements, the possibility exists that your 4.2 model could encounter bugs in 4.1 that don't occur in 4.2. The changes in the previous section are also salient.
Models stored from 4.2 in the XML file format cannot be loaded into Analytica 4.1 or earlier releases. This is due to a bug in release 4.1, which was remedied in 4.2, so that 4.2 will be able to load XML-format model files saved from 4.3 and later. You can get around this limitation in two ways. You can hand-edit the XML file in a text editor and spoof the softwareversion on the second line to make it look like it was saved from 4.1.2, or you can use Save As... to resave your model in the standard, non-XML format before attempting to load it into 4.1.