Tutorial videos


New: I've attempted to impose some categorization on the past user group webinar topics.

The most recent talks were:


Table and Array Topics

The Basics of Analytica Arrays and Indexes

This webinar is presented across two sessions.

Date and Time (Part 1): January 10, 2008, 10:00 - 11:00 Pacific Standard Time

Date and Time (Part 1, repeat): January 11, 2008, 10:00 - 11:00 Pacific Standard Time

Date and Time (Part 2): January 17, 2008, 10:00 - 11:00 Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

This introductory talk covers the basic concepts of Analytica indexes and multi-dimensional arrays, as well as the basics of Intelligent Array Abstraction. There are several important differences between Analytica arrays and the multi-dimensional arrays found in other modeling, database, and programming environments. For example, each dimension of an array is associated with an index object, and there is no inherent ordering to the dimensions of a multi-D array. Intelligent array abstraction is perhaps the most powerful feature in Analytica. The session includes a brief description of what array abstraction does and how you should take advantage of it.

Part 1 focuses on indexes, 1-D arrays and the uses of the Subscript/Slice Operator.
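
For illustration, here is a minimal sketch of associative subscripting and positional slicing (the index Year, the array Revenue, and the values are invented for this example):

    Index Year := [2006, 2007, 2008]
    Variable Revenue := Array(Year, [100, 150, 210])
    Revenue[Year = 2007]     { associative subscript by index value → 150 }
    Revenue[@Year = 3]       { positional slice by position along Year → 210 }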

Part 2 focuses on array functions, multi-D arrays, and the principles and philosophy of arrays in Analytica.

This talk is intended for beginning Analytica modelers, and for people who have been using Analytica without making substantial use of its array features.

A recording of the two sessions can be viewed at (requires Windows Media Player):

An Analytica model containing the examples created during the webinar can be downloaded from Intro to intelligent arrays.ana. The Plane catching decision with EVIU.ana model was also used briefly during part 1.


Manipulating Indexes and Arrays in Analytica Expressions

Date and Time: Thursday, Aug 9, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

In this webinar, I will review many of the common operations applied to indexes and arrays from within Analytica expressions, with a particular emphasis on enhancements that are new to Analytica 4.0. I'll review the often-used and very powerful Subscript and Slice operations, along with the duality of associational and positional indexing. I'll cover the new extensions for positional indexing, such as the @I, A[@I=n], and @[I=n] operations, and extensions that expose positional duals of various previously existing associational array functions. I will describe the distinction between index and value contexts in Analytica expressions, along with the distinction between a variable's index value, mid value and sample value, how these may differ (Self-Indexed Arrays), and how we may access each context-value explicitly. I will also introduce slice assignment -- the ability to assign values to individual slices of an array within an algorithm.
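
As a quick illustration of the positional operators mentioned above (the index and array here are hypothetical):

    Index Fruit := ['Apple', 'Banana', 'Cherry']
    Variable Price := Array(Fruit, [1.20, 0.55, 3.00])
    @Fruit                  { positions along Fruit → [1, 2, 3] }
    @[Fruit = 'Cherry']     { position of 'Cherry' within Fruit → 3 }
    Price[@Fruit = 2]       { positional subscript → 0.55 }
    Price[Fruit = 'Apple']  { associational subscript → 1.20 }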

The content of this webinar is most appropriate for users with moderate to advanced Analytica model-building experience.

Here is the Analytica model that was created during this talk: "Indexes and Arrays UG2.ANA". (This wouldn't be very interesting for someone who didn't attend, but it contains the examples we tried).


Local Variables

Date and Time: Thursday, 23 July 2009, 10:00-11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

I'll explain the distinctions between different types of local variables that can be used within expressions. These distinctions are of primary interest for people implementing Meta-Inference algorithms, since they have a lot to do with how Handles are treated. Analytica 4.2 introduces some new distinctions among the types of local variables, designed to make the behavior of local variables cleaner and more understandable. One type of local variable is the LocalAlias, in which the local variable identifier serves as an alias to another existing object. In contrast, there is the MetaVar, which may hold a Handle to another object, but does not act as an alias. The only local variable option that existed previously, declared using Var..Do, is a hybrid of these two, which leads to confusion when manipulating handles. Since LocalAlias..Do and MetaVar..Do have very clean semantics, using them when writing Meta-Inference algorithms should help to reduce that confusion considerably. Inside a User-Defined Function, parameters are also instances of local variables, and depending on how they are declared, may behave as a MetaVar or LocalAlias, so I'll discuss how these fit into the picture, as well as local indexes.
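
A rough sketch of the three declaration forms (Revenue and Year are hypothetical objects, and the exact semantics of MetaVar and LocalAlias are as described above, so treat this purely as a syntax illustration):

    Var x := Sum(Revenue, Year) Do x^2                 { ordinary local value }
    MetaVar h := Handle(Revenue) Do h                  { h holds the handle itself; it is not an alias }
    LocalAlias a := Handle(Revenue) Do a[Year = 2007]  { a acts as an alias for the Revenue object }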

This is appropriate for advanced Analytica modelers.

You can watch a recording of this webinar at Local-Variables.wmv. The Analytica file from the webinar is at Local Variables.ana, where I've also implemented the exercises that I suggested at the end of the webinar, so you can look in the model for the solutions.

Array Concatenation

Date and Time: Thursday, 25 June 2009, 10:00 - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Array concatenation combines two (or more) arrays by joining them side-by-side, creating an array having all the elements of both arrays. The special case of list-concatenation joins 1-D arrays or lists to create a list of elements that can function as an index. Array concatenation is a basic, and common, form of array manipulation.

The Concat function has been improved in Analytica 4.2, so that array concatenation is quite a bit easier in many cases, and the ConcatRows function is now built-in (formerly it was available as a library function).

I'll take you through examples of array concatenation, including cases that have been simplified with the 4.2 enhancements, to help develop your skills at using Concat and ConcatRows.
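
A sketch of what this looks like (the indexes, arrays, and values are made up for illustration; check the Concat documentation for the exact parameter order):

    Index H1 := ['Jan', 'Feb', 'Mar']
    Index H2 := ['Apr', 'May', 'Jun']
    Variable Sales1 := Array(H1, [10, 12, 9])
    Variable Sales2 := Array(H2, [11, 14, 13])
    Index Half_year := Concat(H1, H2)              { list concatenation → a 6-month index }
    Variable All_sales := Concat(Sales1, Sales2, H1, H2, Half_year)   { joins the arrays along Half_year }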

This webinar is appropriate for all levels of Analytica modelers.

You can view a recording of this webinar at Array_Concatenation.wmv. The model file created during the webinar is: Array_Concatenation.ana.

Flattening and Unflattening of Arrays

Date and Time: January 31, 2008, 10:00 - 11:00 Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

On occasion you may need to flatten a multi-dimensional array into a 2-D table. The table could be called a relational representation of the data; in some circles it is also referred to as a fact table. Or you may need to convert in the other direction -- expanding, or unflattening, a relational/fact table into a multi-dimensional array. In Analytica, the MdTable and MdArrayToTable functions are the primary tools for unflattening and flattening. In this session, I'll introduce these functions, show how to use them, and walk through several examples and variations.
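
As a rough sketch (assuming a 3-D array Sales indexed by Region, Product and Year already exists; check the function documentation for the optional parameters):

    Index Row := 1 .. Size(Sales)                          { one row per cell of the array }
    Index Col := ['Region', 'Product', 'Year', 'Amount']   { the array's indexes plus a value column }
    Variable Fact_table := MdArrayToTable(Sales, Row, Col) { flatten to a 2-D relational table }
    Variable Rebuilt := MdTable(Fact_table, Row, Col)      { unflatten back to a multi-D array }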

The model developed during this talk is at Flattening_and_Unflatting_Arrays.ana. A recording of the webinar can be viewed at Array-Flattening.wmv

The Aggregate Function

Date and Time: Thursday, 2 July 2009, 10:00am - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Aggregation is the process of transforming an array based on a fine-grain index into a smaller array based on a coarser-grain index. For example, you might map a daily cash stream into monthly revenue (i.e., reindexing from days to months).

This has always been a pretty common operation in Analytica models, with a variety of techniques for accomplishing it, but it has just become more convenient with the Aggregate function, new to Analytica 4.2.
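
As a sketch of the daily-to-monthly example above (Daily_cash is a hypothetical variable indexed by Day; Aggregate sums by default):

    Index Day := Sequence(MakeDate(2009, 1, 1), MakeDate(2009, 12, 31))   { dates are integers, so this lists every day }
    Index Month := 1 .. 12
    Variable Month_of_day := DatePart(Day, 'M')              { maps each day to its month number }
    Variable Monthly_revenue := Aggregate(Daily_cash, Month_of_day, Day, Month)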

In the webinar, I'll be demonstrating the use and generality of the Aggregate function. In the process, it will also be a chance to review a number of other basic intelligent array concepts, including array abstraction, subscripting, re-indexing, etc.

This webinar is appropriate for all levels of Analytica modelers.

A recording of this webinar can be viewed at Aggregate.wmv. The model file created during this webinar is: Aggregate Function.ana.

Sorting

Date and Time: Thursday, 6 Aug 2009, 10:00am-11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This webinar will demonstrate the functions in Analytica that are used to sort (i.e., re-order) data -- the functions SortIndex, Rank, and Sort (new in 4.2). I'll cover the basics of using these functions, including how they interact with indexes, how to apply them to arrays of data, and their use with array abstraction. I'll then introduce several new 4.2 extensions for handling multi-key sorts, descending-order options, and case insensitivity.
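
A small sketch of these functions on a hypothetical index and array:

    Index Person := ['Amy', 'Bob', 'Cat']
    Variable Height := Array(Person, [160, 185, 172])
    Sort(Height, Person)                          { the values in ascending order: [160, 172, 185] }
    SortIndex(Height, Person)                     { the Person labels re-ordered by height: ['Amy', 'Cat', 'Bob'] }
    Height[Person = SortIndex(Height, Person)]    { re-orders the array itself }
    Rank(Height, Person)                          { 1 for the shortest, 3 for the tallest }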

This webinar is appropriate for all levels of Analytica modelers.

A recording of this webinar can be viewed at Sorting.wmv. The model file created during the webinar is at Sorting.ana.

Self-Indexes, Lists and Implicit Dimensions

Date and Time: January 24, 2008, 10:00 - 11:00 Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract


Every dimension of an Analytica array is associated with an index object. Array Abstraction recognizes when two arrays passed as parameters to an operator or function contain the same indexes. These indexes are most commonly defined by a global index object, i.e., an index object that appears on a diagram as a parallelogram node. However, variable and decision nodes can also serve as indexes, and can even have a multi-dimensional value in addition to being an index. This is referred to as a self-index. If a variable identifier is used in an expression, the context in which it appears always makes it clear whether the identifier is being used as an index or as a variable with a value. Self-indexes can arise in several ways, which I will cover. In rare cases, when writing an expression, you may need to be aware of whether you intend to use the index value or the context value of a self-indexed variable. I'll discuss these cases, for example in For..Do loops, and the use of the IndexValue function.

In some cases, lists may be used in expressions, and when combined with other results, lists can end up serving as an implicit dimension of an array. An implicit dimension is a bit different from a full-fledged index since it has no name, and hence there is no way to refer to it in an expression where an index parameter is expected. Yet most built-in Analytica functions can still be employed to operate over an implicit index. When an implicit index reaches the top level of an expression, it is promoted to a self-index. I will explain and demonstrate these concepts.
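
For example, a minimal sketch:

    Variable Roots := Sqrt([1, 4, 9, 16])
    { The list [1, 4, 9, 16] has no named index, so it forms an implicit dimension of the
      result; because it reaches the top level of the definition, it is promoted to a
      self-index of Roots. }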

The model developed during this talk is at Self-Indexes_Lists_and_Implicit_dimensions.ana. A recording of the webinar can be viewed at Self-Indexes-Implicit-Dims.wmv


Introduction to DetermTables

Date and Time: 18 September 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

A DetermTable provides an input view like that of an edit table, allowing you to specify values or expressions in each cell for all index combinations; however, unlike a table, the evaluation of a determtable conditionally returns only selected values from the table. It is called a determtable because it acts as a deterministic function of one or more discrete-valued variables. You can conceptualize a determtable as a multi-dimensional generalization of a select-case statement found in many programming languages, or as a value that varies with the path down a decision tree.

DetermTables can be used to encode a table of utilities (or costs) for each outcome in a probabilistic model. In this usage, they combine very naturally with ProbTables (probability tables) for discrete probabilistic models. They are also extremely useful in combination with Choice pulldowns, allowing you to keep lots of data in your model while using only a selected part of it for your analysis. This leads to Selective Parametric Analysis, which is often an effective way of coping with memory capacity limitations in high-dimensional models.

In this talk, I'll introduce the DetermTable, show how you create one, and describe the requirements for the table indexes. The actual "selection" of slices occurs in the table indexes. Not all indexes have to be selectors, but I'll explain the difference and how the domain attribute is used to establish the table index, while the value is used to select the slice. When you define the domain of a variable that will serve as a DetermTable index, you have the option of defining the domain as an index domain. This can be extremely useful in combination with a DetermTable, so I will cover that feature as well. It is helpful to understand how the functionality of a DetermTable can be replicated using two nodes -- the first containing an Edit Table and the second using Subscript. Despite this equivalence, a DetermTable can be especially convenient, both because it simplifies things by requiring one fewer node, and because an Edit Table can be easily converted into a DetermTable.
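
The two-node equivalent mentioned above might look like this (the index Car_type and the variable Chosen_car, e.g. a Choice pulldown, are hypothetical):

    Variable Cost_table := Table(Car_type)(18K, 25K, 40K)   { an ordinary edit table over Car_type }
    Variable Cost := Cost_table[Car_type = Chosen_car]      { Subscript selects the slice for the chosen option }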

You can watch a recording of this webinar at DetermTables.wmv. The examples created while demonstrating the mechanics of DetermTables are saved here: DetermTable intro.ana. Other example models used were the 2-branch party problem.ana and the Compression post load calculator.ana, both distributed in the Example models folder with Analytica, and the Loan policy selection.ana model.

Table Splicing

Date and Time: Thursday, August 14, 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Edit tables, probability tables and determ tables automatically adjust when the values of their indexes are altered. When new elements are inserted into an index, rows (or columns or slices) are automatically inserted, and when elements are deleted, rows (or columns or slices) are deleted from the tables. This process of adjusting tables is referred to as splicing.

Some indexes in Analytica may be computed, so that changes to some input variables could result in dramatic changes to the index value, both in terms of the elements that appear and the order of the elements in the index. This creates a correspondence problem for Analytica -- how do the rows after the change correspond to the rows before the change? Analytica can utilize three different methods for determining the correspondence: associative, positional, or flexible correspondence. I'll discuss what these are and show you how you can control which method is used for each index.

When slices (rows or columns) are inserted in a table, Analytica will usually insert 0 (zero) as the default value for the new cells. It is possible, however, to explicitly set a default value, and even to set a different default for each column of the table. Doing so requires some typescripting, but I'll take you through the steps.

Using blank cells as a default value, rather than zero, has some advantages. It becomes quickly apparent which cells need to be filled in after index items are inserted, and Analytica will issue a warning message if blank cells exist that you haven't yet filled in. I'll take you through the steps of enabling blank cells by default.

You can watch a recording of this webinar at Edit-Table-Splicing.wmv. (Note: There is a gap in the recording's audio from 18:43-27:35).

Analytica Web Player

Date and Time: Thursday, August 7, 2008, 10:00am Pacific Daylight Time

Presenter: Max Henrion, Lumina Decision Systems

Abstract

The Analytica Web Player (AWP) is scheduled to launch on July 31, 2008. AWP is a subscription service hosted on Lumina's servers. As a subscriber, you can upload your models to the server and send your colleagues a URL so that they can view your model. To view your models, they need only a Flash-enabled web browser. They can browse your model, change inputs, and evaluate results, all from within their web browser.

In this talk we'll cover the available subscription plans, pricing, limitations, and how to sign up. We'll also demonstrate the process of uploading models and sharing them with colleagues.

You can watch a recording of this webinar at AWP.wmv.

SubTables

Date and Time: Thursday, 31 July 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The SubTable function allows a subset of another edit table to be edited by the user as a different view. To the user, it appears just like editing any other edit table; however, the changes are stored in the original edit table. The subtable can present the data along different dimensions than the original, with different index element orders, can be based on Subset indexes, and can use different number formats.

A recording of this webinar can be viewed at SubTables.wmv. The model file from this webinar is at media:SubTable_webinar.ana.


Edit Table Enhancements in Analytica 4.0

Date and Time: Thursday, Aug 2, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

In this webinar, I will demonstrate several new edit table functionalities in Analytica 4.0, including:

  • Insert Choice drop-down controls in table cells.
  • Splicing tables based on computed indexes.
  • Customizing the default cell value(s).
  • Blank cells to catch entries that need to be filled in.
  • SubTables
  • Using different number formats for each column.

This talk is oriented toward modelers with Analytica model-building experience.

The Analytica session that existed by the end of the talk is stored in the following model file: "Edit Table Features.ana".

Modeling Time

Manipulating Dates in Analytica

Date and Time: Thursday, Sept. 13, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

In this talk, I'll cover numerous aspects relating to the manipulation of dates in Analytica. I'll introduce the encoding of dates as integers and the date origin preference. I'll review how to configure input variables, edit tables, or even individual columns of edit tables to accept (and parse) dates as input. I'll cover date number format capabilities in depth, including how to create your own custom date formats, understanding how date formats interact with your computer's regional settings, and how to restrict a date format to a single column only. We'll also see how axis scaling in graphs is date-aware.

Next, we'll examine various ways to manipulate dates in Analytica expressions. This includes use of the new and powerful functions MakeDate, DatePart, and DateAdd, and some interesting ways in which these can be used, for example, to define date sequences. Finally, we'll practice our array mastery by aggregating results to and from different date granularities, such as aggregating from a monthly sequence to years, or interpolating from years to months.
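
A few illustrative calls (the particular dates are arbitrary):

    Variable Start := MakeDate(2007, 9, 13)      { dates are encoded as integer day numbers }
    DatePart(Start, 'Y')                         { → 2007 }
    DateAdd(Start, 3, 'M')                       { the same day three months later }
    Variable Month_starts := MakeDate(2008, 1 .. 12, 1)   { a 12-element date sequence via array abstraction }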

The model file resulting by the end of the session is available here: Manipulating Dates in Analytica.ana.

You can watch a recording of this webinar here: Manipulating Dates.wmv (Windows Media Player required) Unfortunately, this one seems to have recorded poorly -- the video size is too small. If you magnify it in your media player, it does become readable. Sorry -- I don't know why it recorded like this.


The Dynamic Function

Date and Time: Thursday, 12 June 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The Dynamic function is used for modeling or simulating changes over time, in which values of variables at time t depend on the values of those variables at earlier time points. Analytica provides a special system index named Time that can be used like any other index, but which also has the additional property that it is used by the Dynamic function for dynamic simulation.

This webinar is a brief introduction to the use of the Dynamic function and to the creation of dynamic models. I'll cover the basic syntax of the Dynamic function, as well as various ways in which you can refer to values at earlier time points within an expression. Dynamic models result in influence diagrams that have directed cycles (i.e., where you can start at a node, follow the arrows forward and return to where you started), called dynamic loops. Similar cyclic dependencies are disallowed in non-dynamic influence diagrams.
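
A minimal sketch of a dynamic recurrence (a hypothetical growth model):

    Variable Growth_rate := 5%
    Variable Population := Dynamic(1000, Self[Time - 1] * (1 + Growth_rate))
    { Population is 1000 at the first Time point; each later point is the previous value
      grown by 5%, creating a dynamic loop over the Time index. }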

During the webinar, we'll look at several simple examples of Dynamic, oriented especially toward those of you with little or no experience using Dynamic in models. I'll provide some helpful hints for keeping things straight when building dynamic models. For the more seasoned modelers, I'll also try to fold in a few more detailed tidbits, such as some explanation of how dynamic loops are evaluated, and how variable identifiers are interpreted somewhat differently within dynamic loops.

The model developed (extension of Fibonacci's rabbit growth model) can be downloaded here: The Dynamic Function.ana. A recording of the webinar can be viewed at Dynamic-Function.wmv.



Modeling Markov Processes in Analytica

Date and Time: Thursday, Sept. 20, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Matthew Bingham, Principal Economist, Veritas Economic Consulting

Abstract

Mathematical processes characterized by dynamic dependencies between successive random variables are called Markov chains. The rich behavior and wide applicability of Markov chains make them important in a variety of applied areas, including population and demographics, health outcomes, marketing, genetics, and renewable resources. Analytica's dynamic modeling capabilities, robust array handling, and flexible uncertainty capabilities support sophisticated Markov modeling. In this webinar, a Markov modeling application is demonstrated. The model develops age-structured population simulations using a Leslie matrix structure and dynamic simulation in Analytica.
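
This is not the presenter's model, but a rough sketch of the kind of Leslie-matrix recurrence involved (the age classes, matrix entries, and initial population are invented for illustration):

    Index Age := ['0-1', '1-2', '2-3']
    Index AgeFrom := CopyIndex(Age)            { a copy of Age for the matrix's "from" dimension }
    Variable Leslie := Table(Age, AgeFrom)(0, 1.2, 0.8, 0.6, 0, 0, 0, 0.7, 0)
    Variable Pop := Dynamic(Array(Age, [100, 50, 20]),
        Sum(Leslie * Self[Time - 1][Age = AgeFrom], AgeFrom))   { matrix-vector product at each Time step }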

A recording of this session can be viewed at: Markov-Processes.wmv (requires Windows Media Player)

An article about the model presented in this webinar: AnalyticaMarkovtext.pdf

Analytica Language Features

Local Indexes

Date and Time: Thursday, Dec. 13, 2007 at 10:00 - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract


A local index is an index object created during the evaluation of an expression using either the Index..Do or MetaIndex..Do construct. Local indexes may exist only temporarily, being reclaimed when they are no longer used, or they may live on after the evaluation of the expression has completed, as an index of the result. Some operations require the use of local indexes and could not otherwise be expressed.

In this talk, I'll introduce simple uses of local indexes, covering how they are declared using Index..Do, with several examples. We'll see how to access a local index using the A.I operator. I'll discuss the distinctions between local indexes and local variables. I'll show how the name of a local index can be computed dynamically, and I'll briefly cover the IndexNames and IndexesOf functions.
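
For example, a minimal sketch of a local index that survives into the result:

    Variable Powers_of_two :=
        Index K := 0 .. 4 Do 2^K     { K is a local index; the result is an array indexed by K }
    { Elsewhere, the expression Powers_of_two.K uses the A.I operator to refer to that local index. }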

The model created during this talk is here: Webinar_Local_Indexes.ana.

You can watch a recording of this webinar at: Local-Indexes.wmv (Requires Windows Media Player)

Handles and Meta-Inference

Date and Time: Thursday, Dec. 6, 2007 at 10:00 - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Meta-inference refers to computations that reason about your model itself, or that actually alter your model. For example, if you were to write an expression that counted how many variables are in your model, you would be reasoning about your model. Other examples of meta-inference include changing the visual appearance of nodes to communicate some property, re-arranging nodes, finding objects with given properties, or even creating a transformed model based on a portion of your model's structure.

The ability to implement meta-inferential algorithms in Analytica has been greatly enhanced in Analytica 4.0. The key to implementing meta-inference is the manipulation of Handles to objects (formerly referred to as varTerms). This webinar provides a very brief introduction to handles and using them from within expressions. I will assume you are fairly familiar with creating models and writing expressions in Analytica, but I will not assume that you have previously seen or used Handles. This topic is oriented toward more advanced Analytica users.

The model used and created during this webinar is at: Handle and MetaInference Webinar.ANA.

You can watch a recording of this webinar at: Handles.wmv (Requires Windows Media Player)


The Iterate Function

Date and Time: Thursday, Nov. 29, 2007 at 10:00 - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

With Iterate, you can create a recurrent loop around a large model, which can be useful, for example, for iterating until a convergence condition is reached. Complex iterations, where many variables are updated at each iteration, require you to structure your model appropriately, bundling and unbundling values within the single iterative loop. With some work, Iterate can be used to simulate the functionality of Dynamic, and thus provides one option when a second Time-like index is needed (although it is not nearly as convenient as Dynamic).

In this session, we'll explore how Iterate can be used.

Here is the model file developed during the webinar: Iterate Demonstration.ANA

You can watch a recording of this webinar at: Iterate.wmv (Requires Windows Media Player)


The Reference and Dereference Operators

Date and Time: Thursday, Nov. 15, 2007 at 10:00 - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The reference operators make it possible to represent complex data structures like trees or non-rectangular arrays, bundle heterogeneous data into records, maintain arrays of local indexes, and seize control of array abstraction in a variety of scenarios. Using a reference, an array can be made to look like an atomic element to array abstraction, so that arrays of differing dimensionality can be bundled into a single array without an explosion of dimensions. The flexibility afforded by references is generally for the advanced modeler or programmer, but once mastered, it comes in useful fairly often.
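
A tiny sketch of the two operators:

    Variable R := \ [1, 2, 3]    { \ creates a reference; the array now looks like a single atomic value }
    Variable V := #R             { # dereferences, recovering the original 3-element array }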

Here is the model used during the webinar: Reference and Dereference Operators.ana. Near the end of the webinar, I encountered a glitch that I was not able to resolve until after the webinar was over. This has been fixed in the attached model. For an explanation of what was occurring, see: Analytica_User_Group/Reference_Webinar_Glitch.

You can watch a recording of this webinar at: Reference-And-Dereference.wmv (Requires Windows Media Player)


Writing User-Defined Functions

Date and Time: Thursday, Sept. 27, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

When you need a specialized function that is not already built into Analytica, never fear -- you can create your own User-Defined Function (UDF). Creating UDFs in Analytica is very easy. I'll introduce this convenient capability, and demonstrate how UDFs can be organized into libraries and re-used in other models. I'll also review the libraries of functions that come with Analytica, providing dozens of additional functions.

After this introduction to the basics of UDFs, I'll dive into an in-depth look at Function Parameter Qualifiers. There is a deep richness to function parameter qualifiers, mastery of which can be used to great benefit. One of the main objectives for a UDF author, and certainly a hallmark of good modeling style, should be to ensure that the function fully array abstracts. Although this usually comes for free with simple algorithms, it is sometimes necessary to worry about this explicitly. I will demonstrate how this objective can often be achieved through appropriate function parameter qualification.
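
As a small, hypothetical illustration of parameter qualifiers helping a UDF array-abstract (the qualifier spelling should be checked against the Parameter Qualifiers documentation):

    Function Normalize(a: Array[I]; I: Index)
    Definition: a / Sum(a, I)
    { Because a is declared as Array[I], any extra dimensions of the actual argument are
      handled by array abstraction; e.g., Normalize(Revenue, Year) would work even if
      Revenue also carried a Region index. }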

Finally, I will cover how to write a custom distribution function, and how to ensure it works with Mid, Sample and Random.

This talk is appropriate for Analytica modelers from beginning through expert level. At least some experience building Analytica models and writing Analytica expressions is assumed.

The model created during this webinar, complete with the UDFs written during that webinar, can be downloaded here: Writing User Defined Functions.ana.

You can watch this webinar here: Writing-UDFs.wmv (Windows Media Player required)

Custom Distribution Functions

Date and Time: Thursday, 24 July 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Analytica comes with most of the commonly seen distributions built in, and many additional distribution functions are available in the standard libraries. However, in specific application areas, you may encounter distribution types that aren't already provided, or you may wish to create a variation on an existing distribution based on a different set of parameters. In these cases, you can create your own User-Defined Distribution Function (UDDF). Once you've created your function, you can use it within your model like you would any other distribution function.

User-defined distribution functions are really just instances of User-Defined Functions (UDFs) that behave in certain special ways. This webinar discusses the various behaviors that a user-defined distribution function should exhibit and various related considerations. Most fundamentally, the defining feature of a UDDF is that it returns a median value when evaluated in Mid mode, but a sample indexed by Run when evaluated in Sample mode. This contrasts with non-distribution functions, whose behavior does not depend on the Mid/Sample evaluation mode. Custom distributions are most often implemented in terms of existing distributions (which includes Inverse CDF methods for implementing distributions), so this property is achieved automatically, since the existing distributions already have it. But in less common cases, UDDFs may treat the two evaluation modes differently.

When you create a UDDF, you may also want to ensure that it works with Random() to generate a single random variate, and supports the Over parameter for generating independent distributions. You may also want to create a companion function for computing the density (or probability for discrete distributions) at a point, which may be useful in a number of contexts including, for example, during importance sampling. I'll show you how these features are obtained.

There are several techniques that are often used to implement distribution functions. The two most common, especially in Analytica, are the Inverse CDF technique and the transformation from existing distributions method. I'll explain and show examples of both of these. The Inverse CDF is particularly convenient in that it supports all sampling methods (Median Latin Hypercube, Random Latin Hypercube, and Monte Carlo).
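
A minimal sketch of the Inverse CDF technique (a symmetric triangular distribution on [0,1], invented for illustration):

    Function Triangular01()
    Definition:
       Var u := Uniform(0, 1);      { 0.5 in Mid mode; a Run-indexed sample in Sample mode }
       If u < 0.5 Then Sqrt(u/2) Else 1 - Sqrt((1 - u)/2)    { the triangular's inverse CDF }
    { Because it is built on Uniform, the function automatically returns the median in Mid
      mode and a proper sample in Sample mode. }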

A recording of this webinar can be viewed at Custom-Distribution-Functions.wmv. The model file created during the webinar is Custom Distribution Functions.ana.


Regular Expressions

Date and Time: Thursday, 9 July 2009, 10:00am - 11:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Analytica 4.2 exposes a new and powerful ability to utilize Perl-compatible regular expressions for text analysis within expressions. This feature has particular applicability for parsing applications when importing data. Long known as the feature that makes Perl and Python popular as data-file processing languages, that same power is now readily available within Analytica's FindInText, SplitText, and TextReplace functions.

This talk only touches on the regular expression language itself (information on which is readily available elsewhere), and instead focuses on the use of these expressions from within Analytica expressions, especially extracting text that matches subpatterns and finding repeated matches.
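
For a flavor of what this looks like (the text value is made up, and the optional re and all parameters are as I recall them from 4.2, so verify against the function documentation):

    Variable Line := 'Alameda County, 1,510,271'
    FindInText('\d[\d,]*$', Line, re: 1)                  { position where the population figure begins }
    TextReplace(Line, ',(?=\d{3})', '', all: 1, re: 1)    { strips the thousands separators }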

One relatively complex example that I plan to work through is the parsing of census population data from datafiles downloaded from the U.S. census web site. The task includes parsing highly variable HTML, as well as multiple CSV files with formatting variations that occur from element to element. These variations, which are typical in many sources of data, demonstrate why the flexibility of regular expressions can be extremely helpful when parsing data files.

Regular expressions are extremely powerful, but when overused they can become very cryptic. So even though it is possible to get carried away with this power, it is good to know when to rein it in.

This talk is appropriate for moderate to advanced level modelers.

A recording of this webinar can be watched at Regular-Expressions.wmv. If you are new to regular expressions, I've included slides on the regular expression patterns that I made use of in this PowerPoint show (these were not shown during the webinar). The model file developed during the webinar is Regular expressions.ana.

Using the Check Attribute to validate inputs and results

Date and Time: Thursday, 17 July 2008 10:00 Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The Check attribute provides a way to validate inputs and computed results. When users of your model are entering data, it can provide immediate feedback when they enter values that are out of range or inconsistent. When applied to computed results, it can help catch inconsistencies, reducing error rates and the accidental introduction of errors later.

In this talk, I'll demonstrate how to define a check validation for a variable, and how to make the Check attribute visible in the Object window. I'll demonstrate how the failed-check alert messages can be customized. And perhaps most interestingly, I'll show how checks can be used in edit tables for cell-by-cell validation, so that out-of-range inputs are flagged with a red background and alert balloons pop up when out-of-range values are entered. Cell-by-cell validation works when certain restrictions on the check expression are followed, which I'll discuss.
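
For example, the Check attribute of a hypothetical input named Percent_complete might simply be:

    Self >= 0 And Self <= 100
    { If a value outside 0..100 is entered, Analytica raises the (customizable) warning. }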

A recording of this webinar can be viewed at Check-Attribute.wmv (Note: There is audio, but screen is black, for first 50 seconds). The model used during this webinar, with the check attributes inserted, is at Check attribute -- car costs.ana.

The Performance Profiler

Date and Time: October 9, 2008 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Requires Analytica Enterprise

When you have a model that takes a long time to compute, thrashes in virtual memory, or uses up available memory, the Performance Profiler can tell you where your model is spending its time and how much memory each variable is consuming to cache results. It is not uncommon to find that even in a very large model, a small number (e.g., 2 to 5) of variables account for the lion's share of time and memory. With this knowledge, you can focus your attention on optimizing the definitions of those few variables. On several occasions I've achieved more than a 100-fold speed-up in computation time on large models using this technique.

The Performance Profiler requires Analytica Enterprise or Optimizer. I'll demonstrate how to use the profiler, with some basic discussion of what it does and does not measure. One neat aspect of the profiler is that you can actually activate it after the fact. In other words, even if you haven't added profiling to your model, if you happen to notice something taking a long time, you can add it in to find out where the time was spent.

Using the Profiler is pretty simple, so I expect this session will be somewhat shorter than usual. The content will be oriented primarily to people who are unfamiliar with the profiler, although I will also try to provide some behind-the-scenes details and can answer questions about it.

You can watch a recording of this webinar at Performance-Profiler.wmv. The model file containing the first few examples from the webinar can be downloaded from Simple Performance Profiler Example.ana.


Organizing Models

Modules and Libraries

Date and Time: 10 Dec 2009 10:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Modules form the basic organizational principle of an Analytica model, allowing models to be structured hierarchically, keeping things simple at every level even in very large complex models. You can use linked modules to store your model across multiple files. This capability enables reuse of libraries and model logic across different models, and allows you to divide your model into separate pieces so that different people can work concurrently on different pieces of the model.

In this talk, I will review many aspects of modules and libraries. We'll see how to use linked modules effectively. I'll cover the distinctions between Modules, Libraries, Models and Forms. I'll demonstrate various considerations when adding modules to existing models -- such as whether you want to import system variables or merge (update) existing objects, and some variations on what is possible there. We'll see how to change modules (or libraries) from being embedded to linked, or vice versa, and how to change the file location for a linked module. When distributing a model consisting of multiple module files, I'll go over directory structure considerations (the relative placement of module files), and also demonstrate how you can store a copy of your model with everything embedded in a single file for easy distribution.

I'll also discuss definition hiding and browse-only locking. By locking individual modules, you can create libraries with hidden and unchangeable logic that can be used in the context of other people's models, keeping your algorithms hidden. Or, you can distribute individual models that are locked as browse only, even in the context of a larger model where the remainder of the model is editable.

I'll talk about using linked modules in the context of a source control system, which is often of interest for projects where multiple people are modifying the same model. I'll also reveal an esoteric feature, the Sys_PreLoadScript attribute, and how this can be used to implement your own licensing and protection of intellectual property.

This webinar is appropriate for all levels of Analytica model builders.

You can watch a recording of this webinar at Linked-Modules.wmv. The starting model used in the webinar can be downloaded from Loan_policy_selection_start.ana, and then you can follow along to introduce and adjust modules as depicted in the recording if you like.

Uncertainty & Probability Topics

Correlated and Multivariate Distributions

Date and Time: Thursday, March 13, 2008 10:00 Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This talk will discuss various techniques within Analytica for defining probability distributions that have specified marginal distributions and are also correlated with other uncertain variables. Techniques include the use of conditional and hierarchical distributions, multivariate distributions, and Iman-Conover rank-correlated distributions.
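
A tiny sketch of the conditional/hierarchical idea (the quantities are hypothetical):

    Variable Market_size := Normal(10M, 2M)               { an uncertain parent quantity }
    Variable Our_sales := Normal(0.2 * Market_size, 1M)   { its mean depends on Market_size, inducing correlation }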

The model created during this session is Correlated distributions.ana. You can watch a recording of the webinar at Correlated-Distributions.wmv.

Assessment of Probability Distributions

Date and Time: March 6, 2008 10:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

When building a quantitative model, we usually need to come up with estimates for many of the parameters and input variables used in the model. Because these are estimates, it is a good idea to encode them as probability distributions, so that our degree of subjective uncertainty is explicit in the model. The process of encoding a distribution to reflect the level of knowledge that you (or the experts you work with) have about the true value of the quantity is referred to as probability (or uncertainty) assessment or probability elicitation.

This webinar will be a highly interactive one, where all attendees are expected to participate in a series of uncertainty assessments as we explore the effects of cognitive biases (such as over-confidence and anchoring), understand what it means to be well-calibrated, and utilize scoring metrics to measure your own degree of calibration. These exercises can help you improve the quality of your distribution assessments, and serve as tools to help you when eliciting estimates of uncertainty from other domain experts.

The Analytica model Probability assessment.ana contains a game of sorts that takes you through several probability assessments and scores your responses. Participants of the webinar played this game by running this model; if you are going to watch the webinar, you will want to do the same. You may want to wait until the appropriate point in the webinar (after the preliminary material has been covered) before starting. You can watch the webinar recording here: Probability-Assessment.wmv. The PowerPoint slides from the talk are here: Assessment_of_distributions.ppt.


Statistical Functions

Date and Time: Thursday, 21 Aug 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This topic was presented in Aug 2007, but not recorded at that time.

A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica. I'll describe several built-in statistical functions such as Mean, SDeviation, GetFract, Pdf, Cdf, and Covariance. I'll demonstrate how all built-in statistical functions can be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run index). I'll discuss how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc.) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.
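
A few representative calls (the data variables are hypothetical):

    Variable X := Normal(100, 15)
    Mean(X)                            { mean of the Monte Carlo sample, i.e. over the Run index }
    GetFract(X, 0.95)                  { the 95th percentile }
    SDeviation(Test_scores, Student)   { the same statistics can run over a data index such as Student }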

In addition, all built-in statistical functions can compute weighted statistics, where each point is assigned a different weight. I'll briefly touch on this feature as a segue into next week's topic, Importance Sampling.

This talk can be viewed at Statistical-Functions.wmv. The model built during this talk is available for download at Intro to Statistical Functions.ana.

Statistical Functions in Analytica 4.0

Date and Time: Thursday, Aug 16, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica 4.0. In Analytica 4.0, all built-in statistical functions can now be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run index), eliminating the need for separate function libraries. I will demonstrate this use, as well as several new statistical functions, e.g., Pdf, Cdf, Covariance. I will explain how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc.) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.

In addition, all statistical functions in Analytica 4.0 can compute weighted statistics, where each point is assigned a different weight. I'll cover the basics of sample weighting, and demonstrate some simple examples of using this for computing a Bayesian posterior and for importance sampling from an extreme distribution.

The Analytica model file that had resulted by the end of the presentation can be downloaded here: User Group Webinar - Statistical Functions.ANA.


The Large Sample Library

Date and Time: Thursday, 18 Feb 2010 10:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The Large Sample Library is an Analytica library that lets you run a Monte Carlo simulation for a large model or a large sample size that might otherwise exhaust computer memory, including virtual memory. It breaks up a large sample into a series of batch samples, each small enough to run in memory. For selected variables, known as the Large Sample Variables or LSVs, it accumulates the batches into a large sample. You can then view the probability distributions for each LSV using the standard methods — confidence bands, PDF, CDF, etc. — with the full precision of the large sample.

Memory is saved by not storing results for non-LSVs.

This presentation introduces this library and how to use it.

You can watch a recording of this webinar at Large-Sample-Library.wmv. The Large Sample library can be downloaded for use in your own models from the Large Sample Library: User Guide page. The two example models used during this webinar were: Enterprise model3.ana and Simple example for Large Sample Library.ana.

Sensitivity Analysis Topics

Tornado Charts

Date and Time: Thursday, 20 Mar 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems


Abstract

Tornado plot.png

A tornado chart depicts the result of a local sensitivity analysis, showing how much a computed result would change if each input were varied one at a time, with all other inputs held at their baseline values. The result is usually plotted with horizontal bars, sorted with the larger bars on top, resulting in a graph resembling the shape of a tornado, hence the name. There are numerous variations on tornado charts, resulting from different ways of varying the inputs and, in some cases, different metrics graphed.
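
One common way to compute the bar endpoints is with WhatIf, which evaluates a result with one input temporarily overridden (Profit and Unit_price here are hypothetical):

    Variable Low_case := WhatIf(Profit, Unit_price, Unit_price * 0.9)    { the result with that input 10% low }
    Variable High_case := WhatIf(Profit, Unit_price, Unit_price * 1.1)   { ... and 10% high }
    { Repeating this for each input gives the bar lengths that are sorted and plotted as the tornado. }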

This talk will walk through the steps of setting up a tornado chart and explore different ways of varying the inputs. We'll also explore some more complex issues that can arise when some inputs are arrays.

The model used during this talk is here: Tornado Charts.ana (the material for the talk is in the Tornado Analysis module). You can watch a recording of this webinar at Tornado-Charts.wmv.

Advanced Tornado Charts -- when inputs are Array-Valued

Date and Time: Thursday, April 17, 2008 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The webinar of 20-Mar-2008 (Tornado-Charts.wmv, see webinar archives) went through the fundamentals of setting up a local sensitivity analysis and plotting the results in the form of a tornado chart. That webinar also discussed the many variations of tornado analyses (or more generally, local sensitivity analyses) that are possible.

This talk builds on those foundations by going a step further and addressing tornado analyses when some of the input variables are array-valued. The presence of array-valued inputs introduces many additional possible variations of analyses, as well as many modeling complications. For example, a local sensitivity analysis varies one input at a time, but that could mean you vary each input variable (as a whole) at a time, or it could mean that you vary each cell of each input array individually. Either is possible, each resulting in a different analysis. Some of these variations compute the correct result automatically through the magic of array abstraction, once you've set up the basic tornado analysis that we covered in the first talk, while others require quite a bit of additional modeling effort. However, even the ones that produce the correct result can often be made more efficient, particularly when the indexes differ across input variables.

When we do opt to vary input arrays one cell at a time, the display of the results may be dramatically affected. Although we can keep the results in array form, the customary tornado chart requires us to flatten the multi-D arrays and label each bar on the chart with a cell coordinate.


A recording of this webinar can be viewed at Tornados-With-Arrays.wmv. This webinar made use of the following models: Sales Effectiveness Model with tornado.ana, Biotech R&D Portfolio with Tornado.ana, Sensitivity Analysis Library.ana, and Sensitivity Functions Examples.ana. See The Sensitivity Analysis Library for more information on how to use Sensitivity Analysis Library.ana in your own models.

Financial Analysis

Internal Rate of Return (IRR) and Modified Internal Rate of Return (MIRR)

Date and Time: 18 Dec 2008, 10:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This is Part 3 of a multi-part webinar series where we have been covering the modeling and evaluation of cash flows over time in an interactive exercise-based webinar format, where concepts are introduced in the form of modeling exercises, and participants are asked to complete the exercises in Analytica during the webinar. Part 3 covers Internal Rate of Return (IRR) and Modified Internal Rate of Return (MIRR), and includes seven modeling exercises.

To speed the presentation up, I am providing the exercises in advance: NPV_and_IRR.ppt. I urge you to take a shot at completing them before the webinar begins, and we'll advance through the exercises more rapidly so as to complete the topic material within the hour. By attempting the exercises in advance, you'll have a good opportunity to compare your solutions to mine, and to ask questions about things you got stuck on.

A dollar received today is not worth the same as a dollar received next year. Taking this time-value of money (or more generally, time-value of utility) into account is very important when comparing cash flows over time that result from long-term capital budgeting decisions. Net Present Value (NPV) and Internal Rate of Return (IRR) are the two most commonly used metrics for examining the effective value of an investment's cash flow over time. Both concepts are pervasive in decision-analytic models.

This webinar will be highly interactive. Fire up an instance of Analytica as you join. As I introduce each concept, I'll provide you with cash flow scenarios, and give you a chance to compute the result yourself using Analytica. This talk is intended for people who are not already well-versed in NPV and IRR, or for people who already have a good background with those concepts but are new to Analytica and thus can learn from the interactive practice of addressing these exercises during the talk.

See also the materials from Parts 1 and 2 (Net Present Value, 20 Nov 2008 and 4 Dec 2008) elsewhere on this page. This session begins with the model Cash Flow Metrics 2.ana, and ends with Cash Flow Metrics 3.ana. You can watch a recording of this webinar at IRR.wmv.


Bond Portfolio Analysis

Date and Time: 11 Dec 2008, 10:00am Pacific Standard Time

Presenter: Rob Brown, Incite! Decision Technologies

Abstract

I demonstrate how to value a bond portfolio in which bonds are bought and sold at an uncertain frequency. The demonstration shows how Intelligent Arrays and related functions can greatly simplify multi-dimensional calculations that would typically require multiple interconnected sheets in a spreadsheet or nested do-loops in a procedural language.

You can watch a recording of this webinar at Bond-Portfolio-Analysis.wmv. The model underlying the presentation is Bond Portfolio Valuation.ana, and the PowerPoint slides are at Bond Portfolio Valuation.ppt.

Net Present Value (NPV)

Date and Time: Part I : Thursday, 20 Nov 2008, 10:00am Pacific Standard Time

Part II : Thursday, 4 Dec 2008, 10:00am Pacific Standard Time

(Parts 1 & 2 cover NPV -- part 3, listed now separately, covers IRR)

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

A dollar received today is not worth the same as a dollar received next year. Taking this time-value of money (or more generally, time-value of utility) into account is very important when comparing cash flows over time that result from long-term capital budgeting decisions. Net Present Value (NPV) and Internal Rate of Return (IRR) are the two most commonly used metrics for examining the effective value of an investment's cash flow over time. Both concepts are pervasive in decision-analytic models.
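
For reference, a minimal sketch of the two built-in functions on a hypothetical cash flow:

    Index Year := 2009 .. 2013
    Variable Cash_flow := Array(Year, [-1000, 300, 300, 300, 300])
    Variable Net_PV := Npv(8%, Cash_flow, Year)      { discounts each year's flow back to the first Year }
    Variable Rate_of_return := Irr(Cash_flow, Year)  { the discount rate at which the NPV is zero }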

This multi-part webinar provides an introduction to the concepts of present value, discount rate, NPV and IRR. We'll discuss the interpretation of discount rate, and we'll get practice computing these metrics in Analytica. We'll examine the pitfalls of each metric, and we'll examine the interplay of each metric with explicitly modelled uncertainty (including the concepts of Expected NPV (ENPV) and Expected IRR (EIRR)).

This webinar will be highly interactive. Fire up an instance of Analytica as you join. As I introduce each concept, I'll provide you with cash flow scenarios, and give you a chance to compute the result yourself using Analytica. This talk is intended for people who are not already well-versed in NPV and IRR, or for people who already have a good background with those concepts but are new to Analytica and thus can learn from the interactive practice of addressing these exercises during the talk.

I have assembled quite a bit of material, which I believe will fill two webinar sessions. Part 1 will focus mostly on present value, NPV, discount rate, and the use of NPV with uncertainty. Part 2 will focus mostly on IRR, several "gotchas" with IRR, and MIRR.

Materials:

Note: Part 1 covered 5 exercises, covering present value, discount rate, modeling certain cash flows, computing NPV, and graphing the NPV curve. Part 2 added exercises 6-9, covering cash flows at non-uniformly-spaced time periods, valuing bonds and treasury notes, cash flows with uncertainty, and using the CAPM to find the investor-implied corporate discount rate.

The "class" will continue with Part 3 beginning with Internal Rate of Return.


Data Analysis Techniques

Statistical Functions

Date and Time: Thursday, May 22, 2008 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica 4.0. In Analytica 4.0, all built-in statistical functions can now be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run index), eliminating the need for separate function libraries. I will demonstrate this use, as well as several new statistical functions, e.g., Pdf, Cdf, Covariance. I will explain how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc.) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.

In addition, all statistical functions in Analytica 4.0 can compute weighted statistics, where each point is assigned a different weight. I'll cover the basics of sample weighting, and demonstrate some simple examples of using this for computing a Bayesian posterior and for importance sampling from an extreme distribution.

This talk is appropriate for moderate to advanced users.

A recording of this webinar can be watched at Statistical-Functions.wmv. The model created during this webinar is at Statistical Functions.ana.


Principal Component Analysis (PCA)

Date and Time: 15 Jan 2009, 10:00am Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Principal component analysis (PCA) is a widely used data analysis technique for dimensionality reduction and identification of underlying common factors. This webinar will provide a gentle introduction to PCA and demonstrate how to compute principal components within Analytica. The talk is intended to be at an introductory level; no prior experience with PCA (or even knowledge of what it is) is assumed.
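
A common first step is to build the covariance matrix between the variables of interest and then take its leading eigenvectors as the principal components. Here is a rough sketch of the covariance step with hypothetical names and placeholder data; I'm assuming Covariance accepts a data index, per the Statistical Functions talk above, and the eigendecomposition step (EigenDecomp, if I recall the function name correctly) appears in the linked model.

 Index Stock := ['AAPL', 'MSFT', 'GE']
 Index Stock2 := CopyIndex(Stock)                 { a copy of Stock, for the second matrix dimension }
 Index Day := 1..500                              { historical observations }
 Variable Price_change := Sin(Day/20 + @Stock)    { placeholder; the real model uses 2 years of daily returns }
 Variable Cov_matrix := Covariance(Price_change, Price_change[Stock = Stock2], Day)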

The model developed during this talk, in which the principal components were computed for 17 publicly traded stocks based on the previous two years of price-change data, is Principle Component Analysis.ana. A recording of this webinar can be viewed at PCA.wmv.

Variable Stiffness Cubic Splines

Date and Time: 2 October 2008, 10:00am Pacific Daylight Time

Presenter: Brian Parsonnet, ICE Energy

Abstract

The Variable Stiffness Cubic Spline is a highly robust data smoothing and interpolation technique. A stiffness parameter adjusts the variability of the curve. At the extreme of minimal stiffness, the curve approaches a cubic spline (like CubicInterp) that passes through all data points, while at the other extreme of maximal stiffness, the spline curve becomes the best-fit line. Weight parameters can be used to constrain the curve to include selected points, while smoothing over others. The first, second and third derivatives all exist and are readily available.

I'll introduce and demonstrate User-Defined Functions that compute the variable stiffness cubic spline and interpolate to new points. I'll also show how these curves can be used to detect or eliminate anomalies in data.
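
For comparison with the minimal-stiffness limit described above, here is plain cubic interpolation using the built-in CubicInterp, with hypothetical data; the (d, r, x, i) parameter order follows the standard interpolation functions as I recall them.

 Index Pt := 1..6
 Variable X_data := Table(Pt)(0, 1, 2, 3, 4, 5)
 Variable Y_data := Table(Pt)(0, 0.8, 0.9, 0.1, -0.7, -1.0)
 Variable X_new := Sequence(0, 5, 0.1)
 Variable Y_curve := CubicInterp(X_data, Y_data, X_new, Pt)   { a smooth curve passing through every data point }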

You can watch a recording of this webinar at Variable-Stiffness-Cubic-Splines.wmv. The model and library with the vscs functions will be posted here within a few weeks.

Using Regression

Date and Time: Thursday, May 1, 2008 at 10:00 - 11:00 Pacific Daylight Time

Date and Time: Thursday, Aug 30, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Regression analysis is a statistical technique for curve fitting, discovering relationships in data, and testing hypotheses about relationships between variables. In this webinar, I will focus on generalized linear regression, which is provided by Analytica's Regression function, and examine many ways in which it can be used, including fitting simple lines to data, polynomial regression, use of other non-linear terms, and fitting of autoregressive models (e.g., ARMA). I'll examine how we can assess how likely it is that the data were generated by the particular form of regression model used. We can also determine the level of uncertainty in our inferred parameter values, and incorporate these uncertainties into a model that uses the result of the regression. The talk will cover the Analytica 4.0 functions Regression, RegressionDist, RegressionFitProb, and RegressionNoise.
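
As a minimal sketch of the polynomial-fit case with hypothetical names: Regression(y, b, i, k) takes the dependent data y, a basis array b indexed by the data index i and the basis index k, and returns one coefficient per basis term (this is my recollection of the signature).

 Index Obs := 1..100                               { data points }
 Index K := 0..2                                   { basis index: constant, linear, quadratic terms }
 Variable X := Obs/10
 Variable Y := 3 + 2*X - 0.5*X^2 + 0.2*Sin(20*X)   { quadratic data with a small wiggle }
 Variable Basis := X^K                             { basis functions evaluated at each data point }
 Variable Coef := Regression(Y, Basis, Obs, K)     { fitted coefficients, indexed by K }
 Variable Y_fit := Sum(Coef * Basis, K)            { the fitted curve }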

You can watch a recording of the 1 May 2008 webinar here: Regression.wmv. The model developed during that webinar is here: Using Regression.ana

Logistic Regression

Date and Time: Thursday, 5 June 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

(Features covered in this webinar require Analytica Optimizer)

Logistic regression is a technique for fitting a model to historical data to predict the probability of an event from a set of independent variables. In this talk, I'll introduce the concept of logistic regression, explain how it differs from standard linear regression, and demonstrate how to fit a logistic regression model to data in Analytica. Probit regression is, for all practical purposes, the same idea as logistic regression, differing only in the specific functional form of the model. Poisson regression is also similar, except that it is appropriate when predicting a probability distribution over a dependent variable that represents integer "counts". All are examples of generalized linear models, and after reviewing these forms, it should be clear how other generalized linear models can be handled within Analytica.
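
Concretely, the logistic model predicts the event probability by passing a linear combination of the independent variables through the logistic function; in Analytica-style notation, with hypothetical names (B holds the coefficients and X the predictors, both indexed by K):

 Variable P_event := 1 / (1 + Exp(-Sum(B * X, K)))   { probability of the event given predictors X }

The fit itself chooses B to maximize the likelihood of the observed outcomes, which is where the Optimizer comes in.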

This topic is appropriate for advanced modelers. I will assume familiarity with regression (see the earlier talk on the topic), but will not assume a previous knowledge of logistic regression.

You can watch a recording of this webinar at: Logistic-Regression.wmv. The model developed during this webinar can be downloaded from Logistic_regression_example.ana. You'll also need the file BreastCancer.data.


Bayesian Techniques

Bayesian Posteriors using Importance Sampling

Date and Time: Thursday, September 4, 2008 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Several algorithms for computing Bayesian posterior probabilities are special cases of importance sampling. The previous week's webinar, Importance Sampling (rare events), introduced importance sampling, covering the theory behind it, how it is applied, and how Analytica's sample weighting feature can be used for importance sampling. This webinar continues with importance sampling, this time exploring how it can be used (at least in some cases) to compute Bayesian posterior probabilities.

I'll provide an introduction to what Bayesian posterior probabilities are, describe a couple of importance-sampling-based approaches to computing them, and implement a few examples in Analytica. Importance sampling techniques for computing posteriors have limited applicability -- in some cases they work well, in others they do not. I'll try to characterize what those conditions are.
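
The core idea in one line: sample the uncertain parameter from its prior, weight each sample point by the likelihood of the observed data given that point, and statistics over the weighted sample then approximate the posterior. A minimal sketch with hypothetical names follows; I'm assuming the weight is supplied through Analytica's sample-weighting mechanism introduced the previous week.

 Variable P_success := Uniform(0, 1)                          { prior over an unknown probability }
 Variable Obs_likelihood := P_success^7 * (1 - P_success)^3   { likelihood of observing 7 successes in 10 trials }
 { Point the sample-weighting variable at Obs_likelihood; Mean(P_success) then estimates the posterior mean. }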

You can watch a recording of this webinar at Posteriors_using_IS.wmv. About two-thirds through the presentation, we noticed a result that seemed to be coming out incorrectly. I explain what the problem was and fix it in Posteriors_using_IS_addendum.wmv. The models used during this presentation can be downloaded from Posterior sprinklers.ana and Likelihood weighting.ana.

Importance Sampling (Rare events)

Date and Time: Thursday, 28 Aug 2008, 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Importance sampling is a technique that simulates a target probability distribution of interest by sampling from a different sampling distribution and then re-weighting the sampled points so that computed statistics match those of the target distribution. The technique is applicable when the target distribution is difficult to sample from directly, but its probability density function is readily available. The technique produces valid results in the large sample size limit for any choice of sampling distribution (provided the target distribution is absolutely continuous with respect to it, i.e., the sampling distribution covers the target's support), but best results (i.e., fastest convergence with smaller sample sizes) are obtained when a good sampling distribution is used. The technique is commonly used for rare-event sampling, where you want to ensure greater sampling coverage in the tails of distributions, where few samples would occur with standard Monte Carlo sampling. It also has applicability to the computation of Bayesian posteriors and to sampling from complex distributions.

In this talk we cover the theory behind importance sampling and introduce the sample weighting mechanism that is built into Analytica. We develop a rare-event model to demonstrate how the weighting mechanism is used to achieve the importance sampling. Next week we'll continue with an example of computing a Bayesian posterior probability.
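
In symbols, the re-weighting rests on the identity

 E_p[ f(X) ] = E_q[ f(X) * p(X) / q(X) ]

so each point x drawn from the sampling distribution q receives weight w = p(x) / q(x), where p is the target density; statistics computed with these weights converge to those of the target distribution as the sample size grows.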

A recording of this webinar can be viewed at Importance-Sampling.wmv. The model developed during this talk can be downloaded from: Importance Sampling rare events.ana.

Presenting Models to Others

Guidelines for Model Transparency

Date and Time: 19 Feb 2009, 10:00am Pacific Standard Time

Presenter: Max Henrion, Lumina Decision Systems

Abstract

What makes Analytica models easy for others to use and understand? I will review some example models that illustrate ways to improve transparency -- or opacity. Feel free to send me your candidates ahead of time! We'll review some proposed guidelines. I hope to stimulate a discussion about what you think works well or not, and enlist your help in refining these guidelines.

You can watch a recording of this webinar at Transparency-Guidelines.wmv.

Creating Control Panels

Date and Time: Thursday, May 29, 2008 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

It is quite easy to put together "control panels" or "forms" for your Analytica models by creating input and output nodes for the inputs and outputs of interest to your model end users. This webinar will cover the basic steps involved in creating and arranging these forms, along with some tricks for making the process efficient. We'll cover the different types of input and output controls that are currently available, the use of text nodes to create visual groupings, use of images and icons, and the alignment commands that make the process very rapid. We'll learn how to change colors, and look at the use of buttons very briefly. This talk is appropriate for beginning Analytica users.

A recording of this webinar can be viewed at Control-Panels.wmv (requires Windows Media Player). The model used during this webinar is at Building Control Panels.ana.

Sneak preview of Analytica Web Publisher

Date and Time: Thursday, February 21, 2008, 10:00 - 11:00 Pacific Standard Time

Presenter: Max Henrion, Lumina Decision Systems

Abstract

In this week's webinar, Max Henrion, Lumina's CEO, will provide a sneak preview of the Analytica Web Publisher. AWP offers a way to make Analytica models easily accessible to anyone with a web browser. Users can open a model, view diagrams and objects, change input variables, and view results as tables and graphs. Users will also be able to save changed models, to revisit them in later sessions. Model builders can upload models into AWP directly from their desktop. Usually, AWP directories are password protected, so only authorized users can view and use models. But, we also plan to make a free AWP directory available for people who want to share their models openly.

AWP is nearing release for alpha testing. We welcome your comments and would like to hear how you envisage using AWP.

This webinar was not recorded.

Application Integration Topics

OLE Linking

Date and Time: Thursday, 27 Mar 2008 10:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract:

OLE linking is a commonly used method for linking data from Excel spreadsheets into Analytica and results from Analytica into Excel spreadsheets. It can also be used with other applications that support OLE linking. The basic usage of OLE linking is pretty simple -- it is a lot like copy and paste. This webinar covers the basics of OLE linking of fixed-size 1-D or 2-D tables. I also demonstrate the basic tricks you must go through to link index values and multi-D inputs and outputs. In addition, we discuss what some of those OLE-link settings actually do, and explain how OLE-connected applications connect to their data sources.

A recording of this webinar can be viewed at 2008-03-27-OLE-Linking.wmv.


Querying an OLAP server

Date and Time: Thursday, February 14, 2008, 10:00 - 11:00 Pacific Standard Time
(Note: Schedule change from an earlier posting. This is now back to the usual Thursday time.)

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

In this session, I'll show how the MdxQuery function can be used to extract multi-dimensional arrays from an On-Line Analytical Processing (OLAP) server. In particular, during this talk we'll query Microsoft Analysis Services using MDX. I'll introduce some basics regarding OLAP and Analysis Services, discuss the differences between multi-dimensional arrays in OLAP and Analytica, cover the basics of the MDX query language, show how to form a connection string for MdxQuery, and import data. I'll also show how hierarchical dimensions can be handled once you get your data into Analytica.

Note: Use of the features demonstrated in this webinar requires the Analytica Enterprise or Optimizer edition, or the Analytica Power Player. They are also available in ADE.
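
As a rough flavor of the pattern shown in the recording, here is a hypothetical example; the connection-string contents, the cube and measure names, and the exact parameter order of MdxQuery are assumptions on my part.

 Variable Olap_conn := 'Provider=MSOLAP; Data Source=myServer; Initial Catalog=MyWarehouse'
 Variable Sales_by_year := MdxQuery(Olap_conn,
     'SELECT { [Measures].[Sales Amount] } ON COLUMNS, ' &
     '[Date].[Calendar Year].Members ON ROWS FROM [Sales]')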

The model created during this webinar is available here: Using MdxQuery.ana. You can watch a recording of this webinar here: MdxQuery.wmv (requires Windows Media Player)

Querying an ODBC relational database

Date and Time: Thursday, February 7, 2008, 10:00 - 11:00 Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

In this talk I'll review the basics of querying an external relational ODBC database using DbQuery. This provides a flexible way to bring in data from SQL Server, Access, Oracle, and MySQL databases, and can also be used to read CSV-text databases and even Excel. I will cover how to configure and specify the data source, the rudimentary basics of SQL, and the use of Analytica's DbQuery, DbWrite, DbLabels and DbTable functions.

Note: Use of the features demonstrated in this webinar requires the Analytica Enterprise or Optimizer edition, or the Analytica Power Player. They are also available in ADE.
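
A minimal sketch of the usual idiom, with a hypothetical DSN, table, and column names: define an index whose definition is the DbQuery call, then read individual columns of the result set with DbTable.

 Index Row := DbQuery('DSN=SalesDb', 'SELECT Region, Revenue FROM Sales')
 Variable Region := DbTable(Row, 'Region')     { one column of the result set, indexed by Row }
 Variable Revenue := DbTable(Row, 'Revenue')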

You can grab the model created during this webinar from here: Querying an ODBC relational database.ana. A recording of this webinar can be viewed at Using-ODBC-Queries.wmv.


Calling External Applications

Date and Time: Thursday, Oct 18, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The RunConsoleProcess function runs an external program, can exchange data with that program, and can be used to perform a computation or acquire data outside of Analytica that can then be used within the model. I'll demonstrate how this can be used with a handful of programs, and with code written in several programming and scripting languages. I'll demonstrate a user-defined function that retrieves historical stock data from a web site.

You can watch a recording of this webinar at: Calling-External-Applications.wmv (Requires Windows Media Player)

Files created or used during this webinar can be downloaded:

The example of retrieving stock data from Yahoo Finance is also detailed in an article here: Retrieving Content From the Web

New Functions for Reading Directly from an Excel File

Date and Time: Thursday, 24 April 2008 10:00 Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

(Feature covered requires Analytica Enterprise or better)

Hidden within the new release of Analytica 4.1 are three new functions for reading values directly from Excel spreadsheets: OpenExcelFile, WorksheetCell, and WorksheetRange. These provide an alternative to OLE linking and ODBC for reading data from spreadsheets, and may be more convenient, flexible and reliable in many situations. We have not yet exposed these functions on the Definitions menu or in the User Guide for release 4.1, since they are still at an experimental stage. I would like to know that they have been "beta-tested" in a variety of scenarios before we fully expose them (also, the symmetric functions for writing don't exist yet). In this webinar, I will introduce and demonstrate these functions, after which you can start using them on your own problems.
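
A minimal sketch of how the first two functions fit together, using the SolvSamp.xls workbook mentioned below; the sheet name is hypothetical, and the (workbook, sheet, column, row) parameter order is my best recollection of the experimental 4.1 signatures.

 Variable Wb := OpenExcelFile('SolvSamp.xls')               { open the workbook }
 Variable One_cell := WorksheetCell(Wb, 'Sheet1', 'B', 3)   { read the single cell B3 }
 { WorksheetRange reads a whole rectangular or named range in one call; see the linked model for its parameters. }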


The model created during this talk is here: Image:Functions for Reading Excel Worksheets.ana. It reads from the example workbook that comes with Office 2003, to which we added a few range names during the talk, resulting in SolvSamp.xls. Place the Excel file in the same directory as the model. A recording of this webinar can be viewed at Reading-From-Excel.wmv.


Reading Data from URLs to a Model

Date and Time: Thursday, 27 Aug 2009, 10:00am-11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

(Requires Analytica Enterprise)

The new built-in function, ReadFromUrl, can be used to read data (and images) from websites, including HTTP web pages, FTP sites, and even SOAP web services. In this webinar, I'll demonstrate the use of this function in several ways, including reading live stock and stock option price data, posting data to a web form, retrieving a text file from an FTP site, supplying user and password credentials for a web site or FTP service, downloading and displaying images including customized map and terrain images, and querying a SOAP web service.
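
The simplest case reads a page into a text value, which you then parse into arrays; a minimal sketch with a placeholder URL:

 Variable Page_text := ReadFromUrl('http://www.example.com/quotes.csv')
 { Split the returned text on line breaks and commas to turn it into an Analytica array. }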

You can watch a recording of this webinar at ReadFromUrl.wmv. The model with the examples shown during the webinar is at Reading_Data_From_the_Web.ana.

Using the Analytica Decision Engine (ADE) from ASP.NET

Date and Time: Thursday, April 10, 2008 10:00am Pacific Daylight Time

Presenter: Fred Brunton, Lumina Decision Systems

The Analytica Decision Engine (ADE) allows you to use a model developed in Analytica as a computational back-end engine for a custom application. In this webinar, we'll create a simple active web server application using ASP.NET that sends inputs submitted by a user to ADE, and displays results computed by ADE on a custom web page. In doing this, you will get a flavor of how ADE works and how you program with it. If you've never created an active server page, you may enjoy seeing how that is done as well. This introductory session is oriented more towards people who do not have experience using ADE, so that you can learn by example a bit more about what ADE is and where it is appropriate.

You can watch a recording of this webinar at ASP-from-ASPNET.wmv. To download the program files that were created during this webinar, click here.

Optimization

Introduction to Linear and Quadratic Programming

Date and Time: Thursday, Oct 11, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This talk is an introduction to linear programming and quadratic programming, and to solving LPs and QPs from inside an Analytica model (via Analytica Optimizer). LPs and QPs can be efficiently encoded using the Analytica Optimizer functions LpDefine and QpDefine. I'll introduce what a linear program is for the sake of those who are not already familiar, and examine some example problems that fit into this formalism. We'll encode a few in Analytica and compute optimal solutions. Although LPs and QPs are special cases of non-linear programs (NLPs), they are much more efficient and reliable to solve, avoid many of the complications present in non-linear optimization, and fully array abstract. Many problems that initially appear to be non-linear can often be reformulated as an LP or QP. We'll also see how to compute secondary solutions such as dual values (slack variables and reduced prices) and coefficient sensitivities. Finally, LpFindIIS can be useful for debugging an LP to isolate why there are no feasible solutions.
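
For readers new to the formalism, one standard statement of a linear program is

 maximize   c · x     subject to   A · x ≤ b,   x ≥ 0

where x is the array of decision variables; a quadratic program is the same except that the objective may also include a quadratic term. These coefficient arrays and bounds are essentially the ingredients that LpDefine and QpDefine ask for.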

You can watch a recording of this webinar here: LP-QP-Optimization.wmv (requires Windows Media Player)

The model file created during this webinar is here: LP QP Optimization.ana

Non-Linear Optimization

Date and Time: Thursday, Oct 4, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This talk focuses on the problem of maximizing or minimizing an objective criterion in the presence of constraints. This problem is referred to as a non-linear program, and the capability to solve problems of this form is provided by the Analytica Optimizer via the NlpDefine function. In this talk, I'll introduce the use of NlpDefine for those who have not previously used this function, and demonstrate how NLPs are structured within Analytica models. I'll examine various challenges inherent in non-linear optimization, along with tricks for diagnosing them and some ways to address them. We'll also examine various ways to structure models for parametric analyses (e.g., array abstraction over optimization problems), and optimization in the presence of uncertainty.

You can watch a recording of this session here: Nonlinear-Optimization.wmv

During the talk, these two models were created:


Vertical Applications and Case Studies

Automated Monitoring and Failure Detection

Date and Time: 5 Feb 2009, 10:00am Pacific Standard Time

Presenter: Brian Parsonnet, ICE Energy

Abstract

In many complex physical systems, the automatic and proactive detection of system failures can be highly beneficial. Often dozens of sensor readings are collected over time, and a computer analyzes these to detect when system behavior is deviating from normal. Sounding an alert can then facilitate early intervention, perhaps catching a component that is just starting to go bad.

In a complex physical system with multiple operating modes, placed in a changing environment, anomaly detection is a very difficult problem. Simple sensor thresholds (and other related approaches) lack context-dependence, which often makes them insufficient for the task. What is normal for any given sensor depends on the system's operating mode, time of day, activities in progress, and environmental factors. Simple thresholds that don't take such context into account either end up being so loose that they miss legitimate anomalies, or so tight that too many excess alarms are generated during normal conditions.

In this webinar, I'll show an expert system I've developed in Analytica that detects anomalies and developing failures in our deployed cooling system products. Data from dozens of sensors is collected at 5-minute intervals, while the system transitions through multiple operating modes, daily and seasonal environmental fluctuations, and changing system demands. The Analytica model provides a framework in which complex rules that take multiple factors into account can be expressed and used to estimate acceptable upper and lower operating ranges, dynamically adjusted at each moment in time to take into account whatever context is available. The Analytica environment presents a very readable and understandable language for expressing monitoring rules, and the overall transparency enables us to spot where other rules are needed and what they need to be.

The graph illustrates how the upper and lower bounds on the operating range are adjusted to context. Actual sensor data is shown in green; the red and blue lines show the computed bounds on the acceptable operating range at each point in time.

A recording of this webinar can be watched at Failure-Detection.wmv.

Data Center Capacity Planning

Please note that this presentation will be on Wednesday rather than Thursday this week.

Date and Time: Wednesday, October 21, 2008 10:00am Pacific Daylight Time

Presenter: Max Henrion, Lumina Decision Systems

Abstract

Data center energy demands are on the rise, creating serious financial as well as infrastructural challenges for data center operators. In 2006, data centers were responsible for a costly 1.5 percent of total U.S. electricity consumption, and national energy consumption by data centers is expected to nearly double by 2011. For data center operators, this means that many data centers are reaching the limits of power capacity for which they were originally designed. In fact, Gartner predicts that 50 percent of data centers will discover they have insufficient power and cooling capacity in 2008.

This week's presentation will provide an overview of ADCAPT -- the Analytica Data Center Capacity Planning Tool. For this webinar, the User Group will be joining a presentation that is also being given outside of the Analytica User Group, but one that I (Lonnie) think is of interest to the User Group community, in that it shows an example of a re-usable Analytica model, containing several very interesting and novel techniques, applied to a very interesting application area.

Due to technical difficulties, this webinar was not recorded.

Modeling the Precision Strike Process

Date and Time: Thursday, October 16, 2008, 10:00am Pacific Daylight Time

Presenter: Henry Neimeier, MITRE

Abstract

We describe a new paradigm for modeling, and apply it to a simple view of the precision strike attack process against mobile targets. The new modeling paradigm employs analytic approximation techniques that allow rapid model development and execution. These also provide a simple dynamic analytic risk evaluation capability for the first time. The beta distribution is used to summarize a broad range of target dwell and execution time scenarios in compact form. The data processing and command and control processes are modeled as analytic queues.

You can watch a recording of this webinar at: Precision-Strike-Process.wmv. Several related papers and materials are also available, including:

Modeling Utility Tariffs in Analytica

Presenter: Brian Parsonnet, Ice Energy

Date and Time: Thursday, Nov 8, 2007 at 10:00 - 11:00am Pacific Standard Time

Modeling utility tariffs is a tedious and complicated task. There is no standard approach to how a utility tariff is constructed, and there are thousands of tariffs in the U.S. alone. Ice Energy has made numerous passes at finding a “simple” approach to enable tariff vs. product analysis, including writing VB applications, building involved Excel spreadsheets, using third-party tools, and outsourcing projects to consultants. The difficulty stems from the fact that there is little common structure to tariffs, and efforts to standardize on what structure does exist are confounded by an endless list of exceptions. But using the relatively simple features of Analytica we have created a truly generic model that allows a tariff to be defined and integrated in just a few minutes. The technique is not fancy by Analytica standards, so this in essence demonstrates how Analytica's novel modeling concept can tackle tough problems.

You can watch a recording of this webinar at: 2007-11-08-Tariff-Modeling (Requires Windows Media Player)


Modeling Energy Efficiency in Large Data Centers

Date and Time: Thursday, Oct 25, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Surya Swamy, Lumina Decision Systems

Abstract

The U.S. data center industry is witnessing a tremendous growth period stimulated by increasing demand for data processing and storage. This growth has a number of important implications, including increased energy costs for business and government, increased emissions from electricity generation, increased strain on the power grid, and rising capital costs for data center capacity expansion. This webinar will illustrate how Analytica's dynamic modeling capabilities, coupled with its advanced treatment of uncertainty, support building cost models for the planning and development of energy-efficient data centers. The model enables users to explore future technologies whose performance, costs, and efficiencies are uncertain and hence must be evaluated probabilistically over time.

You can watch a recording of this presentation at: Data-Center-Model.wmv (Requires Windows Media Player)

Graphing

Creating Scatter Plots

Date and Time: Thursday, May 15, 2008 at 10:00 - 11:00am Pacific Daylight Time

Date and Time: Thursday, Aug 23, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

This webinar focuses on utilizing graphing functionality new to Analytica 4.0, and specifically, functionality enabling the creative use of scatter plots. The talk will focus primarily on techniques for simultaneously displaying many quantities on a single 2-D graph. I'll discuss several methods in which multiple data sources (i.e., variable results) can be brought together for display in a single graph, including the use of result comparison, comparison indexes, and external variables. I'll describe the basic new graphing-role / filler-dimension structure for advanced graphing in Analytica 4.0, enabling multiple dimensions to be displayed on the horizontal and vertical axes, or as symbol shape, color, or symbol size, and how all these can be rapidly pivoted to quickly explore the underlying data. I'll discuss how graph settings adapt to changes in pivot or result view (such as Mean, Pdf, Sample views).

A recording of this webinar can be viewed at Scatter-Plots.wmv.

Model used: During this webinar, I started with some example data in the model Chemical elements.ana. The original file is in the form before graph settings were changed. By the end of the webinar, many graph settings had been altered, and various changes made, resulting in Scatter-Plots.ana (during the Aug 23 presentation, this was the final model: Chemical elements2.ana).

Graph Style Templates

Date and Time: Thursday, February 28, 2008, 10:00 - 11:00 Pacific Standard Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

Graph style templates provide a convenient and versatile way to bundle graph setup options so that they can be reused when viewing other result graphs. For example, if you've discovered a set of colors and fonts and a layout that creates the perfect pizzazz for your results, you can bundle that into a template which you can quickly select for any graph. In this talk, I'll introduce how templates can be used and how you can create and re-use your own. I'll show the basics of using existing templates, previewing what templates will look like, and applying a given template to a single result or to your entire model. We'll also see how to create your own templates, and in the process I'll discuss what settings can be controlled from within a template. I'll discuss how graph setup options are a combination of global settings, template settings, and graph-specific overrides. I'll show how to place templates into libraries (thus allowing you to have template libraries that can be readily re-used in different models), and even show how to control a few settings using templates that aren't selectable from the Graph Setup UI. I'll also touch on how different graph settings are associated with different aspects of a graph, ultimately determining how the graph adapts to changes in uncertainty view or pivots.

The model created during this webinar is here: Graph style templates.ana. You can watch a recording of the webinar here: Graph-Style-Templates.wmv.


Scripting

Button Scripting

Date and Time: Thursday, Sept. 6, 2007 at 10:00 - 11:00am Pacific Daylight Time

Presenter: Max Henrion, Lumina Decision Systems

Abstract

This webinar is an introduction to Analytica's typescript and button scripting. Unlike variable definitions, button scripts can have side-effects, and this can be useful in many circumstances. I'll cover the syntax of typescript (and button scripts), and how scripts can be used from buttons, picture nodes or choice inputs. I'll introduce some of the Analytica scripting language to those who may not have seen or used it before. And we'll examine some of the ways in which button scripting can be used.

You can watch the recording of this webinar here: Button Scripting.wmv (Requires Windows Media Player or equiv). The model files and libraries used during the webinar are in Ana_tech_webinar_on_scripting.zip.


Analytica User Community

The Analytica Wiki, and How to Contribute

Date and Time: (tentative) Thursday, October 30, 2008, Pacific Daylight Time

Presenter: Lonnie Chrisman, Lumina Decision Systems

Abstract

The Analytica Wiki is a central repository of resources for active Analytica users. What's more, you -- as an active Analytica user -- can contribute to it. As an Analytica community, we have a lot to learn from each other, and the Analytica Wiki provides one very nice forum for doing so. You can contribute example models and libraries, hints and tricks, and descriptions of new techniques. You can fix errors in the Wiki documentation if you spot them, or add to the information that is there when you find subtleties that are not fully described. If you spend a lot of time debugging a problem, after solving it you could document the issue and how it was solved for your own benefit in the future, as well as for others in the user community who may encounter the same problem. When you publish a relevant paper, I hope you will add it to the page listing publications that utilize Analytica models.

I will provide a quick tour of the Analytica Wiki as it exists today. I'll then provide a tutorial on contributing to the Wiki -- e.g., the basics of how to edit or add content. Wikipedia has had tremendous success with this community content-contribution model, and I hope that after this introduction many of you will feel more comfortable contributing to the Wiki as you make use of it.

Due to a problem with the audio on the recording, the recording of this webinar is not available.
