OpenAI API library

This page is under construction. The library is not yet available for download, but when it is ready, we will include a download link on this page.



Requires the Analytica Enterprise edition or better

The OpenAI API library is a collection of Analytica functions that enable you to interface with large language models (LLMs) from within your Analytica model. You can leverage the flexibility of these generative AI models to perform tasks that would be hard to implement in a conventional program, and you can use this library to learn about generative AI from within Analytica. This page is a reference for the functions in the library. It is accompanied by a Tutorial on using the library; working through the tutorial is a great way to learn about LLMs.

Requirements

To use this library, you must have:

  • Analytica Enterprise edition or better.
  • An OpenAI account with an API key (see below).

To get an OpenAI API key:

  1. Go to [https://platform.openai.com/] and sign up for an account.
  2. Click on your profile picture and select View API keys.
  3. Click Create new secret key.
  4. Copy this key to the clipboard (or otherwise save it).

Getting started

  1. Download the library to your "C:\Program Files\Lumina\Analytica 6.5\Libraries" folder.
  2. Launch Analytica.
  3. Load your model, or start a new model.
  4. Select File / Add Library..., select OpenAI API library.ana [OK], select Link [OK].
  5. Enter the OpenAI API lib library module.
  6. Press either Save API key in env var or Save API key with your model. Read the text on that page to understand the difference.
  7. A message box appears asking you to enter your API key. Paste it into the box to continue.

At this point, you should be able to call the API. View the result of Available models to test whether the connection to OpenAI is working. This shows you the list of OpenAI models that you have access to.
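For example, assuming the variable's identifier is Available_models (verify the identifier in your copy of the library), you can evaluate it directly in an expression or the Typescript window:

{ Lists the OpenAI models your key can access. }
Available_models

If the call succeeds, the result is a list of model names; if it returns an error, re-check the API key steps above.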

Text generation from a prompt

Many machine learning and AI inference tasks can be performed by providing prompt text and asking an LLM to complete it.

Function Prompt_completion( prompt, modelName, «optional parameters» )

Returns a text completion from the provided starting «prompt». This example demonstrates the basic usage:

Prompt_completion("The little red corvette is a metaphor for") → "sensuality and excitement"
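You can pass a model name as the second parameter; for example (the model choice here is illustrative):

Prompt_completion("The little red corvette is a metaphor for", 'gpt-4')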

The function has multiple return values:

  1. The main return value is the textual response (the content).
     If «Completion_index» is specified, this is a set of completions indexed by «Completion_index».
  2. The finish_reason. Usually Null, but may be "stop" if one of the «stop_sequences» is encountered.
  3. The number of prompt tokens.
  4. The number of total tokens.

Example:

Local ( response, finish_reason, prompt_tokens, total_tokens ) :=
        Prompt_completion("The little red corvette is a metaphor for")
Do [ response, finish_reason, prompt_tokens, total_tokens ]

  response      → "freedom, desire, and youthfulness"
  finish_reason → "stop"
  prompt_tokens → 11
  total_tokens  → 17


The function has many optional parameters:

  • «modelName»: The OpenAI model to use. It must support chat. 'gpt-3.5-turbo' and 'gpt-4' are common choices.
  • «functions»: One or more functions that the LLM can call during its completions.
  • «temperature»: A value between 0 and 2.
    Smaller values produce more focused and deterministic output; higher values are more random. Default = 1.
  • «top_p»: A value 0 < top_p ≤ 1. An alternative to sampling with temperature.
    Do not specify both «temperature» and «top_p».
    A value of 0.1 means only the tokens comprising the top 10% of probability mass are considered.
  • «Completion_index»: Specify an index if you want more than one alternative completion.
    The result is indexed by this index, and its length determines how many completions are generated (see the sketch after this list).
  • «stop_sequences»: You can specify up to 4 stop sequences.
    When one of these sequences is generated, the API stops generating further tokens.
  • «max_tokens»: The maximum number of tokens to generate in the chat completion.
  • «presence_penalty»: Number between -2.0 and 2.0.
    Positive values penalize new tokens based on whether they appear in the text so far.
  • «frequency_penalty»: Number between -2.0 and 2.0.
    Positive values penalize new tokens based on their existing frequency in the text so far.
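For illustration, here is a sketch that requests three alternative completions at a higher temperature. The prompt, settings, and local index name are arbitrary; it uses Analytica's named-parameter calling syntax with the parameter names listed above:

{ Request three alternative completions, indexed by a local index. }
Index Alternatives := 1..3 Do
    Prompt_completion("Suggest a name for a solar-powered car",
        temperature: 1.5, max_tokens: 12, Completion_index: Alternatives)

Because «Completion_index» is passed, the main return value is an array of three different completions indexed by Alternatives.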

Managing a chat

Similarity embeddings
