OpenAI API library


This page is under construction. The library is not yet available for download, but when it is ready, we will include a download link on this page.





Requires the Analytica Enterprise edition or better

The OpenAI API library is a collection of Analytica functions that enable you to interface with large language models (LLMs) from within your Analytica model. You can leverage the flexibility of these generative A.I. models to perform tasks that would be hard to do in a formal program, and you can use this library to learn about generative A.I. from within Analytica. This page is a reference for the functions in the library. It is accompanied by a Tutorial on using the library; going through the tutorial is a great way to learn about LLMs.

Requirements

To use this library, you must have:

  • Analytica Enterprise edition or better.
  • An OpenAI account with an API key (see the steps below).

To get an OpenAI API key

  1. Go to https://platform.openai.com/ and sign up for an account.
  2. Click on your profile picture and select View API keys.
  3. Click Create new secret key.
  4. Copy this key to the clipboard (or otherwise save it).

Getting started

  1. Download the library to your "C:\Program Files\Lumina\Analytica 6.5\Libraries" folder.
  2. Launch Analytica
  3. Load your model, or start a new model.
  4. Select File / Add Library..., select OpenAI API library.ana [OK], select Link [OK].
  5. Enter the OpenAI API lib library module.
  6. Press either Save API key in env var or Save API key with your model. Read the text on that page to understand the difference.
  7. A message box appears asking you to enter your API key. Paste it into the box to continue.

At this point, you should be able to call the API. View the result of Available models to test whether the connection to OpenAI is working. This shows you the list of OpenAI models that you have access to.
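
If you prefer to test with an expression, a one-line smoke test like the following should also work (a minimal sketch; the completion text shown is illustrative and will vary):

Prompt_completion("Reply with the single word OK.") → "OK"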

Text generation from a prompt

Many machine learning and A.I. inference tasks are performed by providing prompt text and asking an LLM to complete the text.

Function Prompt_completion( prompt, modelName, «optional params» )

Returns a text completion from the provided starting «prompt». This example demonstrates the basic usage:

Prompt_completion("The little red corvette is a metaphor for") → "sensuality and excitement"

The function has multiple return values:

  1. The main return value is the textual response (the content).
     If «Completion_index» is specified, this is a set of completions indexed by «Completion_index».
  2. The finish_reason. Usually Null, but may be "stop" if a «stop_sequence» is encountered.
  3. The number of prompt tokens.
  4. The total number of tokens.

Example:

Local ( response, finish_reason, prompt_tokens, total_tokens ) := Prompt_completion("The little red corvette is a metaphor for") Do
    [ response, finish_reason, prompt_tokens, total_tokens ]

  response         "freedom, desire, and youthfulness"
  finish_reason    "stop"
  prompt_tokens    11
  total_tokens     17


The function has many optional parameters:

  • «modelName»: The OpenAI model to use. It must support chat. 'gpt-3.5-turbo' and 'gpt-4' are common choices.
  • «functions» : One or more functions that the LLM can call during its completions.
  • «temperature»: A value between 0 and 2.
    Smaller values are more focused and deterministic; higher values are more random. Default = 1.
  • «top_p»: A value 0 < «top_p» ≤ 1. An alternative to sampling temperature.
    Do not specify both «temperature» and «top_p».
    A value of 0.1 means only tokens comprising the top 10% of probability mass are considered.
  • «Completion_index»: Specify an index if you want more than one alternative completion.
    The results will have this index if specified. The length of this index specifies how many completions are generated.
  • «stop_sequences»: You can specify up to 4 stop sequences.
    When one of these sequences is generated, the API stops generating.
  • «max_tokens»: The maximum number of tokens to generate in the chat completion.
  • «presence_penalty»: Number between -2.0 and 2.0.
    Positive values penalize new tokens based on whether they appear in the text so far.
  • «frequency_penalty»: Number between -2.0 and 2.0.
    Positive values penalize new tokens based on their existing frequency in the text so far.
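
For illustration, here is a sketch that combines several of these parameters, using a local index to request three alternative completions (the prompt is hypothetical, and exact results will vary):

Index Alt := 1..3 Do { request three alternative completions }
    Prompt_completion( "Suggest a name for a new sailboat.",
        modelName: 'gpt-3.5-turbo',
        temperature: 1.2,
        Completion_index: Alt,
        max_tokens: 10 )

The result is indexed by Alt, with one candidate completion per element of the index.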

See the tutorial on using this library for more details. Also, see #Function callbacks below.

Managing a chat

Function callbacks

You can provide the Prompt_completion and Chat_completion functions with your own User-Defined Functions that the LLM can call while generating the response to your prompt. You could use this, for example, to let the LLM incorporate results that your model computes into the conversation. You can also use it to provide tools for tasks the LLM is not very good at on its own, such as arithmetic (see the sketch at the end of this section).

Your callback functions should have only simple parameters, accepting scalar text or numbers; the language models have no way to pass arrays or indexes. It is a good idea to qualify each parameter as either Text or Number. The Description of your function gives the language model guidance about when it should use your function.

For example:

Function get_current_weather( location : text ; unit : text optional )
Description: Get the current weather in a given location
Parameter Enumerations:
   unit
       "celsius" |
       "fahrenheit"
Definition: AskMsgText(f"What is the current weather in {location}?","API function call")

To allow the LLM to use this function, pass it in the «functions» parameter:

Prompt_completion("Do I need an umbrella today? I'll be taking a hike in Portland, Oregon", functions: get_current_weather)

When you evaluate this, a message box appears asking you to provide the answer to "What is the current weather in Portland, Oregon?". This message box appears when the LLM calls get_current_weather and its AskMsgText is evaluated.

Type "Drizzly with occasional thunder showers", and the final return value is:

"Yes, it is recommended to bring an umbrella today as there are occasional thunder showers in Portland, Oregon."

You can use the Parameter Enumeration attribute to specify the possible values for parameters that expect specific enumerated values.

(To do: the LLM can benefit from parameter descriptions. We have not yet adopted a convention for these.)
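
As a second illustration, callbacks can supply tools for tasks such as arithmetic, as mentioned above. The following sketch is hypothetical (the function name, description wording, and prompt are our own, not part of the library):

Function Multiply_numbers( x : number ; y : number )
Description: Multiply two numbers and return the exact product
Definition: x * y { hypothetical helper: exact arithmetic for the LLM }

Prompt_completion("What is 1234.5 times 6789?", functions: Multiply_numbers)

Because the LLM calls Multiply_numbers rather than guessing at the digits, the product in its reply comes from your model's exact computation.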

Similarity embeddings
