Use any of OpenAI’s models on a row-by-row basis. This step doesn’t feed the whole dataset into the model, so it cannot perform operations that require more than one row at a time, but it can be used for a wide variety of per-row tasks. Keep in mind that OpenAI’s models are generative AI technologies and can therefore give incorrect responses.

The step comes with a predefined budget of $5 USD, which prevents it from executing if the estimated cost exceeds that amount. We advise using a filter step first to test the prompt on a few rows before launching it on the whole dataset. Note that our budget is a rough estimate; if you’re concerned about cost, you should also set limits on OpenAI’s side.

Your prompt is configured using two parameters: ‘prompt’ and ‘response_format’. prompt is a text field, while response_format lets you specify a JSON format for the model’s response, of the form {[expected_column]: "description"}. Both in the prompt and in the response_format descriptions you may refer to the row’s attributes using ${attribute_name}. Check the examples and parameter documentation below for more information.

???+ info “API integration”
    To use this step your team needs to have the OpenAI integration configured in Graphext. The corresponding credentials are required to connect to a third-party API. You can configure API integrations by following the INTEGRATIONS or ADD INTEGRATION link in the top-left corner of your Team’s page, selecting API keys, and then the name of the desired third-party service. First, create an OpenAI account or sign in. Next, navigate to the API key page and select “Create new secret key”, optionally naming the key. Make sure to save this key somewhere safe and do not share it with anyone. Optionally, you can specify the organization the key belongs to. On OpenAI’s page, you can also set general budgets for your API key and other settings that may interest you.
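For instance, a response_format of the following shape would request two output columns. This is only an illustrative sketch; the column names ‘sentiment’ and ‘topic’ and the attribute ‘Review’ are hypothetical:

{
  "sentiment": "The overall sentiment of ${Review}: positive, negative or neutral",
  "topic": "The main topic discussed in ${Review}"
}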

Usage

The following examples show how the step can be used in a recipe.

Examples

Specify model
prompt_ai(ds[["Local Address"]], { # contains column 'Local Address'
    "integration": "MY_INTEGRATION_ID",
    "model": {
      "id": "gpt-4o",
      "temperature": 0.2
    },
    "prompt": "What is the country for ${Local Address}"
}) -> (ds.country)
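
Produce several columns at once
The following is a sketch rather than an official example: the input column ‘Review’, the outputs ‘sentiment’ and ‘topic’, and the multi-column output syntax are assumptions based on the parameters documented below.

prompt_ai(ds[["Review"]], { # contains column 'Review'
    "integration": "MY_INTEGRATION_ID",
    "prompt": "Analyse the customer review in ${Review}",
    "response_format": {
        "sentiment": "Overall sentiment of ${Review}",
        "topic": "Main topic mentioned in ${Review}"
    },
    "force_format": {
        "sentiment": ["positive", "neutral", "negative"]
    },
    "out_types": {
        "sentiment": "category",
        "topic": "category"
    }
}) -> (ds.sentiment, ds.topic)

Here force_format restricts the sentiment column to three allowed values, while out_types keeps both outputs as categories (the default).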

Inputs & Outputs

The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name e.g. "churn-clf").
ds
dataset
required
Dataset to enrich. Make sure it contains the necessary columns.
*outputs
column
Output column(s) produced by the step. By default, a single column of type category is produced.

Configuration

The following parameters can be used to configure the behaviour of the step by including them in a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).

Parameters

prompt
string
Main prompt for the API call. The main body of instructions you want the model to carry out.
integration
string
Associated integration.
response_format
object
Prompt instructions for each output column. Maps each expected output column to a description of the value it should contain; descriptions may reference row attributes using ${attribute_name}.
Items
string
One or more additional parameters.
force_format
object
Values allowed in each output column. If provided, the values in each column will be restricted to the listed options.
Items
array[string]
One or more additional parameters.
Item
string
Each item in array.
out_types
object
Types for the output column(s). Desired types for each output column. By default, they will all be categories.
Items
string
default:"category"
One or more additional parameters. Values must be one of the following: category, date, number, boolean, url, sex, text, list[number], list[category], list[url], list[boolean], list[date]
model
object
Model Configuration. Configuration for OpenAI’s model.
id
string
default:"gpt-4o-mini"
OpenAI model to choose. Values must be one of the following: gpt-4o, gpt-4o-mini, o3-mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano
temperature
number
default:"0.7"
Temperature. Higher means more creativity, but also makes the model more likely to hallucinate. Lower temperature yields more deterministic results. Values must be in the following range:
0 ≤ temperature ≤ 1
budget
number
default:"5"
Budget. If present, the step will not execute if the estimated input token cost exceeds this amount in USD. If max_out_tokens is not set, we will give a minimum estimate of the cost; if it is set, we will give a ceiling. Actual cost may vary depending on a number of factors, such as your OpenAI plan. Check your plan before executing.
max_out_tokens
number
Maximum output tokens. If set, each individual response will be limited to at most this many tokens. Allows a theoretical budget ceiling to be calculated before executing.
batch_size
integer
default:"100"
Number of concurrent requests sent at a time. Lowering this if your plan has very low rate limits might prevent empty responses. Values must be in the following range:
1 ≤ batch_size ≤ 1000
timeout
integer
default:"60"
Timeout for requests to OpenAI. Values must be in the following range:
1 ≤ timeout < inf
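
As a cost-control sketch (the parameter values chosen are illustrative, and the column ‘Local Address’ is reused from the example above), the budget-related parameters can be combined as follows:

prompt_ai(ds[["Local Address"]], { # contains column 'Local Address'
    "integration": "MY_INTEGRATION_ID",
    "model": {
      "id": "gpt-4o-mini",
      "temperature": 0.2
    },
    "prompt": "What is the country for ${Local Address}",
    "max_out_tokens": 20, # cap each response so a cost ceiling can be estimated
    "budget": 2,          # do not execute if the estimated cost exceeds 2 USD
    "batch_size": 50,     # fewer concurrent requests for plans with low rate limits
    "timeout": 120        # allow slower responses before timing out
}) -> (ds.country)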