An embedding vector is a numerical representation of an image (or text, etc.), such that different numerical components of the vector capture different dimensions of the image’s content. Embeddings can be used, for example, to calculate the semantic similarity between pairs of images (see link_embeddings to create a network of images connected by similarity). In its current form, the step calculates image embeddings using CLIP, which has been trained on 400M image/text pairs to pick out an image’s correct caption from a list of candidates.
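As a rough sketch of that workflow (the exact signature of link_embeddings may differ from what is shown here; refer to its own documentation, and note the column names are illustrative), a recipe might first embed the images and then connect similar ones:

embed_images(ds.image_url) -> (ds.embedding)
link_embeddings(ds.embedding) -> (ds.links)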

Usage

The following example shows how the step can be used in a recipe.

Examples

The step has no required configuration parameters, so the simplest call is simply:
embed_images(ds.image_url) -> (ds.embedding)

Inputs & Outputs

The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").

images: column[url] (required)
    A column of URLs of the images to calculate embeddings for.

embedding: column[list[number]] (required)
    A column of embedding vectors capturing the meaning of each input image.
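For instance, assuming a hypothetical input column named photo_url containing the image links, and choosing a name for the output column, the step could be called as:

embed_images(ds.photo_url) -> (ds.image_vector)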

Configuration

The following parameters can be used to configure the behaviour of the step by including them in a JSON object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).

Parameters

normalize: boolean (default: true)
    Whether to normalize embedding vectors (to a length/norm of 1.0).
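For example, to keep the raw (unnormalized) vectors, the parameter can be passed as the last input to the step (a sketch reusing the column names from the example above):

embed_images(ds.image_url, {"normalize": false}) -> (ds.embedding)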