An embedding vector is a numerical representation of an image (or of text, etc.), in which different components of the vector capture different dimensions of the image’s content. Embeddings can be used, for example, to calculate the semantic similarity between pairs of images (see link_embeddings, which uses such similarities to create a network of images connected by similarity).
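A common way to quantify that similarity (a standard choice for embedding vectors, though the exact metric used downstream is not specified here) is the cosine similarity of two embedding vectors u and v,

\mathrm{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert},

which is close to 1 for images with very similar content and near 0 for unrelated ones.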

In its current form, the step calculates image embeddings using CLIP, a model trained on 400M image/text pairs to pick out an image’s correct caption from a list of candidates.

Usage

The following example shows how the step can be used in a recipe.
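The names below are illustrative only, since the step’s actual name and your dataset’s column names may differ: the sketch assumes a step called embed_images, an input column ds.image holding image URLs, and an output column ds.embedding to receive the vectors.

    embed_images(ds.image) -> (ds.embedding)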

Inputs & Outputs

The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]), or models (referenced by name, e.g. "churn-clf").
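As a quick illustration of the three reference forms (the step names here are placeholders, not real steps):

    some_step(ds.first_name) -> (ds.result)
    some_step(ds[["first_name", "last_name"]]) -> (ds)
    some_step("churn-clf", ds) -> (ds.prediction)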

Configuration

The following parameters can be used to configure the behaviour of the step by including them in a JSON object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
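For example, the configuration object is passed after the regular inputs. The parameter name used here (batch_size) is purely illustrative and may not exist for this step; consult the parameter list for the options it actually supports.

    embed_images(ds.image, {"batch_size": 32}) -> (ds.embedding)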