embed_images
Embed images using pretrained deep learning models.
An embedding vector is a numerical representation of an image (or text, etc.), such that different numerical components of the vector capture different dimensions of the image’s content. Embeddings can be used, for example, to calculate the semantic similarity between pairs of images (see link_embeddings, for example, to create a network of images connected by similarity).
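As an illustration (not something this step itself computes), the semantic similarity between two embedding vectors u and v is commonly measured by their cosine similarity:

\text{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}

Values close to 1 indicate images with very similar content, while values near 0 indicate unrelated content.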
In its current form the step calculates image embeddings using CLIP, a model trained on 400M image/text pairs to pick out an image’s correct caption from a list of candidates.
Usage
The following example shows how the step can be used in a recipe.
Examples
The step has no required parameters, so the simplest call is simply
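A minimal sketch, assuming the input column of image URLs is named ds.image_url and the output column ds.embedding (both names are illustrative, not prescribed by the step):

embed_images(ds.image_url) -> (ds.embedding)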
General syntax for using the step in a recipe, showing the inputs the step expects and the outputs it will produce (sketched below). For further details see the sections that follow.
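A sketch of the general pattern, again with illustrative column names, and with an optional JSON object of parameters as the last input:

embed_images(ds.image_url, {"param": "value"}) -> (ds.embedding)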
Inputs & Outputs
The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").
Inputs
A column of URLs to images to calculate embeddings for.
Outputs
A column of embedding vectors capturing the meaning of each input image.
Configuration
The following parameters can be used to configure the behaviour of the step by including them in a JSON object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Parameters
Whether to normalize embedding vectors (to a length/norm of 1.0). With unit-length vectors, cosine similarity reduces to a simple dot product, which can make downstream similarity calculations cheaper.
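For example, assuming the parameter is named normalize (the name is an assumption here, for illustration only), a configured call might look like:

embed_images(ds.image_url, {"normalize": true}) -> (ds.embedding)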