embed_images
Embed images using pretrained DL models.
An embedding vector is a numerical representation of an image (or text etc.), such that different numerical components of the vector capture different dimensions of the image’s content. Embeddings can be used, for example, to calculate the semantic similarity between pairs of images (see link_embeddings, which uses such similarities to create a network of images connected by similarity).
In its current form the step calculates image embeddings using CLIP, a model trained on 400M image/text pairs to pick out an image’s correct caption from a list of candidates.
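As a rough sketch of that workflow, a recipe might first embed a column of images and then link them by similarity. The column names here, and the exact signature of link_embeddings, are assumptions for illustration only:

```
embed_images(ds.image) -> (ds.embedding)
link_embeddings(ds.embedding) -> (ds.links)
```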
Usage
The following example shows how the step can be used in a recipe. The step has no required parameters, so the simplest call is simply:
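A minimal sketch, assuming the images live in a column named ds.image and the resulting vectors are written to a column named ds.embedding (both names are placeholders):

```
embed_images(ds.image) -> (ds.embedding)
```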
Inputs & Outputs
The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").
Configuration
The following parameters can be used to configure the behaviour of the step by including them in a JSON object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Whether to normalize embedding vectors (to unit length, i.e. a norm of 1.0).
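As a sketch of configured usage, assuming this parameter is exposed under the key "normalize" (the actual key name is not shown on this page, so it is an assumption), a call might look like:

```
embed_images(ds.image, {"normalize": true}) -> (ds.embedding)
```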