This step calculates embeddings of images. The resulting vectors can be used with link_embeddings, for example, to create a network of images connected by similarity.
In its current form the step calculates image embeddings using CLIP, a model trained on 400M image/text pairs to pick out an image's correct caption from a list of candidates.
Usage
The following example shows how the step can be used in a recipe.

Examples
The step has no required parameters, so the simplest call needs only an input column and an output column.
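A minimal sketch of such a call, assuming (hypothetically) a step named embed_images, an input column ds.image_url and an output column ds.embedding; substitute the names used in your own project:

embed_images(ds.image_url) -> (ds.embedding)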
Inputs & Outputs
The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").
Inputs
A column of image URLs to calculate embeddings for.
Outputs
A column of embedding vectors capturing the meaning of each input image.
Configuration
The following parameters can be used to configure the behaviour of the step by including them in a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Parameters
Whether to normalize embedding vectors to unit length (a norm of 1.0).
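For example, normalization could be enabled as follows (a sketch: the parameter name normalize, like the step and column names above, is an assumption rather than a name confirmed by this documentation):

embed_images(ds.image_url, {"normalize": true}) -> (ds.embedding)

With unit-length vectors, cosine similarity reduces to a plain dot product, which is convenient when linking images by similarity.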