model
string
required

The name of a model. This should be the full name (including the organization if applicable) of a model in the Hugging Face model hub. You can copy it by clicking on the icon next to the model’s name on its dedicated web page.

Note that if the name doesn’t correspond to an existing model in the hub, the step will fail. Since there are hundreds if not thousands of potential models, we cannot validate the name before executing the step.

revision
[string, null]

The specific model version. Can be a branch name, a tag name, or a commit id. To identify a particular revision, on a model’s webpage (such as https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual), browse to the Files and versions tab, and use the branch or history dropdown menus to see the available branch names or commit IDs. If not provided, the latest (newest) available version will be used (usually from the “main” branch).

labels
[object, null]

Map original model output to human-readable labels. Unfortunately, many models in Hugging Face are badly configured and output labels like LABEL_0, LABEL_1, etc. which isn’t very helpful. You can use the “Hosted inference API” widget on the model’s web page to test its output labels. If necessary, use this parameter to map the default output labels to ones you prefer.
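As a sketch of what this mapping does (the label names and mapping below are made up for illustration; the actual labels depend on the model):

```python
# Hypothetical mapping from a model's default output labels to readable ones
labels = {"LABEL_0": "negative", "LABEL_1": "neutral", "LABEL_2": "positive"}

# Raw predictions, in the shape a Hugging Face pipeline typically returns
predictions = [
    {"label": "LABEL_2", "score": 0.91},
    {"label": "LABEL_0", "score": 0.74},
]

# Apply the mapping, keeping the original label if no replacement is given
readable = [{**p, "label": labels.get(p["label"], p["label"])} for p in predictions]
```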

min_prob
[number, null]

Minimum probability (score) to accept prediction label. Class labels with a corresponding probability smaller than this value will be removed (replaced with NaN, i.e. the missing value).

Values must be in the following range:

0.0 ≤ min_prob < 1.0
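A minimal sketch of the thresholding this implies, assuming each prediction is a label with an associated score:

```python
import math

def apply_min_prob(label, score, min_prob):
    """Keep the label only if its score reaches min_prob; otherwise return NaN."""
    return label if score >= min_prob else math.nan

confident = apply_min_prob("positive", 0.9, min_prob=0.5)   # kept
uncertain = apply_min_prob("positive", 0.3, min_prob=0.5)   # replaced with NaN
```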

batch_size
integer
default: 8

How many texts to process simultaneously. May get ignored when running on CPU.

Values must be in the following range:

1 ≤ batch_size ≤ 64
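Conceptually, batching splits the input texts into consecutive chunks of at most batch_size items, which are then passed to the model together; a sketch:

```python
def batched(texts, batch_size=8):
    """Split texts into consecutive batches of at most batch_size items."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

batches = batched(["t1", "t2", "t3", "t4", "t5"], batch_size=2)
```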
n_workers
integer

Number of threads used to feed the GPU with texts.

Values must be in the following range:

1 ≤ n_workers ≤ 4
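Roughly, this means several threads submit batches to the model concurrently so the GPU doesn't sit idle waiting for input. A sketch, with a stand-in function in place of the real model call:

```python
from concurrent.futures import ThreadPoolExecutor

def predict_batch(batch):
    # Stand-in for the real model call, which would run on the GPU
    return [len(text) for text in batch]

batches = [["short", "longer text"], ["abc"]]

# n_workers threads feed batches to the (stand-in) model concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(predict_batch, batches))
```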
device
[integer, null]

Which CPU/GPU to run the model on. Pass -1 to use the CPU, and 0 to use the first available GPU. By default, or when passed null, the step will automatically use a GPU if one is found, and otherwise the CPU.
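The selection logic described above can be sketched as follows (the function name and the gpu_available flag are illustrative, not part of the step's API):

```python
def resolve_device(device, gpu_available):
    """Mimic the documented behaviour: null/None auto-detects,
    -1 forces the CPU, 0 selects the first GPU."""
    if device is None:
        return 0 if gpu_available else -1
    return device
```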

integration
string

ID of a Hugging Face integration configured in Graphext. To use a private model from the Hugging Face hub, you need to configure a Hugging Face “API Key” integration: in the relevant Graphext team, go to Add Integration > API KEYS > Add API Key > Hugging Face, then paste an access token previously created in your Hugging Face account. Graphext will automatically assign an ID to your integration, which gets autocompleted where required (e.g. in the recipe editor).