embed_items
Trains an item2vec model on provided lists of items (or sentences of words, etc.).
This is essentially the word2vec algorithm applied to arbitrary lists of items. Word2vec computes vectors representing words, such that nearby (similar) vectors represent words that often appear in similar contexts. Item2vec refers to applying the exact same algorithm to arbitrary lists of items in which the order of items has an interpretation comparable to the order of words in a sentence (the items may be categories, tags, IDs, etc.).
Note that if the order of items in the list (session/basket etc.) is not important, and you simply want item vectors to be similar if the corresponding items usually occur together in the same list, use the window parameter (see below) with a value of “all”.
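Using the step(..., {"param": "value", ...}) -> (output) convention described under Configuration below, and assuming hypothetical column names (ds.basket for the item lists, ds.embedding for the resulting vectors), that would look roughly like:

```
embed_items(ds.basket, {"window": "all"}) -> (ds.embedding)
```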
We use gensim to train the item2vec model, so for further details also see its word2vec page.
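As a rough, standalone illustration of what the step does internally, the following gensim sketch trains item vectors on a few invented baskets; the data and parameter values are purely illustrative, not this step's defaults:

```python
from gensim.models import Word2Vec

# Each inner list is one "sentence": a session, basket, tag list, etc.
baskets = [
    ["milk", "bread", "eggs"],
    ["bread", "butter"],
    ["milk", "eggs", "butter", "cheese"],
]

# Item2vec is word2vec where the "words" are arbitrary item identifiers.
model = Word2Vec(
    sentences=baskets,
    vector_size=16,  # length of the resulting embedding vectors
    window=5,        # context window size
    min_count=1,     # keep even items occurring only once
    sg=1,            # 1 = skip-gram, 0 = CBOW
    epochs=10,
)

vector = model.wv["milk"]                            # vector for a single item
neighbours = model.wv.most_similar("milk", topn=3)   # items found in similar contexts
```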
Usage
The following example shows how the step can be used in a recipe. It relies on default parameter values only, and is thus equivalent to using the step without specifying any parameters.
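A minimal sketch of what such a call might look like, assuming a hypothetical input column ds.basket containing the item lists and writing the resulting vectors to a hypothetical ds.embedding column:

```
embed_items(ds.basket) -> (ds.embedding)
```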
Inputs & Outputs
The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").
Configuration
The following parameters can be used to configure the behaviour of the step by including them in a JSON object as the last "input" to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Length of resulting embedding vectors.
Values must be in the following range:
Whether to use the skip-gram or CBOW algorithm. Set this to 1 for skip-gram, and 0 for CBOW.
Values must be in the following range:
Update maximum for negative sampling; only this many word vectors will be updated.
Initial learning rate.
Values must be in the following range:
Size of word context window. Must be either an integer (the number of neighbouring words to consider), or any of “auto”, “max” or “all”, in which case the window is equal to the whole list/session/basket.
Minimum count of an item in the dataset. If an item occurs fewer than this many times it will be ignored.
Values must be in the following range:
Iterations. How many epochs to run the algorithm for.
Values must be in the following range:
Percentage of most-common items to filter out (equivalent to “stop words”).
Values must be in the following range:
Whether to return normalized item vectors.
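Putting several of these options together, a configured call might look like the following; the parameter names and values here are assumptions modelled on gensim's word2vec options (only window is named explicitly above), not necessarily the step's own names or defaults:

```
embed_items(ds.basket, {
    "size": 100,
    "sg": 1,
    "window": 5,
    "min_count": 5,
    "epochs": 10
}) -> (ds.embedding)
```

Any parameter omitted from the JSON object keeps its default value.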