
Embed sessions

Tags: vectorize, word2vec, item2vec, model

Trains an item2vec model on provided lists of items.

Lists of items may represent pages visited in a browsing session, shopping baskets and the products they contain, sentences of words, etc. The step calculates an embedding vector for each item list, such that two vectors are similar if their corresponding lists of items are similar. Similarity here is measured as an average over the individual items: we first calculate embedding vectors representing individual items (using word2vec), and then average over all items belonging to the same list/session.

As an example, consider a dataset containing shopping baskets. In this case the step will first calculate embeddings for individual products. The resulting vectors will be similar if they represent products that are often bought together: the vectors for sausages and hot dog buns, for example, may be more similar to each other than those representing shampoo and toys. Then, to arrive at an embedding vector for each basket, we simply average over all its individual products. The result captures the similarity between baskets in terms of the mix of products they contain, so the vectors representing the baskets of people buying lots of baby products will be more similar to each other than to the vectors representing baskets stocked for a BBQ party.
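
To make the two stages concrete, here is a minimal sketch using gensim directly. The toy data and variable names are purely illustrative, the keyword arguments assume gensim ≥ 4, and this is a sketch of the idea, not the step's actual implementation.

Illustrative sketch (Python)
import numpy as np
from gensim.models import Word2Vec

# Toy sessions: one list of items per shopping basket.
baskets = [
    ["sausages", "hot dog buns", "ketchup"],
    ["sausages", "hot dog buns", "charcoal"],
    ["shampoo", "toothpaste", "toys"],
]

# Stage 1: item2vec, i.e. word2vec trained on lists of items.
model = Word2Vec(sentences=baskets, vector_size=48, sg=1, min_count=1)

# Stage 2: average the item vectors in a basket to embed the whole session.
def embed_session(items):
    vectors = [model.wv[item] for item in items if item in model.wv]
    return np.mean(vectors, axis=0)

basket_embeddings = [embed_session(basket) for basket in baskets]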

To calculate only the individual item embeddings, see the complementary embed_items step.

We use gensim to train the item2vec model, so for further details also see its word2vec documentation.

Usage


The following are the step's expected inputs and outputs and their specific types.

Step signature
embed_sessions(sessions: list[category]|list[number], {
    "param": value
}) -> (embeddings: list[number])

where the object {"param": value} is optional in most cases and if present may contain any of the parameters described in the corresponding section below.

Example

Example call (in recipe editor)
embed_sessions(baskets.products, {
  "size": 48,
  "sg": 1,
  "negative": 20,
  "alpha": 0.025,
  "window": 5,
  "min_count": 3,
  "iter": 10,
  "sample": 0,
  "workers": 3
}) -> (baskets.embedding)

Inputs


sessions: column:list[category]|list[number]

A column containing lists, where each row is a session, and each session a list of items.

Outputs


embeddings: column:list[number]

A column containing the session embeddings, in the same order as the sessions input column. Embeddings are lists of numbers.

Parameters


Also see gensim's word2vec reference for further detail about the underlying algorithm's parameters.
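
Note that the parameter names below follow gensim's pre-4.0 keyword arguments. If you want to reproduce the example configuration above with a recent gensim yourself, the call might look roughly as follows; gensim 4 renamed size to vector_size and iter to epochs, and the placeholder data here is purely illustrative.

Equivalent gensim call (Python, illustrative)
from gensim.models import Word2Vec

sessions = [["milk", "bread"]] * 3  # placeholder: one item list per session

model = Word2Vec(
    sentences=sessions,
    vector_size=48,  # "size" in this step
    sg=1,            # 1 = skip-gram, 0 = CBOW
    negative=20,     # number of negative samples
    alpha=0.025,     # initial learning rate
    window=5,        # context window
    min_count=3,     # drop items occurring fewer than 3 times
    epochs=10,       # "iter" in this step
    sample=0,        # no downsampling of frequent items
    workers=3,       # training threads
)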


size: integer = 48

Length of embedding vectors.

Range: 1 ≤ size < inf


sg: integer = 1

Whether to use skip-gram or CBOW. Set this to 1 for skip-gram, 0 for CBOW.

Range: 0 ≤ sg ≤ 1


negative: integer = 20

Negative sampling. How many "noise" items to draw for each observed item; if 0, negative sampling is not used.


alpha: number = 0.025

Initial learning rate.

Range: 0 ≤ alpha ≤ 1


window: integer | string = 5

Word context window: the number of neighboring items considered as context for each item.

Must be an integer, or one of: "auto", "max", "all"


min_count: integer = 3

Minimum number of times an item must occur in the dataset; rarer items are filtered out.

Range: 1 ≤ min_count < inf


iter: integer = 10

Iterations. How many epochs to run the algorithm for.

Range: 1 ≤ iter < inf


sample: number = 0

Subsampling. Fraction of the most common items to randomly filter out (downsample); 0 disables downsampling.

Range: 0 ≤ sample ≤ 1


normalize: boolean = True

Whether to return normalized (unit-length) embedding vectors.
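
Normalization matters mainly when comparing embeddings downstream: for unit-length vectors, the plain dot product equals the cosine similarity. A small illustrative sketch (the helper below is ours, not part of the step):

Cosine similarity of normalized vectors (Python, illustrative)
import numpy as np

def normalize(vec):
    # Scale a vector to unit length.
    return vec / np.linalg.norm(vec)

a = normalize(np.array([1.0, 2.0, 2.0]))
b = normalize(np.array([2.0, 1.0, 2.0]))

# For unit-length vectors the dot product is the cosine similarity.
similarity = float(a @ b)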