Embed dataset

vectorize · dimensionality reduction · UMAP

Reduce the dataset to an n-dimensional numeric vector embedding.

Works like vectorize_dataset, but instead of converting the input dataset to a new dataset of N numeric columns, it creates a single column in the original dataset containing vectors (lists) of N components. In other words, the result of vectorize_dataset is converted to a column of embeddings, where each embedding is a numerical representation of the corresponding row in the original dataset.

Many machine learning and AI algorithms expect their input data to be in purely numerical form, i.e. not containing categorical variables, missing values, etc. This step converts arbitrary datasets, potentially containing non-numerical variables and NaNs, into this expected form. It does this by defining, for each possible type of input column, a transformation from non-numeric to numeric values. As an example, ordered categorical variables (ordinals), such as the day of the week, may be converted into a series of numbers (0..6). Non-ordered categorical variables of low cardinality (containing few distinct categories) may be expanded into multiple new columns of 0s and 1s, indicating whether each row belongs to a specific category or not. Similar transformations are applied to dates, multivalued categoricals, etc.
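
The exact encodings are internal to the step, but as a rough sketch of the kind of transformation described (using pandas and hypothetical column names, not this step's API):

import pandas as pd

# Toy dataset with hypothetical columns, for illustration only.
df = pd.DataFrame({
    "day_of_week": ["Mon", "Wed", "Sun"],   # ordered categorical (ordinal)
    "color": ["red", "green", "red"],       # low-cardinality categorical
})

# Ordinals become a series of numbers that preserves their order.
day_order = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
df["day_of_week_num"] = df["day_of_week"].map({d: i for i, d in enumerate(day_order)})

# Low-cardinality categoricals are expanded into 0/1 indicator columns.
indicators = pd.get_dummies(df["color"], prefix="color", dtype=int)

numeric = pd.concat([df[["day_of_week_num"]], indicators], axis=1)
print(numeric)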

NaNs are imputed (replaced) with an appropriate value from the corresponding column (e.g. the median in a quantitative column). In addition, a new component of 0s and 1s is added, indicating whether the original column had a missing value or not.
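
Again purely as an illustration of the idea (the step performs this internally; the column name is hypothetical), median imputation plus a missing-value indicator could look like this in pandas:

import pandas as pd

# A hypothetical quantitative column with a missing value.
age = pd.Series([23.0, None, 41.0], name="age")

age_filled = age.fillna(age.median())   # impute with the column median
age_missing = age.isna().astype(int)    # 0/1 indicator: was the original value missing?

print(pd.DataFrame({"age": age_filled, "age_missing": age_missing}))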

The resulting embeddings will almost certainly not have the same number of components as the original dataset has columns (as the example of categorical variables shows).

The n_components parameter controls how many components the embeddings should have; if this is smaller than the number that would result from the encoding alone, a dimensionality reduction will be applied (UMAP by default).
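
Conceptually, this reduction stage amounts to running an algorithm such as UMAP on the fully numeric matrix. The following sketch shows what that stage does using the umap-learn library directly, with made-up data; it is not how the step itself is invoked, and the step's internals may differ:

import numpy as np
import umap  # from the umap-learn package

# Stand-in for an already vectorized dataset: 1000 rows, 50 numeric features.
X = np.random.rand(1000, 50)

reducer = umap.UMAP(n_components=10, n_neighbors=100, min_dist=0.1, random_state=42)
embedding = reducer.fit_transform(X)  # shape (1000, 10): one 10-component vector per row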

The resulting numerical representation of the original data points aims to preserve the structure of similarities. I.e. if two original rows are similar to each other, then their (potentially reduced) numerical representations should also be similar. Equally, two very different rows should have representations that are also very different.

Example

The following, simplest example creates a new column of vector embeddings, each containing numeric components only and hopefully capturing all or most of the information in the corresponding original row.

embed_dataset(ds) -> (ds.embedding)
More examples

The following example will convert and reduce the input dataset to a single column of embedding vectors (lists of numbers), each having 10 components. After normalization, the date column will be multiplied by 0.5 to reduce its weight relative to the others. The column age, on the other hand, will be given more importance. Also, 15 neighbours are considered for each data point in UMAP, so that we give more importance to the similarity between nearby points and less importance to the global structure of the data when calculating the embeddings.

embed_dataset(ds, {
  "weights": {"date": 0.5, "age": 2},
  "weights_max": 32,
  "weights_exp": 2,
  "algorithm": "umap",
  "n_components": 10,
  "n_neighbors": 15,
  "min_dist": 0.1,
  "random_state": 42
}) -> (ds_vec)

Usage

The following are the step's expected inputs and outputs and their specific types.

embed_dataset(ds_in: dataset, {"param": value}) -> (embedding: list[number])

where the object {"param": value} is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.

Inputs


ds_in: dataset

An arbitrary input dataset.

Outputs


embedding: column:list[number]

A column containing embedding vectors (lists of numbers) numerically representing the corresponding rows in ds_in.

Parameters


algorithm: string = "umap"

Algorithm. The name of a supported dimensionality reduction algorithm.

Must be one of: "umap"


encode_features: boolean = True

Preprocess features to normalize relative distances for quantitative and categorical variables.


weights: object | null

Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"column_name": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weights of the columns are important here, not their absolute values.

Items in weights

column_weight: number

A "column_name": numeric_weight pair. Each column name must refer to an existing column in the dataset.

Example parameter values:

  • {"date": 0.5, "age": 2}

type_weights: object | null

Weights used to multiply the normalized columns/features of each type after vectorization. Should be a dictionary/object of {"type": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weights of the types are important here, not their absolute values.

Items in type_weights

number: number

Weight for columns of type Number


datetime: number

Weight for columns of type Datetime


category: number

Weight for columns of type Category


ordinal: number

Weight for columns of type Ordinal


embedding: number

Weight for columns of type Embedding (List[Number]).


multilabel: number

Weight for columns of type Multilabel (List[Category]).
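
Example parameter values (illustrative only, giving categorical columns twice the weight of numeric ones):

  • {"category": 2, "number": 1}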


weights_max: number = 32

Maximum weight to scale the normalized columns with.

Range: 0 ≤ weights_max < inf


weights_exp: integer = 2

Weight exponent. Weights will be raised to this power before(!) scaling to weights_max. This allows for a non-linear mapping from input weights to those used eventually to multiply the normalized columns.
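
As a worked example, assuming (this is not stated explicitly above) that after exponentiation the weights are rescaled so that the largest one becomes weights_max:

weights = {"date": 0.5, "age": 2}
weights_exp = 2
weights_max = 32

powered = {k: w ** weights_exp for k, w in weights.items()}   # {"date": 0.25, "age": 4}
scale = weights_max / max(powered.values())                   # 32 / 4 = 8
scaled = {k: w * scale for k, w in powered.items()}           # {"date": 2.0, "age": 32.0}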


n_neighbors: integer = 100

Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure.

For further details see the UMAP documentation.

Range: 1 ≤ n_neighbors < inf


min_dist: number = 0.1

Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).

For further details see the UMAP documentation.

Range: 0 ≤ min_dist < inf


n_components: integer = 10

Dimensionality of the reduced data.

Range: 1 ≤ n_components < inf


metric: string = "euclidean"

Metric to use for measuring similarity between data points.

Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"


n_epochs: integer | null

Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If null is specified, a value will be selected automatically based on the size of the input dataset (200 for large datasets, 500 for small ones).


init: string = "spectral"

How to initialize the low-dimensional embedding. When "spectral", uses a spectral embedding; when "random", assigns initial embedding positions at random.

Must be one of: "spectral", "random"


low_memory: boolean = False

Avoid excessive memory use. For some datasets nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to true. This approach is more computationally expensive, but avoids excessive memory use.


target: string | null

Target variable (labels) for supervised dimensionality reduction. Name of the column that contains your target values (labels).


target_weight: number = 0.5

Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
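
These two parameters correspond to UMAP's supervised mode. Purely as a sketch of the underlying idea, using the umap-learn library directly with made-up features and labels (not this step's API):

import numpy as np
import umap  # from the umap-learn package

X = np.random.rand(500, 20)               # stand-in for the vectorized features
y = np.random.randint(0, 3, size=500)     # hypothetical target labels (3 classes)

reducer = umap.UMAP(n_components=10, target_weight=0.5, random_state=42)
embedding = reducer.fit_transform(X, y=y)  # labels guide the layout alongside the features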


random_state: integer | null = 42

A seed used to initialize the random number generator, making results reproducible across runs.