
Cluster embeddings

HDBSCAN

Identify clusters using the distance between provided embeddings.

Equivalent to cluster_dataset, but expects a column of embeddings as input instead of a dataset. The input may, for example, be word2vec embeddings from an embed_text step, or whole-dataset embeddings from an embed_dataset step.

Optionally reduces the dimensionality of the embeddings (by default using UMAP). This may make the data denser (counteracting the "curse of dimensionality"), and thus potentially make it easier to identify clusters.

The clustering algorithm used by default is HDBSCAN, which produces a column of non-negative cluster IDs, or -1 if a data point is considered noise (not belonging to any cluster).

For further detail on HDBSCAN's parameters, see its documentation (usage guide and API reference).

Example

The following configuration applies clustering with the default values:

cluster_embeddings(ds.embedding, {
  "algorithm": "hdbscan",
  "min_cluster_size": 120,
  "min_samples": 15,
  "reduce": {
    "weights": null,
    "weights_max": 32,
    "weights_exp": 2,
    "algorithm": "umap",
    "n_components": 10,
    "n_neighbors": 100,
    "min_dist": 0,
    "random_state": 42
  }
}) -> (ds.cluster)

Usage

The following are the step's expected inputs and outputs and their specific types.

cluster_embeddings(embeddings: list[number], {"param": value}) -> (cluster: category)

where the {"param": value} object is optional in most cases; if present, it may contain any of the parameters described in the corresponding section below.
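For example, assuming the embeddings live in a column named "embedding" (an illustrative name, e.g. the output of an embed_text step), the step can be applied with all default parameters by omitting the configuration object:

cluster_embeddings(ds.embedding) -> (ds.cluster)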

Inputs


embeddings: column:list[number]

Outputs


cluster: column:category

Column containing cluster tags.

Parameters


For further detail on HDBSCAN's parameters, see its documentation (usage guide and API reference).


metric: string = "euclidean"

The metric used to calculate similarity between data points.

Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"
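Euclidean distance is the default. For high-dimensional text embeddings, cosine distance is a common alternative; assuming that choice (and the illustrative column name "embedding"), only this parameter needs to be overridden:

cluster_embeddings(ds.embedding, {
  "metric": "cosine"
}) -> (ds.cluster)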


algorithm: string = "hdbscan"

Algorithm to use. The name of a supported clustering algorithm (currently allows "hdbscan" only).

Must be one of: "hdbscan"


min_cluster_size: integer = 120

Minimum cluster size. The minimum size for considering a region of dense data points a proper cluster.

Range: 1 ≤ min_cluster_size < inf


min_samples: integer = 15

The larger the value, the more conservative the clustering. More points will be declared as noise, and clusters will be restricted to progressively more dense areas.

Range: 1 ≤ min_samples < inf
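Together, min_cluster_size and min_samples control the granularity of the clustering. As a sketch (the exact values depend on the dataset), lowering both relative to the defaults yields smaller, more fine-grained clusters and declares fewer points as noise:

cluster_embeddings(ds.embedding, {
  "min_cluster_size": 30,
  "min_samples": 5
}) -> (ds.cluster)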


reduce: object | null

UMAP configuration. Parameters for dimensionality reduction; see the UMAP documentation for further detail.

Items in reduce

algorithm: string = "umap"

Algorithm. The name of a supported dimensionality reduction algorithm.

Must be one of: "umap"


encode_features: boolean = True

Preprocess features to normalize relative distances for quantitative and categorical variables.


weights: object | null

Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"column_name": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weight of the columns is important here, not their absolute values (see the example after the type_weights items below).

Items in weights

column_weight: number

A "column_name": numeric_weight pair. Each column name must refer to an existing column in the dataset.

Example parameter values:

  • {"date": 0.5, "age": 2}

type_weights: object | null

Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"type": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weight of the columns is important here, not their absolute values. An example showing where these weights go is given after the list of types below.

Items in type_weights

number: number

Weight for columns of type Number


datetime: number

Weight for columns of type Datetime


category: number

Weight for columns of type Category


ordinal: number

Weight for columns of type Ordinal


embedding: number

Weight for columns of type Embedding (List[Number]).


multilabel: number

Weight for columns of type Multilabel (List[Category]).
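As a sketch of how both kinds of weights nest inside the reduce object (the column names and values are purely illustrative):

cluster_embeddings(ds.embedding, {
  "reduce": {
    "algorithm": "umap",
    "weights": {"date": 0.5, "age": 2},
    "type_weights": {"embedding": 4, "category": 1}
  }
}) -> (ds.cluster)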


weights_max: number = 32

Maximum weight to scale the normalized columns with.

Range: 0 ≤ weights_max < inf


weights_exp: integer = 2

Weight exponent. Weights will be raised to this power before(!) scaling to weights_max. This allows for a non-linear mapping from input weights to those used eventually to multiply the normalized columns.
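As a sketch of the scaling (assuming a simple linear rescaling such that the largest weight ends up equal to weights_max): input weights of 1 and 2 with weights_exp = 2 first become 1 and 4, and with weights_max = 32 are then scaled to 8 and 32.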


n_neighbors: integer = 100

Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure.

For further details see the UMAP documentation.

Range: 1 ≤ n_neighbors < inf


min_dist: number = 0.1

Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).

For further details see the UMAP documentation.

Range: 0 ≤ min_dist < inf
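As an illustration, a reduce configuration geared towards clustering (rather than 2-D visualization) keeps min_dist at 0 and reduces to a moderate number of dimensions, as in the example at the top of this page:

cluster_embeddings(ds.embedding, {
  "reduce": {
    "algorithm": "umap",
    "n_neighbors": 100,
    "min_dist": 0,
    "n_components": 10
  }
}) -> (ds.cluster)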


n_components: integer = 10

Dimensionality of the reduced data.

Range: 1 ≤ n_components < inf


metric: string = "euclidean"

Metric to use for measuring similarity between data points.

Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"


n_epochs: integer | null

Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If null is specified, a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).


init: string = "spectral"

How to initialize the low dimensional embedding. When "spectral", uses a spectral embedding; when "random", assigns initial embedding positions at random.

Must be one of: "spectral", "random"


low_memory: boolean = False

Avoid excessive memory use. For some datasets, nearest neighbor computations can consume a lot of memory. If the step fails due to memory constraints, consider setting this option to true; this is more computationally expensive, but avoids excessive memory use.


target: string | null

Target variable (labels) for supervised dimensionality reduction. Name of the column that contains your target values (labels).


target_weight: number = 0.5

Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
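For example, assuming the dataset contains a label column named "label" (an illustrative name), supervised dimensionality reduction could be configured as follows:

cluster_embeddings(ds.embedding, {
  "reduce": {
    "algorithm": "umap",
    "target": "label",
    "target_weight": 0.5
  }
}) -> (ds.cluster)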


random_state: integer | null = 42

A random seed used to initialize the algorithm, for reproducibility.