Cluster dataset¶
UMAP • HDBSCAN
Identify clusters in the dataset.
Applies a clustering algorithm after vectorizing the input dataset (converting its columns to numeric values with no missing data), and optionally reducing its dimensionality.
Essentially applies the separate step vectorize_dataset, followed by a clustering algorithm (HDBSCAN by default). The result is a column of cluster IDs.
For further detail on HDBSCAN's parameters see its documentation (the usage guide and the API reference).
Usage¶
The following are the step's expected inputs and outputs and their specific types.
cluster_dataset(ds: dataset, {"param": value}) -> (cluster: column)
where the object {"param": value} is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.
Example¶
The following configuration applies clustering with the default values:
cluster_dataset(ds, {
    "algorithm": "hdbscan",
    "min_cluster_size": 120,
    "min_samples": 15,
    "reduce": {
        "weights": None,
        "weights_max": 32,
        "weights_exp": 2,
        "algorithm": "umap",
        "n_components": 10,
        "n_neighbors": 100,
        "min_dist": 0,
        "random_state": 42
    }
}) -> (ds.cluster)
Inputs¶
ds: dataset
An arbitrary input dataset.
Outputs¶
cluster: column
Column containing cluster tags.
Parameters¶
For further detail on HDBSCAN's parameters see its documentation (the usage guide and the API reference).
metric: string = "euclidean"
The metric used to calculate similarity between data points.
Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"
algorithm: string = "hdbscan"
Algorithm to use. The name of a supported clustering algorithm (currently allows "hdbscan" only).
Must be one of: "hdbscan"
min_cluster_size: integer = 120
Minimum cluster size. The minimum size for considering a region of dense data points a proper cluster.
Range: 1 ≤ min_cluster_size < inf
min_samples: integer = 15
The larger the value, the more conservative the clustering. More points will be declared as noise, and clusters will be restricted to progressively more dense areas.
Range: 1 ≤ min_samples < inf
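For instance, to obtain more and smaller clusters than the defaults produce, both parameters can be lowered. A minimal sketch (the values are illustrative only, not recommendations):
cluster_dataset(ds, {
    "min_cluster_size": 30,
    "min_samples": 5
}) -> (ds.cluster)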
reduce: object | null
UMAP configuration. Parameters for dimensionality reduction; see the UMAP documentation for further detail.
Items in reduce
algorithm: string = "umap"
Algorithm. The name of a supported dimensionality reduction algorithm.
Must be one of: "umap"
encode_features: boolean = True
Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before (optionally) reducing the data's dimensionality. How this conversion is done can be configured using the feature_encoder option below.
Warning
If disabled, the dimensionality reduction algorithm applied in this step will assume that input data is already numerical and doesn't contain any missing values.
feature_encoder: null | object
Configures encoding of feature columns. By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type.
A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
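As a sketch of such a partial override (assuming all unspecified options keep their defaults; the chosen imputer and scaler are illustrative only), the numeric encoder could be configured inside reduce like this:
cluster_dataset(ds, {
    "reduce": {
        "feature_encoder": {
            "number": {"imputer": "Median", "scaler": "Standard"}
        }
    }
}) -> (ds.cluster)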
Items in feature_encoder
number: object
Numeric encoder. Configures encoding of numeric features.
Items in number
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
scaler_params: object
Further parameters passed to the scaler function. Details depend on the particular scaler used.
Items in scaler_params
bool: object
Boolean encoder. Configures encoding of boolean features.
Items in bool
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
ordinal: object
Ordinal encoder. Configures encoding of categorical features that have a natural order.
Items in ordinal
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
category: object | object
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
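For illustration of the two-configuration form (all values below are hypothetical), the category entry of feature_encoder might look like:
"category": {
    "cardinality_treshold": 20,
    "low_cardinality": {"encoder": "OneHot", "imputer": "MostFrequent"},
    "high_cardinality": {"encoder": "Binary", "max_categories": 50}
}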
Items in category
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality configuration is used, and above which the high_cardinality configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for categories with fewer than cardinality_threshold unique categories.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
high_cardinality: object
High cardinality configuration. Used for categories with more than cardinality_threshold unique categories.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
multilabel: object | object
Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type list[category] for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
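As an illustrative single-configuration example (values are hypothetical), the multilabel entry of feature_encoder could be:
"multilabel": {
    "encoder": "TfIdf",
    "max_categories": 100,
    "scaler": "Norm"
}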
Items in multilabel
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality configuration is used, and above which the high_cardinality configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for multilabel columns with fewer than cardinality_threshold unique categories/labels.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
high_cardinality: object
High cardinality configuration. Used for multilabel columns with more than cardinality_threshold unique categories/labels.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
datetime: object
Datetime encoder. Configures encoding of datetime (timestamp) features.
Items in datetime
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
components: array[string]
A list of numerical components to extract. Will create one numeric column for each component.
Items in components
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "minute", "month", "quarter", "season", "second", "week", "weekday", "weekofyear", "year"
cycles: array[string]
A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
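As a worked illustration (the exact scaling applied internally may differ): mapping months 1–12 to angles θ = 2π·(month − 1)/12 gives month 1 the pair (sin, cos) = (0, 1) and month 12 the pair (−0.5, 0.87), so December ends up as close to January in the encoded space as any other pair of consecutive months.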
Items in cycles
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "month"
epoch: null | boolean
Whether to include the epoch as a new feature (seconds since 01/01/1970).
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
component_scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
vector_scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
embedding: object
Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number] for short).
Items in embedding
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
text: object
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.
Warning
Texts are excluded by default from the overall encoding of the dataset. See the parameter include_text_features below to activate it.
Items in text
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder_params: object
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.
Items in encoder_params
n_components: integer
How many output features to generate. The resulting Tf-Idf vectors will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.
Range: 2 ≤ n_components ≤ 1024
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
include_text_features: boolean = False
Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their Tf-Idf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model.
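As a minimal sketch (assuming include_text_features sits alongside feature_encoder inside reduce, as the parameter listing here suggests, and that all other options keep their defaults), text columns can be included like this:
cluster_dataset(ds, {
    "reduce": {
        "include_text_features": True
    }
}) -> (ds.cluster)
The dimensionality of the resulting Tf-Idf vectors can then be controlled via the text encoder's encoder_params.n_components described above.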
weights: object | null
Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"column_name": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied. So only the relative weight of the columns is important here, not their absolute values.
Items in weights
column_weight: number
A "column_name": numeric_weight
pair. Each column name must refer to an existing column in the dataset.
Example parameter values:
{"date": 0.5, "age": 2}
type_weights: object | null
Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"type": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied. So only the relative weight of the columns is important here, not their absolute values.
Items in type_weights
number: number
Weight for columns of type Number
datetime: number
Weight for columns of type Datetime
category: number
Weight for columns of type Category
ordinal: number
Weight for columns of type Ordinal
embedding: number
Weight for columns of type Embedding (List[Number]).
multilabel: number
Weight for columns of type Multilabel (List[Category]).
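Example parameter values (illustrative only), giving categorical columns twice the influence of numeric ones:
{"category": 2, "number": 1}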
weights_max: number = 32
Maximum weight to scale the normalized columns with.
Range: 0 ≤ weights_max < inf
weights_exp: integer = 2
Weight exponent. Weights will be raised to this power before(!) scaling to weights_max. This allows for a non-linear mapping from input weights to those used eventually to multiply the normalized columns.
n_neighbors: integer = 100
Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure.
For further details see the UMAP documentation.
Range: 1 ≤ n_neighbors < inf
min_dist: number = 0.1
Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).
For further details see the UMAP documentation.
Range: 0 ≤ min_dist < inf
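As an illustrative sketch (the values are not recommendations), a reduce configuration emphasizing local structure and spreading points out, e.g. for visualization, might use:
"reduce": {
    "n_neighbors": 15,
    "min_dist": 0.5
}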
n_components: integer = 10
Dimensionality of the reduced data.
Range: 1 ≤ n_components < inf
metric: string = "euclidean"
Metric to use for measuring similarity between data points.
Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"
n_epochs: integer | null
Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If null is specified, a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).
init: string = "auto"
How to initialize the low dimensional embedding. When "spectral", uses a (relatively expensive) spectral embedding. When "random", assigns initial embedding positions at random. This uses less memory but may make UMAP slower to converge on the optimal embedding. "auto" selects between the two automatically depending on the size of the dataset.
Must be one of: "spectral", "random", "auto"
low_memory: boolean | string | null = "auto"
Avoid excessive memory use. For some datasets nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to true. This approach is more computationally expensive, but avoids excessive memory use. Setting it to "auto" will enable this mode automatically depending on the size of the dataset.
Must be one of: True, False, "auto", None
target: string | null
Target variable (labels) for supervised dimensionality reduction. Name of the column that contains your target values (labels).
target_weight: number = 0.5
Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
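For example (a sketch; "churned" is a hypothetical column name), a supervised reduction weighted slightly towards the target could be configured as:
"reduce": {
    "target": "churned",
    "target_weight": 0.7
}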
densmap: boolean = False
Try to better preserve local densities in the data. Specifies whether the density-augmented objective of densMAP should be used for optimization. Turning on this option generates an embedding where the local densities are encouraged to be correlated with those in the original space.
dens_lambda: number = 2.0
Strength of local density preservation. Controls the regularization weight of the density correlation term in densMAP. Higher values prioritize density preservation over the UMAP objective, and vice versa for values closer to zero. Setting this parameter to zero is equivalent to running the original UMAP algorithm.
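As an illustrative sketch, enabling densMAP with a weaker density term than the default could look like:
"reduce": {
    "densmap": True,
    "dens_lambda": 1.0
}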
random_state: integer | null = 42
A random number to initialize the algorithm for reproducibility.