Works like vectorize_dataset, but instead of converting the input dataset to a new dataset of N numeric columns, it creates a single column in the original dataset containing vectors (lists) of N components. In other words, the result of vectorize_dataset is converted to a column of embeddings, where each embedding is a numerical representation of the corresponding row in the original dataset.

Many machine learning and AI algorithms expect their input data in purely numerical form, i.e. free of categorical variables, missing values etc. This step converts arbitrary datasets, potentially containing non-numerical variables and NaNs, into this expected form. It does so by defining, for each possible type of input column, a transformation from non-numeric to numeric values. For example, ordered categorical variables (ordinals), such as the day of the week, may be converted into a series of numbers (0..6). Non-ordered categorical variables of low cardinality (containing few distinct categories) may be expanded into multiple new columns of 0s and 1s, indicating whether each row belongs to a specific category or not. Similar transformations are applied to dates, multivalued categoricals etc. NaNs are imputed (replaced) with an appropriate value from the corresponding column (e.g. the median in a quantitative column). In addition, a new component of 0s and 1s is added, indicating whether the original column had a missing value or not.

The resulting embeddings will almost certainly not contain the same number of components as the original dataset has columns (as the example of categorical variables shows). The n_components parameter controls how many components the embeddings should have, and if this is smaller than would result otherwise, a dimensionality reduction will be applied (UMAP by default).

The resulting numerical representation of the original data points aims to preserve the structure of similarities: if two original rows are similar to each other, then their (potentially reduced) numerical representations should also be similar. Equally, two very different rows should have representations that are very different from each other.

Usage

The following examples show how the step can be used in a recipe.

Examples

The following simplest example creates a new column of vector embeddings, each containing numeric components only, and hopefully capturing all or most of the information in the corresponding original row.
embed_dataset(ds) -> (ds.embedding)
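Configuration parameters (documented below under Configuration) can be passed as a final object argument. As an illustrative sketch, the following call would reduce each row to a 2-dimensional embedding using the cosine metric (the specific values are examples only):
embed_dataset(ds, {"n_components": 2, "metric": "cosine"}) -> (ds.embedding)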

Inputs & Outputs

The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name e.g. "churn-clf").
ds
dataset
required
An arbitrary input dataset.
embedding
column[list[number]]
required
A column containing embedding vectors (lists of numbers) numerically representing the corresponding rows in ds.
Two optional columns holding the links between nearest neighbours in the space of the created embeddings.

Configuration

The following parameters can be used to configure the behaviour of the step by including them in a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).

Parameters

algorithm
string
default:"umap"
Algorithm. The name of a supported dimensionality reduction algorithm. Values must be one of the following:
  • umap
encode_features
boolean
default:"true"
Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before (optionally) reducing the data’s dimensionality. How this conversion is done can be configured using the feature_encoder option below.
If disabled, the dimensionality reduction algorithm applied in this step will assume that input data is already numerical and doesn’t contain any missing values.
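For instance, if a previous step has already produced purely numerical, complete data, encoding might be skipped (a sketch; whether this is safe depends on your data):
embed_dataset(ds, {"encode_features": false}) -> (ds.embedding)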
feature_encoder
[null, object]
Configures encoding of feature columns. By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. A configuration object can be passed instead to overwrite specific parameter values with respect to their default values (see the example sketch after the encoder options below).
number
object
Numeric encoder. Configures encoding of numeric features.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • Mean
  • Median
  • MostFrequent
  • Const
  • None
scaler
[null, string]
Whether and how to scale the final numerical values (across a single column). Values must be one of the following:
  • Standard
  • Robust
  • KNN
  • None
scaler_params
object
Further parameters passed to the scaler function. Details depend on the particular scaler used.
bool
object
Boolean encoder. Configures encoding of boolean features.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • MostFrequent
  • Const
  • None
ordinal
object
Ordinal encoder. Configures encoding of categorical features that have a natural order.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • MostFrequent
  • Const
  • None
category
[object, object]
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
  • Simple category encoder
  • Conditional category encoder
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • MostFrequent
  • Const
  • None
max_categories
[null, integer]
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single “Others” category. Values must be in the following range:
1 ≤ max_categories < inf
encoder
[null, string]
How to encode categories. Values must be one of the following:
  • OneHot
  • Label
  • Ordinal
  • Binary
  • Frequency
  • None
scaler
[null, string]
Whether and how to scale the final numerical values (across a single column). Values must be one of the following:
  • Standard
  • Robust
  • KNN
  • None
multilabel
[object, object]
Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type list[category] for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
  • Simple multilabel encoder
  • Conditional multilabel encoder
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder
[null, string]
How to encode categories/labels in multilabel (list[category]) columns. Values must be one of the following:
  • Binarizer
  • TfIdf
  • None
max_categories
[null, integer]
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to that many dimensions (columns) using scikit-learn’s truncated SVD. When applied together with (after) a Tf-Idf encoding, this performs a kind of latent semantic analysis. Values must be in the following range:
2 ≤ max_categories < inf
scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
datetime
object
Datetime encoder. Configures encoding of datetime (timestamp) features.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
components
array[string]
A list of numerical components to extract. Will create one numeric column for each component.
Item
string
Each item in array. Values must be one of the following:
  • day
  • dayofweek
  • dayofyear
  • hour
  • minute
  • month
  • quarter
  • season
  • second
  • week
  • weekday
  • weekofyear
  • year
cycles
array[string]
A list of cyclical time features to extract. “Cycles” are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
Item
string
Each item in array. Values must be one of the following:
  • day
  • dayofweek
  • dayofyear
  • hour
  • month
epoch
[null, boolean]
Whether to include the epoch as a new feature (seconds since 01/01/1970).
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • Mean
  • Median
  • MostFrequent
  • Const
  • None
component_scaler
[null, string]
Whether and how to scale the final numerical values (across a single column). Values must be one of the following:
  • Standard
  • Robust
  • KNN
  • None
vector_scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
embedding
object
Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number] for short).
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
text
object
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.
Texts are excluded by default from the overall encoding of the dataset. See the parameter include_text_features below to activate it.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder_params
object
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn’s documentation for detailed parameters and their explanation.
n_components
integer
How many output features to generate. The resulting Tf-Idf vectors will be reduced to that many dimensions (columns) using scikit-learn’s truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components. Values must be in the following range:
2 ≤ n_components ≤ 1024
scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
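As an example of overriding the defaults above, the following sketch configures the numeric and category encoders explicitly (values illustrative only; parameter names as documented above):
embed_dataset(ds, {"feature_encoder": {"number": {"imputer": "Median", "scaler": "Standard"}, "category": {"encoder": "OneHot", "max_categories": 10}}}) -> (ds.embedding)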
include_text_features
boolean
default:"false"
Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their Tf-Idf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative “over-representation” of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model.
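To nevertheless include text columns via their Tf-Idf representation, one might enable the option like this (sketch):
embed_dataset(ds, {"include_text_features": true}) -> (ds.embedding)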
weights
[object, null]
Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"column_name": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weights of the columns matter here, not their absolute values.
column_weight
number
A "column_name": numeric_weight pair. Each column name must refer to an existing column in the dataset.
  • {"date": 0.5, "age": 2}
type_weights
[object, null]
Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"type": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weights of the columns matter here, not their absolute values.
number
number
Weight for columns of type Number
datetime
number
Weight for columns of type Datetime
category
number
Weight for columns of type Category
ordinal
number
Weight for columns of type Ordinal
embedding
number
Weight for columns of type Embedding (List[Number]).
multilabel
number
Weight for columns of type Multilabel (List[Category]).
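For example, to emphasize categorical over numerical columns one might pass (illustrative weights only):
embed_dataset(ds, {"type_weights": {"category": 2, "number": 1}}) -> (ds.embedding)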
weights_max
number
default:"32"
Maximum weight to scale the normalized columns with. Values must be in the following range:
0 ≤ weights_max < inf
weights_exp
integer
default:"2"
Weight exponent. Weights will be raised to this power before(!) scaling to weights_max. This allows for a non-linear mapping from input weights to those used eventually to multiply the normalized columns.
n_neighbors
integer
default:"100"
Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure. For further details see the UMAP documentation. Values must be in the following range:
1 ≤ n_neighbors < inf
min_dist
number
default:"0.1"
Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data). For further details see the UMAP documentation. Values must be in the following range:
0 ≤ min_dist < inf
n_components
integer
default:"10"
Dimensionality of the reduced data. Values must be in the following range:
1 ≤ n_components < inf
metric
string
default:"euclidean"
Metric to use for measuring similarity between data points. Values must be one of the following:
  • euclidean
  • manhattan
  • chebyshev
  • minkowski
  • canberra
  • braycurtis
  • haversine
  • mahalanobis
  • wminkowski
  • seuclidean
  • cosine
  • correlation
  • hamming
  • jaccard
  • dice
  • russellrao
  • kulsinski
  • rogerstanimoto
  • sokalmichener
  • sokalsneath
  • yule
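Tuning the UMAP-related parameters together might look like this sketch (values illustrative; see the parameter descriptions above and below):
embed_dataset(ds, {"n_neighbors": 15, "min_dist": 0.0, "n_components": 2, "metric": "cosine"}) -> (ds.embedding)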
n_epochs
[integer, null]
Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If null is specified, a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).
init
string
default:"auto"
How to initialize the low dimensional embedding. “spectral” uses a (relatively expensive) spectral embedding. “pca” uses the first n_components from a principal component analysis. “tswspectral” is a cheaper alternative to “spectral”. “random” assigns initial embedding positions at random; this uses the least amount of memory and time but may make UMAP slower to converge on the optimal embedding. “auto” selects between “spectral” and “random” automatically depending on the size of the dataset. Values must be one of the following:
  • spectral
  • pca
  • tswspectral
  • random
  • auto
low_memory
[boolean, string, null]
default:"auto"
Avoid excessive memory use. For some datasets, nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to true. This approach is more computationally expensive, but avoids excessive memory use. Setting it to “auto” will enable this mode automatically depending on the size of the dataset. Values must be one of the following:
  • True
  • False
  • auto
  • None
target
[string, null]
Target variable (labels) for supervised dimensionality reduction. Name of the column that contains your target values (labels).
target_weight
number
default:"0.5"
Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
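A sketch of supervised reduction, assuming a hypothetical label column named "churn":
embed_dataset(ds, {"target": "churn", "target_weight": 0.5}) -> (ds.embedding)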
densmap
boolean
default:"false"
Try to better preserve local densities in the data. Specifies whether the density-augmented objective of densMAP should be used for optimization. Turning on this option generates an embedding where the local densities are encouraged to be correlated with those in the original space.
dens_lambda
number
default:"2.0"
Strength of local density preservation. Controls the regularization weight of the density correlation term in densMAP. Higher values prioritize density preservation over the UMAP objective, and vice versa for values closer to zero. Setting this parameter to zero is equivalent to running the original UMAP algorithm.
unique
boolean
default:"false"
Drop duplicate rows before embedding. If you have more duplicates than you have n_neighbors, you can have identical data points lying in different regions of your space, which also violates the definition of a metric. This option will remove duplicates before embedding, and then map the original data points back to the reduced space. Duplicate data points will be placed in the exact same location as the original data points.
random_state
[integer, null]
default:"42"
A random number to initialize the algorithm for reproducibility.
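For reproducible results with duplicate rows removed, one might combine these options (sketch; values illustrative only):
embed_dataset(ds, {"unique": true, "random_state": 42}) -> (ds.embedding)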
links_top_n
integer
Maintain links for the n nearest neighbours only in the graph. Values must be in the following range:
1 ≤ links_top_n ≤ 15