Create a vectorized (numeric) dataset, (optionally) of reduced dimensionality.
Many machine learning and AI algorithms expect their input data to be in pure numerical form, i.e. not containing
categorical variables, missing values etc. This step converts arbitrary datasets, potentially containing non-numerical
variables and NaNs, into this expected form. It does this by defining for each possible type of input column a transformation
from non-numeric to numeric values. As an example, ordered categorical variables (ordinals), such as the day of the week,
may be converted into a series of numbers (0..6). Non-ordered categorical variables of low cardinality (containing few
different categories) may be expanded into multiple new columns of 0s and 1s, indicating whether each row belongs to a
specific category or not. Similar transformations are applied to dates, multivalued categoricals etc.
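For example (an illustrative case, not a fixed rule), a low-cardinality colour column containing the values red, green and blue might be expanded into three indicator columns, one per colour, each holding a 1 where a row has that colour and a 0 otherwise.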
NaNs are imputed (replaced) with an appropriate value from the corresponding column (e.g. the median in a quantitative
column). In addition, a new column of 0s and 1s is added, indicating whether the original column had a missing value or not.
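For example, a missing value in a quantitative column would be replaced by that column's median, while the accompanying indicator column would contain a 1 for that row and a 0 for every row that was originally complete, so the fact that the value was missing is not lost.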
The resulting dataset will almost certainly not contain the same number of columns as the original (as the example of
categorical variables shows), and for simplicity, its columns will simply be numbered.
If desired, the n_components parameter may be used to select how many columns the new dataset should have. If
this is smaller than the number of columns that would result otherwise, a dimensionality reduction will be applied (UMAP
by default).
The resulting numerical representation of the original data points aims to preserve the structure of similarities.
I.e. if two original rows are similar to each other, then their (potentially reduced) numerical representations should
also be similar. Equally, two very different rows should have representations that are also very different.
Note, if you need the output as a column of embedding vectors, rather than a dataset, use embed_dataset
instead.
The following examples show how the step can be used in a recipe.
The following, simplest, example creates a new dataset containing a (potentially different) number of purely numeric columns, where each row corresponds to a row in the original dataset and aims to capture the same, or most of, its information.
vectorize_dataset(ds)->(ds_vec)
The following example will convert and reduce the input dataset to a purely numeric dataset of 10 columns. After normalization, the date column will be multiplied by 0.5 to reduce its weight relative to the others, while the column age will be given more importance. Also, 15 neighbours are considered for each data point in UMAP, giving more importance to the similarity between nearby points and less to the global structure of the data when calculating the numeric representation of the dataset.
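A possible configuration implementing this example is sketched below, assuming the relevant columns are named date and age; the weight of 2 for age is illustrative, and the parameter names n_components, weights and n_neighbors are those referred to elsewhere on this page.
vectorize_dataset(ds, {"n_components": 10, "weights": {"date": 0.5, "age": 2}, "n_neighbors": 15}) -> (ds_vec)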
General syntax for using the step in a recipe. Shows the inputs the step is expected to receive and the outputs it will produce. For further details see the sections below.
The following are the inputs expected by the step and the outputs it produces. These are generally
columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced
by name e.g. "churn-clf").
The following parameters can be used to configure the behaviour of the step by including them in
a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Toggle encoding of feature columns.
When enabled, Graphext will auto-convert any column types to the numeric type before
(optionally) reducing the data’s dimensionality. How this conversion is done can be
configured using the feature_encoder option below.
If disabled, the dimensionality reduction algorithm applied in this step will
assume that input data is already numerical and doesn’t contain any missing
values.
Configures encoding of feature columns.
By default (null), Graphext chooses automatically how to convert any column types the model
may not understand natively to a numeric type.
A configuration object can be passed instead to override specific parameters, with any remaining parameters keeping
their default values.
Category encoder.
May contain either a single configuration for all categorical variables, or two different configurations
for low- and high-cardinality variables. For further details pick one of the two options below.
Maximum number of unique categories to encode.
Only the N-1 most common categories will be encoded, and the rest will be grouped into a single
“Others” category.
Maximum number of unique categories to encode.
Only the N-1 most common categories will be encoded, and the rest will be grouped into a single
“Others” category.
Condition for application of low- or high-cardinality configuration.
Number of unique categories below which the low_cardinality configuration is used,
and above which the high_cardinality configuration is used.
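For example, with a threshold of 20, a weekday column with 7 unique values would be encoded using the low_cardinality configuration, while a country column with 150 unique values would use the high_cardinality configuration.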
Maximum number of unique categories to encode.
Only the N-1 most common categories will be encoded, and the rest will be grouped into a single
“Others” category.
Maximum number of unique categories to encode.
Only the N-1 most common categories will be encoded, and the rest will be grouped into a single
“Others” category.
Multilabel encoder.
Configures encoding of multivalued categorical features (variable length lists of categories,
or the semantic type list[category] for short). May contain either a single configuration for
all multilabel variables, or two different configurations for low- and high-cardinality variables.
For further details pick one of the two options below.
Maximum number of categories/labels to encode.
If a number is provided, the result of the encoding will be reduced to this many dimensions (columns)
using scikit-learn’s truncated SVD.
When applied together with (after a) Tf-Idf encoding, this performs a kind of
latent semantic analysis.
Maximum number of categories/labels to encode.
If a number is provided, the result of the encoding will be reduced to this many dimensions (columns)
using scikit-learn’s truncated SVD.
When applied together with (after a) Tf-Idf encoding, this performs a kind of
latent semantic analysis.
Condition for application of low- or high-cardinality configuration.
Number of unique categories below which the low_cardinality configuration is used,
and above which the high_cardinality configuration is used.
Maximum number of categories/labels to encode.
If a number is provided, the result of the encoding will be reduced to this many dimensions (columns)
using scikit-learn’s truncated SVD.
When applied together with (after a) Tf-Idf encoding, this performs a kind of
latent semantic analysis.
Maximum number of categories/labels to encode.
If a number is provided, the result of the encoding will be reduced to this many dimensions (columns)
using scikit-learn’s truncated SVD.
When applied together with (after a) Tf-Idf encoding, this performs a kind of
latent semantic analysis.
A list of cyclical time features to extract.
“Cycles” are numerical transformations of features that should be represented on a circle. E.g. months,
ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on
opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two
columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
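For example, a month value m between 1 and 12 might be mapped to the pair sin(2π·m/12) and cos(2π·m/12), so that December (12) and January (1) end up next to each other on the circle rather than at opposite ends of a linear scale.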
Embedding/vector encoder.
Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number] for short).
Text encoder.
Configures encoding of text (natural language) features. Currently only allows
Tf-Idf embeddings to represent texts. If you wish
to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using
another step, and then use that result instead of the original texts.
Texts are excluded by default from the overall encoding of the dataset. See the parameter
include_text_features below to activate it.
Parameters to be passed to the text encoder (Tf-Idf parameters only for now).
See scikit-learn’s documentation
for detailed parameters and their explanation.
How many output features to generate.
The resulting Tf-Idf vectors will be reduced to this many dimensions (columns) using scikit-learn’s
truncated SVD.
This performs a kind of latent semantic analysis.
By default we will reduce to 200 components.
Whether to include or ignore text columns during the processing of input data.
Enabling this will convert texts to their Tf-Idf representation. Each text will be
converted to an N-dimensional vector in which each component measures the
“over-representation” of a specific word (or n-gram) relative to its overall
frequency in the whole dataset. This is disabled by default because it will
often be better to convert texts explicitly using a previous step, such as
embed_text or embed_text_with_model.
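A minimal sketch of enabling this option in a recipe, leaving all other parameters at their defaults:
vectorize_dataset(ds, {"include_text_features": true}) -> (ds_vec)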
Weights used to multiply the normalized columns/features after vectorization.
Should be a dictionary/object of {"column_name": weight, ...} items. Weights will be scaled using the
parameters weights_max and weights_exp before being applied, so only the relative weight of
the columns is important here, not their absolute values.
Weights used to multiply the normalized columns/features after vectorization.
Should be a dictionary/object of {"type": weight, ...} items. Weights will be scaled using the parameters
weights_max and weights_exp before being applied, so only the relative weight of the columns
is important here, not their absolute values.
Weight exponent.
Weights will be raised to this power before(!) scaling to weights_max. This allows for a non-linear
mapping from input weights to those used eventually to multiply the normalized columns.
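For example, with a weights_exp of 2, input weights of 1, 2 and 3 would become 1, 4 and 9 before being scaled to weights_max, amplifying the differences between columns; an exponent between 0 and 1 would instead compress them.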
Minimum distance between reduced data points.
Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will
lead to points more tightly packed together (potentially useful if result is used to cluster the points).
Larger values will distribute points with more space between them (which may be desirable for visualization,
or to focus more on the global structure of the data).
Number of training iterations used in optimizing the embedding.
Larger values result in more accurate embeddings. If null is specified a value will be
selected based on the size of the input dataset (200 for large datasets, 500 for small).
How to initialize the low dimensional embedding.
When “spectral”, uses a (relatively expensive) spectral embedding. “pca” uses the first n_components
from a principal component analysis. “tswspectral” is a cheaper alternative to “spectral”. When “random”,
assigns initial embedding positions at random. This uses the least amount of memory and time but may make UMAP
slower to converge on the optimal embedding. “auto” selects between “spectral” and “random” automatically
depending on the size of the dataset.
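A sketch of how the embedding might be tuned in a recipe, assuming these options are exposed under their usual UMAP names init and min_dist:
vectorize_dataset(ds, {"init": "pca", "min_dist": 0.1}) -> (ds_vec)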
Avoid excessive memory use.
For some datasets nearest neighbor computations can consume a lot of memory. If you find
the step is failing due to memory constraints, consider setting this option to true.
This approach is more computationally expensive, but avoids excessive memory use. Setting
it to “auto” will enable this mode automatically depending on the size of the dataset.
Weighting factor between features and target.
A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The
default of 0.5 balances the weighting equally between data and target.
Try to better preserve local densities in the data.
Specifies whether the density-augmented objective of densMAP should be used for optimization.
Turning on this option generates an embedding where the local densities are encouraged to be
correlated with those in the original space.
Strength of local density preservation.
Controls the regularization weight of the density correlation term in densMAP. Higher values
prioritize density preservation over the UMAP objective, and vice versa for values closer to zero.
Setting this parameter to zero is equivalent to running the original UMAP algorithm.
Drop duplicate rows before embedding.
If you have more duplicate rows than n_neighbors, identical data points can end up lying
in different regions of the embedded space, which also violates the definition of a metric. This option will
remove duplicates before embedding, and then map the original data points back to the reduced space. Duplicate
data points will be placed in the exact same location as the points they duplicate.
Scaling factor for the coordinates.
The maximum (normalized) coordinates in positive and negative X and Y directions. Acts like a zoom, with
a scale of 1 corresponding to zooming out to the maximum (maximal space between nodes), and 0 to the densest
layout.
If set to "auto", will try to determine an appropriate scale taking into account the number of nodes.
If set to null, the calculated coordinates are only adjusted to ensure they’re within the allowed limits (16.000).