Train and use a machine learning model to predict (impute) the missing values in a column.
Non-missing values in the target column will be used to train a prediction model (a CatBoost regressor or classifier), which then predicts (imputes) the missing values. Only simple numerical or categorical input data can be imputed.
The following are the inputs expected by the step and the outputs it produces. These are generally
columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced
by name e.g. "churn-clf").
The following parameters can be used to configure the behaviour of the step by including them in
a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Name of the column to impute.
The step will predict the missing values in this column, using the rows of the dataset where the value is present to train the prediction model.
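For illustration only, a minimal invocation might look like the following, where the step name, the output column and the "target" parameter key are placeholders assumed for this sketch rather than fixed by this reference:

    impute(ds, {"target": "age"}) -> (ds.age_imputed)

Rows of ds in which "age" is present would be used to train the model, which then fills in the rows where it is missing.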
The method for processing missing values in the input dataset.
Possible values:
“Forbidden”:
Missing values are not supported; their presence is interpreted as an error.
“Min”:
Missing values are processed as the minimum value (less than all other values) for the feature.
It is guaranteed that a split that separates missing values from all other values is considered
when selecting trees.
“Max”:
Missing values are processed as the maximum value (greater than all other values) for the feature.
It is guaranteed that a split that separates missing values from all other values is considered when
selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.
Number of Iterations.
The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Allow Writing Files.
Whether to allow writing analytical and snapshot files during training. If set to “false”, the snapshot and data visualization tools are unavailable.
One Hot Max Size.
Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. CTRs (CatBoost's categorical feature statistics) are not calculated for such features.
The maximum number of features that can be combined.
Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
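As a sketch, the model parameters described above could be passed together in the step's configuration object. The key names below follow CatBoost's own naming (nan_mode, iterations, allow_writing_files, one_hot_max_size, max_ctr_complexity); whether the step expects exactly these keys is an assumption, and the values are arbitrary examples:

    {
        "nan_mode": "Min",
        "iterations": 500,
        "allow_writing_files": false,
        "one_hot_max_size": 10,
        "max_ctr_complexity": 2
    }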
Toggle encoding of feature columns.
When enabled, Graphext will automatically convert any column types the model cannot handle natively
into numeric types before fitting the model. How this conversion is done can be configured using
the feature_encoder option below.
If disabled, any model trained in this step will assume that input data
is already in an appropriate format (e.g. numerical and not containing any missing values).
Configures encoding of feature columns.
By default (null), Graphext chooses automatically how to convert any column types the model
may not understand natively to a numeric type.
A configuration object can be passed instead to override specific parameters while keeping the
default values for the rest.
Category encoder.
May contain either a single configuration for all categorical variables, or two different configurations
for low- and high-cardinality variables. For further details pick one of the two options below.
Maximum number of unique categories to encode.
Only the N-1 most common categories will be encoded, and the rest will be grouped into a single
“Others” category.
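For example, a custom category encoder limiting the number of encoded categories might be configured roughly as follows. The feature_encoder key is named above; the encode_features, category and max_categories keys are assumptions made for this sketch, and the value is an arbitrary example:

    {
        "encode_features": true,
        "feature_encoder": {
            "category": {"max_categories": 20}
        }
    }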
Multilabel encoder.
Configures encoding of multivalued categorical features (variable length lists of categories,
or the semantic type list[category] for short). May contain either a single configuration for
all multilabel variables, or two different configurations for low- and high-cardinality variables.
For further details pick one of the two options below.
Maximum number of categories/labels to encode.
If a number is provided, the result of the encoding will be reduced to this many dimensions (columns)
using scikit-learn’s truncated SVD.
When applied together with (after a) Tf-Idf encoding, this performs a kind of
latent semantic analysis.
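Analogously, a multilabel encoder capped at a given number of labels/dimensions might look like the sketch below; the multilabel and max_categories key names are assumptions based on the descriptions above, and the value is arbitrary:

    {
        "feature_encoder": {
            "multilabel": {"max_categories": 50}
        }
    }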
A list of cyclical time features to extract.
“Cycles” are numerical transformations of features that should be represented on a circle. E.g. months,
ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on
opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two
columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
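For example, if months (1–12) are declared as a cycle, a natural scaling (the exact formula used internally is not specified here) maps month m to the pair sin(2πm/12) and cos(2πm/12): month 3 becomes approximately (1.0, 0.0), and months 12 and 1 end up as close to each other on the circle as any other pair of adjacent months.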
Embedding/vector encoder.
Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number] for short).
Text encoder.
Configures encoding of text (natural language) features. Currently only allows
Tf-Idf embeddings to represent texts. If you wish
to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using
another step, and then use that result instead of the original texts.
Texts are excluded by default from the overall encoding of the dataset. See parameter
include_text_features below to activate it.
Parameters to be passed to the text encoder (Tf-Idf parameters only for now).
See scikit-learn’s documentation
for detailed parameters and their explanation.
How many output features to generate.
The resulting Tf-Idf vectors will be reduced to this many dimensions (columns) using scikit-learn’s
truncated SVD.
This performs a kind of latent semantic analysis.
By default we will reduce to 200 components.
Whether to include or ignore text columns during the processing of input data.
Enabling this will convert texts to their Tf-Idf representation. Each text will be
converted to an N-dimensional vector in which each component measures the relative
“over-representation” of a specific word (or n-gram) relative to its overall
frequency in the whole dataset. This is disabled by default because it will
often be better to convert texts explicitly using a previous step, such as
embed_text or embed_text_with_model.
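Putting the text-related options together, a hedged sketch of the relevant configuration could look like this (include_text_features is named above; the text_encoder, encoder_params and n_components keys are assumptions, and the nested Tf-Idf settings are standard scikit-learn TfidfVectorizer parameters used purely as examples):

    {
        "include_text_features": true,
        "text_encoder": {
            "encoder_params": {"ngram_range": [1, 2], "min_df": 5},
            "n_components": 200
        }
    }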
Configure model validation.
Allows evaluation of model performance via cross-validation with custom metrics. If not
specified, will by default perform 5-fold cross-validation with automatically selected
metrics.
Number of train-test splits to evaluate the model on.
Will split the dataset into training and test set n_splits times, train on the former
and evaluate on the latter using specified or automatically selected metrics.
What proportion of the data to use for testing in each split.
If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts; in each of the five iterations, four parts
are then used for training and the remaining part for testing. If test_size is a number between 0 and 1,
in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into
n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal
to test_size to use for evaluation and the remaining rows for training.
One or more metrics/scoring functions to evaluate the model with.
When none is provided, will measure default metrics appropriate for the prediction task
(classification vs. regression, determined from the model or the type of the target column). See
sklearn model evaluation
for further details.
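As an example, a shuffle-split validation with explicit metrics might be configured as below; n_splits and test_size are named above, while the validate and metrics keys are assumptions, and the metric names are standard scikit-learn scorer strings suitable for a regression target:

    {
        "validate": {
            "n_splits": 5,
            "test_size": 0.2,
            "metrics": ["r2", "neg_mean_absolute_error"]
        }
    }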
The method for processing missing values in the input dataset.
Possible values:
“Forbidden”:
Missing values are not supported; their presence is interpreted as an error.
“Min”:
Missing values are processed as the minimum value (less than all other values) for the feature.
It is guaranteed that a split that separates missing values from all other values is considered
when selecting trees.
“Max”:
Missing values are processed as the maximum value (greater than all other values) for the feature.
It is guaranteed that a split that separates missing values from all other values is considered when
selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.
Number of Iterations.
The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Allow Writing Files.
Whether to allow writing analytical and snapshot files during training. If set to “false”, the snapshot and data visualization tools are unavailable.
One Hot Max Size.
Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. CTRs (CatBoost's categorical feature statistics) are not calculated for such features.
The maximum number of features that can be combined.
Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
Which search strategy to use for optimization.
Grid search explores all possible combinations of the parameters specified in params.
Randomized search, on the other hand, randomly samples parameter combinations from the
distributions specified in params, evaluating as many combinations as given by iterations.
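For illustration, a randomized search over two CatBoost parameters might be configured roughly as follows; params and iterations are named in the description above, while the tune and strategy keys and the "random" value are assumptions made for this sketch (learning_rate and depth are standard CatBoost parameters):

    {
        "tune": {
            "strategy": "random",
            "iterations": 20,
            "params": {
                "learning_rate": [0.01, 0.05, 0.1],
                "depth": [4, 6, 8]
            }
        }
    }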
Configure model validation.
Allows evaluation of model performance via cross-validation with custom metrics. If not
specified, will by default perform 5-fold cross-validation with automatically selected
metrics.
Number of train-test splits to evaluate the model on.
Will split the dataset into training and test set n_splits times, train on the former
and evaluate on the latter using specified or automatically selected metrics.
What proportion of the data to use for testing in each split.
If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts; in each of the five iterations, four parts
are then used for training and the remaining part for testing. If test_size is a number between 0 and 1,
in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into
n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal
to test_size to use for evaluation and the remaining rows for training.
One or more metrics/scoring functions to evaluate the model with.
When none is provided, will measure default metrics appropriate for the prediction task
(classification vs. regression, determined from the model or the type of the target column). See
sklearn model evaluation
for further details.
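Putting it all together, a complete call could look roughly like the sketch below, reusing the assumed step name and key names from the examples above; all values are arbitrary placeholders:

    impute(ds, {
        "target": "age",
        "one_hot_max_size": 10,
        "validate": {"n_splits": 5, "metrics": ["r2"]},
        "tune": {
            "strategy": "random",
            "iterations": 20,
            "params": {"learning_rate": [0.01, 0.05, 0.1]}
        }
    }) -> (ds.age_imputed)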