Non-missing values in a categorical target column will be used to train a prediction model (a CatBoost classifier), which then predicts (imputes) the missing values. The step produces two output columns: one containing predicted classes for all rows, and a second containing the probability of the predicted class in each row.

Usage

The following shows how the step can be used in a recipe.

Examples

  • Signature
General syntax for using the step in a recipe. Shows the inputs and outputs the step is expected to receive and produce, respectively. For further details see the sections below.
infer_missing_with_probs(ds: dataset, {
    "param": value,
    ...
}) -> (predicted: category, probs: number)
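
  • Example

For illustration, a minimal call could look as follows. This is a sketch: the dataset ds, the target column "industry" and the output column names are hypothetical placeholders.

infer_missing_with_probs(ds, {
    "target": "industry"
}) -> (ds.industry_predicted, ds.industry_probs)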

Inputs & Outputs

The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name e.g. "churn-clf").
ds
dataset
required
A dataset containing the column to be imputed as well as any other column to use as predictors in the model.
predicted
column[category]
required
A column containing the predicted classes for all rows.
probs
column[number]
required
Probability estimate (of the predicted class only).

Configuration

The following parameters can be used to configure the behaviour of the step by including them in a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).

Parameters

target
string (ds.column:category)
required
Name of the categorical column to impute. The step will predict the missing (and non-missing) class labels for this column, using rows in the dataset where the values are not missing to train the prediction model.
infer_all
boolean
default:"false"
Predict non-missing values. When set to true, all values are predicted, including those that are not missing. Leave set to false (the default) to keep the original values wherever they are not missing. An example combining this with threshold is shown below.
threshold
number
default:"0"
Confidence threshold. Every prediction with probability strictly below this threshold will be set to NaN (missing). Values must be in the following range:
0 ≤ threshold < 1
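
For example, to re-predict all values while discarding low-confidence predictions, one might combine infer_all and threshold as in the following sketch (column names and values are illustrative):

infer_missing_with_probs(ds, {
    "target": "industry",
    "infer_all": true,
    "threshold": 0.5
}) -> (ds.industry_predicted, ds.industry_probs)

Here any prediction with probability strictly below 0.5 would be set to missing in the result.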
params
object
CatBoost configuration parameters. See the official CatBoost documentation for more details about these parameters.
depth
integer
default:"6"
Depth of the tree. Values must be in the following range:
2 ≤ depth ≤ 16
nan_mode
string
default:"Min"
The method for processing missing values in the input dataset. Possible values:
  • “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
  • “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
  • “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree. Values must be one of the following:
  • Forbidden
  • Min
  • Max
iterations
[integer, null]
default:"1000"
Number of Iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter. Values must be in the following range:
1 ≤ iterations < inf
allow_writing_files
boolean
default:"false"
Allow Writing Files. Allow writing analytical and snapshot files during training. If set to “false”, the snapshot and data visualization tools are unavailable.
random_seed
integer
default:"0"
The random seed used for training.
one_hot_max_size
integer
default:"10"
One Hot Max Size. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Ctrs are not calculated for such features. Values must be in the following range:
2 ≤ one_hot_max_size < inf
max_ctr_complexity
number
default:"2"
The maximum number of features that can be combined. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”. Values must be in the following range:
1 ≤ max_ctr_complexity < inf
boosting_type
string
default:"Plain"
Boosting type. Boosting scheme. Possible values are:
  • Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
  • Plain: The classic gradient boosting scheme.
Values must be one of the following:
  • Ordered
  • Plain
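
As a sketch, custom CatBoost parameters are passed via the params object; the target column and the specific values below are illustrative only:

infer_missing_with_probs(ds, {
    "target": "industry",
    "params": {
        "depth": 8,
        "iterations": 500,
        "one_hot_max_size": 20,
        "random_seed": 42
    }
}) -> (ds.industry_predicted, ds.industry_probs)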
encode_features
boolean
default:"true"
Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the feature_encoder option below.
If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).
feature_encoder
[null, object]
Configures encoding of feature columns. By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
number
object
Numeric encoder. Configures encoding of numeric features.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • Mean
  • Median
  • MostFrequent
  • Const
  • None
scaler
[null, string]
Whether and how to scale the final numerical values (across a single column). Values must be one of the following:
  • Standard
  • Robust
  • KNN
  • None
scaler_params
object
Further parameters passed to the scaler function. Details depend on the particular scaler used.
bool
object
Boolean encoder. Configures encoding of boolean features.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • MostFrequent
  • Const
  • None
ordinal
object
Ordinal encoder. Configures encoding of categorical features that have a natural order.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • MostFrequent
  • Const
  • None
category
[object, object]
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
  • Simple category encoder
  • Conditional category encoder
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • MostFrequent
  • Const
  • None
max_categories
[null, integer]
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single “Others” category. Values must be in the following range:
1 ≤ max_categories < inf
encoder
[null, string]
How to encode categories. Values must be one of the following:
  • OneHot
  • Label
  • Ordinal
  • Binary
  • Frequency
  • None
scaler
[null, string]
Whether and how to scale the final numerical values (across a single column). Values must be one of the following:
  • Standard
  • Robust
  • KNN
  • None
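
For illustration, a simple category encoder configuration using the parameters documented above might look like the following sketch (the values are illustrative; this object would be nested inside feature_encoder as shown earlier):

"feature_encoder": {
    "category": {
        "encoder": "OneHot",
        "max_categories": 20,
        "imputer": "MostFrequent",
        "indicate_missing": true,
        "scaler": "Standard"
    }
}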
multilabel
[object, object]
Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type list[category] for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
  • Simple multilabel encoder
  • Conditional multilabel encoder
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder
[null, string]
How to encode categories/labels in multilabel (list[category]) columns. Values must be one of the following:
  • Binarizer
  • TfIdf
  • None
max_categories
[null, integer]
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to this many dimensions (columns) using scikit-learn’s truncated SVD. When applied together with (i.e. after) a Tf-Idf encoding, this performs a kind of latent semantic analysis. Values must be in the following range:
2 ≤ max_categories < inf
scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
datetime
object
Datetime encoder. Configures encoding of datetime (timestamp) features.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
components
array[string]
A list of numerical components to extract. Will create one numeric column for each component.
Item
string
Each item in array. Values must be one of the following:
  • day
  • dayofweek
  • dayofyear
  • hour
  • minute
  • month
  • quarter
  • season
  • second
  • week
  • weekday
  • weekofyear
  • year
cycles
array[string]
A list of cyclical time features to extract. “Cycles” are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling (see the worked example after the list of allowed values below).
Item
string
Each item in array. Values must be one of the following:
  • day
  • dayofweek
  • dayofyear
  • hour
  • month
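
To make the cycle transformation concrete, here is a sketch of how a month component could be mapped onto a circle; the division by 12 reflects the 12-month period and is an assumption for illustration:

month_sin = sin(2π · month / 12)
month_cos = cos(2π · month / 12)

With this encoding, month 12 and month 1 lie next to each other on the circle, as described above.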
epoch
[null, boolean]
Whether to include the epoch as new feature (seconds since 01/01/1970).
imputer
[null, string]
Whether and how to impute (replace/fill) missing values. Values must be one of the following:
  • Mean
  • Median
  • MostFrequent
  • Const
  • None
component_scaler
[null, string]
Whether and how to scale the final numerical values (across a single column). Values must be one of the following:
  • Standard
  • Robust
  • KNN
  • None
vector_scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
embedding
object
Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number] for short).
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
text
object
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.
Texts are excluded by default from the overall encoding of the dataset. See parameter include_text_features below to activate it.
indicate_missing
boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder_params
object
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn’s documentation for detailed parameters and their explanation.
n_components
integer
How many output features to generate. The resulting Tf-Idf vectors will be reduced to this many dimensions (columns) using scikit-learn’s truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components. Values must be in the following range:
2 ≤ n_components ≤ 1024
scaler
[null, string]
How to scale the encoded (numerical) columns. Values must be one of the following:
  • Euclidean
  • KNN
  • Norm
  • None
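
Putting several of the encoders together, a feature_encoder override might look like the following sketch. Only keys documented above are used, and the values are illustrative; whether n_components sits directly under text (as assumed here) or inside encoder_params may depend on the schema version:

infer_missing_with_probs(ds, {
    "target": "industry",
    "encode_features": true,
    "feature_encoder": {
        "number": {"imputer": "Median", "scaler": "Standard"},
        "datetime": {"components": ["year", "month"], "cycles": ["month", "dayofweek"], "epoch": true},
        "text": {"n_components": 100}
    }
}) -> (ds.industry_predicted, ds.industry_probs)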
include_text_features
boolean
default:"false"
Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their TfIdf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative “over-representation” of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model.
validate
[object, null]
Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
n_splits
[integer, null]
default:"5"
Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.
test_size
[number, null]
What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training. Values must be in the following range:
0 < test_size < 1
metrics
[null, array]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
  • null
  • array
scorer
string
Each item in array. Values must be one of the following:
  • accuracy
  • balanced_accuracy
  • explained_variance
  • f1_micro
  • f1_macro
  • f1_samples
  • f1_weighted
  • neg_mean_squared_error
  • neg_median_absolute_error
  • neg_root_mean_squared_error
  • precision_micro
  • precision_macro
  • precision_samples
  • precision_weighted
  • recall_micro
  • recall_macro
  • recall_samples
  • recall_weighted
  • r2
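
For example, to validate using 3 shuffle-splits, each holding out 20% of rows, with explicit metrics (a sketch; column names and values are illustrative):

infer_missing_with_probs(ds, {
    "target": "industry",
    "validate": {
        "n_splits": 3,
        "test_size": 0.2,
        "metrics": ["accuracy", "f1_macro"]
    }
}) -> (ds.industry_predicted, ds.industry_probs)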
tune
object
Configure hypertuning. Configures the optimization of model hyper-parameters via cross-validated grid- or randomized search.
params
object
CatBoost configuration parameters. See the official CatBoost documentation for more details about these parameters.
depth
integer
default:"6"
Depth of the tree. Values must be in the following range:
2 ≤ depth ≤ 16
nan_mode
string
default:"Min"
The method for processing missing values in the input dataset. Possible values:
  • “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
  • “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
  • “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree. Values must be one of the following:
  • Forbidden
  • Min
  • Max
iterations
[integer, null]
default:"1000"
Number of Iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter. Values must be in the following range:
1 ≤ iterations < inf
allow_writing_files
boolean
default:"false"
Allow Writing Files. Allow writing analytical and snapshot files during training. If set to “false”, the snapshot and data visualization tools are unavailable.
random_seed
integer
default:"0"
The random seed used for training.
one_hot_max_size
integer
default:"10"
One Hot Max Size. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Ctrs are not calculated for such features. Values must be in the following range:
2 ≤ one_hot_max_size < inf
max_ctr_complexity
number
default:"2"
The maximum number of features that can be combined. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”. Values must be in the following range:
1 ≤ max_ctr_complexity < inf
boosting_type
string
default:"Plain"
Boosting type. Boosting scheme. Possible values are:
  • Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
  • Plain: The classic gradient boosting scheme.
Values must be one of the following:
  • Ordered
  • Plain
strategy
string
default:"grid"
Which search strategy to use for optimization. Grid search explores all possible combinations of parameters specified in params. Randomized search, on the other hand, randomly samples iterations parameter combinations from the distributions specified in params. Values must be one of the following:
  • grid
  • random
iterations
integer
default:"10"
How many randomly sampled parameter combinations to test in randomized search. Values must be in the following range:
1 < iterations < inf
validate
[object, null]
Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
n_splits
[integer, null]
default:"5"
Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.
test_size
[number, null]
What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training. Values must be in the following range:
0 < test_size < 1
metrics
[null, array]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
  • null
  • array
scorer
string
Each item in array. Values must be one of the following:
  • accuracy
  • balanced_accuracy
  • explained_variance
  • f1_micro
  • f1_macro
  • f1_samples
  • f1_weighted
  • neg_mean_squared_error
  • neg_median_absolute_error
  • neg_root_mean_squared_error
  • precision_micro
  • precision_macro
  • precision_samples
  • precision_weighted
  • recall_micro
  • recall_macro
  • recall_samples
  • recall_weighted
  • r2
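
As an illustration, a randomized search over tree depth and iterations might be configured as in the following sketch. That candidate values are given as lists inside tune.params is an assumption based on the strategy description above; the exact format is not spelled out in this document, and all names and values are illustrative:

infer_missing_with_probs(ds, {
    "target": "industry",
    "tune": {
        "strategy": "random",
        "iterations": 20,
        "params": {
            "depth": [4, 6, 8],
            "iterations": [500, 1000]
        },
        "validate": {"n_splits": 3}
    }
}) -> (ds.industry_predicted, ds.industry_probs)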
seed
integer
Seed for random number generator ensuring reproducibility. Values must be in the following range:
0 ≤ seed < inf