Infer missing

inference · model · missing data · NaN · imputation

Train and use a machine learning model to predict (impute) the missing values in a column.

Non-missing values in the target column will be used to train a prediction model (a CatBoost regressor or classifier), which then predicts (imputes) the missing values. Only simple numerical or categorical input data can be imputed.

Example

To automatically select a model (classifier vs regressor) based on the kind of target variable (numeric or categorical), simply use:

infer_missing(ds, {"target": "incomplete_col"}) -> (ds.complete_col)

Usage

The following are the step's expected inputs and outputs and their specific types.

infer_missing(ds: dataset, {"param": value}) -> (predicted: column)

where the object {"param": value} is optional in most cases and if present may contain any of the parameters described in the corresponding section below.

Inputs


ds: dataset

A dataset containing the column to be imputed as well as any other column to use as predictors in the model.

Outputs


predicted: column

A column containing the predicted values for all rows.

Parameters


target: string

Name of the column to impute. The step will predict the missing values for this column, using rows in the dataset where the values are not missing to train the prediction model.


infer_all: boolean = True

Predict non-missing values. When set to true, all values in the target column are predicted, whether missing or not. Set this parameter to false to keep the original values where they are not missing and only impute the missing ones.
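
For instance, a minimal sketch (the column names here are hypothetical) that imputes only the missing values of a "rating" column while keeping existing values untouched:

infer_missing(ds, {"target": "rating", "infer_all": false}) -> (ds.rating_imputed)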


params: object

CatBoost configuration parameters. See the official CatBoost documentation for more details about these parameters. A usage sketch follows the list of items below.

Items in params

depth: integer = 6

Depth of the tree.

Range: 2 ≤ depth ≤ 16


nan_mode: string = "Min"

The method for processing missing values in the input dataset. Possible values:

  • “Forbidden”: Missing values are not supported, their presence is interpreted as an error.

  • “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

  • “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.

Must be one of: "Forbidden", "Min", "Max"


iterations: integer | null = 1000

Number of Iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: 1 ≤ iterations < inf


allow_writing_files: boolean = False

Allow Writing Files. Allow writing analytical and snapshot files during training. If set to false, the snapshot and data visualization tools are unavailable.


random_seed: integer = 0

The random seed used for training.


one_hot_max_size: integer = 10

One Hot Max Size. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Ctrs are not calculated for such features.

Range: 2 ≤ one_hot_max_size < inf


max_ctr_complexity: number = 2

The maximum number of features that can be combined. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: 1 ≤ max_ctr_complexity < inf


boosting_type: string = "Plain"

Boosting type. Boosting scheme. Possible values:

  • “Ordered”: Usually provides better quality on small datasets, but may be slower than the Plain scheme.

  • “Plain”: The classic gradient boosting scheme.

Must be one of: "Ordered", "Plain"


feature_encoder: null | string = "auto"

Configures encoding of feature columns. In "auto" mode, Graphext automatically chooses how to convert any column types the model may not understand natively to a numeric type. When null is selected, features will not be processed in any way. Note that when using null, the data must already be in a format compatible with the selected kind of model.

Must be one of: "auto"


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


tune: object

Configure hypertuning. Configures the optimization of model hyper-parameters via cross-validated grid- or randomized search.

Items in tune

params: object

CatBoost configuration parameters. See the official CatBoost documentation for more details about these parameters.

Items in params

depth: integer = 6

Depth of the tree.

Range: 2 ≤ depth ≤ 16


nan_mode: string = "Min"

The method for processing missing values in the input dataset. Possible values:

  • “Forbidden”: Missing values are not supported, their presence is interpreted as an error.

  • “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

  • “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.

Must be one of: "Forbidden", "Min", "Max"


iterations: integer | null = 1000

Number of Iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: 1 ≤ iterations < inf


allow_writing_files: boolean = False

Allow Writing Files. Allow writing analytical and snapshot files during training. If set to false, the snapshot and data visualization tools are unavailable.


random_seed: integer = 0

The random seed used for training.


one_hot_max_size: integer = 10

One Hot Max Size. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Ctrs are not calculated for such features.

Range: 2 ≤ one_hot_max_size < inf


max_ctr_complexity: number = 2

The maximum number of features that can be combined. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: 1 ≤ max_ctr_complexity < inf


boosting_type: string = "Plain"

Boosting type. Boosting scheme. Possible values:

  • “Ordered”: Usually provides better quality on small datasets, but may be slower than the Plain scheme.

  • “Plain”: The classic gradient boosting scheme.

Must be one of: "Ordered", "Plain"


strategy: string = "grid"

Which search strategy to use for optimization. Grid search explores all possible combinations of the parameters specified in params. Randomized search, on the other hand, randomly samples iterations parameter combinations from the distributions specified in params.

Must be one of: "grid", "random"


iterations: integer = 10

How many randomly sampled parameter combinations to test in randomized search.

Range: 1 < iterations < inf


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.


scorer: string

Metric used to select the best model.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


seed: integer

Seed for random number generator ensuring reproducibility.

Range: 0 ≤ seed < inf
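
Putting it together, a hedged sketch of hyperparameter tuning (the column name is hypothetical, and candidate values are assumed to be given as lists, as suggested by the description of the strategy parameter above): a grid search over tree depth, selecting the best model by r2 and evaluating each candidate with 5-fold cross-validation:

infer_missing(ds, {"target": "income", "tune": {"strategy": "grid", "params": {"depth": [4, 6, 8]}, "scorer": "r2", "validate": {"n_splits": 5}}}) -> (ds.income_imputed)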