train_classification_gpu
Train and store a classification model to be loaded at a later point for prediction.
The output will consist of a new column with the trained model’s predictions on the training data, as well as a saved and named model file that can be used in other projects for prediction of new data.
Optionally, if a second output column name is provided, the model’s predicted probabilities will also be returned.
A detailed guide on how to configure this step for model tuning and performance evaluation can be found here.
Usage
The following examples show how the step can be used in a recipe.
Examples
Train a classification model with default parameters. By default, a Catboost model will be trained, but this can be changed to any of the supported models by specifying the model parameter (see below for details):
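A minimal sketch of what such a call could look like in a recipe. The dataset ds, the target column churned and the model name "churn-clf" are hypothetical, and the target parameter name is an assumption based on the target-variable description below:

train_classification_gpu(ds, {"target": "churned"}) -> (ds.predicted, "churn-clf")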
To also return the predicted probabilities, provide a second column name:
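Continuing the hypothetical example above, this might look like:

train_classification_gpu(ds, {"target": "churned"}) -> (ds.predicted, ds.churn_probability, "churn-clf")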
To be more explicit about which model parameters to use during training, which parameters to optimize (tune) automatically, and how to evaluate the model’s performance, the following example shows a complete configuration. It explicitly selects the CatboostClassifier as the model, sets boosting_type to “ordered”, and selects the best combination of learning_rate and depth from the values specified in the tune: params configuration. To find the best parameters, it performs 5-fold cross-validation on each combination, using the scorer f1_weighted to measure performance; accuracy will also be measured, but only for reporting. The best parameter combination will then be evaluated on a single split of the dataset (with 20% of rows used for testing and 80% for training), with metrics selected automatically. Note that the final model will always be re-trained on the whole dataset!
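The sketch below illustrates such a configuration. The nesting of params, tune, validate and metrics follows the parameter reference further down, but the exact key names and the concrete learning_rate/depth values are assumptions for illustration only:

train_classification_gpu(ds, {
  "target": "churned",
  "model": "CatboostClassifier",
  "params": {"boosting_type": "Ordered"},
  "tune": {
    "strategy": "grid",
    "params": {
      "learning_rate": [0.03, 0.1, 0.3],
      "depth": [4, 6, 8]
    },
    "validate": {"n_splits": 5, "metrics": ["f1_weighted", "accuracy"]}
  },
  "validate": {"n_splits": 1, "test_size": 0.2}
}) -> (ds.predicted, "churn-clf")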
General syntax for using the step in a recipe. Shows the inputs and outputs the step is expected to receive and will produce, respectively. For further details see the sections below.
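Schematically (a sketch rather than the literal signature), the call has the form:

train_classification_gpu(ds, {"param": "value", ...}) -> (ds.predicted, "model-name")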
Inputs & Outputs
The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").
Inputs
Should contain the target column and the feature columns you wish to use in the model.
Outputs
One or two columns containing the model’s predictions. If two column names are provided, the second column will contain the model’s predicted probabilities.
Zip file containing the trained model and associated information.
Configuration
The following parameters can be used to configure the behaviour of the step by including them in a JSON object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
Parameters
Train a Catboost classifier, i.e. gradient boosted decision trees with support for categorical variables and missing values.
Target variable (labels). Name of the column that contains your target values (labels).
Name of the positive class. In binary classification, usually the class you’re most interested in, for example the label/class corresponding to successful lead conversion in a lead score model, the class corresponding to a customer who has churned in a churn prediction model, etc.
If provided, will automatically measure the performance (accuracy, precision, recall) of the model on this class, in addition to averages across all classes. If not provided, only summary metrics will be reported.
Maximum number of classes in the target variable. If there are more classes than this, the least frequent classes will be grouped together into a single class called “others”. Reducing the number of classes in the target variable can help improve model performance, especially when the number of classes is very large, some classes are very rare, or the dataset doesn’t have sufficient samples for all classes. Raising this significantly might lead to much longer training times.
Values must be in the following range:
Importance of each feature in the model. Whether and how to measure each feature’s contribution to the model’s predictions. The higher the value, the more important the feature was in the model. Only relative values are meaningful, i.e. the importance of a feature relative to other features in the model.
Also note that feature importance is usually meaningful only for models that fit the data well.
The default (null, true or "native") uses the classifier’s native feature importance measure, e.g. prediction-value-change in the case of Catboost, Gini importance in the case of scikit-learn’s DecisionTreeClassifier, and the mean of absolute coefficients in the case of logistic regression.
When set to "permutation", uses permutation importance, i.e. measures the decrease in model score when a single feature’s values are randomly shuffled. This is considerably slower than native feature importance (the model needs to be evaluated an additional k*n times, where k is the number of features and n the number of repetitions to average over). On the positive side it is model-agnostic and doesn’t suffer from bias towards high-cardinality features (like some tree-based feature importances). On the negative side, it can be sensitive to strongly correlated features, as the unshuffled correlated variable is still available to the model when shuffling the original variable.
When set to false
, no feature importance will be calculated.
Values must be one of the following:
True
False
native
permutation
null
Toggle encoding of feature columns.
When enabled, Graphext will auto-convert any column types to the numeric type before
fitting the model. How this conversion is done can be configured using the feature_encoder
option below.
Configures encoding of feature columns.
By default (null
), Graphext chooses automatically how to convert any column types the model
may not understand natively to a numeric type.
A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
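As a rough illustration of a partial override, one might configure just the numeric and categorical encoders, leaving everything else at its default. The nested key names (number, category, impute, scale, encode, max_categories) mirror the property descriptions below, but their exact spelling should be treated as an assumption:

"feature_encoder": {
  "number": {"impute": "Median", "scale": "Standard"},
  "category": {"encode": "OneHot", "max_categories": 20}
}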
Properties
Numeric encoder. Configures encoding of numeric features.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
Whether and how to impute (replace/fill) missing values.
Values must be one of the following:
Mean
Median
MostFrequent
Const
None
Whether and how to scale the final numerical values (across a single column).
Values must be one of the following:
Standard
Robust
KNN
None
Further parameters passed to the scaler function. Details depend on the particular scaler used.
Boolean encoder. Configures encoding of boolean features.
Ordinal encoder. Configures encoding of categorical features that have a natural order.
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
Options
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
Whether and how to impute (replace/fill) missing values.
Values must be one of the following:
MostFrequent
Const
None
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single “Others” category.
Values must be in the following range:
How to encode categories.
Values must be one of the following:
OneHot
Label
Ordinal
Binary
Frequency
None
Whether and how to scale the final numerical values (across a single column).
Values must be one of the following:
Standard
Robust
KNN
None
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
Whether and how to impute (replace/fill) missing values.
Values must be one of the following:
MostFrequent
Const
None
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single “Others” category.
Values must be in the following range:
How to encode categories.
Values must be one of the following:
OneHot
Label
Ordinal
Binary
Frequency
None
Whether and how to scale the final numerical values (across a single column).
Values must be one of the following:
Standard
Robust
KNN
None
Condition for application of low- or high-cardinality configuration.
Number of unique categories below which the low_cardinality
configuration is used,
and above which the high_cardinality
configuration is used.
Values must be in the following range:
Low cardinality configuration.
Used for categories with fewer than cardinality_threshold
unique categories.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
Whether and how to impute (replace/fill) missing values.
Values must be one of the following:
MostFrequent
Const
None
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single “Others” category.
Values must be in the following range:
How to encode categories.
Values must be one of the following:
OneHot
Label
Ordinal
Binary
Frequency
None
Whether and how to scale the final numerical values (across a single column).
Values must be one of the following:
Standard
Robust
KNN
None
High cardinality configuration.
Used for categories with more than cardinality_threshold
unique categories.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
Whether and how to impute (replace/fill) missing values.
Values must be one of the following:
MostFrequent
Const
None
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single “Others” category.
Values must be in the following range:
How to encode categories.
Values must be one of the following:
OneHot
Label
Ordinal
Binary
Frequency
None
Whether and how to scale the final numerical values (across a single column).
Values must be one of the following:
Standard
Robust
KNN
None
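For example, a two-configuration setup for the category encoder might be sketched as follows. The cardinality_threshold, low_cardinality and high_cardinality names are taken from the descriptions above; the remaining keys and values are illustrative assumptions:

"category": {
  "cardinality_threshold": 10,
  "low_cardinality": {"encode": "OneHot", "impute": "MostFrequent"},
  "high_cardinality": {"encode": "Binary", "max_categories": 100}
}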
Multilabel encoder.
Configures encoding of multivalued categorical features (variable length lists of categories,
or the semantic type list[category]
for short). May contain either a single configuration for
all multilabel variables, or two different configurations for low- and high-cardinality variables.
For further details pick one of the two options below.
Options
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
How to encode categories/labels in multilabel (list[category]) columns.
Values must be one of the following:
Binarizer
TfIdf
None
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn’s truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Values must be in the following range:
How to scale the encoded (numerical columns).
Values must be one of the following:
Euclidean
KNN
Norm
None
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
How to encode categories/labels in multilabel (list[category]) columns.
Values must be one of the following:
Binarizer
TfIdf
None
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn’s truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Values must be in the following range:
How to scale the encoded (numerical columns).
Values must be one of the following:
Euclidean
KNN
Norm
None
Condition for application of low- or high-cardinality configuration.
Number of unique categories below which the low_cardinality
configuration is used,
and above which the high_cardinality
configuration is used.
Values must be in the following range:
Low cardinality configuration. Used for multilabel columns with fewer than cardinality_threshold unique categories/labels.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
How to encode categories/labels in multilabel (list[category]) columns.
Values must be one of the following:
Binarizer
TfIdf
None
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn’s truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Values must be in the following range:
How to scale the encoded (numerical columns).
Values must be one of the following:
Euclidean
KNN
Norm
None
High cardinality configuration. Used for multilabel columns with more than cardinality_threshold unique categories/labels.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
How to encode categories/labels in multilabel (list[category]) columns.
Values must be one of the following:
Binarizer
TfIdf
None
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn’s truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Values must be in the following range:
How to scale the encoded (numerical columns).
Values must be one of the following:
Euclidean
KNN
Norm
None
Datetime encoder. Configures encoding of datetime (timestamp) features.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
A list of numerical components to extract. Will create one numeric column for each component.
Array items
Each item in array.
Values must be one of the following:
day
dayofweek
dayofyear
hour
minute
month
quarter
season
second
week
weekday
weekofyear
year
A list of cyclical time features to extract. “Cycles” are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
Array items
Each item in array.
Values must be one of the following:
day
dayofweek
dayofyear
hour
month
Whether to include the epoch as a new feature (seconds since 01/01/1970).
Whether and how to impute (replace/fill) missing values.
Values must be one of the following:
Mean
Median
MostFrequent
Const
None
Whether and how to scale the final numerical values (across a single column).
Values must be one of the following:
Standard
Robust
KNN
None
How to scale the encoded (numerical columns).
Values must be one of the following:
Euclidean
KNN
Norm
None
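A hypothetical datetime encoder configuration extracting a few components and cyclical features might look like this; the key names components, cycles and epoch are inferred from the property descriptions above and should be treated as assumptions:

"datetime": {
  "components": ["year", "month", "dayofweek"],
  "cycles": ["month", "dayofweek"],
  "epoch": true,
  "impute": "MostFrequent",
  "scale": "Standard"
}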
Embedding/vector encoder.
Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number]
for short).
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts. Note that text processing is disabled by default; use include_text_features below to activate it.
Properties
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn’s documentation for detailed parameters and their explanation.
How many output features to generate. The resulting Tf-Idf vectors will be reduced to these many dimensions (columns) using scikit-learn’s truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.
Values must be in the following range:
How to scale the encoded (numerical columns).
Values must be one of the following:
Euclidean
KNN
Norm
None
Whether to include or ignore text columns during the processing of input data.
Enabling this will convert texts to their TfIdf representation. Each text will be
converted to an N-dimensional vector in which each component measures the relative
“over-representation” of a specific word (or n-gram) relative to its overall
frequency in the whole dataset. This is disabled by default because it will
often be better to convert texts explicitly using a previous step, such as
embed_text
or embed_text_with_model
.
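To process raw text columns, one might enable the flag and adjust the text encoder, e.g. (a sketch; the text and n_components key names are assumptions based on the descriptions above):

{
  "include_text_features": true,
  "feature_encoder": {
    "text": {"n_components": 200}
  }
}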
CatBoost configuration parameters. You can check the official documentation for more details about Catboost’s parameters here.
Properties
The maximum depth of the tree.
Values must be in the following range:
Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Values must be in the following range:
Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See CatBoost details for further information.
Values must be in the following range:
The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
Values must be in the following range:
Coefficient at the L2 regularization term of the cost function.
Values must be in the following range:
The number of splits for numerical features.
Values must be in the following range:
The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.
Values must be in the following range:
The method for processing missing values in the input dataset. Possible values:
- “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
- “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
- “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.
Values must be one of the following:
Forbidden
Min
Max
Boosting type. Boosting scheme. Possible values are
- Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
- Plain: The classic gradient boosting scheme.
Values must be one of the following:
Ordered
Plain
Random subspace method.
The percentage of features to use at each split selection, when features are selected over again at random. The value null
is equivalent to 1.0 (all features). You can set this to values < 1.0 when the dataset has many features (e.g. > 20) to speed up training.
Values must be in the following range:
The random seed used for training.
Whether and how to limit memory usage. Set the maximum RAM used with strings like “2GB” or “100mb” (not case-sensitive).
Whether and how to assign weights to different predicted classes. The options are:
- null: No class weighting
- Balanced: Inversely proportional to the number of samples/rows in each class
- SqrtBalanced: Using the square root of the “Balanced” option.
Values must be one of the following:
Balanced
SqrtBalanced
None
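Put together, a params block with a few of these settings might look like the following sketch. The keys shown (depth, iterations, one_hot_max_size, nan_mode, boosting_type, rsm, random_seed) are CatBoost’s standard parameter names and are assumed, not confirmed, to be accepted as-is by this step:

"params": {
  "depth": 6,
  "iterations": 500,
  "one_hot_max_size": 10,
  "nan_mode": "Min",
  "boosting_type": "Plain",
  "rsm": 0.8,
  "random_seed": 42
}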
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Properties
Number of train-test splits to evaluate the model on.
Will split the dataset into training and test set n_splits
times, train on the former
and evaluate on the latter using specified or automatically selected metrics
.
Values must be in the following range:
What proportion of the data to use for testing in each split.
If null
or not provided, will use k-fold cross-validation
to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts.
For five iterations four parts will then be used for training and the remaining part for testing.
If test_size
is a number between 0 and 1, in contrast, validation is done using a
shuffle-split
approach. Here, instead of splitting the data into n_splits
equal parts up front, in each iteration
we randomize the data and sample a proportion equal to test_size
to use for evaluation and the remaining
rows for training.
Values must be in the following range:
Whether to split the data by time. Most recent data will be used for testing and previous data for training. Assumes data is passed already sorted ascending by time.
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Array items
Each item in array.
Values must be one of the following:
accuracy
balanced_accuracy
f1_micro
f1_macro
f1_samples
f1_weighted
precision_micro
precision_macro
precision_samples
precision_weighted
recall_micro
recall_macro
recall_samples
recall_weighted
roc_auc
roc_auc_ovr
roc_auc_ovo
roc_auc_ovr_weighted
roc_auc_ovo_weighted
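For instance, three shuffled 80/20 splits with custom metrics might be configured as follows. The n_splits, test_size and metrics names follow the descriptions above; the validate key itself is an assumption based on the examples section:

"validate": {
  "n_splits": 3,
  "test_size": 0.2,
  "metrics": ["f1_weighted", "balanced_accuracy"]
}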
Configure hypertuning. Configures the optimization of model hyper-parameters via cross-validated grid- or randomized search.
Properties
Which search strategy to use for optimization.
Grid search explores all possible combinations of parameters specified in params
.
Randomized search, on the other hand, randomly samples iterations
parameter combinations
from the distributions specified in params
.
Values must be one of the following:
grid
random
How many randomly sampled parameter combinations to test in randomized search.
Values must be in the following range:
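A randomized search over a couple of CatBoost parameters might be sketched like this. The strategy key name and all values are illustrative assumptions; iterations and params are named above, and l2_leaf_reg is CatBoost’s name for the leaf regularization strength described further down:

"tune": {
  "strategy": "random",
  "iterations": 20,
  "params": {
    "depth": [4, 6, 8, 10],
    "l2_leaf_reg": [1, 3, 5, 9]
  }
}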
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Properties
Number of train-test splits to evaluate the model on.
Will split the dataset into training and test set n_splits
times, train on the former
and evaluate on the latter using specified or automatically selected metrics
.
Values must be in the following range:
What proportion of the data to use for testing in each split.
If null
or not provided, will use k-fold cross-validation
to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts.
For five iterations four parts will then be used for training and the remaining part for testing.
If test_size
is a number between 0 and 1, in contrast, validation is done using a
shuffle-split
approach. Here, instead of splitting the data into n_splits
equal parts up front, in each iteration
we randomize the data and sample a proportion equal to test_size
to use for evaluation and the remaining
rows for training.
Values must be in the following range:
Whether to split the data by time. Most recent data will be used for testing and previous data for training. Assumes data is passed already sorted ascending by time.
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Array items
Each item in array.
Values must be one of the following:
accuracy
balanced_accuracy
f1_micro
f1_macro
f1_samples
f1_weighted
precision_micro
precision_macro
precision_samples
precision_weighted
recall_micro
recall_macro
recall_samples
recall_weighted
roc_auc
roc_auc_ovr
roc_auc_ovo
roc_auc_ovr_weighted
roc_auc_ovo_weighted
The parameter values to explore. Allows tuning of any and all of the parameters that can be set also as constants in the “params” attribute.
Keys in this object should be strings identifying parameter names, and values should be
lists of values to explore for that parameter. E.g. "depth": [3, 5, 7]
.
Properties
List of depth values to explore.
Array items
The maximum depth of the tree.
Values must be in the following range:
List of iterations values to explore.
Array items
Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Values must be in the following range:
List of values configuring max cardinality for one-hot encoding.
Array items
Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See CatBoost details for further information.
Values must be in the following range:
List of values configuring variable combination complexity.
Array items
The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
Values must be in the following range:
List of leaf regularization strengths.
Array items
Coefficient at the L2 regularization term of the cost function.
Values must be in the following range:
List of border counts.
Array items
The number of splits for numerical features.
Values must be in the following range:
List of random strengths.
Array items
The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.
Values must be in the following range:
List of criterion values to explore.
Array items
Function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
Values must be one of the following:
gini
entropy
List of splitter values to explore.
Array items
Strategy used to choose the split at each node. Supported strategies are “best” to choose the best split and “random” to choose the best random split.
Values must be one of the following:
best
random
List of max_depth values to explore.
Array items
Maximum depth of the tree. If null, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
Values must be in the following range:
List of min_samples_split values to explore.
Array items
Minimum number of samples required to split an internal node. If int, then consider min_samples_split as the minimum count. If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
List of min_samples_leaf values to explore.
Array items
Minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression. If int, then consider min_samples_leaf as the minimum count. If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
List of max_leaf_nodes values to explore.
Array items
Grow a tree with max_leaf_nodes in best-first fashion. Best nodes are defined by relative reduction in impurity. If null, the number of leaf nodes is unlimited.
List of max_features values to explore.
Array items
Number of features to consider when looking for the best split:
- If int, then consider max_features features at each split.
- If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
- If “auto”, then max_features=sqrt(n_features).
- If “sqrt”, then max_features=sqrt(n_features).
- If “log2”, then max_features=log2(n_features).
- If null, then max_features=n_features.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
List of ccp_alpha values to explore.
Array items
Complexity parameter used for Minimal Cost-Complexity Pruning. Minimal Cost-Complexity Pruning recursively finds the node with the “weakest link”. The weakest link is characterized by an effective alpha, where the nodes with the smallest effective alpha are pruned first. As alpha increases, more of the tree is pruned, which increases the total impurity of its leaves.
Values must be in the following range:
Seed for random number generator ensuring reproducibility.
Values must be in the following range:
Sort the data before training.
If the data is not already sorted by time, you can sort it here. This is useful when you want to split the data
by time, for example to train on older data and test on newer data (see the time_split
parameter in validation
configurations). If the data is already sorted by time, you can ignore this parameter.
Properties
One or more columns to sort by.
Array items
Each item in array.
Sort order.
Whether to sort in ascending or descending order. If the single value true
is provided, or no value is specified,
all columns will be sorted in ascending order. If a single false
is provided, all columns will be sorted in descending
order. If an array of booleans is provided, each column will be sorted according to the corresponding boolean value.
Array items
Each item in array.
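A hypothetical use of these sorting options together with a time-based split could look like the sketch below. The order_by and ascending key names, as well as the seed key, are assumptions, since the step’s exact spelling is not shown here; time_split, n_splits and test_size follow the validation section above:

{
  "seed": 42,
  "order_by": ["signup_date"],
  "ascending": true,
  "validate": {"n_splits": 1, "test_size": 0.2, "time_split": true}
}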