Train classification (GPU)
inference • models • catboost • classification • logistic regression
Train and store a classification model to be loaded at a later point for prediction.
The output will consist of a new column with the trained model's predictions on the training data, as well as a saved and named model file that can be used in other projects for prediction of new data.
Optionally, if a second output column name is provided, the model's predicted probabilities will also be returned.
A detailed guide on how to configure this step for model tuning and performance evaluation can be found here.
Usage
The following are the step's expected inputs and outputs and their specific types.
train_classification_gpu(ds: dataset, {
    "param": value
}) -> (*predicted: column, model: model_classification[ds])
where the object {"param": value} is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.
Example
Train a classification model with default parameters. By default, a CatBoost model will be trained, but this can be changed to any of the supported models by specifying the model parameter (see below for details):
train_classification_gpu(ds, {"target": "class"}) -> (ds.predicted, model)
More examples
To also return the predicted probabilities, provide a second column name:
train_classification_gpu(ds, {"target": "class"}) -> (ds.predicted, ds.probs, model)
To be more explicit about which model parameters to use during training, which parameters to optimize (tune) automatically, and how to evaluate the model's performance, the following example shows a complete configuration. It will explicitly select the CatboostClassifier as the model, set boosting_type to "Ordered", and select the best combination of learning_rate and depth from the values specified in the tune: params configuration. To find the best parameters, it will perform 5-fold cross-validation on each combination, and will use the scorer f1_weighted to measure the performance. accuracy will also be measured, but only for the purpose of reporting. The best parameter combination will then be evaluated on a single split of the dataset (with 20% of rows used for testing and 80% for training), with metrics selected automatically. Note that the final model will always be re-trained on the whole dataset!
train_classification_gpu(ds, {
    "target": "label_col",
    "model": "CatboostClassifier",
    "params": {
        "boosting_type": "Ordered"
    },
    "tune": {
        "strategy": "grid",
        "params": {
            "learning_rate": [0.03, 0.1],
            "depth": [4, 6, 10]
        },
        "validate": {
            "n_splits": 5,
            "metrics": ["f1_weighted", "accuracy"]
        },
        "scorer": "f1_weighted"
    },
    "validate": {
        "n_splits": 1,
        "test_size": 0.2
    }
}) -> (ds.predicted, model)
Inputs
ds: dataset
Should contain the target column and the feature columns you wish to use in the model.
Outputs
*predicted: column
One or two columns containing the model's predictions. If two column names are provided, the second column will contain the model's predicted probabilities.
model: file:model_classification[ds]
Zip file containing the trained model and associated information.
Parameters
model: string = "CatboostClassifier"
Train a CatBoost classifier, i.e. gradient-boosted decision trees with support for categorical variables and missing values.
target: string
Target variable (labels). Name of the column that contains your target values (labels).
positive_class: string | null
Name of the positive class. In binary classification, usually the class you're most interested in, for example the label/class corresponding to successful lead conversion in a lead score model, the class corresponding to a customer who has churned in a churn prediction model, etc.
If provided, the step will automatically measure the performance (accuracy, precision, recall) of the model on this class, in addition to averages across all classes. If not provided, only summary metrics will be reported.
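For example, in a hypothetical churn model (the column and label names here are purely illustrative), one might configure:
train_classification_gpu(ds, {
    "target": "churned",
    "positive_class": "yes"
}) -> (ds.predicted, model)
This would report precision and recall for the "yes" class specifically, alongside the usual averaged metrics.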
feature_importance: string | boolean | null = "native"
Importance of each feature in the model. Whether and how to measure each feature's contribution to the model's predictions. The higher the value, the more important the feature was in the model. Only relative values are meaningful, i.e. the importance of a feature relative to other features in the model.
Also note that feature importance is usually meaningful only for models that fit the data well.
The default (null, true or "native") uses the classifier's native feature importance measure, e.g. prediction-value-change in the case of CatBoost, Gini importance in the case of scikit-learn's DecisionTreeClassifier, and the mean of absolute coefficients in the case of logistic regression.
When set to "permutation"
, uses permutation importance,
i.e. measures the decrease in model score when a single feature's values are randomly shuffled. This is
considerably slower than native feature importance (the model needs to be evaluated an additional k*n times,
where k is the number of features and n the number of repetitions to average over). On the positive side it is
model-agnostic and doesn't suffer from bias towards high cardinality features (like some tree-based feature
importances). On the negative side, it can be sensitive to strongly correlated features, as the unshuffled
correlated variable is still available to the model when shuffling the original variable.
When set to false, no feature importance will be calculated.
Must be one of: True, False, "native", "permutation", None
encode_features: boolean = True
Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the feature_encoder option below.
Warning! If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).
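As a sketch, assuming the dataset has already been prepared upstream (all features numeric, no missing values), encoding could be disabled like so:
train_classification_gpu(ds, {
    "target": "class",
    "encode_features": false
}) -> (ds.predicted, model)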
feature_encoder: null | object
Configures encoding of feature columns. By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type.
A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
Items in feature_encoder
number: object
Numeric encoder. Configures encoding of numeric features.
Items in number
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
scaler_params: object
Further parameters passed to the scaler function. Details depend on the particular scaler used.
Items in scaler_params
bool: object
Boolean encoder. Configures encoding of boolean features.
Items in bool
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
ordinal: object
Ordinal encoder. Configures encoding of categorical features that have a natural order.
Items in ordinal
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
category: object | object
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
Items in category
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. If set to N, only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality configuration is used, and above which the high_cardinality configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for columns with fewer than cardinality_treshold unique categories.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. If set to N, only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
high_cardinality: object
High cardinality configuration. Used for columns with more than cardinality_treshold unique categories.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. If set to N, only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
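To illustrate how the two-branch category configuration fits together, the following sketch one-hot encodes columns with fewer than 10 unique categories and frequency-encodes the rest (all values here are illustrative only):
train_classification_gpu(ds, {
    "target": "class",
    "feature_encoder": {
        "category": {
            "cardinality_treshold": 10,
            "low_cardinality": {"encoder": "OneHot"},
            "high_cardinality": {"encoder": "Frequency", "max_categories": 50}
        }
    }
}) -> (ds.predicted, model)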
multilabel: object | object
Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type list[category] for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
Items in multilabel
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality configuration is used, and above which the high_cardinality configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for multilabel columns with fewer than cardinality_treshold unique categories/labels.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
high_cardinality: object
High cardinality configuration. Used for multilabel columns with more than cardinality_treshold unique categories/labels.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
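For example, a single configuration applying Tf-Idf encoding to all multilabel columns and reducing the result to 16 dimensions might look like this (a sketch; the dimensionality is an arbitrary choice):
train_classification_gpu(ds, {
    "target": "class",
    "feature_encoder": {
        "multilabel": {"encoder": "TfIdf", "max_categories": 16}
    }
}) -> (ds.predicted, model)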
datetime: object
Datetime encoder. Configures encoding of datetime (timestamp) features.
Items in datetime
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
components: array[string]
A list of numerical components to extract. Will create one numeric column for each component.
Items in components
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "minute", "month", "quarter", "season", "second", "week", "weekday", "weekofyear", "year"
cycles: array[string]
A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
Items in cycles
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "month"
epoch: null | boolean
Whether to include the epoch as a new feature (seconds since 01/01/1970).
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
component_scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
vector_scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
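As an illustrative sketch, the following keeps the year as a linear component, adds cyclical sin/cos pairs for month and weekday, and includes the epoch:
train_classification_gpu(ds, {
    "target": "class",
    "feature_encoder": {
        "datetime": {
            "components": ["year"],
            "cycles": ["month", "dayofweek"],
            "epoch": true
        }
    }
}) -> (ds.predicted, model)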
embedding: object
Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number] for short).
Items in embedding
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
text: object
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.
Warning! Texts are excluded by default from the overall encoding of the dataset. See the parameter include_text_features below to activate it.
Items in text
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder_params: object
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.
Items in encoder_params
n_components: integer
How many output features to generate. The resulting Tf-Idf vectors will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.
Range: 2 ≤ n_components ≤ 1024
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
include_text_features: boolean = False
Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their Tf-Idf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model.
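To include raw text columns with a smaller Tf-Idf representation, a configuration along these lines could be used (a sketch; 50 components is an arbitrary choice):
train_classification_gpu(ds, {
    "target": "class",
    "include_text_features": true,
    "feature_encoder": {
        "text": {"encoder_params": {"n_components": 50}}
    }
}) -> (ds.predicted, model)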
params: object
CatBoost configuration parameters. You can check the official documentation for more details about Catboost's parameters here.
Items in params
depth: integer = 6
The maximum depth of the tree.
Range: 2 ≤ depth ≤ 16
iterations: integer | null = 1000
Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Range: 1 ≤ iterations < inf
one_hot_max_size: integer = 10
Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See CatBoost details for further information.
Range: 2 ≤ one_hot_max_size < inf
max_ctr_complexity: number = 2
The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
Range: 1 ≤ max_ctr_complexity ≤ 4
l2_leaf_reg: number = 3.0
Coefficient at the L2 regularization term of the cost function.
Range: 0.0 < l2_leaf_reg < inf
border_count: integer = 254
The number of splits for numerical features.
Range: 1 ≤ border_count ≤ 65535
random_strength: number = 1.0
The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.
Range: 0 < random_strength < inf
nan_mode: string = "Min"
The method for processing missing values in the input dataset. Possible values:
- “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
- “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
- “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.
Must be one of: "Forbidden", "Min", "Max"
boosting_type: string = "Plain"
Boosting scheme. Possible values are:
- Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
- Plain: The classic gradient boosting scheme.
Must be one of: "Ordered", "Plain"
rsm: number | null
Random subspace method. The percentage of features to use at each split selection, when features are selected over again at random. The value null is equivalent to 1.0 (all features). You can set this to values < 1.0 when the dataset has many features (e.g. > 20) to speed up training.
Range: 0 < rsm ≤ 1.0
random_seed: integer = 0
The random seed used for training.
used_ram_limit: string | null
Whether and how to limit memory usage. Set the maximum RAM used with strings like "2GB" or "100mb" (case-insensitive).
auto_class_weights: string | null = "Balanced"
Whether and how to assign weights to different predicted classes. The options are:
- null: No class weighting
- Balanced: Inversely proportional to the number of samples/rows in each class
- SqrtBalanced: Using the square root of the "Balanced" option.
Must be one of: "Balanced", "SqrtBalanced"
validate: object | null
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Items in validate
n_splits: integer | null = 5
Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.
Range: 1 ≤ n_splits < inf
test_size: number | null
What proportion of the data to use for testing in each split. If null or not provided, k-fold cross-validation will be used to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts; over five iterations, four parts will be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach: instead of splitting the data into n_splits equal parts up front, in each iteration the data is randomized and a proportion equal to test_size is sampled for evaluation, with the remaining rows used for training (see the examples after this section).
Range: 0 < test_size < 1
metrics: null | array[string]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Must be one of: "accuracy", "balanced_accuracy", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "roc_auc", "roc_auc_ovr", "roc_auc_ovo", "roc_auc_ovr_weighted", "roc_auc_ovo_weighted"
tune: object
Configure hypertuning. Configures the optimization of model hyper-parameters via cross-validated grid- or randomized search.
Items in tune
strategy: string = "grid"
Which search strategy to use for optimization. Grid search explores all possible combinations of parameters specified in params. Randomized search, on the other hand, randomly samples iterations parameter combinations from the distributions specified in params.
Must be one of: "grid", "random"
iterations: integer = 10
How many randomly sampled parameter combinations to test in randomized search.
Range: 1 < iterations < inf
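For example, a randomized search sampling 20 parameter combinations might be sketched as follows (parameter values are illustrative; see params below for the tunable options):
train_classification_gpu(ds, {
    "target": "class",
    "tune": {
        "strategy": "random",
        "iterations": 20,
        "params": {
            "depth": [4, 6, 8, 10],
            "l2_leaf_reg": [1.0, 3.0, 9.0]
        },
        "scorer": "balanced_accuracy"
    }
}) -> (ds.predicted, model)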
validate: object | null
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Items in validate
n_splits: integer | null = 5
Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.
Range: 1 ≤ n_splits < inf
test_size: number | null
What proportion of the data to use for testing in each split. If null or not provided, k-fold cross-validation will be used to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts; over five iterations, four parts will be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach: instead of splitting the data into n_splits equal parts up front, in each iteration the data is randomized and a proportion equal to test_size is sampled for evaluation, with the remaining rows used for training.
Range: 0 < test_size < 1
metrics: null | array[string]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Must be one of: "accuracy", "balanced_accuracy", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "roc_auc", "roc_auc_ovr", "roc_auc_ovo", "roc_auc_ovr_weighted", "roc_auc_ovo_weighted"
scorer: string
Metric used to select best model.
Must be one of: "accuracy", "balanced_accuracy", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "roc_auc", "roc_auc_ovr", "roc_auc_ovo", "roc_auc_ovr_weighted", "roc_auc_ovo_weighted"
params: object
The parameter values to explore. Allows tuning of any and all of the parameters that can also be set as constants in the "params" attribute. Keys in this object should be strings identifying parameter names, and values should be lists of values to explore for that parameter, e.g. "depth": [3, 5, 7].
Items in params
depth: array[integer]
List of depth values to explore.
Items in depth
item: integer = 6
The maximum depth of the tree.
Range: 2 ≤ item ≤ 16
iterations: array[integer | null]
List of iterations values to explore.
Items in iterations
item: integer | null = 1000
Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Range: 1 ≤ item < inf
one_hot_max_size: array[integer]
List of values configuring max cardinality for one-hot encoding.
Items in one_hot_max_size
item: integer = 10
Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See CatBoost details for further information.
Range: 2 ≤ item < inf
max_ctr_complexity: array[number]
List of values configuring variable combination complexity.
Items in max_ctr_complexity
item: number = 2
The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
Range: 1 ≤ item ≤ 4
l2_leaf_reg: array[number]
List of leaf regularization strengths.
Items in l2_leaf_reg
item: number = 3.0
Coefficient at the L2 regularization term of the cost function.
Range: 0.0 < item < inf
border_count: array[integer]
List of border counts.
Items in border_count
item: integer = 254
The number of splits for numerical features.
Range: 1 ≤ item ≤ 65535
random_strength: array[number]
List of random strengths.
Items in random_strength
item: number = 1.0
The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.
Range: 0 < item < inf
seed: integer
Seed for random number generator ensuring reproducibility.
Range: 0 ≤ seed < inf
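Finally, assuming seed is set at the top level of the step's configuration as listed here, a tuned run could be made reproducible like so (a sketch):
train_classification_gpu(ds, {
    "target": "class",
    "tune": {
        "strategy": "grid",
        "params": {"depth": [4, 6]},
        "scorer": "f1_weighted"
    },
    "seed": 42
}) -> (ds.predicted, model)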