# Train classification

inference • models • catboost • classification • logistic regression

Train and store a classification model to be loaded at a later point for prediction.

The output will consist of a new column with the trained model's predictions on the training data, as well as a saved and named model file that can be used in other projects for prediction of new data.

Optionally, if a second output column name is provided, the model's predicted probabilities will also be returned.

A detailed guide on how to configure this step for model tuning and performance evaluation can be found here.

## Usage

The following are the step's expected inputs and outputs and their specific types.

```
train_classification(ds: dataset, {"param": value}) -> (*predicted: column, model: model_classification[ds])
```

where the object `{"param": value}` is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.

#### Example

Train a classification model with default parameters. By default, a Catboost model will be trained, but this can be changed to any of the supported models by specifying the `model` parameter (see below for details):

```
train_classification(ds, {"target": "class"}) -> (ds.predicted, model)
```

## More examples

To also return the predicted probabilities, provide a second column name:

```
train_classification(ds, {"target": "class"}) -> (ds.predicted, ds.probs, model)
```

To be more explicit about which model parameters to use during training, which parameters to optimize (tune) automatically, and how to evaluate the model's performance, the following example shows a complete configuration. It explicitly selects the CatboostClassifier as the model, sets `boosting_type` to "ordered", and selects the best combination of `learning_rate` and `depth` from the values specified in the `tune: params` configuration. To find the best parameters, it performs 5-fold cross-validation on each combination, using the scorer `f1_weighted` to measure performance. `accuracy` will also be measured, but only for the purpose of reporting. The best parameter combination will then be evaluated on a single split of the dataset (with 20% of rows used for testing and 80% for training), with metrics selected automatically. Note that the final model will always be re-trained on the whole dataset!

```
train_classification(ds, {
    "target": "label_col",
    "model": "CatboostClassifier",
    "params": {
        "boosting_type": "ordered"
    },
    "tune": {
        "strategy": "grid",
        "params": {
            "learning_rate": [0.03, 0.1],
            "depth": [4, 6, 10]
        },
        "validate": {
            "n_splits": 5,
            "metrics": ["f1_weighted", "accuracy"]
        },
        "scorer": "f1_weighted"
    },
    "validate": {
        "n_splits": 1,
        "test_size": 0.2
    }
}) -> (ds.predicted, model)
```

## Inputs

ds: dataset

Should contain the target column and the feature columns you wish to use in the model.

## Outputs

*predicted: column

One or two columns containing the model's predictions. If two column names are provided, the second column will contain the model's predicted probabilities.

model: file:model_classification[ds]

Zip file containing the trained model and associated information.

## Parameters

model: string = "CatboostClassifier"

Train a Catboost classifier, i.e. gradient boosted decision trees with support for categorical variables and missing values.

target: string

Target variable (labels). Name of the column that contains your target values (labels).

positive_class: string | null

Name of the positive class. In *binary* classification, usually the class you're most interested in, for example the label/class
corresponding to successful lead conversion in a lead score model, the class corresponding to a
customer who has churned in a churn prediction model, etc.

If provided, will automatically measure the performance (accuracy, precision, recall) of the model on this class, in addition to averages across all classes. If not provided, only summary metrics will be reported.
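For example, a minimal sketch for a churn model (the column name `churned` and label `"yes"` are illustrative assumptions):

```
train_classification(ds, {
    "target": "churned",
    "positive_class": "yes"
}) -> (ds.predicted, model)
```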

encode_features: boolean = True

Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the `feature_encoder` option below.

Warning

If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).

feature_encoder: null | object

Configures encoding of feature columns. By default (`null`), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type.

A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
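As an illustration, the following sketch overrides only the numeric and category encoders, leaving everything else at its default (the target column name is an assumption; the options used are documented in the items below):

```
train_classification(ds, {
    "target": "class",
    "feature_encoder": {
        "number": {"imputer": "Median", "scaler": "Standard"},
        "category": {"encoder": "OneHot", "max_categories": 20}
    }
}) -> (ds.predicted, model)
```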

## Items in `feature_encoder`

number: object

Numeric encoder. Configures encoding of numeric features.

## Items in `number`

indicate_missing: boolean

Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

scaler_params: object

Further parameters passed to the `scaler` function. Details depend on the particular scaler used.

## Items in `scaler_params`

bool: object

Boolean encoder. Configures encoding of boolean features.

## Items in `bool`

indicate_missing: boolean

Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

ordinal: object

Ordinal encoder. Configures encoding of categorical features that have a natural order.

## Items in `ordinal`

indicate_missing: boolean

Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

category: object | object

Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
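For example, a sketch of the split configuration as a fragment of the step's config (the threshold and encoder choices are illustrative assumptions, not recommendations):

```
"category": {
    "cardinality_treshold": 10,
    "low_cardinality": {"encoder": "OneHot"},
    "high_cardinality": {"encoder": "Frequency", "max_categories": 50}
}
```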

## Items in `category`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for categories with fewer than `cardinality_threshold` unique categories.

## Items in `low_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

high_cardinality: object

High cardinality configuration. Used for categories with more than `cardinality_threshold` unique categories.

## Items in `high_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

multilabel: object | object

Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type `list[category]` for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `multilabel`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to this many dimensions (columns) using scikit-learn's truncated SVD. When applied after a Tf-Idf encoding, this performs a kind of latent semantic analysis.

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for multilabel columns with fewer than `cardinality_threshold` unique categories/labels.

## Items in `low_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to this many dimensions (columns) using scikit-learn's truncated SVD. When applied after a Tf-Idf encoding, this performs a kind of latent semantic analysis.

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

high_cardinality: object

High cardinality configuration. Used for multilabel columns with more than `cardinality_threshold` unique categories/labels.

## Items in `high_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to this many dimensions (columns) using scikit-learn's truncated SVD. When applied after a Tf-Idf encoding, this performs a kind of latent semantic analysis.

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

datetime: object

Datetime encoder. Configures encoding of datetime (timestamp) features.

## Items in `datetime`

indicate_missing: boolean

components: array[string]

A list of numerical components to extract. Will create one numeric column for each component.

## Items in `components`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"minute"`, `"month"`, `"quarter"`, `"season"`, `"second"`, `"week"`, `"weekday"`, `"weekofyear"`, `"year"`

cycles: array[string]

A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling (a month m, for example, becomes the pair sin(2πm/12) and cos(2πm/12)).

## Items in `cycles`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"month"`

epoch: null | boolean

Whether to include the epoch as a new feature (seconds since 01/01/1970).

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`, `None`

component_scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

vector_scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

embedding: object

Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type `list[number]` for short).

## Items in `embedding`

indicate_missing: boolean

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

text: object

Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.

Warning

Texts are *excluded* by default from the overall encoding of the dataset. See parameter `include_text_features` below to activate it.

## Items in `text`

indicate_missing: boolean

encoder_params: object

Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.

## Items in `encoder_params`

n_components: integer

How many output features to generate. The resulting Tf-Idf vectors will be reduced to this many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.

Range: `2 ≤ n_components ≤ 1024`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

include_text_features: boolean = False

Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their TfIdf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as `embed_text` or `embed_text_with_model`.

params: object

CatBoost configuration parameters. See the official CatBoost documentation for more details about these parameters.
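As a sketch, a call overriding a handful of these parameters might look as follows (the target column name and all values are illustrative assumptions, not tuned recommendations):

```
train_classification(ds, {
    "target": "class",
    "model": "CatboostClassifier",
    "params": {
        "depth": 8,
        "iterations": 500,
        "auto_class_weights": "Balanced",
        "nan_mode": "Min"
    }
}) -> (ds.predicted, model)
```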

## Items in `params`

depth: integer = 6

The maximum depth of the tree.

Range: `2 ≤ depth ≤ 16`

iterations: integer | null = 1000

Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: `1 ≤ iterations < inf`

one_hot_max_size: integer = 10

Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See the CatBoost documentation for further information.

Range: `2 ≤ one_hot_max_size < inf`

max_ctr_complexity: number = 2

The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: `1 ≤ max_ctr_complexity ≤ 4`

l2_leaf_reg: number = 3.0

Coefficient at the L2 regularization term of the cost function.

Range: `0.0 < l2_leaf_reg < inf`

border_count: integer = 254

The number of splits for numerical features.

Range: `1 ≤ border_count ≤ 65535`

random_strength: number = 1.0

The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.

Range: `0 < random_strength < inf`

nan_mode: string = "Min"

The method for processing missing values in the input dataset. Possible values:

- “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
- “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
- “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.

Must be one of: `"Forbidden"`, `"Min"`, `"Max"`

boosting_type: string = "Plain"

Boosting scheme. Possible values are:

- Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
- Plain: The classic gradient boosting scheme.

Must be one of: `"Ordered"`, `"Plain"`

rsm: number | null

Random subspace method. The percentage of features to use at each split selection, when features are selected over again at random. The value `null` is equivalent to 1.0 (all features). You can set this to values < 1.0 when the dataset has many features (e.g. > 20) to speed up training.

Range: `0 < rsm ≤ 1.0`

random_seed: integer = 0

The random seed used for training.

used_ram_limit: string | null

Whether and how to limit memory usage. Set the maximum RAM used with strings like "2GB" or "100mb" (not case-sensitive).

auto_class_weights: string | null = "Balanced"

Whether and how to assign weights to different predicted classes. The options are:

- null: No class weighting
- Balanced: Inversely proportional to the number of samples/rows in each class
- SqrtBalanced: Using the square root of the "Balanced" option.

Must be one of: `"Balanced"`, `"SqrtBalanced"`

validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

## Items in `validate`

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set `n_splits` times, train on the former and evaluate on the latter using specified or automatically selected `metrics`.

Range: `1 ≤ n_splits < inf`

test_size: number | null

What proportion of the data to use for testing in each split. If `null` or not provided, will use k-fold cross-validation to split the dataset. E.g. if `n_splits` is 5, the dataset will be split into 5 equal-sized parts; for five iterations, four parts will then be used for training and the remaining part for testing. If `test_size` is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach: instead of splitting the data into `n_splits` equal parts up front, in each iteration we randomize the data and sample a proportion equal to `test_size` to use for evaluation, and the remaining rows for training.

Range: `0 < test_size < 1`
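To illustrate the two modes described above, the first fragment below requests plain 5-fold cross-validation, while the second requests ten shuffle-split iterations, each holding out 20% of rows (both are sketches of the semantics just described):

```
"validate": {"n_splits": 5}

"validate": {"n_splits": 10, "test_size": 0.2}
```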

metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: `"accuracy"`, `"balanced_accuracy"`, `"f1_micro"`, `"f1_macro"`, `"f1_samples"`, `"f1_weighted"`, `"precision_micro"`, `"precision_macro"`, `"precision_samples"`, `"precision_weighted"`, `"recall_micro"`, `"recall_macro"`, `"recall_samples"`, `"recall_weighted"`, `"roc_auc"`, `"roc_auc_ovr"`, `"roc_auc_ovo"`, `"roc_auc_ovr_weighted"`, `"roc_auc_ovo_weighted"`

tune: object

Configure hypertuning. Configures the optimization of model hyper-parameters via cross-validated grid- or randomized search.

## Items in `tune`

strategy: string = "grid"

Which search strategy to use for optimization. Grid search explores all possible combinations of parameters specified in `params`. Randomized search, on the other hand, randomly samples `iterations` parameter combinations from the distributions specified in `params`.

Must be one of: `"grid"`, `"random"`

iterations: integer = 10

How many randomly sampled parameter combinations to test in randomized search.

Range: `1 < iterations < inf`
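For example, a sketch of a randomized search sampling 20 parameter combinations (the parameter lists are illustrative assumptions):

```
"tune": {
    "strategy": "random",
    "iterations": 20,
    "params": {
        "learning_rate": [0.01, 0.03, 0.1],
        "depth": [4, 6, 8, 10]
    },
    "scorer": "f1_weighted"
}
```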

validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

## Items in `validate`

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set `n_splits` times, train on the former and evaluate on the latter using specified or automatically selected `metrics`.

Range: `1 ≤ n_splits < inf`

test_size: number | null

What proportion of the data to use for testing in each split. If `null` or not provided, will use k-fold cross-validation to split the dataset. E.g. if `n_splits` is 5, the dataset will be split into 5 equal-sized parts; for five iterations, four parts will then be used for training and the remaining part for testing. If `test_size` is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach: instead of splitting the data into `n_splits` equal parts up front, in each iteration we randomize the data and sample a proportion equal to `test_size` to use for evaluation, and the remaining rows for training.

Range: `0 < test_size < 1`

metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: `"accuracy"`, `"balanced_accuracy"`, `"f1_micro"`, `"f1_macro"`, `"f1_samples"`, `"f1_weighted"`, `"precision_micro"`, `"precision_macro"`, `"precision_samples"`, `"precision_weighted"`, `"recall_micro"`, `"recall_macro"`, `"recall_samples"`, `"recall_weighted"`, `"roc_auc"`, `"roc_auc_ovr"`, `"roc_auc_ovo"`, `"roc_auc_ovr_weighted"`, `"roc_auc_ovo_weighted"`

scorer: string

Metric used to select best model.

Must be one of: `"accuracy"`, `"balanced_accuracy"`, `"f1_micro"`, `"f1_macro"`, `"f1_samples"`, `"f1_weighted"`, `"precision_micro"`, `"precision_macro"`, `"precision_samples"`, `"precision_weighted"`, `"recall_micro"`, `"recall_macro"`, `"recall_samples"`, `"recall_weighted"`, `"roc_auc"`, `"roc_auc_ovr"`, `"roc_auc_ovo"`, `"roc_auc_ovr_weighted"`, `"roc_auc_ovo_weighted"`

params: object

The parameter values to explore. Allows tuning of any of the parameters that can also be set as constants in the `params` attribute.

Keys in this object should be strings identifying parameter names, and values should be *lists* of values to explore for that parameter, e.g. `"depth": [3, 5, 7]`.

## Items in `params`

depth: array[integer]

List of depth values to explore.

## Items in `depth`

item: integer = 6

The maximum depth of the tree.

Range: `2 ≤ item ≤ 16`

iterations: array[integer | null]

List of iterations values to explore.

## Items in `iterations`

item: integer | null = 1000

Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: `1 ≤ item < inf`

one_hot_max_size: array[integer]

List of values configuring max cardinality for one-hot encoding.

## Items in `one_hot_max_size`

item: integer = 10

Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See the CatBoost documentation for further information.

Range: `2 ≤ item < inf`

max_ctr_complexity: array[number]

List of values configuring variable combination complexity.

## Items in `max_ctr_complexity`

item: number = 2

The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: `1 ≤ item ≤ 4`

l2_leaf_reg: array[number]

List of leaf regularization strengths.

## Items in `l2_leaf_reg`

item: number = 3.0

Coefficient at the L2 regularization term of the cost function.

Range: `0.0 < item < inf`

border_count: array[integer]

List of border counts.

## Items in `border_count`

item: integer = 254

The number of splits for numerical features.

Range: `1 ≤ item ≤ 65535`

random_strength: array[number]

List of random strengths.

## Items in `random_strength`

item: number = 1.0

The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.

Range: `0 < item < inf`

seed: integer

Seed for random number generator ensuring reproducibility.

Range: `0 ≤ seed < inf`

model: string = "LogisticRegression"

Trains a logistic regression. The specific kind of logistic regression trained here uses "elastic net" regularization, which allows for a blend of ridge and lasso penalties to prevent overfitting. The mix as well as the strength of this regularization is automatically tuned using 5-fold cross-validation. See sklearn's LogisticRegressionCV for further details.

target: string

Target variable (labels). Name of the column that contains your target values (labels).

positive_class: string | null

Name of the positive class. In *binary* classification, usually the class you're most interested in, for example the label/class
corresponding to successful lead conversion in a lead score model, the class corresponding to a
customer who has churned in a churn prediction model, etc.

If provided, will automatically measure the performance (accuracy, precision, recall) of the model on this class, in addition to averages across all classes. If not provided, only summary metrics will be reported.

encode_features: boolean = True

Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the `feature_encoder` option below.

Warning

If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).

feature_encoder: null | object

Configures encoding of feature columns. By default (`null`), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type.

A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.

## Items in `feature_encoder`

number: object

Numeric encoder. Configures encoding of numeric features.

## Items in `number`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

scaler_params: object

Further parameters passed to the `scaler` function. Details depend on the particular scaler used.

## Items in `scaler_params`

bool: object

Boolean encoder. Configures encoding of boolean features.

## Items in `bool`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

ordinal: object

Ordinal encoder. Configures encoding of categorical features that have a natural order.

## Items in `ordinal`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

category: object | object

Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `category`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for categories with fewer than `cardinality_threshold` unique categories.

## Items in `low_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

high_cardinality: object

High cardinality configuration. Used for categories with more than `cardinality_threshold` unique categories.

## Items in `high_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

multilabel: object | object

Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type `list[category]` for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `multilabel`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for multilabel columns with fewer than `cardinality_threshold` unique categories/labels.

## Items in `low_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

high_cardinality: object

High cardinality configuration. Used for multilabel columns with more than `cardinality_threshold` unique categories/labels.

## Items in `high_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

datetime: object

Datetime encoder. Configures encoding of datetime (timestamp) features.

## Items in `datetime`

indicate_missing: boolean

components: array[string]

A list of numerical components to extract. Will create one numeric column for each component.

## Items in `components`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"minute"`, `"month"`, `"quarter"`, `"season"`, `"second"`, `"week"`, `"weekday"`, `"weekofyear"`, `"year"`

cycles: array[string]

A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.

## Items in `cycles`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"month"`

epoch: null | boolean

Whether to include the epoch as a new feature (seconds since 01/01/1970).

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`, `None`

component_scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

vector_scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

embedding: object

Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type `list[number]` for short).

## Items in `embedding`

indicate_missing: boolean

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

text: object

Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.

Warning

Texts are *excluded* by default from the overall encoding of the dataset. See parameter `include_text_features` below to activate it.

## Items in `text`

indicate_missing: boolean

encoder_params: object

Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.

## Items in `encoder_params`

n_components: integer

How many output features to generate. The resulting Tf-Idf vectors will be reduced to this many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.

Range: `2 ≤ n_components ≤ 1024`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

include_text_features: boolean = False

Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their TfIdf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as `embed_text` or `embed_text_with_model`.

params: object

Model parameters. Constant parameters to configure before training.
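As a sketch, selecting this model with a few explicit parameters might look as follows (the target column name and values are illustrative assumptions, not recommendations):

```
train_classification(ds, {
    "target": "class",
    "model": "LogisticRegression",
    "params": {
        "Cs": 5,
        "l1_ratios": [0.1, 0.5, 1.0],
        "scoring": "f1_weighted"
    }
}) -> (ds.predicted, model)
```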

## Items in `params`

Cs: integer | array = 10

Regularization strengths to explore. Smaller values specify stronger regularization. If `Cs` is an integer, a grid of C values is chosen on a logarithmic scale between 1e-4 and 1e4.

Range: `1 < Cs < inf`

l1_ratios: array | null = [0.1, 0.5, 0.7, 0.9, 0.95, 0.99, 1]

Relative weights of l1 norm penalty vs l2 norm penalty to explore. An l1-ratio of 0 means l2 penalty only (euclidean norm), resulting in a ridge regression penalizing large coefficients proportional to their sum of squares. An l1-ratio of 1.0 means l1 penalty only (taxicab/manhattan norm), i.e. proportional to the sum of absolute coefficient values. This has the tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features used in the optimized model.

max_iter: integer = 1000

Maximum number of iterations of the optimization algorithm. Try increasing this if you suspect the algorithm doesn't reach the performance you'd expect.

Range: `100 ≤ max_iter < inf`

scoring: string = "accuracy"

A scoring metric for evaluating the model.

`"accuracy"`

,
`"balanced_accuracy"`

,
`"f1_micro"`

,
`"f1_macro"`

,
`"f1_samples"`

,
`"f1_weighted"`

,
`"precision_micro"`

,
`"precision_macro"`

,
`"precision_samples"`

,
`"precision_weighted"`

,
`"recall_micro"`

,
`"recall_macro"`

,
`"recall_samples"`

,
`"recall_weighted"`

,
`"roc_auc"`

,
`"roc_auc_ovr"`

,
`"roc_auc_ovo"`

,
`"roc_auc_ovr_weighted"`

,
`"roc_auc_ovo_weighted"`

class_weight: string | null = "balanced"

Mode for selecting sample weights given their class. If not provided (or `null`), all weights will be 1, and so in effect no weights are applied. When `"balanced"` (the default), sample weights are calculated as inversely proportional to class frequencies, such that samples from the more frequent classes have lower weights, and under-represented classes are given more weight.

Must be one of: `"balanced"`

validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

## Items in `validate`

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set `n_splits` times, train on the former and evaluate on the latter using specified or automatically selected `metrics`.

Range: `1 ≤ n_splits < inf`

test_size: number | null

What proportion of the data to use for testing in each split. If `null` or not provided, will use k-fold cross-validation to split the dataset. E.g. if `n_splits` is 5, the dataset will be split into 5 equal-sized parts; for five iterations, four parts will then be used for training and the remaining part for testing. If `test_size` is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach: instead of splitting the data into `n_splits` equal parts up front, in each iteration we randomize the data and sample a proportion equal to `test_size` to use for evaluation, and the remaining rows for training.

Range: `0 < test_size < 1`

metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

`"accuracy"`

,
`"balanced_accuracy"`

,
`"f1_micro"`

,
`"f1_macro"`

,
`"f1_samples"`

,
`"f1_weighted"`

,
`"precision_micro"`

,
`"precision_macro"`

,
`"precision_samples"`

,
`"precision_weighted"`

,
`"recall_micro"`

,
`"recall_macro"`

,
`"recall_samples"`

,
`"recall_weighted"`

,
`"roc_auc"`

,
`"roc_auc_ovr"`

,
`"roc_auc_ovo"`

,
`"roc_auc_ovr_weighted"`

,
`"roc_auc_ovo_weighted"`

seed: integer

Seed for random number generator ensuring reproducibility.

Range: `0 ≤ seed < inf`

model: string = "DecisionTreeClassifier"

Trains a decision tree classifier. A decision tree is a non-parametric, supervised method for predicting a target variable by learning simple decision rules inferred from the data. It can be seen as a piecewise constant approximation, applying simple if-else decision rules to the data. The particular model used here is scikit-learn's DecisionTreeClassifier.

target: string

Target variable (labels). Name of the column that contains your target values (labels).

positive_class: string | null

Name of the positive class. In *binary* classification, usually the class you're most interested in, for example the label/class
corresponding to successful lead conversion in a lead score model, the class corresponding to a
customer who has churned in a churn prediction model, etc.

If provided, will automatically measure the performance (accuracy, precision, recall) of the model on this class, in addition to averages across all classes. If not provided, only summary metrics will be reported.

encode_features: boolean = True

Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the `feature_encoder` option below.

Warning

If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).

feature_encoder: null | object

Configures encoding of feature columns. By default (`null`), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type.

A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.

## Items in `feature_encoder`

number: object

Numeric encoder. Configures encoding of numeric features.

## Items in `number`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

scaler_params: object

Further parameters passed to the `scaler` function. Details depend on the particular scaler used.

## Items in `scaler_params`

bool: object

Boolean encoder. Configures encoding of boolean features.

## Items in `bool`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

ordinal: object

Ordinal encoder. Configures encoding of categorical features that have a natural order.

## Items in `ordinal`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

category: object | object

Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `category`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for categories with fewer than `cardinality_threshold` unique categories.

## Items in `low_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

high_cardinality: object

High cardinality configuration. Used for categories with more than `cardinality_threshold` unique categories.

## Items in `high_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`, `None`

max_categories: null | integer

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`, `None`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

multilabel: object | object

Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type `list[category]` for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `multilabel`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for multilabel columns with fewer than `cardinality_threshold` unique categories/labels.

## Items in `low_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

high_cardinality: object

High cardinality configuration. Used for multilabel columns with more than `cardinality_threshold` unique categories/labels.

## Items in `high_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`, `None`

max_categories: null | integer

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

datetime: object

Datetime encoder. Configures encoding of datetime (timestamp) features.

## Items in `datetime`

indicate_missing: boolean

components: array[string]

A list of numerical components to extract. Will create one numeric column for each component.

## Items in `components`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"minute"`, `"month"`, `"quarter"`, `"season"`, `"second"`, `"week"`, `"weekday"`, `"weekofyear"`, `"year"`

cycles: array[string]

A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.

## Items in `cycles`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"month"`

epoch: null | boolean

Whether to include the epoch as a new feature (seconds since 01/01/1970).

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`, `None`

component_scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`, `None`

vector_scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

embedding: object

Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type `list[number]` for short).

## Items in `embedding`

indicate_missing: boolean

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

text: object

Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.

Warning

Texts are *excluded* by default from the overall encoding of the dataset. See parameter `include_text_features` below to activate it.

## Items in `text`

indicate_missing: boolean

encoder_params: object

Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.

## Items in `encoder_params`

n_components: integer

How many output features to generate. The resulting Tf-Idf vectors will be reduced to this many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.

Range: `2 ≤ n_components ≤ 1024`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`, `None`

include_text_features: boolean = False

Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their TfIdf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as `embed_text` or `embed_text_with_model`.

params: object

Decision tree configuration parameters. These parameters are specific to the decision tree algorithm. They are used to define the tree structure and the stopping criteria. The default values are the ones used by scikit-learn.

For more information, see the scikit-learn documentation.
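As a sketch, a size-constrained tree might be configured as follows (the target column name and values are illustrative assumptions, not recommendations):

```
train_classification(ds, {
    "target": "class",
    "model": "DecisionTreeClassifier",
    "params": {
        "criterion": "entropy",
        "max_depth": 5,
        "min_samples_leaf": 10
    }
}) -> (ds.predicted, model)
```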

## Items in `params`

criterion: string = "gini"

Function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain.

Must be one of: `"gini"`, `"entropy"`

splitter: string = "best"

Strategy used to choose the split at each node. Supported strategies are "best" to choose the best split and "random" to choose the best random split.

Must be one of: `"best"`, `"random"`

max_depth: integer | null = 10

Maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.

Range: `1 < max_depth < inf`

min_samples_split: number = 2

Minimum number of samples required to split an internal node. If int, then consider `min_samples_split` as the minimum *count*. If float, then `min_samples_split` is a *fraction* and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split.

min_samples_leaf: number = 1

Minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least `min_samples_leaf` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

If int, then consider `min_samples_leaf` as the minimum *count*. If float, then `min_samples_leaf` is a *fraction* and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node.

max_leaf_nodes: integer | null = "None"

Grow a tree with `max_leaf_nodes` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then an unlimited number of leaf nodes.

max_features: number | string | null = "None"

Number of features to consider when looking for the best split:

- If int, then consider `max_features` features at each split.
- If float, then `max_features` is a *fraction* and `int(max_features * n_features)` features are considered at each split.
- If "auto", then `max_features=sqrt(n_features)`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than `max_features` features.

ccp_alpha: number = 0.0

Complexity parameter used for Minimal Cost-Complexity Pruning. Minimal Cost-Complexity Pruning recursively finds the node with the "weakest link". The weakest link is characterized by an effective alpha, where the nodes with the smallest effective alpha are pruned first. As alpha increases, more of the tree is pruned, which increases the total impurity of its leaves.

Range: `0.0 ≤ ccp_alpha < inf`

random_state: integer | null = null

Controls the randomness of the estimator. The features are always randomly permuted at each split, even if `splitter` is set to "best". When `max_features < n_features`, the algorithm will select `max_features` features at random at each split before finding the best split among them. The best found split may, however, vary across different runs, even if `max_features=n_features`. That is the case if the improvement of the criterion is identical for several splits and one split has to be selected at random. To obtain deterministic behaviour during fitting, `random_state` has to be fixed to an integer.

class_weight: string | null = "balanced"

How to weigh each class of the target variable. If `null`, all classes will have a weight of one. The "balanced" mode uses the values of the target y to automatically adjust weights inversely proportional to class frequencies in the input data, as `n_samples / (n_classes * np.bincount(y))`.

Must be one of: `"balanced"`

validate: object | null

How to evaluate the trained model's performance.

## Items in `validate`

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into train and test sets `n_splits` times, train on the former and evaluate on the latter using specified or automatically selected `metrics`.

Range: `1 ≤ n_splits < inf`

test_size: number | null

Proportion of the dataset to use for testing. If `null` or not provided, will use k-fold cross-validation to split the dataset. E.g. if `n_splits` is 5, the dataset will be split into 5 equal-sized parts. Over five iterations, four parts will then be used for training and the remaining part for testing. If `test_size` is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into `n_splits` equal parts up front, in each iteration we randomize the data and sample a proportion equal to `test_size` to use for evaluation and the remaining rows for training.

Range: `0 < test_size < 1`

metrics: null | array[string]

One or more metrics to calculate on each test split. If `null`, suitable metrics will be selected automatically.

Each item must be one of:
`"accuracy"`, `"balanced_accuracy"`, `"f1_micro"`, `"f1_macro"`, `"f1_samples"`, `"f1_weighted"`, `"precision_micro"`, `"precision_macro"`, `"precision_samples"`, `"precision_weighted"`, `"recall_micro"`, `"recall_macro"`, `"recall_samples"`, `"recall_weighted"`, `"roc_auc"`, `"roc_auc_ovr"`, `"roc_auc_ovo"`, `"roc_auc_ovr_weighted"`, `"roc_auc_ovo_weighted"`
tune: object

Configure hyperparameter tuning, i.e. the optimization of model hyper-parameters via cross-validated grid or randomized search.

## Items in `tune`

strategy: string = "grid"

Which search strategy to use for optimization. Grid search explores all possible combinations of the parameters specified in `params`. Randomized search, on the other hand, randomly samples `iterations` parameter combinations from the distributions specified in `params`.

Must be one of: `"grid"`, `"random"`
iterations: integer = 10

How many randomly sampled parameter combinations to test in randomized search.

Range: `1 < iterations < inf`
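For example, a hedged sketch of a randomized search sampling 20 combinations from lists of tree parameters (again assuming a decision tree model; `label_col` is a placeholder):

```
train_classification(ds, {
    "target": "label_col",
    "tune": {
        "strategy": "random",
        "iterations": 20,
        "params": {
            "max_depth": [4, 6, 8, 10],
            "min_samples_leaf": [1, 5, 10]
        },
        "scorer": "f1_weighted"
    }
}) -> (ds.predicted, model)
```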

validate: object | null

How to validate each parameter combination during tuning.

## Items in `validate`

n_splits: integer | null = 5

Number of train-test splits to evaluate each parameter combination on. Will split the dataset into train and test sets `n_splits` times, train on the former and evaluate on the latter using specified or automatically selected `metrics`.

Range: `1 ≤ n_splits < inf`

test_size: number | null

Proportion of the dataset to use for testing. If `null` or not provided, will use k-fold cross-validation to split the dataset. E.g. if `n_splits` is 5, the dataset will be split into 5 equal-sized parts. Over five iterations, four parts will then be used for training and the remaining part for testing. If `test_size` is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into `n_splits` equal parts up front, in each iteration we randomize the data and sample a proportion equal to `test_size` to use for evaluation and the remaining rows for training.

Range: `0 < test_size < 1`

metrics: null | array[string]

One or more metrics to calculate on each test split. If `null`, suitable metrics will be selected automatically. Only the `scorer` metric is used to select the best parameter combination; any other metrics are measured for reporting purposes only.

Each item must be one of:
`"accuracy"`, `"balanced_accuracy"`, `"f1_micro"`, `"f1_macro"`, `"f1_samples"`, `"f1_weighted"`, `"precision_micro"`, `"precision_macro"`, `"precision_samples"`, `"precision_weighted"`, `"recall_micro"`, `"recall_macro"`, `"recall_samples"`, `"recall_weighted"`, `"roc_auc"`, `"roc_auc_ovr"`, `"roc_auc_ovo"`, `"roc_auc_ovr_weighted"`, `"roc_auc_ovo_weighted"`

scorer: string

Metric used to select the best model.

Must be one of:
`"accuracy"`, `"balanced_accuracy"`, `"f1_micro"`, `"f1_macro"`, `"f1_samples"`, `"f1_weighted"`, `"precision_micro"`, `"precision_macro"`, `"precision_samples"`, `"precision_weighted"`, `"recall_micro"`, `"recall_macro"`, `"recall_samples"`, `"recall_weighted"`, `"roc_auc"`, `"roc_auc_ovr"`, `"roc_auc_ovo"`, `"roc_auc_ovr_weighted"`, `"roc_auc_ovo_weighted"`

params: object

The parameter values to explore. Allows tuning of any and all of the parameters that can also be set as constants in the `params` attribute.

Keys in this object should be strings identifying parameter names, and values should be *lists* of values to explore for that parameter, e.g. `"depth": [3, 5, 7]` (see the sketch below).
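For example, a minimal grid-search sketch over two of the tree parameters documented below (assuming a decision tree model; `label_col` is a placeholder):

```
train_classification(ds, {
    "target": "label_col",
    "tune": {
        "strategy": "grid",
        "params": {
            "criterion": ["gini", "entropy"],
            "max_depth": [4, 8, 12]
        },
        "scorer": "accuracy"
    }
}) -> (ds.predicted, model)
```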

## Items in `params`

criterion: array[string]

List of criterion values to explore.

## Items in `criterion`

item: string = "gini"

Function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain.

Must be one of: `"gini"`, `"entropy"`

splitter: array[string]

List of splitter values to explore.

## Items in `splitter`

item: string = "best"

Strategy used to choose the split at each node. Supported strategies are "best" to choose the best split and "random" to choose the best random split.

Must be one of: `"best"`, `"random"`

max_depth: array[integer | null]

List of max_depth values to explore.

## Items in `max_depth`

item: integer | null = 10

Maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than `min_samples_split` samples.

Range: `1 < item < inf`

min_samples_split: array[number]

List of min_samples_split values to explore.

## Items in `min_samples_split`

item: number = 2

Minimum number of samples required to split an internal node. If int, then consider `min_samples_split` as the minimum *count*. If float, then `min_samples_split` is a *fraction* and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split.

min_samples_leaf: array[number]

List of min_samples_leaf values to explore.

## Items in `min_samples_leaf`

item: number = 1

Minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least `min_samples_leaf` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

If int, then consider `min_samples_leaf` as the minimum *count*. If float, then `min_samples_leaf` is a *fraction* and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node.

max_leaf_nodes: array[integer | null]

List of max_leaf_nodes values to explore.

## Items in `max_leaf_nodes`

item: integer | null = null

Grow a tree with `max_leaf_nodes` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then the number of leaf nodes is unlimited.

max_features: array[number | string | null]

List of max_features values to explore.

## Items in `max_features`

item: number | string | null = null

Number of features to consider when looking for the best split:

- If int, then consider `max_features` features at each split.
- If float, then `max_features` is a *fraction* and `int(max_features * n_features)` features are considered at each split.
- If "auto", then `max_features=sqrt(n_features)`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than `max_features` features.

ccp_alpha: array[number]

List of ccp_alpha values to explore.

## Items in `ccp_alpha`

item: number = 0.0

Complexity parameter used for Minimal Cost-Complexity Pruning. Minimal Cost-Complexity Pruning recursively finds the node with the "weakest link". The weakest link is characterized by an effective alpha, where the nodes with the smallest effective alpha are pruned first. As alpha increases, more of the tree is pruned, which increases the total impurity of its leaves.

Range: `0.0 ≤ item < inf`

seed: integer

Seed for random number generator ensuring reproducibility.

Range: `0 ≤ seed < inf`
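Finally, a sketch fixing the seed so that repeated runs of a randomized search sample the same parameter combinations. This assumes `seed` is set within the `tune` object, as its position above suggests; `label_col` is a placeholder:

```
train_classification(ds, {
    "target": "label_col",
    "tune": {
        "strategy": "random",
        "iterations": 10,
        "params": {
            "max_depth": [4, 6, 10]
        },
        "scorer": "f1_weighted",
        "seed": 42
    }
}) -> (ds.predicted, model)
```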