
Train

inference · models · catboost · umap · linear regression · logistic regression

Train and store a machine learning model to be loaded at a later point for prediction.

Note that the configuration parameters depend on the specific model trained. In the parameters section below, each allowed model has its own section.

The output will always be a new column with the trained model's predictions on the training data, as well as a saved and named model file that can be used in other projects for prediction of new data.

Example

Train a CatBoost model with default parameters. The specific kind of model will be inferred from the type of the target column: classification if the target column is categorical, and regression if it is numerical:

train(ds, {"target": "class"}) -> (ds.predicted, model)
More examples

To be more explicit about which model parameters to use during training, which parameters to optimize (tune) automatically, and how to evaluate a model's performance, the following example shows a complete configuration. It sets boosting_type to "ordered" and selects the best combination of learning_rate and depth from the values specified in the tune: params configuration. To find the best parameters, it performs 5-fold cross-validation on each combination, using the scorer f1_weighted to measure performance. accuracy will also be measured, but only for reporting purposes. The best parameter combination will then be evaluated on a single split of the dataset (with 20% of rows used for testing and 80% for training), with metrics selected automatically. Note that the final model will always be re-trained on the whole dataset!

train(ds, {
  "target": "label_col",
  "model": "CatboostRegressor",
  "params": {
    "boosting_type": "ordered"
  },
  "tune": {
    "strategy": "grid",
    "params": {
      "learning_rate": [0.03, 0.1],
      "depth": [4, 6, 10]
    },
    "validate": {
      "n_splits": 5,
      "metrics": ["f1_weighted", "accuracy"]
    },
    "scorer": "f1_weighted"
  },
  "validate": {
    "n_splits": 1,
    "test_size": 0.2
  }
}) -> (ds.predicted, model)

Usage

The following are the step's expected inputs and outputs and their specific types.

train(ds: dataset, {"param": value}) -> (predicted: column, model: model[ds])

Inputs


ds: dataset

Should contain the target column and the feature columns you wish to use in the model.

Outputs


predicted: column

Column containing the trained model's predictions on the training data.


model: file:model[ds]

Zip file containing the trained model and associated information.

Parameters

model: string = "CatboostClassifier"

Train a CatBoost classifier, i.e. gradient-boosted decision trees with support for categorical variables and missing values.


target: string

Target variable (labels). Name of the column that contains your target values (labels).


feature_encoder: null | string = "auto"

Configures encoding of feature columns. In "auto" mode, graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. When null is selected, features will not be processed in any way.

Warning

When using null, the data must already be in a format compatible with the selected kind of model!

Must be one of: "auto"


params: object

CatBoost configuration parameters. See the official CatBoost documentation for further details about its parameters.

Items in params

depth: integer = 6

The maximum depth of the tree.

Range: 2 ≤ depth ≤ 16


iterations: integer | null = 1000

Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: 1 ≤ iterations < inf


one_hot_max_size: integer = 10

Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Other variables will be target-encoded. See CatBoost details for further information.

Range: 2 ≤ one_hot_max_size < inf


max_ctr_complexity: number = 2

The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: 1 ≤ max_ctr_complexity ≤ 4


l2_leaf_reg: number = 3.0

Coefficient at the L2 regularization term of the cost function.

Range: 0.0 < l2_leaf_reg < inf


border_count: integer = 254

The number of splits for numerical features.

Range: 1 ≤ border_count ≤ 65535


random_strength: number = 1.0

The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.

Range: 0 < random_strength < inf


nan_mode: string = "Min"

The method for processing missing values in the input dataset. Possible values:

  • “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
  • “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
  • “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.

Must be one of: "Forbidden", "Min", "Max"


random_seed: integer = 0

The random seed used for training.


boosting_type: string = "Plain"

Boosting scheme. Possible values are:

  • Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
  • Plain: The classic gradient boosting scheme.

Must be one of: "Ordered", "Plain"


used_ram_limit: string | null

Whether and how to limit memory usage. Specify the maximum RAM to use with strings like "2GB" or "100mb" (not case-sensitive).


auto_class_weights: string | null = "Balanced"

Whether and how to assign weights to different predicted classes. The options are:

  • null: No class weighting
  • Balanced: Inversely proportional to the number of samples/rows in each class
  • SqrtBalanced: Using the square root of the "Balanced" option.

Must be one of: "Balanced", "SqrtBalanced"


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


tune: object

Configure hyperparameter tuning. Configures the optimization of model hyperparameters via cross-validated grid or randomized search.

Items in tune

strategy: string = "grid"

Which search strategy to use for optimization. Grid search explores all possible combinations of parameters specified in params. Randomized search, on the other hand, randomly samples iterations parameter combinations from the distributions specified in params.

Must be one of: "grid", "random"


iterations: integer = 10

How many randomly sampled parameter combinations to test in randomized search.

Range: 1 < iterations < inf


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.


scorer: string

Metric used to select best model.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


params: object

The parameter values to explore. Allows tuning of any and all of the parameters that can be set also as constants in the "params" attribute.

Keys in this object should be strings identifying parameter names, and values should be lists of values to explore for that parameter. E.g. "depth": [3, 5, 7].


Items in params

depth: array[integer]

List of depth values to explore.

Items in depth

item: integer = 6

The maximum depth of the tree.

Range: 2 ≤ item ≤ 16


iterations: array[integer | null]

List of iterations values to explore.

Items in iterations

item: integer | null = 1000

Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: 1 ≤ item < inf


one_hot_max_size: array[integer]

List of values configuring max cardinality for one-hot encoding.

Items in one_hot_max_size

item: integer = 10

Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Other variables will be target-encoded. See CatBoost details for further information.

Range: 2 ≤ item < inf


max_ctr_complexity: array[number]

List of values configuring variable combination complexity.

Items in max_ctr_complexity

item: number = 2

The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: 1 ≤ item ≤ 4


l2_leaf_reg: array[number]

List of leaf regularization strengths.

Items in l2_leaf_reg

item: number = 3.0

Coefficient at the L2 regularization term of the cost function.

Range: 0.0 < item < inf


border_count: array[integer]

List of border counts.

Items in border_count

item: integer = 254

The number of splits for numerical features.

Range: 1 ≤ item ≤ 65535


random_strength: array[number]

List of random strengths.

Items in random_strength

item: number = 1.0

The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.

Range: 0 < item < inf


seed: integer

Seed for random number generator ensuring reproducibility.

Range: 0 ≤ seed < inf
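
Putting the classifier options together, the following sketch combines fixed parameters with a randomized search over depth and l2_leaf_reg. The target column name and the specific values are illustrative, not recommendations:

train(ds, {
  "target": "class",
  "model": "CatboostClassifier",
  "params": {
    "auto_class_weights": "Balanced"
  },
  "tune": {
    "strategy": "random",
    "iterations": 20,
    "params": {
      "depth": [4, 6, 8, 10],
      "l2_leaf_reg": [1.0, 3.0, 10.0]
    },
    "validate": {"n_splits": 5},
    "scorer": "balanced_accuracy"
  },
  "seed": 42
}) -> (ds.predicted, model)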

model: string = "CatboostRegressor"

Train a CatBoost regressor, i.e. gradient-boosted decision trees with support for categorical variables and missing values.


target: string

Target variable (labels). Name of the column that contains your target values (labels).


feature_encoder: null | string = "auto"

Configures encoding of feature columns. In "auto" mode, graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. When null is selected, features will not be processed in any way.

Warning

When using null, the data must already be in a format compatible with the selected kind of model!

Must be one of: "auto"


params: object

CatBoost configuration parameters. See the official CatBoost documentation for further details about its parameters.

Items in params

depth: integer = 6

The maximum depth of the tree.

Range: 2 ≤ depth ≤ 16


iterations: integer | null = 1000

Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: 1 ≤ iterations < inf


one_hot_max_size: integer = 10

Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Other variables will be target-encoded. See CatBoost details for further information.

Range: 2 ≤ one_hot_max_size < inf


max_ctr_complexity: number = 2

The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: 1 ≤ max_ctr_complexity ≤ 4


l2_leaf_reg: number = 3.0

Coefficient at the L2 regularization term of the cost function.

Range: 0.0 < l2_leaf_reg < inf


border_count: integer = 254

The number of splits for numerical features.

Range: 1 ≤ border_count ≤ 65535


random_strength: number = 1.0

The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.

Range: 0 < random_strength < inf


nan_mode: string = "Min"

The method for processing missing values in the input dataset. Possible values:

  • “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
  • “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
  • “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.

Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.

Must be one of: "Forbidden", "Min", "Max"


random_seed: integer = 0

The random seed used for training.


boosting_type: string = "Plain"

Boosting scheme. Possible values are:

  • Ordered: Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
  • Plain: The classic gradient boosting scheme.

Must be one of: "Ordered", "Plain"


used_ram_limit: string | null

Whether and how to limit memory usage. Specify the maximum RAM to use with strings like "2GB" or "100mb" (not case-sensitive).


auto_class_weights: string | null = "Balanced"

Whether and how to assign weights to different predicted classes. The options are:

  • null: No class weighting
  • Balanced: Inversely proportional to the number of samples/rows in each class
  • SqrtBalanced: Using the square root of the "Balanced" option.

Must be one of: "Balanced", "SqrtBalanced"


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


tune: object

Configure hyperparameter tuning. Configures the optimization of model hyperparameters via cross-validated grid or randomized search.

Items in tune

strategy: string = "grid"

Which search strategy to use for optimization. Grid search explores all possible combinations of parameters specified in params. Randomized search, on the other hand, randomly samples iterations parameter combinations from the distributions specified in params.

Must be one of: "grid", "random"


iterations: integer = 10

How many randomly sampled parameter combinations to test in randomized search.

Range: 1 < iterations < inf


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.


scorer: string

Metric used to select best model.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


params: object

The parameter values to explore. Allows tuning of any and all of the parameters that can be set also as constants in the "params" attribute.

Keys in this object should be strings identifying parameter names, and values should be lists of values to explore for that parameter. E.g. "depth": [3, 5, 7].


Items in params

depth: array[integer]

List of depth values to explore.

Items in depth

item: integer = 6

The maximum depth of the tree.

Range: 2 ≤ item ≤ 16


iterations: array[integer | null]

List of iterations values to explore.

Items in iterations

item: integer | null = 1000

Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

Range: 1 ≤ item < inf


one_hot_max_size: array[integer]

List of values configuring max cardinality for one-hot encoding.

Items in one_hot_max_size

item: integer = 10

Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. Other variables will be target-encoded. See CatBoost details for further information.

Range: 2 ≤ item < inf


max_ctr_complexity: array[number]

List of values configuring variable combination complexity.

Items in max_ctr_complexity

item: number = 2

The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.

Range: 1 ≤ item ≤ 4


l2_leaf_reg: array[number]

List of leaf regularization strengths.

Items in l2_leaf_reg

item: number = 3.0

Coefficient at the L2 regularization term of the cost function.

Range: 0.0 < item < inf


border_count: array[integer]

List of border counts.

Items in border_count

item: integer = 254

The number of splits for numerical features.

Range: 1 ≤ item ≤ 65535


random_strength: array[number]

List of random strengths.

Items in random_strength

item: number = 1.0

The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.

Range: 0 < item < inf


seed: integer

Seed for random number generator ensuring reproducibility.

Range: 0 ≤ seed < inf
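
An analogous sketch for the regressor, again with an illustrative target column name and parameter values, could evaluate with regression metrics:

train(ds, {
  "target": "price",
  "model": "CatboostRegressor",
  "params": {
    "iterations": 500,
    "depth": 8,
    "nan_mode": "Min"
  },
  "validate": {
    "n_splits": 5,
    "metrics": ["r2", "neg_root_mean_squared_error"]
  }
}) -> (ds.predicted, model)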

model: string = "LinearRegression"

Trains a linear regression. The specific kind of linear regression trained here is an "elastic net", which allows for a blend of ridge and lasso regularization to prevent overfitting. The mix as well as the strength of this regularization is automatically tuned using 5-fold cross-validation. See sklearn's ElasticNetCV for further details.


target: string

Target variable (labels). Name of the column that contains your target values (labels).


feature_encoder: null | string = "auto"

Configures encoding of feature columns. In "auto" mode, graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. When null is selected, features will not be processed in any way.

Warning

When using null, the data must already be in a format compatible with the selected kind of model!

Must be one of: "auto"


params: object

Model parameters. Constant parameters to configure before training.

Items in params

l1_ratio: number | array | null = [0.1, 0.5, 0.7, 0.9, 0.95, 0.99, 1]

Relative weight of l1 norm penalty vs l2 norm penalty. An l1-ratio of 0 means l2 penalty only (euclidean norm), resulting in a ridge regression penalizing large coefficients proportional to their sum of squares. An l1-ratio of 1.0 means l1 penalty only (taxicab/manhattan norm), i.e. proportional to the sum of absolute coefficient values. This has the tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features used in the optimized model.


n_alphas: integer = 100

Number of alphas (regularization strengths) to test for each l1_ratio.

Range: 10 ≤ n_alphas < inf


normalize: boolean = False

Feature normalization. Whether to normalize features before regression by subtracting the mean and dividing by the l2-norm. Note that by default features will automatically be pre-processed, so you may want to enable this only after disabling feature encoding first.


max_iter: integer = 1000

Maximum number of iterations of the optimization algorithm. Try increasing this if you suspect the algorithm doesn't reach the performance you'd expect.

Range: 100 ≤ max_iter < inf


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


seed: integer

Seed for random number generator ensuring reproducibility.

Range: 0 ≤ seed < inf
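
As a sketch of the elastic-net regression parameters described above (the target column name and values are illustrative):

train(ds, {
  "target": "price",
  "model": "LinearRegression",
  "params": {
    "l1_ratio": [0.1, 0.5, 1.0],
    "n_alphas": 50,
    "max_iter": 5000
  },
  "validate": {
    "metrics": ["r2", "neg_mean_squared_error"]
  }
}) -> (ds.predicted, model)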

model: string = "LogisticRegression"

Trains a logistic regression. The specific kind of logistic regression trained here uses "elastic net" regularization, which allows for a blend of ridge and lasso penalties to prevent overfitting. The mix as well as the strength of this regularization is automatically tuned using 5-fold cross-validation. See sklearn's LogisticRegressionCV for further details.


target: string

Target variable (labels). Name of the column that contains your target values (labels).


feature_encoder: null | string = "auto"

Configures encoding of feature columns. In "auto" mode, graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. When null is selected, features will not be processed in any way.

Warning

When using null, the data must already be in a format compatible with the selected kind of model!

Must be one of: "auto"


params: object

Model parameters. Constant parameters to configure before training.

Items in params

Cs: integer | array = 10

Regularization strengths to explore. Smaller values specify stronger regularization. If Cs is an integer, a grid of C values is chosen on a logarithmic scale between 1e-4 and 1e4.

Range: 1 < Cs < inf


l1_ratios: array | null = [0.1, 0.5, 0.7, 0.9, 0.95, 0.99, 1]

Relative weights of l1 norm penalty vs l2 norm penalty to explore. An l1-ratio of 0 means l2 penalty only (euclidean norm), resulting in a ridge regression penalizing large coefficients proportional to their sum of squares. An l1-ratio of 1.0 means l1 penalty only (taxicab/manhattan norm), i.e. proportional to the sum of absolute coefficient values. This has the tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features used in the optimized model.


max_iter: integer = 1000

Maximum number of iterations of the optimization algorithm. Try increasing this if you suspect the algorithm doesn't reach the performance you'd expect.

Range: 100 ≤ max_iter < inf


scoring: string = "accuracy"

A scoring metric for evaluating the model.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


validate: object | null

Configure model validation. Allows evaluation of model performance via cross-validation with custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.

Items in validate

n_splits: integer | null = 5

Number of train-test splits to evaluate the model on. Will split the dataset into training and test set n_splits times, train on the former and evaluate on the latter using specified or automatically selected metrics.


test_size: number | null

What proportion of the data to use for testing in each split. If null or not provided, will use cross-validation to split the dataset. E.g. if n_splits is 5, the dataset will be split into 5 equal-sized parts. For five iterations four parts will then be used for training and the remaining part for testing. If test_size is a number between 0 and 1, in contrast, validation is done using a shuffle-split approach. Here, instead of splitting the data into n_splits equal parts up front, in each iteration we randomize the data and sample a proportion equal to test_size to use for evaluation and the remaining rows for training.

Range: 0 < test_size < 1


metrics: null | array[string]

One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.

Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"


seed: integer

Seed for random number generator ensuring reproducibility.

Range: 0 ≤ seed < inf
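
A corresponding sketch for logistic regression, using the parameters above with an illustrative binary target column:

train(ds, {
  "target": "churned",
  "model": "LogisticRegression",
  "params": {
    "Cs": 20,
    "l1_ratios": [0.1, 0.5, 1.0],
    "scoring": "f1_weighted"
  },
  "validate": {
    "n_splits": 5
  }
}) -> (ds.predicted, model)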

model: string = "UMAP"

Dimensionality reduction with "Uniform Manifold Approximation and Projection" (UMAP). Generates numeric embeddings (vectors) of the input data with reduced dimensionality, preserving local and global similarities between data points. Can be used for visualisation, for example, to arrange data in 2 dimensions according to their similarity, or to create nearest neighbour graphs/networks (also see step link_embeddings in the latter case).

Can be used in supervised mode (providing a target column as parameter) or unsupervised (without target).


target: string

Target variable (labels). Name of the column that contains your target values (labels).


params: object

Model parameters. See official UMAP documentation for details.

Items in params

n_neighbors: integer = 15

Number of neighbors. This determines the number of neighboring points used in local approximations of manifold structure. Larger values will result in more global structure being preserved at the loss of detailed local structure. In general this parameter should often be in the range 5 to 50, with a choice of 10 to 15 being a sensible default.

Range: 1 ≤ n_neighbors < inf


min_dist: number = 0.1

Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).

For further details see here.

Range: 0 ≤ min_dist < inf


n_components: integer

Number of components. Determines the dimensionality of the reduced space into which the data will be embedded.

Range: 1 ≤ n_components < inf


metric: string = "euclidean"

Metric to use for measuring similarity between data points.

Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"


n_epochs: integer | null

Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If null is specified a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).


init: string = "spectral"

How to initialize the low dimensional embedding. When ‘spectral’, uses a spectral embedding; when ‘random’, assigns initial embedding positions at random.

Must be one of: "spectral", "random"


low_memory: boolean = False

Avoid excessive memory use. For some datasets nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to true. This approach is more computationally expensive, but avoids excessive memory use.


target_weight: number = 0.5

Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.


seed: integer

Seed for random number generator ensuring reproducibility.

Range: 0 ≤ seed < inf
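
In unsupervised mode (no target), UMAP could for example be configured to produce a 2-dimensional embedding for visualization. A sketch only; the output column name is user-chosen and illustrative:

train(ds, {
  "model": "UMAP",
  "params": {
    "n_components": 2,
    "n_neighbors": 15,
    "min_dist": 0.1,
    "metric": "euclidean"
  }
}) -> (ds.embedding, model)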

target: string

Target variable (labels). Name of the column that contains your target values (labels).


model: string = "Survival"

Train a survival model.


params: object

Model parameters. Parameters for survival models.

Items in params

alpha: number = 0.05

Alpha. The alpha level used for the confidence intervals.


tie_method: string = "Efron"

Tie Method. Specifies how the fitter should deal with ties. Currently only ‘Efron’ is available.

Must be one of: "Efron"


penalizer: number = 1

Penalizer. Attach an L2 penalizer to the size of the coefficients during regression. This improves stability of the estimates and controls for high correlation between covariates, shrinking the magnitude of the coefficients 𝛽𝑖. The penalty is penalizer * ||𝛽||² / 2.


strata: array[string]

Strata. Specify a list of columns to use in stratification. This is useful if a categorical covariate does not obey the proportional hazard assumption. This is used similarly to the strata expression in R. See http://courses.washington.edu/b515/l17.pdf.
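
A minimal sketch of a survival model configuration using the parameters above; the target and strata column names are illustrative:

train(ds, {
  "target": "time_to_event",
  "model": "Survival",
  "params": {
    "penalizer": 0.5,
    "strata": ["treatment_group"]
  }
}) -> (ds.predicted, model)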