Train regression¶
inference • models • catboost • linear regression
Train and store a regression model to be loaded at a later point for prediction.
Note that the configuration parameters depend on the specific model trained. In the parameters section below, each allowed model has its own section.
The output will always be a new column with the trained model's predictions on the training data, as well as a saved and named model file that can be used in other projects for prediction of new data.
A detailed guide on how to configure this step for model tuning and performance evaluation can be found here.
Usage¶
The following are the step's expected inputs and outputs and their specific types.
train_regression(ds: dataset, {"param": value}) -> (predicted: number, model: model_regression[ds])
where the object {"param": value} is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.
Example¶
Train a regression model with default parameters. By default, a Catboost model will be trained, but this can be changed to any of the supported models by specifying the model parameter (see below for details):
train_regression(ds, {"target": "class"}) -> (ds.predicted, model)
More examples
To be more explicit about which model parameters to use during training, which parameters to optimize (tune) automatically, and how to evaluate the model's performance, the following example shows a complete configuration. It will explicitly select the CatboostRegressor as the model, set boosting_type to "ordered", and select the best combination of learning_rate and depth from the values specified in the tune: params configuration. To find the best parameters, it will perform 5-fold cross-validation on each combination, and will use the scorer r2 to measure the performance. neg_mean_squared_error will also be measured, but only for the purpose of reporting. The best parameter combination will then be evaluated on a single split of the dataset (with 20% of rows used for testing and 80% for training), with metrics selected automatically. Note that the final model will always be re-trained on the whole dataset!
train_regression(ds, {
"target": "label_col",
"model": "CatboostRegressor",
"params": {
"boosting_type": "ordered"
},
"tune": {
"strategy": "grid",
"params": {
"learning_rate": [0.03, 0.1],
"depth": [4, 6, 10]
},
"validate": {
"n_splits": 5,
"metrics": ["f1_weighted", "accuracy"]
},
"scorer": "f1_weighted"
},
"validate": {
"n_splits": 1,
"test_size": 0.2
}
}) -> (ds.predicted, model)
Inputs¶
ds: dataset
Should contain the target column and the feature columns you wish to use in the model.
Outputs¶
predicted: column:number
Column containing results of the model.
model: file:model_regression[ds]
Zip file containing the trained model and associated information.
Parameters¶
model: string = "CatboostRegressor"
Trains a Catboost regressor, i.e. gradient-boosted decision trees with support for categorical variables and missing values.
target: string
Target variable (labels). Name of the column that contains your target values (labels).
encode_features: boolean = True
Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the feature_encoder option below.
Warning
If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).
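For example, if the dataset has already been cleaned and converted to purely numeric feature columns in earlier steps, encoding can be skipped. A minimal sketch, assuming a hypothetical target column "price" and already-numeric features:

train_regression(ds, {
  "target": "price",
  "encode_features": false
}) -> (ds.predicted, model)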
feature_encoder: null | object
Configures encoding of feature columns. By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
Items in feature_encoder
number: object
Numeric encoder. Configures encoding of numeric features.
Items in number
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
scaler_params: object
Further parameters passed to the scaler function. Details depend on the particular scaler used.
Items in scaler_params
bool: object
Boolean encoder. Configures encoding of boolean features.
Items in bool
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
ordinal: object
Ordinal encoder. Configures encoding of categorical features that have a natural order.
Items in ordinal
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
category: object | object
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
Items in category
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality
configuration is used,
and above which the high_cardinality
configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for categories with fewer than cardinality_threshold
unique categories.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
high_cardinality: object
High cardinality configuration. Used for categories with more than cardinality_threshold
unique categories.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
multilabel: object | object
Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories,
or the semantic type list[category]
for short). May contain either a single configuration for
all multilabel variables, or two different configurations for low- and high-cardinality variables.
For further details pick one of the two options below.
Items in multilabel
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality
configuration is used,
and above which the high_cardinality
configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for multilabel columns with fewer than cardinality_threshold unique categories/labels.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
high_cardinality: object
High cardinality configuration. Used for multilabel columns with more than cardinality_threshold unique categories/labels.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
datetime: object
Datetime encoder. Configures encoding of datetime (timestamp) features.
Items in datetime
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
components: array[string]
A list of numerical components to extract. Will create one numeric column for each component.
Items in components
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "minute", "month", "quarter", "season", "second", "week", "weekday", "weekofyear", "year"
cycles: array[string]
A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
Items in cycles
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "month"
epoch: null | boolean
Whether to include the epoch as a new feature (seconds since 01/01/1970).
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
component_scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
vector_scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
embedding: object
Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number]
for short).
Items in embedding
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
text: object
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.
Warning
Texts are excluded by default from the overall encoding of the dataset. See parameter include_text_features below to activate it.
Items in text
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder_params: object
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.
Items in encoder_params
n_components: integer
How many output features to generate. The resulting Tf-Idf vectors will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.
Range: 2 ≤ n_components ≤ 1024
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
include_text_features: boolean = False
Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their TfIdf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model.
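As a concrete illustration of overriding only part of the default encoding, the following sketch adjusts the numeric, categorical and datetime encoders and opts into text features; the target column "price" is hypothetical and all values are taken from the allowed options above:

train_regression(ds, {
  "target": "price",
  "include_text_features": true,
  "feature_encoder": {
    "number": {"imputer": "Median", "scaler": "Standard"},
    "category": {"encoder": "OneHot", "max_categories": 20},
    "datetime": {"components": ["year", "month", "dayofweek"], "cycles": ["month", "dayofweek"]}
  }
}) -> (ds.predicted, model)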
params: object
CatBoost configuration parameters. You can check the official documentation for more details about Catboost's parameters here.
Items in params
depth: integer = 6
The maximum depth of the tree.
Range: 2 ≤ depth ≤ 16
iterations: integer | null = 1000
Number of iterations. The maximum number of trees that can be built when solving machine learning problems. When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.
Range: 1 ≤ iterations < inf
one_hot_max_size: integer = 10
Maximum cardinality of variables to be one-hot encoded. Use one-hot encoding for all categorical features with a number of different values less than or equal to this value. Other variables will be target-encoded. Note that one-hot encoding is faster than the alternatives, so decreasing this value makes it more likely slower methods will be used. See CatBoost details for further information.
Range: 2 ≤ one_hot_max_size < inf
max_ctr_complexity: number = 2
The maximum number of features that can be combined when transforming categorical variables. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: “numeric feature > value”.
Range: 1 ≤ max_ctr_complexity ≤ 4
l2_leaf_reg: number = 3.0
Coefficient at the L2 regularization term of the cost function.
Range: 0.0 < l2_leaf_reg < inf
border_count: integer = 254
The number of splits for numerical features.
Range: 1 ≤ border_count ≤ 65535
random_strength: number = 1.0
The amount of randomness to use for scoring splits. Use this parameter to avoid overfitting the model. The value multiplies the variance of a random variable (with zero mean) that is added to the score used to select splits when a tree is grown.
Range: 0 < random_strength < inf
nan_mode: string = "Min"
The method for processing missing values in the input dataset. Possible values:
- “Forbidden”: Missing values are not supported, their presence is interpreted as an error.
- “Min”: Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
- “Max”: Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
Using the Min or Max value of this parameter guarantees that a split between missing values and other values is considered when selecting a new split in the tree.
Must be one of: "Forbidden", "Min", "Max"
boosting_type: string = "Plain"
Boosting type. The boosting scheme to use. Possible values are:
- Ordered: Usually provides better quality on small datasets, but may be slower than the Plain scheme.
- Plain: The classic gradient boosting scheme.
Must be one of: "Ordered", "Plain"
rsm: number | null
Random subspace method. The percentage of features to use at each split selection, when features are selected over again at random. The value null
is equivalent to 1.0 (all features). You can set this to values < 1.0 when the dataset has many features (e.g. > 20) to speed up training.
Range: 0 < rsm ≤ 1.0
random_seed: integer = 0
The random seed used for training.
used_ram_limit: string | null
Whether and how to limit memory usage. Set the maximum RAM used with strings like "2GB" or "100mb" (case-insensitive).
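To make the CatBoost parameters concrete, here is a hedged sketch combining a few of the options above; the values are illustrative only, not tuned recommendations, and the target column "price" is hypothetical:

train_regression(ds, {
  "target": "price",
  "model": "CatboostRegressor",
  "params": {
    "depth": 8,
    "iterations": 500,
    "rsm": 0.8,
    "nan_mode": "Min",
    "used_ram_limit": "2GB"
  }
}) -> (ds.predicted, model)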
validate: object | null
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Items in validate
n_splits: integer | null = 5
Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using the specified or automatically selected metrics.
Range: 1 ≤ n_splits < inf
test_size: number | null
What proportion of the data to use for testing in each split. If null
or not provided, will use k-fold cross-validation
to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts.
For five iterations four parts will then be used for training and the remaining part for testing.
If test_size
is a number between 0 and 1, in contrast, validation is done using a
shuffle-split
approach. Here, instead of splitting the data into n_splits
equal parts up front, in each iteration
we randomize the data and sample a proportion equal to test_size
to use for evaluation and the remaining
rows for training.
Range: 0 < test_size < 1
metrics: null | array[string]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"
tune: object
Configure hypertuning. Configures the optimization of model hyper-parameters via cross-validated grid- or randomized search.
Items in tune
strategy: string = "grid"
Which search strategy to use for optimization. Grid search explores all possible combinations of parameters specified in params. Randomized search, on the other hand, randomly samples iterations parameter combinations from the distributions specified in params.
Must be one of: "grid", "random"
iterations: integer = 10
How many randomly sampled parameter combinations to test in randomized search.
Range: 1 < iterations < inf
validate: object | null
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Items in validate
n_splits: integer | null = 5
Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using the specified or automatically selected metrics.
Range: 1 ≤ n_splits < inf
test_size: number | null
What proportion of the data to use for testing in each split. If null
or not provided, will use k-fold cross-validation
to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts.
For five iterations four parts will then be used for training and the remaining part for testing.
If test_size
is a number between 0 and 1, in contrast, validation is done using a
shuffle-split
approach. Here, instead of splitting the data into n_splits
equal parts up front, in each iteration
we randomize the data and sample a proportion equal to test_size
to use for evaluation and the remaining
rows for training.
Range: 0 < test_size < 1
metrics: null | array[string]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"
scorer: string
Metric used to select best model.
Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"
seed: integer
Seed for random number generator ensuring reproducibility.
Range: 0 ≤ seed < inf
model: string = "LinearRegression"
Trains a linear regression. The specific kind of linear regression trained here is an "elastic net", which allows for a blend of ridge and lasso regularization to prevent overfitting. The mix as well as the strength of this regularization is automatically tuned using 5-fold cross-validation. See sklearn's ElasticNetCV for further details.
target: string
Target variable (labels). Name of the column that contains your target values (labels).
encode_features: boolean = True
Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before fitting the model. How this conversion is done can be configured using the feature_encoder option below.
Warning
If disabled, any model trained in this step will assume that input data is already in an appropriate format (e.g. numerical and not containing any missing values).
feature_encoder: null | object
Configures encoding of feature columns. By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
Items in feature_encoder
number: object
Numeric encoder. Configures encoding of numeric features.
Items in number
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
scaler_params: object
Further parameters passed to the scaler function. Details depend on the particular scaler used.
Items in scaler_params
bool: object
Boolean encoder. Configures encoding of boolean features.
Items in bool
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
ordinal: object
Ordinal encoder. Configures encoding of categorical features that have a natural order.
Items in ordinal
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
category: object | object
Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.
Items in category
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality
configuration is used,
and above which the high_cardinality
configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for categories with fewer than cardinality_threshold
unique categories.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
high_cardinality: object
High cardinality configuration. Used for categories with more than cardinality_threshold
unique categories.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "MostFrequent", "Const", None
max_categories: null | integer
Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.
Range: 1 ≤ max_categories < inf
encoder: null | string
How to encode categories.
Must be one of: "OneHot", "Label", "Ordinal", "Binary", "Frequency", None
scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
multilabel: object | object
Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories,
or the semantic type list[category]
for short). May contain either a single configuration for
all multilabel variables, or two different configurations for low- and high-cardinality variables.
For further details pick one of the two options below.
Items in multilabel
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
cardinality_treshold: integer
Condition for application of low- or high-cardinality configuration. Number of unique categories below which the low_cardinality
configuration is used,
and above which the high_cardinality
configuration is used.
Range: 3 ≤ cardinality_treshold < inf
low_cardinality: object
Low cardinality configuration. Used for multilabel columns with fewer than cardinality_threshold unique categories/labels.
Items in low_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
high_cardinality: object
High cardinality configuration. Used for multilabel columns with more than cardinality_threshold unique categories/labels.
Items in high_cardinality
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder: null | string
How to encode categories/labels in multilabel (list[category]) columns.
Must be one of: "Binarizer", "TfIdf", None
max_categories: null | integer
Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.
Range: 2 ≤ max_categories < inf
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
datetime: object
Datetime encoder. Configures encoding of datetime (timestamp) features.
Items in datetime
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
components: array[string]
A list of numerical components to extract. Will create one numeric column for each component.
Items in components
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "minute", "month", "quarter", "season", "second", "week", "weekday", "weekofyear", "year"
cycles: array[string]
A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.
Items in cycles
item: string
Must be one of: "day", "dayofweek", "dayofyear", "hour", "month"
epoch: null | boolean
Whether to include the epoch as a new feature (seconds since 01/01/1970).
imputer: null | string
Whether and how to impute (replace/fill) missing values.
Must be one of: "Mean", "Median", "MostFrequent", "Const", None
component_scaler: null | string
Whether and how to scale the final numerical values (across a single column).
Must be one of: "Standard", "Robust", "KNN", None
vector_scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
embedding: object
Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type list[number]
for short).
Items in embedding
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
text: object
Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.
Warning
Texts are excluded by default from the overall encoding of the dataset. See parameter include_text_features below to activate it.
Items in text
indicate_missing: boolean
Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.
encoder_params: object
Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.
Items in encoder_params
n_components: integer
How many output features to generate. The resulting Tf-Idf vectors will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.
Range: 2 ≤ n_components ≤ 1024
scaler: null | string
How to scale the encoded (numerical columns).
Must be one of: "Euclidean", "KNN", "Norm", None
include_text_features: boolean = False
Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their TfIdf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model.
params: object
Model parameters. Constant parameters to configure before training.
Items in params
l1_ratio: number | array | null = [0.1, 0.5, 0.7, 0.9, 0.95, 0.99, 1]
Relative weight of l1 norm penalty vs l2 norm penalty. An l1-ratio of 0 means l2 penalty only (euclidean norm), resulting in a ridge regression penalizing large coefficients proportional to their sum of squares. An l1-ratio of 1.0 means l1 penalty only (taxicab/manhattan norm), i.e. proportional to the sum of absolute coefficient values. This has the tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features used in the optimized model.
n_alphas: integer = 100
Number of alphas (regularization strengths) to test for each l1_ratio.
Range: 10 ≤ n_alphas < inf
normalize: boolean = False
Feature normalization. Whether to normalize features before regression by subtracting the mean and dividing by the l2-norm. Note that by default features will automatically be pre-processed, so you may want to enable this only after disabling feature encoding first.
max_iter: integer = 1000
Maximum number of iterations of the optimization algorithm. Try increasing this if you suspect the algorithm doesn't reach the performance you'd expect.
Range: 100 ≤ max_iter < inf
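A minimal sketch selecting the linear model and adjusting a couple of its parameters (the target column name is hypothetical and the values are illustrative only):

train_regression(ds, {
  "target": "price",
  "model": "LinearRegression",
  "params": {
    "l1_ratio": [0.1, 0.5, 0.9],
    "max_iter": 5000
  }
}) -> (ds.predicted, model)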
validate: object | null
Configure model validation. Allows evaluation of model performance via cross-validation using custom metrics. If not specified, will by default perform 5-fold cross-validation with automatically selected metrics.
Items in validate
n_splits: integer | null = 5
Number of train-test splits to evaluate the model on. Will split the dataset into training and test sets n_splits times, train on the former and evaluate on the latter using the specified or automatically selected metrics.
Range: 1 ≤ n_splits < inf
test_size: number | null
What proportion of the data to use for testing in each split. If null
or not provided, will use k-fold cross-validation
to split the dataset. E.g. if n_splits
is 5, the dataset will be split into 5 equal-sized parts.
For five iterations four parts will then be used for training and the remaining part for testing.
If test_size
is a number between 0 and 1, in contrast, validation is done using a
shuffle-split
approach. Here, instead of splitting the data into n_splits
equal parts up front, in each iteration
we randomize the data and sample a proportion equal to test_size
to use for evaluation and the remaining
rows for training.
Range: 0 < test_size < 1
metrics: null | array[string]
One or more metrics/scoring functions to evaluate the model with. When none is provided, will measure default metrics appropriate for the prediction task (classification vs. regression determined from model or type of target column). See sklearn model evaluation for further details.
Must be one of: "accuracy", "balanced_accuracy", "explained_variance", "f1_micro", "f1_macro", "f1_samples", "f1_weighted", "neg_mean_squared_error", "neg_median_absolute_error", "neg_root_mean_squared_error", "precision_micro", "precision_macro", "precision_samples", "precision_weighted", "recall_micro", "recall_macro", "recall_samples", "recall_weighted", "r2"
seed: integer
Seed for random number generator ensuring reproducibility.
Range: 0 ≤ seed < inf