# Layout dataset

`network` · `vectorize` · `dimensionality reduction` · `UMAP`
Reduce the dataset to 2 dimensions that can be mapped to x/y node positions.

Based on the same vectorization and dimensionality reduction as the steps `vectorize_dataset` and `embed_dataset`. The only difference is that the number of dimensions (output columns) is fixed to 2, corresponding to the x and y positions.

## Usage

The following are the step's expected inputs and outputs and their specific types.

```
layout_dataset(ds_in: dataset, {"param": value}) -> (x: column, y: column)
```

where the object `{"param": value}` is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.

#### Example

The following, simplest example converts the input dataset `ds` to purely numerical form, reduces its dimensionality to just 2 using UMAP, and saves those 2 dimensions in the columns `x` and `y`. The way the x and y coordinates are calculated via dimensionality reduction should preserve the similarity between original rows, i.e. rows that are similar in the original dataset should have coordinates close to each other.

```
layout_dataset(ds) -> (ds.x, ds.y)
```

## More examples

The following example will numerically convert and reduce the input dataset to 2 dimensions: x and y. 15 neighbours are considered for each data point during dimensionality reduction (instead of the default 100), so that we give more importance to the similarity between nearby points and less importance to the global structure of the data when calculating the layout. Also, the minimum distance between points in the reduced space (`min_dist`) is increased from 0.1 to 0.8, to create a less dense layout with less overlap between points.

```
layout_dataset(ds, {
"algorithm": "umap",
"n_neighbors": 15,
"min_dist": 0.8,
"random_state": 42,
"scale": 200,
}) -> (ds.x, ds.y)
```

## Inputs

ds_in: dataset

An arbitrary input dataset.

## Outputs

x: column

Column containing the x coordinate for each row.

y: column

Column containing the y coordinate for each row.

## Parameters

algorithm: string = "umap"

Algorithm. The name of a supported dimensionality reduction algorithm.

Must be one of: `"umap"`

encode_features: boolean = True

Toggle encoding of feature columns. When enabled, Graphext will auto-convert any column types to the numeric type before (optionally) reducing the data's dimensionality. How this conversion is done can be configured using the `feature_encoder` option below.

Warning

If disabled, the dimensionality reduction algorithm applied in this step will assume that input data is already numerical and doesn't contain any missing values.
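
If your data is already fully numerical and free of missing values, encoding can be skipped. A minimal sketch, assuming a hypothetical purely numerical dataset `ds_numeric` and JSON-style boolean literals:

```
layout_dataset(ds_numeric, {
"encode_features": false
}) -> (ds_numeric.x, ds_numeric.y)
```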

feature_encoder: null | object

Configures encoding of feature columns. By default (`null`), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type.

A configuration object can be passed instead to overwrite specific parameter values with respect to their default values.
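
As a sketch of such an override (the nesting follows the item descriptions below; the specific values are illustrative only), the following imputes numeric columns with the median and scales them robustly:

```
layout_dataset(ds, {
"feature_encoder": {
  "number": {"imputer": "Median", "scaler": "Robust"}
}
}) -> (ds.x, ds.y)
```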

## Items in `feature_encoder`

number: object

Numeric encoder. Configures encoding of numeric features.

## Items in `number`

indicate_missing: boolean

Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`

scaler_params: object

Further parameters passed to the `scaler` function. Details depend on the particular scaler used.

## Items in `scaler_params`

bool: object

Boolean encoder. Configures encoding of boolean features.

## Items in `bool`

indicate_missing: boolean

Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`

ordinal: object

Ordinal encoder. Configures encoding of categorical features that have a natural order.

## Items in `ordinal`

indicate_missing: boolean

Toggle the addition of a column using 0s and 1s to indicate where an input column contained missing values.

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`

category: object | object

Category encoder. May contain either a single configuration for all categorical variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `category`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`

max_categories: null | integer

Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for categorical columns with fewer than `cardinality_threshold` unique categories.

## Items in `low_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`

max_categories: null | integer

Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`

high_cardinality: object

High cardinality configuration. Used for categorical columns with more than `cardinality_threshold` unique categories.

## Items in `high_cardinality`

indicate_missing: boolean

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"MostFrequent"`, `"Const"`

max_categories: null | integer

Maximum number of unique categories to encode. Only the N-1 most common categories will be encoded, and the rest will be grouped into a single "Others" category.

Range: `1 ≤ max_categories < inf`

encoder: null | string

How to encode categories.

Must be one of: `"OneHot"`, `"Label"`, `"Ordinal"`, `"Binary"`, `"Frequency"`

scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`

multilabel: object | object

Multilabel encoder. Configures encoding of multivalued categorical features (variable length lists of categories, or the semantic type `list[category]` for short). May contain either a single configuration for all multilabel variables, or two different configurations for low- and high-cardinality variables. For further details pick one of the two options below.

## Items in `multilabel`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`

max_categories: null | integer

Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`

cardinality_treshold: integer

Condition for application of low- or high-cardinality configuration. Number of unique categories below which the `low_cardinality` configuration is used, and above which the `high_cardinality` configuration is used.

Range: `3 ≤ cardinality_treshold < inf`

low_cardinality: object

Low cardinality configuration. Used for multilabel columns with fewer than `cardinality_threshold` unique categories/labels.

## Items in `low_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`

max_categories: null | integer

Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`

high_cardinality: object

High cardinality configuration. Used for multilabel columns with more than `cardinality_threshold` unique categories/labels.

## Items in `high_cardinality`

indicate_missing: boolean

encoder: null | string

How to encode categories/labels in multilabel (list[category]) columns.

Must be one of: `"Binarizer"`, `"TfIdf"`

max_categories: null | integer

Maximum number of categories/labels to encode. If a number is provided, the result of the encoding will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. When applied together with (after a) Tf-Idf encoding, this performs a kind of latent semantic analysis.

Range: `2 ≤ max_categories < inf`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`

datetime: object

Datetime encoder. Configures encoding of datetime (timestamp) features.

## Items in `datetime`

indicate_missing: boolean

components: array[string]

A list of numerical components to extract. Will create one numeric column for each component.

## Items in `components`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"minute"`, `"month"`, `"quarter"`, `"season"`, `"second"`, `"week"`, `"weekday"`, `"weekofyear"`, `"year"`

cycles: array[string]

A list of cyclical time features to extract. "Cycles" are numerical transformations of features that should be represented on a circle. E.g. months, ranging from 1 to 12, should be arranged such that 12 and 1 are next to each other, rather than on opposite ends of a linear scale. We represent such cyclical time features on a circle by creating two columns for each original feature: the sin and cos of the numerical feature after appropriate scaling.

## Items in `cycles`

item: string

Must be one of: `"day"`, `"dayofweek"`, `"dayofyear"`, `"hour"`, `"month"`

epoch: null | boolean

Whether to include the epoch as new feature (seconds since 01/01/1970).

imputer: null | string

Whether and how to impute (replace/fill) missing values.

Must be one of: `"Mean"`, `"Median"`, `"MostFrequent"`, `"Const"`

component_scaler: null | string

Whether and how to scale the final numerical values (across a single column).

Must be one of: `"Standard"`, `"Robust"`, `"KNN"`

vector_scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`

embedding: object

Embedding/vector encoder. Configures encoding of multivalued numerical features (variable length lists of numbers, i.e. vectors, or the semantic type `list[number]` for short).

## Items in `embedding`

indicate_missing: boolean

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`

text: object

Text encoder. Configures encoding of text (natural language) features. Currently only allows Tf-Idf embeddings to represent texts. If you wish to use other embeddings, e.g. semantic, Word2Vec etc., transform your text column first using another step, and then use that result instead of the original texts.

Warning

Texts are *excluded* by default from the overall encoding of the dataset. See parameter `include_text_features` below to activate it.

## Items in `text`

indicate_missing: boolean

encoder_params: object

Parameters to be passed to the text encoder (Tf-Idf parameters only for now). See scikit-learn's documentation for detailed parameters and their explanation.

## Items in `encoder_params`

n_components: integer

How many output features to generate. The resulting Tf-Idf vectors will be reduced to these many dimensions (columns) using scikit-learn's truncated SVD. This performs a kind of latent semantic analysis. By default we will reduce to 200 components.

Range: `2 ≤ n_components ≤ 1024`

scaler: null | string

How to scale the encoded (numerical columns).

Must be one of: `"Euclidean"`, `"KNN"`, `"Norm"`

include_text_features: boolean = False

Whether to include or ignore text columns during the processing of input data. Enabling this will convert texts to their Tf-Idf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative "over-representation" of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as `embed_text` or `embed_text_with_model`.
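
A sketch of including text columns and reducing their Tf-Idf representation to 100 components (an illustrative value; boolean written JSON-style):

```
layout_dataset(ds, {
"include_text_features": true,
"feature_encoder": {
  "text": {"encoder_params": {"n_components": 100}}
}
}) -> (ds.x, ds.y)
```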

weights: object | null

Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of `{"column_name": weight, ...}` items. Will be scaled using the parameters `weights_max` and `weights_exp` before being applied, so only the relative weight of the columns is important here, not their absolute values.

## Items in `weights`

column_weight: number

A `"column_name": numeric_weight`

pair. Each column name must refer to an existing column in the dataset.

Example parameter values:

`{"date": 0.5, "age": 2}`

type_weights: object | null

Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of `{"type": weight, ...}` items. Will be scaled using the parameters `weights_max` and `weights_exp` before being applied, so only the relative weight of the columns is important here, not their absolute values.

## Items in `type_weights`

number: number

Weight for columns of type `Number`

datetime: number

Weight for columns of type `Datetime`

category: number

Weight for columns of type `Category`

ordinal: number

Weight for columns of type `Ordinal`

embedding: number

Weight for columns of type `Embedding` (`List[Number]`).

multilabel: number

Weight for columns of type `Multilabel` (`List[Category]`).
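
As a sketch (weights chosen arbitrarily), the following emphasizes categorical columns over numerical ones and de-emphasizes datetimes:

```
layout_dataset(ds, {
"type_weights": {"category": 2, "number": 1, "datetime": 0.5}
}) -> (ds.x, ds.y)
```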

weights_max: number = 32

Maximum weight to scale the normalized columns with.

Range: `0 ≤ weights_max < inf`

weights_exp: integer = 2

Weight exponent. Weights will be raised to this power before(!) scaling to `weights_max`. This allows for a non-linear mapping from input weights to those used eventually to multiply the normalized columns.

n_neighbors: integer = 100

Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure.

For further details see the UMAP documentation.

Range: `1 ≤ n_neighbors < inf`

min_dist: number = 0.1

Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).

For further details see the UMAP documentation.

Range: `0 ≤ min_dist < inf`

n_components: integer = 2

Dimensionality of the reduced data. Fixed at 2 for the purpose of a layout's x and y coordinates.

Range: `1 ≤ n_components < inf`

metric: string = "euclidean"

Metric to use for measuring similarity between data points.

Must be one of: `"euclidean"`, `"manhattan"`, `"chebyshev"`, `"minkowski"`, `"canberra"`, `"braycurtis"`, `"haversine"`, `"mahalanobis"`, `"wminkowski"`, `"seuclidean"`, `"cosine"`, `"correlation"`, `"hamming"`, `"jaccard"`, `"dice"`, `"russellrao"`, `"kulsinski"`, `"rogerstanimoto"`, `"sokalmichener"`, `"sokalsneath"`, `"yule"`

n_epochs: integer | null

Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If `null` is specified, a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).

init: string = "auto"

How to initialize the low dimensional embedding. When "spectral", uses a (relatively expensive) spectral embedding. When "random", assigns initial embedding positions at random. This uses less memory but may make UMAP slower to converge on the optimal embedding. "auto" selects between the two automatically depending on the size of the dataset.

Must be one of: `"spectral"`, `"random"`, `"auto"`

low_memory: boolean | string | null = "auto"

Avoid excessive memory use. For some datasets nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to `true`. This approach is more computationally expensive, but avoids excessive memory use. Setting it to `"auto"` will enable this mode automatically depending on the size of the dataset.

Must be one of: `True`, `False`, `"auto"`, `None`
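
A sketch of forcing the memory-saving mode (boolean written JSON-style):

```
layout_dataset(ds, {
"low_memory": true
}) -> (ds.x, ds.y)
```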

target: string | null

Target variable (labels) for supervised dimensionality reduction. Name of the column that contains your target values (labels).

target_weight: number = 0.5

Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
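
A sketch of a supervised layout, assuming a hypothetical label column named `churn` in the input dataset and keeping the default, balanced weighting:

```
layout_dataset(ds, {
"target": "churn",
"target_weight": 0.5
}) -> (ds.x, ds.y)
```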

densmap: boolean = False

Try to better preserve local densities in the data. Specifies whether the density-augmented objective of densMAP should be used for optimization. Turning on this option generates an embedding where the local densities are encouraged to be correlated with those in the original space.

dens_lambda: number = 2.0

Strength of local density preservation. Controls the regularization weight of the density correlation term in densMAP. Higher values prioritize density preservation over the UMAP objective, and vice versa for values closer to zero. Setting this parameter to zero is equivalent to running the original UMAP algorithm.
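
A sketch of enabling density preservation with the default regularization strength (boolean written JSON-style):

```
layout_dataset(ds, {
"densmap": true,
"dens_lambda": 2.0
}) -> (ds.x, ds.y)
```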

random_state: integer | null = 42

A random number to initialize the algorithm for reproducibility.

scale: null | string | number = "null"

Scaling factor for the coordinates. The maximum (normalized) coordinates in positive and negative X and Y directions. Acts like a zoom, with a scale of 1 corresponding to zooming out to the maximum (maximal space between nodes), and 0 to the densest layout.

If set to `"auto"`

, will try to determine an appropriate scale taking into account the number of nodes.

If set to `null`

, only changes calculated coordinates to ensure they're within the allowed limits (16.000).