
Layout dataset

network · vectorize · dimensionality reduction · UMAP

Reduce the dataset to 2 dimensions that can be mapped to x/y node positions.

Based on the same vectorization and dimensionality reduction as the steps vectorize_dataset and embed_dataset, the only difference being that the number of dimensions (output columns) is fixed to 2 (corresponding to the x and y positions).

Example

This simplest example converts the input dataset ds to purely numerical form, reduces its dimensionality to just 2 using UMAP, and saves those 2 dimensions in the columns x and y. The x and y coordinates calculated via dimensionality reduction should preserve the similarity between original rows, i.e. rows that are similar in the original dataset should have coordinates close to each other.

layout_dataset(ds) -> (ds.x, ds.y)

More examples

The following example numerically converts the input dataset and reduces it to 2 dimensions: x and y. 15 neighbours are considered for each data point during dimensionality reduction (instead of the default 100), giving more importance to the similarity between nearby points and less to the global structure of the data when calculating the layout. Also, the minimum distance between points in the layout is increased from the default 0.1 to 0.8, to create a less dense layout with less overlap between points.

layout_dataset(ds, {
  "algorithm": "umap",
  "n_neighbors": 15,
  "min_dist": 0.8,
  "random_state": 42,
  "scale": 200,
}) -> (ds.x, ds.y)

Usage

The following are the step's expected inputs and outputs and their specific types.

layout_dataset(ds_in: dataset, {"param": value}) -> (x: number, y: number)

where the object {"param": value} is optional in most cases and, if present, may contain any of the parameters described in the corresponding section below.

Inputs


ds_in: dataset

An arbitrary input dataset.

Outputs


x: column:number

Column containing the x coordinate for each row.


y: column:number

Column containing the y coordinate for each row.

Parameters


algorithm: string = "umap"

Algorithm. The name of a supported dimensionality reduction algorithm.

Must be one of: "umap"


encode_features: boolean = True

Preprocess features to normalize relative distances for quantitative and categorical variables.


weights: object | null

Weights used to multiply the normalized columns/features after vectorization. Should be a dictionary/object of {"column_name": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weights of the columns are important here, not their absolute values.

Items in weights

column_weight: number

A "column_name": numeric_weight pair. Each column name must refer to an existing column in the dataset.

Example parameter values:

  • {"date": 0.5, "age": 2}

type_weights: object | null

Weights used to multiply the normalized columns/features of a given type after vectorization. Should be a dictionary/object of {"type": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied, so only the relative weights are important here, not their absolute values.

Items in type_weights

number: number

Weight for columns of type Number


datetime: number

Weight for columns of type Datetime


category: number

Weight for columns of type Category


ordinal: number

Weight for columns of type Ordinal


embedding: number

Weight for columns of type Embedding (List[Number]).


multilabel: number

Weight for columns of type Multilabel (List[Category]).
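As an illustrative sketch (the specific weights shown are arbitrary), the following call would emphasize categorical columns and de-emphasize datetime columns when calculating the layout:

layout_dataset(ds, {
  "type_weights": {"category": 2, "datetime": 0.5}
}) -> (ds.x, ds.y)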


weights_max: number = 32

Maximum weight to scale the normalized columns with.

Range: 0 ≤ weights_max < inf


weights_exp: integer = 2

Weight exponent. Weights will be raised to this power before (not after) being scaled to weights_max. This allows for a non-linear mapping from input weights to those eventually used to multiply the normalized columns.
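As a purely illustrative sketch of how the two scaling parameters interact (the exact formula is internal to the step and assumed here): given input weights {"a": 1, "b": 2} with weights_exp = 2 and weights_max = 32, raising to the power of 2 gives {1, 4}, and rescaling so that the largest value equals weights_max yields final multipliers of {8, 32}. Doubling an input weight thus quadruples its relative influence.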


n_neighbors: integer = 100

Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure.

For further details see the UMAP documentation.

Range: 1 ≤ n_neighbors < inf


min_dist: number = 0.1

Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).

For further details see the UMAP documentation.

Range: 0 ≤ min_dist < inf


n_components: integer = 2

Dimensionality of the reduced data. Fixed at 2 for the purpose of a layout's x and y coordinates.

Range: 1 ≤ n_components < inf


metric: string = "euclidean"

Metric to use for measuring similarity between data points.

Must be one of: "euclidean", "manhattan", "chebyshev", "minkowski", "canberra", "braycurtis", "haversine", "mahalanobis", "wminkowski", "seuclidean", "cosine", "correlation", "hamming", "jaccard", "dice", "russellrao", "kulsinski", "rogerstanimoto", "sokalmichener", "sokalsneath", "yule"
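For example, if the layout should be driven mainly by a text embedding column, cosine distance is often a better fit than the default euclidean metric:

layout_dataset(ds, {
  "metric": "cosine"
}) -> (ds.x, ds.y)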


n_epochs: integer | null

Number of training iterations used in optimizing the embedding. Larger values result in more accurate embeddings. If null is specified, a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small ones).


init: string = "spectral"

How to initialize the low-dimensional embedding. When "spectral", uses a spectral embedding; when "random", assigns initial embedding positions at random.

Must be one of: "spectral", "random"


low_memory: boolean = False

Avoid excessive memory use. For some datasets nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to true. This approach is more computationally expensive, but avoids excessive memory use.


target: string | null

Target variable for supervised dimensionality reduction. Name of the column that contains your target values (labels).


target_weight: number = 0.5

Weighting factor between features and target. A value of 0.0 bases the layout entirely on the data, and a value of 1.0 entirely on the target. The default of 0.5 balances data and target equally.
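As a sketch, assuming a hypothetical label column named category, the following lets the labels and the features contribute equally to the layout, so that rows with the same label tend to be drawn closer together:

layout_dataset(ds, {
  "target": "category",
  "target_weight": 0.5
}) -> (ds.x, ds.y)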


random_state: integer | null = 42

A random seed to initialize the algorithm, ensuring reproducibility.


scale: null | string | number = null

Scaling factor for the coordinates. The maximum (normalized) coordinates in positive and negative X and Y directions. Acts like a zoom, with a scale of 1 corresponding to zooming out to the maximum (maximal space between nodes), and 0 to the densest layout.

If set to "auto", will try to determine an appropriate scale taking into account the number of nodes.

If set to null, only changes the calculated coordinates to ensure they're within the allowed limits (16.000).
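For example, to let the step choose a scale appropriate for the number of nodes:

layout_dataset(ds, {
  "scale": "auto"
}) -> (ds.x, ds.y)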