layout_dataset
Reduce the dataset to 2 dimensions that can be mapped to x/y node positions.
Based on the same vectorization and dimensionality reduction as the steps vectorize_dataset and embed_dataset. The only difference is that the number of dimensions (output columns) is fixed to 2 (corresponding to x and y positions).
Usage
The following examples show how the step can be used in a recipe.
The following, simplest, example will convert the input dataset ds to purely numerical form, reduce its dimensionality to just 2 using UMAP, and save those 2 dimensions in the columns x and y. The dimensionality reduction used to calculate the x and y coordinates aims to preserve the similarity between original rows, i.e. rows that are similar in the original dataset should receive coordinates close to each other.
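A minimal sketch of such a recipe call (the output column names x and y are chosen here for illustration; the exact signature is assumed from the description above):

layout_dataset(ds) -> (ds.x, ds.y)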
Inputs & Outputs
The following are the inputs expected by the step and the outputs it produces. These are generally columns (ds.first_name), datasets (ds or ds[["first_name", "last_name"]]) or models (referenced by name, e.g. "churn-clf").
Configuration
The following parameters can be used to configure the behaviour of the step by including them in a json object as the last “input” to the step, i.e. step(..., {"param": "value", ...}) -> (output).
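For example, a call configuring the algorithm might look as follows (the parameter name algorithm is assumed here from the description below; n_neighbors is the name referenced later on this page):

layout_dataset(ds, {"algorithm": "umap", "n_neighbors": 15}) -> (ds.x, ds.y)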
Algorithm. The name of a supported dimensionality reduction algorithm.
Values must be one of the following:
umap
Toggle encoding of feature columns.
When enabled, Graphext will auto-convert any column types to the numeric type before (optionally) reducing the data’s dimensionality. How this conversion is done can be configured using the feature_encoder option below.
Configures encoding of feature columns.
By default (null), Graphext chooses automatically how to convert any column types the model may not understand natively to a numeric type. A configuration object can be passed instead to overwrite specific parameter values with respect to their defaults.
Whether to include or ignore text columns during the processing of input data.
Enabling this will convert texts to their Tf-Idf representation. Each text will be converted to an N-dimensional vector in which each component measures the relative “over-representation” of a specific word (or n-gram) relative to its overall frequency in the whole dataset. This is disabled by default because it will often be better to convert texts explicitly using a previous step, such as embed_text or embed_text_with_model (as sketched below).
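A sketch of that explicit approach, assuming a hypothetical text column ds.description and illustrative output column names (the exact signatures of the embedding steps are not shown here):

embed_text(ds.description) -> (ds.description_embedding)
layout_dataset(ds) -> (ds.x, ds.y)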
Weights used to multiply the normalized columns/features after vectorization.
Should be a dictionary/object of {"column_name": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied. So only the relative weight of the columns is important here, not their absolute values.
Weights used to multiply the normalized columns/features after vectorization.
Should be a dictionary/object of {"type": weight, ...} items. Will be scaled using the parameters weights_max and weights_exp before being applied. So only the relative weight of the column types is important here, not their absolute values (see the sketch below).
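As an illustration, and assuming the two parameters are named weights and type_weights (these names are not given above, so treat them as placeholders, as are the column names), a configuration weighting some columns and types more heavily might look like:

layout_dataset(ds, {
  "weights": {"age": 2, "income": 1},
  "type_weights": {"category": 2, "number": 1}
}) -> (ds.x, ds.y)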
Maximum weight to scale the normalized columns with.
Weight exponent.
Weights will be raised to this power before (!) scaling to weights_max. This allows for a non-linear mapping from input weights to those used eventually to multiply the normalized columns.
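As a worked example, assuming the scaling normalizes the largest weight to weights_max: with input weights 1 and 2, a weights_exp of 2 and a weights_max of 4, the weights are first squared to 1 and 4 and then scaled to 1 and 4 (a 1:4 ratio); with a weights_exp of 1 they would instead become 2 and 4 (a 1:2 ratio).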
Number of neighbours. Use smaller numbers to concentrate on the local structure in the data, and larger values to focus on the more global structure.
For further details see the UMAP documentation.
Minimum distance between reduced data points. Controls how tightly UMAP is allowed to pack points together in the reduced space. Smaller values will lead to points more tightly packed together (potentially useful if the result is used to cluster the points). Larger values will distribute points with more space between them (which may be desirable for visualization, or to focus more on the global structure of the data).
For further details see the UMAP documentation.
Dimensionality of the reduced data. Fixed at 2 for the purpose of a layout’s x and y coordinates.
Metric to use for measuring similarity between data points.
Values must be one of the following:
euclidean
manhattan
chebyshev
minkowski
canberra
braycurtis
haversine
mahalanobis
wminkowski
seuclidean
cosine
correlation
hamming
jaccard
dice
russellrao
kulsinski
rogerstanimoto
sokalmichener
sokalsneath
yule
Number of training iterations used in optimizing the embedding.
Larger values result in more accurate embeddings. If null is specified, a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).
How to initialize the low dimensional embedding.
When “spectral”, uses a (relatively expensive) spectral embedding. “pca” uses the first n_components
from a principal component analysis. “tswspectral” is a cheaper alternative to “spectral”. When “random”,
assigns initial embedding positions at random. This uses the least amount of memory and time but may make UMAP
slower to converge on the optimal embedding. “auto” selects between “spectral” and “random” automatically
depending on the size of the dataset.
Values must be one of the following:
spectral
pca
tswspectral
random
auto
Avoid excessive memory use.
For some datasets nearest neighbor computations can consume a lot of memory. If you find the step is failing due to memory constraints, consider setting this option to true. This approach is more computationally expensive, but avoids excessive memory use. Setting it to “auto” will enable this mode automatically depending on the size of the dataset.
Values must be one of the following:
True
False
auto
None
Target variable (labels) for supervised dimensionality reduction. Name of the column that contains your target values (labels).
Weighting factor between features and target. A value of 0.0 weights entirely on data, and a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
Try to better preserve local densities in the data. Specifies whether the density-augmented objective of densMAP should be used for optimization. Turning on this option generates an embedding where the local densities are encouraged to be correlated with those in the original space.
Strength of local density preservation. Controls the regularization weight of the density correlation term in densMAP. Higher values prioritize density preservation over the UMAP objective, and vice versa for values closer to zero. Setting this parameter to zero is equivalent to running the original UMAP algorithm.
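A sketch of enabling density preservation, assuming the step exposes the underlying UMAP library’s densmap and dens_lambda parameter names (an assumption, as the names are not given above):

layout_dataset(ds, {"densmap": true, "dens_lambda": 2.0}) -> (ds.x, ds.y)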
Drop duplicate rows before embedding.
If you have more duplicates than you have n_neighbors, you can have identical data points lying in different regions of your space. It also violates the definition of a metric. This option will remove duplicates before embedding, and then map the original data points back to the reduced space. Duplicate data points will be placed in the exact same location as the original data points.
A random seed used to initialize the algorithm, for reproducibility.
Maintain links to the n nearest neighbours only in the graph.
Scaling factor for the coordinates. The maximum (normalized) coordinates in positive and negative X and Y directions. Acts like a zoom, with a scale of 1 corresponding to zooming out to the maximum (maximal space between nodes), and 0 to the densest layout.
If set to "auto", will try to determine an appropriate scale taking into account the number of nodes. If set to null, only changes calculated coordinates to ensure they’re within the allowed limits (16,000).
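For example, assuming the parameter is simply named scale (an assumption, as the name is not given above):

layout_dataset(ds, {"scale": "auto"}) -> (ds.x, ds.y)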