aggregate
Group and aggregate a dataset using any of a number of predefined functions.
aggregate(ds_in: dataset, {
"param": value,
...
}) -> (ds_out: dataset)
After optionally sorting the dataset, it is grouped by the unique values (or combinations of unique values) in one or more columns. Each group’s rows are then aggregated using one or more predefined functions. A new dataset is thus created containing one column per selected aggregation function, and one row for each unique group.
ds_in: A dataset to group and aggregate.
ds_out: The result of the aggregation.
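To make the behaviour concrete, the following is a minimal pandas sketch of the same group-then-aggregate pattern. It is illustrative only, not the step's implementation, and the column names are hypothetical:

import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 1, 2, 2, 2],
    "item_total": [9.5, 3.0, 4.25, 1.0, 2.5],
})

# One row per unique order_id, one output column per aggregation.
out = df.groupby("order_id").agg(
    total=("item_total", "sum"),
    size=("item_total", "count"),
).reset_index()
print(out)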
Grouping column(s). The name(s) of column(s) whose unique values define the groups to aggregate.
- order_id
- ["weekday", "hour"]
Pre-aggregation row sorting.
Sort the dataset rows before aggregating, e.g. when the order in which rows are encountered matters to a particular aggregation function (such as list).
- With a single column for sorting:
"presort": {
"columns": "date_added",
"ascending": true
}
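As an illustration of why presorting matters, here is a pandas sketch (column names hypothetical, not the step's internals) in which sorting before grouping changes what an order-sensitive aggregation like list collects:

import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 1, 1],
    "date_added": ["2021-03-02", "2021-03-01", "2021-03-03"],
    "product_id": ["b", "a", "c"],
})

# Sorting first means order-sensitive aggregations such as "list" or
# "first" see rows in date order rather than in input order.
out = (
    df.sort_values("date_added", ascending=True)
      .groupby("order_id")["product_id"]
      .agg(list)
)
print(out)  # group 1 -> ['a', 'b', 'c']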
Definition of desired aggregations.
A dictionary mapping original columns to new aggregated columns, specifying an aggregation function for each.
Aggregations are functions that reduce all the values in a particular column of a single group to a single summary value of that group.
E.g. a sum aggregation of column A calculates a single total by adding up all the values in A belonging to each group.
Possible aggregation functions accepted as func parameters are:
- n, size or count: calculate number of rows in group
- sum: sum total of values
- mean: take mean of values
- max: take max of values
- min: take min of values
- first: take first item found
- last: take last item found
- unique: collect a list of unique values
- n_unique: count the number of unique values
- list: collect a list of all values
- concatenate: convert all values to text and concatenate them into one long text
- concat_lists: concatenate lists in all rows into a single larger list
- count_where: number of rows in which the column matches a value; needs parameter value with the value that you want to count
- percent_where: percentage of the column where the column matches a value; needs parameter value with the value that you want to count
Note that in the case of count_where and percent_where an additional value parameter is required.
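count_where and percent_where have no direct pandas shorthand, but the following sketch (illustrative only, column names hypothetical) shows equivalent behaviour using lambdas that compare against the value parameter:

import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 1, 2],
    "item_category": ["food", "drink", "food"],
})

# Count and percentage of rows per group where the column equals "food".
out = df.groupby("order_id")["item_category"].agg(
    num_food_items=lambda s: (s == "food").sum(),
    pct_food_items=lambda s: (s == "food").mean() * 100,
)
print(out)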
One item per input column. Each key should be the name of an input column, and each value an object defining one or more aggregations for that column. An individual aggregation consists of the name of a desired output column, mapped to a specific aggregation function. For example:
{
"input_col": {
"output_col": {"func": "sum"}
}
}
Object defining how to aggregate a single output column.
Needs at least the "func" parameter. If the aggregation function accepts further arguments, like the "value" parameter in the case of count_where and percent_where, these need to be provided as well.
For example:
{
"output_col": {"func": "count_where", "value": 2}
}
Aggregation function.
Values must be one of the following: n, size, count, sum, mean, n_unique, count_where, percent_where, concatenate, max, min, first, last, concat_lists, unique, list.
- Including an aggregation function with additional parameters:
{
"product_id": {
"products": {"func": "list"},
"size": {"func": "count"}
},
"item_total": {
"total": {"func": "sum"},
},
"item_category": {
"num_food_items": {"func": "count_where", "value": "food"}
}
}
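For reference, an approximate pandas equivalent of the configuration above might look as follows (a sketch only, assuming the dataset is grouped by order_id):

import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 1, 2],
    "product_id": ["a", "b", "c"],
    "item_total": [9.5, 3.0, 4.25],
    "item_category": ["food", "drink", "food"],
})

out = df.groupby("order_id").agg(
    products=("product_id", list),
    size=("product_id", "count"),
    total=("item_total", "sum"),
    num_food_items=("item_category", lambda s: (s == "food").sum()),
).reset_index()
print(out)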
Whether to ignore missing values (NaNs) in group columns.
If false (default), missing values (NaNs) will be grouped together in their own group. Otherwise, rows containing NaNs in the group column will be ignored.
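In pandas terms this behaviour corresponds to groupby's dropna argument (available since pandas 1.1), though note that pandas' own default is the opposite (dropna=True):

import pandas as pd

df = pd.DataFrame({"key": ["a", None, "a"], "x": [1, 2, 3]})

# dropna=False keeps rows with a missing key together as their own group;
# dropna=True ignores those rows entirely.
print(df.groupby("key", dropna=False)["x"].sum())
print(df.groupby("key", dropna=True)["x"].sum())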
Whether to sort groups by values in the grouping columns.
This doesn’t affect sorting of rows within groups, which is always maintained (and may depend on the presort parameter), but only the ordering amongst groups. If the order of groups is not important, leaving this off will usually result in faster execution (false by default).
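This mirrors the sort argument of pandas' groupby, illustrated below (note that pandas defaults to sort=True, whereas this step defaults to false):

import pandas as pd

df = pd.DataFrame({"key": ["b", "a", "b"], "x": [1, 2, 3]})

# sort=False keeps groups in order of first appearance and can be faster;
# sort=True orders groups by the grouping key.
print(df.groupby("key", sort=False)["x"].sum())  # b, a
print(df.groupby("key", sort=True)["x"].sum())   # a, b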
Enforce use of Pandas aggregation. Normally, depending on dataset size, the step will automatically switch between Pandas and Dask aggregation, preferring whichever represents a better trade-off between execution time and memory usage. For very large datasets, Dask is the only viable method, but Dask has limitations when it comes to sorting. For intermediate dataset sizes, and if you need to sort the dataset before aggregation on more than a single column, you can try enforcing the use of Pandas if you otherwise see warnings or errors related to sorting.
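For intuition only, this is roughly what a Dask aggregation looks like; it is built lazily and evaluated across partitions, which is what lets it scale to large datasets while constraining operations such as multi-column sorting (a sketch, not the step's internals):

import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({"key": ["a", "b", "a"], "x": [1, 2, 3]})
ddf = dd.from_pandas(pdf, npartitions=2)

# The grouped aggregation is only evaluated when compute() is called.
out = ddf.groupby("key")["x"].sum().compute()
print(out)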