aggregate(ds_in: dataset, {
    "param": value,
    ...
}) -> (ds_out: dataset)

After optionally sorting the dataset, its rows are grouped by the unique values (or combinations of unique values) in one or more columns. Each group’s rows are then aggregated using one or more predefined functions. The result is a new dataset containing one column per selected aggregation function and one row for each unique group.
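As a rough illustration only (not the step’s own syntax), the operation corresponds to the following pandas sketch; the column names are assumptions made for the example:

import pandas as pd

# Illustrative input: one row per purchase
df = pd.DataFrame({
    "country": ["ES", "ES", "DE", "DE", "DE"],
    "price":   [10.0, 20.0, 5.0, 7.0, 9.0],
})

# Group rows by the unique values in "country", then reduce each group's
# "price" values to one summary value per aggregation function.
result = (
    df.groupby("country", as_index=False)
      .agg(total_price=("price", "sum"), avg_price=("price", "mean"))
)
# One row per unique country, one column per aggregation:
#   country  total_price  avg_price
#   DE       21.0         7.0
#   ES       30.0         15.0
print(result)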

by
[string, array[string]]
required

Grouping column(s). The name(s) of column(s) whose unique values define the groups to aggregate.
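When several columns are given, each unique combination of their values defines one group. A hedged pandas sketch with assumed column names:

import pandas as pd

df = pd.DataFrame({
    "country": ["ES", "ES", "DE", "DE"],
    "year":    [2022, 2023, 2022, 2022],
    "price":   [10.0, 20.0, 5.0, 7.0],
})

# "by": ["country", "year"] -> one output row per (country, year) combination
result = df.groupby(["country", "year"], as_index=False).agg(n=("price", "count"))
print(result)
# Groups: (DE, 2022), (ES, 2022), (ES, 2023)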

presort
object

Pre-aggregation row sorting. Sort the dataset rows before aggregating, e.g. when the order in which values are encountered matters for a particular aggregation function (such as list).
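As a hedged pandas sketch (with assumed column names), sorting before grouping is what makes order-sensitive aggregations such as list, first or last deterministic:

import pandas as pd

df = pd.DataFrame({
    "user":  ["a", "a", "b", "b"],
    "ts":    [3, 1, 2, 4],
    "event": ["pay", "visit", "visit", "pay"],
})

# Sorting first matters for order-sensitive aggregations: here each user's
# events are collected in time order rather than in original row order.
result = (
    df.sort_values("ts")
      .groupby("user", as_index=False)
      .agg(events=("event", list), first_event=("event", "first"))
)
print(result)
# user a: events=["visit", "pay"], first_event="visit"
# user b: events=["visit", "pay"], first_event="visit"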

aggregations
object
required

Definition of desired aggregations. A dictionary mapping original columns to new aggregated columns, specifying an aggregation function for each. Aggregations are functions that reduce all the values in a particular column of a single group to a single summary value of that group. For example, a sum aggregation of column A adds up all the values of A in each group to produce one total per group.

Possible aggregation functions accepted as the func parameter are:

  • n, size or count: calculate number of rows in group
  • sum: sum total of values
  • mean: take mean of values
  • max: take max of values
  • min: take min of values
  • first: take first item found
  • last: take last item found
  • unique: collect a list of unique values
  • n_unique: count the number of unique values
  • list: collect a list of all values
  • concatenate: convert all values to text and concatenate them into one long text
  • concat_lists: concatenate lists in all rows into a single larger list
  • count_where: number of rows in which the column matches a given value
  • percent_where: percentage of rows in which the column matches a given value

Note that count_where and percent_where require an additional value parameter specifying the value to match (see the sketch below).
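The count_where and percent_where functions are not standard pandas aggregations; the following hedged sketch shows the equivalent computation, with the value parameter corresponding to the matched value ("paid" here, and the column names, are assumptions for the example):

import pandas as pd

df = pd.DataFrame({
    "country": ["ES", "ES", "DE", "DE", "DE"],
    "status":  ["paid", "open", "paid", "paid", "open"],
})

# count_where(value="paid"): rows per group where the column equals the value
# percent_where(value="paid"): the same count expressed as a percentage
result = (
    df.groupby("country", as_index=False)
      .agg(
          n_paid=("status", lambda s: (s == "paid").sum()),
          pct_paid=("status", lambda s: (s == "paid").mean() * 100),
      )
)
print(result)
# DE: n_paid=2, pct_paid=66.7; ES: n_paid=1, pct_paid=50.0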

drop_nan
boolean

Whether to ignore missing values (NaNs) in the grouping column(s). If false (the default), rows with missing values are grouped together in their own group. If true, rows containing NaNs in the grouping column(s) will be ignored.
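In pandas terms the two settings correspond roughly to the dropna argument of groupby (a sketch with assumed column names):

import pandas as pd

df = pd.DataFrame({
    "country": ["ES", None, "DE", None],
    "price":   [10.0, 20.0, 5.0, 7.0],
})

# drop_nan=False (default): missing group keys form their own group
with_nan = df.groupby("country", dropna=False, as_index=False).agg(total=("price", "sum"))
# -> groups: DE (5.0), ES (10.0), NaN (27.0)

# drop_nan=True: rows with a missing group key are ignored
without_nan = df.groupby("country", dropna=True, as_index=False).agg(total=("price", "sum"))
# -> groups: DE (5.0), ES (10.0) only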

sort_groups
boolean

Whether to sort groups by the values in the grouping column(s). This doesn’t affect the order of rows within groups, which is always maintained (and may depend on the presort parameter), only the ordering amongst groups. If the order of groups is not important, leaving this off (the default, false) will usually result in faster execution.
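This corresponds roughly to the sort argument of a pandas groupby (a sketch with assumed column names):

import pandas as pd

df = pd.DataFrame({
    "country": ["ES", "DE", "ES", "AT"],
    "price":   [10.0, 5.0, 20.0, 7.0],
})

# sort_groups=True: output rows ordered by the grouping key(s)
sorted_groups = df.groupby("country", sort=True, as_index=False).agg(total=("price", "sum"))
# -> AT, DE, ES

# sort_groups=False: groups kept in order of first appearance (usually faster)
unsorted_groups = df.groupby("country", sort=False, as_index=False).agg(total=("price", "sum"))
# -> ES, DE, AT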

force_pandas
boolean

Enforce use of Pandas aggregation. Normally, depending on dataset size, the step automatically switches between Pandas and Dask aggregation, preferring whichever represents a better trade-off between execution time and memory usage. For very large datasets, Dask is the only viable method, but Dask has limitations when it comes to sorting. For intermediate dataset sizes, and if you need to sort the dataset on more than a single column before aggregation, you can try enforcing the use of Pandas if you otherwise see warnings or errors related to sorting.
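For intuition only, a minimal sketch contrasting the two engines; the step’s actual switching logic is internal and not shown here:

import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({
    "country": ["ES", "DE", "ES", "DE"],
    "price":   [10.0, 5.0, 20.0, 7.0],
})

# Pandas: in-memory aggregation, full support for sorting
pandas_result = df.groupby("country", as_index=False).agg(total=("price", "sum"))

# Dask: partitioned, out-of-core aggregation for datasets too large for memory
ddf = dd.from_pandas(df, npartitions=2)
dask_result = ddf.groupby("country").agg({"price": "sum"}).compute()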