Aggregate list items

group by

Group a dataset by elements in a column of lists and aggregate remaining columns using one or more predefined functions.

This is essentially aggregate after "exploding" a column of lists such that each list item has its own row. By default the step produces one row per unique list item and two columns: a "count" column recording how many times each list item was encountered, and a "rows" column recording the row numbers of the lists in which the item was found ([1,3,7] would mean an item was present in the lists of rows 1, 3 and 7). In addition, predefined functions can be used to add further aggregations of the grouped input dataset.

For example, if a dataset contains texts already separated into lists of individual words, this step will create a new dataset containing one row per word, a column containing each word's frequency (count) across all texts, and another column of lists indicating in which rows the word was found.
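The explode-then-group behaviour described above can be sketched in pandas. This is only an illustration of the semantics with made-up toy data, not the step's actual implementation:

```python
import pandas as pd

# Toy dataset: one row per text, with words already split into lists
df = pd.DataFrame({
    "words": [["the", "cat"], ["the", "dog"], ["cat"]],
})

# "Explode" the lists so each word gets its own row, keeping the
# original row number in the index
exploded = df["words"].explode().rename("word").reset_index()

# Group by word: count occurrences and record which rows each word came from
result = exploded.groupby("word").agg(
    count=("index", "size"),
    rows=("index", list),
).reset_index()
```

Here the word "the" would end up with a count of 2 and rows [0, 1], mirroring the "count" and "rows" columns the step produces.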

If a grouping column is specified using the optional "by" parameter, otherwise identical items belonging to different groups will be counted separately. If the dataset contains texts in different languages, for example, one may not want to group all occurrences of the same word together irrespective of language: the German word "Angel" signifies a fishing rod, "any" in Catalan means "year", and the Italian word "burro" means "butter" while in Spanish it refers to a donkey. Using language as the grouping column would preserve the word in each language as a separate group.
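The effect of the "by" parameter can be sketched in pandas with hypothetical toy data (the homograph "burro" in Italian and Spanish):

```python
import pandas as pd

# Toy data: the same surface form "burro" appears in two languages
df = pd.DataFrame({
    "language": ["it", "es"],
    "words": [["burro", "pane"], ["burro", "pan"]],
})

exploded = df.explode("words").reset_index()

# Without a grouping column, both occurrences of "burro" fall into one group
global_counts = exploded.groupby("words").size()

# Grouping by language keeps the homographs apart, one group per
# (language, word) combination
per_language = exploded.groupby(["language", "words"]).size()
```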


The following are the step's expected inputs and outputs and their specific types.

Step signature
aggregate_list_items(ds_in: dataset, {
    "param": value
}) -> (ds_out: dataset)

where the object {"param": value} is optional in most cases and if present may contain any of the parameters described in the corresponding section below.


The following example performs a simple word-count, returning a new dataset with one row per word and each word's frequency in the "count" column. The aggregation will be performed separately for each language. Also, for each word, a custom aggregation collects the dates of the texts in which the word was mentioned:

Example call (in recipe editor)
aggregate_list_items(ds_in, {
  "split_column": "words",
  "by": "language",
  "unique_rows": true,
  "aggregations": {
    "text_publication_date": {
      "mention_dates": {"func": "list"}
    }
  }
}) -> (ds_out)


ds_in: dataset

An input dataset containing at least one column with lists of elements to group.


ds_out: dataset

The result of the aggregation. Contains one row per unique element in the original column of lists.


split_column: string

Name of column containing the lists to be split and grouped.

by: string | null

Optional grouping column to use for item counting and aggregation.

unique_rows: boolean = False

Count unique occurrences only. Whether to collect in the output column "rows" only the unique rows in which each item appeared, or all rows (with duplicate row IDs if an item appeared more than once in a single row).

rows_as_str: boolean = False

Row IDs as strings. Output the occurrences of items in rows as lists of strings (categorical) rather than lists of row numbers.

aggregations: object

Definition of additional aggregations. A dictionary mapping original columns to new aggregated columns, specifying an aggregation function for each. Aggregations are functions that reduce all the values in a particular column of a single group to a single summary value of that group. E.g. a sum aggregation of column A calculates a single total by adding up all the values in A belonging to each group.
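The sum example can be sketched in pandas with made-up data (for illustration only):

```python
import pandas as pd

# Toy data: column A with values belonging to two groups
df = pd.DataFrame({"group": ["a", "a", "b"], "A": [1, 2, 10]})

# A "sum" aggregation reduces all values of A in each group to one total
totals = df.groupby("group")["A"].sum()
```

Group "a" would be reduced to the single total 3, and group "b" to 10.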

Possible aggregation functions accepted as the func parameter are:

  • n, size or count: calculate number of rows in group
  • sum: sum total of values
  • mean: take mean of values
  • max: take max of values
  • min: take min of values
  • first: take first item found
  • last: take last item found
  • unique: collect a list of unique values
  • n_unique: count the number of unique values
  • list: collect a list of all values
  • concatenate: convert all values to text and concatenate them into one long text
  • concat_lists: concatenate lists in all rows into a single larger list
  • count_where: number of rows in which the column matches a value; requires an additional value parameter specifying the value to count
  • percent_where: percentage of rows in which the column matches a value; requires an additional value parameter specifying the value to count

Note that in the case of count_where and percent_where an additional value parameter is required.
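As an illustration of what count_where and percent_where compute, a pandas sketch with hypothetical data (not the step's actual implementation):

```python
import pandas as pd

# Toy group of category values
category = pd.Series(["food", "drink", "food", "other"])

# count_where with value "food": number of rows matching the value
count_where = int((category == "food").sum())

# percent_where with value "food": the same count as a percentage
# of the group size
percent_where = 100.0 * float((category == "food").mean())
```

For this group of four rows, two match "food", so count_where is 2 and percent_where is 50.0.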

Items in aggregations

input_aggregations: object

One item per input column. Each key should be the name of an input column, and each value an object defining one or more aggregations for that column. An individual aggregation consists of the name of a desired output column, mapped to a specific aggregation function. For example:

  "input_col": {
    "output_col": {"func": "sum"}
Items in input_aggregations

aggregation_func: object

Object defining how to aggregate a single output column. Needs at least the "func" parameter. If the aggregation function accepts further arguments, like the "value" parameter in the case of count_where and percent_where, these must be provided as well. For example:

  "output_col": {"func": "count_where", "value": 2}
Items in aggregation_func

func: string

Aggregation function.

Must be one of: "n", "size", "count", "sum", "mean", "n_unique", "count_where", "percent_where", "concatenate", "max", "min", "first", "last", "concat_lists", "unique", "list"

Example parameter values:

  • Including an aggregation function with additional parameters:

      "product_id": {
        "products": {"func": "list"},
        "size": {"func": "count"}
      "item_total": {
        "total": {"func": "sum"},
      "item_category": {
        "num_food_items": {"func": "count_where", "value": "food"}