plateau.io.dask.bag module

plateau.io.dask.bag.build_dataset_indices__bag(store: str | KeyValueStore | Callable[[], KeyValueStore] | None, dataset_uuid: str | None, columns: Sequence[str], partition_size: int | None = None, factory: DatasetFactory | None = None) → Delayed

Function which builds an ExplicitSecondaryIndex.

This function loads the dataset, computes the requested indices and writes the indices to the dataset. The dataset partitions themselves are not mutated.

Parameters:
  • store (Callable or str or minimalkv.KeyValueStore) –

    The store where we can find or store the dataset.

    Can be either a minimalkv.KeyValueStore, a minimalkv store URL or a generic Callable producing a minimalkv.KeyValueStore.

  • dataset_uuid (str) – The dataset UUID

  • columns – A subset of columns to be loaded.

  • partition_size (Optional[int]) – Dask bag partition size. Use a larger number to decrease scheduler load and overhead; use smaller numbers for fine-grained scheduling and better resilience against worker errors.

  • factory (plateau.core.factory.DatasetFactory) – A DatasetFactory holding the store and UUID to the source dataset.
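
A minimal usage sketch, assuming a dataset named "my_dataset" already exists in a local filesystem store and has a plain "country" column (the store URL, dataset UUID and column name below are hypothetical):

from plateau.io.dask.bag import build_dataset_indices__bag

# Build a secondary index over the "country" column of an existing dataset.
delayed_index_build = build_dataset_indices__bag(
    store="hfs:///tmp/plateau_store",  # hypothetical minimalkv store URL
    dataset_uuid="my_dataset",
    columns=["country"],
)
delayed_index_build.compute()  # computes the indices and writes them to the dataset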

plateau.io.dask.bag.read_dataset_as_dataframe_bag(dataset_uuid=None, store=None, columns=None, predicate_pushdown_to_io=True, categoricals=None, dates_as_object: bool = True, predicates=None, factory=None, dispatch_by=None, partition_size=None)

Retrieve data as dataframes from a dask.bag.Bag of MetaPartition objects.

Parameters:
  • dataset_uuid (str) – The dataset UUID

  • store (Callable or str or minimalkv.KeyValueStore) –

    The store where we can find or store the dataset.

    Can be either a minimalkv.KeyValueStore, a minimalkv store URL or a generic Callable producing a minimalkv.KeyValueStore.

  • columns – A subset of columns to be loaded.

  • predicate_pushdown_to_io (bool) – Push predicates through to the I/O layer, default True. Disable this if you see problems with predicate pushdown for the given file even if the file format supports it. Note that this option only hides problems in the storage layer that need to be addressed there.

  • categoricals – Load the provided subset of columns as a pandas.Categorical.

  • dates_as_object (bool) – Load pyarrow.date{32,64} columns as object columns in Pandas, instead of casting them to np.datetime64, to preserve their type. While this improves type safety, it comes at a performance cost.

  • predicates (List[List[Tuple[str, str, Any]]]) –

    Optional list of predicates, like [[('x', '>', 0), …]], that are used to filter the resulting DataFrame, possibly using predicate pushdown if supported by the file format. This parameter is not compatible with filter_query. A worked example is shown in the sketch after the Returns section below.

    Predicates are expressed in disjunctive normal form (DNF). This means that the innermost tuple describes a single column predicate. These inner predicates are combined with a conjunction (AND) into a larger predicate. The outermost list then combines all predicates with a disjunction (OR). This allows expressing all kinds of predicates that are possible using boolean logic.

    Available operators are: ==, !=, <=, >=, <, > and in.

    Filtering for missing values is supported with the operators ==, != and in, using the values np.nan and None for float and string columns respectively.

    Categorical data

    When using order-sensitive operators on categorical data, the categories are assumed to obey a lexicographical ordering. Such filtering may be slower than the equivalent evaluation on non-categorical data.

    See also Filtering / Predicate pushdown and Efficient Querying

  • factory (plateau.core.factory.DatasetFactory) – A DatasetFactory holding the store and UUID to the source dataset.

  • dispatch_by (Optional[List[str]]) –

    List of index columns to group and partition the jobs by. One job is created for every observed combination of index values. Depending on the index used, this may result in either many very small partitions or a few very large partitions.

    Secondary indices

    This is also usable in combination with secondary indices, where the physical file layout may not be aligned with the logically requested layout. For optimal performance, use this on columns that can benefit from predicate pushdown, since the jobs fetch their data individually and do not shuffle data in memory or over the network. (See also the sketch after the Returns section below.)

  • partition_size (Optional[int]) – Dask bag partition size. Use a larger number to decrease scheduler load and overhead; use smaller numbers for fine-grained scheduling and better resilience against worker errors.

Returns:

A dask.bag.Bag containing the MetaPartition objects, mapped to a function that retrieves the data.

Return type:

dask.bag.Bag
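
A minimal usage sketch (the store URL, dataset UUID and column names are hypothetical). The predicates express the DNF filter (country == 'DE' AND amount > 0) OR country == 'US', and dispatch_by creates one job per observed country value:

from plateau.io.dask.bag import read_dataset_as_dataframe_bag

bag = read_dataset_as_dataframe_bag(
    dataset_uuid="my_dataset",
    store="hfs:///tmp/plateau_store",  # hypothetical minimalkv store URL
    columns=["country", "amount"],
    predicates=[
        [("country", "==", "DE"), ("amount", ">", 0)],  # AND within the inner list
        [("country", "==", "US")],                      # OR between the outer lists
    ],
    dispatch_by=["country"],  # one job per observed country value
)
dataframes = bag.compute()  # a list of pandas DataFrames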

plateau.io.dask.bag.store_bag_as_dataset(bag, store, dataset_uuid=None, metadata=None, df_serializer=None, overwrite=False, metadata_merger=None, metadata_version=4, partition_on=None, metadata_storage_format='json', secondary_indices=None, table_name: str = 'table')

Transform and store a dask.bag of dictionaries containing dataframes as a plateau dataset in store.

This is the dask.bag-equivalent of store_delayed_as_dataset(). See there for more detailed documentation on the different possible input types.

Parameters:
  • store (Callable or str or minimalkv.KeyValueStore) –

    The store where we can find or store the dataset.

    Can be either minimalkv.KeyValueStore, a minimalkv store url or a generic Callable producing a minimalkv.KeyValueStore

  • dataset_uuid (str) – The dataset UUID

  • metadata (Optional[Dict]) – A dictionary used to update the dataset metadata.

  • df_serializer (Optional[plateau.serialization.DataFrameSerializer]) – A pandas DataFrame serializer from plateau.serialization.

  • overwrite (Optional[bool]) – If True, allow overwrite of an existing dataset.

  • metadata_merger (Optional[Callable]) – By default, partition metadata is combined using the combine_metadata() function. You can supply a callable here that implements a custom merge operation on the metadata dictionaries (depending on the matches, it may be called with more than two values).

  • metadata_version (Optional[int]) – The dataset metadata version

  • partition_on (List) – Column names by which the dataset should be physically partitioned. These columns may later be used as an index to improve query performance. Partition columns need to be present in all dataset tables. Sensitive to ordering.

  • metadata_storage_format (str) – The storage format to use for the dataset metadata. Currently supported are json and msgpack.zstd.

  • secondary_indices (List[str]) – A list of columns for which a secondary index should be calculated.

  • table_name

    The table name of the dataset to be stored. This creates a namespace for the partitioning like

    dataset_uuid/table_name/*

    This exists to support legacy workflows. We recommend not using it and keeping the default wherever possible.

  • bag (dask.bag.Bag) – A dask bag containing dictionaries of dataframes, or plain dataframes.
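
A minimal usage sketch (the store URL, dataset UUID and column names are hypothetical; the returned object is assumed to be computable like any other dask collection):

import dask.bag as db
import pandas as pd

from plateau.io.dask.bag import store_bag_as_dataset

df_de = pd.DataFrame({"country": ["DE", "DE"], "amount": [1, 2]})
df_us = pd.DataFrame({"country": ["US", "US"], "amount": [3, 4]})
bag = db.from_sequence([df_de, df_us], npartitions=2)

delayed_store = store_bag_as_dataset(
    bag,
    store="hfs:///tmp/plateau_store",  # hypothetical minimalkv store URL
    dataset_uuid="my_dataset",
    partition_on=["country"],        # partition files physically by country
    secondary_indices=["amount"],    # additionally index the amount column
)
delayed_store.compute()  # writes the partitions and commits the dataset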