pyarrow.parquet.read_table

pyarrow.parquet.read_table(source, columns=None, use_threads=True, metadata=None, use_pandas_metadata=False, memory_map=False, read_dictionary=None, filesystem=None, filters=None, buffer_size=0, partitioning='hive', use_legacy_dataset=False, ignore_prefixes=None)[source]

Read a Table from Parquet format

Note: starting with pyarrow 1.0, the default for use_legacy_dataset changed to False.
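
A minimal usage sketch (the file path and column names here are hypothetical):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Read a whole file into a Table ('data.parquet' is a hypothetical path)
    table = pq.read_table('data.parquet')

    # Read only selected columns
    table = pq.read_table('data.parquet', columns=['x', 'y'])

    # Read a file held in memory: write a small table to a buffer, read it back
    sink = pa.BufferOutputStream()
    pq.write_table(pa.table({'x': [1, 2, 3]}), sink)
    table = pq.read_table(pa.BufferReader(sink.getvalue()))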

Parameters
  • source (str, pyarrow.NativeFile, or file-like object) – If a string is passed, it can be a single file name or a directory name. For file-like objects, only a single file can be read. Use pyarrow.BufferReader to read a file contained in a bytes or buffer-like object, as in the sketch above.

  • columns (list) – If not None, only these columns will be read from the file. A column name may be a prefix of a nested field, e.g. ‘a’ will select ‘a.b’, ‘a.c’, and ‘a.d.e’.

  • use_threads (bool, default True) – Perform multi-threaded column reads.

  • metadata (FileMetaData, default None) – Use the file’s metadata if it has already been computed separately, so it does not have to be read again.

  • read_dictionary (list, default None) – List of names or column paths (for nested types) to read directly as DictionaryArray. Only supported for BYTE_ARRAY storage. To read a flat column as dictionary-encoded pass the column name. For nested types, you must pass the full column “path”, which could be something like level1.level2.list.item. Refer to the Parquet file’s schema to obtain the paths.

  • memory_map (bool, default False) – If the source is a file path, use a memory map to read file, which can improve performance in some environments.

  • buffer_size (int, default 0) – If positive, perform read buffering when deserializing individual column chunks. Otherwise IO calls are unbuffered.

  • partitioning (Partitioning or str or list of str, default "hive") – The partitioning scheme for a partitioned dataset. The default of “hive” assumes directory names with key=value pairs like “/year=2009/month=11”. In addition, a scheme like “/2009/11” is also supported, in which case you need to specify the field names or a full schema. See the pyarrow.dataset.partitioning() function for more details; a partitioned-dataset sketch appears at the end of this page.

  • use_pandas_metadata (bool, default False) – If True and the file has custom pandas schema metadata, ensure that index columns are also loaded.

  • use_legacy_dataset (bool, default False) – By default, read_table uses the new Arrow Datasets API since pyarrow 1.0.0. Among other things, this allows passing filters for all columns, not only the partition keys, and enables different partitioning schemes. Set to True to use the legacy behaviour.

  • ignore_prefixes (list, optional) – Files matching any of these prefixes will be ignored by the discovery process if use_legacy_dataset=False. This is matched to the basename of a path. By default this is ['.', '_']. Note that discovery happens only if a directory is passed as source.

  • filesystem (FileSystem, default None) – If nothing is passed, paths are assumed to be found on the local on-disk filesystem.

  • filters (List[Tuple] or List[List[Tuple]] or None (default)) –

    Rows which do not match the filter predicate will be removed from scanned data. Partition keys embedded in a nested directory structure will be exploited to avoid loading files at all if they contain no matching rows. If use_legacy_dataset is True, filters can only reference partition keys and only a hive-style directory structure is supported. With use_legacy_dataset set to False, filtering within files and other partitioning schemes are also supported; a complete call using filters is sketched under Examples below.

    Predicates are expressed in disjunctive normal form (DNF), like [[('x', '=', 0), ...], ...]. DNF allows arbitrary boolean logical combinations of single-column predicates. The innermost tuples each describe a single column predicate. The list of inner predicates is interpreted as a conjunction (AND), forming a more selective, multiple-column predicate. Finally, the outermost list combines these filters as a disjunction (OR).

    Predicates may also be passed as List[Tuple]. This form is interpreted as a single conjunction. To express OR in predicates, one must use the (preferred) List[List[Tuple]] notation.

    Each tuple has the format (key, op, value) and compares the key with the value. The supported op values are = or ==, !=, <, >, <=, >=, in and not in. If the op is in or not in, the value must be a collection such as a list, a set or a tuple.

    Examples:

    ('x', '=', 0)
    ('y', 'in', ['a', 'b', 'c'])
    ('z', 'not in', {'a','b'})
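
    A complete call using such predicates might look like the following sketch (the file path and column names are hypothetical):

        import pyarrow.parquet as pq

        # Keep rows where (x == 0 AND y in {'a', 'b', 'c'}) OR z > 10
        table = pq.read_table(
            'data.parquet',  # hypothetical path
            filters=[
                [('x', '=', 0), ('y', 'in', ['a', 'b', 'c'])],  # inner list: AND
                [('z', '>', 10)],  # outer list entries: OR
            ],
        )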
    

Returns

pyarrow.Table – Content of the file as a table (of columns)
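
For a partitioned dataset, pass the root directory as source and discovery will pick up the files beneath it. A sketch, assuming a hive-style layout such as dataset_root/year=2009/month=11/part-0.parquet (all paths are hypothetical):

    import pyarrow.parquet as pq

    # The filter on the partition key prunes whole directories before any
    # file is opened; partitioning='hive' is the default, shown for clarity
    table = pq.read_table(
        'dataset_root',
        partitioning='hive',
        filters=[('year', '>=', 2009)],
    )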