pyspark.sql.DataFrameReader.parquet

DataFrameReader.parquet(*paths: str, **options: OptionalPrimitiveType) → DataFrame

Loads Parquet files, returning the result as a DataFrame.

New in version 1.4.0.

Parameters
paths : str
    One or more paths to the Parquet file(s) or directory(ies) to load.
Other Parameters
**options

For the extra options, refer to Data Source Option for the Spark version you use; a sketch of passing one such option appears in the Examples below.

Examples

>>> df = spark.read.parquet('python/test_support/sql/parquet_partitioned')
>>> df.dtypes  # partition columns year, month, day are discovered from the directory layout
[('name', 'string'), ('year', 'int'), ('month', 'int'), ('day', 'int')]
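
Options can be passed directly as keyword arguments. A minimal sketch of enabling the Parquet mergeSchema option, which reconciles the schemas of all the part files being read; the directory name data/events is a hypothetical placeholder:

>>> df = spark.read.parquet('data/events', mergeSchema=True)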
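
Because paths is variadic, several locations can be loaded into a single DataFrame in one call; another sketch, assuming two hypothetical directories containing Parquet files with compatible schemas:

>>> df = spark.read.parquet('data/sales/2023', 'data/sales/2024')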