pyspark.sql.DataFrameReader.orc

DataFrameReader.orc(path: Union[str, List[str]], mergeSchema: Optional[bool] = None, pathGlobFilter: Union[bool, str, None] = None, recursiveFileLookup: Union[bool, str, None] = None, modifiedBefore: Union[bool, str, None] = None, modifiedAfter: Union[bool, str, None] = None) → DataFrame[source]

Loads ORC files, returning the result as a DataFrame.

New in version 1.5.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
path : str or list
    Path(s) of the ORC file(s) or directory(ies) to load.
Other Parameters
Extra options

For the extra options, refer to Data Source Option for the version you use.

Examples

Write a DataFrame into an ORC file and read it back.

>>> import tempfile
>>> with tempfile.TemporaryDirectory() as d:
...     # Write a DataFrame into an ORC file
...     spark.createDataFrame(
...         [{"age": 100, "name": "Hyukjin Kwon"}]
...     ).write.mode("overwrite").format("orc").save(d)
...
...     # Read the ORC file as a DataFrame.
...     spark.read.orc(d).show()
+---+------------+
|age|        name|
+---+------------+
|100|Hyukjin Kwon|
+---+------------+