pyspark.sql.streaming.DataStreamReader.orc

DataStreamReader.orc(path: str, mergeSchema: Optional[bool] = None, pathGlobFilter: Union[bool, str, None] = None, recursiveFileLookup: Union[bool, str, None] = None) → DataFrame

Loads an ORC file stream, returning the result as a DataFrame.

New in version 2.3.0.

Other Parameters
Extra options

For the extra options, refer to the Data Source Option documentation for the version you use.
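
The keyword arguments shown in the signature above can also be passed to orc() directly. A minimal sketch follows; the input directory "/tmp/orc-input" is a placeholder and the option values are purely illustrative.

>>> # Placeholder path; option values are illustrative only.
>>> df = spark.readStream.schema("id LONG").orc(
...     "/tmp/orc-input", mergeSchema=True, pathGlobFilter="*.orc",
...     recursiveFileLookup=True)  # doctest: +SKIP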

Examples

Load a data stream from a temporary ORC file.

>>> import tempfile
>>> import time
>>> with tempfile.TemporaryDirectory() as d:
...     # Write a temporary ORC file for the stream to read.
...     spark.range(10).write.mode("overwrite").format("orc").save(d)
...
...     # Start a streaming query to read the ORC file.
...     q = spark.readStream.schema("id LONG").orc(d).writeStream.format("console").start()
...     time.sleep(3)
...     q.stop()
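
The schema can equivalently be supplied as a StructType instead of a DDL string; the sketch below assumes the same kind of placeholder input directory as above.

>>> from pyspark.sql.types import LongType, StructField, StructType
>>> # Placeholder path; schema is equivalent to the "id LONG" DDL string above.
>>> schema = StructType([StructField("id", LongType())])
>>> df = spark.readStream.schema(schema).orc("/tmp/orc-input")  # doctest: +SKIP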