Main entry point for Spark Streaming functionality. A StreamingContext
represents the connection to a Spark cluster, and can be used to create
DStreams from various input sources. It can be created from an existing
SparkContext. After creating and transforming DStreams, the streaming
computation can be started and stopped using context.start() and
context.stop(), respectively. context.awaitTermination() allows the
current thread to wait for the termination of the context, whether by
stop() or by an exception.
batchDuration: the time interval (in seconds) at which streaming data will be divided into batches.
addStreamingListener(streamingListener)
Add a StreamingListener object for receiving system events related to streaming.
awaitTermination([timeout])
Wait for the execution to stop.
binaryRecordsStream(directory, recordLength)
Create an input stream that monitors a Hadoop-compatible file system for new files and reads them as flat binary files with records of fixed length.
checkpoint(directory)
Set the context to periodically checkpoint the DStream operations for master fault-tolerance.
getActive()
Return either the currently active StreamingContext (i.e., a context that has been started but not stopped) or None.
getActiveOrCreate(checkpointPath, setupFunc)
Either return the active StreamingContext (i.e., a context started but not stopped), or recreate a StreamingContext from checkpoint data, or create a new StreamingContext using the provided setupFunc.
getOrCreate(checkpointPath, setupFunc)
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
queueStream(rdds[, oneAtATime, default])
Create an input stream from a queue of RDDs or lists.
remember(duration)
Set each DStream in this context to remember the RDDs it generated in the last given duration.
socketTextStream(hostname, port[, storageLevel])
Create an input stream from a TCP source hostname:port.
start()
Start the execution of the streams.
stop([stopSparkContext, stopGraceFully])
Stop the execution of the streams, with the option of ensuring that all received data has been processed.
textFileStream(directory)
Create an input stream that monitors a Hadoop-compatible file system for new files and reads them as text files.
transform(dstreams, transformFunc)
Create a new DStream in which each RDD is generated by applying a function on the RDDs of the given DStreams.
union(*dstreams)
Create a unified DStream from multiple DStreams of the same type and same slide duration.
sparkContext
Return the SparkContext associated with this StreamingContext.