org.apache.spark.streaming.flume

FlumeUtils

object FlumeUtils

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def createPollingStream(jssc: JavaStreamingContext, addresses: Array[InetSocketAddress], storageLevel: StorageLevel, maxBatchSize: Int, parallelism: Int): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available.

    addresses

    List of InetSocketAddresses on which the Spark Sink is running

    storageLevel

    Storage level to use for storing the received objects

    maxBatchSize

    The maximum number of events to be pulled from the Spark sink in a single RPC call

    parallelism

    Number of concurrent requests this stream should send to the sink. Note that more concurrent requests will cause this stream to use more threads

    Annotations
    @Experimental()
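
    Example:

    A minimal Scala sketch of this Java-friendly overload (the agent hostnames, the port, and the pre-existing jssc context are assumptions for illustration, not part of the API):

      import java.net.InetSocketAddress
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.flume.FlumeUtils

      // Assumes `jssc` is an existing JavaStreamingContext.
      val addresses = Array(
        new InetSocketAddress("agent-1", 9988),
        new InetSocketAddress("agent-2", 9988))

      // Poll the Spark Sinks with 5 concurrent requests, pulling at
      // most 1000 events per RPC call.
      val events = FlumeUtils.createPollingStream(
        jssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2, 1000, 5)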
  9. def createPollingStream(jssc: JavaStreamingContext, addresses: Array[InetSocketAddress], storageLevel: StorageLevel): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available, using a batch size of 1000 events and 5 threads to pull data.

    addresses

    List of InetSocketAddresses on which the Spark Sink is running.

    storageLevel

    Storage level to use for storing the received objects

    Annotations
    @Experimental()
  10. def createPollingStream(jssc: JavaStreamingContext, hostname: String, port: Int, storageLevel: StorageLevel): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available, using a batch size of 1000 events and 5 threads to pull data.

    hostname

    Hostname of the host on which the Spark Sink is running

    port

    Port of the host at which the Spark Sink is listening

    storageLevel

    Storage level to use for storing the received objects

    Annotations
    @Experimental()
  11. def createPollingStream(jssc: JavaStreamingContext, hostname: String, port: Int): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available, using a batch size of 1000 events and 5 threads to pull data.

    hostname

    Hostname of the host on which the Spark Sink is running

    port

    Port of the host at which the Spark Sink is listening

    Annotations
    @Experimental()
  12. def createPollingStream(ssc: StreamingContext, addresses: Seq[InetSocketAddress], storageLevel: StorageLevel, maxBatchSize: Int, parallelism: Int): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available.

    addresses

    List of InetSocketAddresses representing the hosts to connect to.

    storageLevel

    Storage level to use for storing the received objects

    maxBatchSize

    Maximum number of events to be pulled from the Spark sink in a single RPC call

    parallelism

    Number of concurrent requests this stream should send to the sink. Note that more concurrent requests will cause this stream to use more threads

    Annotations
    @Experimental()
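
    Example:

    A minimal sketch connecting to Spark Sinks on two agents (the hostnames, the port, and the pre-existing ssc context are assumptions for illustration):

      import java.net.InetSocketAddress
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.flume.FlumeUtils

      // Assumes `ssc` is an existing StreamingContext.
      val addresses = Seq(
        new InetSocketAddress("agent-1", 9988),
        new InetSocketAddress("agent-2", 9988))

      // Pull at most 500 events per RPC call, with 10 concurrent
      // requests spread across the two sinks.
      val events = FlumeUtils.createPollingStream(
        ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2, 500, 10)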
  13. def createPollingStream(ssc: StreamingContext, addresses: Seq[InetSocketAddress], storageLevel: StorageLevel): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available, using a batch size of 1000 events and 5 threads to pull data.

    addresses

    List of InetSocketAddresses representing the hosts to connect to.

    storageLevel

    Storage level to use for storing the received objects

    Annotations
    @Experimental()
  14. def createPollingStream(ssc: StreamingContext, hostname: String, port: Int, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream for use with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available, using a batch size of 1000 events and 5 threads to pull data.

    hostname

    Address of the host on which the Spark Sink is running

    port

    Port of the host at which the Spark Sink is listening

    storageLevel

    Storage level to use for storing the received objects

    Annotations
    @Experimental()
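
    Example:

    A minimal end-to-end sketch (the application name, hostname, port, and batch interval are placeholder values):

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("FlumePollingExample")
      val ssc = new StreamingContext(conf, Seconds(10))

      // Poll the Spark Sink with the defaults: batch size 1000, 5
      // threads, and StorageLevel.MEMORY_AND_DISK_SER_2.
      val events = FlumeUtils.createPollingStream(ssc, "sink-host", 9988)
      events.map(e => new String(e.event.getBody.array())).print()

      ssc.start()
      ssc.awaitTermination()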
  15. def createStream(jssc: JavaStreamingContext, hostname: String, port: Int, storageLevel: StorageLevel, enableDecompression: Boolean): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    hostname

    Hostname of the slave machine to which the Flume data will be sent

    port

    Port of the slave machine to which the Flume data will be sent

    storageLevel

    Storage level to use for storing the received objects

    enableDecompression

    Whether the Netty server should decompress the input stream

  16. def createStream(jssc: JavaStreamingContext, hostname: String, port: Int, storageLevel: StorageLevel): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    hostname

    Hostname of the slave machine to which the Flume data will be sent

    port

    Port of the slave machine to which the Flume data will be sent

    storageLevel

    Storage level to use for storing the received objects

  17. def createStream(jssc: JavaStreamingContext, hostname: String, port: Int): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source. The data will be stored at the default storage level, StorageLevel.MEMORY_AND_DISK_SER_2.

    hostname

    Hostname of the slave machine to which the Flume data will be sent

    port

    Port of the slave machine to which the Flume data will be sent

  18. def createStream(ssc: StreamingContext, hostname: String, port: Int, storageLevel: StorageLevel, enableDecompression: Boolean): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    ssc

    StreamingContext object

    hostname

    Hostname of the slave machine to which the Flume data will be sent

    port

    Port of the slave machine to which the Flume data will be sent

    storageLevel

    Storage level to use for storing the received objects

    enableDecompression

    Whether the Netty server should decompress the input stream
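
    Example:

    A minimal sketch (the hostname and port are placeholder values; the Flume agent's Avro sink must be configured to send compressed events to this address):

      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.flume.FlumeUtils

      // Assumes `ssc` is an existing StreamingContext. The receiver
      // listens on receiver-host:4141 and decompresses the events it
      // receives from the Flume agent.
      val events = FlumeUtils.createStream(
        ssc, "receiver-host", 4141,
        StorageLevel.MEMORY_AND_DISK_SER_2, enableDecompression = true)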

  19. def createStream(ssc: StreamingContext, hostname: String, port: Int, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    ssc

    StreamingContext object

    hostname

    Hostname of the slave machine to which the Flume data will be sent

    port

    Port of the slave machine to which the Flume data will be sent

    storageLevel

    Storage level to use for storing the received objects
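
    Example:

    A minimal end-to-end sketch (the application name, hostname, port, and batch interval are placeholder values; the Flume agent's Avro sink must be configured to send to this address):

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("FlumePushExample")
      val ssc = new StreamingContext(conf, Seconds(10))

      // The receiver binds to receiver-host:4141 and accepts events
      // pushed by the Flume Avro sink, stored at the default
      // StorageLevel.MEMORY_AND_DISK_SER_2.
      val events = FlumeUtils.createStream(ssc, "receiver-host", 4141)
      events.count().map(c => "Received " + c + " Flume events").print()

      ssc.start()
      ssc.awaitTermination()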

  20. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  22. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  23. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  24. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  25. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  26. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  27. final def notify(): Unit

    Definition Classes
    AnyRef
  28. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  29. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  30. def toString(): String

    Definition Classes
    AnyRef → Any
  31. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
