Object

org.apache.spark.streaming.flume

FlumeUtils

object FlumeUtils

Source
FlumeUtils.scala
Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def createPollingStream(jssc: JavaStreamingContext, addresses: Array[InetSocketAddress], storageLevel: StorageLevel, maxBatchSize: Int, parallelism: Int): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available.

    addresses
    List of InetSocketAddresses on which the Spark Sink is running
    storageLevel
    Storage level to use for storing the received objects
    maxBatchSize
    Maximum number of events to be pulled from the Spark Sink in a single RPC call
    parallelism
    Number of concurrent requests this stream should send to the sink. Note that a higher number of concurrent requests will result in this stream using more threads.

  7. def createPollingStream(jssc: JavaStreamingContext, addresses: Array[InetSocketAddress], storageLevel: StorageLevel): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available. It uses a batch size of 1000 events and 5 threads to pull data.

    addresses
    List of InetSocketAddresses on which the Spark Sink is running
    storageLevel
    Storage level to use for storing the received objects

  8. def createPollingStream(jssc: JavaStreamingContext, hostname: String, port: Int, storageLevel: StorageLevel): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available. It uses a batch size of 1000 events and 5 threads to pull data.

    hostname
    Hostname of the host on which the Spark Sink is running
    port
    Port on which the Spark Sink is listening
    storageLevel
    Storage level to use for storing the received objects

  9. def createPollingStream(jssc: JavaStreamingContext, hostname: String, port: Int): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available. It uses a batch size of 1000 events and 5 threads to pull data.

    hostname
    Hostname of the host on which the Spark Sink is running
    port
    Port on which the Spark Sink is listening
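
    Example (a minimal sketch; since all examples here are written in Scala, this one drives the Java-oriented overload through a JavaStreamingContext, and the host name and port are illustrative assumptions):

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.Seconds
      import org.apache.spark.streaming.api.java.JavaStreamingContext
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("JavaFlumePollingExample")
      val jssc = new JavaStreamingContext(conf, Seconds(10))
      // Poll the Spark Sink at flume-agent.example.com:9999 with the default
      // batch size (1000 events) and 5 pulling threads
      val stream = FlumeUtils.createPollingStream(jssc, "flume-agent.example.com", 9999)
      stream.count().print()
      jssc.start()
      jssc.awaitTermination()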

  10. def createPollingStream(ssc: StreamingContext, addresses: Seq[InetSocketAddress], storageLevel: StorageLevel, maxBatchSize: Int, parallelism: Int): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available.

    addresses
    List of InetSocketAddresses representing the hosts to connect to
    storageLevel
    Storage level to use for storing the received objects
    maxBatchSize
    Maximum number of events to be pulled from the Spark Sink in a single RPC call
    parallelism
    Number of concurrent requests this stream should send to the sink. Note that a higher number of concurrent requests will result in this stream using more threads.
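
    Example (a minimal sketch of polling multiple sinks; the agent host names, port, and tuning values are illustrative assumptions, not defaults):

      import java.net.InetSocketAddress
      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("FlumePollingMultiSink")
      val ssc = new StreamingContext(conf, Seconds(10))
      // Two Flume agents, each running a Spark Sink on port 9999 (illustrative addresses)
      val addresses = Seq(
        new InetSocketAddress("agent1.example.com", 9999),
        new InetSocketAddress("agent2.example.com", 9999))
      // Pull at most 100 events per RPC call, using 5 concurrent requests
      val stream = FlumeUtils.createPollingStream(
        ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2, maxBatchSize = 100, parallelism = 5)
      stream.count().print()
      ssc.start()
      ssc.awaitTermination()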

  11. def createPollingStream(ssc: StreamingContext, addresses: Seq[InetSocketAddress], storageLevel: StorageLevel): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available. It uses a batch size of 1000 events and 5 threads to pull data.

    addresses
    List of InetSocketAddresses representing the hosts to connect to
    storageLevel
    Storage level to use for storing the received objects

  12. def createPollingStream(ssc: StreamingContext, hostname: String, port: Int, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream to be used with the Spark Sink deployed on a Flume agent. This stream polls the sink for data and pulls events as they become available. It uses a batch size of 1000 events and 5 threads to pull data.

    hostname
    Hostname of the host on which the Spark Sink is running
    port
    Port on which the Spark Sink is listening
    storageLevel
    Storage level to use for storing the received objects
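
    Example (a minimal sketch of the common single-sink case; the host name and port are illustrative assumptions):

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("FlumePollingExample")
      val ssc = new StreamingContext(conf, Seconds(10))
      // Poll the Spark Sink at flume-agent.example.com:9999; storageLevel
      // falls back to its default, StorageLevel.MEMORY_AND_DISK_SER_2
      val events = FlumeUtils.createPollingStream(ssc, "flume-agent.example.com", 9999)
      // Each SparkFlumeEvent wraps an Avro Flume event; decode its body as UTF-8 text
      events.map(e => new String(e.event.getBody.array(), "UTF-8")).print()
      ssc.start()
      ssc.awaitTermination()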

  13. def createStream(jssc: JavaStreamingContext, hostname: String, port: Int, storageLevel: StorageLevel, enableDecompression: Boolean): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    hostname
    Hostname of the worker machine to which the Flume data will be sent
    port
    Port of the worker machine to which the Flume data will be sent
    storageLevel
    Storage level to use for storing the received objects
    enableDecompression
    Whether the Netty server should decompress the input stream

  14. def createStream(jssc: JavaStreamingContext, hostname: String, port: Int, storageLevel: StorageLevel): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    hostname
    Hostname of the worker machine to which the Flume data will be sent
    port
    Port of the worker machine to which the Flume data will be sent
    storageLevel
    Storage level to use for storing the received objects

  15. def createStream(jssc: JavaStreamingContext, hostname: String, port: Int): JavaReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source. The storage level of the data will default to StorageLevel.MEMORY_AND_DISK_SER_2.

    hostname
    Hostname of the worker machine to which the Flume data will be sent
    port
    Port of the worker machine to which the Flume data will be sent
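
    Example (a minimal sketch of the Java-oriented push-based overload, again driven from Scala through a JavaStreamingContext; the host name and port are illustrative assumptions):

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.Seconds
      import org.apache.spark.streaming.api.java.JavaStreamingContext
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("JavaFlumePushExample")
      val jssc = new JavaStreamingContext(conf, Seconds(10))
      // Flume's avro sink pushes events to receiver-host.example.com:4141;
      // received data is stored at the default StorageLevel.MEMORY_AND_DISK_SER_2
      val stream = FlumeUtils.createStream(jssc, "receiver-host.example.com", 4141)
      stream.count().print()
      jssc.start()
      jssc.awaitTermination()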

  16. def createStream(ssc: StreamingContext, hostname: String, port: Int, storageLevel: StorageLevel, enableDecompression: Boolean): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    ssc
    StreamingContext object
    hostname
    Hostname of the worker machine to which the Flume data will be sent
    port
    Port of the worker machine to which the Flume data will be sent
    storageLevel
    Storage level to use for storing the received objects
    enableDecompression
    Whether the Netty server should decompress the input stream
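
    Example (a sketch with decompression enabled; it assumes the sending Flume agent's Avro sink is configured with compression-type = deflate, and the host name and port are illustrative assumptions):

      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("FlumeCompressedPush")
      val ssc = new StreamingContext(conf, Seconds(10))
      // Listen on receiver-host.example.com:4141 for compressed events pushed by
      // Flume, letting the Netty server decompress them on arrival
      val events = FlumeUtils.createStream(
        ssc, "receiver-host.example.com", 4141,
        StorageLevel.MEMORY_AND_DISK_SER_2, enableDecompression = true)
      events.count().print()
      ssc.start()
      ssc.awaitTermination()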

  17. def createStream(ssc: StreamingContext, hostname: String, port: Int, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2): ReceiverInputDStream[SparkFlumeEvent]

    Creates an input stream from a Flume source.

    ssc
    StreamingContext object
    hostname
    Hostname of the worker machine to which the Flume data will be sent
    port
    Port of the worker machine to which the Flume data will be sent
    storageLevel
    Storage level to use for storing the received objects
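
    Example (a minimal push-based sketch; the Flume agent must be configured with an avro sink pointing at the chosen host and port, and the values shown are illustrative assumptions):

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      val conf = new SparkConf().setAppName("FlumePushExample")
      val ssc = new StreamingContext(conf, Seconds(10))
      // Flume's avro sink pushes events to receiver-host.example.com:4141
      val events = FlumeUtils.createStream(ssc, "receiver-host.example.com", 4141)
      events.map(e => new String(e.event.getBody.array(), "UTF-8")).print()
      ssc.start()
      ssc.awaitTermination()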

  18. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  20. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  21. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  22. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  23. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  24. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  25. final def notify(): Unit

    Definition Classes
    AnyRef
  26. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  27. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  28. def toString(): String

    Definition Classes
    AnyRef → Any
  29. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  30. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
