Object

org.apache.spark.streaming.kinesis

KinesisUtils

object KinesisUtils

Source: KinesisUtils.scala
Linear Supertypes: AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def createStream(jssc: JavaStreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel, awsAccessKeyId: String, awsSecretKey: String): JavaReceiverInputDStream[Array[Byte]]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    jssc

    Java StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    awsAccessKeyId

    AWS AccessKeyId (if null, will use DefaultAWSCredentialsProviderChain)

    awsSecretKey

    AWS SecretKey (if null, will use DefaultAWSCredentialsProviderChain)

    Note

    The given AWS credentials will get saved in DStream checkpoints if checkpointing is enabled. Make sure that your checkpoint directory is secure.
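
    Example

    A minimal Scala sketch of calling this Java-facing overload with explicit credentials. The application name, stream name, endpoint, region, batch interval, and environment variable names are placeholders, not part of this API.

      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.Seconds
      import org.apache.spark.streaming.api.java.JavaStreamingContext
      import org.apache.spark.streaming.kinesis.KinesisUtils
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

      val jssc = new JavaStreamingContext(
        new SparkConf().setAppName("KinesisByteStream"), Seconds(10))

      // Keys are read from hypothetical environment variables; remember that they
      // end up in DStream checkpoints when checkpointing is enabled.
      val byteStream = KinesisUtils.createStream(
        jssc, "myKinesisApp", "myStream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.LATEST, Seconds(10), StorageLevel.MEMORY_AND_DISK_2,
        sys.env("AWS_ACCESS_KEY_ID"), sys.env("AWS_SECRET_KEY"))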

  7. def createStream(jssc: JavaStreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel): JavaReceiverInputDStream[Array[Byte]]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    jssc

    Java StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    Note

    The AWS credentials will be discovered using the DefaultAWSCredentialsProviderChain on the workers. See AWS documentation to understand how DefaultAWSCredentialsProviderChain gets the AWS credentials.
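
    Example

    A minimal Scala sketch of calling this Java-facing overload; credentials come from the default provider chain, and the application name, stream name, endpoint, region, and batch interval below are placeholders.

      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.Seconds
      import org.apache.spark.streaming.api.java.JavaStreamingContext
      import org.apache.spark.streaming.kinesis.KinesisUtils
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

      val jssc = new JavaStreamingContext(
        new SparkConf().setAppName("KinesisByteStream"), Seconds(10))

      val byteStream = KinesisUtils.createStream(
        jssc, "myKinesisApp", "myStream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.LATEST, Seconds(10), StorageLevel.MEMORY_AND_DISK_2)

      // Print a sample of each batch of raw payload byte arrays.
      byteStream.print()
      jssc.start()
      jssc.awaitTermination()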

  8. def createStream[T](jssc: JavaStreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel, messageHandler: Function[Record, T], recordClass: Class[T], awsAccessKeyId: String, awsSecretKey: String): JavaReceiverInputDStream[T]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    jssc

    Java StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    messageHandler

    A custom message handler that can generate a generic output from a Kinesis Record, which contains both message data and metadata.

    recordClass

    Class of the records in DStream

    awsAccessKeyId

    AWS AccessKeyId (if null, will use DefaultAWSCredentialsProviderChain)

    awsSecretKey

    AWS SecretKey (if null, will use DefaultAWSCredentialsProviderChain)

    Note

    The given AWS credentials will get saved in DStream checkpoints if checkpointing is enabled. Make sure that your checkpoint directory is secure.
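
    Example

    A sketch of calling this overload from Scala with an org.apache.spark.api.java.function.Function that turns each Record into its partition key. The application name, stream name, endpoint, region, batch interval, and environment variable names are placeholders.

      import org.apache.spark.SparkConf
      import org.apache.spark.api.java.function.{Function => JFunction}
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.Seconds
      import org.apache.spark.streaming.api.java.JavaStreamingContext
      import org.apache.spark.streaming.kinesis.KinesisUtils
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
      import com.amazonaws.services.kinesis.model.Record

      val jssc = new JavaStreamingContext(
        new SparkConf().setAppName("KinesisPartitionKeys"), Seconds(10))

      // Hypothetical handler: emit only the partition key of each record.
      val partitionKeyOf = new JFunction[Record, String] {
        override def call(record: Record): String = record.getPartitionKey
      }

      val keys = KinesisUtils.createStream(
        jssc, "myKinesisApp", "myStream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.LATEST, Seconds(10), StorageLevel.MEMORY_AND_DISK_2,
        partitionKeyOf, classOf[String],
        sys.env("AWS_ACCESS_KEY_ID"), sys.env("AWS_SECRET_KEY"))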

  9. def createStream[T](jssc: JavaStreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel, messageHandler: Function[Record, T], recordClass: Class[T]): JavaReceiverInputDStream[T]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    jssc

    Java StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    messageHandler

    A custom message handler that can generate a generic output from a Kinesis Record, which contains both message data and metadata.

    recordClass

    Class of the records in DStream

    Note

    The AWS credentials will be discovered using the DefaultAWSCredentialsProviderChain on the workers. See AWS documentation to understand how DefaultAWSCredentialsProviderChain gets the AWS credentials.

  10. def createStream(ssc: StreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel, awsAccessKeyId: String, awsSecretKey: String): ReceiverInputDStream[Array[Byte]]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    ssc

    StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    awsAccessKeyId

    AWS AccessKeyId (if null, will use DefaultAWSCredentialsProviderChain)

    awsSecretKey

    AWS SecretKey (if null, will use DefaultAWSCredentialsProviderChain)

    Note

    The given AWS credentials will get saved in DStream checkpoints if checkpointing is enabled. Make sure that your checkpoint directory is secure.
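
    Example

    A minimal Scala sketch of calling this overload with explicit credentials. The application name, stream name, endpoint, region, batch interval, and environment variable names are placeholders.

      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kinesis.KinesisUtils
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

      val ssc = new StreamingContext(new SparkConf().setAppName("KinesisBytes"), Seconds(10))

      // Keys come from hypothetical environment variables; they are written into
      // DStream checkpoints if checkpointing is enabled, so keep that directory secure.
      val bytes = KinesisUtils.createStream(
        ssc, "myKinesisApp", "myStream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.TRIM_HORIZON, Seconds(10), StorageLevel.MEMORY_AND_DISK_2,
        sys.env("AWS_ACCESS_KEY_ID"), sys.env("AWS_SECRET_KEY"))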

  11. def createStream(ssc: StreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel): ReceiverInputDStream[Array[Byte]]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    ssc

    StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    Note

    The AWS credentials will be discovered using the DefaultAWSCredentialsProviderChain on the workers. See AWS documentation to understand how DefaultAWSCredentialsProviderChain gets the AWS credentials.
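
    Example

    A minimal end-to-end Scala sketch of this overload; credentials are resolved by the DefaultAWSCredentialsProviderChain on the workers, and the application name, stream name, endpoint, region, and batch interval are placeholders.

      import java.nio.charset.StandardCharsets
      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kinesis.KinesisUtils
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

      val ssc = new StreamingContext(new SparkConf().setAppName("KinesisWordCount"), Seconds(10))

      val bytes = KinesisUtils.createStream(
        ssc, "myKinesisApp", "myStream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.LATEST, Seconds(10), StorageLevel.MEMORY_AND_DISK_2)

      // Decode each record payload as UTF-8 text and print a sample of every batch.
      bytes.map(payload => new String(payload, StandardCharsets.UTF_8)).print()
      ssc.start()
      ssc.awaitTermination()

    Each call creates a single receiver; a common pattern (used by Spark's bundled KinesisWordCountASL example) is to create one such stream per shard and union them before further processing.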

  12. def createStream[T](ssc: StreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel, messageHandler: (Record) ⇒ T, awsAccessKeyId: String, awsSecretKey: String)(implicit arg0: ClassTag[T]): ReceiverInputDStream[T]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    ssc

    StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    messageHandler

    A custom message handler that can generate a generic output from a Kinesis Record, which contains both message data and metadata.

    awsAccessKeyId

    AWS AccessKeyId (if null, will use DefaultAWSCredentialsProviderChain)

    awsSecretKey

    AWS SecretKey (if null, will use DefaultAWSCredentialsProviderChain)

    Note

    The given AWS credentials will get saved in DStream checkpoints if checkpointing is enabled. Make sure that your checkpoint directory is secure.

  13. def createStream[T](ssc: StreamingContext, kinesisAppName: String, streamName: String, endpointUrl: String, regionName: String, initialPositionInStream: InitialPositionInStream, checkpointInterval: Duration, storageLevel: StorageLevel, messageHandler: (Record) ⇒ T)(implicit arg0: ClassTag[T]): ReceiverInputDStream[T]

    Create an input stream that pulls messages from a Kinesis stream. This uses the Kinesis Client Library (KCL) to pull messages from Kinesis.

    ssc

    StreamingContext object

    kinesisAppName

    Kinesis application name used by the Kinesis Client Library (KCL) to update DynamoDB

    streamName

    Kinesis stream name

    endpointUrl

    URL of the Kinesis service (e.g., https://kinesis.us-east-1.amazonaws.com)

    regionName

    Name of region used by the Kinesis Client Library (KCL) to update DynamoDB (lease coordination and checkpointing) and CloudWatch (metrics)

    initialPositionInStream

    In the absence of Kinesis checkpoint info, this is the worker's initial starting position in the stream. The values are either the beginning of the stream per Kinesis' limit of 24 hours (InitialPositionInStream.TRIM_HORIZON) or the tip of the stream (InitialPositionInStream.LATEST).

    checkpointInterval

    Checkpoint interval for Kinesis checkpointing. See the Kinesis Spark Streaming documentation for more details on the different types of checkpoints.

    storageLevel

    Storage level to use for storing the received objects. StorageLevel.MEMORY_AND_DISK_2 is recommended.

    messageHandler

    A custom message handler that can generate a generic output from a Kinesis Record, which contains both message data and metadata.

    Note

    The AWS credentials will be discovered using the DefaultAWSCredentialsProviderChain on the workers. See AWS documentation to understand how DefaultAWSCredentialsProviderChain gets the AWS credentials.
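
    Example

    A Scala sketch of this overload with a hypothetical message handler that keeps each record's partition key next to its decoded payload. The application name, stream name, endpoint, region, and batch interval are placeholders.

      import java.nio.charset.StandardCharsets
      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kinesis.KinesisUtils
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
      import com.amazonaws.services.kinesis.model.Record

      val ssc = new StreamingContext(new SparkConf().setAppName("KinesisKeyedRecords"), Seconds(10))

      // The handler must be serializable; here it pairs the partition key with the UTF-8 payload.
      val keyedPayloads = KinesisUtils.createStream[(String, String)](
        ssc, "myKinesisApp", "myStream",
        "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
        InitialPositionInStream.TRIM_HORIZON, Seconds(10), StorageLevel.MEMORY_AND_DISK_2,
        (record: Record) => (record.getPartitionKey,
          StandardCharsets.UTF_8.decode(record.getData).toString))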

  14. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  18. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  19. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  20. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. final def notify(): Unit

    Definition Classes
    AnyRef
  22. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  23. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  24. def toString(): String

    Definition Classes
    AnyRef → Any
  25. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
