org.apache.spark.streaming.kafka
Class KafkaUtilsPythonHelper

Object
  extended by org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper

public class KafkaUtilsPythonHelper
extends Object

This is a helper class that wraps KafkaUtils.createStream() in a more Python-friendly class and function so that it can be easily instantiated and called from Python's KafkaUtils (see SPARK-6027).

The zero-arg constructor lets the class be instantiated from its Class object via classOf[KafkaUtilsPythonHelper].newInstance(), and createStream() supplies the known parameters itself rather than requiring them to be passed from Python.
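The instantiation pattern can be sketched in pure Python (a stand-in class, since driving the real JVM helper requires a running SparkContext and Py4J; the stub class and its defaults below are illustrative, not part of Spark):

```python
# Pure-Python analogue of the zero-arg-constructor pattern described above:
# look a class up by name, then instantiate it with no arguments, just as
# the JVM side does with classOf[KafkaUtilsPythonHelper].newInstance().

class KafkaUtilsPythonHelperStub:
    """Stand-in with a zero-arg constructor, mirroring the JVM helper."""

    def create_stream(self, kafka_params, topics):
        # The real helper fills in known parameters on the JVM side so the
        # Python caller only supplies what varies; this default is made up.
        defaults = {"serializer.class": "example.IllustrativeEncoder"}
        return {**defaults, **kafka_params}, topics

# Reflective lookup by name, then zero-arg instantiation.
cls = globals()["KafkaUtilsPythonHelperStub"]
helper = cls()
params, topics = helper.create_stream({"group.id": "g1"}, {"logs": 1})
```

The point of the pattern is that the Python side never needs to know a constructor signature: loading the class by name and calling the no-argument constructor is enough.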


Constructor Summary
KafkaUtilsPythonHelper()
           
 
Method Summary
 Broker createBroker(String host, Integer port)
           
 JavaPairInputDStream<byte[],byte[]> createDirectStream(JavaStreamingContext jssc, java.util.Map<String,String> kafkaParams, java.util.Set<String> topics, java.util.Map<kafka.common.TopicAndPartition,Long> fromOffsets)
           
 OffsetRange createOffsetRange(String topic, Integer partition, Long fromOffset, Long untilOffset)
           
 JavaPairRDD<byte[],byte[]> createRDD(JavaSparkContext jsc, java.util.Map<String,String> kafkaParams, java.util.List<OffsetRange> offsetRanges, java.util.Map<kafka.common.TopicAndPartition,Broker> leaders)
           
 JavaPairReceiverInputDStream<byte[],byte[]> createStream(JavaStreamingContext jssc, java.util.Map<String,String> kafkaParams, java.util.Map<String,Integer> topics, StorageLevel storageLevel)
           
 kafka.common.TopicAndPartition createTopicAndPartition(String topic, Integer partition)
           
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

KafkaUtilsPythonHelper

public KafkaUtilsPythonHelper()

Method Detail

createStream

public JavaPairReceiverInputDStream<byte[],byte[]> createStream(JavaStreamingContext jssc,
                                                                java.util.Map<String,String> kafkaParams,
                                                                java.util.Map<String,Integer> topics,
                                                                StorageLevel storageLevel)
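For the receiver-based createStream(), the topics map pairs each topic name with the number of threads used to consume it, and kafkaParams carries the consumer configuration. A sketch of the Python-side arguments (host names and group id are made-up values; the actual call goes through pyspark.streaming.kafka.KafkaUtils and needs a live StreamingContext):

```python
# Illustrative arguments for the receiver-based stream. This API is
# configured against ZooKeeper, not the brokers directly.
kafka_params = {
    "zookeeper.connect": "zk1:2181",   # ZooKeeper quorum (assumed host)
    "group.id": "my-consumer-group",   # consumer group id (assumed name)
}

# topic name -> number of threads to consume that topic with
topics = {"logs": 1, "metrics": 2}

# storageLevel would be a pyspark StorageLevel in a real call; it controls
# how the receiver caches the incoming blocks.
```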

createRDD

public JavaPairRDD<byte[],byte[]> createRDD(JavaSparkContext jsc,
                                            java.util.Map<String,String> kafkaParams,
                                            java.util.List<OffsetRange> offsetRanges,
                                            java.util.Map<kafka.common.TopicAndPartition,Broker> leaders)
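createRDD() reads a fixed batch of offset ranges; the leaders map lets the caller pin each TopicAndPartition to a known Broker so leader lookup can be skipped. A plain-Python model of those structures (namedtuples stand in for the JVM classes; broker hosts are assumed):

```python
from collections import namedtuple

# Stand-ins for the JVM classes OffsetRange, TopicAndPartition and Broker.
OffsetRange = namedtuple("OffsetRange", "topic partition fromOffset untilOffset")
TopicAndPartition = namedtuple("TopicAndPartition", "topic partition")
Broker = namedtuple("Broker", "host port")

offset_ranges = [
    OffsetRange("logs", 0, 0, 100),
    OffsetRange("logs", 1, 50, 75),
]

# Optional: map each partition to its known leader broker.
leaders = {
    TopicAndPartition("logs", 0): Broker("broker1", 9092),  # assumed host
    TopicAndPartition("logs", 1): Broker("broker2", 9092),
}

# Total messages the resulting RDD would contain across the ranges.
total = sum(r.untilOffset - r.fromOffset for r in offset_ranges)
```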

createDirectStream

public JavaPairInputDStream<byte[],byte[]> createDirectStream(JavaStreamingContext jssc,
                                                              java.util.Map<String,String> kafkaParams,
                                                              java.util.Set<String> topics,
                                                              java.util.Map<kafka.common.TopicAndPartition,Long> fromOffsets)
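createDirectStream() can start from explicit offsets: fromOffsets maps each TopicAndPartition to the first offset to read for that partition. A sketch of the Python-side shape of those arguments (namedtuple stand-in for the JVM class; broker list is an assumed value):

```python
from collections import namedtuple

# Stand-in for the JVM class kafka.common.TopicAndPartition.
TopicAndPartition = namedtuple("TopicAndPartition", "topic partition")

# The direct (receiver-less) API talks to the brokers, not ZooKeeper.
kafka_params = {"metadata.broker.list": "broker1:9092,broker2:9092"}

# Note: a set of topic names here, unlike createStream's topic -> threads map.
topics = {"logs", "metrics"}

# Explicit starting offset per partition.
from_offsets = {
    TopicAndPartition("logs", 0): 42,
    TopicAndPartition("logs", 1): 0,
}
```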

createOffsetRange

public OffsetRange createOffsetRange(String topic,
                                     Integer partition,
                                     Long fromOffset,
                                     Long untilOffset)
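An OffsetRange covers the half-open interval [fromOffset, untilOffset), so its message count is untilOffset - fromOffset. A quick arithmetic check with a namedtuple stand-in for the JVM class:

```python
from collections import namedtuple

# Stand-in for org.apache.spark.streaming.kafka.OffsetRange.
OffsetRange = namedtuple("OffsetRange", "topic partition fromOffset untilOffset")

r = OffsetRange("logs", 0, 100, 250)
count = r.untilOffset - r.fromOffset  # offsets [100, 250) -> 150 messages
```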

createTopicAndPartition

public kafka.common.TopicAndPartition createTopicAndPartition(String topic,
                                                              Integer partition)

createBroker

public Broker createBroker(String host,
                           Integer port)