org.apache.spark.rdd
Class JdbcRDD<T>

Object
  extended by org.apache.spark.rdd.RDD<T>
      extended by org.apache.spark.rdd.JdbcRDD<T>
All Implemented Interfaces:
java.io.Serializable, Logging

public class JdbcRDD<T>
extends RDD<T>
implements Logging

An RDD that executes an SQL query on a JDBC connection and reads results. For a usage example, see the test case JdbcRDDSuite.

param: getConnection a function that returns an open Connection. The RDD takes care of closing the connection.
param: sql the text of the query. The query must contain two ? placeholders for parameters used to partition the results. E.g. "select title, author from books where ? <= id and id <= ?"
param: lowerBound the minimum value of the first placeholder
param: upperBound the maximum value of the second placeholder. The lower and upper bounds are inclusive.
param: numPartitions the number of partitions. Given a lowerBound of 1, an upperBound of 20, and a numPartitions of 2, the query would be executed twice, once with (1, 10) and once with (11, 20).
param: mapRow a function from a ResultSet to a single row of the desired result type(s). This should only call getInt, getString, etc.; the RDD takes care of calling next. The default maps a ResultSet to an array of Object.
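
A minimal Scala sketch of constructing the RDD directly (not taken from JdbcRDDSuite): it assumes an in-memory H2 database reachable through DriverManager and a hypothetical "books" table whose integer id column spans 1 to 20.

    import java.sql.DriverManager
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.JdbcRDD

    val sc = new SparkContext("local", "JdbcRDDExample")   // or any existing SparkContext

    // getConnection is invoked once per partition; the RDD closes the connection
    // when the partition has been fully read.
    val rdd = new JdbcRDD(
      sc,
      () => DriverManager.getConnection("jdbc:h2:mem:testdb"),
      "SELECT title, author FROM books WHERE ? <= id AND id <= ?",
      1, 20,                                    // inclusive bounds, bound to the two ? placeholders
      2,                                        // two partitions: one runs (1, 10), the other (11, 20)
      rs => (rs.getString(1), rs.getString(2))) // mapRow reads the current row only; the RDD calls next()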

See Also:
Serialized Form

Nested Class Summary
static interface JdbcRDD.ConnectionFactory
           
 
Constructor Summary
JdbcRDD(SparkContext sc, scala.Function0<java.sql.Connection> getConnection, String sql, long lowerBound, long upperBound, int numPartitions, scala.Function1<java.sql.ResultSet,T> mapRow, scala.reflect.ClassTag<T> evidence$1)
           
 
Method Summary
 scala.collection.Iterator<T> compute(Partition thePart, TaskContext context)
          :: DeveloperApi :: Implemented by subclasses to compute a given partition.
static JavaRDD<Object[]> create(JavaSparkContext sc, JdbcRDD.ConnectionFactory connectionFactory, String sql, long lowerBound, long upperBound, int numPartitions)
          Create an RDD that executes an SQL query on a JDBC connection and reads results.
static <T> JavaRDD<T> create(JavaSparkContext sc, JdbcRDD.ConnectionFactory connectionFactory, String sql, long lowerBound, long upperBound, int numPartitions, Function<java.sql.ResultSet,T> mapRow)
          Create an RDD that executes an SQL query on a JDBC connection and reads results.
 Partition[] getPartitions()
          Implemented by subclasses to return the set of partitions in this RDD.
static Object[] resultSetToObjectArray(java.sql.ResultSet rs)
           
 
Methods inherited from class org.apache.spark.rdd.RDD
aggregate, cache, cartesian, checkpoint, checkpointData, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, creationSite, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, filterWith, first, flatMap, flatMapWith, fold, foreach, foreachPartition, foreachWith, getCheckpointFile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, map, mapPartitions, mapPartitionsWithContext, mapPartitionsWithIndex, mapPartitionsWithSplit, mapWith, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, scope, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toArray, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeReduce, union, unpersist, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipWithIndex, zipWithUniqueId
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
 
Methods inherited from interface org.apache.spark.Logging
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
 

Constructor Detail

JdbcRDD

public JdbcRDD(SparkContext sc,
               scala.Function0<java.sql.Connection> getConnection,
               String sql,
               long lowerBound,
               long upperBound,
               int numPartitions,
               scala.Function1<java.sql.ResultSet,T> mapRow,
               scala.reflect.ClassTag<T> evidence$1)
Method Detail

resultSetToObjectArray

public static Object[] resultSetToObjectArray(java.sql.ResultSet rs)

create

public static <T> JavaRDD<T> create(JavaSparkContext sc,
                                    JdbcRDD.ConnectionFactory connectionFactory,
                                    String sql,
                                    long lowerBound,
                                    long upperBound,
                                    int numPartitions,
                                    Function<java.sql.ResultSet,T> mapRow)
Create an RDD that executes an SQL query on a JDBC connection and reads results. For a usage example, see the test case JavaAPISuite.testJavaJdbcRDD.

Parameters:
sc - the JavaSparkContext used to create the RDD
connectionFactory - a factory that returns an open Connection. The RDD takes care of closing the connection.
sql - the text of the query. The query must contain two ? placeholders for parameters used to partition the results. E.g. "select title, author from books where ? <= id and id <= ?"
lowerBound - the minimum value of the first placeholder
upperBound - the maximum value of the second placeholder. The lower and upper bounds are inclusive.
numPartitions - the number of partitions. Given a lowerBound of 1, an upperBound of 20, and a numPartitions of 2, the query would be executed twice, once with (1, 10) and once with (11, 20).
mapRow - a function from a ResultSet to a single row of the desired result type(s). This should only call getInt, getString, etc.; the RDD takes care of calling next. The default maps a ResultSet to an array of Object.
Returns:
a JavaRDD of rows produced by applying mapRow to each result row
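
A hedged sketch of calling this factory, written in Scala with explicit anonymous classes so it does not rely on SAM conversion; it reuses the hypothetical H2 database and "books" table from the class-level sketch above, and `sc` is an existing SparkContext.

    import java.sql.{Connection, DriverManager, ResultSet}
    import org.apache.spark.api.java.JavaSparkContext
    import org.apache.spark.api.java.function.{Function => JFunction}
    import org.apache.spark.rdd.JdbcRDD

    val jsc = new JavaSparkContext(sc)   // wrap the existing SparkContext

    val connectionFactory = new JdbcRDD.ConnectionFactory {
      override def getConnection: Connection = DriverManager.getConnection("jdbc:h2:mem:testdb")
    }

    val mapRow = new JFunction[ResultSet, String] {
      override def call(rs: ResultSet): String = rs.getString(1)   // read columns only; the RDD advances the cursor
    }

    // titles is a JavaRDD[String]; each partition runs the query with its own bound pair.
    val titles = JdbcRDD.create(
      jsc, connectionFactory,
      "SELECT title FROM books WHERE ? <= id AND id <= ?",
      1L, 20L, 2, mapRow)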

create

public static JavaRDD<Object[]> create(JavaSparkContext sc,
                                       JdbcRDD.ConnectionFactory connectionFactory,
                                       String sql,
                                       long lowerBound,
                                       long upperBound,
                                       int numPartitions)
Create an RDD that executes an SQL query on a JDBC connection and reads results. Each row is converted into an Object array. For a usage example, see the test case JavaAPISuite.testJavaJdbcRDD.

Parameters:
sc - the JavaSparkContext used to create the RDD
connectionFactory - a factory that returns an open Connection. The RDD takes care of closing the connection.
sql - the text of the query. The query must contain two ? placeholders for parameters used to partition the results. E.g. "select title, author from books where ? <= id and id <= ?"
lowerBound - the minimum value of the first placeholder
upperBound - the maximum value of the second placeholder. The lower and upper bounds are inclusive.
numPartitions - the number of partitions. Given a lowerBound of 1, an upperBound of 20, and a numPartitions of 2, the query would be executed twice, once with (1, 10) and once with (11, 20).
Returns:
a JavaRDD of Object arrays, one per result row
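
A hedged follow-up using this untyped variant, reusing jsc and connectionFactory from the sketch above: each row comes back as an Object array in column order, so downstream code casts by position.

    // rows is a JavaRDD[Array[Object]] with one array per result row
    val rows = JdbcRDD.create(
      jsc, connectionFactory,
      "SELECT title, author FROM books WHERE ? <= id AND id <= ?",
      1L, 20L, 2)

    // Extract the first column; the cast mirrors the positional Object[] layout.
    val titleStrings = rows.rdd.map(row => row(0).asInstanceOf[String])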

getPartitions

public Partition[] getPartitions()
Description copied from class: RDD
Implemented by subclasses to return the set of partitions in this RDD. This method will only be called once, so it is safe to implement a time-consuming computation in it.

Returns:
the array of partitions of this RDD

compute

public scala.collection.Iterator<T> compute(Partition thePart,
                                            TaskContext context)
Description copied from class: RDD
:: DeveloperApi :: Implemented by subclasses to compute a given partition.

Specified by:
compute in class RDD<T>
Parameters:
thePart - the partition to compute
context - the TaskContext of the running task
Returns:
an iterator over the rows of the given partition