public class JdbcRDD<T> extends RDD<T> implements org.apache.spark.internal.Logging
An RDD that executes a SQL query on a JDBC connection and reads results.
param: getConnection a function that returns an open Connection. The RDD takes care of closing the connection.
param: sql the text of the query. The query must contain two ? placeholders for parameters used to partition the results. For example,
   select title, author from books where ? <= id and id <= ?
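For illustration, here is a hedged sketch of typical usage from Java via the static create helper documented below, using the example query above. The JDBC URL (jdbc:h2:mem:books), the books table, and the jsc variable are assumptions for the sketch, not part of this API.

```java
import java.sql.DriverManager;
import java.sql.ResultSet;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.rdd.JdbcRDD;

public final class BooksByIdRange {
  public static JavaRDD<String> titles(JavaSparkContext jsc) {
    // The two ? placeholders receive the per-partition id range.
    return JdbcRDD.create(
        jsc,
        () -> DriverManager.getConnection("jdbc:h2:mem:books"),   // assumed JDBC URL
        "select title, author from books where ? <= id and id <= ?",
        1L,      // lowerBound: smallest id (inclusive)
        1000L,   // upperBound: largest id (inclusive)
        10,      // numPartitions: number of id ranges queried
        (ResultSet rs) -> rs.getString(1) + " by " + rs.getString(2));
  }
}
```

With these bounds Spark runs the query once per partition, substituting a consecutive, inclusive id range for the two placeholders.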
   | Modifier and Type | Class and Description | 
|---|---|
| static interface  | JdbcRDD.ConnectionFactory | 
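JdbcRDD.ConnectionFactory has a single getConnection() method, so from Java it can be implemented either with a lambda (as in the sketch above) or with a small named class. A hedged sketch; the JDBC URL and credentials are illustrative placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.spark.rdd.JdbcRDD;

// A reusable factory; constructor arguments are illustrative placeholders.
public class SimpleConnectionFactory implements JdbcRDD.ConnectionFactory {
  private final String url;
  private final String user;
  private final String password;

  public SimpleConnectionFactory(String url, String user, String password) {
    this.url = url;
    this.user = user;
    this.password = password;
  }

  @Override
  public Connection getConnection() throws SQLException {
    // The RDD takes care of closing the connection it obtains here.
    return DriverManager.getConnection(url, user, password);
  }
}
```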
| Constructor and Description | 
|---|
| JdbcRDD(SparkContext sc, scala.Function0&lt;java.sql.Connection&gt; getConnection, String sql, long lowerBound, long upperBound, int numPartitions, scala.Function1&lt;java.sql.ResultSet,T&gt; mapRow, scala.reflect.ClassTag&lt;T&gt; evidence$1) |
| Modifier and Type | Method and Description | 
|---|---|
| scala.collection.Iterator&lt;T&gt; | compute(Partition thePart, TaskContext context) :: DeveloperApi :: Implemented by subclasses to compute a given partition. |
| static JavaRDD&lt;Object[]&gt; | create(JavaSparkContext sc, JdbcRDD.ConnectionFactory connectionFactory, String sql, long lowerBound, long upperBound, int numPartitions) Create an RDD that executes a SQL query on a JDBC connection and reads results. |
| static &lt;T&gt; JavaRDD&lt;T&gt; | create(JavaSparkContext sc, JdbcRDD.ConnectionFactory connectionFactory, String sql, long lowerBound, long upperBound, int numPartitions, Function&lt;java.sql.ResultSet,T&gt; mapRow) Create an RDD that executes a SQL query on a JDBC connection and reads results. |
| Partition[] | getPartitions() Implemented by subclasses to return the set of partitions in this RDD. | 
| static Object[] | resultSetToObjectArray(java.sql.ResultSet rs) | 
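When no mapRow function is supplied, the six-argument create above yields one Object[] per row (the default maps a ResultSet to an array of Object). A hedged sketch, with the database URL and the jsc variable as assumptions:

```java
import java.sql.DriverManager;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.rdd.JdbcRDD;

public final class RawRows {
  public static JavaRDD<Object[]> load(JavaSparkContext jsc) {
    JavaRDD<Object[]> rows = JdbcRDD.create(
        jsc,
        () -> DriverManager.getConnection("jdbc:h2:mem:books"),   // assumed URL
        "select title, author from books where ? <= id and id <= ?",
        1L, 1000L, 10);
    // Each element is one row, with one Object per selected column.
    return rows;
  }
}
```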
Methods inherited from class org.apache.spark.rdd.RDD:
aggregate, barrier, cache, cartesian, checkpoint, cleanShuffleDependencies, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, first, flatMap, fold, foreach, foreachPartition, getCheckpointFile, getNumPartitions, getResourceProfile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, localCheckpoint, map, mapPartitions, mapPartitionsWithEvaluator, mapPartitionsWithIndex, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeReduce, union, unpersist, withResources, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId

Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize

public JdbcRDD(SparkContext sc, scala.Function0&lt;java.sql.Connection&gt; getConnection, String sql, long lowerBound, long upperBound, int numPartitions, scala.Function1&lt;java.sql.ResultSet,T&gt; mapRow, scala.reflect.ClassTag&lt;T&gt; evidence$1)
public static Object[] resultSetToObjectArray(java.sql.ResultSet rs)
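resultSetToObjectArray converts the current row of a ResultSet into an Object[]; it is the conversion the mapRow-less create applies by default, and it can also be passed explicitly as the mapRow argument. A sketch, reusing the assumed jsc and JDBC URL from the earlier examples:

```java
JavaRDD<Object[]> rows = JdbcRDD.create(
    jsc,
    () -> DriverManager.getConnection("jdbc:h2:mem:books"),   // assumed URL
    "select title, author from books where ? <= id and id <= ?",
    1L, 1000L, 10,
    JdbcRDD::resultSetToObjectArray);   // same conversion the six-argument create applies
```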
public static <T> JavaRDD<T> create(JavaSparkContext sc, JdbcRDD.ConnectionFactory connectionFactory, String sql, long lowerBound, long upperBound, int numPartitions, Function<java.sql.ResultSet,T> mapRow)
Create an RDD that executes a SQL query on a JDBC connection and reads results.

 connectionFactory - a factory that returns an open Connection.
   The RDD takes care of closing the connection.
 sql - the text of the query.
   The query must contain two ? placeholders for parameters used to partition the results.
   For example,

   select title, author from books where ? <= id and id <= ?

 lowerBound - the minimum value of the first placeholder
 upperBound - the maximum value of the second placeholder
   The lower and upper bounds are inclusive.
 numPartitions - the number of partitions.
   Given a lowerBound of 1, an upperBound of 20, and a numPartitions of 2,
   the query would be executed twice, once with (1, 10) and once with (11, 20).
 mapRow - a function from a ResultSet to a single row of the desired result type(s).
   This should only call getInt, getString, etc; the RDD takes care of calling next.
   The default maps a ResultSet to an array of Object.
 sc - (undocumented)

public static JavaRDD&lt;Object[]&gt; create(JavaSparkContext sc, JdbcRDD.ConnectionFactory connectionFactory, String sql, long lowerBound, long upperBound, int numPartitions)
Create an RDD that executes a SQL query on a JDBC connection and reads results. Each row is converted into an Object array. For usage example, see test case JavaAPISuite.testJavaJdbcRDD.
 connectionFactory - a factory that returns an open Connection.
   The RDD takes care of closing the connection.
 sql - the text of the query.
   The query must contain two ? placeholders for parameters used to partition the results.
   For example,

   select title, author from books where ? <= id and id <= ?

 lowerBound - the minimum value of the first placeholder
 upperBound - the maximum value of the second placeholder
   The lower and upper bounds are inclusive.
 numPartitions - the number of partitions.
   Given a lowerBound of 1, an upperBound of 20, and a numPartitions of 2,
   the query would be executed twice, once with (1, 10) and once with (11, 20).
 sc - (undocumented)

public Partition[] getPartitions()
Implemented by subclasses to return the set of partitions in this RDD.
 The partitions in this array must satisfy the following property:
   rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }
 Specified by: getPartitions in class RDD
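How the inclusive range [lowerBound, upperBound] is divided into numPartitions per-partition ranges can be illustrated with a small stand-alone sketch. This mirrors the documented example (1..20 over 2 partitions gives (1, 10) and (11, 20)); it is an illustration of the documented behaviour, not the class's actual implementation:

```java
// Illustrative only: prints the inclusive (start, end) pair bound to the two
// ? placeholders for each partition, dividing the id range as evenly as possible.
public final class RangeSketch {
  public static void main(String[] args) {
    long lowerBound = 1, upperBound = 20;
    int numPartitions = 2;
    long length = upperBound - lowerBound + 1;
    for (int i = 0; i < numPartitions; i++) {
      long start = lowerBound + (i * length) / numPartitions;
      long end = lowerBound + ((i + 1) * length) / numPartitions - 1;
      System.out.printf("partition %d: (%d, %d)%n", i, start, end);
    }
  }
}
```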
public scala.collection.Iterator<T> compute(Partition thePart, TaskContext context)
:: DeveloperApi ::
 Implemented by subclasses to compute a given partition.
 Specified by: compute in class RDD