Package org.apache.spark.rdd
Class UnionRDD<T>
Object
org.apache.spark.rdd.RDD<T>
org.apache.spark.rdd.UnionRDD<T>
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Constructor Summary
UnionRDD(SparkContext sc, scala.collection.immutable.Seq<RDD<T>> rdds, scala.reflect.ClassTag<T> evidence$2)
Method Summary
void clearDependencies()
scala.collection.Iterator<T> compute(Partition s, TaskContext context)
    :: DeveloperApi :: Implemented by subclasses to compute a given partition.
scala.collection.immutable.Seq<Dependency<?>> getDependencies()
Partition[] getPartitions()
scala.collection.immutable.Seq<String> getPreferredLocations(Partition s)
scala.collection.immutable.Seq<RDD<T>> rdds()
Methods inherited from class org.apache.spark.rdd.RDD
aggregate, barrier, cache, cartesian, checkpoint, cleanShuffleDependencies, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, first, flatMap, fold, foreach, foreachPartition, getCheckpointFile, getNumPartitions, getResourceProfile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, localCheckpoint, map, mapPartitions, mapPartitionsWithEvaluator, mapPartitionsWithIndex, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeReduce, union, unpersist, withResources, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Constructor Details
UnionRDD
public UnionRDD(SparkContext sc, scala.collection.immutable.Seq<RDD<T>> rdds, scala.reflect.ClassTag<T> evidence$2)
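For illustration, a minimal Scala sketch of how this constructor might be invoked directly; the ClassTag evidence parameter is supplied implicitly by the compiler, and in practice SparkContext.union or RDD.union is the usual way to obtain a union of RDDs. The application name, master setting, and sample data below are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.UnionRDD

object UnionRDDConstructorExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical local context, for illustration only.
    val sc = new SparkContext(
      new SparkConf().setAppName("union-example").setMaster("local[2]"))

    val evens = sc.parallelize(Seq(2, 4, 6), numSlices = 2)
    val odds  = sc.parallelize(Seq(1, 3, 5), numSlices = 3)

    // Direct construction; the ClassTag[Int] evidence argument is implicit.
    val union = new UnionRDD(sc, Seq(evens, odds))

    // The union keeps one partition per parent partition: 2 + 3 = 5 here.
    println(union.partitions.length)
    println(union.collect().sorted.mkString(","))

    sc.stop()
  }
}
```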
Method Details

rdds
public scala.collection.immutable.Seq<RDD<T>> rdds()

getPartitions
public Partition[] getPartitions()

getDependencies
public scala.collection.immutable.Seq<Dependency<?>> getDependencies()

compute
public scala.collection.Iterator<T> compute(Partition s, TaskContext context)
Description copied from class: RDD
:: DeveloperApi :: Implemented by subclasses to compute a given partition.

getPreferredLocations
public scala.collection.immutable.Seq<String> getPreferredLocations(Partition s)

clearDependencies
public void clearDependencies()
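A hedged Scala sketch of how the accessors documented above can be inspected on a directly constructed UnionRDD. Note that clearDependencies() is invoked by Spark's checkpointing machinery rather than by user code; the checkpoint directory, app name, and data are illustrative assumptions.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.UnionRDD

object UnionRDDInspectionExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("union-inspection").setMaster("local[2]"))
    sc.setCheckpointDir("/tmp/union-checkpoint") // hypothetical path

    val a = sc.parallelize(1 to 4, numSlices = 2)
    val b = sc.parallelize(5 to 8, numSlices = 2)
    val union = new UnionRDD(sc, Seq(a, b))

    // rdds() returns the parent RDDs the union was built from.
    println(union.rdds.map(_.id).mkString(","))

    // One dependency per parent; the union has 2 + 2 = 4 partitions.
    println(union.dependencies.size)
    println(union.partitions.length)

    // After checkpointing and materialising the RDD, Spark truncates the
    // lineage; clearDependencies() is called internally as part of that step.
    union.checkpoint()
    union.count()
    println(union.toDebugString)

    sc.stop()
  }
}
```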