public class OrderedRDDFunctions<K,V,P extends scala.Product2<K,V>>
extends Object
implements org.apache.spark.internal.Logging, scala.Serializable
Extra functions available on RDDs of (key, value) pairs where the key is sortable through an implicit conversion. They will work with any key type K that has an implicit Ordering[K] in scope. Ordering objects already exist for all of the standard primitive types. Users can also define their own orderings for custom types, or to override the default ordering. The implicit ordering that is in the closest scope will be used.
import java.util.Locale

import org.apache.spark.SparkContext._

val rdd: RDD[(String, Int)] = ...
implicit val caseInsensitiveOrdering: Ordering[String] = new Ordering[String] {
  override def compare(a: String, b: String): Int =
    a.toLowerCase(Locale.ROOT).compare(b.toLowerCase(Locale.ROOT))
}

// Sort by key, using the above case-insensitive ordering.
rdd.sortByKey()
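Because the snippet above needs a live SparkContext and leaves the RDD elided, the ordering mechanism itself can be exercised with plain Scala collections. The following hedged sketch (no Spark required; the object and function names are illustrative, not part of the Spark API) shows the same implicit case-insensitive Ordering being picked up, exactly as sortByKey() would pick it up for an RDD[(String, Int)]:

```scala
import java.util.Locale

object CaseInsensitiveSort {
  // The same case-insensitive Ordering as in the Spark example above;
  // the implicit Ordering in the closest scope is the one sortByKey() uses.
  implicit val caseInsensitiveOrdering: Ordering[String] = new Ordering[String] {
    override def compare(a: String, b: String): Int =
      a.toLowerCase(Locale.ROOT).compare(b.toLowerCase(Locale.ROOT))
  }

  // Plain-Scala stand-in for rdd.sortByKey(): sorts keys with the implicit Ordering.
  def sortKeys(keys: List[String]): List[String] = keys.sorted

  def main(args: Array[String]): Unit =
    println(sortKeys(List("banana", "Apple", "cherry")).mkString(", "))
}
```

With the default String ordering, "Apple" would sort before "banana" anyway; swap "Apple" for "apple" and only the case-insensitive ordering keeps the alphabetical result.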
| Constructor and Description |
| --- |
| `OrderedRDDFunctions(RDD<P> self, scala.math.Ordering<K> evidence$1, scala.reflect.ClassTag<K> evidence$2, scala.reflect.ClassTag<V> evidence$3, scala.reflect.ClassTag<P> evidence$4)` |
| Modifier and Type | Method and Description |
| --- | --- |
| `RDD<P>` | `filterByRange(K lower, K upper)` Returns an RDD containing only the elements in the inclusive range `lower` to `upper`. |
| `RDD<scala.Tuple2<K,V>>` | `repartitionAndSortWithinPartitions(Partitioner partitioner)` Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. |
| `RDD<scala.Tuple2<K,V>>` | `sortByKey(boolean ascending, int numPartitions)` Sort the RDD by key, so that each partition contains a sorted range of the elements. |
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public RDD<P> filterByRange(K lower, K upper)

Returns an RDD containing only the elements in the inclusive range `lower` to `upper`. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.

Parameters:
lower - (undocumented)
upper - (undocumented)
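The inclusive-range semantics of filterByRange can be modeled on a local collection. This is a hedged sketch, not the Spark implementation: plain Scala pairs stand in for the RDD, and the `filterByRange` helper below is an illustrative name, not a Spark API call.

```scala
object FilterByRangeSketch {
  // Inclusive-range filter over (key, value) pairs, modeling the contract of
  // OrderedRDDFunctions.filterByRange: keep exactly the keys k with
  // lower <= k <= upper under the implicit Ordering.
  def filterByRange[K, V](pairs: Seq[(K, V)], lower: K, upper: K)
                         (implicit ord: Ordering[K]): Seq[(K, V)] =
    pairs.filter { case (k, _) => ord.gteq(k, lower) && ord.lteq(k, upper) }

  def main(args: Array[String]): Unit = {
    val pairs = Seq(("a", 1), ("c", 2), ("e", 3), ("g", 4))
    // Both endpoints are inclusive: "c" and "e" survive, "a" and "g" do not.
    println(filterByRange(pairs, "b", "e"))
  }
}
```

On a range-partitioned RDD, Spark additionally prunes whole partitions whose key range cannot overlap [lower, upper]; the sketch only shows the per-record predicate.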
public RDD<scala.Tuple2<K,V>> repartitionAndSortWithinPartitions(Partitioner partitioner)
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

Parameters:
partitioner - (undocumented)
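The contract here is that records are grouped by the partitioner and sorted by key only within each partition, not globally. A hedged local sketch of that contract (the modulo partitioning below mirrors a HashPartitioner and is an assumption for illustration; the object and method names are not Spark APIs):

```scala
object RepartitionAndSortSketch {
  // Model of repartitionAndSortWithinPartitions: assign each record to a
  // partition, then sort records by key inside each partition only.
  def repartitionAndSort[V](pairs: Seq[(Int, V)],
                            numPartitions: Int): Map[Int, Seq[(Int, V)]] =
    pairs
      // HashPartitioner-style assignment: non-negative hash modulo numPartitions.
      .groupBy { case (k, _) => ((k.hashCode % numPartitions) + numPartitions) % numPartitions }
      // Sort by key within each partition; no ordering holds across partitions.
      .map { case (pid, recs) => pid -> recs.sortBy(_._1) }

  def main(args: Array[String]): Unit = {
    val out = repartitionAndSort(Seq((3, "c"), (1, "a"), (4, "d"), (2, "b")), 2)
    out.toSeq.sortBy(_._1).foreach { case (pid, recs) => println(s"partition $pid: $recs") }
  }
}
```

In Spark the per-partition sort happens inside the shuffle itself (sort-based shuffle writes sorted runs), which is why this beats a repartition followed by a separate sort step.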
public RDD<scala.Tuple2<K,V>> sortByKey(boolean ascending, int numPartitions)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
ascending - (undocumented)
numPartitions - (undocumented)
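Unlike repartitionAndSortWithinPartitions, sortByKey gives a total order: each partition holds a contiguous, sorted range of keys, so concatenating the partitions in order yields a fully sorted record list. A hedged local sketch of that contract (the even split into ranges below is an illustrative simplification; Spark instead samples keys via a RangePartitioner to choose the boundaries):

```scala
object SortByKeySketch {
  // Model of sortByKey: globally order the records, then slice them into
  // numPartitions contiguous ranges, so every key in partition i precedes
  // every key in partition i + 1.
  def sortByKey[V](pairs: Seq[(Int, V)],
                   ascending: Boolean,
                   numPartitions: Int): Seq[Seq[(Int, V)]] = {
    val sorted  = pairs.sortBy(_._1)
    val ordered = if (ascending) sorted else sorted.reverse
    // Even split into contiguous ranges (Spark samples keys instead).
    val chunk = math.max(1, math.ceil(ordered.size.toDouble / numPartitions).toInt)
    ordered.grouped(chunk).toSeq
  }

  def main(args: Array[String]): Unit = {
    val parts = sortByKey(Seq((4, "d"), (1, "a"), (3, "c"), (2, "b")),
                          ascending = true, numPartitions = 2)
    // Concatenating partitions in order gives a fully sorted record list,
    // which is what collect/save on the resulting RDD produces.
    parts.zipWithIndex.foreach { case (recs, i) => println(s"part-$i: $recs") }
  }
}
```

The per-partition files written by save correspond to these ranges, which is why the part-X files come out in key order.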