org.apache.spark.rdd
Class OrderedRDDFunctions<K,V,P extends scala.Product2<K,V>>

Object
  extended by org.apache.spark.rdd.OrderedRDDFunctions<K,V,P>
All Implemented Interfaces:
java.io.Serializable, Logging

public class OrderedRDDFunctions<K,V,P extends scala.Product2<K,V>>
extends Object
implements Logging, scala.Serializable

Extra functions available on RDDs of (key, value) pairs where the key is sortable through an implicit conversion. They will work with any key type K that has an implicit Ordering[K] in scope. Ordering objects already exist for all of the standard primitive types. Users can also define their own orderings for custom types, or to override the default ordering. The implicit ordering that is in the closest scope will be used.


   import org.apache.spark.SparkContext._

   val rdd: RDD[(String, Int)] = ...
   implicit val caseInsensitiveOrdering: Ordering[String] = new Ordering[String] {
     override def compare(a: String, b: String) = a.toLowerCase.compare(b.toLowerCase)
   }

   // Sort by key, using the above case insensitive ordering.
   rdd.sortByKey()
 

See Also:
Serialized Form

Constructor Summary
OrderedRDDFunctions(RDD<P> self, scala.math.Ordering<K> evidence$1, scala.reflect.ClassTag<K> evidence$2, scala.reflect.ClassTag<V> evidence$3, scala.reflect.ClassTag<P> evidence$4)
           
 
Method Summary
 RDD<P> filterByRange(K lower, K upper)
          Returns an RDD containing only the elements in the inclusive range lower to upper.
 RDD<scala.Tuple2<K,V>> repartitionAndSortWithinPartitions(Partitioner partitioner)
          Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
 RDD<scala.Tuple2<K,V>> sortByKey(boolean ascending, int numPartitions)
          Sort the RDD by key, so that each partition contains a sorted range of the elements.
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface org.apache.spark.Logging
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
 

Constructor Detail

OrderedRDDFunctions

public OrderedRDDFunctions(RDD<P> self,
                           scala.math.Ordering<K> evidence$1,
                           scala.reflect.ClassTag<K> evidence$2,
                           scala.reflect.ClassTag<V> evidence$3,
                           scala.reflect.ClassTag<P> evidence$4)
Method Detail

sortByKey

public RDD<scala.Tuple2<K,V>> sortByKey(boolean ascending,
                                        int numPartitions)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
ascending - whether to sort the keys in ascending order
numPartitions - the number of partitions in the resulting RDD
Returns:
an RDD of (key, value) pairs sorted by key
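Example (a minimal sketch, not part of the original documentation; assumes an existing SparkContext named sc and the implicit conversions from org.apache.spark.SparkContext._):

    import org.apache.spark.SparkContext._

    val pairs = sc.parallelize(Seq(("b", 2), ("a", 1), ("c", 3)))

    // Descending sort spread across 2 partitions; collect() then returns the
    // records in key order.
    val sorted = pairs.sortByKey(ascending = false, numPartitions = 2)
    sorted.collect()   // Array((c,3), (b,2), (a,1))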

repartitionAndSortWithinPartitions

public RDD<scala.Tuple2<K,V>> repartitionAndSortWithinPartitions(Partitioner partitioner)
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

Parameters:
partitioner - the partitioner to use when repartitioning the RDD
Returns:
an RDD of (key, value) pairs, partitioned by the given partitioner and sorted by key within each partition
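Example (a minimal sketch, not part of the original documentation; assumes an existing SparkContext named sc):

    import org.apache.spark.HashPartitioner
    import org.apache.spark.SparkContext._

    val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (4, "d"), (2, "b")))

    // A single shuffle that both repartitions by hash of the key and sorts
    // each resulting partition by key.
    val repartitioned = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))
    repartitioned.glom().collect()   // each inner array is sorted by key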

filterByRange

public RDD<P> filterByRange(K lower,
                            K upper)
Returns an RDD containing only the elements in the inclusive range lower to upper. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.

Parameters:
lower - the lower bound of the key range (inclusive)
upper - the upper bound of the key range (inclusive)
Returns:
an RDD containing only the elements whose keys fall within the given range
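Example (a minimal sketch, not part of the original documentation; assumes an existing SparkContext named sc):

    import org.apache.spark.SparkContext._

    // sortByKey range-partitions the RDD, so filterByRange can skip partitions
    // that cannot contain keys between "b" and "d" (inclusive).
    val sorted = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3), ("e", 5))).sortByKey()

    sorted.filterByRange("b", "d").collect()   // Array((b,2), (c,3))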