Package org.apache.spark.rdd
Class OrderedRDDFunctions<K,V,P extends scala.Product2<K,V>>   
Object
org.apache.spark.rdd.OrderedRDDFunctions<K,V,P>  
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging
public class OrderedRDDFunctions<K,V,P extends scala.Product2<K,V>>   
extends Object
implements org.apache.spark.internal.Logging, Serializable
Extra functions available on RDDs of (key, value) pairs where the key is sortable through an implicit conversion. They will work with any key type K that has an implicit Ordering[K] in scope. Ordering objects already exist for all of the standard primitive types. Users can also define their own orderings for custom types, or to override the default ordering. The implicit ordering that is in the closest scope will be used. For example:
 
   import java.util.Locale

   import org.apache.spark.SparkContext._

   val rdd: RDD[(String, Int)] = ...
   implicit val caseInsensitiveOrdering: Ordering[String] = new Ordering[String] {
     override def compare(a: String, b: String): Int =
       a.toLowerCase(Locale.ROOT).compare(b.toLowerCase(Locale.ROOT))
   }

   // Sort by key, using the above case-insensitive ordering.
   rdd.sortByKey()
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Constructor Summary

Constructors:
OrderedRDDFunctions
Method Summary

filterByRange(K lower, K upper)
Returns an RDD containing only the elements in the inclusive range lower to upper.

repartitionAndSortWithinPartitions(Partitioner partitioner)
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

sortByKey(boolean ascending, int numPartitions)
Sort the RDD by key, so that each partition contains a sorted range of the elements.

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logError, logInfo, logName, LogStringContext, logTrace, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
- 
Constructor Details

OrderedRDDFunctions
Method Details

filterByRange(K lower, K upper)

Returns an RDD containing only the elements in the inclusive range lower to upper. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.

Parameters:
lower - the lower bound of the key range (inclusive)
upper - the upper bound of the key range (inclusive)
Returns:
an RDD containing only the elements whose keys lie within [lower, upper]
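A minimal usage sketch, assuming an existing SparkContext named sc; the sample data and partition count below are illustrative, not part of this API:

   import org.apache.spark.RangePartitioner
   import org.apache.spark.rdd.RDD

   val pairs: RDD[(Int, String)] =
     sc.parallelize(Seq(5 -> "e", 1 -> "a", 3 -> "c", 9 -> "i", 7 -> "g"))

   // Range-partition first so filterByRange can skip whole partitions.
   val byRange = pairs.partitionBy(new RangePartitioner(4, pairs))

   // Keeps only the pairs whose keys fall in the inclusive range [3, 7];
   // with a RangePartitioner in place, only candidate partitions are scanned.
   val slice = byRange.filterByRange(3, 7)
   slice.collect()   // elements with keys 3, 5, 7 (order depends on partitioning)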
 
repartitionAndSortWithinPartitions(Partitioner partitioner)

Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

Parameters:
partitioner - the partitioner that assigns each record to its target partition
Returns:
an RDD partitioned by the given partitioner, with records sorted by key within each partition
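A minimal sketch of the single-shuffle behavior, again assuming an existing SparkContext named sc; the data and choice of partitioner are illustrative:

   import org.apache.spark.HashPartitioner
   import org.apache.spark.rdd.RDD

   val logs: RDD[(String, Long)] = sc.parallelize(Seq(
     ("host-b", 17L), ("host-a", 3L), ("host-b", 2L), ("host-a", 9L)))

   // One shuffle both repartitions and sorts records by key inside each
   // partition; cheaper than partitionBy(...) followed by a per-partition sort.
   val shuffled = logs.repartitionAndSortWithinPartitions(new HashPartitioner(2))

   // Keys are sorted within each partition; there is no global order.
   shuffled.mapPartitionsWithIndex { (idx, iter) =>
     iter.map(kv => s"partition $idx: $kv")
   }.collect().foreach(println)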
 
sortByKey(boolean ascending, int numPartitions)

Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
ascending - true to sort keys in ascending order, false for descending
numPartitions - the number of partitions in the resulting RDD
Returns:
an RDD sorted by key
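A minimal usage sketch, assuming an existing SparkContext named sc; the data, partition count, and output path are illustrative:

   import org.apache.spark.rdd.RDD

   val scores: RDD[(String, Int)] =
     sc.parallelize(Seq(("carol", 88), ("alice", 95), ("bob", 72)))

   // Descending sort into 2 partitions; collect() yields a globally ordered array.
   val sorted = scores.sortByKey(ascending = false, numPartitions = 2)
   sorted.collect()   // Array((carol,88), (bob,72), (alice,95))

   // saveAsTextFile writes one part-XXXXX file per partition; taken in file
   // order, the records appear sorted by key.
   sorted.saveAsTextFile("/tmp/scores-sorted")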
 
 