class OrderedRDDFunctions[K, V, P <: Product2[K, V]] extends Logging with Serializable
Extra functions available on RDDs of (key, value) pairs where the key is sortable through an implicit conversion. They will work with any key type K that has an implicit Ordering[K] in scope. Ordering objects already exist for all of the standard primitive types. Users can also define their own orderings for custom types, or to override the default ordering. The implicit ordering that is in the closest scope will be used. For example:
import java.util.Locale
import org.apache.spark.SparkContext._

val rdd: RDD[(String, Int)] = ...
implicit val caseInsensitiveOrdering = new Ordering[String] {
  override def compare(a: String, b: String) =
    a.toLowerCase(Locale.ROOT).compare(b.toLowerCase(Locale.ROOT))
}

// Sort by key, using the above case insensitive ordering.
rdd.sortByKey()
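For a key type of your own, the same mechanism applies: bring an Ordering for that type into implicit scope before the sort. A minimal sketch, assuming a hypothetical Temperature key type (not part of Spark):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

case class Temperature(celsius: Double)

// Hypothetical custom key type: order Temperature keys by their celsius value.
implicit val temperatureOrdering: Ordering[Temperature] = Ordering.by(_.celsius)

val sc = new SparkContext(new SparkConf().setAppName("custom-ordering-sketch").setMaster("local[*]"))

val readings: RDD[(Temperature, String)] = sc.parallelize(Seq(
  (Temperature(21.5), "office"),
  (Temperature(-3.0), "freezer"),
  (Temperature(36.6), "body")
))

// Sorted by the implicit temperatureOrdering: freezer, office, body.
readings.sortByKey().collect().foreach(println)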
Value Members
- final def !=(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def ##(): Int
  Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  Definition Classes: Any
- def clone(): AnyRef
  Attributes: protected[lang]; Definition Classes: AnyRef; Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- def filterByRange(lower: K, upper: K): RDD[P]
  Returns an RDD containing only the elements in the inclusive range lower to upper. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.
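  A minimal sketch of both code paths (the sample data and the "b" to "d" bounds are illustrative): sorting first installs a RangePartitioner, so the subsequent filterByRange only scans partitions whose key range can overlap the bounds; on an unsorted RDD it falls back to a plain filter.

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.rdd.RDD

  val sc = new SparkContext(new SparkConf().setAppName("filterByRange-sketch").setMaster("local[*]"))

  val pairs: RDD[(String, Int)] = sc.parallelize(Seq(
    ("apple", 1), ("banana", 2), ("cherry", 3), ("date", 4)
  ))

  // No partitioner yet: filterByRange applies a standard filter to every partition.
  val viaFilter = pairs.filterByRange("b", "d")

  // sortByKey range-partitions the data, so this filterByRange can skip
  // partitions that cannot contain keys between "b" and "d".
  val viaRangeScan = pairs.sortByKey().filterByRange("b", "d")

  // Both keep ("banana", 2) and ("cherry", 3).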
- def finalize(): Unit
  Attributes: protected[lang]; Definition Classes: AnyRef; Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  Definition Classes: AnyRef → Any; Annotations: @native()
- def hashCode(): Int
  Definition Classes: AnyRef → Any; Annotations: @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
  Attributes: protected; Definition Classes: Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
  Attributes: protected; Definition Classes: Logging
- final def isInstanceOf[T0]: Boolean
  Definition Classes: Any
- def isTraceEnabled(): Boolean
  Attributes: protected; Definition Classes: Logging
- def log: Logger
  Attributes: protected; Definition Classes: Logging
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected; Definition Classes: Logging
- def logDebug(msg: ⇒ String): Unit
  Attributes: protected; Definition Classes: Logging
- def logError(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected; Definition Classes: Logging
- def logError(msg: ⇒ String): Unit
  Attributes: protected; Definition Classes: Logging
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected; Definition Classes: Logging
- def logInfo(msg: ⇒ String): Unit
  Attributes: protected; Definition Classes: Logging
- def logName: String
  Attributes: protected; Definition Classes: Logging
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected; Definition Classes: Logging
- def logTrace(msg: ⇒ String): Unit
  Attributes: protected; Definition Classes: Logging
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected; Definition Classes: Logging
- def logWarning(msg: ⇒ String): Unit
  Attributes: protected; Definition Classes: Logging
- final def ne(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- final def notify(): Unit
  Definition Classes: AnyRef; Annotations: @native()
- final def notifyAll(): Unit
  Definition Classes: AnyRef; Annotations: @native()
- def repartitionAndSortWithinPartitions(partitioner: Partitioner): RDD[(K, V)]
  Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.
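  A minimal sketch (the HashPartitioner and the sample data are illustrative): the records are shuffled into the requested partitions and arrive already sorted by key within each one, with the sort done inside the shuffle rather than as a separate pass.

  import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
  import org.apache.spark.rdd.RDD

  val sc = new SparkContext(new SparkConf().setAppName("repartitionAndSort-sketch").setMaster("local[*]"))

  val pairs: RDD[(Int, String)] = sc.parallelize(Seq(
    (5, "e"), (1, "a"), (4, "d"), (2, "b"), (3, "c")
  ))

  // Shuffle into 2 partitions by key hash; keys come out sorted within each partition.
  val partitionedAndSorted = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))

  // Inspect per-partition contents to see the within-partition ordering.
  partitionedAndSorted.glom().collect().foreach(p => println(p.mkString(", ")))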
- def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length): RDD[(K, V)]
  Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).
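  A minimal sketch of the optional arguments (the data and output path are illustrative): ascending = false reverses the order, and numPartitions controls how many sorted ranges, and hence how many part-X files on save, are produced.

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.rdd.RDD

  val sc = new SparkContext(new SparkConf().setAppName("sortByKey-sketch").setMaster("local[*]"))

  val scores: RDD[(String, Int)] = sc.parallelize(Seq(
    ("carol", 92), ("alice", 85), ("bob", 77)
  ))

  // Descending by key, split into 2 range partitions.
  val descending = scores.sortByKey(ascending = false, numPartitions = 2)

  // collect() returns records in key order: carol, bob, alice.
  println(descending.collect().mkString(", "))

  // Saving would instead write the sorted records to part-X files, in key order:
  // descending.saveAsTextFile("/tmp/sorted-scores")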
- final def synchronized[T0](arg0: ⇒ T0): T0
  Definition Classes: AnyRef
- def toString(): String
  Definition Classes: AnyRef → Any
- final def wait(): Unit
  Definition Classes: AnyRef; Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  Definition Classes: AnyRef; Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  Definition Classes: AnyRef; Annotations: @throws( ... ) @native()