Class Partitioner
- All Implemented Interfaces:
  Serializable
- Direct Known Subclasses:
  HashPartitioner, RangePartitioner
public abstract class Partitioner extends Object implements Serializable

An object that defines how the elements in a key-value pair RDD are partitioned by key. Maps each key to a partition ID, from 0 to numPartitions - 1.

Note that a partitioner must be deterministic, i.e. it must return the same partition id given the same partition key.
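To make the determinism requirement concrete, here is a minimal sketch of a custom partitioner. The abstract base class below only mirrors the two abstract members documented on this page and is not Spark's actual class; `ModPartitioner` and its floor-mod scheme are illustrative assumptions.

```java
import java.io.Serializable;

public class PartitionerSketch {
    // Stand-in mirroring the two abstract members shown on this page;
    // not Spark's real class.
    static abstract class Partitioner implements Serializable {
        public abstract int numPartitions();
        public abstract int getPartition(Object key);
    }

    // Deterministic: the same key always yields the same partition id
    // in [0, numPartitions - 1].
    static class ModPartitioner extends Partitioner {
        private final int partitions;

        ModPartitioner(int partitions) { this.partitions = partitions; }

        @Override public int numPartitions() { return partitions; }

        @Override public int getPartition(Object key) {
            // floorMod keeps the result non-negative even when
            // key.hashCode() is negative.
            return Math.floorMod(key.hashCode(), partitions);
        }
    }

    public static void main(String[] args) {
        Partitioner p = new ModPartitioner(4);
        // The same key maps to the same partition id on every call.
        System.out.println(p.getPartition("spark") == p.getPartition("spark"));
    }
}
```

A partitioner that consulted mutable state or randomness here would violate the determinism contract, since re-evaluated tasks could route the same key to different partitions.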
Constructor Summary
-
Method Summary
Modifier and TypeMethodDescriptionstatic Partitioner
defaultPartitioner
(RDD<?> rdd, scala.collection.immutable.Seq<RDD<?>> others) Choose a partitioner to use for a cogroup-like operation between a number of RDDs.abstract int
getPartition
(Object key) abstract int
- Constructor Details
  Partitioner
  public Partitioner()

- Method Details
  defaultPartitioner
  public static Partitioner defaultPartitioner(RDD<?> rdd, scala.collection.immutable.Seq<RDD<?>> others)

  Choose a partitioner to use for a cogroup-like operation between a number of RDDs. If spark.default.parallelism is set, we'll use the value of SparkContext.defaultParallelism as the default partitions number; otherwise we'll use the max number of upstream partitions.

  When available, we choose the partitioner from the rdds with the maximum number of partitions. If this partitioner is eligible (its number of partitions is within an order of magnitude of the maximum number of partitions across the rdds), or it has a partition number higher than or equal to the default partitions number, we use this partitioner.

  Otherwise, we'll use a new HashPartitioner with the default partitions number.

  Unless spark.default.parallelism is set, the number of partitions will be the same as the number of partitions in the largest upstream RDD, as this should be least likely to cause out-of-memory errors.

  We use two method parameters (rdd, others) to enforce callers passing at least 1 RDD.
  - Parameters:
    rdd - (undocumented)
    others - (undocumented)
  - Returns:
    (undocumented)
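The selection rule described above can be sketched with Spark's types replaced by plain values. The helper name, its parameters, and the factor-of-ten eligibility test below are assumptions of this sketch, not the library's API: each upstream RDD is reduced to its partition count, and the optional values model a pre-existing partitioner and the spark.default.parallelism setting.

```java
import java.util.Collections;
import java.util.List;
import java.util.OptionalInt;

public class DefaultPartitionerSketch {
    // Hypothetical helper mirroring the documented selection rule.
    static int choosePartitions(List<Integer> upstreamCounts,
                                OptionalInt existingPartitionerSize,
                                OptionalInt defaultParallelism) {
        int maxUpstream = Collections.max(upstreamCounts);
        // Default partitions number: spark.default.parallelism if set,
        // otherwise the max number of upstream partitions.
        int defaultNum = defaultParallelism.orElse(maxUpstream);
        if (existingPartitionerSize.isPresent()) {
            int size = existingPartitionerSize.getAsInt();
            // Eligible: within an order of magnitude of the largest upstream
            // partition count, or at least the default partitions number.
            if (size * 10 >= maxUpstream || size >= defaultNum) {
                return size; // reuse the existing partitioner
            }
        }
        return defaultNum; // otherwise: new HashPartitioner(defaultNum)
    }

    public static void main(String[] args) {
        // An existing 50-partition partitioner is reused against a
        // 100-partition maximum, since 50 * 10 >= 100.
        System.out.println(choosePartitions(List.of(100, 8),
                OptionalInt.of(50), OptionalInt.empty()));
    }
}
```

The eligibility check is what keeps a tiny pre-existing partitioner (say, 2 partitions against a 1000-partition upstream RDD) from collapsing a large cogroup onto too few partitions.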
  numPartitions
  public abstract int numPartitions()

  getPartition
  public abstract int getPartition(Object key)