Class DefaultPartitionCoalescer
- All Implemented Interfaces:
- PartitionCoalescer
Coalesce the partitions of a parent RDD (prev) into fewer partitions, so that each partition of
 this RDD computes one or more of the parent ones. It will produce exactly maxPartitions if the
 parent had more than maxPartitions, or fewer if the parent had fewer.
 This transformation is useful when an RDD with many partitions gets filtered into a smaller one, or to avoid having a large number of small tasks when processing a directory with many files.
If there is no locality information (no preferredLocations) in the parent, the coalescing is very simple: parent partitions that are adjacent in the array are grouped into contiguous chunks. If there is locality information, it proceeds to pack them with the following four goals:
(1) Balance the groups so they roughly have the same number of parent partitions.
(2) Achieve locality per partition, i.e. find one machine which most parent partitions prefer.
(3) Be efficient, i.e. O(n) algorithm for n parent partitions (the problem is likely NP-hard).
(4) Balance preferred machines, i.e. avoid as much as possible picking the same preferred machine.
Furthermore, it is assumed that the parent RDD may have many partitions, e.g. 100 000. We assume the final number of desired partitions is small, e.g. less than 1000.
The algorithm tries to assign a unique preferred machine to each partition. If the number of desired partitions is greater than the number of preferred machines (which can happen), it needs to start picking duplicate preferred machines. This is determined using a coupon-collector estimate (2n log(n)). The load balancing is done using power-of-two-choices randomized balls-into-bins, with one twist: it also tries to achieve locality. This is done by allowing a slack (balanceSlack, where 1.0 is all locality and 0 is all balance) between two bins. If two bins are within the slack in terms of balance, the algorithm will assign partitions according to locality.
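The simple no-locality path described above (grouping adjacent parent partitions into evenly sized chunks) can be sketched in plain, self-contained Java. `ChunkCoalesce` and `coalesceIndices` are illustrative names for this sketch, not part of Spark's API:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkCoalesce {
    // Split parent partition indices [0, numParents) into at most maxPartitions
    // contiguous, evenly sized chunks: a sketch of the no-locality case.
    static List<int[]> coalesceIndices(int numParents, int maxPartitions) {
        int numGroups = Math.min(numParents, maxPartitions);
        List<int[]> groups = new ArrayList<>();
        for (int g = 0; g < numGroups; g++) {
            // Integer arithmetic spreads the remainder across the groups.
            int start = (int) ((long) g * numParents / numGroups);
            int end = (int) ((long) (g + 1) * numParents / numGroups);
            int[] chunk = new int[end - start];
            for (int i = start; i < end; i++) chunk[i - start] = i;
            groups.add(chunk);
        }
        return groups;
    }

    public static void main(String[] args) {
        // 10 parent partitions coalesced into 3 groups of sizes 3, 3, 4.
        for (int[] chunk : coalesceIndices(10, 3)) {
            System.out.println(chunk.length);
        }
    }
}
```

Note that, as in the class description, the result has exactly maxPartitions groups when the parent has more partitions than maxPartitions, and one group per parent partition otherwise.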
Nested Class Summary

Constructor Summary
- DefaultPartitionCoalescer(double balanceSlack)

Method Summary
- boolean addPartToPGroup(Partition part, PartitionGroup pgroup)
- double balanceSlack()
- PartitionGroup[] coalesce(int maxPartitions, RDD<?> prev): Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load balanced and grouped by locality.
- scala.collection.immutable.Seq<String> currPrefLocs(Partition part, RDD<?> prev)
- scala.Option<PartitionGroup> getLeastGroupHash(String key): Gets the least loaded of all groups in groupHash that represent the machine "key".
- getPartitions()
- scala.collection.mutable.ArrayBuffer<PartitionGroup> groupArr()
- scala.collection.mutable.Map<String,scala.collection.mutable.ArrayBuffer<PartitionGroup>> groupHash()
- scala.collection.mutable.Set<Partition> initialHash()
- boolean noLocality()
- partitionGroupOrdering(): Accessor for nested Scala object.
- PartitionGroup pickBin(Partition p, RDD<?> prev, double balanceSlack, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs): Takes a parent RDD partition and decides which partition group to put it in, taking locality into account but also using power-of-two choices to load balance.
- scala.util.Random rnd()
- void setupGroups(int targetLen, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs): Initializes targetLen partition groups.
- void throwBalls(int maxPartitions, RDD<?> prev, double balanceSlack, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
- 
Constructor Details
- DefaultPartitionCoalescer
public DefaultPartitionCoalescer(double balanceSlack)
 
- 
- 
Method Details
- partitionGroupOrdering
Accessor for nested Scala object
- Returns:
- (undocumented)
 
- 
balanceSlack
public double balanceSlack()
- 
rnd
public scala.util.Random rnd()
- 
groupArr
public scala.collection.mutable.ArrayBuffer<PartitionGroup> groupArr()
- 
groupHash
public scala.collection.mutable.Map<String,scala.collection.mutable.ArrayBuffer<PartitionGroup>> groupHash()
- 
initialHash
public scala.collection.mutable.Set<Partition> initialHash()
- 
noLocality
public boolean noLocality()
- 
currPrefLocs
public scala.collection.immutable.Seq<String> currPrefLocs(Partition part, RDD<?> prev)
- 
getLeastGroupHash
public scala.Option<PartitionGroup> getLeastGroupHash(String key)
Gets the least element of the list associated with key in groupHash. The returned PartitionGroup is the least loaded of all groups that represent the machine "key".
- Parameters:
- key - string key identifying the preferred machine whose partition groups are searched
- Returns:
- Option of the PartitionGroup that has the fewest elements for key
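A minimal self-contained sketch of this lookup in plain Java, with a toy `Group` record standing in for PartitionGroup and `leastGroup` as an illustrative name (this is not Spark's code):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class LeastGroup {
    // Toy stand-in for PartitionGroup: tracks which host it represents
    // and how many partitions it currently holds.
    record Group(String host, int numPartitions) {}

    // Among all groups registered for `key`, return the one currently
    // holding the fewest partitions, if any exist for that key.
    static Optional<Group> leastGroup(Map<String, List<Group>> groupHash, String key) {
        List<Group> groups = groupHash.get(key);
        if (groups == null || groups.isEmpty()) return Optional.empty();
        return groups.stream().min(Comparator.comparingInt(Group::numPartitions));
    }

    public static void main(String[] args) {
        Map<String, List<Group>> groupHash = Map.of(
            "host-a", List.of(new Group("host-a", 5), new Group("host-a", 2)));
        System.out.println(leastGroup(groupHash, "host-a").get().numPartitions()); // 2
        System.out.println(leastGroup(groupHash, "host-b").isPresent());           // false
    }
}
```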
 
- 
addPartToPGroup
public boolean addPartToPGroup(Partition part, PartitionGroup pgroup)
- 
setupGroups
public void setupGroups(int targetLen, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
Initializes targetLen partition groups. If there are preferred locations, each group is assigned a preferredLocation. This uses a coupon-collector estimate of how many preferredLocations it must rotate through until it has seen most of the preferred locations (2 * n log(n)).
- Parameters:
- targetLen - the number of desired partition groups
- partitionLocs - (undocumented)
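The 2 * n log(n) coupon-collector bound mentioned above can be computed directly. This standalone sketch (with the hypothetical name `rotationBound`) evaluates the estimate for a few machine counts:

```java
public class CouponCollector {
    // Coupon-collector estimate used as a rotation bound: after roughly
    // 2 * n * ln(n) random draws from n distinct preferred locations,
    // most locations have been seen at least once.
    static int rotationBound(int numMachines) {
        return (int) Math.ceil(2.0 * numMachines * Math.log(numMachines));
    }

    public static void main(String[] args) {
        System.out.println(rotationBound(10));   // 2 * 10 * ln(10)  -> 47
        System.out.println(rotationBound(100));  // 2 * 100 * ln(100) -> 922
    }
}
```

The bound grows only slightly faster than linearly, which is why the class description can still claim an overall O(n)-style cost for n parent partitions.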
 
- 
pickBin
public PartitionGroup pickBin(Partition p, RDD<?> prev, double balanceSlack, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
Takes a parent RDD partition and decides which of the partition groups to put it in. Takes locality into account, but also uses power-of-two choices to load balance. It strikes a balance between the two using the balanceSlack variable.
- Parameters:
- p - partition (ball to be thrown)
- balanceSlack - determines the trade-off between load-balancing the partition sizes and their locality; e.g., balanceSlack=0.10 allows up to 10% imbalance in favor of locality
- prev - (undocumented)
- partitionLocs - (undocumented)
- Returns:
- partition group (bin to be put in)
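The balancing rule can be sketched in plain Java. This is an illustration only, written under the assumption that balanceSlack has already been scaled to an absolute partition count (`slack` below); it is not Spark's actual implementation:

```java
import java.util.Optional;
import java.util.Random;

public class PickBinSketch {
    // Power-of-two choices with a locality slack: sample two bins at random
    // and take the smaller, but fall back to the locality-preferred bin
    // unless that bin is more than `slack` partitions fuller than the
    // smaller sampled bin.
    static int pickBin(int[] binSizes, Optional<Integer> preferredBin,
                       int slack, Random rnd) {
        int a = rnd.nextInt(binSizes.length);
        int b = rnd.nextInt(binSizes.length);
        int minPowerOfTwo = binSizes[a] <= binSizes[b] ? a : b;
        if (preferredBin.isEmpty()) {
            return minPowerOfTwo; // no locality info: pure load balancing
        }
        int pref = preferredBin.get();
        // Preferred bin wins unless it is out of balance by more than `slack`.
        return binSizes[minPowerOfTwo] + slack <= binSizes[pref]
                ? minPowerOfTwo
                : pref;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // All bins equally loaded and within the slack: locality wins,
        // so the preferred bin 0 is chosen.
        System.out.println(pickBin(new int[]{5, 5, 5}, Optional.of(0), 1, rnd));
    }
}
```

With slack 0 the tie goes to balance (the randomly sampled smaller bin); a larger slack lets increasingly imbalanced preferred bins win, matching the "1.0 is all locality, 0 is all balance" description above.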
 
- 
throwBalls
public void throwBalls(int maxPartitions, RDD<?> prev, double balanceSlack, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
- 
getPartitions
- 
coalesce
public PartitionGroup[] coalesce(int maxPartitions, RDD<?> prev)
Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load balanced and grouped by locality.
- Specified by:
- coalesce in interface PartitionCoalescer
- Parameters:
- maxPartitions - (undocumented)
- prev - (undocumented)
- Returns:
- array of partition groups
 
 