Class DefaultPartitionCoalescer

Object
org.apache.spark.rdd.DefaultPartitionCoalescer
All Implemented Interfaces:
PartitionCoalescer

public class DefaultPartitionCoalescer extends Object implements PartitionCoalescer
Coalesce the partitions of a parent RDD (prev) into fewer partitions, so that each partition of this RDD computes one or more of the parent ones. It will produce exactly maxPartitions if the parent had more than maxPartitions, or fewer if the parent had fewer.

This transformation is useful when an RDD with many partitions gets filtered into a smaller one, or to avoid having a large number of small tasks when processing a directory with many files.

If there is no locality information (no preferredLocations) in the parent, then the coalescing is very simple: group parent partitions that are adjacent in the partitions array into contiguous chunks. If there is locality information, it proceeds to pack them with the following four goals:
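The no-locality case can be sketched as plain contiguous chunking. The class and method names below are hypothetical, not part of the Spark API; the sketch only illustrates splitting parent partition indices into evenly sized contiguous groups.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the no-locality case: split parent partition
// indices [0, numParents) into `target` contiguous chunks, as evenly
// as the division allows.
public class ContiguousChunks {
    static List<List<Integer>> chunk(int numParents, int target) {
        List<List<Integer>> groups = new ArrayList<>();
        for (int g = 0; g < target; g++) {
            // Group g covers the contiguous index range
            // [g * numParents / target, (g + 1) * numParents / target).
            int start = (int) ((long) g * numParents / target);
            int end = (int) ((long) (g + 1) * numParents / target);
            List<Integer> group = new ArrayList<>();
            for (int i = start; i < end; i++) group.add(i);
            groups.add(group);
        }
        return groups;
    }

    public static void main(String[] args) {
        // 10 parent partitions coalesced into 3 groups of sizes 3, 3, 4.
        System.out.println(chunk(10, 3)); // [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]
    }
}
```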

(1) Balance the groups so they roughly have the same number of parent partitions
(2) Achieve locality per partition, i.e. find one machine which most parent partitions prefer
(3) Be efficient, i.e. O(n) algorithm for n parent partitions (problem is likely NP-hard)
(4) Balance preferred machines, i.e. avoid as much as possible picking the same preferred machine for multiple groups

Furthermore, it is assumed that the parent RDD may have many partitions, e.g., 100,000, while the desired final number of partitions is small, e.g., less than 1,000.

The algorithm tries to assign a unique preferred machine to each partition group. If the number of desired groups is greater than the number of preferred machines (which can happen), it needs to start picking duplicate preferred machines. How long to search for distinct machines is determined using a coupon-collector estimate (2n log(n)). The load balancing is done using power-of-two-choices randomized balls-into-bins assignment, with one twist: it also tries to achieve locality. This is done by allowing a slack (balanceSlack, where 1.0 is all locality and 0 is all balance) between two bins. If two bins are within the slack in terms of balance, the algorithm assigns the partition according to locality.

  • Constructor Details

    • DefaultPartitionCoalescer

      public DefaultPartitionCoalescer(double balanceSlack)
  • Method Details

    • partitionGroupOrdering

      public DefaultPartitionCoalescer.partitionGroupOrdering$ partitionGroupOrdering()
      Accessor for nested Scala object
      Returns:
      (undocumented)
    • balanceSlack

      public double balanceSlack()
    • rnd

      public scala.util.Random rnd()
    • groupArr

      public scala.collection.mutable.ArrayBuffer<PartitionGroup> groupArr()
    • groupHash

      public scala.collection.mutable.Map<String,scala.collection.mutable.ArrayBuffer<PartitionGroup>> groupHash()
    • initialHash

      public scala.collection.mutable.Set<Partition> initialHash()
    • noLocality

      public boolean noLocality()
    • currPrefLocs

      public scala.collection.Seq<String> currPrefLocs(Partition part, RDD<?> prev)
    • getLeastGroupHash

      public scala.Option<PartitionGroup> getLeastGroupHash(String key)
Gets the least element of the list associated with key in groupHash. The returned PartitionGroup is the least loaded of all groups that represent the machine "key".

      Parameters:
      key - string representing a partitioned group on preferred machine key
      Returns:
      Option of PartitionGroup that has least elements for key
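The lookup can be sketched as a least-loaded selection over the groups anchored on one machine. The model below is hypothetical (a group is represented simply as its list of assigned partition ids), not the actual PartitionGroup type.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical model of groupHash: each preferred machine maps to the
// groups anchored there; the lookup returns the least-loaded group, if any.
public class LeastGroup {
    static Optional<List<Integer>> getLeastGroup(
            Map<String, List<List<Integer>>> groupHash, String key) {
        List<List<Integer>> groups = groupHash.get(key);
        if (groups == null || groups.isEmpty()) return Optional.empty();
        // Least loaded = fewest partitions assigned so far.
        return Collections.min(groups,
                (a, b) -> Integer.compare(a.size(), b.size())) instanceof List<Integer> least
                ? Optional.of(least) : Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, List<List<Integer>>> groupHash = new HashMap<>();
        groupHash.put("host-a", List.of(List.of(1, 2, 3), List.of(4)));
        System.out.println(getLeastGroup(groupHash, "host-a")); // Optional[[4]]
        System.out.println(getLeastGroup(groupHash, "host-b")); // Optional.empty
    }
}
```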
    • addPartToPGroup

      public boolean addPartToPGroup(Partition part, PartitionGroup pgroup)
    • setupGroups

      public void setupGroups(int targetLen, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
Initializes targetLen partition groups. If there are preferred locations, each group is assigned a preferredLocation. This uses a coupon-collector estimate to bound how many preferred locations it must rotate through until it has seen most of them (2 * n log(n)).
      Parameters:
      targetLen - The number of desired partition groups
      partitionLocs - (undocumented)
    • pickBin

      public PartitionGroup pickBin(Partition p, RDD<?> prev, double balanceSlack, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
Takes a parent RDD partition and decides which partition group to put it in. It takes locality into account, but also uses the power-of-two-choices technique to load balance, striking a balance between the two via the balanceSlack parameter.
      Parameters:
      p - partition (ball to be thrown)
      balanceSlack - determines the trade-off between load-balancing the partitions sizes and their locality. e.g., balanceSlack=0.10 means that it allows up to 10% imbalance in favor of locality
      prev - (undocumented)
      partitionLocs - (undocumented)
      Returns:
      partition group (bin to be put in)
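The trade-off pickBin makes can be sketched as follows. This is a hypothetical simplification, not the actual implementation: it compares the least-loaded group overall (the power-of-two choice) with the group preferred by locality, and lets locality win whenever the preferred group's load exceeds the minimum by no more than balanceSlack * totalPartitions.

```java
// Hypothetical sketch of the pickBin trade-off: choose between the
// least-loaded group and the locality-preferred group, allowing an
// imbalance of up to balanceSlack * totalPartitions in favor of locality.
public class PickBinSketch {
    static int pickBin(int[] groupSizes, int prefGroup,
                       double balanceSlack, int totalPartitions) {
        int minGroup = 0;
        for (int g = 1; g < groupSizes.length; g++) {
            if (groupSizes[g] < groupSizes[minGroup]) minGroup = g;
        }
        double slack = balanceSlack * totalPartitions;
        // Within the slack, locality wins; otherwise balance wins.
        if (groupSizes[prefGroup] <= groupSizes[minGroup] + slack) {
            return prefGroup;
        }
        return minGroup;
    }

    public static void main(String[] args) {
        int[] sizes = {5, 2, 4};
        // balanceSlack = 0.10 over 40 partitions allows an imbalance of 4:
        // the preferred group 0 (size 5) is within 2 + 4, so locality wins.
        System.out.println(pickBin(sizes, 0, 0.10, 40)); // 0
        // With no slack, the least-loaded group 1 wins.
        System.out.println(pickBin(sizes, 0, 0.0, 40)); // 1
    }
}
```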
    • throwBalls

      public void throwBalls(int maxPartitions, RDD<?> prev, double balanceSlack, org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations partitionLocs)
    • getPartitions

      public PartitionGroup[] getPartitions()
    • coalesce

      public PartitionGroup[] coalesce(int maxPartitions, RDD<?> prev)
Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load balanced and grouped by locality.

      Specified by:
      coalesce in interface PartitionCoalescer
      Parameters:
      maxPartitions - (undocumented)
      prev - (undocumented)
      Returns:
      array of partition groups