For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
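A minimal sketch of the two-RDD form, assuming an existing SparkContext named sc and the implicit conversions from spark.SparkContext._ (the data is illustrative):

    import spark.SparkContext._

    val ages   = sc.parallelize(Seq(("alice", 30), ("bob", 25)))
    val cities = sc.parallelize(Seq(("alice", "NYC"), ("carol", "SF")))

    // For every key present in either RDD, cogroup pairs the full list of
    // values from each side; an empty Seq marks a key absent from one side.
    val grouped = ages.cogroup(cities)
    // ("alice", (Seq(30), Seq("NYC")))
    // ("bob",   (Seq(25), Seq()))
    // ("carol", (Seq(),   Seq("SF")))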
Return the key-value pairs in this RDD to the master as a Map.
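For example, under the same assumptions:

    val m = sc.parallelize(Seq(("a", 1), ("b", 2))).collectAsMap()
    // Map("a" -> 1, "b" -> 2), materialized on the master -- the whole
    // RDD must fit in the master's memory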
Simplified version of combineByKey that hash-partitions the resulting RDD using the default parallelism level.
Simplified version of combineByKey that hash-partitions the output RDD.
Generic function to combine the elements for each key using a custom set of aggregation functions.
Generic function to combine the elements for each key using a custom set of aggregation functions. Turns an RDD[(K, V)] into a result of type RDD[(K, C)], for a "combined type" C. Note that V and C can be different -- for example, one might group an RDD of type (Int, Int) into an RDD of type (Int, Seq[Int]). Users provide three functions:
- createCombiner, which turns a V into a C (e.g., creates a one-element list)
- mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
- mergeCombiners, to combine two C's into a single one.
In addition, users can control the partitioning of the output RDD, and whether to perform map-side aggregation (if a mapper can produce multiple items with the same key).
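As a sketch of the three functions working together, here is a per-key average with combined type C = (sum, count); names and data are illustrative:

    val scores = sc.parallelize(Seq(("a", 1), ("a", 3), ("b", 4)))

    val sumCounts = scores.combineByKey(
      (v: Int) => (v, 1),                                           // createCombiner: V => C
      (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue: (C, V) => C
      (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)  // mergeCombiners: (C, C) => C
    )
    val averages = sumCounts.mapValues { case (sum, n) => sum.toDouble / n }
    // ("a", 2.0), ("b", 4.0)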
Count the number of elements for each key, and return the result to the master as a Map.
(Experimental) Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
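A one-line illustration of countByKey, under the same assumptions:

    sc.parallelize(Seq(("a", 1), ("a", 5), ("b", 2))).countByKey()
    // Map("a" -> 2, "b" -> 1) -- occurrences per key, returned to the master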
Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
Choose a partitioner to use for a cogroup-like operation between a number of RDDs. If any of the RDDs already has a partitioner, choose that one, otherwise use a default HashPartitioner.
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
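For example:

    val lines = sc.parallelize(Seq((1, "hello world"), (2, "spark")))
    // Each value may expand to zero or more values; keys and the
    // original partitioning are preserved.
    val words = lines.flatMapValues(_.split(" "))
    // (1, "hello"), (1, "world"), (2, "spark")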
Group the values for each key in the RDD into a single sequence.
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with the default parallelism level.
Group the values for each key in the RDD into a single sequence.
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD into numSplits partitions.
Group the values for each key in the RDD into a single sequence.
Group the values for each key in the RDD into a single sequence. Allows controlling the partitioning of the resulting key-value pair RDD by passing a Partitioner.
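The three overloads side by side, as a sketch (spark.HashPartitioner is the partitioner the hash-partitioning overloads use implicitly):

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

    pairs.groupByKey()                              // default parallelism level
    pairs.groupByKey(4)                             // numSplits = 4 partitions
    pairs.groupByKey(new spark.HashPartitioner(4))  // explicit Partitioner
    // each yields ("a", Seq(1, 3)), ("b", Seq(2))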
Alias for cogroup.
Alias for cogroup.
Return an RDD containing all pairs of elements with matching keys in this and other.
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.
Return an RDD containing all pairs of elements with matching keys in this and other.
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.
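A small example of the inner-join semantics:

    val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
    val right = sc.parallelize(Seq(("a", "x"), ("a", "y")))

    val joined = left.join(right)
    // ("a", (1, "x")), ("a", (1, "y")) -- "b" has no match and is dropped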
Merge the values for each key using an associative reduce function.
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
Perform a left outer join of this and other.
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output into numSplits partitions.
Perform a left outer join of this and other.
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output using the default level of parallelism.
Perform a left outer join of this and other.
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Uses the given Partitioner to partition the output RDD.
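The Some/None behavior in miniature:

    val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
    val right = sc.parallelize(Seq(("a", "x")))

    val result = left.leftOuterJoin(right)
    // ("a", (1, Some("x")))
    // ("b", (2, None)) -- left-side keys survive even without a match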
Return the list of values in the RDD for the given key.
Return the list of values in the RDD for the given key. This operation is done efficiently if the RDD has a known partitioner, by only searching the partition that the key maps to.
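For example:

    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    pairs.lookup("a")  // Seq(1, 2), returned to the calling program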
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
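For example:

    val prices = sc.parallelize(Seq(("apple", 2.0), ("pear", 3.0)))
    val taxed  = prices.mapValues(_ * 1.1)  // keys and partitioning preserved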
Return a copy of the RDD partitioned using the specified partitioner.
Return a copy of the RDD partitioned using the specified partitioner. If mapSideCombine is true, Spark will group values of the same key together on the map side before the repartitioning, so that each key is sent over the network only once. If a large number of duplicated keys are expected and the size of the keys is large, mapSideCombine should be set to true.
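A sketch, assuming the two-argument overload with the mapSideCombine flag described above:

    import spark.HashPartitioner

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

    // Hash keys into 8 partitions; the second argument turns on map-side
    // grouping so each key crosses the network at most once.
    val repartitioned = pairs.partitionBy(new HashPartitioner(8), true)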
Merge the values for each key using an associative reduce function.
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with the default parallelism level.
Merge the values for each key using an associative reduce function.
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with numSplits splits.
Merge the values for each key using an associative reduce function.
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
Merge the values for each key using an associative reduce function, but return the results immediately to the master as a Map.
Merge the values for each key using an associative reduce function, but return the results immediately to the master as a Map. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
Alias for reduceByKeyLocally.
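As an illustration of the reduceByKey family (same assumptions as the earlier sketches):

    val words  = sc.parallelize(Seq("a", "b", "a", "a"))
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
    // ("a", 3), ("b", 1) -- partial sums are merged map-side before the shuffle

    // Same merge, but the result comes straight back to the master as a Map:
    val local = words.map(w => (w, 1)).reduceByKeyLocally(_ + _)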
Perform a right outer join of this and other.
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.
Perform a right outer join of this and other.
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD using the default parallelism level.
Perform a right outer join of this and other.
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system.
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system. The JobConf should set an OutputFormat and any output paths required (e.g. a table name to write to) in the same way as it would be configured for a Hadoop MapReduce job.
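A sketch of such a JobConf, using the old-API (org.apache.hadoop.mapred) TextOutputFormat; the output path is illustrative:

    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapred.{FileOutputFormat, JobConf, TextOutputFormat}

    val conf = new JobConf()
    conf.setOutputKeyClass(classOf[Text])
    conf.setOutputValueClass(classOf[IntWritable])
    conf.setOutputFormat(classOf[TextOutputFormat[Text, IntWritable]])
    FileOutputFormat.setOutputPath(conf, new Path("hdfs:///tmp/counts"))

    sc.parallelize(Seq(("a", 1), ("b", 2)))
      .map { case (k, v) => (new Text(k), new IntWritable(v)) }
      .saveAsHadoopDataset(conf)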
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat class supporting the key and value types K and V in this RDD.
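The explicit-class overload, sketched with the old-API TextOutputFormat:

    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapred.TextOutputFormat

    sc.parallelize(Seq(("a", 1), ("b", 2)))
      .map { case (k, v) => (new Text(k), new IntWritable(v)) }
      .saveAsHadoopFile("hdfs:///tmp/counts",
        classOf[Text], classOf[IntWritable],
        classOf[TextOutputFormat[Text, IntWritable]])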
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat (mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
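The same write with the new-API (org.apache.hadoop.mapreduce) TextOutputFormat, as a sketch:

    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

    sc.parallelize(Seq(("a", 1), ("b", 2)))
      .map { case (k, v) => (new Text(k), new IntWritable(v)) }
      .saveAsNewAPIHadoopFile("hdfs:///tmp/counts-new",
        classOf[Text], classOf[IntWritable],
        classOf[TextOutputFormat[Text, IntWritable]])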
Extra functions available on RDDs of (key, value) pairs through an implicit conversion. Import spark.SparkContext._ at the top of your program to use these functions.
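A minimal program skeleton showing the import (master URL and job name are illustrative):

    import spark.SparkContext
    import spark.SparkContext._  // brings these pair-RDD functions into scope

    val sc    = new SparkContext("local", "example")
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    pairs.reduceByKey(_ + _)     // compiles only with the implicit conversion imported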