Package org.apache.spark.graphx.impl
Class EdgeRDDImpl<ED,VD>
Object
org.apache.spark.rdd.RDD<Edge<ED>>
org.apache.spark.graphx.EdgeRDD<ED>
org.apache.spark.graphx.impl.EdgeRDDImpl<ED,VD>
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Method Summary
- EdgeRDDImpl<ED,VD> cache() : Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
- void checkpoint() : Mark this RDD for checkpointing.
- Edge<ED>[] collect() : Return an array that contains all of the elements in this RDD.
- long count() : The number of edges in the RDD.
- EdgeRDDImpl<ED,VD> filter(scala.Function1<EdgeTriplet<VD, ED>, Object> epred, scala.Function2<Object, VD, Object> vpred)
- scala.Option<String> getCheckpointFile() : Gets the name of the directory to which this RDD was checkpointed.
- StorageLevel getStorageLevel() : Get the RDD's current storage level, or StorageLevel.NONE if none is set.
- <ED2,ED3> EdgeRDDImpl<ED3, VD> innerJoin(EdgeRDD<ED2> other, scala.Function4<Object, Object, ED, ED2, ED3> f, scala.reflect.ClassTag<ED2> evidence$4, scala.reflect.ClassTag<ED3> evidence$5) : Inner joins this EdgeRDD with another EdgeRDD, assuming both are partitioned using the same PartitionStrategy.
- boolean isCheckpointed() : Return whether this RDD is checkpointed and materialized, either reliably or locally.
- <ED2,VD2> EdgeRDDImpl<ED2, VD2> mapEdgePartitions(scala.Function2<Object, org.apache.spark.graphx.impl.EdgePartition<ED, VD>, org.apache.spark.graphx.impl.EdgePartition<ED2, VD2>> f, scala.reflect.ClassTag<ED2> evidence$6, scala.reflect.ClassTag<VD2> evidence$7)
- <ED2> EdgeRDDImpl<ED2,VD> mapValues(scala.Function1<Edge<ED>, ED2> f, scala.reflect.ClassTag<ED2> evidence$3) : Map the values in each edge partition, preserving the structure but changing the values.
- scala.Option<Partitioner> partitioner() : If partitionsRDD already has a partitioner, use it.
- EdgeRDDImpl<ED,VD> persist(StorageLevel newLevel) : Persists the edge partitions at the specified storage level, ignoring any existing target storage level.
- EdgeRDDImpl<ED,VD> reverse() : Reverse all the edges in this RDD.
- EdgeRDDImpl<ED,VD> setName(String _name) : Assign a name to this RDD.
- EdgeRDDImpl<ED,VD> unpersist(boolean blocking) : Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
Methods inherited from class org.apache.spark.rdd.RDD
aggregate, barrier, cartesian, cleanShuffleDependencies, coalesce, collect, context, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, first, flatMap, fold, foreach, foreachPartition, getNumPartitions, getResourceProfile, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isEmpty, iterator, keyBy, localCheckpoint, map, mapPartitions, mapPartitionsWithEvaluator, mapPartitionsWithIndex, max, min, name, numericRDDToDoubleRDDFunctions, partitions, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeReduce, union, withResources, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Method Details
-
partitionsRDD
public RDD<scala.Tuple2<Object,org.apache.spark.graphx.impl.EdgePartition<ED,VD>>> partitionsRDD()
The RDD of (partition ID, EdgePartition) pairs backing this EdgeRDD.
-
targetStorageLevel
public StorageLevel targetStorageLevel()
The storage level at which cache() persists the edge partitions.
-
setName
Description copied from class: RDD. Assign a name to this RDD.
-
partitioner
If partitionsRDD already has a partitioner, use it. Otherwise assume that the PartitionIDs in partitionsRDD correspond to the actual partitions and create a new partitioner that allows co-partitioning with partitionsRDD.
- Overrides: partitioner in class RDD<Edge<ED>>
- Returns: (undocumented)
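For illustration, a minimal Scala sketch of building a graph whose edges are backed by an EdgeRDDImpl at runtime; the local SparkSession and the sample edges are assumptions made up for this example. Because the partitioner is always defined here, edge RDDs derived from the same graph remain co-partitioned, which is what innerJoin below relies on.

  import org.apache.spark.Partitioner
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.graphx.{Edge, Graph}

  // assumed local session, purely for illustration
  val spark = SparkSession.builder().master("local[*]").appName("edge-rdd-demo").getOrCreate()
  val sc = spark.sparkContext

  val raw = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "likes")))
  val graph = Graph.fromEdges(raw, defaultValue = 0) // Graph[Int, String]

  // graph.edges is statically an EdgeRDD[String]; at runtime it is an EdgeRDDImpl
  val p: Option[Partitioner] = graph.edges.partitioner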
-
collect
Description copied from class: RDD. Return an array that contains all of the elements in this RDD.
-
persist
Persists the edge partitions at the specified storage level, ignoring any existing target storage level.
-
unpersist
Description copied from class: RDD. Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
-
cache
Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
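A short sketch of the persistence calls, assuming the graph built in the example above:

  import org.apache.spark.storage.StorageLevel

  val edges = graph.edges
  edges.persist(StorageLevel.MEMORY_AND_DISK) // explicit level, ignores targetStorageLevel
  edges.count()                               // an action materializes the cached partitions
  edges.unpersist(blocking = false)           // drop the blocks from memory and disk
  edges.cache()                               // re-persist at targetStorageLevel (MEMORY_ONLY by default)
-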
getStorageLevel
Description copied from class: RDD. Get the RDD's current storage level, or StorageLevel.NONE if none is set.
- Overrides: getStorageLevel in class RDD<Edge<ED>>
-
checkpoint
public void checkpoint()
Description copied from class: RDD. Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext#setCheckpointDir, and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory; otherwise saving it to a file will require recomputation.

The data is only checkpointed when doCheckpoint() is called, and that only happens at the end of the first action executed on this RDD. The final data that is checkpointed after the first action may differ from the data that was used during the action, due to non-determinism of the underlying operation and retries. If the purpose of the checkpoint is to save a deterministic snapshot of the data, an eager action may need to be called first on the RDD to trigger the checkpoint.
- Overrides: checkpoint in class RDD<Edge<ED>>
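A minimal sketch of the checkpoint sequence described above, assuming the sc and graph from the earlier example; the checkpoint directory is a placeholder path, not a requirement of the API:

  sc.setCheckpointDir("/tmp/graphx-checkpoints") // placeholder directory

  val edges = graph.edges
  edges.cache()      // recommended, so the save does not recompute the RDD
  edges.checkpoint() // must run before any job has executed on this RDD
  edges.count()      // the first action triggers the actual checkpoint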
-
isCheckpointed
public boolean isCheckpointed()
Description copied from class: RDD. Return whether this RDD is checkpointed and materialized, either reliably or locally.
- Overrides: isCheckpointed in class RDD<Edge<ED>>
- Returns:
- (undocumented)
-
getCheckpointFile
Description copied from class: RDD. Gets the name of the directory to which this RDD was checkpointed. This is not defined if the RDD is checkpointed locally.
- Overrides: getCheckpointFile in class RDD<Edge<ED>>
- Returns:
- (undocumented)
-
count
public long count()
The number of edges in the RDD.
-
mapValues
public <ED2> EdgeRDDImpl<ED2,VD> mapValues(scala.Function1<Edge<ED>, ED2> f, scala.reflect.ClassTag<ED2> evidence$3)
Description copied from class: EdgeRDD. Map the values in each edge partition, preserving the structure but changing the values.
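For example, assuming the graph from the earlier sketch, the String edge attributes can be replaced by their lengths while the edge structure is kept intact:

  val lengths = graph.edges.mapValues(e => e.attr.length) // EdgeRDD[Int]
-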
reverse
Description copied from class: EdgeRDD. Reverse all the edges in this RDD.
-
filter
public EdgeRDDImpl<ED,VD> filter(scala.Function1<EdgeTriplet<VD, ED>, Object> epred, scala.Function2<Object, VD, Object> vpred)
Filters the edges using an edge-triplet predicate (epred) and a vertex predicate (vpred), keeping only edges whose triplet and both endpoints satisfy the predicates.
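A sketch of a direct call, assuming the graph from the earlier example. filter(epred, vpred) is declared on EdgeRDDImpl rather than EdgeRDD, so the statically typed edge RDD is cast first; the usual public route to this kind of filtering is Graph.subgraph.

  import org.apache.spark.graphx.impl.EdgeRDDImpl

  val impl = graph.edges.asInstanceOf[EdgeRDDImpl[String, Int]] // ED = String, VD = Int
  val noSelfLoops = impl.filter(
    epred = t => t.srcId != t.dstId, // keep only edges between distinct vertices
    vpred = (vid, attr) => true      // keep every vertex
  )
-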
innerJoin
public <ED2,ED3> EdgeRDDImpl<ED3,VD> innerJoin(EdgeRDD<ED2> other, scala.Function4<Object, Object, ED, ED2, ED3> f, scala.reflect.ClassTag<ED2> evidence$4, scala.reflect.ClassTag<ED3> evidence$5)
Description copied from class: EdgeRDD. Inner joins this EdgeRDD with another EdgeRDD, assuming both are partitioned using the same PartitionStrategy.
- Specified by: innerJoin in class EdgeRDD<ED>
- Parameters:
  other - the EdgeRDD to join with
  f - the join function applied to corresponding values of this and other
  evidence$4 - (undocumented)
  evidence$5 - (undocumented)
- Returns: a new EdgeRDD containing only edges that appear in both this and other, with values supplied by f
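For example, assuming the graph from the earlier sketch: mapValues preserves the partitioning, so the derived edge RDD satisfies innerJoin's co-partitioning assumption, and the join pairs up each edge's two attributes:

  val tagged = graph.edges.innerJoin(graph.edges.mapValues(_ => 1.0)) {
    (src, dst, label, w) => (label, w) // combine both attributes per edge
  }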
-
mapEdgePartitions
public <ED2,VD2> EdgeRDDImpl<ED2,VD2> mapEdgePartitions(scala.Function2<Object, org.apache.spark.graphx.impl.EdgePartition<ED, VD>, org.apache.spark.graphx.impl.EdgePartition<ED2, VD2>> f, scala.reflect.ClassTag<ED2> evidence$6, scala.reflect.ClassTag<VD2> evidence$7)
Applies f to each edge partition (keyed by its partition ID) and returns a new EdgeRDDImpl built from the transformed partitions, preserving the original partitioning.