Package org.apache.spark
Class FetchFailed
java.lang.Object
    org.apache.spark.FetchFailed

All Implemented Interfaces:
    Serializable, TaskEndReason, TaskFailedReason, scala.Equals, scala.Product
:: DeveloperApi ::
Task failed to fetch shuffle data from a remote node. This usually means the remote executors the task was trying to fetch from have been lost, so the previous (map) stage needs to be rerun to regenerate the missing output.
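FetchFailed instances are produced by Spark itself when a shuffle fetch fails, so the usual way to encounter one from user code is as the reason field of a task-end event. The following is a minimal Scala sketch, not part of this API's documentation; the listener class name and the log format are illustrative assumptions.

    import org.apache.spark.FetchFailed
    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    // Hypothetical listener that logs every fetch failure observed at task end.
    class FetchFailureLogger extends SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        taskEnd.reason match {
          case f: FetchFailed =>
            // toErrorString is the message Spark displays in the web UI.
            println(s"Fetch failed: shuffle=${f.shuffleId} map=${f.mapId} " +
              s"(index ${f.mapIndex}) reduce=${f.reduceId} from ${f.bmAddress}: ${f.toErrorString}")
          case _ => // other task-end reasons are ignored
        }
      }
    }

    // Register with a running context, e.g.: sc.addSparkListener(new FetchFailureLogger)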
Constructor Summary

Constructors
    FetchFailed(BlockManagerId bmAddress, int shuffleId, long mapId, int mapIndex, int reduceId, String message)
Method Summary

Modifier and Type    Method and Description
abstract static R    apply(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6)
BlockManagerId       bmAddress()
boolean              countTowardsTaskFailures()
                         Fetch failures lead to a different failure handling path: (1) we don't abort the stage after 4 task failures; instead we immediately go back to the stage which generated the map output and regenerate the missing data.
long                 mapId()
int                  mapIndex()
String               message()
int                  reduceId()
int                  shuffleId()
String               toErrorString()
                         Error message displayed in the web UI.
static String        toString()

Methods inherited from class java.lang.Object
    equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface scala.Equals
    canEqual, equals

Methods inherited from interface scala.Product
    productArity, productElement, productElementName, productElementNames, productIterator, productPrefix
Constructor Details

FetchFailed
public FetchFailed(BlockManagerId bmAddress, int shuffleId, long mapId, int mapIndex, int reduceId, String message)
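Because FetchFailed is a Scala case class (hence the scala.Equals and scala.Product interfaces listed above), the constructor parameters double as accessors and the class can be destructured with an extractor pattern. The helper below is a hypothetical sketch, not part of the Spark API:

    import org.apache.spark.{FetchFailed, TaskEndReason}

    // Hypothetical helper: summarize a fetch failure as a short diagnostic string.
    def fetchFailureKey(reason: TaskEndReason): Option[String] = reason match {
      case FetchFailed(bmAddress, shuffleId, mapId, mapIndex, reduceId, _) =>
        Some(s"shuffle=$shuffleId map=$mapId/$mapIndex reduce=$reduceId at $bmAddress")
      case _ =>
        None
    }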
Method Details

apply
public abstract static R apply(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6)

toString
public static String toString()

bmAddress
public BlockManagerId bmAddress()

shuffleId
public int shuffleId()

mapId
public long mapId()

mapIndex
public int mapIndex()

reduceId
public int reduceId()

message
public String message()

toErrorString
public String toErrorString()
Description copied from interface: TaskFailedReason
Error message displayed in the web UI.
Specified by:
    toErrorString in interface TaskFailedReason
 
countTowardsTaskFailures
public boolean countTowardsTaskFailures()
Fetch failures lead to a different failure handling path: (1) we don't abort the stage after 4 task failures; instead we immediately go back to the stage which generated the map output and regenerate the missing data. (2) We don't count fetch failures from executors excluded due to too many task failures, since presumably it's not the fault of the executor where the task ran, but of the executor which stored the data. This is especially important because we might rack up a bunch of fetch failures in rapid succession, across all nodes of the cluster, due to one bad node.
Specified by:
    countTowardsTaskFailures in interface TaskFailedReason
Returns:
    (undocumented)
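
A rough Scala sketch of the decision this flag drives, written as standalone code rather than Spark's actual scheduler logic (the helper name, the failure counter, and the hard-coded limit of 4 are illustrative assumptions):

    import org.apache.spark.TaskFailedReason

    // Illustrative only: reasons that count towards task failures advance the
    // per-task failure counter; a FetchFailed reason returns false here, so the
    // scheduler instead resubmits the stage that produced the missing map output.
    def updatedFailureCount(reason: TaskFailedReason, failures: Int, maxFailures: Int = 4): Int = {
      if (reason.countTowardsTaskFailures) {
        val next = failures + 1
        require(next < maxFailures, s"aborting stage: ${reason.toErrorString}")
        next
      } else {
        failures  // e.g. FetchFailed: don't count it against the task
      }
    }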
 
 