public interface SparkErrorUtils
extends org.apache.spark.internal.Logging
Modifier and Type | Method and Description
---|---
String | stackTraceToString(Throwable t)
<T> T | tryOrIOException(scala.Function0<T> block): Execute a block of code that returns a value, re-throwing any non-fatal uncaught exceptions as IOException.
<R extends java.io.Closeable,T> T | tryWithResource(scala.Function0<R> createResource, scala.Function1<R,T> f)
<T> T | tryWithSafeFinally(scala.Function0<T> block, scala.Function0<scala.runtime.BoxedUnit> finallyBlock): Execute a block of code, then a finally block, but if exceptions happen in the finally block, do not suppress the original exception.
Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
<T> T tryOrIOException(scala.Function0<T> block)

Execute a block of code that returns a value, re-throwing any non-fatal uncaught exceptions as IOException.

Parameters:
block - (undocumented)
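A minimal sketch of the described behavior (an illustrative reimplementation, not Spark's source). The name `tryOrIOExceptionSketch` is made up, and rethrowing an existing IOException unchanged is an assumption:

```scala
import java.io.IOException
import scala.util.control.NonFatal

// Sketch only: run the block, wrapping any non-fatal failure as IOException.
def tryOrIOExceptionSketch[T](block: => T): T = {
  try block
  catch {
    // Assumption: an IOException is rethrown as-is rather than re-wrapped.
    case e: IOException => throw e
    // NonFatal deliberately excludes fatal errors such as OutOfMemoryError.
    case NonFatal(e) => throw new IOException(e)
  }
}
```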
<R extends java.io.Closeable,T> T tryWithResource(scala.Function0<R> createResource, scala.Function1<R,T> f)
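This method is undocumented, but the signature suggests the usual resource-loan pattern: create the Closeable, apply f to it, and close it in a finally block. A hedged sketch under that assumption (names and the file path are illustrative):

```scala
import java.io.{BufferedReader, Closeable, FileReader}

// Sketch of the loan pattern implied by the signature:
// the resource is always closed, whether or not f throws.
def tryWithResourceSketch[R <: Closeable, T](createResource: => R)(f: R => T): T = {
  val resource = createResource
  try f(resource)
  finally resource.close()
}

// Hypothetical usage; "data.txt" is made up for illustration.
// val firstLine = tryWithResourceSketch(
//   new BufferedReader(new FileReader("data.txt")))(_.readLine())
```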
<T> T tryWithSafeFinally(scala.Function0<T> block, scala.Function0<scala.runtime.BoxedUnit> finallyBlock)

Execute a block of code, then a finally block, but if exceptions happen in the finally block, do not suppress the original exception.

This is primarily an issue with `finally { out.close() }` blocks, where close needs to be called to clean up `out`, but if an exception happened in `out.write`, it's likely `out` may be corrupted and `out.close` will fail as well. This would then suppress the original, likely more meaningful, exception from the original `out.write` call.
Parameters:
block - (undocumented)
finallyBlock - (undocumented)
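The non-suppression semantics can be sketched as follows (an illustration of the documented behavior, not Spark's source): remember the exception thrown by the block, and if the finally block also throws, attach that failure via addSuppressed and rethrow the original.

```scala
// Sketch: run block, then finallyBlock; if both throw, keep the block's
// exception as the primary one and attach the finally-block failure as a
// suppressed exception instead of losing the original.
def tryWithSafeFinallySketch[T](block: => T)(finallyBlock: => Unit): T = {
  var original: Throwable = null
  try {
    block
  } catch {
    case t: Throwable =>
      original = t
      throw t
  } finally {
    try {
      finallyBlock
    } catch {
      // Only intercept when a (different) original exception is in flight;
      // if the block succeeded, the finally failure propagates normally.
      case t: Throwable if original != null && original != t =>
        original.addSuppressed(t) // keep the more meaningful exception visible
        throw original
    }
  }
}
```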
String stackTraceToString(Throwable t)
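This method is undocumented; a conventional implementation renders the stack trace through the standard StringWriter/PrintWriter idiom. A sketch, not Spark's source:

```scala
import java.io.{PrintWriter, StringWriter}

// Sketch: capture Throwable.printStackTrace output into a String.
def stackTraceToStringSketch(t: Throwable): String = {
  val sw = new StringWriter()
  val pw = new PrintWriter(sw)
  t.printStackTrace(pw)
  pw.flush()
  sw.toString
}
```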