Class CircularDependencyException
java.lang.Object
  java.lang.Throwable
    java.lang.Exception
      org.apache.spark.sql.AnalysisException
        org.apache.spark.sql.pipelines.graph.CircularDependencyException
All Implemented Interfaces:
Serializable, SparkThrowable, org.apache.spark.sql.catalyst.trees.WithOrigin, scala.Equals, scala.Product
public class CircularDependencyException
extends AnalysisException
implements scala.Product, Serializable
Raised when there is a circular dependency in the current pipeline; that is, a downstream
table is referenced while creating an upstream table.
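For illustration only, a minimal sketch of the situation this exception describes and of the roles of the two constructor arguments. The class CycleExample and the table names "silver" and "bronze" are hypothetical; in practice the exception is raised by the pipeline graph resolution itself rather than constructed by user code.

import org.apache.spark.sql.catalyst.TableIdentifier;
import org.apache.spark.sql.pipelines.graph.CircularDependencyException;

public class CycleExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical cycle: "silver" is downstream of "bronze", but the
    // definition of "bronze" (the upstream dataset) also references "silver".
    TableIdentifier downstreamTable = new TableIdentifier("silver");
    TableIdentifier upstreamDataset = new TableIdentifier("bronze");

    // Constructed here only to show the argument roles; resolving the
    // pipeline graph would raise it when such a cycle is detected.
    throw new CircularDependencyException(downstreamTable, upstreamDataset);
  }
}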
Constructor Summary

Constructors
CircularDependencyException(org.apache.spark.sql.catalyst.TableIdentifier downstreamTable, org.apache.spark.sql.catalyst.TableIdentifier upstreamDataset)
Method Summary

Methods inherited from class org.apache.spark.sql.AnalysisException
cause, context, copy, copy, errorClass, getCondition, getDefaultMessageTemplate, getMessage, getMessageParameters, getQueryContext, getSimpleMessage, getSqlState, line, message, messageParameters, messageTemplate, origin, sqlState, startPosition, withPosition

Methods inherited from class java.lang.Throwable
addSuppressed, fillInStackTrace, getCause, getLocalizedMessage, getStackTrace, getSuppressed, initCause, printStackTrace, printStackTrace, printStackTrace, setStackTrace, toString

Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface scala.Equals
canEqual, equals

Methods inherited from interface scala.Product
productArity, productElement, productElementName, productElementNames, productIterator, productPrefix

Methods inherited from interface org.apache.spark.SparkThrowable
getBreakingChangeInfo, getErrorClass, isInternalError
Constructor Details

CircularDependencyException
public CircularDependencyException(org.apache.spark.sql.catalyst.TableIdentifier downstreamTable, org.apache.spark.sql.catalyst.TableIdentifier upstreamDataset)
Method Details

apply
public abstract static R apply(T1 v1, T2 v2)
toString
public static String toString()
downstreamTable
public org.apache.spark.sql.catalyst.TableIdentifier downstreamTable()
upstreamDataset
public org.apache.spark.sql.catalyst.TableIdentifier upstreamDataset()
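A minimal, hypothetical sketch of reading these accessors from a caught or received exception; the CycleReport class and the message wording are illustrative, not part of Spark.

import org.apache.spark.sql.pipelines.graph.CircularDependencyException;

public class CycleReport {
  // Hypothetical helper: format the two identifiers carried by the exception
  // into a short diagnostic using the accessors documented above.
  static String describe(CircularDependencyException e) {
    return "Upstream dataset " + e.upstreamDataset()
        + " cannot be created because it references downstream table "
        + e.downstreamTable();
  }
}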
 