A B C D E F G H I J K L M N O P Q R S T U V W Z _ 

A

abs(Column) - Static method in class org.apache.spark.sql.functions
Computes the absolute value.
AbsoluteError - Class in org.apache.spark.mllib.tree.loss
:: DeveloperApi :: Class for absolute error loss calculation (for regression).
AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
 
accId() - Method in class org.apache.spark.CleanAccum
 
Accumulable<R,T> - Class in org.apache.spark
A data type that can be accumulated, i.e. has a commutative and associative "add" operation, but where the result type, R, may be different from the element type being added, T.
Accumulable(R, AccumulableParam<R, T>, Option<String>) - Constructor for class org.apache.spark.Accumulable
 
Accumulable(R, AccumulableParam<R, T>) - Constructor for class org.apache.spark.Accumulable
 
accumulable(T, AccumulableParam<T, R>) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulable shared variable of the given type, to which tasks can "add" values with add.
accumulable(T, String, AccumulableParam<T, R>) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulable shared variable of the given type, to which tasks can "add" values with add.
accumulable(R, AccumulableParam<R, T>) - Method in class org.apache.spark.SparkContext
Create an Accumulable shared variable, to which tasks can add values with +=.
accumulable(R, String, AccumulableParam<R, T>) - Method in class org.apache.spark.SparkContext
Create an Accumulable shared variable, with a name for display in the Spark UI.
accumulableCollection(R, Function1<R, Growable<T>>, ClassTag<R>) - Method in class org.apache.spark.SparkContext
Create an accumulator from a "mutable collection" type.
AccumulableInfo - Class in org.apache.spark.scheduler
:: DeveloperApi :: Information about an Accumulable modified during a task or stage.
AccumulableInfo(long, String, Option<String>, String) - Constructor for class org.apache.spark.scheduler.AccumulableInfo
 
AccumulableInfo - Class in org.apache.spark.status.api.v1
 
AccumulableParam<R,T> - Interface in org.apache.spark
Helper object defining how to accumulate values of a particular type.
accumulables() - Method in class org.apache.spark.scheduler.StageInfo
Terminal values of accumulables updated during this stage.
accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
Intermediate updates to accumulables during this task.
Accumulator<T> - Class in org.apache.spark
A simpler value of Accumulable where the result type being accumulated is the same as the type of the elements being merged.
Accumulator(T, AccumulatorParam<T>, Option<String>) - Constructor for class org.apache.spark.Accumulator
 
Accumulator(T, AccumulatorParam<T>) - Constructor for class org.apache.spark.Accumulator
 
accumulator(int) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator integer variable, which tasks can "add" values to using the add method.
accumulator(int, String) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator integer variable, which tasks can "add" values to using the add method.
accumulator(double) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator double variable, which tasks can "add" values to using the add method.
accumulator(double, String) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator double variable, which tasks can "add" values to using the add method.
accumulator(T, AccumulatorParam<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator variable of a given type, which tasks can "add" values to using the add method.
accumulator(T, String, AccumulatorParam<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator variable of a given type, which tasks can "add" values to using the add method.
accumulator(T, AccumulatorParam<T>) - Method in class org.apache.spark.SparkContext
Create an Accumulator variable of a given type, which tasks can "add" values to using the += method.
accumulator(T, String, AccumulatorParam<T>) - Method in class org.apache.spark.SparkContext
Create an Accumulator variable of a given type, with a name for display in the Spark UI.
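A sketch of typical accumulator usage, assuming an existing SparkContext named sc (the file path and counter name are illustrative):
    val errorCount = sc.accumulator(0, "Parse errors")   // named, so it shows up in the Spark UI
    sc.textFile("input.txt").foreach { line =>
      if (line.isEmpty) errorCount += 1                  // tasks may only add to it
    }
    println(errorCount.value)                            // only the driver may read the value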
AccumulatorParam<T> - Interface in org.apache.spark
A simpler version of AccumulableParam where the only data type you can add in is the same type as the accumulated value.
AccumulatorParam.DoubleAccumulatorParam$ - Class in org.apache.spark
 
AccumulatorParam.DoubleAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
 
AccumulatorParam.FloatAccumulatorParam$ - Class in org.apache.spark
 
AccumulatorParam.FloatAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
 
AccumulatorParam.IntAccumulatorParam$ - Class in org.apache.spark
 
AccumulatorParam.IntAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.IntAccumulatorParam$
 
AccumulatorParam.LongAccumulatorParam$ - Class in org.apache.spark
 
AccumulatorParam.LongAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
 
accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
 
accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
 
accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns accuracy
acos(Column) - Static method in class org.apache.spark.sql.functions
Computes the cosine inverse of the given value; the returned angle is in the range 0.0 through pi.
acos(String) - Static method in class org.apache.spark.sql.functions
Computes the cosine inverse of the given column; the returned angle is in the range 0.0 through pi.
active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
 
activeJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
activeStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
ActorHelper - Interface in org.apache.spark.streaming.receiver
:: DeveloperApi :: A receiver trait to be mixed in with your Actor to gain access to the API for pushing received data into Spark Streaming for processing.
actorStream(Props, String, StorageLevel, SupervisorStrategy) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream with any arbitrary user implemented actor receiver.
actorStream(Props, String, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream with any arbitrary user implemented actor receiver.
actorStream(Props, String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream with any arbitrary user implemented actor receiver.
actorStream(Props, String, StorageLevel, SupervisorStrategy, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream with any arbitrary user implemented actor receiver.
ActorSupervisorStrategy - Class in org.apache.spark.streaming.receiver
:: DeveloperApi :: A helper with a set of defaults for the supervisor strategy.
ActorSupervisorStrategy() - Constructor for class org.apache.spark.streaming.receiver.ActorSupervisorStrategy
 
actorSystem() - Method in class org.apache.spark.SparkEnv
 
add(T) - Method in class org.apache.spark.Accumulable
Add more data to this accumulator / accumulable
add(double, Vector) - Method in class org.apache.spark.ml.classification.LogisticAggregator
Add a new training instance to this LogisticAggregator, and update the loss and gradient of the objective function.
add(double, Vector) - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
Add a new training instance to this LeastSquaresAggregator, and update the loss and gradient of the objective function.
add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
 
add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
Adds a new document.
add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Adds two block matrices together.
add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Add a new sample to this summarizer, and update the statistical summary.
add(Vector) - Method in class org.apache.spark.util.Vector
 
addAccumulator(R, T) - Method in interface org.apache.spark.AccumulableParam
Add additional data to the accumulator value.
addAccumulator(T, T) - Method in interface org.apache.spark.AccumulatorParam
 
addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
Adds command line arguments for the application.
addedFiles() - Method in class org.apache.spark.SparkContext
 
addedJars() - Method in class org.apache.spark.SparkContext
 
addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
Adds a file to be submitted with the application.
addFile(String) - Method in class org.apache.spark.SparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String, boolean) - Method in class org.apache.spark.SparkContext
Add a file to be downloaded with this Spark job on every node.
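A minimal sketch of how addFile pairs with SparkFiles.get on the executors (the path is illustrative; sc is an existing SparkContext):
    import org.apache.spark.SparkFiles
    sc.addFile("/path/to/lookup.csv")                 // shipped to every node
    sc.parallelize(1 to 4).foreach { _ =>
      val local = SparkFiles.get("lookup.csv")        // resolve the local copy on the worker
      // read `local` with ordinary file APIs
    }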
addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a param with multiple values (overwrites if the input param exists).
addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a double param with multiple values.
addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds an int param with multiple values.
addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a float param with multiple values.
addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a long param with multiple values.
addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a boolean param with true and false.
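The addGrid calls compose with build() (see the corresponding entry below); a sketch, assuming an org.apache.spark.ml.classification.LogisticRegression estimator named lr:
    import org.apache.spark.ml.tuning.ParamGridBuilder
    val paramGrid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1, 1.0))   // DoubleParam overload
      .addGrid(lr.fitIntercept)                      // BooleanParam overload: tries true and false
      .build()                                       // Array[ParamMap], one per combination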
addInPlace(R, R) - Method in interface org.apache.spark.AccumulableParam
Merge two accumulated values together.
addInPlace(double, double) - Method in class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
 
addInPlace(float, float) - Method in class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
 
addInPlace(int, int) - Method in class org.apache.spark.AccumulatorParam.IntAccumulatorParam$
 
addInPlace(long, long) - Method in class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
 
addInPlace(double, double) - Method in class org.apache.spark.SparkContext.DoubleAccumulatorParam$
 
addInPlace(float, float) - Method in class org.apache.spark.SparkContext.FloatAccumulatorParam$
 
addInPlace(int, int) - Method in class org.apache.spark.SparkContext.IntAccumulatorParam$
 
addInPlace(long, long) - Method in class org.apache.spark.SparkContext.LongAccumulatorParam$
 
addInPlace(Vector) - Method in class org.apache.spark.util.Vector
 
addInPlace(Vector, Vector) - Method in class org.apache.spark.util.Vector.VectorAccumParam$
 
addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
Adds a jar file to be submitted with the application.
addJar(String) - Method in class org.apache.spark.SparkContext
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
Add Hadoop configuration specific to a single partition and attempt.
addOnCompleteCallback(Function0<BoxedUnit>) - Method in class org.apache.spark.TaskContext
Adds a callback function to be executed on task completion.
addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.PartitionCoalescer
 
addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
Adds a Python file / zip / egg to be submitted with the application.
address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
 
addSparkListener(SparkListener) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Register a listener to receive up-calls from events that happen during execution.
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Add a StreamingListener object for receiving system events related to streaming.
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
Add a StreamingListener object for receiving system events related to streaming.
addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
Adds a (Java friendly) listener to be executed on task completion.
addTaskCompletionListener(Function1<TaskContext, BoxedUnit>) - Method in class org.apache.spark.TaskContext
Adds a listener in the form of a Scala closure to be executed on task completion.
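A sketch of the Scala-closure overload, typically called from inside a task (rdd is an existing RDD; TaskContext.get() returns the context of the currently running task):
    import org.apache.spark.TaskContext
    rdd.mapPartitions { iter =>
      TaskContext.get().addTaskCompletionListener { ctx =>
        println(s"partition ${ctx.partitionId()} finished")  // e.g. release a per-partition resource
      }
      iter
    }.count()   // any action triggers the tasks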
addVector(Vector) - Method in class org.apache.spark.ml.feature.VectorIndexer.CategoryStats
Adds a new vector to this index, updating the sets of unique feature values.
agg(Column, Column...) - Method in class org.apache.spark.sql.DataFrame
Aggregates on the entire DataFrame without groups.
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.DataFrame
(Scala-specific) Aggregates on the entire DataFrame without groups.
agg(Map<String, String>) - Method in class org.apache.spark.sql.DataFrame
(Scala-specific) Aggregates on the entire DataFrame without groups.
agg(Map<String, String>) - Method in class org.apache.spark.sql.DataFrame
(Java-specific) Aggregates on the entire DataFrame without groups.
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.DataFrame
Aggregates on the entire DataFrame without groups.
agg(Column, Column...) - Method in class org.apache.spark.sql.GroupedData
Compute aggregates by specifying a series of aggregate columns.
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.GroupedData
(Scala-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Map<String, String>) - Method in class org.apache.spark.sql.GroupedData
(Scala-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Map<String, String>) - Method in class org.apache.spark.sql.GroupedData
(Java-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.GroupedData
Compute aggregates by specifying a series of aggregate columns.
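A sketch of both aggregation styles on a hypothetical DataFrame df with department, salary, and bonus columns:
    import org.apache.spark.sql.functions._
    df.groupBy("department").agg(avg("salary"), max("salary"))             // Column-based
    df.groupBy("department").agg(Map("salary" -> "avg", "bonus" -> "max")) // column name -> aggregate method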
aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
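For example, computing a sum and a count in one pass (a sketch; the zero value, per-partition function, and combine function follow the signature above; sc is an existing SparkContext):
    val nums = sc.parallelize(1 to 100)
    val (sum, count) = nums.aggregate((0, 0))(
      (acc, x) => (acc._1 + x, acc._2 + 1),       // fold an element into the partition-local result
      (a, b)   => (a._1 + b._1, a._2 + b._2))     // merge results from different partitions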
aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using given combine functions and a neutral "zero value".
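A sketch using the overload that takes only the combine functions (default parallelism; sc is an existing SparkContext):
    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    val sumsPerKey = pairs.aggregateByKey(0)(_ + _, _ + _)   // zero value 0, seqOp, combOp
    // sumsPerKey.collect() => Array(("a", 3), ("b", 3)) (order may vary)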
AggregatedDialect - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: AggregatedDialect can unify multiple dialects into one virtual Dialect.
AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect
 
aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
Aggregates values from the neighboring edges and vertices of each vertex.
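A sketch of a common pattern, assuming a Graph[Double, Int] named graph whose vertex attribute is an age: count, for each vertex, how many of its followers are older than it.
    import org.apache.spark.graphx._
    val olderFollowers: VertexRDD[Int] = graph.aggregateMessages[Int](
      ctx => if (ctx.srcAttr > ctx.dstAttr) ctx.sendToDst(1),   // send a message along each qualifying edge
      _ + _,                                                    // merge messages arriving at a vertex
      TripletFields.All)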
aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
AggregatingEdgeContext<VD,ED,A> - Class in org.apache.spark.graphx.impl
 
AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
Aggregator<K,V,C> - Class in org.apache.spark
:: DeveloperApi :: A set of functions used to aggregate data.
Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator
 
aggregator() - Method in class org.apache.spark.ShuffleDependency
 
Algo - Class in org.apache.spark.mllib.tree.configuration
:: Experimental :: Enum to select the algorithm for the decision tree
Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo
 
algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
 
algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
 
algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
 
algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
 
alias(String) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
All - Static variable in class org.apache.spark.graphx.TripletFields
Expose all the fields (source, edge, and destination).
AlphaComponent - Annotation Type in org.apache.spark.annotation
A new component of Spark which may have unstable APIs.
ALS - Class in org.apache.spark.ml.recommendation
:: Experimental :: Alternating Least Squares (ALS) matrix factorization.
ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS
 
ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS
 
ALS - Class in org.apache.spark.mllib.recommendation
 
ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
 
ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
:: DeveloperApi :: Rating class for better code readability.
ALS.Rating(ID, ID, float) - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating
 
ALS.Rating$ - Class in org.apache.spark.ml.recommendation
 
ALS.Rating$() - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating$
 
ALSModel - Class in org.apache.spark.ml.recommendation
:: Experimental :: Model fitted by ALS.
AnalysisException - Exception in org.apache.spark.sql
:: DeveloperApi :: Thrown when a query fails to analyze, usually because the query itself is invalid.
analyze(String) - Method in class org.apache.spark.sql.hive.HiveContext
Analyzes the given table in the current database to generate statistics, which will be used in query optimizations.
and(Column) - Method in class org.apache.spark.sql.Column
Boolean AND.
And - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff both left and right evaluate to true.
And(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.And
 
ANY() - Static method in class org.apache.spark.scheduler.TaskLocality
 
anyNull() - Method in interface org.apache.spark.sql.Row
Returns true if there are any NULL values in this row.
appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
Returns a new vector with 1.0 (bias) appended to the input vector.
appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
applicationAttemptId() - Method in class org.apache.spark.SparkContext
 
ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1
 
applicationId() - Method in class org.apache.spark.SparkContext
 
ApplicationInfo - Class in org.apache.spark.status.api.v1
 
ApplicationStatus - Enum in org.apache.spark.status.api.v1
 
apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of vertices and edges with attributes.
apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from edges, setting referenced vertices to `defaultVertexAttr`.
apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from vertices and edges, setting missing vertices to `defaultVertexAttr`.
apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
Execute a Pregel-like iterative vertex-parallel abstraction.
apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an EdgeRDD) from an RDD of vertex-attribute pairs.
apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its name.
apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its index.
apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
Gets the value of the input param or its default value if it does not exist.
apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector
 
apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
Gets the (i, j)-th element.
apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
Gets the value of the ith element.
apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
apply(String) - Static method in class org.apache.spark.rdd.PartitionGroup
 
apply(long, String, Option<String>, String) - Static method in class org.apache.spark.scheduler.AccumulableInfo
 
apply(long, String, String) - Static method in class org.apache.spark.scheduler.AccumulableInfo
 
apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage
 
apply(Object) - Method in class org.apache.spark.sql.Column
Extracts a value or values from a complex type.
apply(String) - Method in class org.apache.spark.sql.DataFrame
Selects column based on the column name and return it as a Column.
apply(DataFrame, Seq<Expression>, GroupedData.GroupType) - Static method in class org.apache.spark.sql.GroupedData
 
apply(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
Construct an ArrayType object with the given element type.
apply(double) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(long) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(String) - Static method in class org.apache.spark.sql.types.Decimal
 
apply() - Static method in class org.apache.spark.sql.types.DecimalType
 
apply(int, int) - Static method in class org.apache.spark.sql.types.DecimalType
 
apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
Construct a MapType object with the given key type and value type.
apply(String) - Method in class org.apache.spark.sql.types.StructType
Extracts a StructField of the given name.
apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
Returns a StructType containing StructFields of the given names, preserving the original order of fields.
apply(int) - Method in class org.apache.spark.sql.types.StructType
 
apply(String) - Static method in class org.apache.spark.sql.types.UTF8String
Create a UTF-8 String from a String.
apply(byte[]) - Static method in class org.apache.spark.sql.types.UTF8String
Create a UTF-8 String from an Array[Byte], which should be encoded in UTF-8.
apply(Seq<Column>) - Method in class org.apache.spark.sql.UserDefinedFunction
 
apply(String) - Static method in class org.apache.spark.storage.BlockId
Converts a BlockId "name" String back into a BlockId.
apply(String, String, int) - Static method in class org.apache.spark.storage.BlockManagerId
Returns a BlockManagerId for the given configuration.
apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId
 
apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object without setting useOffHeap.
apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object.
apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object from its integer representation.
apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Read StorageLevel object from ObjectInput stream.
apply(String, int) - Static method in class org.apache.spark.streaming.kafka.Broker
 
apply(String, int, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
 
apply(TopicAndPartition, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
 
apply(long) - Static method in class org.apache.spark.streaming.Milliseconds
 
apply(long) - Static method in class org.apache.spark.streaming.Minutes
 
apply(long) - Static method in class org.apache.spark.streaming.Seconds
 
apply(TraversableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
Build a StatCounter from a list of values.
apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
Build a StatCounter from a list of values passed as variable-length arguments.
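A sketch:
    import org.apache.spark.util.StatCounter
    val stats = StatCounter(Seq(1.0, 2.0, 3.0, 4.0))
    println(stats.mean)    // 2.5
    println(stats.stdev)   // population standard deviation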
apply(int) - Method in class org.apache.spark.util.Vector
 
applySchema(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
 
applySchema(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
 
applySchema(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
 
applySchema(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
 
appName() - Method in class org.apache.spark.api.java.JavaSparkContext
 
appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
appName() - Method in class org.apache.spark.SparkContext
 
approxCountDistinct(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approxCountDistinct(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approxCountDistinct(Column, double) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approxCountDistinct(String, double) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
areaUnderPR() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Computes the area under the precision-recall curve.
areaUnderROC() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Computes the area under the receiver operating characteristic (ROC) curve.
arr() - Method in class org.apache.spark.rdd.PartitionGroup
 
array(DataType) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type array.
array(Column...) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
ArrayType - Class in org.apache.spark.sql.types
 
ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType
 
as(String) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
as(Seq<String>) - Method in class org.apache.spark.sql.Column
(Scala-specific) Assigns the given aliases to the results of a table generating function.
as(String[]) - Method in class org.apache.spark.sql.Column
Assigns the given aliases to the results of a table generating function.
as(Symbol) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
as(String, Metadata) - Method in class org.apache.spark.sql.Column
Gives the column an alias with metadata.
as(String) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame with an alias set.
as(Symbol) - Method in class org.apache.spark.sql.DataFrame
(Scala-specific) Returns a new DataFrame with an alias set.
asc() - Method in class org.apache.spark.sql.Column
Returns an ordering used in sorting.
asc(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column.
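A sketch combining the functions helper and the Column method on a hypothetical DataFrame df with an age column:
    import org.apache.spark.sql.functions.asc
    df.sort(asc("age"))        // via functions.asc
    df.sort(df("age").asc)     // equivalent, via Column.asc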
asin(Column) - Static method in class org.apache.spark.sql.functions
Computes the sine inverse of the given value; the returned angle is in the range -pi/2 through pi/2.
asin(String) - Static method in class org.apache.spark.sql.functions
Computes the sine inverse of the given column; the returned angle is in the range -pi/2 through pi/2.
asIntegral() - Method in class org.apache.spark.sql.types.DecimalType
 
asIntegral() - Method in class org.apache.spark.sql.types.DoubleType
 
asIntegral() - Method in class org.apache.spark.sql.types.FloatType
 
asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
Read the elements of this stream through an iterator.
asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD
 
asJavaRDD() - Method in class org.apache.spark.api.r.RRDD
 
asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD
 
asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
Read the elements of this stream through an iterator over key-value pairs.
AskPermissionToCommitOutput - Class in org.apache.spark.scheduler
 
AskPermissionToCommitOutput(int, long, long) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
askTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
Returns the default Spark timeout to use for RPC ask operations.
asRDDId() - Method in class org.apache.spark.storage.BlockId
 
assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
 
AsyncRDDActions<T> - Class in org.apache.spark.rdd
A set of asynchronous RDD actions available through an implicit conversion.
AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions
 
atan(Column) - Static method in class org.apache.spark.sql.functions
Computes the tangent inverse of the given value.
atan(String) - Static method in class org.apache.spark.sql.functions
Computes the tangent inverse of the given column.
atan2(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(Column, String) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(String, Column) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(String, String) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(Column, double) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(String, double) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(double, Column) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
atan2(double, String) - Static method in class org.apache.spark.sql.functions
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
attempt() - Method in class org.apache.spark.scheduler.TaskInfo
 
attempt() - Method in class org.apache.spark.status.api.v1.TaskData
 
attemptId() - Method in class org.apache.spark.scheduler.StageInfo
 
attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
attemptId() - Method in class org.apache.spark.status.api.v1.StageData
 
attemptID() - Method in class org.apache.spark.TaskCommitDenied
 
attemptId() - Method in class org.apache.spark.TaskContext
 
attemptNumber() - Method in class org.apache.spark.TaskContext
How many times this task has been attempted.
attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
 
attr() - Method in class org.apache.spark.graphx.Edge
 
attr() - Method in class org.apache.spark.graphx.EdgeContext
The attribute associated with the edge.
attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
Attribute - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: Abstract class for ML attributes.
Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute
 
attribute() - Method in class org.apache.spark.sql.sources.EqualTo
 
attribute() - Method in class org.apache.spark.sql.sources.GreaterThan
 
attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
 
attribute() - Method in class org.apache.spark.sql.sources.In
 
attribute() - Method in class org.apache.spark.sql.sources.IsNotNull
 
attribute() - Method in class org.apache.spark.sql.sources.IsNull
 
attribute() - Method in class org.apache.spark.sql.sources.LessThan
 
attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
 
attribute() - Method in class org.apache.spark.sql.sources.StringContains
 
attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith
 
attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith
 
AttributeGroup - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: Attributes that describe a vector ML column.
AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group without attribute info.
AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group knowing only the number of attributes.
AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group with attributes.
attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
Optional array of attributes.
AttributeType - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType
 
attrType() - Method in class org.apache.spark.ml.attribute.Attribute
Attribute type.
attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
avg(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
avg(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
avg(String...) - Method in class org.apache.spark.sql.GroupedData
Compute the mean value for each numeric column for each group.
avg(Seq<String>) - Method in class org.apache.spark.sql.GroupedData
Compute the mean value for each numeric column for each group.
awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Wait for the execution to stop.
awaitTermination(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Deprecated.
As of 1.3.0, replaced by awaitTerminationOrTimeout(Long).
awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
Wait for the execution to stop.
awaitTermination(long) - Method in class org.apache.spark.streaming.StreamingContext
Deprecated.
As of 1.3.0, replaced by awaitTerminationOrTimeout(Long).
awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Wait for the execution to stop.
awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
Wait for the execution to stop.
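A sketch of a bounded wait, assuming an existing StreamingContext named ssc; the method returns true if the context stopped within the timeout:
    ssc.start()
    val stopped = ssc.awaitTerminationOrTimeout(60 * 1000)   // wait at most one minute
    if (!stopped) {
      ssc.stop(stopSparkContext = true, stopGracefully = true)
    }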

B

baseOn(ParamPair<?>...) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
baseOn(ParamMap) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
baseOn(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
BaseRelation - Class in org.apache.spark.sql.sources
::DeveloperApi:: Represents a collection of tuples with a known schema.
BaseRelation() - Constructor for class org.apache.spark.sql.sources.BaseRelation
 
baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SQLContext
 
BaseRRDD<T,U> - Class in org.apache.spark.api.r
 
BaseRRDD(RDD<T>, int, byte[], String, String, byte[], String, Broadcast<Object>[], ClassTag<T>, ClassTag<U>) - Constructor for class org.apache.spark.api.r.BaseRRDD
 
BATCHES() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
 
BatchInfo - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Class having information on completed batches.
BatchInfo(Time, Map<Object, Object>, long, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.streaming.scheduler.BatchInfo
 
batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
 
batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
 
batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
 
batchInfos() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
 
batchTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
 
Bernoulli() - Static method in class org.apache.spark.mllib.classification.NaiveBayes
String name for Bernoulli model type.
BernoulliCellSampler<T> - Class in org.apache.spark.util.random
:: DeveloperApi :: A sampler based on Bernoulli trials for partitioning a data sequence.
BernoulliCellSampler(double, double, boolean) - Constructor for class org.apache.spark.util.random.BernoulliCellSampler
 
BernoulliSampler<T> - Class in org.apache.spark.util.random
:: DeveloperApi :: A sampler based on Bernoulli trials.
BernoulliSampler(double, ClassTag<T>) - Constructor for class org.apache.spark.util.random.BernoulliSampler
 
bestModel() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
between(Object, Object) - Method in class org.apache.spark.sql.Column
True if the current column is between the lower bound and upper bound, inclusive.
Binarizer - Class in org.apache.spark.ml.feature
:: Experimental :: Binarize a column of continuous features given a threshold.
Binarizer(String) - Constructor for class org.apache.spark.ml.feature.Binarizer
 
Binarizer() - Constructor for class org.apache.spark.ml.feature.Binarizer
 
Binary() - Static method in class org.apache.spark.ml.attribute.AttributeType
Binary type.
binary() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type binary.
BinaryAttribute - Class in org.apache.spark.ml.attribute
:: DeveloperApi :: A binary attribute.
BinaryClassificationEvaluator - Class in org.apache.spark.ml.evaluation
:: Experimental :: Evaluator for binary classification, which expects two input columns: score and label.
BinaryClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
BinaryClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
BinaryClassificationMetrics - Class in org.apache.spark.mllib.evaluation
:: Experimental :: Evaluator for binary classification.
BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>, int) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
 
BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Defaults numBins to 0.
binaryFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
binaryFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
:: Experimental ::
binaryFiles(String, int) - Method in class org.apache.spark.SparkContext
:: Experimental ::
binaryLabelValidator() - Static method in class org.apache.spark.mllib.util.DataValidators
Function to check if labels used for classification are either zero or one.
binaryRecords(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
:: Experimental ::
binaryRecords(String, int, Configuration) - Method in class org.apache.spark.SparkContext
:: Experimental ::
binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
:: Experimental ::
binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.StreamingContext
:: Experimental ::
binarySearchForBuckets(double[], double) - Static method in class org.apache.spark.ml.feature.Bucketizer
Uses binary search over the bucket splits to place each data point into its bucket.
BinaryType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing Array[Byte] values.
BinaryType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the BinaryType object.
bitwiseAND(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise AND of this expression with another expression.
bitwiseNOT(Column) - Static method in class org.apache.spark.sql.functions
Computes bitwise NOT.
bitwiseOR(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise OR of this expression with another expression.
bitwiseXOR(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise XOR of this expression with another expression.
BlockId - Class in org.apache.spark.storage
:: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file.
BlockId() - Constructor for class org.apache.spark.storage.BlockId
 
blockManager() - Method in class org.apache.spark.SparkEnv
 
blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
BlockManagerId - Class in org.apache.spark.storage
:: DeveloperApi :: This class represents a unique identifier for a BlockManager.
blockManagerId() - Method in class org.apache.spark.storage.StorageStatus
 
blockManagerIdCache() - Static method in class org.apache.spark.storage.BlockManagerId
 
blockManagerIds() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
BlockMatrix - Class in org.apache.spark.mllib.linalg.distributed
:: Experimental ::
BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Alternate constructor for BlockMatrix that does not require the number of rows and columns.
blockName() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
 
BlockNotFoundException - Exception in org.apache.spark.storage
 
BlockNotFoundException(String) - Constructor for exception org.apache.spark.storage.BlockNotFoundException
 
blocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
blocks() - Method in class org.apache.spark.storage.StorageStatus
Return the blocks stored in this block manager.
BlockStatus - Class in org.apache.spark.storage
 
BlockStatus(StorageLevel, long, long, long) - Constructor for class org.apache.spark.storage.BlockStatus
 
blockTransferService() - Method in class org.apache.spark.SparkEnv
 
bmAddress() - Method in class org.apache.spark.FetchFailed
 
BooleanParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Boolean] for Java.
BooleanParam(String, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
 
BooleanParam(org.apache.spark.ml.util.Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
 
BooleanType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing Boolean values.
BooleanType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the BooleanType object.
booleanWritableConverter() - Static method in class org.apache.spark.SparkContext
 
boolToBoolWritable(boolean) - Static method in class org.apache.spark.SparkContext
 
BoostingStrategy - Class in org.apache.spark.mllib.tree.configuration
:: Experimental :: Configuration options for GradientBoostedTrees.
BoostingStrategy(Strategy, Loss, int, double, double) - Constructor for class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
Both() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges originating from *and* arriving at a vertex of interest.
boundaries() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
 
BoundedDouble - Class in org.apache.spark.partial
:: Experimental :: A Double value with error bars and associated confidence.
BoundedDouble(double, double, double, double) - Constructor for class org.apache.spark.partial.BoundedDouble
 
broadcast(T) - Method in class org.apache.spark.api.java.JavaSparkContext
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
Broadcast<T> - Class in org.apache.spark.broadcast
A broadcast variable.
Broadcast(long, ClassTag<T>) - Constructor for class org.apache.spark.broadcast.Broadcast
 
broadcast(T, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
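A sketch; the lookup table is illustrative and sc is an existing SparkContext:
    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    sc.parallelize(Seq("a", "b", "c"))
      .map(key => lookup.value.getOrElse(key, 0))   // read the broadcast value inside tasks
      .collect()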
BROADCAST() - Static method in class org.apache.spark.storage.BlockId
 
BroadcastBlockId - Class in org.apache.spark.storage
 
BroadcastBlockId(long, String) - Constructor for class org.apache.spark.storage.BroadcastBlockId
 
BroadcastFactory - Interface in org.apache.spark.broadcast
:: DeveloperApi :: An interface for all the broadcast implementations in Spark (to allow multiple broadcast implementations).
broadcastId() - Method in class org.apache.spark.CleanBroadcast
 
broadcastId() - Method in class org.apache.spark.storage.BroadcastBlockId
 
broadcastManager() - Method in class org.apache.spark.SparkEnv
 
Broker - Class in org.apache.spark.streaming.kafka
:: Experimental :: Represents the host and port info for a Kafka broker.
brokerAddress() - Method in class org.apache.spark.streaming.kafka.KafkaTestUtils
 
Bucketizer - Class in org.apache.spark.ml.feature
:: Experimental :: Bucketizer maps a column of continuous features to a column of feature buckets.
Bucketizer(String) - Constructor for class org.apache.spark.ml.feature.Bucketizer
 
Bucketizer() - Constructor for class org.apache.spark.ml.feature.Bucketizer
 
build() - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Builds and returns all combinations of parameters specified by the param grid.
build(Node[]) - Method in class org.apache.spark.mllib.tree.model.Node
Builds the left and right child nodes if this node is not a leaf.
build() - Method in class org.apache.spark.sql.types.MetadataBuilder
Builds the Metadata instance.
buildScan(Seq<Attribute>, Seq<Expression>) - Method in interface org.apache.spark.sql.sources.CatalystScan
 
buildScan(FileStatus[]) - Method in class org.apache.spark.sql.sources.HadoopFsRelation
For a non-partitioned relation, this method builds an RDD[Row] containing all rows within this relation.
buildScan(String[], FileStatus[]) - Method in class org.apache.spark.sql.sources.HadoopFsRelation
For a non-partitioned relation, this method builds an RDD[Row] containing all rows within this relation.
buildScan(String[], Filter[], FileStatus[]) - Method in class org.apache.spark.sql.sources.HadoopFsRelation
For a non-partitioned relation, this method builds an RDD[Row] containing all rows within this relation.
buildScan(String[], Filter[]) - Method in interface org.apache.spark.sql.sources.PrunedFilteredScan
 
buildScan(String[]) - Method in interface org.apache.spark.sql.sources.PrunedScan
 
buildScan() - Method in interface org.apache.spark.sql.sources.TableScan
 
bytesOfCodePointInUTF8() - Static method in class org.apache.spark.sql.types.UTF8String
 
bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
 
bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
 
bytesToBytesWritable(byte[]) - Static method in class org.apache.spark.SparkContext
 
bytesWritableConverter() - Static method in class org.apache.spark.SparkContext
 
bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
 
bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
 
bytesWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
 
ByteType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing Byte values.
ByteType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the ByteType object.

C

cache() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Persist this RDD with the default storage level (`MEMORY_ONLY`).
cache() - Method in class org.apache.spark.api.java.JavaPairRDD
Persist this RDD with the default storage level (`MEMORY_ONLY`).
cache() - Method in class org.apache.spark.api.java.JavaRDD
Persist this RDD with the default storage level (`MEMORY_ONLY`).
cache() - Method in class org.apache.spark.graphx.Graph
Caches the vertices and edges associated with this graph at the previously-specified target storage levels, which default to MEMORY_ONLY.
cache() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
Persists the edge partitions using `targetStorageLevel`, which defaults to MEMORY_ONLY.
cache() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
cache() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
Persists the vertex partitions at `targetStorageLevel`, which defaults to MEMORY_ONLY.
cache() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Caches the underlying RDD.
cache() - Method in class org.apache.spark.rdd.RDD
Persist this RDD with the default storage level (`MEMORY_ONLY`).
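A sketch (the HDFS path is illustrative; sc is an existing SparkContext):
    val lines = sc.textFile("hdfs:///logs/app.log").cache()   // marks the RDD as MEMORY_ONLY
    lines.filter(_.contains("ERROR")).count()                 // first action materializes the cache
    lines.filter(_.contains("WARN")).count()                  // later actions read from memory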
cache() - Method in class org.apache.spark.sql.DataFrame
 
cache() - Method in class org.apache.spark.streaming.api.java.JavaDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
cache() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
cache() - Method in class org.apache.spark.streaming.dstream.DStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
cacheManager() - Method in class org.apache.spark.SparkEnv
 
cacheTable(String) - Method in class org.apache.spark.sql.SQLContext
Caches the specified table in-memory.
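A sketch, assuming an existing SQLContext named sqlContext and a DataFrame df with a name column (registerTempTable is the 1.x table-registration API):
    df.registerTempTable("people")
    sqlContext.cacheTable("people")                      // columnar in-memory cache
    sqlContext.sql("SELECT name FROM people").collect()  // served from the cached table
    sqlContext.uncacheTable("people")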
calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.classification.LogisticCostFun
 
calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.regression.LeastSquaresCostFun
 
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
:: DeveloperApi :: information calculation for multiclass classification
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
:: DeveloperApi :: variance calculation
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
:: DeveloperApi :: information calculation for multiclass classification
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
:: DeveloperApi :: variance calculation
calculate(double[], double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
:: DeveloperApi :: information calculation for multiclass classification
calculate(double, double, double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
:: DeveloperApi :: information calculation for regression
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
:: DeveloperApi :: information calculation for multiclass classification
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
:: DeveloperApi :: variance calculation
call(T) - Method in interface org.apache.spark.api.java.function.DoubleFlatMapFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.DoubleFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.FlatMapFunction
 
call(T1, T2) - Method in interface org.apache.spark.api.java.function.FlatMapFunction2
 
call(T1) - Method in interface org.apache.spark.api.java.function.Function
 
call() - Method in interface org.apache.spark.api.java.function.Function0
 
call(T1, T2) - Method in interface org.apache.spark.api.java.function.Function2
 
call(T1, T2, T3) - Method in interface org.apache.spark.api.java.function.Function3
 
call(T) - Method in interface org.apache.spark.api.java.function.PairFlatMapFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.PairFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.VoidFunction
 
call(T1) - Method in interface org.apache.spark.sql.api.java.UDF1
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) - Method in interface org.apache.spark.sql.api.java.UDF10
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11) - Method in interface org.apache.spark.sql.api.java.UDF11
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12) - Method in interface org.apache.spark.sql.api.java.UDF12
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13) - Method in interface org.apache.spark.sql.api.java.UDF13
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14) - Method in interface org.apache.spark.sql.api.java.UDF14
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Method in interface org.apache.spark.sql.api.java.UDF15
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16) - Method in interface org.apache.spark.sql.api.java.UDF16
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17) - Method in interface org.apache.spark.sql.api.java.UDF17
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18) - Method in interface org.apache.spark.sql.api.java.UDF18
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19) - Method in interface org.apache.spark.sql.api.java.UDF19
 
call(T1, T2) - Method in interface org.apache.spark.sql.api.java.UDF2
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20) - Method in interface org.apache.spark.sql.api.java.UDF20
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21) - Method in interface org.apache.spark.sql.api.java.UDF21
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22) - Method in interface org.apache.spark.sql.api.java.UDF22
 
call(T1, T2, T3) - Method in interface org.apache.spark.sql.api.java.UDF3
 
call(T1, T2, T3, T4) - Method in interface org.apache.spark.sql.api.java.UDF4
 
call(T1, T2, T3, T4, T5) - Method in interface org.apache.spark.sql.api.java.UDF5
 
call(T1, T2, T3, T4, T5, T6) - Method in interface org.apache.spark.sql.api.java.UDF6
 
call(T1, T2, T3, T4, T5, T6, T7) - Method in interface org.apache.spark.sql.api.java.UDF7
 
call(T1, T2, T3, T4, T5, T6, T7, T8) - Method in interface org.apache.spark.sql.api.java.UDF8
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9) - Method in interface org.apache.spark.sql.api.java.UDF9
 
callUDF(Function0<?>, DataType) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 0 arguments as a user-defined function (UDF).
callUDF(Function1<?, ?>, DataType, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 1 argument as a user-defined function (UDF).
callUDF(Function2<?, ?, ?>, DataType, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 2 arguments as a user-defined function (UDF).
callUDF(Function3<?, ?, ?, ?>, DataType, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 3 arguments as a user-defined function (UDF).
callUDF(Function4<?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 4 arguments as a user-defined function (UDF).
callUDF(Function5<?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 5 arguments as a user-defined function (UDF).
callUDF(Function6<?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 6 arguments as a user-defined function (UDF).
callUDF(Function7<?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 7 arguments as a user-defined function (UDF).
callUDF(Function8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 8 arguments as a user-defined function (UDF).
callUDF(Function9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 9 arguments as a user-defined function (UDF).
callUDF(Function10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
Call a Scala function of 10 arguments as a user-defined function (UDF).
callUdf(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Call a user-defined function.
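A minimal sketch of calling a registered UDF by name, assuming an existing SQLContext sqlContext and DataFrame df; the UDF and column names are hypothetical:

    import org.apache.spark.sql.functions.callUdf
    // Sketch only: "strLen" and the "name" column are assumptions.
    sqlContext.udf.register("strLen", (s: String) => s.length)
    df.select(callUdf("strLen", df("name"))).show()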
cancel() - Method in class org.apache.spark.ComplexFutureAction
 
cancel() - Method in interface org.apache.spark.FutureAction
Cancels the execution of this action.
cancel() - Method in class org.apache.spark.SimpleFutureAction
 
cancelAllJobs() - Method in class org.apache.spark.api.java.JavaSparkContext
Cancel all jobs that have been scheduled or are running.
cancelAllJobs() - Method in class org.apache.spark.SparkContext
Cancel all jobs that have been scheduled or are running.
cancelJobGroup(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Cancel active jobs for the specified group.
cancelJobGroup(String) - Method in class org.apache.spark.SparkContext
Cancel active jobs for the specified group.
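A sketch of grouping and cancelling jobs, assuming an existing SparkContext sc; the group id and description are hypothetical:

    // Sketch only: the group id is an assumption.
    sc.setJobGroup("nightly-etl", "nightly ETL jobs", interruptOnCancel = true)
    // ... actions launched on this thread now belong to the group ...
    sc.cancelJobGroup("nightly-etl")   // cancel the group's active jobs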
canEqual(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
canEqual(Object) - Method in class org.apache.spark.util.MutablePair
 
canHandle(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
 
canHandle(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Check if this dialect instance can handle a certain jdbc url.
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
cartesian(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
cartesian(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
cast(DataType) - Method in class org.apache.spark.sql.Column
Casts the column to a different data type.
cast(String) - Method in class org.apache.spark.sql.Column
Casts the column to a different data type, using the canonical string representation of the type.
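A sketch of both cast overloads, assuming a DataFrame df with a string column named "age" (hypothetical):

    import org.apache.spark.sql.types.IntegerType
    // Sketch only: `df` and the "age" column are assumptions.
    df.select(df("age").cast(IntegerType))   // cast using a DataType
    df.select(df("age").cast("int"))         // cast using the canonical type string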
CatalystScan - Interface in org.apache.spark.sql.sources
::Experimental:: An interface for experimenting with a more direct connection to the query planner.
Categorical() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
categoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
CategoricalSplit - Class in org.apache.spark.ml.tree
:: DeveloperApi :: Split which tests a categorical feature.
categories() - Method in class org.apache.spark.mllib.tree.model.Split
 
categoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
cbrt(Column) - Static method in class org.apache.spark.sql.functions
Computes the cube-root of the given value.
cbrt(String) - Static method in class org.apache.spark.sql.functions
Computes the cube-root of the given column.
ceil(Column) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given value.
ceil(String) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given column.
changePrecision(int, int) - Method in class org.apache.spark.sql.types.Decimal
Update precision and scale while keeping our value the same, and return true if successful.
checkpoint() - Method in interface org.apache.spark.api.java.JavaRDDLike
Mark this RDD for checkpointing.
checkpoint() - Method in class org.apache.spark.graphx.Graph
Mark this Graph for checkpointing.
checkpoint() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
checkpoint() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
checkpoint() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
checkpoint() - Method in class org.apache.spark.rdd.HadoopRDD
 
checkpoint() - Method in class org.apache.spark.rdd.RDD
Mark this RDD for checkpointing.
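A minimal sketch of RDD checkpointing, assuming an existing SparkContext sc; the checkpoint directory is hypothetical:

    // Sketch only: the directory is an assumption.
    sc.setCheckpointDir("hdfs:///tmp/checkpoints")
    val rdd = sc.parallelize(1 to 1000)
    rdd.checkpoint()   // mark for checkpointing; data is written when an action runs
    rdd.count()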
checkpoint(Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Enable periodic checkpointing of RDDs of this DStream.
checkpoint(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Sets the context to periodically checkpoint the DStream operations for master fault-tolerance.
checkpoint(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Enable periodic checkpointing of RDDs of this DStream
checkpoint(String) - Method in class org.apache.spark.streaming.StreamingContext
Set the context to periodically checkpoint the DStream operations for driver fault-tolerance.
checkpointData() - Method in class org.apache.spark.rdd.RDD
 
checkpointData() - Method in class org.apache.spark.streaming.dstream.DStream
 
checkpointDir() - Method in class org.apache.spark.SparkContext
 
checkpointDir() - Method in class org.apache.spark.streaming.StreamingContext
 
checkpointDuration() - Method in class org.apache.spark.streaming.dstream.DStream
 
checkpointDuration() - Method in class org.apache.spark.streaming.StreamingContext
 
checkpointInterval() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
 
checkpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
checkSplits(double[]) - Static method in class org.apache.spark.ml.feature.Bucketizer
We require splits to be of length >= 3 and to be in strictly increasing order.
child() - Method in class org.apache.spark.sql.sources.Not
 
ChiSqSelector - Class in org.apache.spark.mllib.feature
:: Experimental :: Creates a ChiSquared feature selector.
ChiSqSelector(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
 
ChiSqSelectorModel - Class in org.apache.spark.mllib.feature
:: Experimental :: Chi Squared selector model.
ChiSqSelectorModel(int[]) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel
 
chiSqTest(Vector, Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's chi-squared goodness of fit test of the observed data against the expected distribution.
chiSqTest(Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's chi-squared goodness of fit test of the observed data against the uniform distribution, with each category having an expected frequency of 1 / observed.size.
chiSqTest(Matrix) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's independence test on the input contingency matrix, which cannot contain negative entries or columns or rows that sum up to 0.
chiSqTest(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's independence test for every feature against the label across the input RDD.
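A minimal sketch of the goodness-of-fit variant of chiSqTest, using a small hand-built frequency vector (the values are made up):

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.stat.Statistics
    // Sketch only: the observed frequencies are an assumption.
    val observed = Vectors.dense(0.1, 0.15, 0.2, 0.3, 0.25)
    val result = Statistics.chiSqTest(observed)   // tests against the uniform distribution
    println(result.pValue)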
ChiSqTestResult - Class in org.apache.spark.mllib.stat.test
:: Experimental :: Object containing the test results for the chi-squared hypothesis test.
Classification() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
 
ClassificationModel<FeaturesType,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
:: DeveloperApi ::
ClassificationModel() - Constructor for class org.apache.spark.ml.classification.ClassificationModel
 
ClassificationModel - Interface in org.apache.spark.mllib.classification
:: Experimental :: Represents a classification model that predicts to which of a set of categories an example belongs.
Classifier<FeaturesType,E extends Classifier<FeaturesType,E,M>,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
:: DeveloperApi ::
Classifier() - Constructor for class org.apache.spark.ml.classification.Classifier
 
className() - Method in class org.apache.spark.ExceptionFailure
 
classpathEntries() - Method in class org.apache.spark.ui.env.EnvironmentListener
 
classTag() - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
classTag() - Method in class org.apache.spark.api.java.JavaPairRDD
 
classTag() - Method in class org.apache.spark.api.java.JavaRDD
 
classTag() - Method in interface org.apache.spark.api.java.JavaRDDLike
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaDStream
 
classTag() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
 
clean(long, boolean) - Method in class org.apache.spark.streaming.util.WriteAheadLog
Clean all the records that are older than the threshold time.
CleanAccum - Class in org.apache.spark
 
CleanAccum(long) - Constructor for class org.apache.spark.CleanAccum
 
CleanBroadcast - Class in org.apache.spark
 
CleanBroadcast(long) - Constructor for class org.apache.spark.CleanBroadcast
 
CleanCheckpoint - Class in org.apache.spark
 
CleanCheckpoint(int) - Constructor for class org.apache.spark.CleanCheckpoint
 
CleanRDD - Class in org.apache.spark
 
CleanRDD(int) - Constructor for class org.apache.spark.CleanRDD
 
CleanShuffle - Class in org.apache.spark
 
CleanShuffle(int) - Constructor for class org.apache.spark.CleanShuffle
 
CleanupTask - Interface in org.apache.spark
Classes that represent cleaning tasks.
CleanupTaskWeakReference - Class in org.apache.spark
A WeakReference associated with a CleanupTask.
CleanupTaskWeakReference(CleanupTask, Object, ReferenceQueue<Object>) - Constructor for class org.apache.spark.CleanupTaskWeakReference
 
clear(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Clears the user-supplied value for the input param.
clearCache() - Method in class org.apache.spark.sql.SQLContext
Removes all cached tables from the in-memory cache.
clearCallSite() - Method in class org.apache.spark.api.java.JavaSparkContext
Pass-through to SparkContext.clearCallSite.
clearCallSite() - Method in class org.apache.spark.SparkContext
Clear the thread-local property for overriding the call sites of actions and RDDs.
clearDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
 
clearDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
 
clearDependencies() - Method in class org.apache.spark.rdd.UnionRDD
 
clearFiles() - Method in class org.apache.spark.api.java.JavaSparkContext
Clear the job's list of files added by addFile so that they do not get downloaded to any new nodes.
clearFiles() - Method in class org.apache.spark.SparkContext
Clear the job's list of files added by addFile so that they do not get downloaded to any new nodes.
clearJars() - Method in class org.apache.spark.api.java.JavaSparkContext
Clear the job's list of JARs added by addJar so that they do not get downloaded to any new nodes.
clearJars() - Method in class org.apache.spark.SparkContext
Clear the job's list of JARs added by addJar so that they do not get downloaded to any new nodes.
clearJobGroup() - Method in class org.apache.spark.api.java.JavaSparkContext
Clear the current thread's job group ID and its description.
clearJobGroup() - Method in class org.apache.spark.SparkContext
Clear the current thread's job group ID and its description.
clearThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
:: Experimental :: Clears the threshold so that predict will output raw prediction scores.
clearThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
:: Experimental :: Clears the threshold so that predict will output raw prediction scores.
clone() - Method in class org.apache.spark.SparkConf
Copy this object
clone() - Method in class org.apache.spark.sql.types.Decimal
 
clone() - Method in class org.apache.spark.sql.types.UTF8String
 
clone() - Method in class org.apache.spark.storage.StorageLevel
 
clone() - Method in class org.apache.spark.util.random.BernoulliCellSampler
 
clone() - Method in class org.apache.spark.util.random.BernoulliSampler
 
clone() - Method in class org.apache.spark.util.random.PoissonSampler
 
clone() - Method in interface org.apache.spark.util.random.RandomSampler
Return a copy of the RandomSampler object.
cloneComplement() - Method in class org.apache.spark.util.random.BernoulliCellSampler
Return a sampler that is the complement of the range specified of the current sampler.
close() - Method in class org.apache.spark.api.java.JavaSparkContext
 
close() - Method in class org.apache.spark.input.PortableDataStream
Close the file (if it is currently open)
close() - Method in class org.apache.spark.io.SnappyOutputStreamWrapper
 
close() - Method in class org.apache.spark.serializer.DeserializationStream
 
close() - Method in class org.apache.spark.serializer.SerializationStream
 
close() - Method in class org.apache.spark.sql.sources.OutputWriter
Closes the OutputWriter.
close() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
 
close() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
 
close() - Method in class org.apache.spark.streaming.util.WriteAheadLog
Close this log and release any resources.
closureSerializer() - Method in class org.apache.spark.SparkEnv
 
cls() - Method in class org.apache.spark.util.MethodIdentifier
 
cluster() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
 
clusterCenters() - Method in class org.apache.spark.mllib.clustering.KMeansModel
 
clusterCenters() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
 
clusterWeights() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
 
cn() - Method in class org.apache.spark.mllib.feature.VocabWord
 
coalesce(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD that is reduced into numPartitions partitions.
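A sketch of both directions of coalesce, assuming an existing RDD rdd:

    // Sketch only: `rdd` is an assumption.
    val narrowed = rdd.coalesce(2)                        // shrink to at most 2 partitions, no shuffle
    val reshuffled = rdd.coalesce(100, shuffle = true)    // allow a shuffle, e.g. to increase partitions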
coalesce(int) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame that has exactly numPartitions partitions.
coalesce(Column...) - Static method in class org.apache.spark.sql.functions
Returns the first column that is not null.
coalesce(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns the first column that is not null.
code() - Method in class org.apache.spark.mllib.feature.VocabWord
 
codeLen() - Method in class org.apache.spark.mllib.feature.VocabWord
 
cogroup(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
 
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
 
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
 
cogroup(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
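A minimal sketch of cogroup on two pair RDDs, assuming an existing SparkContext sc; the data is made up:

    // Sketch only: the data is an assumption.
    val sales   = sc.parallelize(Seq(("a", 1), ("b", 2)))
    val returns = sc.parallelize(Seq(("a", 9)))
    val grouped = sales.cogroup(returns)   // RDD[(String, (Iterable[Int], Iterable[Int]))]
    grouped.collect().foreach(println)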
cogroup(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
CoGroupedRDD<K> - Class in org.apache.spark.rdd
:: DeveloperApi :: A RDD that cogroups its parents.
CoGroupedRDD(Seq<RDD<? extends Product2<K, ?>>>, Partitioner) - Constructor for class org.apache.spark.rdd.CoGroupedRDD
 
col(String) - Method in class org.apache.spark.sql.DataFrame
Selects a column based on the column name and returns it as a Column.
col(String) - Static method in class org.apache.spark.sql.functions
Returns a Column based on the given column name.
collect() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an array that contains all of the elements in this RDD.
collect() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
collect() - Method in class org.apache.spark.rdd.RDD
Return an array that contains all of the elements in this RDD.
collect(PartialFunction<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return an RDD that contains all matching values by applying f.
collect() - Method in class org.apache.spark.sql.DataFrame
Returns an array that contains all of the Rows in this DataFrame.
collectAsList() - Method in class org.apache.spark.sql.DataFrame
Returns a Java list that contains all of the Rows in this DataFrame.
collectAsMap() - Method in class org.apache.spark.api.java.JavaPairRDD
Return the key-value pairs in this RDD to the master as a Map.
collectAsMap() - Method in class org.apache.spark.rdd.PairRDDFunctions
Return the key-value pairs in this RDD to the master as a Map.
collectAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of collect, which returns a future for retrieving an array containing all of the elements in this RDD.
collectAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
Returns a future for retrieving all elements of this RDD.
collectEdges(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Returns an RDD that contains for each vertex v its local edges, i.e., the edges that are incident on v, in the user-specified direction.
collectNeighborIds(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Collect the neighbor vertex ids for each vertex.
collectNeighbors(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Collect the neighbor vertex attributes for each vertex.
collectPartitions(int[]) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an array that contains all of the elements in a specific partition of this RDD.
colPtrs() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
colsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
colStats(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
Computes column-wise summary statistics for the input RDD[Vector].
Column - Class in org.apache.spark.sql
:: Experimental :: A column in a DataFrame.
Column(Expression) - Constructor for class org.apache.spark.sql.Column
 
Column(String) - Constructor for class org.apache.spark.sql.Column
 
column(String) - Static method in class org.apache.spark.sql.functions
Returns a Column based on the given column name.
ColumnName - Class in org.apache.spark.sql
:: Experimental :: A convenient class used for constructing a schema.
ColumnName(String) - Constructor for class org.apache.spark.sql.ColumnName
 
columns() - Method in class org.apache.spark.sql.DataFrame
Returns all column names as an array.
columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
columnSimilarities(double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute similarities between columns of this matrix using a sampling approach.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.api.java.JavaPairRDD
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.api.java.JavaPairRDD
Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level and using map-side aggregation.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.rdd.PairRDDFunctions
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKey that hash-partitions the output RDD.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
 
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Combine elements of each key in DStream's RDDs using custom function.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Combine elements of each key in DStream's RDDs using custom function.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, ClassTag<C>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Combine elements of each key in DStream's RDDs using custom functions.
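As a sketch of the RDD overloads above, a per-key average built from a custom combiner, assuming an existing SparkContext sc; the data is made up:

    // Sketch only: the data is an assumption.
    val scores = sc.parallelize(Seq(("a", 1.0), ("a", 3.0), ("b", 4.0)))
    val sumCount = scores.combineByKey(
      (v: Double) => (v, 1),                                              // createCombiner
      (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1),        // mergeValue
      (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2)) // mergeCombiners
    val averages = sumCount.mapValues { case (sum, n) => sum / n }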
combineCombinersByKey(Iterator<Product2<K, C>>) - Method in class org.apache.spark.Aggregator
 
combineCombinersByKey(Iterator<Product2<K, C>>, TaskContext) - Method in class org.apache.spark.Aggregator
 
combineValuesByKey(Iterator<Product2<K, V>>) - Method in class org.apache.spark.Aggregator
 
combineValuesByKey(Iterator<Product2<K, V>>, TaskContext) - Method in class org.apache.spark.Aggregator
 
compare(PartitionGroup, PartitionGroup) - Method in class org.apache.spark.rdd.PartitionCoalescer
 
compare(Option<PartitionGroup>, Option<PartitionGroup>) - Method in class org.apache.spark.rdd.PartitionCoalescer
 
compare(Decimal) - Method in class org.apache.spark.sql.types.Decimal
 
compare(UTF8String) - Method in class org.apache.spark.sql.types.UTF8String
 
compare(RDDInfo) - Method in class org.apache.spark.storage.RDDInfo
 
compareTo(UTF8String) - Method in class org.apache.spark.sql.types.UTF8String
 
compareTo(SparkShutdownHook) - Method in class org.apache.spark.util.SparkShutdownHook
 
completed() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
completedJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
completedStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
completedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
completionTime() - Method in class org.apache.spark.scheduler.StageInfo
Time when all tasks in the stage completed or when the stage was cancelled.
completionTime() - Method in class org.apache.spark.status.api.v1.JobData
 
ComplexFutureAction<T> - Class in org.apache.spark
A FutureAction for actions that could trigger multiple Spark jobs.
ComplexFutureAction() - Constructor for class org.apache.spark.ComplexFutureAction
 
compressed() - Method in interface org.apache.spark.mllib.linalg.Vector
Returns a vector in either dense or sparse format, whichever uses less storage.
compressedInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
 
compressedOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
 
CompressionCodec - Interface in org.apache.spark.io
:: DeveloperApi :: CompressionCodec allows the customization of choosing different compression implementations to be used in block storage.
compute(Partition, TaskContext) - Method in class org.apache.spark.api.r.BaseRRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.EdgeRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.VertexRDD
Provides the RDD[(VertexId, VD)] equivalent output.
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
Compute the gradient and loss given the features of a single data point.
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
Compute the gradient and loss given the features of a single data point, add the gradient to a provided vector to avoid creating new objects, and return loss.
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.L1Updater
 
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
 
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.LogisticGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LogisticGradient
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SimpleUpdater
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SquaredL2Updater
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.Updater
Compute an updated value for weights given the gradient, stepSize, iteration number and regularization parameter.
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.CoGroupedRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.HadoopRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.JdbcRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.NewHadoopRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.PartitionPruningRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.ShuffledRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.UnionRDD
 
compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Generate an RDD for the given duration
compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Method that generates an RDD for the given Duration.
compute(Time) - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
 
compute(Time) - Method in class org.apache.spark.streaming.dstream.DStream
Method that generates an RDD for the given time.
compute(Time) - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
 
computeColumnSummaryStatistics() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes column-wise summary statistics.
computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
Return the K-means cost (sum of squared distances of points to their nearest center) for this model on the given data.
computeCovariance() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the covariance matrix, treating each row as an observation.
computeError(org.apache.spark.mllib.tree.model.TreeEnsembleModel, RDD<LabeledPoint>) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate error of the base learner for the gradient boosting calculation.
computeError(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate loss when the predictions are already known.
computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the Gramian matrix A^T A.
computeInitialPredictionAndError(RDD<LabeledPoint>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
computePreferredLocations(Seq<InputFormatInfo>) - Static method in class org.apache.spark.scheduler.InputFormatInfo
Computes the preferred locations based on input(s) and returns a location-to-block map.
computePrincipalComponents(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the top k principal components.
computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes singular value decomposition of this matrix.
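A minimal sketch of computing a truncated SVD of a RowMatrix, assuming an existing SparkContext sc; the rows are made up:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix
    // Sketch only: the rows are an assumption.
    val rows = sc.parallelize(Seq(Vectors.dense(1.0, 2.0, 3.0), Vectors.dense(4.0, 5.0, 6.0)))
    val mat = new RowMatrix(rows)
    val svd = mat.computeSVD(2, computeU = true)
    println(svd.s)   // the singular values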
conf() - Method in class org.apache.spark.SparkEnv
 
conf() - Method in class org.apache.spark.streaming.StreamingContext
 
confidence() - Method in class org.apache.spark.partial.BoundedDouble
 
configuration() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.HadoopRDD
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456).
confusionMatrix() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the confusion matrix: predicted classes are in columns, ordered by ascending class label, as in "labels".
connectedComponents() - Method in class org.apache.spark.graphx.GraphOps
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
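A minimal sketch, assuming an existing Graph[VD, ED] named graph:

    // Sketch only: `graph` is an assumption.
    val components = graph.connectedComponents().vertices   // (vertexId, lowest vertex id in its component)
    components.take(5).foreach(println)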
ConnectedComponents - Class in org.apache.spark.graphx.lib
Connected components algorithm.
ConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.ConnectedComponents
 
ConstantInputDStream<T> - Class in org.apache.spark.streaming.dstream
An input stream that always returns the same RDD on each timestep.
ConstantInputDStream(StreamingContext, RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ConstantInputDStream
 
contains(Param<?>) - Method in class org.apache.spark.ml.param.ParamMap
Checks whether a parameter is explicitly specified.
contains(String) - Method in class org.apache.spark.SparkConf
Does the configuration contain a given parameter?
contains(Object) - Method in class org.apache.spark.sql.Column
Contains the other element.
contains(String) - Method in class org.apache.spark.sql.types.Metadata
Tests whether this Metadata contains a binding for a key.
contains(UTF8String) - Method in class org.apache.spark.sql.types.UTF8String
 
containsBlock(BlockId) - Method in class org.apache.spark.storage.StorageStatus
Return whether the given block is stored in this block manager in O(1) time.
containsCachedMetadata(String) - Static method in class org.apache.spark.rdd.HadoopRDD
 
containsNull() - Method in class org.apache.spark.sql.types.ArrayType
 
context() - Method in interface org.apache.spark.api.java.JavaRDDLike
The SparkContext that this RDD was created on.
context() - Method in class org.apache.spark.InterruptibleIterator
 
context() - Method in class org.apache.spark.rdd.RDD
The SparkContext that this RDD was created on.
context() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return the StreamingContext associated with this DStream
context() - Method in class org.apache.spark.streaming.dstream.DStream
Return the StreamingContext associated with this DStream
Continuous() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
ContinuousSplit - Class in org.apache.spark.ml.tree
:: DeveloperApi :: Split which tests a continuous feature.
convertToCanonicalEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.GraphOps
Convert bi-directional edges into uni-directional ones.
CoordinateMatrix - Class in org.apache.spark.mllib.linalg.distributed
 
CoordinateMatrix(RDD<MatrixEntry>, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
 
CoordinateMatrix(RDD<MatrixEntry>) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.Estimator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.Model
 
copy() - Method in class org.apache.spark.ml.param.ParamMap
Creates a copy of this param map.
copy(ParamMap) - Method in interface org.apache.spark.ml.param.Params
Creates a copy of this instance with the same UID and some extra params.
copy(ParamMap) - Method in class org.apache.spark.ml.Pipeline
 
copy(ParamMap) - Method in class org.apache.spark.ml.PipelineModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.PipelineStage
 
copy(ParamMap) - Method in class org.apache.spark.ml.Predictor
 
copy(ParamMap) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.Transformer
 
copy(ParamMap) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
copy() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
copy() - Method in class org.apache.spark.mllib.linalg.DenseVector
 
copy() - Method in interface org.apache.spark.mllib.linalg.Matrix
Get a deep copy of the matrix.
copy() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
copy() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
copy() - Method in interface org.apache.spark.mllib.linalg.Vector
Makes a deep copy of this vector.
copy() - Method in class org.apache.spark.mllib.random.ExponentialGenerator
 
copy() - Method in class org.apache.spark.mllib.random.GammaGenerator
 
copy() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
 
copy() - Method in class org.apache.spark.mllib.random.PoissonGenerator
 
copy() - Method in interface org.apache.spark.mllib.random.RandomDataGenerator
Returns a copy of the RandomDataGenerator with a new instance of the underlying RNG object, where applicable, to allow lock-free concurrent usage.
copy() - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
 
copy() - Method in class org.apache.spark.mllib.random.UniformGenerator
 
copy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
Returns a shallow copy of this instance.
copy() - Method in interface org.apache.spark.sql.Row
Make a copy of the current Row object.
copy() - Method in class org.apache.spark.util.StatCounter
Clone this StatCounter
copyValues(T, ParamMap) - Method in interface org.apache.spark.ml.param.Params
Copies param values from this instance to another instance for params shared by them.
corr(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the Pearson correlation matrix for the input RDD of Vectors.
corr(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the correlation matrix for the input RDD of Vectors using the specified method.
corr(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the Pearson correlation for the input RDDs.
corr(JavaRDD<Double>, JavaRDD<Double>) - Static method in class org.apache.spark.mllib.stat.Statistics
Java-friendly version of corr()
corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
Compute the correlation for the input RDDs using the specified method.
corr(JavaRDD<Double>, JavaRDD<Double>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
Java-friendly version of corr()
corr(String, String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the correlation of two columns of a DataFrame.
corr(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the Pearson Correlation Coefficient of two columns of a DataFrame.
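A sketch of the DataFrame variant, assuming a DataFrame df with two numeric columns whose names are hypothetical:

    // Sketch only: `df` and the column names are assumptions.
    val r  = df.stat.corr("height", "weight")             // Pearson by default
    val r2 = df.stat.corr("height", "weight", "pearson")  // method named explicitly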
cos(Column) - Static method in class org.apache.spark.sql.functions
Computes the cosine of the given value.
cos(String) - Static method in class org.apache.spark.sql.functions
Computes the cosine of the given column.
cosh(Column) - Static method in class org.apache.spark.sql.functions
Computes the hyperbolic cosine of the given value.
cosh(String) - Static method in class org.apache.spark.sql.functions
Computes the hyperbolic cosine of the given column.
count() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the number of elements in the RDD.
count() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
The number of edges in the RDD.
count() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
The number of vertices in the RDD.
count() - Method in class org.apache.spark.ml.classification.LogisticAggregator
 
count() - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
 
count() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
 
count() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
Sample size.
count() - Method in class org.apache.spark.rdd.RDD
Return the number of elements in the RDD.
count() - Method in class org.apache.spark.sql.DataFrame
Returns the number of rows in the DataFrame.
count(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of items in a group.
count(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of items in a group.
count() - Method in class org.apache.spark.sql.GroupedData
Count the number of rows for each group.
count() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
count() - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
count() - Method in class org.apache.spark.util.StatCounter
 
countApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
:: Experimental :: Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
countApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
:: Experimental :: Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
countApprox(long, double) - Method in class org.apache.spark.rdd.RDD
:: Experimental :: Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
countApproxDistinct(double) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return approximate number of distinct elements in the RDD.
countApproxDistinct(int, int) - Method in class org.apache.spark.rdd.RDD
:: Experimental :: Return approximate number of distinct elements in the RDD.
countApproxDistinct(double) - Method in class org.apache.spark.rdd.RDD
Return approximate number of distinct elements in the RDD.
countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double) - Method in class org.apache.spark.api.java.JavaPairRDD
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(int, int, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
:: Experimental ::
countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countApproxDistinctByKey(double) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return approximate number of distinct values for each key in this RDD.
countAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of count, which returns a future for counting the number of elements in this RDD.
countAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
Returns a future for counting the number of elements in the RDD.
countByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
Count the number of elements for each key, and return the result to the master as a Map.
countByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
Count the number of elements for each key, collecting the results to a local Map.
countByKeyApprox(long) - Method in class org.apache.spark.api.java.JavaPairRDD
:: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
countByKeyApprox(long, double) - Method in class org.apache.spark.api.java.JavaPairRDD
:: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
countByKeyApprox(long, double) - Method in class org.apache.spark.rdd.PairRDDFunctions
:: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
countByValue() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the count of each unique value in this RDD as a map of (value, count) pairs.
countByValue(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return the count of each unique value in this RDD as a local map of (value, count) pairs.
countByValue() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
countByValue(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
countByValue(int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream.
countByValueAndWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
countByValueAndWindow(Duration, Duration, int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
countByValueAndWindow(Duration, Duration, int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
countByValueApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
(Experimental) Approximate version of countByValue().
countByValueApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
(Experimental) Approximate version of countByValue().
countByValueApprox(long, double, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
:: Experimental :: Approximate version of countByValue().
countByWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD has a single element generated by counting the number of elements in a window over this DStream.
countByWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD has a single element generated by counting the number of elements in a sliding window over this DStream.
countDistinct(Column, Column...) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
countDistinct(String, String...) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
countDistinct(Column, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
countDistinct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the number of distinct items in a group.
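A brief sketch of the countDistinct aggregate, assuming a DataFrame named df with hypothetical columns "dept" and "name":
    import org.apache.spark.sql.functions.countDistinct
    df.groupBy("dept").agg(countDistinct("name").as("uniqueNames")).show()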
cov(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculate the sample covariance of two numerical columns of a DataFrame.
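For example, assuming a DataFrame df with numeric columns "x" and "y":
    val covariance: Double = df.stat.cov("x", "y")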
CreatableRelationProvider - Interface in org.apache.spark.sql.sources
 
create(boolean, boolean, boolean, int) - Static method in class org.apache.spark.api.java.StorageLevels
Deprecated.
create(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.api.java.StorageLevels
Create a new StorageLevel object.
create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int, Function<ResultSet, T>) - Static method in class org.apache.spark.rdd.JdbcRDD
Create an RDD that executes an SQL query on a JDBC connection and reads results.
create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int) - Static method in class org.apache.spark.rdd.JdbcRDD
Create an RDD that executes an SQL query on a JDBC connection and reads results.
create(RDD<T>, Function1<Object, Object>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
Create a PartitionPruningRDD.
create(Object...) - Static method in class org.apache.spark.sql.RowFactory
Create a Row from the given arguments.
create() - Method in interface org.apache.spark.streaming.api.java.JavaStreamingContextFactory
 
create(String, int) - Static method in class org.apache.spark.streaming.kafka.Broker
 
create(String, int, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
 
create(TopicAndPartition, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
 
createArrayType(DataType) - Static method in class org.apache.spark.sql.types.DataTypes
Creates an ArrayType by specifying the data type of elements (elementType).
createArrayType(DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
Creates an ArrayType by specifying the data type of elements (elementType) and whether the array contains null values (containsNull).
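A brief Scala sketch of the ArrayType factory methods:
    import org.apache.spark.sql.types.DataTypes
    val arr        = DataTypes.createArrayType(DataTypes.StringType)        // containsNull defaults to true
    val noNullsArr = DataTypes.createArrayType(DataTypes.StringType, false) // elements may not be null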
createBroker(String, Integer) - Method in class org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
 
createCombiner() - Method in class org.apache.spark.Aggregator
 
createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLContext
 
createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLContext
 
createDataFrame(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
 
createDataFrame(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
 
createDataFrame(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
 
createDataFrame(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
 
createDecimalType(int, int) - Static method in class org.apache.spark.sql.types.DataTypes
 
createDecimalType() - Static method in class org.apache.spark.sql.types.DataTypes
 
createDirectStream(StreamingContext, Map<String, String>, Map<TopicAndPartition, Object>, Function1<MessageAndMetadata<K, V>, R>, ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>, ClassTag<R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
:: Experimental :: Create an input stream that directly pulls messages from Kafka Brokers without using any receiver.
createDirectStream(StreamingContext, Map<String, String>, Set<String>, ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
:: Experimental :: Create an input stream that directly pulls messages from Kafka Brokers without using any receiver.
createDirectStream(JavaStreamingContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Class<R>, Map<String, String>, Map<TopicAndPartition, Long>, Function<MessageAndMetadata<K, V>, R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
:: Experimental :: Create an input stream that directly pulls messages from Kafka Brokers without using any receiver.
createDirectStream(JavaStreamingContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Map<String, String>, Set<String>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
:: Experimental :: Create an input stream that directly pulls messages from Kafka Brokers without using any receiver.
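A hedged Scala sketch of the direct (receiver-less) Kafka stream, assuming an existing StreamingContext named ssc and a hypothetical broker and topic:
    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.kafka.KafkaUtils
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))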
createDirectStream(JavaStreamingContext, Map<String, String>, Set<String>, Map<TopicAndPartition, Long>) - Method in class org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
 
createExternalTable(String, String) - Method in class org.apache.spark.sql.SQLContext
 
createExternalTable(String, String, String) - Method in class org.apache.spark.sql.SQLContext
 
createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
 
createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
 
createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
 
createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
 
createJDBCTable(String, String, boolean) - Method in class org.apache.spark.sql.DataFrame
Deprecated.
As of 1.4.0, replaced by write().jdbc().
createMapType(DataType, DataType) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a MapType by specifying the data type of keys (keyType) and values (valueType).
createMapType(DataType, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a MapType by specifying the data type of keys (keyType), the data type of values (valueType), and whether values contain any null value (valueContainsNull).
createOffsetRange(String, Integer, Long, Long) - Method in class org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
 
createPollingStream(StreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
createPollingStream(StreamingContext, Seq<InetSocketAddress>, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
createPollingStream(StreamingContext, Seq<InetSocketAddress>, StorageLevel, int, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
createPollingStream(JavaStreamingContext, String, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
createPollingStream(JavaStreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
createPollingStream(JavaStreamingContext, InetSocketAddress[], StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
createPollingStream(JavaStreamingContext, InetSocketAddress[], StorageLevel, int, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
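A short sketch of the polling Flume receiver, assuming a StreamingContext named ssc and a Spark Sink listening on the hypothetical address flume-host:9988:
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.flume.FlumeUtils
    val events = FlumeUtils.createPollingStream(ssc, "flume-host", 9988, StorageLevel.MEMORY_AND_DISK_SER_2)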
createRDD(SparkContext, Map<String, String>, OffsetRange[], ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an RDD from Kafka using offset ranges for each topic and partition.
createRDD(SparkContext, Map<String, String>, OffsetRange[], Map<TopicAndPartition, Broker>, Function1<MessageAndMetadata<K, V>, R>, ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>, ClassTag<R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
:: Experimental :: Create an RDD from Kafka using offset ranges for each topic and partition.
createRDD(JavaSparkContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Map<String, String>, OffsetRange[]) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an RDD from Kafka using offset ranges for each topic and partition.
createRDD(JavaSparkContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Class<R>, Map<String, String>, OffsetRange[], Map<TopicAndPartition, Broker>, Function<MessageAndMetadata<K, V>, R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
:: Experimental :: Create an RDD from Kafka using offset ranges for each topic and partition.
createRDD(JavaSparkContext, Map<String, String>, List<OffsetRange>, Map<TopicAndPartition, Broker>) - Method in class org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
 
createRDDFromArray(JavaSparkContext, byte[][]) - Static method in class org.apache.spark.api.r.RRDD
Create an RRDD given a sequence of byte arrays.
createRelation(SQLContext, SaveMode, Map<String, String>, DataFrame) - Method in interface org.apache.spark.sql.sources.CreatableRelationProvider
Creates a relation with the given parameters based on the contents of the given DataFrame.
createRelation(SQLContext, String[], Option<StructType>, Option<StructType>, Map<String, String>) - Method in interface org.apache.spark.sql.sources.HadoopFsRelationProvider
Returns a new base relation with the given parameters, a user defined schema, and a list of partition columns.
createRelation(SQLContext, Map<String, String>) - Method in interface org.apache.spark.sql.sources.RelationProvider
Returns a new base relation with the given parameters.
createRelation(SQLContext, Map<String, String>, StructType) - Method in interface org.apache.spark.sql.sources.SchemaRelationProvider
Returns a new base relation with the given parameters and user defined schema.
createRWorker(String, int) - Static method in class org.apache.spark.api.r.RRDD
ProcessBuilder used to launch worker R processes.
createSparkContext(String, String, String, String[], Map<Object, Object>, Map<Object, Object>) - Static method in class org.apache.spark.api.r.RRDD
 
createStream(StreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Create an input stream from a Flume source.
createStream(StreamingContext, String, int, StorageLevel, boolean) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Create an input stream from a Flume source.
createStream(JavaStreamingContext, String, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream from a Flume source.
createStream(JavaStreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream from a Flume source.
createStream(JavaStreamingContext, String, int, StorageLevel, boolean) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
Creates an input stream from a Flume source.
createStream(StreamingContext, String, String, Map<String, Object>, StorageLevel) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an input stream that pulls messages from Kafka Brokers.
createStream(StreamingContext, Map<String, String>, Map<String, Object>, StorageLevel, ClassTag<K>, ClassTag<V>, ClassTag<U>, ClassTag<T>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an input stream that pulls messages from Kafka Brokers.
createStream(JavaStreamingContext, String, String, Map<String, Integer>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an input stream that pulls messages from Kafka Brokers.
createStream(JavaStreamingContext, String, String, Map<String, Integer>, StorageLevel) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an input stream that pulls messages from Kafka Brokers.
createStream(JavaStreamingContext, Class<K>, Class<V>, Class<U>, Class<T>, Map<String, String>, Map<String, Integer>, StorageLevel) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
Create an input stream that pulls messages from Kafka Brokers.
createStream(JavaStreamingContext, Map<String, String>, Map<String, Integer>, StorageLevel) - Method in class org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
 
createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
Create an input stream that pulls messages from a Kinesis stream.
createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, String, String) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
Create an input stream that pulls messages from a Kinesis stream.
createStream(StreamingContext, String, String, Duration, InitialPositionInStream, StorageLevel) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
Create an input stream that pulls messages from a Kinesis stream.
createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
Create an input stream that pulls messages from a Kinesis stream.
createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, String, String) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
Create an input stream that pulls messages from a Kinesis stream.
createStream(JavaStreamingContext, String, String, Duration, InitialPositionInStream, StorageLevel) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
Create an input stream that pulls messages from a Kinesis stream.
createStream(StreamingContext, String, String, StorageLevel) - Static method in class org.apache.spark.streaming.mqtt.MQTTUtils
Create an input stream that receives messages pushed by an MQTT publisher.
createStream(JavaStreamingContext, String, String) - Static method in class org.apache.spark.streaming.mqtt.MQTTUtils
Create an input stream that receives messages pushed by an MQTT publisher.
createStream(JavaStreamingContext, String, String, StorageLevel) - Static method in class org.apache.spark.streaming.mqtt.MQTTUtils
Create an input stream that receives messages pushed by an MQTT publisher.
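For example, assuming a StreamingContext named ssc and a hypothetical broker URL and topic:
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.mqtt.MQTTUtils
    val messages = MQTTUtils.createStream(ssc, "tcp://mqtt-host:1883", "sensors/temperature",
      StorageLevel.MEMORY_AND_DISK_SER_2)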
createStream(StreamingContext, Option<Authorization>, Seq<String>, StorageLevel) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter.
createStream(JavaStreamingContext) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter using Twitter4J's default OAuth authentication; this requires the system properties twitter4j.oauth.consumerKey, twitter4j.oauth.consumerSecret, twitter4j.oauth.accessToken and twitter4j.oauth.accessTokenSecret.
createStream(JavaStreamingContext, String[]) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter using Twitter4J's default OAuth authentication; this requires the system properties twitter4j.oauth.consumerKey, twitter4j.oauth.consumerSecret, twitter4j.oauth.accessToken and twitter4j.oauth.accessTokenSecret.
createStream(JavaStreamingContext, String[], StorageLevel) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter using Twitter4J's default OAuth authentication; this requires the system properties twitter4j.oauth.consumerKey, twitter4j.oauth.consumerSecret, twitter4j.oauth.accessToken and twitter4j.oauth.accessTokenSecret.
createStream(JavaStreamingContext, Authorization) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter.
createStream(JavaStreamingContext, Authorization, String[]) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter.
createStream(JavaStreamingContext, Authorization, String[], StorageLevel) - Static method in class org.apache.spark.streaming.twitter.TwitterUtils
Create an input stream that returns tweets received from Twitter.
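A hedged sketch, assuming a StreamingContext named ssc and the twitter4j.oauth.* system properties mentioned above; the filter term is an assumption:
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.twitter.TwitterUtils
    val tweets = TwitterUtils.createStream(ssc, None, Seq("spark"), StorageLevel.MEMORY_AND_DISK_SER_2)
    val texts = tweets.map(_.getText)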
createStream(StreamingContext, String, Subscribe, Function1<Seq<ByteString>, Iterator<T>>, StorageLevel, SupervisorStrategy, ClassTag<T>) - Static method in class org.apache.spark.streaming.zeromq.ZeroMQUtils
Create an input stream that receives messages pushed by a ZeroMQ publisher.
createStream(JavaStreamingContext, String, Subscribe, Function<byte[][], Iterable<T>>, StorageLevel, SupervisorStrategy) - Static method in class org.apache.spark.streaming.zeromq.ZeroMQUtils
Create an input stream that receives messages pushed by a ZeroMQ publisher.
createStream(JavaStreamingContext, String, Subscribe, Function<byte[][], Iterable<T>>, StorageLevel) - Static method in class org.apache.spark.streaming.zeromq.ZeroMQUtils
Create an input stream that receives messages pushed by a ZeroMQ publisher.
createStream(JavaStreamingContext, String, Subscribe, Function<byte[][], Iterable<T>>) - Static method in class org.apache.spark.streaming.zeromq.ZeroMQUtils
Create an input stream that receives messages pushed by a ZeroMQ publisher.
createStructField(String, DataType, boolean, Metadata) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructField by specifying the name (name), data type (dataType) and whether values of this field can be null values (nullable).
createStructField(String, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructField with empty metadata.
createStructType(List<StructField>) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructType with the given list of StructFields (fields).
createStructType(StructField[]) - Static method in class org.apache.spark.sql.types.DataTypes
Creates a StructType with the given StructField array (fields).
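Putting the StructField/StructType factories together with createDataFrame(RDD<Row>, StructType) from earlier in this index; the SparkContext named sc and the column names are assumptions:
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.types.DataTypes
    val schema = DataTypes.createStructType(Array(
      DataTypes.createStructField("name", DataTypes.StringType, false),
      DataTypes.createStructField("age", DataTypes.IntegerType, true)))
    val rows = sc.parallelize(Seq(Row("alice", 30), Row("bob", 25)))
    val people = new SQLContext(sc).createDataFrame(rows, schema)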
createTopic(String) - Method in class org.apache.spark.streaming.kafka.KafkaTestUtils
Create a Kafka topic and wait until it has propagated to the whole cluster.
createTopicAndPartition(String, Integer) - Method in class org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
 
creationSite() - Method in class org.apache.spark.rdd.RDD
User code that created this RDD (e.g.
creationSite() - Method in class org.apache.spark.streaming.dstream.DStream
 
crosstab(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Computes a pair-wise frequency table of the given columns.
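For example, assuming a DataFrame df with columns "dept" and "gender":
    df.stat.crosstab("dept", "gender").show()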
CrossValidator - Class in org.apache.spark.ml.tuning
:: Experimental :: K-fold cross validation.
CrossValidator(String) - Constructor for class org.apache.spark.ml.tuning.CrossValidator
 
CrossValidator() - Constructor for class org.apache.spark.ml.tuning.CrossValidator
 
CrossValidatorModel - Class in org.apache.spark.ml.tuning
:: Experimental :: Model from k-fold cross validation.
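A hedged Scala sketch of k-fold cross validation over a logistic regression, assuming a training DataFrame named training with "label" and "features" columns:
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
    val lr = new LogisticRegression()
    val grid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.01, 0.1)).build()
    val cv = new CrossValidator()
      .setEstimator(lr)
      .setEvaluator(new BinaryClassificationEvaluator())
      .setEstimatorParamMaps(grid)
      .setNumFolds(3)
    val cvModel = cv.fit(training)   // cvModel is a CrossValidatorModel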
cube(Column...) - Method in class org.apache.spark.sql.DataFrame
Create a multi-dimensional cube for the current DataFrame using the specified columns, so we can run aggregation on them.
cube(String, String...) - Method in class org.apache.spark.sql.DataFrame
Create a multi-dimensional cube for the current DataFrame using the specified columns, so we can run aggregation on them.
cube(Seq<Column>) - Method in class org.apache.spark.sql.DataFrame
Create a multi-dimensional cube for the current DataFrame using the specified columns, so we can run aggregation on them.
cube(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrame
Create a multi-dimensional cube for the current DataFrame using the specified columns, so we can run aggregation on them.
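For example, assuming a DataFrame df with columns "dept", "gender", and "salary":
    df.cube("dept", "gender").avg("salary").show()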
cumeDist() - Static method in class org.apache.spark.sql.functions
Window function: returns the cumulative distribution of values within a window partition, i.e.
currentAttemptId() - Method in interface org.apache.spark.SparkStageInfo
 
currentAttemptId() - Method in class org.apache.spark.SparkStageInfoImpl
 
currPrefLocs(Partition) - Method in class org.apache.spark.rdd.PartitionCoalescer
 

D

databaseTypeDefinition() - Method in class org.apache.spark.sql.jdbc.JdbcType
 
dataDistribution() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
 
DataFrame - Class in org.apache.spark.sql
:: Experimental :: A distributed collection of data organized into named columns.
DataFrame(SQLContext, LogicalPlan) - Constructor for class org.apache.spark.sql.DataFrame
A constructor that automatically analyzes the logical plan.
DataFrameNaFunctions - Class in org.apache.spark.sql
:: Experimental :: Functionality for working with missing data in DataFrames.
DataFrameReader - Class in org.apache.spark.sql
:: Experimental :: Interface used to load a DataFrame from external storage systems (e.g.
DataFrameStatFunctions - Class in org.apache.spark.sql
:: Experimental :: Statistic functions for DataFrames.
DataFrameWriter - Class in org.apache.spark.sql
:: Experimental :: Interface used to write a DataFrame to external storage systems (e.g.
dataSchema() - Method in class org.apache.spark.sql.sources.HadoopFsRelation
Specifies the schema of actual data files.
DataType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The base type of all Spark SQL data types.
DataType() - Constructor for class org.apache.spark.sql.types.DataType
 
dataType() - Method in class org.apache.spark.sql.types.StructField
 
dataType() - Method in class org.apache.spark.sql.UserDefinedFunction
 
DataTypes - Class in org.apache.spark.sql.types
To get or create a specific data type, users should use the singleton objects and factory methods provided by this class.
DataTypes() - Constructor for class org.apache.spark.sql.types.DataTypes
 
DataValidators - Class in org.apache.spark.mllib.util
:: DeveloperApi :: A collection of methods used to validate data before applying ML algorithms.
DataValidators() - Constructor for class org.apache.spark.mllib.util.DataValidators
 
date() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type date.
DateType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the DateType object.
DateType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing java.sql.Date values.
decayFactor() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
 
decimal() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type decimal.
decimal(int, int) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type decimal.
Decimal - Class in org.apache.spark.sql.types
A mutable implementation of BigDecimal that can hold a Long if values are small enough.
Decimal() - Constructor for class org.apache.spark.sql.types.Decimal
 
DecimalType - Class in org.apache.spark.sql.types
 
DecimalType(Option<PrecisionInfo>) - Constructor for class org.apache.spark.sql.types.DecimalType
 
DecisionTree - Class in org.apache.spark.mllib.tree
:: Experimental :: A class which implements a decision tree learning algorithm for classification and regression.
DecisionTree(Strategy) - Constructor for class org.apache.spark.mllib.tree.DecisionTree
 
DecisionTreeClassificationModel - Class in org.apache.spark.ml.classification
:: Experimental :: Decision tree model for classification.
DecisionTreeClassifier - Class in org.apache.spark.ml.classification
:: Experimental :: Decision tree learning algorithm for classification.
DecisionTreeClassifier(String) - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier
 
DecisionTreeClassifier() - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier
 
DecisionTreeModel - Class in org.apache.spark.mllib.tree.model
:: Experimental :: Decision tree model for classification or regression.
DecisionTreeModel(Node, Enumeration.Value) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel
 
DecisionTreeRegressionModel - Class in org.apache.spark.ml.regression
:: Experimental :: Decision tree model for regression.
DecisionTreeRegressor - Class in org.apache.spark.ml.regression
:: Experimental :: Decision tree learning algorithm for regression.
DecisionTreeRegressor(String) - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor
 
DecisionTreeRegressor() - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor
 
defaultAttr() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
The default binary attribute.
defaultAttr() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
The default nominal attribute.
defaultAttr() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
The default numeric attribute.
defaultMinPartitions() - Method in class org.apache.spark.api.java.JavaSparkContext
Default min number of partitions for Hadoop RDDs when not given by user
defaultMinPartitions() - Method in class org.apache.spark.SparkContext
Default min number of partitions for Hadoop RDDs when not given by user. Notice that we use math.min, so "defaultMinPartitions" cannot be higher than 2.
defaultMinSplits() - Method in class org.apache.spark.api.java.JavaSparkContext
Deprecated.
As of Spark 1.0.0, defaultMinSplits is deprecated, use JavaSparkContext.defaultMinPartitions() instead
defaultMinSplits() - Method in class org.apache.spark.SparkContext
Default min number of partitions for Hadoop RDDs when not given by user
defaultParallelism() - Method in class org.apache.spark.api.java.JavaSparkContext
Default level of parallelism to use when not given by user (e.g.
defaultParallelism() - Method in class org.apache.spark.SparkContext
Default level of parallelism to use when not given by user (e.g.
defaultParamMap() - Method in interface org.apache.spark.ml.param.Params
Internal param map for default values.
defaultParams(String) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
defaultParams(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
defaultPartitioner(RDD<?>, Seq<RDD<?>>) - Static method in class org.apache.spark.Partitioner
Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
defaultSize() - Method in class org.apache.spark.sql.types.ArrayType
The default size of a value of the ArrayType is 100 * the default size of the element type.
defaultSize() - Method in class org.apache.spark.sql.types.BinaryType
The default size of a value of the BinaryType is 4096 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.BooleanType
The default size of a value of the BooleanType is 1 byte.
defaultSize() - Method in class org.apache.spark.sql.types.ByteType
The default size of a value of the ByteType is 1 byte.
defaultSize() - Method in class org.apache.spark.sql.types.DataType
The default size of a value of this data type, used internally for size estimation.
defaultSize() - Method in class org.apache.spark.sql.types.DateType
The default size of a value of the DateType is 4 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.DecimalType
The default size of a value of the DecimalType is 4096 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.DoubleType
The default size of a value of the DoubleType is 8 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.FloatType
The default size of a value of the FloatType is 4 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.IntegerType
The default size of a value of the IntegerType is 4 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.LongType
The default size of a value of the LongType is 8 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.MapType
The default size of a value of the MapType is 100 * (the default size of the key type + the default size of the value type).
defaultSize() - Method in class org.apache.spark.sql.types.NullType
 
defaultSize() - Method in class org.apache.spark.sql.types.ShortType
The default size of a value of the ShortType is 2 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.StringType
The default size of a value of the StringType is 4096 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.StructType
The default size of a value of the StructType is the total default sizes of all field types.
defaultSize() - Method in class org.apache.spark.sql.types.TimestampType
The default size of a value of the TimestampType is 12 bytes.
defaultSize() - Method in class org.apache.spark.sql.types.UserDefinedType
The default size of a value of the UserDefinedType is 4096 bytes.
defaultStategy(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
Construct a default set of parameters for DecisionTree
defaultStrategy(String) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
Construct a default set of parameters for DecisionTree
defaultStrategy() - Static method in class org.apache.spark.streaming.receiver.ActorSupervisorStrategy
 
degree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
The polynomial degree to expand, which should be >= 1.
degrees() - Method in class org.apache.spark.graphx.GraphOps
The degree of each vertex in the graph.
degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
 
degreesOfFreedom() - Method in interface org.apache.spark.mllib.stat.test.TestResult
Returns the degree(s) of freedom of the hypothesis test.
delegate() - Method in class org.apache.spark.InterruptibleIterator
 
dense(int, int, double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
Creates a column-major dense matrix.
dense(double, double...) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a dense vector from its values.
dense(double, Seq<Object>) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a dense vector from its values.
dense(double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
Creates a dense vector from a double array.
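A brief sketch of the dense vector and matrix factories:
    import org.apache.spark.mllib.linalg.{Matrices, Vectors}
    val v = Vectors.dense(1.0, 0.0, 3.0)
    val m = Matrices.dense(2, 2, Array(1.0, 2.0, 3.0, 4.0))   // 2x2, column-major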
DenseMatrix - Class in org.apache.spark.mllib.linalg
Column-major dense matrix.
DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
 
DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
Column-major dense matrix.
denseRank() - Static method in class org.apache.spark.sql.functions
Window function: returns the rank of rows within a window partition, without any gaps.
DenseVector - Class in org.apache.spark.mllib.linalg
A dense vector represented by a value array.
DenseVector(double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseVector
 
dependencies() - Method in class org.apache.spark.rdd.RDD
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
dependencies() - Method in class org.apache.spark.streaming.dstream.DStream
List of parent DStreams on which this DStream depends.
dependencies() - Method in class org.apache.spark.streaming.dstream.InputDStream
 
Dependency<T> - Class in org.apache.spark
:: DeveloperApi :: Base class for dependencies.
Dependency() - Constructor for class org.apache.spark.Dependency
 
depth() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
Get depth of tree.
desc() - Method in class org.apache.spark.sql.Column
Returns an ordering used in sorting.
desc(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on the descending order of the column.
desc() - Method in class org.apache.spark.util.MethodIdentifier
 
describe(String...) - Method in class org.apache.spark.sql.DataFrame
Computes statistics for numeric columns, including count, mean, stddev, min, and max.
describe(Seq<String>) - Method in class org.apache.spark.sql.DataFrame
Computes statistics for numeric columns, including count, mean, stddev, min, and max.
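For example, assuming a DataFrame df with a numeric column "age":
    df.describe("age").show()   // count, mean, stddev, min, max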
describeTopics(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LDAModel
Return the topics described by weighted terms.
describeTopics() - Method in class org.apache.spark.mllib.clustering.LDAModel
Return the topics described by weighted terms.
describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
 
description() - Method in class org.apache.spark.ExceptionFailure
 
description() - Method in class org.apache.spark.status.api.v1.JobData
 
description() - Method in class org.apache.spark.storage.StorageLevel
 
DeserializationStream - Class in org.apache.spark.serializer
:: DeveloperApi :: A stream for reading serialized objects.
DeserializationStream() - Constructor for class org.apache.spark.serializer.DeserializationStream
 
deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
 
deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
 
deserialize(Object) - Method in class org.apache.spark.sql.types.UserDefinedType
Convert a SQL datum to the user type
deserialized() - Method in class org.apache.spark.storage.MemoryEntry
 
deserialized() - Method in class org.apache.spark.storage.StorageLevel
 
deserializeStream(InputStream) - Method in class org.apache.spark.serializer.SerializerInstance
 
destroy() - Method in class org.apache.spark.broadcast.Broadcast
Destroy all data and metadata related to this broadcast variable.
details() - Method in class org.apache.spark.scheduler.StageInfo
 
details() - Method in class org.apache.spark.status.api.v1.StageData
 
determineBounds(ArrayBuffer<Tuple2<K, Object>>, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.RangePartitioner
Determines the bounds for range partitioning from candidates with weights indicating how many items each represents.
DeveloperApi - Annotation Type in org.apache.spark.annotation
A lower-level, unstable API intended for developers.
diag(Vector) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate a diagonal matrix in DenseMatrix format from the supplied values.
diag(Vector) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a diagonal matrix in Matrix format from the supplied values.
diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
For each vertex present in both this and other, diff returns only those vertices with differing values; for values that are different, keeps the values from other.
diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
For each vertex present in both this and other, diff returns only those vertices with differing values; for values that are different, keeps the values from other.
disableOutputSpecValidation() - Static method in class org.apache.spark.rdd.PairRDDFunctions
 
DISK_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels
 
DISK_ONLY() - Static method in class org.apache.spark.storage.StorageLevel
 
DISK_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels
 
DISK_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
 
diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics
 
diskSize() - Method in class org.apache.spark.storage.BlockStatus
 
diskSize() - Method in class org.apache.spark.storage.RDDInfo
 
diskUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
diskUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
 
diskUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
 
diskUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
 
diskUsed() - Method in class org.apache.spark.storage.StorageStatus
Return the disk space used by this block manager.
diskUsedByRdd(int) - Method in class org.apache.spark.storage.StorageStatus
Return the disk space used by the given RDD in this block manager in O(1) time.
dist(Vector) - Method in class org.apache.spark.util.Vector
 
distinct() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD containing the distinct elements in this RDD.
distinct(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.rdd.RDD
Return a new RDD containing the distinct elements in this RDD.
distinct() - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame that contains only the unique rows from this DataFrame.
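A minimal sketch; the SparkContext sc and DataFrame df are assumptions:
    val unique = sc.parallelize(Seq(1, 1, 2, 3, 3)).distinct()   // RDD containing 1, 2, 3
    val uniqueRows = df.distinct()                               // de-duplicated DataFrame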
DistributedLDAModel - Class in org.apache.spark.mllib.clustering
:: Experimental ::
DistributedMatrix - Interface in org.apache.spark.mllib.linalg.distributed
Represents a distributively stored matrix backed by one or more RDDs.
div(Duration) - Method in class org.apache.spark.streaming.Duration
 
divide(Object) - Method in class org.apache.spark.sql.Column
Divide this expression by another expression.
divide(double) - Method in class org.apache.spark.util.Vector
 
doc() - Method in class org.apache.spark.ml.param.Param
 
docConcentration() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
 
dot(Vector) - Method in class org.apache.spark.util.Vector
 
doubleAccumulator(double) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator double variable, which tasks can "add" values to using the add method.
doubleAccumulator(double, String) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator double variable, which tasks can "add" values to using the add method.
DoubleArrayParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Array[Double]] for Java.
DoubleArrayParam(Params, String, String, Function1<double[], Object>) - Constructor for class org.apache.spark.ml.param.DoubleArrayParam
 
DoubleArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.DoubleArrayParam
 
DoubleFlatMapFunction<T> - Interface in org.apache.spark.api.java.function
A function that returns zero or more records of type Double from each input record.
DoubleFunction<T> - Interface in org.apache.spark.api.java.function
A function that returns Doubles, and can be used to construct DoubleRDDs.
DoubleParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Double] for Java.
DoubleParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.DoubleParam
 
DoubleParam(String, String, String) - Constructor for class org.apache.spark.ml.param.DoubleParam
 
DoubleParam(org.apache.spark.ml.util.Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.DoubleParam
 
DoubleParam(org.apache.spark.ml.util.Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.DoubleParam
 
DoubleRDDFunctions - Class in org.apache.spark.rdd
Extra functions available on RDDs of Doubles through an implicit conversion.
DoubleRDDFunctions(RDD<Object>) - Constructor for class org.apache.spark.rdd.DoubleRDDFunctions
 
doubleRDDToDoubleRDDFunctions(RDD<Object>) - Static method in class org.apache.spark.rdd.RDD
 
doubleRDDToDoubleRDDFunctions(RDD<Object>) - Static method in class org.apache.spark.SparkContext
 
doubleToDoubleWritable(double) - Static method in class org.apache.spark.SparkContext
 
doubleToMultiplier(double) - Static method in class org.apache.spark.util.Vector
 
DoubleType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the DoubleType object.
DoubleType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing Double values.
doubleWritableConverter() - Static method in class org.apache.spark.SparkContext
 
DRIVER_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver class path.
DRIVER_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver VM options.
DRIVER_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver native library path.
DRIVER_IDENTIFIER() - Static method in class org.apache.spark.SparkContext
Executor id for the driver.
DRIVER_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the driver memory.
driverActorSystemName() - Static method in class org.apache.spark.SparkEnv
 
drop(String) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame with a column dropped.
drop(Column) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame with a column dropped.
drop() - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing any null values.
drop(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing null values.
drop(String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing any null values in the specified columns.
drop(Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that drops rows containing any null values in the specified columns.
drop(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing null values in the specified columns.
drop(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that drops rows containing null values in the specified columns.
drop(int) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing less than minNonNulls non-null values.
drop(int, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that drops rows containing less than minNonNulls non-null values in the specified columns.
drop(int, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that drops rows containing less than minNonNulls non-null values in the specified columns.
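A short sketch of the null-dropping variants, assuming a DataFrame df that may contain nulls in hypothetical columns "name" and "age":
    df.na.drop()                      // drop rows containing any null value
    df.na.drop(Array("name", "age"))  // consider only the listed columns
    df.na.drop(2)                     // keep rows with at least 2 non-null values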
dropDuplicates() - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame that contains only the unique rows from this DataFrame.
dropDuplicates(Seq<String>) - Method in class org.apache.spark.sql.DataFrame
(Scala-specific) Returns a new DataFrame with duplicate rows removed, considering only the subset of columns.
dropDuplicates(String[]) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame with duplicate rows removed, considering only the subset of columns.
dropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoder
Whether to drop the last category in the encoded vector (default: true)
dropTempTable(String) - Method in class org.apache.spark.sql.SQLContext
 
Dst - Static variable in class org.apache.spark.graphx.TripletFields
Expose the destination and edge fields but not the source field.
dstAttr() - Method in class org.apache.spark.graphx.EdgeContext
The vertex attribute of the edge's destination vertex.
dstAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
The destination vertex attribute
dstAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
dstId() - Method in class org.apache.spark.graphx.Edge
 
dstId() - Method in class org.apache.spark.graphx.EdgeContext
The vertex id of the edge's destination vertex.
dstId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
dstream() - Method in class org.apache.spark.streaming.api.java.JavaDStream
 
dstream() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
 
dstream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
DStream<T> - Class in org.apache.spark.streaming.dstream
A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous sequence of RDDs (of the same type) representing a continuous stream of data (see org.apache.spark.rdd.RDD in the Spark core documentation for more details on RDDs).
DStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.DStream
 
dtypes() - Method in class org.apache.spark.sql.DataFrame
Returns all column names and their data types as an array.
duration() - Method in class org.apache.spark.scheduler.TaskInfo
 
Duration - Class in org.apache.spark.streaming
 
Duration(long) - Constructor for class org.apache.spark.streaming.Duration
 
Durations - Class in org.apache.spark.streaming
 
Durations() - Constructor for class org.apache.spark.streaming.Durations
 

E

Edge<ED> - Class in org.apache.spark.graphx
A single directed edge consisting of a source id, target id, and the data associated with the edge.
Edge(long, long, ED) - Constructor for class org.apache.spark.graphx.Edge
 
EdgeActiveness - Enum in org.apache.spark.graphx.impl
Criteria for filtering edges based on activeness.
EdgeContext<VD,ED,A> - Class in org.apache.spark.graphx
Represents an edge along with its neighboring vertices and allows sending messages along the edge.
EdgeContext() - Constructor for class org.apache.spark.graphx.EdgeContext
 
EdgeDirection - Class in org.apache.spark.graphx
The direction of a directed edge relative to a vertex.
edgeListFile(SparkContext, String, boolean, int, StorageLevel, StorageLevel) - Static method in class org.apache.spark.graphx.GraphLoader
Loads a graph from an edge list formatted file where each line contains two integers: a source id and a target id.
EdgeOnly - Static variable in class org.apache.spark.graphx.TripletFields
Expose only the edge field and not the source or destination field.
EdgeRDD<ED> - Class in org.apache.spark.graphx
EdgeRDD[ED, VD] extends RDD[Edge[ED]] by storing the edges in columnar format on each partition for performance.
EdgeRDD(SparkContext, Seq<Dependency<?>>) - Constructor for class org.apache.spark.graphx.EdgeRDD
 
EdgeRDDImpl<ED,VD> - Class in org.apache.spark.graphx.impl
 
edges() - Method in class org.apache.spark.graphx.Graph
An RDD containing the edges and their associated attributes.
edges() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
EdgeTriplet<VD,ED> - Class in org.apache.spark.graphx
An edge triplet represents an edge along with the vertex attributes of its neighboring vertices.
EdgeTriplet() - Constructor for class org.apache.spark.graphx.EdgeTriplet
 
Either() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges originating from *or* arriving at a vertex of interest.
elements() - Method in class org.apache.spark.util.Vector
 
elementType() - Method in class org.apache.spark.sql.types.ArrayType
 
ElementwiseProduct - Class in org.apache.spark.ml.feature
:: Experimental :: Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector.
ElementwiseProduct(String) - Constructor for class org.apache.spark.ml.feature.ElementwiseProduct
 
ElementwiseProduct() - Constructor for class org.apache.spark.ml.feature.ElementwiseProduct
 
ElementwiseProduct - Class in org.apache.spark.mllib.feature
:: Experimental :: Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector.
ElementwiseProduct(Vector) - Constructor for class org.apache.spark.mllib.feature.ElementwiseProduct
 
EMLDAOptimizer - Class in org.apache.spark.mllib.clustering
:: DeveloperApi ::
EMLDAOptimizer() - Constructor for class org.apache.spark.mllib.clustering.EMLDAOptimizer
 
empty() - Static method in class org.apache.spark.ml.param.ParamMap
Returns an empty param map.
empty() - Static method in class org.apache.spark.sql.types.Metadata
Returns an empty Metadata.
empty() - Static method in class org.apache.spark.storage.BlockStatus
 
emptyDataFrame() - Method in class org.apache.spark.sql.SQLContext
:: Experimental :: Returns a DataFrame with no rows or columns.
emptyNode(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Return a node with the given node id (but nothing else set).
emptyRDD() - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD that has no partitions or elements.
emptyRDD(ClassTag<T>) - Method in class org.apache.spark.SparkContext
Get an RDD that has no partitions or elements.
endpoint() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
 
endsWith(Column) - Method in class org.apache.spark.sql.Column
String ends with.
endsWith(String) - Method in class org.apache.spark.sql.Column
String ends with another string literal.
endsWith(UTF8String) - Method in class org.apache.spark.sql.types.UTF8String
 
endTime() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
entries() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
 
Entropy - Class in org.apache.spark.mllib.tree.impurity
:: Experimental :: Class for calculating entropy during binary classification.
Entropy() - Constructor for class org.apache.spark.mllib.tree.impurity.Entropy
 
EnumUtil - Class in org.apache.spark.util
 
EnumUtil() - Constructor for class org.apache.spark.util.EnumUtil
 
env() - Method in class org.apache.spark.api.java.JavaSparkContext
 
env() - Method in class org.apache.spark.streaming.StreamingContext
 
environmentDetails() - Method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
 
EnvironmentListener - Class in org.apache.spark.ui.env
:: DeveloperApi :: A SparkListener that prepares information to be displayed on the EnvironmentTab
EnvironmentListener() - Constructor for class org.apache.spark.ui.env.EnvironmentListener
 
EPSILON() - Static method in class org.apache.spark.mllib.util.MLUtils
 
eqNullSafe(Object) - Method in class org.apache.spark.sql.Column
Equality test that is safe for null values.
equals(Object) - Method in class org.apache.spark.graphx.EdgeDirection
 
equals(Object) - Method in class org.apache.spark.HashPartitioner
 
equals(Object) - Method in class org.apache.spark.ml.attribute.AttributeGroup
 
equals(Object) - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
equals(Object) - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
equals(Object) - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
equals(Object) - Method in class org.apache.spark.ml.param.Param
 
equals(Object) - Method in class org.apache.spark.ml.tree.CategoricalSplit
 
equals(Object) - Method in class org.apache.spark.ml.tree.ContinuousSplit
 
equals(Object) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
equals(Object) - Method in interface org.apache.spark.mllib.linalg.Vector
 
equals(Object) - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
 
equals(Object) - Method in class org.apache.spark.mllib.tree.model.Predict
 
equals(Object) - Method in class org.apache.spark.RangePartitioner
 
equals(Object) - Method in class org.apache.spark.scheduler.AccumulableInfo
 
equals(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
equals(Object) - Method in class org.apache.spark.scheduler.InputFormatInfo
 
equals(Object) - Method in class org.apache.spark.scheduler.SplitInfo
 
equals(Object) - Method in class org.apache.spark.sql.Column
 
equals(Object) - Method in interface org.apache.spark.sql.Row
 
equals(Object) - Method in class org.apache.spark.sql.types.Decimal
 
equals(Object) - Method in class org.apache.spark.sql.types.Metadata
 
equals(Object) - Method in class org.apache.spark.sql.types.UTF8String
 
equals(Object) - Method in class org.apache.spark.storage.BlockId
 
equals(Object) - Method in class org.apache.spark.storage.BlockManagerId
 
equals(Object) - Method in class org.apache.spark.storage.StorageLevel
 
equals(Object) - Method in class org.apache.spark.streaming.kafka.Broker
Broker's port
equals(Object) - Method in class org.apache.spark.streaming.kafka.OffsetRange
exclusive ending offset
equalTo(Object) - Method in class org.apache.spark.sql.Column
Equality test.
EqualTo - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a value equal to value.
EqualTo(String, Object) - Constructor for class org.apache.spark.sql.sources.EqualTo
 
errorMessage() - Method in class org.apache.spark.status.api.v1.TaskData
 
estimate(double[]) - Method in class org.apache.spark.mllib.stat.KernelDensity
Estimates probability density function at the given array of points.
estimate(Object) - Static method in class org.apache.spark.util.SizeEstimator
Estimate the number of bytes that the given object takes up on the JVM heap.
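For example:
    import org.apache.spark.util.SizeEstimator
    val bytes = SizeEstimator.estimate(new Array[Long](1000))   // approximate on-heap size in bytes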
Estimator<M extends Model<M>> - Class in org.apache.spark.ml
:: DeveloperApi :: Abstract class for estimators that fit models to data.
Estimator() - Constructor for class org.apache.spark.ml.Estimator
 
evaluate(DataFrame) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
evaluate(DataFrame, ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
Evaluates model output and returns a scalar metric (larger is better).
evaluate(DataFrame) - Method in class org.apache.spark.ml.evaluation.Evaluator
Evaluates the output.
evaluate(DataFrame) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
evaluateEachIteration(RDD<LabeledPoint>, Loss) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
Method to compute error or loss for every iteration of gradient boosting.
Evaluator - Class in org.apache.spark.ml.evaluation
:: DeveloperApi :: Abstract class for evaluators that compute metrics from predictions.
Evaluator() - Constructor for class org.apache.spark.ml.evaluation.Evaluator
 
event() - Method in class org.apache.spark.streaming.flume.SparkFlumeEvent
 
eventually(Time, Time, Function0<T>) - Method in class org.apache.spark.streaming.kafka.KafkaTestUtils
 
except(DataFrame) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame containing rows in this frame but not in another frame.
ExceptionFailure - Class in org.apache.spark
:: DeveloperApi :: Task failed due to a runtime exception.
ExceptionFailure(String, String, StackTraceElement[], String, Option<TaskMetrics>) - Constructor for class org.apache.spark.ExceptionFailure
 
execId() - Method in class org.apache.spark.ExecutorLostFailure
 
execId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
 
executor_() - Method in class org.apache.spark.streaming.receiver.Receiver
Handler object that runs the receiver.
EXECUTOR_CORES - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the number of executor CPU cores.
EXECUTOR_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor class path.
EXECUTOR_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor VM options.
EXECUTOR_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor native library path.
EXECUTOR_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
Configuration key for the executor memory.
executorActorSystemName() - Static method in class org.apache.spark.SparkEnv
 
executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
 
executorDeserializeTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
 
executorEnvs() - Method in class org.apache.spark.SparkContext
 
executorHost() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
 
executorId() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
 
executorId() - Method in class org.apache.spark.scheduler.TaskInfo
 
executorId() - Method in class org.apache.spark.SparkEnv
 
executorId() - Method in class org.apache.spark.status.api.v1.TaskData
 
executorId() - Method in class org.apache.spark.storage.BlockManagerId
 
executorIdToBlockManagerId() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
executorIdToData() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorIdToStorageStatus() - Method in class org.apache.spark.storage.StorageStatusListener
 
ExecutorInfo - Class in org.apache.spark.scheduler.cluster
:: DeveloperApi :: Stores information about an executor to pass from the scheduler to SparkListeners.
ExecutorInfo(String, int, Map<String, String>) - Constructor for class org.apache.spark.scheduler.cluster.ExecutorInfo
 
executorInfo() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
 
executorLogs() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
ExecutorLostFailure - Class in org.apache.spark
:: DeveloperApi :: The task failed because the executor that it was running on was lost.
ExecutorLostFailure(String) - Constructor for class org.apache.spark.ExecutorLostFailure
 
executorMemoryManager() - Method in class org.apache.spark.SparkEnv
 
executorPct() - Method in class org.apache.spark.scheduler.RuntimePercentage
 
executorRunTime() - Method in class org.apache.spark.status.api.v1.StageData
 
executorRunTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
 
executorRunTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
 
executors() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
 
ExecutorsListener - Class in org.apache.spark.ui.exec
:: DeveloperApi :: A SparkListener that prepares information to be displayed on the ExecutorsTab
ExecutorsListener(StorageStatusListener) - Constructor for class org.apache.spark.ui.exec.ExecutorsListener
 
ExecutorStageSummary - Class in org.apache.spark.status.api.v1
 
ExecutorSummary - Class in org.apache.spark.status.api.v1
 
executorSummary() - Method in class org.apache.spark.status.api.v1.StageData
 
executorToDuration() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToInputBytes() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToInputRecords() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToLogUrls() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToOutputBytes() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToOutputRecords() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToShuffleRead() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToShuffleWrite() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToTasksActive() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToTasksComplete() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
executorToTasksFailed() - Method in class org.apache.spark.ui.exec.ExecutorsListener
 
exp(Column) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given value.
exp(String) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given column.
expand(Vector, int) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
 
ExpectationSum - Class in org.apache.spark.mllib.clustering
 
ExpectationSum(double, double[], DenseVector<Object>[], DenseMatrix<Object>[]) - Constructor for class org.apache.spark.mllib.clustering.ExpectationSum
 
Experimental - Annotation Type in org.apache.spark.annotation
An experimental user-facing API.
experimental() - Method in class org.apache.spark.sql.SQLContext
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
ExperimentalMethods - Class in org.apache.spark.sql
:: Experimental :: Holder for experimental methods for the bravest.
explain(boolean) - Method in class org.apache.spark.sql.Column
Prints the expression to the console for debugging purpose.
explain(boolean) - Method in class org.apache.spark.sql.DataFrame
Prints the plans (logical and physical) to the console for debugging purposes.
explain() - Method in class org.apache.spark.sql.DataFrame
Only prints the physical plan to the console for debugging purposes.
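A small usage sketch of the two explain variants, assuming df is an existing DataFrame:
    df.explain()       // prints only the physical plan
    df.explain(true)   // prints both the logical and physical plans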
explainedVariance() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
Returns the explained variance regression score.
explainParam(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Explains a param.
explainParams() - Method in interface org.apache.spark.ml.param.Params
Explains all params of this instance.
explode(Seq<Column>, Function1<Row, TraversableOnce<A>>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.DataFrame
(Scala-specific) Returns a new DataFrame where each row has been expanded to zero or more rows by the provided function.
explode(String, String, Function1<A, TraversableOnce<B>>, TypeTags.TypeTag<B>) - Method in class org.apache.spark.sql.DataFrame
(Scala-specific) Returns a new DataFrame where a single column has been expanded to zero or more rows by the provided function.
explode(Column) - Static method in class org.apache.spark.sql.functions
Creates a new row for each element in the given array or map column.
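For illustration, a minimal Scala sketch of the DataFrame.explode variant that expands a single column, assuming an existing SQLContext sqlContext:
    import sqlContext.implicits._

    // one input row per line of text
    val df = Seq(Tuple1("hello world"), Tuple1("spark sql")).toDF("line")
    // expand the "line" column into one output row per word, stored in a new "word" column
    val words = df.explode("line", "word") { line: String => line.split(" ").toSeq }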
expm1(Column) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given value minus one.
expm1(String) - Static method in class org.apache.spark.sql.functions
Computes the exponential of the given column minus one.
ExponentialGenerator - Class in org.apache.spark.mllib.random
:: DeveloperApi :: Generates i.i.d.
ExponentialGenerator(double) - Constructor for class org.apache.spark.mllib.random.ExponentialGenerator
 
exponentialJavaRDD(JavaSparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
exponentialJavaRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
exponentialJavaRDD(JavaSparkContext, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
exponentialJavaVectorRDD(JavaSparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
exponentialJavaVectorRDD(JavaSparkContext, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
exponentialJavaVectorRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
exponentialRDD(SparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d. samples from the exponential distribution with the input mean.
exponentialVectorRDD(SparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the exponential distribution with the input mean.
externalBlockStoreFolderName() - Method in class org.apache.spark.SparkContext
 
externalBlockStoreSize() - Method in class org.apache.spark.storage.BlockStatus
 
externalBlockStoreSize() - Method in class org.apache.spark.storage.RDDInfo
 
extractDistribution(Function1<BatchInfo, Option<Object>>) - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
 
extractDoubleDistribution(Seq<Tuple2<TaskInfo, TaskMetrics>>, Function2<TaskInfo, TaskMetrics, Option<Object>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
 
extractLongDistribution(Seq<Tuple2<TaskInfo, TaskMetrics>>, Function2<TaskInfo, TaskMetrics, Option<Object>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
 
extractParamMap(ParamMap) - Method in interface org.apache.spark.ml.param.Params
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
extractParamMap() - Method in interface org.apache.spark.ml.param.Params
extractParamMap with no extra values.
extraStrategies() - Method in class org.apache.spark.sql.ExperimentalMethods
Allows extra strategies to be injected into the query planner at runtime.
eye(int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
Generate an Identity Matrix in DenseMatrix format.
eye(int) - Static method in class org.apache.spark.mllib.linalg.Matrices
Generate a dense Identity Matrix in Matrix format.
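For illustration, a short Scala sketch of the eye factory methods:
    import org.apache.spark.mllib.linalg.{DenseMatrix, Matrices}

    val denseIdentity = DenseMatrix.eye(3)   // 3x3 identity as a DenseMatrix
    val identity      = Matrices.eye(3)      // 3x3 dense identity typed as Matrix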

F

f() - Method in class org.apache.spark.sql.UserDefinedFunction
 
f1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns document-based f1-measure averaged by the number of documents
f1Measure(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns f1-measure for a given label (category)
failed() - Method in class org.apache.spark.scheduler.TaskInfo
 
failedJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
failedStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
 
failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
failureReason() - Method in class org.apache.spark.scheduler.StageInfo
If the stage failed, the reason why.
FAIR() - Static method in class org.apache.spark.scheduler.SchedulingMode
 
falsePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns false positive rate for a given label (category)
feature() - Method in class org.apache.spark.mllib.tree.model.Split
 
featureIndex() - Method in class org.apache.spark.ml.tree.CategoricalSplit
 
featureIndex() - Method in class org.apache.spark.ml.tree.ContinuousSplit
 
featureIndex() - Method in interface org.apache.spark.ml.tree.Split
Index of feature which this split tests
features() - Method in class org.apache.spark.mllib.regression.LabeledPoint
 
FeatureType - Class in org.apache.spark.mllib.tree.configuration
:: Experimental :: Enum to describe whether a feature is "continuous" or "categorical"
FeatureType() - Constructor for class org.apache.spark.mllib.tree.configuration.FeatureType
 
featureType() - Method in class org.apache.spark.mllib.tree.model.Split
 
FetchFailed - Class in org.apache.spark
:: DeveloperApi :: Task failed to fetch shuffle data from a remote node.
FetchFailed(BlockManagerId, int, int, int, String) - Constructor for class org.apache.spark.FetchFailed
 
fetchPct() - Method in class org.apache.spark.scheduler.RuntimePercentage
 
fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
 
fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
 
field() - Method in class org.apache.spark.storage.BroadcastBlockId
 
fieldIndex(String) - Method in interface org.apache.spark.sql.Row
Returns the index of a given field name.
fieldIndex(String) - Method in class org.apache.spark.sql.types.StructType
Returns index of a given field
fieldNames() - Method in class org.apache.spark.sql.types.StructType
Returns all field names in an array.
fields() - Method in class org.apache.spark.sql.types.StructType
 
FIFO() - Static method in class org.apache.spark.scheduler.SchedulingMode
 
files() - Method in class org.apache.spark.SparkContext
 
fileStream(String, Class<K>, Class<V>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean, Configuration) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Function1<Path, Object>, boolean, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fileStream(String, Function1<Path, Object>, boolean, Configuration, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format.
fill(double) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in numeric columns with value.
fill(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in string columns with value.
fill(double, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in specified numeric columns.
fill(double, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null values in specified numeric columns.
fill(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values in specified string columns.
fill(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null values in specified string columns.
fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
Returns a new DataFrame that replaces null values.
fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
(Scala-specific) Returns a new DataFrame that replaces null values.
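For illustration, a minimal Scala sketch of the fill variants (reached through DataFrame.na), assuming df is an existing DataFrame with a numeric column age and a string column name:
    // replace nulls in the "age" column only
    val noNullAges = df.na.fill(0.0, Seq("age"))
    // per-column replacement values; keys are column names
    val filled = df.na.fill(Map("age" -> 0, "name" -> "unknown"))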
filter(Function<Double, Boolean>) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Function<T, Boolean>) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Function1<Graph<VD, ED>, Graph<VD2, ED2>>, Function1<EdgeTriplet<VD2, ED2>, Object>, Function2<Object, VD2, Object>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.GraphOps
Filter the graph by computing some values to filter on, and applying the predicates.
filter(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
filter(Function1<Tuple2<Object, VD>, Object>) - Method in class org.apache.spark.graphx.VertexRDD
Restricts the vertex set to the set of vertices satisfying the given predicate.
filter(Params) - Method in class org.apache.spark.ml.param.ParamMap
Filters this param map for the given parent.
filter(Function1<T, Object>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD containing only the elements that satisfy a predicate.
filter(Column) - Method in class org.apache.spark.sql.DataFrame
Filters rows using the given condition.
filter(String) - Method in class org.apache.spark.sql.DataFrame
Filters rows using the given SQL expression.
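A small usage sketch of the two DataFrame.filter overloads, assuming df is an existing DataFrame and sqlContext.implicits._ has been imported for the $ column syntax:
    val adults    = df.filter($"age" > 21)   // Column-based condition
    val adultsSql = df.filter("age > 21")    // equivalent SQL expression string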
Filter - Class in org.apache.spark.sql.sources
A filter predicate for data sources.
Filter() - Constructor for class org.apache.spark.sql.sources.Filter
 
filter(Function<T, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Return a new DStream containing only the elements that satisfy a predicate.
filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream containing only the elements that satisfy a predicate.
filter(Function1<T, Object>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream containing only the elements that satisfy a predicate.
filterByRange(K, K) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
Returns an RDD containing only the elements in the inclusive range lower to upper.
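For illustration, a minimal Scala sketch of filterByRange, assuming an existing SparkContext sc; the range scan is most efficient when the RDD already has a RangePartitioner (for example after sortByKey):
    val pairs = sc.parallelize(Seq(1 -> "a", 5 -> "b", 9 -> "c")).sortByKey()
    // keeps only pairs whose keys fall in the inclusive range [2, 9]
    val inRange = pairs.filterByRange(2, 9)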
filterWith(Function1<Object, A>, Function2<T, A, Object>) - Method in class org.apache.spark.rdd.RDD
Filters this RDD with p, where p takes an additional parameter of type A.
findSynonyms(String, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
Find synonyms of a word
findSynonyms(Vector, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
Find synonyms of the vector representation of a word
finished() - Method in class org.apache.spark.scheduler.TaskInfo
 
finishTime() - Method in class org.apache.spark.scheduler.TaskInfo
The time when the task has completed successfully (including the time to remotely fetch results, if necessary).
first() - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
first() - Method in class org.apache.spark.api.java.JavaPairRDD
 
first() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the first element in this RDD.
first() - Method in class org.apache.spark.rdd.RDD
Return the first element in this RDD.
first() - Method in class org.apache.spark.sql.DataFrame
Returns the first row.
first(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the first value in a group.
first(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the first value of a column in a group.
fit(DataFrame) - Method in class org.apache.spark.ml.classification.OneVsRest
 
fit(DataFrame, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Estimator
Fits a single model to the input data with optional parameters.
fit(DataFrame, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Estimator
Fits a single model to the input data with optional parameters.
fit(DataFrame, ParamMap) - Method in class org.apache.spark.ml.Estimator
Fits a single model to the input data with provided parameter map.
fit(DataFrame) - Method in class org.apache.spark.ml.Estimator
Fits a model to the input data.
fit(DataFrame, ParamMap[]) - Method in class org.apache.spark.ml.Estimator
Fits multiple models to the input data with multiple sets of parameters.
fit(DataFrame) - Method in class org.apache.spark.ml.feature.IDF
 
fit(DataFrame) - Method in class org.apache.spark.ml.feature.StandardScaler
 
fit(DataFrame) - Method in class org.apache.spark.ml.feature.StringIndexer
 
fit(DataFrame) - Method in class org.apache.spark.ml.feature.VectorIndexer
 
fit(DataFrame) - Method in class org.apache.spark.ml.feature.Word2Vec
 
fit(DataFrame) - Method in class org.apache.spark.ml.Pipeline
Fits the pipeline to the input dataset with additional parameters.
fit(DataFrame) - Method in class org.apache.spark.ml.Predictor
 
fit(DataFrame) - Method in class org.apache.spark.ml.recommendation.ALS
 
fit(DataFrame) - Method in class org.apache.spark.ml.tuning.CrossValidator
 
fit(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
Returns a ChiSquared feature selector.
fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
Computes the inverse document frequency.
fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
Computes the inverse document frequency.
fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
Computes a PCAModel that contains the principal components of the input vectors.
fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
Java-friendly version of fit()
fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.StandardScaler
Computes the mean and variance and stores as a model to be used for later scaling.
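For illustration, a minimal Scala sketch of fitting the mllib StandardScaler, assuming an existing SparkContext sc:
    import org.apache.spark.mllib.feature.StandardScaler
    import org.apache.spark.mllib.linalg.Vectors

    val vectors = sc.parallelize(Seq(Vectors.dense(1.0, 10.0), Vectors.dense(3.0, 30.0)))
    // compute column means and standard deviations, then scale the data
    val scalerModel = new StandardScaler(withMean = true, withStd = true).fit(vectors)
    val scaled = scalerModel.transform(vectors)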
fit(RDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
 
fit(JavaRDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
Computes the vector representation of each word in vocabulary (Java version).
flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMap(Function1<Row, TraversableOnce<R>>, ClassTag<R>) - Method in class org.apache.spark.sql.DataFrame
Returns a new RDD by first applying a function to all rows of this DataFrame, and then flattening the results.
flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results
flatMap(Function1<T, Traversable<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results
FlatMapFunction<T,R> - Interface in org.apache.spark.api.java.function
A function that returns zero or more output records from each input record.
FlatMapFunction2<T1,T2,R> - Interface in org.apache.spark.api.java.function
A function that takes two inputs and returns zero or more output records.
flatMapToDouble(DoubleFlatMapFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream by applying a function to all elements of this DStream, and then flattening the results
flatMapValues(Function<V, Iterable<U>>) - Method in class org.apache.spark.api.java.JavaPairRDD
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
flatMapValues(Function1<V, TraversableOnce<U>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
flatMapValues(Function<V, Iterable<U>>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
flatMapValues(Function1<V, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
flatMapWith(Function1<Object, A>, boolean, Function2<T, A, Seq<U>>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
FlatMaps f over this RDD, where f takes an additional parameter of type A.
FloatParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Float] for Java.
FloatParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatParam(String, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatParam(org.apache.spark.ml.util.Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
 
FloatParam(org.apache.spark.ml.util.Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
 
floatToFloatWritable(float) - Static method in class org.apache.spark.SparkContext
 
FloatType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the FloatType object.
FloatType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing Float values.
floatWritableConverter() - Static method in class org.apache.spark.SparkContext
 
floor(Column) - Static method in class org.apache.spark.sql.functions
Computes the floor of the given value.
floor(String) - Static method in class org.apache.spark.sql.functions
Computes the floor of the given column.
floor(Duration) - Method in class org.apache.spark.streaming.Time
 
floor(Duration, Time) - Method in class org.apache.spark.streaming.Time
 
FlumeUtils - Class in org.apache.spark.streaming.flume
 
FlumeUtils() - Constructor for class org.apache.spark.streaming.flume.FlumeUtils
 
flush() - Method in class org.apache.spark.io.SnappyOutputStreamWrapper
 
flush() - Method in class org.apache.spark.serializer.SerializationStream
 
flush() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
 
fMeasure(double, double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns f-measure for a given label (category)
fMeasure(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns f1-measure for a given label (category)
fMeasure() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns f-measure (equal to precision and recall, since precision equals recall)
fMeasureByThreshold(double) - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the (threshold, F-Measure) curve.
fMeasureByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Returns the (threshold, F-Measure) curve with beta = 1.0.
fold(T, Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative and commutative function and a neutral "zero value".
fold(T, Function2<T, T, T>) - Method in class org.apache.spark.rdd.RDD
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative and commutative function and a neutral "zero value".
foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).
foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).
foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).
foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).
foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).
foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).
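For illustration, a minimal Scala sketch of foldByKey, assuming an existing SparkContext sc; the zero value 0 is neutral for addition, so it can safely be applied any number of times:
    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 5)))
    val sums = pairs.foldByKey(0)(_ + _)
    // sums.collect() returns Array(("a", 3), ("b", 5)) (order may vary)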
foreach(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Applies a function f to all elements of this RDD.
foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
Applies a function f to all elements of this RDD.
foreach(Function1<Row, BoxedUnit>) - Method in class org.apache.spark.sql.DataFrame
Applies a function f to all rows.
foreach(Function<R, Void>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Deprecated.
As of release 0.9.0, replaced by foreachRDD
foreach(Function2<R, Time, Void>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Deprecated.
As of release 0.9.0, replaced by foreachRDD
foreach(Function1<RDD<T>, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
Deprecated.
As of 0.9.0, replaced by foreachRDD.
foreach(Function2<RDD<T>, Time, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
Deprecated.
As of 0.9.0, replaced by foreachRDD.
foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Matrix
Applies a function f to all the active elements of dense and sparse matrix.
foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Vector
Applies a function f to all the active elements of dense and sparse vector.
foreachAsync(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of the foreach action, which applies a function f to all the elements of this RDD.
foreachAsync(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
Applies a function f to all elements of this RDD.
foreachPartition(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Applies a function f to each partition of this RDD.
foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
Applies a function f to each partition of this RDD.
foreachPartition(Function1<Iterator<Row>, BoxedUnit>) - Method in class org.apache.spark.sql.DataFrame
Applies a function f to each partition of this DataFrame.
foreachPartitionAsync(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of the foreachPartition action, which applies a function f to each partition of this RDD.
foreachPartitionAsync(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
Applies a function f to each partition of this RDD.
foreachRDD(Function<R, Void>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Apply a function to each RDD in this DStream.
foreachRDD(Function2<R, Time, Void>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Apply a function to each RDD in this DStream.
foreachRDD(Function1<RDD<T>, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
Apply a function to each RDD in this DStream.
foreachRDD(Function2<RDD<T>, Time, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
Apply a function to each RDD in this DStream.
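For illustration, a minimal Scala sketch of foreachRDD, assuming an existing SparkContext sc; the host and port of the socket source are placeholders:
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999)   // placeholder source
    lines.foreachRDD { (rdd, time) =>
      println(s"Batch at $time contains ${rdd.count()} lines")
    }
    // ssc.start(); ssc.awaitTermination()   // start and run the streaming computation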
foreachWith(Function1<Object, A>, Function2<T, A, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
Applies f to each element of this RDD, where f takes an additional parameter of type A.
format(String) - Method in class org.apache.spark.sql.DataFrameReader
Specifies the input data source format.
format(String) - Method in class org.apache.spark.sql.DataFrameWriter
Specifies the underlying output data source.
formatVersion() - Method in interface org.apache.spark.mllib.util.Saveable
Current version of model save/load format.
FPGrowth - Class in org.apache.spark.mllib.fpm
:: Experimental ::
FPGrowth() - Constructor for class org.apache.spark.mllib.fpm.FPGrowth
Constructs a default instance with default parameters {minSupport: 0.3, numPartitions: same as the input data}.
FPGrowth.FreqItemset<Item> - Class in org.apache.spark.mllib.fpm
Frequent itemset.
FPGrowth.FreqItemset(Object, long) - Constructor for class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
 
FPGrowthModel<Item> - Class in org.apache.spark.mllib.fpm
:: Experimental ::
FPGrowthModel(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel
 
fractional() - Method in class org.apache.spark.sql.types.DecimalType
 
fractional() - Method in class org.apache.spark.sql.types.DoubleType
 
fractional() - Method in class org.apache.spark.sql.types.FloatType
 
freq() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
 
freqItems(String[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Finding frequent items for columns, possibly with false positives.
freqItems(String[]) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Finding frequent items for columns, possibly with false positives.
freqItems(Seq<String>, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
(Scala-specific) Finding frequent items for columns, possibly with false positives.
freqItems(Seq<String>) - Method in class org.apache.spark.sql.DataFrameStatFunctions
(Scala-specific) Finding frequent items for columns, possibly with false positives.
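A small usage sketch, assuming df is an existing DataFrame with columns a and b:
    // items that appear in at least 40% of rows; the result may contain false positives
    val freq = df.stat.freqItems(Array("a", "b"), 0.4)
    freq.show()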
freqItemsets() - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
 
fromAvroFlumeEvent(AvroFlumeEvent) - Static method in class org.apache.spark.streaming.flume.SparkFlumeEvent
 
fromCaseClassString(String) - Static method in class org.apache.spark.sql.types.DataType
Deprecated.
As of 1.2.0, replaced by DataType.fromJson()
fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
Generate a SparseMatrix from Coordinate List (COO) format.
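For illustration, a short Scala sketch of SparseMatrix.fromCOO:
    import org.apache.spark.mllib.linalg.SparseMatrix

    // each entry is a (rowIndex, columnIndex, value) triple
    val entries = Seq((0, 0, 1.0), (1, 2, 3.0), (2, 1, 5.0))
    val sm = SparseMatrix.fromCOO(3, 3, entries)   // 3x3 sparse matrix stored in CSC format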
fromDStream(DStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
Convert a scala DStream to a Java-friendly JavaDStream.
fromEdgePartitions(RDD<Tuple2<Object, EdgePartition<ED, VD>>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from EdgePartitions, setting referenced vertices to `defaultVertexAttr`.
fromEdges(RDD<Edge<ED>>, ClassTag<ED>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.EdgeRDD
Creates an EdgeRDD from a set of edges.
fromEdges(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of edges.
fromEdges(EdgeRDD<?>, int, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD containing all vertices referred to in edges.
fromEdgeTuples(RDD<Tuple2<Object, Object>>, VD, Option<PartitionStrategy>, StorageLevel, StorageLevel, ClassTag<VD>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of edges encoded as vertex id pairs.
fromExistingRDDs(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from a VertexRDD and an EdgeRDD with the same replicated vertex type as the vertices.
fromInputDStream(InputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
Convert a scala InputDStream to a Java-friendly JavaInputDStream.
fromInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
Convert a scala InputDStream of pairs to a Java-friendly JavaPairInputDStream.
fromJavaDStream(JavaDStream<Tuple2<K, V>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
fromJavaRDD(JavaRDD<Tuple2<K, V>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
Convert a JavaRDD of key-value pairs to JavaPairRDD.
fromJson(String) - Static method in class org.apache.spark.sql.types.DataType
 
fromJson(String) - Static method in class org.apache.spark.sql.types.Metadata
Creates a Metadata instance from JSON.
fromName(String) - Static method in class org.apache.spark.ml.attribute.AttributeType
Gets the AttributeType object from its name.
fromOffset() - Method in class org.apache.spark.streaming.kafka.OffsetRange
inclusive starting offset
fromOld(DecisionTreeModel, DecisionTreeClassifier, Map<Object, Object>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
(private[ml]) Convert a model from the old API
fromOld(GradientBoostedTreesModel, GBTClassifier, Map<Object, Object>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
(private[ml]) Convert a model from the old API
fromOld(RandomForestModel, RandomForestClassifier, Map<Object, Object>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
(private[ml]) Convert a model from the old API
fromOld(DecisionTreeModel, DecisionTreeRegressor, Map<Object, Object>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
(private[ml]) Convert a model from the old API
fromOld(GradientBoostedTreesModel, GBTRegressor, Map<Object, Object>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
(private[ml]) Convert a model from the old API
fromOld(RandomForestModel, RandomForestRegressor, Map<Object, Object>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
(private[ml]) Convert a model from the old API
fromOld(Node, Map<Object, Object>) - Static method in class org.apache.spark.ml.tree.Node
Create a new Node from the old Node format, recursively creating child nodes as needed.
fromPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
fromPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
Implicit conversion from a pair RDD to MLPairRDDFunctions.
fromRDD(RDD<Object>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
 
fromRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.api.java.JavaPairRDD
 
fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.api.java.JavaRDD
 
fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.mllib.rdd.RDDFunctions
Implicit conversion from an RDD to RDDFunctions.
fromRdd(RDD<?>) - Static method in class org.apache.spark.storage.RDDInfo
 
fromReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
Convert a scala ReceiverInputDStream to a Java-friendly JavaReceiverInputDStream.
fromReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
Convert a scala ReceiverInputDStream to a Java-friendly JavaReceiverInputDStream.
fromSparkContext(SparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext
 
fromStage(Stage, Option<Object>) - Static method in class org.apache.spark.scheduler.StageInfo
Construct a StageInfo from a Stage.
fromString(String) - Static method in enum org.apache.spark.JobExecutionStatus
 
fromString(String) - Static method in class org.apache.spark.mllib.tree.loss.Losses
 
fromString(String) - Static method in enum org.apache.spark.status.api.v1.ApplicationStatus
 
fromString(String) - Static method in enum org.apache.spark.status.api.v1.StageStatus
 
fromString(String) - Static method in enum org.apache.spark.status.api.v1.TaskSorting
 
fromString(String) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Return the StorageLevel object with the specified name.
fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group from a StructField instance.
fullOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a full outer join of this and other.
fullOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a full outer join of this and other.
fullOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Perform a full outer join of this and other.
fullOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a full outer join of this and other.
fullOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a full outer join of this and other.
fullOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Perform a full outer join of this and other.
fullOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
fullStackTrace() - Method in class org.apache.spark.ExceptionFailure
 
Function<T1,R> - Interface in org.apache.spark.api.java.function
Base interface for functions whose return types do not create special RDDs.
Function0<R> - Interface in org.apache.spark.api.java.function
A zero-argument function that returns an R.
Function2<T1,T2,R> - Interface in org.apache.spark.api.java.function
A two-argument function that takes arguments of type T1 and T2 and returns an R.
Function3<T1,T2,T3,R> - Interface in org.apache.spark.api.java.function
A three-argument function that takes arguments of type T1, T2 and T3 and returns an R.
functions - Class in org.apache.spark.sql
 
functions() - Constructor for class org.apache.spark.sql.functions
 
FutureAction<T> - Interface in org.apache.spark
A future for the result of an action to support cancellation.
futureExecutionContext() - Static method in class org.apache.spark.rdd.AsyncRDDActions
 

G

gain() - Method in class org.apache.spark.ml.tree.InternalNode
 
gain() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
 
gamma1() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
gamma2() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
gamma6() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
gamma7() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
GammaGenerator - Class in org.apache.spark.mllib.random
:: DeveloperApi :: Generates i.i.d.
GammaGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.GammaGenerator
 
gammaJavaRDD(JavaSparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
gammaJavaRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
gammaJavaRDD(JavaSparkContext, double, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
gammaJavaVectorRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
gammaRDD(SparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD comprised of i.i.d. samples from the gamma distribution with the input shape and scale.
gammaVectorRDD(SparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the gamma distribution with the input shape and scale.
gaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
Indicates whether regex splits on gaps (true) or matches tokens (false).
GaussianMixture - Class in org.apache.spark.mllib.clustering
:: Experimental ::
GaussianMixture() - Constructor for class org.apache.spark.mllib.clustering.GaussianMixture
Constructs a default instance.
GaussianMixtureModel - Class in org.apache.spark.mllib.clustering
:: Experimental ::
GaussianMixtureModel(double[], MultivariateGaussian[]) - Constructor for class org.apache.spark.mllib.clustering.GaussianMixtureModel
 
gaussians() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
 
GBTClassificationModel - Class in org.apache.spark.ml.classification
:: Experimental :: Gradient-Boosted Trees (GBTs) model for classification.
GBTClassificationModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.classification.GBTClassificationModel
 
GBTClassifier - Class in org.apache.spark.ml.classification
:: Experimental :: Gradient-Boosted Trees (GBTs) learning algorithm for classification.
GBTClassifier(String) - Constructor for class org.apache.spark.ml.classification.GBTClassifier
 
GBTClassifier() - Constructor for class org.apache.spark.ml.classification.GBTClassifier
 
GBTRegressionModel - Class in org.apache.spark.ml.regression
:: Experimental ::
GBTRegressionModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.regression.GBTRegressionModel
 
GBTRegressor - Class in org.apache.spark.ml.regression
:: Experimental :: Gradient-Boosted Trees (GBTs) learning algorithm for regression.
GBTRegressor(String) - Constructor for class org.apache.spark.ml.regression.GBTRegressor
 
GBTRegressor() - Constructor for class org.apache.spark.ml.regression.GBTRegressor
 
GeneralizedLinearAlgorithm<M extends GeneralizedLinearModel> - Class in org.apache.spark.mllib.regression
:: DeveloperApi :: GeneralizedLinearAlgorithm implements methods to train a Generalized Linear Model (GLM).
GeneralizedLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
 
GeneralizedLinearModel - Class in org.apache.spark.mllib.regression
:: DeveloperApi :: GeneralizedLinearModel (GLM) represents a model trained using GeneralizedLinearAlgorithm.
GeneralizedLinearModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearModel
 
generate(String, String, int, int) - Static method in class org.apache.spark.examples.streaming.KinesisWordProducerASL
 
generatedRDDs() - Method in class org.apache.spark.streaming.dstream.DStream
 
generateKMeansRDD(SparkContext, int, int, int, double, int) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
Generate an RDD containing test data for KMeans.
generateLinearInput(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
For compatibility, data generated without specifying the mean and variance will have zero mean and a variance of (1.0/3.0), since the original output range is [-1, 1] with a uniform distribution, and the variance of a uniform distribution is (b - a)^2 / 12, which here is (1.0/3.0).
generateLinearInput(double, double[], double[], double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
 
generateLinearInputAsList(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
Return a Java List of synthetic data randomly generated according to a multicollinear model.
generateLinearRDD(SparkContext, int, int, double, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
Generate an RDD containing sample data for Linear Regression models - including Ridge, Lasso, and unregularized variants.
generateLogisticRDD(SparkContext, int, int, double, int, double) - Static method in class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
Generate an RDD containing test data for LogisticRegression.
generateRandomEdges(int, int, int, long) - Static method in class org.apache.spark.graphx.util.GraphGenerators
 
geq(Object) - Method in class org.apache.spark.sql.Column
Greater than or equal to an expression.
get() - Method in interface org.apache.spark.FutureAction
Blocks and returns the result of this job.
get(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
Optionally returns the value associated with a param.
get(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Optionally returns the user-supplied value of a param.
get(String) - Method in class org.apache.spark.SparkConf
Get a parameter; throws a NoSuchElementException if it's not set
get(String, String) - Method in class org.apache.spark.SparkConf
Get a parameter, falling back to a default if not set
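A small usage sketch of the two SparkConf.get variants:
    import org.apache.spark.SparkConf

    val conf = new SparkConf().setAppName("example").set("spark.executor.memory", "2g")
    val mem   = conf.get("spark.executor.memory")       // throws NoSuchElementException if unset
    val cores = conf.get("spark.executor.cores", "1")   // falls back to the default "1"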
get() - Static method in class org.apache.spark.SparkEnv
Returns the SparkEnv.
get(String) - Static method in class org.apache.spark.SparkFiles
Get the absolute path of a file added through SparkContext.addFile().
get(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
get() - Static method in class org.apache.spark.TaskContext
Return the currently active TaskContext.
getActive() - Static method in class org.apache.spark.streaming.StreamingContext
:: Experimental ::
getActiveJobIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns an array containing the ids of all active jobs.
getActiveJobIds() - Method in class org.apache.spark.SparkStatusTracker
Returns an array containing the ids of all active jobs.
getActiveOrCreate(Function0<StreamingContext>) - Static method in class org.apache.spark.streaming.StreamingContext
:: Experimental ::
getActiveOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
:: Experimental ::
getActiveStageIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns an array containing the ids of all active stages.
getActiveStageIds() - Method in class org.apache.spark.SparkStatusTracker
Returns an array containing the ids of all active stages.
getAkkaConf() - Method in class org.apache.spark.SparkConf
Get all akka conf variables set on this SparkConf
getAlgo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getAll() - Method in class org.apache.spark.SparkConf
Get all parameters as a list of pairs
getAllConfs() - Method in class org.apache.spark.sql.SQLContext
Return all the configuration properties that have been set (i.e.
getAllPools() - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Return pools for fair scheduler
getAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
Alias for getDocConcentration
getAppId() - Method in class org.apache.spark.SparkConf
Returns the Spark application id, valid in the Driver after TaskScheduler registration and from the start in the Executor.
getAs(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
getAs(String) - Method in interface org.apache.spark.sql.Row
Returns the value of a given fieldName.
getAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its name.
getAttr(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its index.
getBeta() - Method in class org.apache.spark.mllib.clustering.LDA
Alias for getTopicConcentration
getBlock(BlockId) - Method in class org.apache.spark.storage.StorageStatus
Return the given block stored in this block manager in O(1) time.
getBoolean(String, boolean) - Method in class org.apache.spark.SparkConf
Get a parameter as a boolean, falling back to a default if not set
getBoolean(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive boolean.
getBoolean(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Boolean.
getBooleanArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Boolean array.
getByte(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive byte.
getBytes() - Method in class org.apache.spark.sql.types.UTF8String
 
getCachedBlockManagerId(BlockManagerId) - Static method in class org.apache.spark.storage.BlockManagerId
 
getCachedMetadata(String) - Static method in class org.apache.spark.rdd.HadoopRDD
The three methods below are helpers for accessing the local map, a property of the SparkEnv of the local process.
getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
 
getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Get the custom datatype mapping for the given jdbc meta information.
getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
getCategoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getCategoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexer.CategoryStats
Based on stats collected, decide which features are categorical, and choose indices for categories.
getCheckpointDir() - Method in class org.apache.spark.api.java.JavaSparkContext
 
getCheckpointDir() - Method in class org.apache.spark.SparkContext
 
getCheckpointFile() - Method in interface org.apache.spark.api.java.JavaRDDLike
Gets the name of the file to which this RDD was checkpointed
getCheckpointFile() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
getCheckpointFile() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
getCheckpointFile() - Method in class org.apache.spark.rdd.RDD
Gets the name of the file to which this RDD was checkpointed
getCheckpointFiles() - Method in class org.apache.spark.graphx.Graph
Gets the name of the files to which this Graph was checkpointed.
getCheckpointFiles() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
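A minimal sketch of checkpointing an RDD and reading back its checkpoint file, assuming an active SparkContext sc and an illustrative directory:
    sc.setCheckpointDir("/tmp/checkpoints")        // directory is illustrative
    val rdd = sc.parallelize(1 to 100).map(_ * 2)
    rdd.checkpoint()
    rdd.count()                                    // materializes the RDD and writes the checkpoint
    println(rdd.getCheckpointFile)                 // Some(<checkpoint path>), or None if never checkpointed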
getCheckpointInterval() - Method in class org.apache.spark.mllib.clustering.LDA
Period (in iterations) between checkpoints.
getCheckpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getConf() - Method in class org.apache.spark.api.java.JavaSparkContext
Return a copy of this JavaSparkContext's configuration.
getConf() - Method in class org.apache.spark.rdd.HadoopRDD
 
getConf() - Method in class org.apache.spark.rdd.NewHadoopRDD
 
getConf() - Method in class org.apache.spark.SparkContext
Return a copy of this SparkContext's configuration.
getConf(String) - Method in class org.apache.spark.sql.SQLContext
Return the value of Spark SQL configuration property for the given key.
getConf(String, String) - Method in class org.apache.spark.sql.SQLContext
Return the value of Spark SQL configuration property for the given key.
getConnection() - Method in interface org.apache.spark.rdd.JdbcRDD.ConnectionFactory
 
getConvergenceTol() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the largest change in log-likelihood at which convergence is considered to have occurred.
getDate(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of date type as java.sql.Date.
getDecimal(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of decimal type as java.math.BigDecimal.
getDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Gets the default value of a parameter.
getDegree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
 
getDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
 
getDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
 
getDependencies() - Method in class org.apache.spark.rdd.UnionRDD
 
getDeprecatedConfig(String, SparkConf) - Static method in class org.apache.spark.SparkConf
Looks for available deprecated keys for the given config option, and returns the first value available.
getDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
getDouble(String, double) - Method in class org.apache.spark.SparkConf
Get a parameter as a double, falling back to a default if not set
getDouble(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive double.
getDouble(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Double.
getDoubleArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Double array.
getEpsilon() - Method in class org.apache.spark.mllib.clustering.KMeans
The distance threshold within which we consider centers to have converged.
getExecutorEnv() - Method in class org.apache.spark.SparkConf
Get all executor environment variables set on this SparkConf
getExecutorMemoryStatus() - Method in class org.apache.spark.SparkContext
Return a map from the slave to the max memory available for caching and the remaining memory available for caching.
getExecutorStorageStatus() - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Return information about blocks stored in all of the slaves
getField(String) - Method in class org.apache.spark.sql.Column
An expression that gets a field by name in a StructType.
getFinalValue() - Method in class org.apache.spark.partial.PartialResult
Blocking method to wait for and return the final value.
getFloat(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive float.
getGaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
getImpurity() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getInitializationMode() - Method in class org.apache.spark.mllib.clustering.KMeans
The initialization algorithm.
getInitializationSteps() - Method in class org.apache.spark.mllib.clustering.KMeans
Number of steps for the k-means|| initialization mode
getInitialModel() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the user supplied initial GMM, if supplied
getInt(String, int) - Method in class org.apache.spark.SparkConf
Get a parameter as an integer, falling back to a default if not set
getInt(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive int.
getItem(Object) - Method in class org.apache.spark.sql.Column
An expression that gets an item at position ordinal out of an array, or gets a value by key in a MapType.
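For example, a hedged sketch assuming a DataFrame df with an array column "tags", a map column "props" and a struct column "addr":
    import org.apache.spark.sql.functions.col

    df.select(
      col("tags").getItem(0),         // first element of the array column
      col("props").getItem("owner"),  // value for key "owner" in the map column
      col("addr").getField("city")    // field "city" of the struct column
    )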
getJavaMap(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of array type as a Map.
getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
 
getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Retrieve the jdbc / sql type for a given datatype.
getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
getJobIdsForGroup(String) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Return a list of all known jobs in a particular job group.
getJobIdsForGroup(String) - Method in class org.apache.spark.SparkStatusTracker
Return a list of all known jobs in a particular job group.
getJobInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns job information, or null if the job info could not be found or was garbage collected.
getJobInfo(int) - Method in class org.apache.spark.SparkStatusTracker
Returns job information, or None if the job info could not be found or was garbage collected.
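A small sketch of polling job status through the status tracker, assuming an active SparkContext sc:
    val tracker = sc.statusTracker
    val active  = tracker.getActiveJobIds()                  // Array[Int] of running job ids
    active.headOption.flatMap(tracker.getJobInfo).foreach { info =>
      println(s"job ${info.jobId()} is ${info.status()}")    // jobId/status come from SparkJobInfo
    }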
getK() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the number of Gaussians in the mixture model
getK() - Method in class org.apache.spark.mllib.clustering.KMeans
Number of clusters to create (k).
getK() - Method in class org.apache.spark.mllib.clustering.LDA
Number of topics to infer.
getKappa() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Learning rate: exponential decay rate
getLambda() - Method in class org.apache.spark.mllib.classification.NaiveBayes
Get the smoothing parameter.
getLDAModel(double[]) - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer
 
getLearningRate() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
getLeastGroupHash(String) - Method in class org.apache.spark.rdd.PartitionCoalescer
Sorts and gets the least element of the list associated with key in groupHash. The returned PartitionGroup is the least loaded of all groups that represent the machine "key".
getList(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of array type as List.
getLocalProperty(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Get a local property set in this thread, or null if it is missing.
getLocalProperty(String) - Method in class org.apache.spark.SparkContext
Get a local property set in this thread, or null if it is missing.
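For example (assuming an active SparkContext sc; the property name is just a common example):
    sc.setLocalProperty("spark.scheduler.pool", "reporting")
    println(sc.getLocalProperty("spark.scheduler.pool"))   // "reporting", or null if never set in this thread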
getLong(String, long) - Method in class org.apache.spark.SparkConf
Get a parameter as a long, falling back to a default if not set
getLong(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive long.
getLong(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Long.
getLongArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Long array.
getLoss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
getLossType() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
getLossType() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
getMap(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of map type as a Scala Map.
getMaxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getMaxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getMaxIterations() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the maximum number of iterations to run
getMaxIterations() - Method in class org.apache.spark.mllib.clustering.KMeans
Maximum number of iterations to run.
getMaxIterations() - Method in class org.apache.spark.mllib.clustering.LDA
Maximum number of iterations for learning.
getMaxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getMessage() - Method in exception org.apache.spark.sql.AnalysisException
 
getMetadata(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Metadata.
getMetadataArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a Metadata array.
getMetricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
getMetricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
getMiniBatchFraction() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
Mini-batch fraction, which sets the fraction of documents sampled and used in each iteration.
getMinInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getMinInstancesPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getMinTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
getModelType() - Method in class org.apache.spark.mllib.classification.NaiveBayes
Get the model type.
getNode(int, Node) - Static method in class org.apache.spark.mllib.tree.model.Node
Traces down from a root node to get the node with the given node index.
getNumClasses() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getNumFeatures() - Method in class org.apache.spark.ml.feature.HashingTF
 
getNumFeatures() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
The dimension of training features.
getNumIterations() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
getNumValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
Get the number of values, either from numValues or from values.
getOptimizer() - Method in class org.apache.spark.mllib.clustering.LDA
:: DeveloperApi ::
getOption(String) - Method in class org.apache.spark.SparkConf
Get a parameter as an Option
getOrCreate(SparkConf) - Static method in class org.apache.spark.SparkContext
This function may be used to get or instantiate a SparkContext and register it as a singleton object.
getOrCreate() - Static method in class org.apache.spark.SparkContext
This function may be used to get or instantiate a SparkContext and register it as a singleton object.
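A minimal sketch of the singleton pattern this enables (the master and app name are illustrative):
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setAppName("example").setMaster("local[2]")
    val sc = SparkContext.getOrCreate(conf)   // reuses a running SparkContext if one exists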
getOrCreate(SparkContext) - Static method in class org.apache.spark.sql.SQLContext
Get the singleton SQLContext if it exists or create a new one using the given SparkContext.
getOrCreate(String, JavaStreamingContextFactory) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Deprecated.
As of 1.4.0, replaced by getOrCreate without JavaStreamingContextFactory.
getOrCreate(String, Configuration, JavaStreamingContextFactory) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Deprecated.
As of 1.4.0, replaced by getOrCreate without JavaStreamingContextFactory.
getOrCreate(String, Configuration, JavaStreamingContextFactory, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Deprecated.
As of 1.4.0, replaced by getOrCreate without JavaStreamingContextFactory.
getOrCreate(String, Function0<JavaStreamingContext>) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreate(String, Function0<JavaStreamingContext>, Configuration) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreate(String, Function0<JavaStreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
getOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
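A hedged sketch of checkpoint recovery, assuming an existing SparkContext sc and an illustrative checkpoint directory:
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    def createContext(): StreamingContext = {
      val ssc = new StreamingContext(sc, Seconds(10))
      ssc.checkpoint("/tmp/streaming-checkpoint")   // same directory passed to getOrCreate
      // ... define the DStream graph here ...
      ssc
    }
    val ssc = StreamingContext.getOrCreate("/tmp/streaming-checkpoint", createContext _)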
getOrDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Gets the value of a param in the embedded param map or its default value.
getOrElse(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
Returns the value associated with a param or a default value.
getP() - Method in class org.apache.spark.ml.feature.Normalizer
 
getParam(String) - Method in interface org.apache.spark.ml.param.Params
Gets a param by its name.
getParents(int) - Method in class org.apache.spark.NarrowDependency
Get the parent partitions for a child partition.
getParents(int) - Method in class org.apache.spark.OneToOneDependency
 
getParents(int) - Method in class org.apache.spark.RangeDependency
 
getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
 
getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
 
getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
 
getPartition(long, long, int) - Method in interface org.apache.spark.graphx.PartitionStrategy
Returns the partition number for a given edge.
getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
 
getPartition(Object) - Method in class org.apache.spark.HashPartitioner
 
getPartition(Object) - Method in class org.apache.spark.Partitioner
 
getPartition(Object) - Method in class org.apache.spark.RangePartitioner
 
getPartitions() - Method in class org.apache.spark.api.r.BaseRRDD
 
getPartitions() - Method in class org.apache.spark.rdd.CoGroupedRDD
 
getPartitions() - Method in class org.apache.spark.rdd.HadoopRDD
 
getPartitions() - Method in class org.apache.spark.rdd.JdbcRDD
 
getPartitions() - Method in class org.apache.spark.rdd.NewHadoopRDD
 
getPartitions() - Method in class org.apache.spark.rdd.PartitionCoalescer
 
getPartitions() - Method in class org.apache.spark.rdd.ShuffledRDD
 
getPartitions() - Method in class org.apache.spark.rdd.UnionRDD
 
getPath() - Method in class org.apache.spark.input.PortableDataStream
 
getPattern() - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
getPersistentRDDs() - Method in class org.apache.spark.SparkContext
Returns an immutable map of RDDs that have marked themselves as persistent via a cache() call.
getPoolForName(String) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Return the pool associated with the given name, if one exists
getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.HadoopRDD
 
getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.NewHadoopRDD
 
getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.UnionRDD
 
getQuantileCalculationStrategy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getRDDStorageInfo() - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Return information about what RDDs are cached, whether they are in memory or on disk, how much space they take, etc.
getReceiver() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
Gets the receiver object that will be sent to the worker nodes to receive data.
getRootDirectory() - Static method in class org.apache.spark.SparkFiles
Get the root directory that contains files added through SparkContext.addFile().
getRuns() - Method in class org.apache.spark.mllib.clustering.KMeans
:: Experimental :: Number of runs of the algorithm to execute in parallel.
getScalingVec() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
 
getSchedulingMode() - Method in class org.apache.spark.SparkContext
Return current scheduling mode
getSeed() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
Return the random seed
getSeed() - Method in class org.apache.spark.mllib.clustering.KMeans
The random seed for cluster initialization.
getSeed() - Method in class org.apache.spark.mllib.clustering.LDA
Random seed
getSeq(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of array type as a Scala Seq.
getSerializer(Serializer) - Static method in class org.apache.spark.serializer.Serializer
 
getSerializer(Option<Serializer>) - Static method in class org.apache.spark.serializer.Serializer
 
getShort(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a primitive short.
getSizeAsBytes(String) - Method in class org.apache.spark.SparkConf
Get a size parameter as bytes; throws a NoSuchElementException if it's not set.
getSizeAsBytes(String, String) - Method in class org.apache.spark.SparkConf
Get a size parameter as bytes, falling back to a default if not set.
getSizeAsGb(String) - Method in class org.apache.spark.SparkConf
Get a size parameter as Gibibytes; throws a NoSuchElementException if it's not set.
getSizeAsGb(String, String) - Method in class org.apache.spark.SparkConf
Get a size parameter as Gibibytes, falling back to a default if not set.
getSizeAsKb(String) - Method in class org.apache.spark.SparkConf
Get a size parameter as Kibibytes; throws a NoSuchElementException if it's not set.
getSizeAsKb(String, String) - Method in class org.apache.spark.SparkConf
Get a size parameter as Kibibytes, falling back to a default if not set.
getSizeAsMb(String) - Method in class org.apache.spark.SparkConf
Get a size parameter as Mebibytes; throws a NoSuchElementException if it's not set.
getSizeAsMb(String, String) - Method in class org.apache.spark.SparkConf
Get a size parameter as Mebibytes, falling back to a default if not set.
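For example, size strings accept suffixes such as k, m and g, and each getter converts to its unit (the values shown assume exactly this conf):
    import org.apache.spark.SparkConf

    val conf = new SparkConf().set("spark.reducer.maxSizeInFlight", "48m")
    conf.getSizeAsBytes("spark.reducer.maxSizeInFlight")   // 50331648
    conf.getSizeAsKb("spark.reducer.maxSizeInFlight")      // 49152
    conf.getSizeAsKb("spark.shuffle.file.buffer", "32k")   // key unset here, so the default applies: 32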
getSparkHome() - Method in class org.apache.spark.api.java.JavaSparkContext
Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference).
getSplits() - Method in class org.apache.spark.ml.feature.Bucketizer
 
getStageInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
Returns stage information, or null if the stage info could not be found or was garbage collected.
getStageInfo(int) - Method in class org.apache.spark.SparkStatusTracker
Returns stage information, or None if the stage info could not be found or was garbage collected.
getStages() - Method in class org.apache.spark.ml.Pipeline
 
getState() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
:: DeveloperApi ::
getState() - Method in class org.apache.spark.streaming.StreamingContext
:: DeveloperApi ::
getStorageLevel() - Method in interface org.apache.spark.api.java.JavaRDDLike
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
getStorageLevel() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
getStorageLevel() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
getStorageLevel() - Method in class org.apache.spark.rdd.RDD
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
getString(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i as a String object.
getString(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a String.
getStringArray(String) - Method in class org.apache.spark.sql.types.Metadata
Gets a String array.
getStruct(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i of struct type as a Row object.
getSubsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getTau0() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
A (positive) learning parameter that downweights early iterations.
getThreadLocal() - Static method in class org.apache.spark.SparkEnv
Returns the ThreadLocal SparkEnv.
getThreshold() - Method in class org.apache.spark.ml.feature.Binarizer
 
getThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
:: Experimental :: Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
getThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
:: Experimental :: Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
getTimeAsMs(String) - Method in class org.apache.spark.SparkConf
Get a time parameter as milliseconds; throws a NoSuchElementException if it's not set.
getTimeAsMs(String, String) - Method in class org.apache.spark.SparkConf
Get a time parameter as milliseconds, falling back to a default if not set.
getTimeAsSeconds(String) - Method in class org.apache.spark.SparkConf
Get a time parameter as seconds; throws a NoSuchElementException if it's not set.
getTimeAsSeconds(String, String) - Method in class org.apache.spark.SparkConf
Get a time parameter as seconds, falling back to a default if not set.
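Similarly for time strings, which accept suffixes such as ms, s and m (the values shown assume exactly this conf):
    import org.apache.spark.SparkConf

    val conf = new SparkConf().set("spark.network.timeout", "2m")
    conf.getTimeAsSeconds("spark.network.timeout")          // 120
    conf.getTimeAsMs("spark.network.timeout")               // 120000
    conf.getTimeAsSeconds("spark.rpc.askTimeout", "120s")   // key unset here, so the default applies: 120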
gettingResult() - Method in class org.apache.spark.scheduler.TaskInfo
 
gettingResultTime() - Method in class org.apache.spark.scheduler.TaskInfo
The time when the task started remotely getting the result.
getTopicConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
getTreeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
getUseNodeIdCache() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
getValidationTol() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
getValue(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Gets a value given its index.
getValuesMap(Seq<String>) - Method in interface org.apache.spark.sql.Row
Returns a Map(name -> value) for the requested fieldNames
getVectors() - Method in class org.apache.spark.mllib.feature.Word2VecModel
Returns a map of words to their vector representations.
Gini - Class in org.apache.spark.mllib.tree.impurity
:: Experimental :: Class for calculating the Gini impurity during binary classification.
Gini() - Constructor for class org.apache.spark.mllib.tree.impurity.Gini
 
globalTopicTotals() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
Aggregate distributions over topics from all term vertices.
glom() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an RDD created by coalescing all elements within each partition into an array.
glom() - Method in class org.apache.spark.rdd.RDD
Return an RDD created by coalescing all elements within each partition into an array.
glom() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream.
glom() - Method in class org.apache.spark.streaming.dstream.DStream
Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream.
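For example, glom exposes partition boundaries (assuming an active SparkContext sc; the exact grouping depends on the partitioning):
    val rdd = sc.parallelize(1 to 10, numSlices = 3)
    rdd.glom().collect()   // e.g. Array(Array(1, 2, 3), Array(4, 5, 6), Array(7, 8, 9, 10))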
gradient() - Method in class org.apache.spark.ml.classification.LogisticAggregator
 
gradient() - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
 
Gradient - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Class used to compute the gradient for a loss function, given a single data point.
Gradient() - Constructor for class org.apache.spark.mllib.optimization.Gradient
 
gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.AbsoluteError
Method to calculate the gradients for the gradient boosting calculation for least absolute error calculation.
gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.LogLoss
Method to calculate the loss gradients for the gradient boosting calculation for binary classification. The gradient with respect to F(x) is: - 4 y / (1 + exp(2 y F(x)))
gradient(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate the gradients for the gradient boosting calculation.
gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.SquaredError
Method to calculate the gradients for the gradient boosting calculation for least squares error calculation.
GradientBoostedTrees - Class in org.apache.spark.mllib.tree
:: Experimental :: A class that implements Stochastic Gradient Boosting for regression and binary classification.
GradientBoostedTrees(BoostingStrategy) - Constructor for class org.apache.spark.mllib.tree.GradientBoostedTrees
 
GradientBoostedTreesModel - Class in org.apache.spark.mllib.tree.model
:: Experimental :: Represents a gradient boosted trees model.
GradientBoostedTreesModel(Enumeration.Value, DecisionTreeModel[], double[]) - Constructor for class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
 
GradientDescent - Class in org.apache.spark.mllib.optimization
Class used to solve an optimization problem using Gradient Descent.
Graph<VD,ED> - Class in org.apache.spark.graphx
The Graph abstractly represents a graph with arbitrary objects associated with vertices and edges.
graph() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
The following fields will only be initialized through the initialize() method
graph() - Method in class org.apache.spark.streaming.dstream.DStream
 
graph() - Method in class org.apache.spark.streaming.StreamingContext
 
GraphGenerators - Class in org.apache.spark.graphx.util
A collection of graph generating functions.
GraphGenerators() - Constructor for class org.apache.spark.graphx.util.GraphGenerators
 
GraphImpl<VD,ED> - Class in org.apache.spark.graphx.impl
An implementation of Graph to support computation on graphs.
GraphKryoRegistrator - Class in org.apache.spark.graphx
Registers GraphX classes with Kryo for improved performance.
GraphKryoRegistrator() - Constructor for class org.apache.spark.graphx.GraphKryoRegistrator
 
GraphLoader - Class in org.apache.spark.graphx
Provides utilities for loading Graphs from files.
GraphLoader() - Constructor for class org.apache.spark.graphx.GraphLoader
 
GraphOps<VD,ED> - Class in org.apache.spark.graphx
Contains additional functionality for Graph.
GraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Constructor for class org.apache.spark.graphx.GraphOps
 
graphToGraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Implicitly extracts the GraphOps member from a graph.
GraphXUtils - Class in org.apache.spark.graphx
 
GraphXUtils() - Constructor for class org.apache.spark.graphx.GraphXUtils
 
greater(Duration) - Method in class org.apache.spark.streaming.Duration
 
greater(Time) - Method in class org.apache.spark.streaming.Time
 
greaterEq(Duration) - Method in class org.apache.spark.streaming.Duration
 
greaterEq(Time) - Method in class org.apache.spark.streaming.Time
 
GreaterThan - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a value greater than value.
GreaterThan(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThan
 
GreaterThanOrEqual - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a value greater than or equal to value.
GreaterThanOrEqual(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThanOrEqual
 
gridGraph(SparkContext, int, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
Create a rows x cols grid graph with each vertex connected to its row+1 and col+1 neighbors.
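A small sketch, assuming an active SparkContext sc; vertex attributes are (row, col) pairs and edge attributes are 1.0:
    import org.apache.spark.graphx.util.GraphGenerators

    val grid = GraphGenerators.gridGraph(sc, 3, 4)
    grid.vertices.count()   // 12 vertices
    grid.edges.count()      // 17 edges (3*3 toward col+1 plus 2*4 toward row+1)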
groupArr() - Method in class org.apache.spark.rdd.PartitionCoalescer
 
groupBy(Function<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an RDD of grouped elements.
groupBy(Function<T, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an RDD of grouped elements.
groupBy(Function1<T, K>, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
Return an RDD of grouped items.
groupBy(Function1<T, K>, int, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
Return an RDD of grouped elements.
groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Method in class org.apache.spark.rdd.RDD
Return an RDD of grouped items.
groupBy(Column...) - Method in class org.apache.spark.sql.DataFrame
Groups the DataFrame using the specified columns, so we can run aggregation on them.
groupBy(String, String...) - Method in class org.apache.spark.sql.DataFrame
Groups the DataFrame using the specified columns, so we can run aggregation on them.
groupBy(Seq<Column>) - Method in class org.apache.spark.sql.DataFrame
Groups the DataFrame using the specified columns, so we can run aggregation on them.
groupBy(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrame
Groups the DataFrame using the specified columns, so we can run aggregation on them.
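For example, a hedged sketch assuming a DataFrame df with columns "dept" and "salary":
    import org.apache.spark.sql.functions.{avg, count}

    df.groupBy("dept").agg(count("*"), avg("salary"))   // one row per department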
groupByKey(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Group the values for each key in the RDD into a single sequence.
groupByKey(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Group the values for each key in the RDD into a single sequence.
groupByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
Group the values for each key in the RDD into a single sequence.
groupByKey(Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Group the values for each key in the RDD into a single sequence.
groupByKey(int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Group the values for each key in the RDD into a single sequence.
groupByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
Group the values for each key in the RDD into a single sequence.
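For example (assuming an active SparkContext sc); note that reduceByKey or aggregateByKey is usually preferable when only a combined value per key is needed:
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    pairs.groupByKey().collect()   // e.g. Array(("a", Iterable(1, 3)), ("b", Iterable(2)))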
groupByKey() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey to each RDD.
groupByKey(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey to each RDD.
groupByKey(Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey on each RDD of this DStream.
groupByKey() - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey to each RDD.
groupByKey(int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey to each RDD.
groupByKey(Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey on each RDD.
groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Create a new DStream by applying groupByKey over a sliding window on this DStream.
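For example, a sketch assuming pairStream is a DStream[(String, Int)]: group the values seen per key over the last 30 seconds of data, recomputed every 10 seconds:
    import org.apache.spark.streaming.Seconds

    val windowed = pairStream.groupByKeyAndWindow(Seconds(30), Seconds(10))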
GroupedData - Class in org.apache.spark.sql
:: Experimental :: A set of methods for aggregations on a DataFrame, created by DataFrame.groupBy.
groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.Graph
Merges multiple edges between two vertices into a single edge.
groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
groupHash() - Method in class org.apache.spark.rdd.PartitionCoalescer
 
groupWith(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Alias for cogroup.
groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
Alias for cogroup.
groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
Alias for cogroup.
groupWith(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Alias for cogroup.
groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Alias for cogroup.
groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Alias for cogroup.
gt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check if value > lowerBound
gt(Object) - Method in class org.apache.spark.sql.Column
Greater than.
gtEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check if value >= lowerBound

H

hadoopConfiguration() - Method in class org.apache.spark.api.java.JavaSparkContext
Returns the Hadoop configuration used for the Hadoop code (e.g.
hadoopConfiguration() - Method in class org.apache.spark.SparkContext
A default Hadoop Configuration for the Hadoop code (e.g.
hadoopFile(String, Class<F>, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop file with an arbitrary InputFormat.
hadoopFile(String, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop file with an arbitrary InputFormat
hadoopFile(String, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
Get an RDD for a Hadoop file with an arbitrary InputFormat
hadoopFile(String, int, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
Smarter version of hadoopFile() that uses class tags to figure out the classes of keys, values and the InputFormat so that users don't need to pass them directly.
hadoopFile(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.SparkContext
Smarter version of hadoopFile() that uses class tags to figure out the classes of keys, values and the InputFormat so that users don't need to pass them directly.
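For example, the class-tag variant infers the key, value and InputFormat classes from the type parameters (assuming an active SparkContext sc; the path is illustrative):
    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapred.TextInputFormat

    val lines = sc.hadoopFile[LongWritable, Text, TextInputFormat]("hdfs:///data/input.txt")
    lines.map { case (_, text) => text.toString }.take(5)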
HadoopFsRelation - Class in org.apache.spark.sql.sources
::Experimental:: A BaseRelation that provides much of the common code required for formats that store their data to an HDFS compatible filesystem.
HadoopFsRelation() - Constructor for class org.apache.spark.sql.sources.HadoopFsRelation
 
HadoopFsRelationProvider - Interface in org.apache.spark.sql.sources
::Experimental:: Implemented by objects that produce relations for a specific kind of data source with a given schema and partitioned columns.
hadoopJobMetadata() - Method in class org.apache.spark.SparkEnv
 
hadoopRDD(JobConf, Class<F>, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g.
hadoopRDD(JobConf, Class<F>, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g.
HadoopRDD<K,V> - Class in org.apache.spark.rdd
:: DeveloperApi :: An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the older MapReduce API (org.apache.hadoop.mapred).
HadoopRDD(SparkContext, Broadcast<SerializableWritable<Configuration>>, Option<Function1<JobConf, BoxedUnit>>, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Constructor for class org.apache.spark.rdd.HadoopRDD
 
HadoopRDD(SparkContext, JobConf, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Constructor for class org.apache.spark.rdd.HadoopRDD
 
hadoopRDD(JobConf, Class<? extends InputFormat<K, V>>, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf given its InputFormat and other necessary info (e.g.
hammingLoss() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns the Hamming loss.
handle(Signal) - Method in class org.apache.spark.util.SignalLoggerHandler
 
hasAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Test whether this attribute group contains a specific attribute.
hasDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
Tests whether the input param has a default value set.
hashCode() - Method in class org.apache.spark.graphx.EdgeDirection
 
hashCode() - Method in class org.apache.spark.HashPartitioner
 
hashCode() - Method in class org.apache.spark.ml.attribute.AttributeGroup
 
hashCode() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
hashCode() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
hashCode() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
hashCode() - Method in class org.apache.spark.ml.param.Param
 
hashCode() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
hashCode() - Method in class org.apache.spark.mllib.linalg.DenseVector
 
hashCode() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
hashCode() - Method in interface org.apache.spark.mllib.linalg.Vector
Returns a hash code value for the vector.
hashCode() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
 
hashCode() - Method in class org.apache.spark.mllib.tree.model.Predict
 
hashCode() - Method in interface org.apache.spark.Partition
 
hashCode() - Method in class org.apache.spark.RangePartitioner
 
hashCode() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
hashCode() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
hashCode() - Method in class org.apache.spark.scheduler.SplitInfo
 
hashCode() - Method in class org.apache.spark.sql.Column
 
hashCode() - Method in interface org.apache.spark.sql.Row
 
hashCode() - Method in class org.apache.spark.sql.types.Decimal
 
hashCode() - Method in class org.apache.spark.sql.types.Metadata
 
hashCode() - Method in class org.apache.spark.sql.types.UTF8String
 
hashCode() - Method in class org.apache.spark.storage.BlockId
 
hashCode() - Method in class org.apache.spark.storage.BlockManagerId
 
hashCode() - Method in class org.apache.spark.storage.StorageLevel
 
hashCode() - Method in class org.apache.spark.streaming.kafka.Broker
 
hashCode() - Method in class org.apache.spark.streaming.kafka.OffsetRange
 
HashingTF - Class in org.apache.spark.ml.feature
:: Experimental :: Maps a sequence of terms to their term frequencies using the hashing trick.
HashingTF(String) - Constructor for class org.apache.spark.ml.feature.HashingTF
 
HashingTF() - Constructor for class org.apache.spark.ml.feature.HashingTF
 
HashingTF - Class in org.apache.spark.mllib.feature
:: Experimental :: Maps a sequence of terms to their term frequencies using the hashing trick.
HashingTF(int) - Constructor for class org.apache.spark.mllib.feature.HashingTF
 
HashingTF() - Constructor for class org.apache.spark.mllib.feature.HashingTF
 
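A minimal sketch of the mllib variant, which hashes each term into a fixed-size count vector:
    import org.apache.spark.mllib.feature.HashingTF

    val tf = new HashingTF(1000)                                          // 1000 feature buckets
    val vector = tf.transform(Seq("spark", "hashing", "trick", "spark"))
    // a SparseVector of size 1000, with a count of 2 in the bucket "spark" hashes to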
HashPartitioner - Class in org.apache.spark
A Partitioner that implements hash-based partitioning using Java's Object.hashCode.
HashPartitioner(int) - Constructor for class org.apache.spark.HashPartitioner
 
hasNext() - Method in class org.apache.spark.InterruptibleIterator
 
hasNext() - Method in class org.apache.spark.rdd.PartitionCoalescer.LocationIterator
 
HasOffsetRanges - Interface in org.apache.spark.streaming.kafka
:: Experimental :: Represents any object that has a collection of OffsetRanges.
hasParam(String) - Method in interface org.apache.spark.ml.param.Params
Tests whether this instance contains a param with a given name.
hasParent() - Method in class org.apache.spark.ml.Model
Indicates whether this Model has a corresponding parent.
hasValue(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Tests whether this attribute contains a specific value.
head(int) - Method in class org.apache.spark.sql.DataFrame
Returns the first n rows.
head() - Method in class org.apache.spark.sql.DataFrame
Returns the first row.
high() - Method in class org.apache.spark.partial.BoundedDouble
 
HingeGradient - Class in org.apache.spark.mllib.optimization
:: DeveloperApi :: Compute gradient and loss for a Hinge loss function, as used in SVM binary classification.
HingeGradient() - Constructor for class org.apache.spark.mllib.optimization.HingeGradient
 
histogram(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD.
histogram(double[]) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Compute a histogram using the provided buckets.
histogram(Double[], boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
histogram(int) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD.
histogram(double[], boolean) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
Compute a histogram using the provided buckets.
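For example (assuming an active SparkContext sc):
    val data = sc.parallelize(Seq(1.0, 2.0, 2.5, 7.0, 9.5))
    val (buckets, counts) = data.histogram(2)    // buckets = Array(1.0, 5.25, 9.5), counts = Array(3, 2)
    data.histogram(Array(0.0, 5.0, 10.0))        // counts per provided bucket: Array(3, 2)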
HIVE_METASTORE_JARS() - Static method in class org.apache.spark.sql.hive.HiveContext
 
HIVE_METASTORE_VERSION() - Static method in class org.apache.spark.sql.hive.HiveContext
 
HiveContext - Class in org.apache.spark.sql.hive
An instance of the Spark SQL execution engine that integrates with data stored in Hive.
HiveContext(SparkContext) - Constructor for class org.apache.spark.sql.hive.HiveContext
 
hiveExecutionVersion() - Static method in class org.apache.spark.sql.hive.HiveContext
The version of hive used internally by Spark SQL.
horzcat(Matrix[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
Horizontally concatenate a sequence of matrices.
host() - Method in class org.apache.spark.scheduler.TaskInfo
 
host() - Method in class org.apache.spark.status.api.v1.TaskData
 
host() - Method in class org.apache.spark.storage.BlockManagerId
 
host() - Method in class org.apache.spark.streaming.kafka.Broker
Broker's hostname
hostLocation() - Method in class org.apache.spark.scheduler.SplitInfo
 
hostPort() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
hostPort() - Method in class org.apache.spark.storage.BlockManagerId
 
hours() - Static method in class org.apache.spark.scheduler.StatsReportListener
 
HttpBroadcastFactory - Class in org.apache.spark.broadcast
A BroadcastFactory implementation that uses a HTTP server as the broadcast mechanism.
HttpBroadcastFactory() - Constructor for class org.apache.spark.broadcast.HttpBroadcastFactory
 
httpFileServer() - Method in class org.apache.spark.SparkEnv
 
hypot(Column, Column) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(Column, String) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(String, Column) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(String, String) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(Column, double) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(String, double) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(double, Column) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
hypot(double, String) - Static method in class org.apache.spark.sql.functions
Computes sqrt(a^2^ + b^2^) without intermediate overflow or underflow.
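For example, a sketch assuming a DataFrame df with numeric columns "x" and "y":
    import org.apache.spark.sql.functions.{col, hypot}

    df.select(hypot(col("x"), col("y")), hypot(col("x"), 3.0))   // Column and literal overloads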

I

i() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
 
id() - Method in class org.apache.spark.Accumulable
 
id() - Method in interface org.apache.spark.api.java.JavaRDDLike
A unique ID for this RDD (within its SparkContext).
id() - Method in class org.apache.spark.broadcast.Broadcast
 
id() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
 
id() - Method in class org.apache.spark.mllib.tree.model.Node
 
id() - Method in class org.apache.spark.rdd.RDD
A unique ID for this RDD (within its SparkContext).
id() - Method in class org.apache.spark.scheduler.AccumulableInfo
 
id() - Method in class org.apache.spark.scheduler.TaskInfo
 
id() - Method in class org.apache.spark.status.api.v1.AccumulableInfo
 
id() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
 
id() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
id() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
 
id() - Method in class org.apache.spark.storage.RDDInfo
 
id() - Method in class org.apache.spark.streaming.dstream.InputDStream
This is a unique identifier for the input stream.
IDF - Class in org.apache.spark.ml.feature
:: Experimental :: Compute the Inverse Document Frequency (IDF) given a collection of documents.
IDF(String) - Constructor for class org.apache.spark.ml.feature.IDF
 
IDF() - Constructor for class org.apache.spark.ml.feature.IDF
 
IDF - Class in org.apache.spark.mllib.feature
:: Experimental :: Inverse document frequency (IDF).
IDF(int) - Constructor for class org.apache.spark.mllib.feature.IDF
 
IDF() - Constructor for class org.apache.spark.mllib.feature.IDF
 
idf() - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
Returns the current IDF vector.
idf() - Method in class org.apache.spark.mllib.feature.IDFModel
 
IDF.DocumentFrequencyAggregator - Class in org.apache.spark.mllib.feature
Document frequency aggregator.
IDF.DocumentFrequencyAggregator(int) - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
 
IDF.DocumentFrequencyAggregator() - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
 
IDFModel - Class in org.apache.spark.ml.feature
 
IDFModel - Class in org.apache.spark.mllib.feature
:: Experimental :: Represents an IDF model that can transform term frequency vectors.
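A hedged sketch of the usual TF-IDF pipeline in mllib, assuming docs is an RDD[Seq[String]]:
    import org.apache.spark.mllib.feature.{HashingTF, IDF}

    val tf = new HashingTF().transform(docs)   // term-frequency vectors
    tf.cache()                                 // fit() and transform() both traverse the data
    val idfModel = new IDF().fit(tf)
    val tfidf = idfModel.transform(tf)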
implicits() - Method in class org.apache.spark.sql.SQLContext
Accessor for nested Scala object
impurity() - Method in class org.apache.spark.ml.tree.InternalNode
 
impurity() - Method in class org.apache.spark.ml.tree.LeafNode
 
impurity() - Method in class org.apache.spark.ml.tree.Node
Impurity measure at this node (for training data)
impurity() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
Impurity - Interface in org.apache.spark.mllib.tree.impurity
:: Experimental :: Trait for calculating information gain.
impurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
 
impurity() - Method in class org.apache.spark.mllib.tree.model.Node
 
In() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges arriving at a vertex.
in(Column...) - Method in class org.apache.spark.sql.Column
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
in(Seq<Column>) - Method in class org.apache.spark.sql.Column
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
In - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to one of the values in the array.
In(String, Object[]) - Constructor for class org.apache.spark.sql.sources.In
 
inArray(Object) - Static method in class org.apache.spark.ml.param.ParamValidators
Check for value in an allowed set of values.
inArray(List<T>) - Static method in class org.apache.spark.ml.param.ParamValidators
Check for value in an allowed set of values.
inDegrees() - Method in class org.apache.spark.graphx.GraphOps
The in-degree of each vertex in the graph.
index() - Method in class org.apache.spark.ml.attribute.Attribute
Index of the attribute.
index() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
index() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
index() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
index() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
index() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
 
index(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
Return the index for the (i, j)-th element in the backing array.
index() - Method in interface org.apache.spark.Partition
Get the partition's index within its parent RDD
index() - Method in class org.apache.spark.scheduler.TaskInfo
 
index() - Method in class org.apache.spark.status.api.v1.TaskData
 
IndexedRow - Class in org.apache.spark.mllib.linalg.distributed
:: Experimental :: Represents a row of IndexedRowMatrix.
IndexedRow(long, Vector) - Constructor for class org.apache.spark.mllib.linalg.distributed.IndexedRow
 
IndexedRowMatrix - Class in org.apache.spark.mllib.linalg.distributed
 
IndexedRowMatrix(RDD<IndexedRow>, long, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
IndexedRowMatrix(RDD<IndexedRow>) - Constructor for class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
 
indexOf(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Index of an attribute specified by name.
indexOf(String) - Method in class org.apache.spark.ml.attribute.NominalAttribute
Index of a specific value.
indexOf(Object) - Method in class org.apache.spark.mllib.feature.HashingTF
Returns the index of the input term.
indexToLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Return the level of a tree which the given node is in.
indices() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
InformationGainStats - Class in org.apache.spark.mllib.tree.model
:: DeveloperApi :: Information gain statistics for each split. Parameters: gain (information gain value), impurity (current node impurity), leftImpurity (left node impurity), rightImpurity (right node impurity), leftPredict (left node predict), rightPredict (right node predict).
InformationGainStats(double, double, double, double, Predict, Predict) - Constructor for class org.apache.spark.mllib.tree.model.InformationGainStats
 
initialHash() - Method in class org.apache.spark.rdd.PartitionCoalescer
 
initialize(boolean, SparkConf, org.apache.spark.SecurityManager) - Method in interface org.apache.spark.broadcast.BroadcastFactory
 
initialize(boolean, SparkConf, org.apache.spark.SecurityManager) - Method in class org.apache.spark.broadcast.HttpBroadcastFactory
 
initialize(boolean, SparkConf, org.apache.spark.SecurityManager) - Method in class org.apache.spark.broadcast.TorrentBroadcastFactory
 
initialize(RDD<Tuple2<Object, Vector>>, LDA) - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer
Initializer for the optimizer.
initializeIfNecessary() - Method in interface org.apache.spark.Logging
 
initializeLogging() - Method in interface org.apache.spark.Logging
 
initialValue() - Method in class org.apache.spark.partial.PartialResult
 
initLocalProperties() - Method in class org.apache.spark.SparkContext
 
InnerClosureFinder - Class in org.apache.spark.util
 
InnerClosureFinder(Set<Class<?>>) - Constructor for class org.apache.spark.util.InnerClosureFinder
 
innerJoin(EdgeRDD<ED2>, Function4<Object, Object, ED, ED2, ED3>, ClassTag<ED2>, ClassTag<ED3>) - Method in class org.apache.spark.graphx.EdgeRDD
Inner joins this EdgeRDD with another EdgeRDD, assuming both are partitioned using the same PartitionStrategy.
innerJoin(EdgeRDD<ED2>, Function4<Object, Object, ED, ED2, ED3>, ClassTag<ED2>, ClassTag<ED3>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
innerJoin(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
innerJoin(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Inner joins this VertexRDD with an RDD containing vertex attribute pairs.
innerZipJoin(VertexRDD<U>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
innerZipJoin(VertexRDD<U>, Function3<Object, VD, U, VD2>, ClassTag<U>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Efficiently inner joins this VertexRDD with another VertexRDD sharing the same index.
inputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
 
inputBytes() - Method in class org.apache.spark.status.api.v1.StageData
 
inputDStream() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
 
inputDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
 
InputDStream<T> - Class in org.apache.spark.streaming.dstream
This is the abstract base class for all input streams.
InputDStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.InputDStream
 
inputFormatClazz() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
inputFormatClazz() - Method in class org.apache.spark.scheduler.SplitInfo
 
InputFormatInfo - Class in org.apache.spark.scheduler
:: DeveloperApi :: Parses and holds information about inputFormat (and files) specified as a parameter.
InputFormatInfo(Configuration, Class<?>, String) - Constructor for class org.apache.spark.scheduler.InputFormatInfo
 
InputMetricDistributions - Class in org.apache.spark.status.api.v1
 
InputMetrics - Class in org.apache.spark.status.api.v1
 
inputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
 
inputMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
 
inputRecords() - Method in class org.apache.spark.status.api.v1.StageData
 
inRange(double, double, boolean, boolean) - Static method in class org.apache.spark.ml.param.ParamValidators
Check for value in range lowerBound to upperBound.
inRange(double, double) - Static method in class org.apache.spark.ml.param.ParamValidators
Version of inRange() which uses inclusive bounds by default: [lowerBound, upperBound].
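For illustration, a minimal Scala sketch of the inclusive-by-default behaviour (assumes Spark MLlib on the classpath):

    import org.apache.spark.ml.param.ParamValidators

    // inRange returns a validation function; both bounds are inclusive by default.
    val validator = ParamValidators.inRange[Double](0.0, 1.0)
    validator(0.5)   // true
    validator(1.5)   // false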
insert(DataFrame, boolean) - Method in interface org.apache.spark.sql.sources.InsertableRelation
 
InsertableRelation - Interface in org.apache.spark.sql.sources
:: DeveloperApi :: A BaseRelation into which data can be inserted through the insert method.
insertInto(String, boolean) - Method in class org.apache.spark.sql.DataFrame
Deprecated.
As of 1.4.0, replaced by write().mode(SaveMode.Append|SaveMode.Overwrite).saveAsTable(tableName).
insertInto(String) - Method in class org.apache.spark.sql.DataFrame
Deprecated.
As of 1.4.0, replaced by write().mode(SaveMode.Append).saveAsTable(tableName).
insertInto(String) - Method in class org.apache.spark.sql.DataFrameWriter
Inserts the content of the DataFrame into the specified table.
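For illustration, a minimal Scala sketch of the non-deprecated write path (`df` is an existing DataFrame and "events" a hypothetical table name):

    import org.apache.spark.sql.SaveMode

    df.write.insertInto("events")                         // insert into an existing table
    df.write.mode(SaveMode.Append).saveAsTable("events")  // 1.4 replacement for the deprecated insertInto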
insertIntoJDBC(String, String, boolean) - Method in class org.apache.spark.sql.DataFrame
Deprecated.
As of 1.4.0, replaced by write().jdbc().
instance() - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
Get this impurity instance.
instance() - Static method in class org.apache.spark.mllib.tree.impurity.Gini
Get this impurity instance.
instance() - Static method in class org.apache.spark.mllib.tree.impurity.Variance
Get this impurity instance.
intAccumulator(int) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator integer variable, which tasks can "add" values to using the add method.
intAccumulator(int, String) - Method in class org.apache.spark.api.java.JavaSparkContext
Create an Accumulator integer variable, which tasks can "add" values to using the add method.
IntegerType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the IntegerType object.
IntegerType - Class in org.apache.spark.sql.types
:: DeveloperApi :: The data type representing Int values.
integral() - Method in class org.apache.spark.sql.types.ByteType
 
integral() - Method in class org.apache.spark.sql.types.IntegerType
 
integral() - Method in class org.apache.spark.sql.types.LongType
 
integral() - Method in class org.apache.spark.sql.types.ShortType
 
intercept() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
intercept() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
intercept() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
 
intercept() - Method in class org.apache.spark.mllib.classification.SVMModel
 
intercept() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
 
intercept() - Method in class org.apache.spark.mllib.regression.LassoModel
 
intercept() - Method in class org.apache.spark.mllib.regression.LinearRegressionModel
 
intercept() - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel
 
InternalNode - Class in org.apache.spark.ml.tree
:: DeveloperApi :: Internal Decision Tree node.
InterruptibleIterator<T> - Class in org.apache.spark
:: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality.
InterruptibleIterator(TaskContext, Iterator<T>) - Constructor for class org.apache.spark.InterruptibleIterator
 
interruptThread() - Method in class org.apache.spark.scheduler.local.KillTask
 
intersect(DataFrame) - Method in class org.apache.spark.sql.DataFrame
Returns a new DataFrame containing rows only in both this frame and another frame.
intersection(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return the intersection of this RDD and another one.
intersection(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return the intersection of this RDD and another one.
intersection(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
Return the intersection of this RDD and another one.
intersection(RDD<T>) - Method in class org.apache.spark.rdd.RDD
Return the intersection of this RDD and another one.
intersection(RDD<T>, Partitioner, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return the intersection of this RDD and another one.
intersection(RDD<T>, int) - Method in class org.apache.spark.rdd.RDD
Return the intersection of this RDD and another one.
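For illustration, a minimal Scala sketch of RDD intersection (`sc` is an existing SparkContext); DataFrame.intersect above is the analogous DataFrame operation:

    val a = sc.parallelize(Seq(1, 2, 3, 4))
    val b = sc.parallelize(Seq(3, 4, 5))
    a.intersection(b).collect()   // Array(3, 4): duplicates removed, ordering not guaranteed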
IntParam - Class in org.apache.spark.ml.param
:: DeveloperApi :: Specialized version of Param[Int] for Java.
IntParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.IntParam
 
IntParam(String, String, String) - Constructor for class org.apache.spark.ml.param.IntParam
 
IntParam(org.apache.spark.ml.util.Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.IntParam
 
IntParam(org.apache.spark.ml.util.Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.IntParam
 
intRddToDataFrameHolder(RDD<Object>) - Method in class org.apache.spark.sql.SQLContext.implicits$
 
intToIntWritable(int) - Static method in class org.apache.spark.SparkContext
 
intWritableConverter() - Static method in class org.apache.spark.SparkContext
 
invalidInformationGainStats() - Static method in class org.apache.spark.mllib.tree.model.InformationGainStats
An InformationGainStats object denoting that the current split does not satisfy the minimum info gain or the minimum number of instances per node.
isAddIntercept() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
Get whether the algorithm uses addIntercept.
isAkkaConf(String) - Static method in class org.apache.spark.SparkConf
Return whether the given config is an akka config (e.g.
isAllowed(Enumeration.Value, Enumeration.Value) - Static method in class org.apache.spark.scheduler.TaskLocality
 
isBroadcast() - Method in class org.apache.spark.storage.BlockId
 
isCached(String) - Method in class org.apache.spark.sql.SQLContext
Returns true if the table is currently cached in-memory.
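For illustration, a minimal Scala sketch ("logs" is a hypothetical table registered with `sqlContext`):

    sqlContext.cacheTable("logs")
    assert(sqlContext.isCached("logs"))   // true while the table is cached in memory
    sqlContext.uncacheTable("logs")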
isCached() - Method in class org.apache.spark.storage.BlockStatus
 
isCached() - Method in class org.apache.spark.storage.RDDInfo
 
isCancelled() - Method in class org.apache.spark.ComplexFutureAction
 
isCancelled() - Method in interface org.apache.spark.FutureAction
Returns whether the action has been cancelled.
isCancelled() - Method in class org.apache.spark.SimpleFutureAction
 
isCheckpointed() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return whether this RDD has been checkpointed or not
isCheckpointed() - Method in class org.apache.spark.graphx.Graph
Return whether this Graph has been checkpointed or not.
isCheckpointed() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
isCheckpointed() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
isCheckpointed() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
isCheckpointed() - Method in class org.apache.spark.rdd.RDD
Return whether this RDD has been checkpointed or not
isCheckpointPresent() - Method in class org.apache.spark.streaming.StreamingContext
 
isCompleted() - Method in class org.apache.spark.ComplexFutureAction
 
isCompleted() - Method in interface org.apache.spark.FutureAction
Returns whether the action has already been completed with a value or an exception.
isCompleted() - Method in class org.apache.spark.SimpleFutureAction
 
isCompleted() - Method in class org.apache.spark.TaskContext
Returns true if the task has completed.
isDefined(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Checks whether a param is explicitly set or has a default value.
isDriver() - Method in class org.apache.spark.storage.BlockManagerId
 
isEmpty() - Method in interface org.apache.spark.api.java.JavaRDDLike
 
isEmpty() - Method in class org.apache.spark.rdd.PartitionCoalescer.LocationIterator
 
isEmpty() - Method in class org.apache.spark.rdd.RDD
 
isExecutorStartupConf(String) - Static method in class org.apache.spark.SparkConf
Return whether the given config should be passed to an executor on start-up.
isFixed(DataType) - Static method in class org.apache.spark.sql.types.DecimalType
 
isInitialValueFinal() - Method in class org.apache.spark.partial.PartialResult
 
isInterrupted() - Method in class org.apache.spark.TaskContext
Returns true if the task has been killed.
isLeaf() - Method in class org.apache.spark.mllib.tree.model.Node
 
isLeftChild(int) - Static method in class org.apache.spark.mllib.tree.model.Node
Returns true if this is a left child.
isLocal() - Method in class org.apache.spark.api.java.JavaSparkContext
 
isLocal() - Method in class org.apache.spark.SparkContext
 
isLocal() - Method in class org.apache.spark.sql.DataFrame
Returns true if the collect and take methods can be run locally (without any Spark executors).
isMulticlassClassification() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
isMulticlassWithCategoricalFeatures() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
isMultipleOf(Duration) - Method in class org.apache.spark.streaming.Duration
 
isMultipleOf(Duration) - Method in class org.apache.spark.streaming.Time
 
isNominal() - Method in class org.apache.spark.ml.attribute.Attribute
Tests whether this attribute is nominal, true for NominalAttribute and BinaryAttribute.
isNominal() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
isNominal() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
isNominal() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
isNominal() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
isNotNull() - Method in class org.apache.spark.sql.Column
True if the current expression is NOT null.
IsNotNull - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to a non-null value.
IsNotNull(String) - Constructor for class org.apache.spark.sql.sources.IsNotNull
 
isNull() - Method in class org.apache.spark.sql.Column
True if the current expression is null.
IsNull - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff the attribute evaluates to null.
IsNull(String) - Constructor for class org.apache.spark.sql.sources.IsNull
 
isNullAt(int) - Method in interface org.apache.spark.sql.Row
Checks whether the value at position i is null.
isNumeric() - Method in class org.apache.spark.ml.attribute.Attribute
Tests whether this attribute is numeric, true for NumericAttribute and BinaryAttribute.
isNumeric() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
isNumeric() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
isNumeric() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
isNumeric() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
isOrdinal() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
isotonic() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
 
IsotonicRegression - Class in org.apache.spark.mllib.regression
 
IsotonicRegression() - Constructor for class org.apache.spark.mllib.regression.IsotonicRegression
 
IsotonicRegressionModel - Class in org.apache.spark.mllib.regression
:: Experimental ::
IsotonicRegressionModel(double[], double[], boolean) - Constructor for class org.apache.spark.mllib.regression.IsotonicRegressionModel
 
IsotonicRegressionModel(Iterable<Object>, Iterable<Object>, Boolean) - Constructor for class org.apache.spark.mllib.regression.IsotonicRegressionModel
A Java-friendly constructor that takes two Iterable parameters and one Boolean parameter.
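For illustration, a minimal Scala sketch of fitting an IsotonicRegressionModel (`sc` is an existing SparkContext; the data values are made up):

    import org.apache.spark.mllib.regression.IsotonicRegression

    // Input is an RDD of (label, feature, weight) triples.
    val data = sc.parallelize(Seq((1.0, 1.0, 1.0), (2.0, 2.0, 1.0), (3.0, 3.0, 1.0)))
    val model = new IsotonicRegression().setIsotonic(true).run(data)
    model.predict(2.5)   // interpolates between the fitted boundaries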
isRDD() - Method in class org.apache.spark.storage.BlockId
 
isRunningLocally() - Method in class org.apache.spark.TaskContext
Returns true if the task is running locally in the driver program.
isSet(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Checks whether a param is explicitly set.
isShuffle() - Method in class org.apache.spark.storage.BlockId
 
isSparkPortConf(String) - Static method in class org.apache.spark.SparkConf
Return true if the given config matches either spark.*.port or spark.port.*.
isStarted() - Method in class org.apache.spark.streaming.receiver.Receiver
Check if the receiver has started or not.
isStopped() - Method in class org.apache.spark.SparkEnv
 
isStopped() - Method in class org.apache.spark.streaming.receiver.Receiver
Check if receiver has been marked for stopping.
isTraceEnabled() - Method in interface org.apache.spark.Logging
 
isTransposed() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
isTransposed() - Method in interface org.apache.spark.mllib.linalg.Matrix
Flag that keeps track of whether the matrix is transposed.
isTransposed() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
isValid() - Method in class org.apache.spark.ml.param.Param
 
isValid() - Method in class org.apache.spark.storage.StorageLevel
 
isZero() - Method in class org.apache.spark.sql.types.Decimal
 
isZero() - Method in class org.apache.spark.streaming.Duration
 
it() - Method in class org.apache.spark.rdd.PartitionCoalescer.LocationIterator
 
item() - Method in class org.apache.spark.ml.recommendation.ALS.Rating
 
itemFactors() - Method in class org.apache.spark.ml.recommendation.ALSModel
 
items() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
 
iterationTimes() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
iterator(Partition, TaskContext) - Method in interface org.apache.spark.api.java.JavaRDDLike
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
iterator(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
iterator() - Method in class org.apache.spark.sql.types.StructType
 

J

j() - Method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
 
JacksonUtils - Class in org.apache.spark.sql.json
 
JacksonUtils() - Constructor for class org.apache.spark.sql.json.JacksonUtils
 
jarOfClass(Class<?>) - Static method in class org.apache.spark.api.java.JavaSparkContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
jarOfClass(Class<?>) - Static method in class org.apache.spark.SparkContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
jarOfClass(Class<?>) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to StreamingContext.
jarOfClass(Class<?>) - Static method in class org.apache.spark.streaming.StreamingContext
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to StreamingContext.
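For illustration, a minimal Scala sketch (`MyApp` is a hypothetical class packaged in the application JAR):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("jar-of-class-example")
      .setJars(SparkContext.jarOfClass(classOf[MyApp]).toSeq)   // ships the JAR containing MyApp
    val sc = new SparkContext(conf)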
jarOfObject(Object) - Static method in class org.apache.spark.api.java.JavaSparkContext
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
jarOfObject(Object) - Static method in class org.apache.spark.SparkContext
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
jars() - Method in class org.apache.spark.api.java.JavaSparkContext
 
jars() - Method in class org.apache.spark.SparkContext
 
javaCategoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
Java-friendly version of categoryMaps
JavaDoubleRDD - Class in org.apache.spark.api.java
 
JavaDoubleRDD(RDD<Object>) - Constructor for class org.apache.spark.api.java.JavaDoubleRDD
 
JavaDStream<T> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to DStream, the basic abstraction in Spark Streaming that represents a continuous stream of data.
JavaDStream(DStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaDStream
 
JavaDStreamLike<T,This extends JavaDStreamLike<T,This,R>,R extends JavaRDDLike<T,R>> - Interface in org.apache.spark.streaming.api.java
 
JavaFutureAction<T> - Interface in org.apache.spark.api.java
 
JavaHadoopRDD<K,V> - Class in org.apache.spark.api.java
 
JavaHadoopRDD(HadoopRDD<K, V>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaHadoopRDD
 
JavaInputDStream<T> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to InputDStream.
JavaInputDStream(InputDStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaInputDStream
 
javaItems() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
Returns items in a Java List.
JavaIterableWrapperSerializer - Class in org.apache.spark.serializer
A Kryo serializer for serializing results returned by asJavaIterable.
JavaIterableWrapperSerializer() - Constructor for class org.apache.spark.serializer.JavaIterableWrapperSerializer
 
JavaKinesisWordCountASL - Class in org.apache.spark.examples.streaming
Consumes messages from an Amazon Kinesis stream and does word count.
JavaKinesisWordCountASL() - Constructor for class org.apache.spark.examples.streaming.JavaKinesisWordCountASL
 
JavaNewHadoopRDD<K,V> - Class in org.apache.spark.api.java
 
JavaNewHadoopRDD(NewHadoopRDD<K, V>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaNewHadoopRDD
 
JavaPairDStream<K,V> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to a DStream of key-value pairs, which provides extra methods like reduceByKey and join.
JavaPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.streaming.api.java.JavaPairDStream
 
JavaPairInputDStream<K,V> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to InputDStream of key-value pairs.
JavaPairInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.streaming.api.java.JavaPairInputDStream
 
JavaPairRDD<K,V> - Class in org.apache.spark.api.java
 
JavaPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.api.java.JavaPairRDD
 
JavaPairReceiverInputDStream<K,V> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to ReceiverInputDStream, the abstract class for defining any input stream that receives data over the network.
JavaPairReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
 
JavaParams - Class in org.apache.spark.ml.param
:: DeveloperApi :: Java-friendly wrapper for Params.
JavaParams() - Constructor for class org.apache.spark.ml.param.JavaParams
 
JavaRDD<T> - Class in org.apache.spark.api.java
 
JavaRDD(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.api.java.JavaRDD
 
javaRDD() - Method in class org.apache.spark.sql.DataFrame
Returns the content of the DataFrame as a JavaRDD of Rows.
JavaRDDLike<T,This extends JavaRDDLike<T,This>> - Interface in org.apache.spark.api.java
Defines operations common to several Java RDD implementations.
JavaReceiverInputDStream<T> - Class in org.apache.spark.streaming.api.java
A Java-friendly interface to ReceiverInputDStream, the abstract class for defining any input stream that receives data over the network.
JavaReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
 
JavaSerializer - Class in org.apache.spark.serializer
:: DeveloperApi :: A Spark serializer that uses Java's built-in serialization.
JavaSerializer(SparkConf) - Constructor for class org.apache.spark.serializer.JavaSerializer
 
JavaSparkContext - Class in org.apache.spark.api.java
A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.
JavaSparkContext(SparkContext) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkContext() - Constructor for class org.apache.spark.api.java.JavaSparkContext
Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
JavaSparkContext(SparkConf) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkContext(String, String) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkContext(String, String, SparkConf) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkContext(String, String, String, String) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkContext(String, String, String, String[]) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkContext(String, String, String, String[], Map<String, String>) - Constructor for class org.apache.spark.api.java.JavaSparkContext
 
JavaSparkListener - Class in org.apache.spark
Java clients should extend this class instead of implementing SparkListener directly.
JavaSparkListener() - Constructor for class org.apache.spark.JavaSparkListener
 
JavaSparkStatusTracker - Class in org.apache.spark.api.java
Low-level status reporting APIs for monitoring job and stage progress.
JavaStreamingContext - Class in org.apache.spark.streaming.api.java
A Java-friendly version of StreamingContext which is the main entry point for Spark Streaming functionality.
JavaStreamingContext(StreamingContext) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
 
JavaStreamingContext(String, String, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(String, String, Duration, String, String) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(String, String, Duration, String, String[]) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(String, String, Duration, String, String[], Map<String, String>) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a StreamingContext.
JavaStreamingContext(JavaSparkContext, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a JavaStreamingContext using an existing JavaSparkContext.
JavaStreamingContext(SparkConf, Duration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Create a JavaStreamingContext using a SparkConf configuration.
JavaStreamingContext(String) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Recreate a JavaStreamingContext from a checkpoint file.
JavaStreamingContext(String, Configuration) - Constructor for class org.apache.spark.streaming.api.java.JavaStreamingContext
Re-creates a JavaStreamingContext from a checkpoint file.
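For illustration, a minimal Scala sketch of the (SparkConf, Duration) constructor listed above (the app name and master are placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.api.java.JavaStreamingContext

    val conf = new SparkConf().setAppName("streaming-example").setMaster("local[2]")
    val jssc = new JavaStreamingContext(conf, Seconds(1))   // 1-second batch interval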
JavaStreamingContextFactory - Interface in org.apache.spark.streaming.api.java
Factory interface for creating a new JavaStreamingContext
javaTopicDistributions() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
Java-friendly version of topicDistributions
jdbc(String, String, Properties) - Method in class org.apache.spark.sql.DataFrameReader
Construct a DataFrame representing the database table named table, accessible via the given JDBC URL and connection properties.
jdbc(String, String, String, long, long, int, Properties) - Method in class org.apache.spark.sql.DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
jdbc(String, String, String[], Properties) - Method in class org.apache.spark.sql.DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties.
jdbc(String, String, Properties) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame to an external database table via JDBC.
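For illustration, a minimal Scala sketch of reading and writing over JDBC (`sqlContext` is an existing SQLContext; the URL, table names, and credentials are placeholders):

    import java.util.Properties

    val props = new Properties()
    props.setProperty("user", "spark")
    props.setProperty("password", "secret")
    val people = sqlContext.read.jdbc("jdbc:postgresql://dbhost:5432/app", "people", props)
    people.write.jdbc("jdbc:postgresql://dbhost:5432/app", "people_copy", props)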
jdbc(String, String) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().jdbc().
jdbc(String, String, String, long, long, int) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().jdbc().
jdbc(String, String, String[]) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().jdbc().
JdbcDialect - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: Encapsulates everything (extensions, workarounds, quirks) to handle the SQL dialect of a certain database or JDBC driver.
JdbcDialect() - Constructor for class org.apache.spark.sql.jdbc.JdbcDialect
 
JdbcDialects - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: Registry of dialects that apply to every new JDBC DataFrame.
JdbcDialects() - Constructor for class org.apache.spark.sql.jdbc.JdbcDialects
 
jdbcNullType() - Method in class org.apache.spark.sql.jdbc.JdbcType
 
JdbcRDD<T> - Class in org.apache.spark.rdd
An RDD that executes an SQL query on a JDBC connection and reads results.
JdbcRDD(SparkContext, Function0<Connection>, String, long, long, int, Function1<ResultSet, T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.JdbcRDD
 
JdbcRDD.ConnectionFactory - Interface in org.apache.spark.rdd
 
JdbcType - Class in org.apache.spark.sql.jdbc
:: DeveloperApi :: A database type definition coupled with the JDBC type needed to send null values to the database.
JdbcType(String, int) - Constructor for class org.apache.spark.sql.jdbc.JdbcType
 
JobData - Class in org.apache.spark.status.api.v1
 
JobExecutionStatus - Enum in org.apache.spark
 
jobGroup() - Method in class org.apache.spark.status.api.v1.JobData
 
jobGroupToJobIds() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
jobId() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd
 
jobId() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
 
jobId() - Method in interface org.apache.spark.SparkJobInfo
 
jobId() - Method in class org.apache.spark.SparkJobInfoImpl
 
jobId() - Method in class org.apache.spark.status.api.v1.JobData
 
jobID() - Method in class org.apache.spark.TaskCommitDenied
 
jobIds() - Method in interface org.apache.spark.api.java.JavaFutureAction
Returns the job IDs run by the underlying async operation.
jobIds() - Method in class org.apache.spark.ComplexFutureAction
 
jobIds() - Method in interface org.apache.spark.FutureAction
Returns the job IDs run by the underlying async operation.
jobIds() - Method in class org.apache.spark.SimpleFutureAction
 
jobIdToData() - Method in class org.apache.spark.ui.jobs.JobProgressListener
 
JobLogger - Class in org.apache.spark.scheduler
:: DeveloperApi :: A logger class to record runtime information for jobs in Spark.
JobLogger(String, String) - Constructor for class org.apache.spark.scheduler.JobLogger
 
JobLogger() - Constructor for class org.apache.spark.scheduler.JobLogger
 
JobProgressListener - Class in org.apache.spark.ui.jobs
:: DeveloperApi :: Tracks task-level information to be displayed in the UI.
JobProgressListener(SparkConf) - Constructor for class org.apache.spark.ui.jobs.JobProgressListener
 
JobResult - Interface in org.apache.spark.scheduler
:: DeveloperApi :: A result of a job in the DAGScheduler.
jobResult() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd
 
JobSucceeded - Class in org.apache.spark.scheduler
 
JobSucceeded() - Constructor for class org.apache.spark.scheduler.JobSucceeded
 
jobUIData() - Method in class org.apache.spark.streaming.ui.SparkJobIdWithUIData
 
join(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD containing all pairs of elements with matching keys in this and other.
join(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD containing all pairs of elements with matching keys in this and other.
join(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return an RDD containing all pairs of elements with matching keys in this and other.
join(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD containing all pairs of elements with matching keys in this and other.
join(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD containing all pairs of elements with matching keys in this and other.
join(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Return an RDD containing all pairs of elements with matching keys in this and other.
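For illustration, a minimal Scala sketch of a pair-RDD join (`sc` is an existing SparkContext):

    val users  = sc.parallelize(Seq((1, "alice"), (2, "bob")))
    val scores = sc.parallelize(Seq((1, 10), (1, 12), (3, 7)))
    users.join(scores).collect()   // Array((1,(alice,10)), (1,(alice,12))); keys 2 and 3 are dropped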
join(DataFrame) - Method in class org.apache.spark.sql.DataFrame
Cartesian join with another DataFrame.
join(DataFrame, String) - Method in class org.apache.spark.sql.DataFrame
Inner equi-join with another DataFrame using the given column.
join(DataFrame, Column) - Method in class org.apache.spark.sql.DataFrame
Inner join with another DataFrame, using the given join expression.
join(DataFrame, Column, String) - Method in class org.apache.spark.sql.DataFrame
Join with another DataFrame, using the given join expression.
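For illustration, a minimal Scala sketch of DataFrame joins (`people` and `orders` are hypothetical DataFrames):

    val inner = people.join(orders, people("id") === orders("personId"))
    val outer = people.join(orders, people("id") === orders("personId"), "left_outer")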
join(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
join(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
joinVertices(RDD<Tuple2<Object, U>>, Function3<Object, VD, U, VD>, ClassTag<U>) - Method in class org.apache.spark.graphx.GraphOps
Join the vertices with an RDD and then apply a function from the vertex and RDD entry to a new vertex value.
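For illustration, a minimal Scala sketch (`graph` is a hypothetical Graph[Int, _] and `sc` an existing SparkContext):

    val bonus = sc.parallelize(Seq((1L, 5), (2L, 3)))
    // Vertices without a matching entry in `bonus` keep their original attribute.
    val updated = graph.joinVertices(bonus) { (vid, attr, extra) => attr + extra }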
json(String) - Method in class org.apache.spark.sql.DataFrameReader
Loads a JSON file (one object per line) and returns the result as a DataFrame.
json(JavaRDD<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads a JavaRDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.
json(RDD<String>) - Method in class org.apache.spark.sql.DataFrameReader
Loads an RDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.
json(String) - Method in class org.apache.spark.sql.DataFrameWriter
Saves the content of the DataFrame in JSON format at the specified path.
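For illustration, a minimal Scala sketch of the JSON read/write path (paths are placeholders; `sqlContext` is an existing SQLContext):

    val events = sqlContext.read.json("hdfs:///data/events.json")   // one JSON object per line
    events.write.json("hdfs:///data/events_out")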
json() - Method in class org.apache.spark.sql.types.DataType
The compact JSON representation of this data type.
json() - Method in class org.apache.spark.sql.types.Metadata
Converts to its JSON representation.
jsonFile(String) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonFile(String, StructType) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonFile(String, double) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonRDD(RDD<String>) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonRDD(JavaRDD<String>) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonRDD(RDD<String>, StructType) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonRDD(JavaRDD<String>, StructType) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonRDD(RDD<String>, double) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jsonRDD(JavaRDD<String>, double) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
As of 1.4.0, replaced by read().json().
jvmGcTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
 
jvmGcTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
 
jvmInformation() - Method in class org.apache.spark.ui.env.EnvironmentListener
 

K

k() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
 
k() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
 
k() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
 
k() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
Number of Gaussians in the mixture.