Packages

  • package root
    Definition Classes
    root
  • package org
    Definition Classes
    root
  • package apache
    Definition Classes
    org
  • package spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g., RDD[(Int, Int)]) through implicit conversions.
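
    For illustration, a minimal sketch of these implicit conversions in action; the local SparkSession setup here is an assumption made only to keep the example self-contained:

    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder()
      .appName("ImplicitRDDConversions")
      .master("local[*]") // assumed local mode, for the sketch only
      .getOrCreate()
    val sc = spark.sparkContext
    
    // RDD[(Int, Int)]: the implicit conversion to PairRDDFunctions makes
    // key-value operations such as groupByKey and join available.
    val pairs = sc.parallelize(Seq((1, 2), (1, 3), (2, 4)))
    val grouped = pairs.groupByKey() // RDD[(Int, Iterable[Int])]
    val joined = pairs.join(pairs)   // RDD[(Int, (Int, Int))]
    
    // RDD[Double]: DoubleRDDFunctions adds numeric operations such as mean.
    val doubles = sc.parallelize(Seq(1.0, 2.0, 3.0))
    println(doubles.mean())          // 2.0
    
    spark.stop()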

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

    Definition Classes
    apache
  • package ml

    DataFrame-based machine learning APIs to let users quickly assemble and configure practical machine learning pipelines.

    Definition Classes
    spark
  • package attribute

    ML attributes

    The ML pipeline API uses DataFrames as ML datasets. Each dataset consists of typed columns, e.g., string, double, vector, etc. However, knowing only the column type may not be sufficient to handle the data properly. For instance, a double column with values 0.0, 1.0, 2.0, ... may represent label indices, which cannot be treated as numeric values in ML algorithms. As another example, we may want to know the names and types of the features stored in a vector column. ML attributes are used to provide additional information describing the columns in a dataset.

    ML columns

    A column with ML attributes attached is called an ML column. The data in ML columns are stored as double values, i.e., an ML column is either a scalar column of double values or a vector column. Columns of other types must be encoded into ML columns using transformers. We use Attribute to describe a scalar ML column, and AttributeGroup to describe a vector ML column. ML attributes are stored in the metadata field of the column schema.
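
    For illustration, a minimal sketch of describing ML columns with attributes; the column and value names here are illustrative, not fixed by the API:

    import org.apache.spark.ml.attribute.{Attribute, AttributeGroup, NominalAttribute, NumericAttribute}
    
    // Describe a scalar double column that actually holds label indices.
    val labelAttr = NominalAttribute.defaultAttr
      .withName("label")
      .withValues("negative", "positive")
    
    // Describe a vector column by naming its elements.
    val featureGroup = new AttributeGroup("features", Array[Attribute](
      NumericAttribute.defaultAttr.withName("tf_idf"),
      NumericAttribute.defaultAttr.withName("rating")))
    
    // Attributes travel in the metadata field of the column schema.
    val labelField = labelAttr.toStructField()
    val recovered = Attribute.fromStructField(labelField)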

    Definition Classes
    ml
  • package classification
    Definition Classes
    ml
  • package clustering
    Definition Classes
    ml
  • package evaluation
    Definition Classes
    ml
  • package feature

    Feature transformers

    The ml.feature package provides common feature transformers that help convert raw data or features into more suitable forms for model fitting. Most feature transformers are implemented as Transformers, which transform one DataFrame into another, e.g., HashingTF. Some feature transformers are implemented as Estimators, because the transformation requires some aggregated information of the dataset, e.g., document frequencies in IDF. For those feature transformers, calling Estimator.fit is required to obtain the model first, e.g., IDFModel, in order to apply transformation. The transformation is usually done by appending new columns to the input DataFrame, so all input columns are carried over.

    We try to make each transformer minimal, so it becomes flexible to assemble feature transformation pipelines. Pipeline can be used to chain feature transformers, and VectorAssembler can be used to combine multiple feature transformations, for example:

    import org.apache.spark.ml.feature._
    import org.apache.spark.ml.Pipeline
    
    // a DataFrame with three columns: id (integer), text (string), and rating (double).
    val df = spark.createDataFrame(Seq(
      (0, "Hi I heard about Spark", 3.0),
      (1, "I wish Java could use case classes", 4.0),
      (2, "Logistic regression models are neat", 4.0)
    )).toDF("id", "text", "rating")
    
    // define feature transformers
    val tok = new RegexTokenizer()
      .setInputCol("text")
      .setOutputCol("words")
    val sw = new StopWordsRemover()
      .setInputCol("words")
      .setOutputCol("filtered_words")
    val tf = new HashingTF()
      .setInputCol("filtered_words")
      .setOutputCol("tf")
      .setNumFeatures(10000)
    val idf = new IDF()
      .setInputCol("tf")
      .setOutputCol("tf_idf")
    val assembler = new VectorAssembler()
      .setInputCols(Array("tf_idf", "rating"))
      .setOutputCol("features")
    
    // assemble and fit the feature transformation pipeline
    val pipeline = new Pipeline()
      .setStages(Array(tok, sw, tf, idf, assembler))
    val model = pipeline.fit(df)
    
    // save transformed features with raw data
    model.transform(df)
      .select("id", "text", "rating", "features")
      .write.format("parquet").save("/output/path")

    Some feature transformers implemented in MLlib are inspired by those implemented in scikit-learn. The major difference is that most scikit-learn feature transformers operate eagerly on the entire input dataset, while MLlib's feature transformers operate lazily on individual columns, which makes them more efficient and flexible for handling large and complex datasets.

    Definition Classes
    ml
    See also

    scikit-learn.preprocessing

  • package fpm
    Definition Classes
    ml
  • package image
    Definition Classes
    ml
  • package linalg
    Definition Classes
    ml
  • package param
    Definition Classes
    ml
  • package recommendation
    Definition Classes
    ml
  • package regression
    Definition Classes
    ml
  • package source
    Definition Classes
    ml
  • package stat
    Definition Classes
    ml
  • package tree
    Definition Classes
    ml
  • package tuning
    Definition Classes
    ml
  • package util
    Definition Classes
    ml
  • Estimator
  • FitEnd
  • FitStart
  • LoadInstanceEnd
  • LoadInstanceStart
  • MLEvent
  • Model
  • Pipeline
  • PipelineModel
  • PipelineStage
  • PredictionModel
  • Predictor
  • SaveInstanceEnd
  • SaveInstanceStart
  • TransformEnd
  • TransformStart
  • Transformer
  • UnaryTransformer
  • functions

org.apache.spark.ml

Predictor

abstract class Predictor[FeaturesType, Learner <: Predictor[FeaturesType, Learner, M], M <: PredictionModel[FeaturesType, M]] extends Estimator[M] with PredictorParams

Abstraction for prediction problems (regression and classification). It accepts all NumericType labels, which are automatically cast to DoubleType in fit(). If this predictor supports weights, it accepts all NumericType weights, which are also automatically cast to DoubleType in fit().

FeaturesType

Type of features. E.g., VectorUDT for vector features.

Learner

Specialization of this class. If you subclass this type, use this type parameter to specify the concrete type.

M

Specialization of PredictionModel. If you subclass this type, use this type parameter to specify the concrete type for the corresponding model.
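
For illustration, a hypothetical minimal subclass showing how the type parameters line up: MeanRegressor plays the role of Learner and MeanRegressorModel the role of M. This is a sketch under the assumption that Predictor may be extended directly, not an API excerpt:

import org.apache.spark.ml.{PredictionModel, Predictor}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.functions.avg

// Hypothetical Learner: fits a model that always predicts the mean label.
class MeanRegressor(override val uid: String)
  extends Predictor[Vector, MeanRegressor, MeanRegressorModel] {

  def this() = this(Identifiable.randomUID("meanReg"))

  override protected def train(dataset: Dataset[_]): MeanRegressorModel = {
    // The label column has already been cast to DoubleType by fit().
    val mean = dataset.select(avg($(labelCol))).first().getDouble(0)
    new MeanRegressorModel(uid, mean).setParent(this)
  }

  override def copy(extra: ParamMap): MeanRegressor = defaultCopy(extra)
}

// Hypothetical M: the fitted model produced by MeanRegressor.
class MeanRegressorModel(override val uid: String, val mean: Double)
  extends PredictionModel[Vector, MeanRegressorModel] {

  override def predict(features: Vector): Double = mean

  override def copy(extra: ParamMap): MeanRegressorModel =
    copyValues(new MeanRegressorModel(uid, mean), extra).setParent(parent)
}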

Source
Predictor.scala
Linear Supertypes
PredictorParams, HasPredictionCol, HasFeaturesCol, HasLabelCol, Estimator[M], PipelineStage, Logging, Params, Serializable, Serializable, Identifiable, AnyRef, Any

Parameters

A list of (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively, as shown in the sketch after this list.

  1. final val featuresCol: Param[String]

    Param for features column name.

    Definition Classes
    HasFeaturesCol
  2. final val labelCol: Param[String]

    Param for label column name.

    Definition Classes
    HasLabelCol
  3. final val predictionCol: Param[String]

    Param for prediction column name.

    Definition Classes
    HasPredictionCol
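
For illustration (the sketch referenced from the list above), setting and getting these params through LinearRegression, one concrete Predictor; the column names are illustrative:

import org.apache.spark.ml.regression.LinearRegression

val lr = new LinearRegression()
  .setLabelCol("rating")
  .setFeaturesCol("features")
  .setPredictionCol("predicted_rating")

println(lr.getLabelCol)     // rating
println(lr.explainParams()) // name, doc, and current value of every param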

Members

  1. abstract def copy(extra: ParamMap): Learner

    Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().

    Definition Classes
    Predictor → Estimator → PipelineStage → Params
  2. abstract val uid: String

    An immutable unique ID for the object and its derivatives.

    Definition Classes
    Identifiable
  3. final def clear(param: Param[_]): Predictor.this.type

    Clears the user-supplied value for the input param.

    Definition Classes
    Params
  4. def explainParam(param: Param[_]): String

    Explains a param.

    param

    input param, must belong to this instance.

    returns

    a string that contains the input param name, doc, and optionally its default value and the user-supplied value

    Definition Classes
    Params
  5. def explainParams(): String

    Explains all params of this instance. See explainParam().

    Definition Classes
    Params
  6. final def extractParamMap(): ParamMap

    extractParamMap with no extra values.

    Definition Classes
    Params
  7. final def extractParamMap(extra: ParamMap): ParamMap

    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

    Definition Classes
    Params
  8. def fit(dataset: Dataset[_]): M

    Fits a model to the input data.

    Definition Classes
    Predictor → Estimator
  9. def fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[M]

    Fits multiple models to the input data with multiple sets of parameters. The default implementation uses a for loop on each parameter map. Subclasses could override this to optimize multi-model training (see the sketch after this member list).

    dataset

    input dataset

    paramMaps

    An array of parameter maps. These values override any specified in this Estimator's embedded ParamMap.

    returns

    fitted models, matching the input parameter maps

    Definition Classes
    Estimator
    Annotations
    @Since("2.0.0")
  10. def fit(dataset: Dataset[_], paramMap: ParamMap): M

    Fits a single model to the input data with the provided parameter map.

    dataset

    input dataset

    paramMap

    Parameter map. These values override any specified in this Estimator's embedded ParamMap.

    returns

    fitted model

    Definition Classes
    Estimator
    Annotations
    @Since("2.0.0")
  11. def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): M

    Fits a single model to the input data with optional parameters.

    dataset

    input dataset

    firstParamPair

    the first param pair, overrides embedded params

    otherParamPairs

    other param pairs. These values override any specified in this Estimator's embedded ParamMap.

    returns

    fitted model

    Definition Classes
    Estimator
    Annotations
    @Since("2.0.0") @varargs()
  12. final def get[T](param: Param[T]): Option[T]

    Optionally returns the user-supplied value of a param.

    Definition Classes
    Params
  13. final def getDefault[T](param: Param[T]): Option[T]

    Gets the default value of a parameter.

    Definition Classes
    Params
  14. final def getOrDefault[T](param: Param[T]): T

    Gets the value of a param in the embedded param map or its default value. Throws an exception if neither is set.

    Definition Classes
    Params
  15. def getParam(paramName: String): Param[Any]

    Gets a param by its name.

    Definition Classes
    Params
  16. final def hasDefault[T](param: Param[T]): Boolean

    Tests whether the input param has a default value set.

    Definition Classes
    Params
  17. def hasParam(paramName: String): Boolean

    Tests whether this instance contains a param with a given name.

    Definition Classes
    Params
  18. final def isDefined(param: Param[_]): Boolean

    Checks whether a param is explicitly set or has a default value.

    Definition Classes
    Params
  19. final def isSet(param: Param[_]): Boolean

    Checks whether a param is explicitly set.

    Definition Classes
    Params
  20. lazy val params: Array[Param[_]]

    Returns all params sorted by their names. The default implementation uses Java reflection to list all public methods that have no arguments and return Param.

    Definition Classes
    Params
    Note

    Developers should not use this method in constructors because we cannot guarantee that this variable gets initialized before other params.

  21. final def set[T](param: Param[T], value: T): Predictor.this.type

    Sets a parameter in the embedded param map.

    Definition Classes
    Params
  22. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  23. def transformSchema(schema: StructType): StructType

    Check transform validity and derive the output schema from the input schema.

    We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

    A typical implementation should first verify schema changes and parameter validity, including complex parameter interaction checks.

    Definition Classes
    Predictor → PipelineStage
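
For illustration (the sketch referenced from fit(dataset, paramMaps) above), multi-model fitting and param-map merging with LinearRegression; trainingDF stands for an assumed input DataFrame with label and features columns:

import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.regression.LinearRegression

val lr = new LinearRegression().setMaxIter(5)

// Each ParamMap overrides the estimator's embedded params for that run.
val grid = Seq(ParamMap(lr.regParam -> 0.1), ParamMap(lr.regParam -> 0.01))
val models = lr.fit(trainingDF, grid) // one fitted model per ParamMap

// extractParamMap merges with ordering: defaults < user-supplied < extra.
val merged = lr.extractParamMap(ParamMap(lr.maxIter -> 10))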

Parameter setters

  1. def setFeaturesCol(value: String): Learner

  2. def setLabelCol(value: String): Learner

  3. def setPredictionCol(value: String): Learner

Parameter getters

  1. final def getFeaturesCol: String

    Definition Classes
    HasFeaturesCol
  2. final def getLabelCol: String

    Definition Classes
    HasLabelCol
  3. final def getPredictionCol: String

    Definition Classes
    HasPredictionCol