Packages

  • package spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions, as illustrated in the sketch after this package list.

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

    Definition Classes
    apache
  • package mllib

    RDD-based machine learning APIs (in maintenance mode).

    The spark.mllib package is in maintenance mode as of the Spark 2.0.0 release to encourage migration to the DataFrame-based APIs under the org.apache.spark.ml package. While in maintenance mode,

    • no new features in the RDD-based spark.mllib package will be accepted, unless they block implementing new features in the DataFrame-based spark.ml package;
    • bug fixes in the RDD-based APIs will still be accepted.

    The developers will continue adding more features to the DataFrame-based APIs in the 2.x series to reach feature parity with the RDD-based APIs. Once feature parity is reached, this package will be deprecated.

    Definition Classes
    spark
    See also

    SPARK-4591 to track the progress of feature parity

  • package tree

    This package contains the default implementation of the decision tree algorithm (a training sketch follows this package list), which supports:

    • binary classification,
    • regression,
    • information loss calculation with entropy and Gini for classification and variance for regression,
    • both continuous and categorical features.
    Definition Classes
    mllib
  • package model
    Definition Classes
    tree
  • DecisionTreeModel
  • GradientBoostedTreesModel
  • InformationGainStats
  • Node
  • Predict
  • RandomForestModel
  • Split
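
The packages above are typically used together: load data, train a tree, and get back the DecisionTreeModel documented below. A minimal end-to-end sketch, assuming Spark's Scala API; the application name, master, data path, and value names (data, labelCounts, model) are placeholders, not part of this API:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.mllib.tree.DecisionTree
  import org.apache.spark.mllib.util.MLUtils

  val sc = new SparkContext(new SparkConf().setAppName("dt-sketch").setMaster("local[*]"))

  // Load training data as RDD[LabeledPoint] in LIBSVM format (path is a placeholder).
  val data = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

  // PairRDDFunctions operations such as reduceByKey become available on any
  // RDD[(K, V)] through the implicit conversions noted in the spark package entry.
  val labelCounts = data.map(lp => (lp.label, 1L)).reduceByKey(_ + _)

  // Train a binary classifier; "gini" and "entropy" are the classification
  // impurities, and "variance" is used for regression (DecisionTree.trainRegressor).
  val model = DecisionTree.trainClassifier(
    data,
    2,                // numClasses
    Map[Int, Int](),  // categoricalFeaturesInfo: empty means all features continuous
    "gini",           // impurity
    5,                // maxDepth
    32)               // maxBins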

class DecisionTreeModel extends Serializable with Saveable

Decision tree model for classification or regression. This model stores the decision tree structure and parameters.

Annotations
@Since( "1.0.0" )
Source
DecisionTreeModel.scala
Linear Supertypes
Saveable, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new DecisionTreeModel(topNode: Node, algo: Algo)

    topNode

    root node

    algo

    algorithm type -- classification or regression

    Annotations
    @Since( "1.0.0" )
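
In practice a DecisionTreeModel is obtained from the training methods in org.apache.spark.mllib.tree rather than built by hand; direct construction is mainly useful in tests. A minimal sketch, assuming the public Node and Predict constructors in this package (the argument order passed to Node is an assumption based on its declared fields):

  import org.apache.spark.mllib.tree.configuration.Algo
  import org.apache.spark.mllib.tree.model.{DecisionTreeModel, Node, Predict}

  // A single-leaf tree that always predicts 0.0 (illustrative only; the Node
  // arguments are id, predict, impurity, isLeaf, split, leftNode, rightNode, stats).
  val leaf = new Node(1, new Predict(0.0), 0.0, true, None, None, None, None)
  val model = new DecisionTreeModel(leaf, Algo.Classification)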

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val algo: Algo
    Annotations
    @Since( "1.0.0" )
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  7. def depth: Int

    Get depth of tree. E.g.: Depth 0 means 1 leaf node. Depth 1 means 1 internal node and 2 leaf nodes.

    Annotations
    @Since( "1.1.0" )
  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  16. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  17. def numNodes: Int

    Get number of nodes in tree, including leaf nodes.

    Annotations
    @Since( "1.1.0" )
  18. def predict(features: JavaRDD[Vector]): JavaRDD[Double]

    Predict values for the given data set using the model trained.

    features

    JavaRDD representing data points to be predicted

    returns

    JavaRDD of predictions for each of the given data points

    Annotations
    @Since( "1.2.0" )
  19. def predict(features: RDD[Vector]): RDD[Double]

    Predict values for the given data set using the model trained. See the usage sketch after this member list.

    features

    RDD representing data points to be predicted

    returns

    RDD of predictions for each of the given data points

    Annotations
    @Since( "1.0.0" )
  20. def predict(features: Vector): Double

    Predict values for a single data point using the model trained.

    features

    vector representing a single data point

    returns

    Double prediction from the trained model

    Annotations
    @Since( "1.0.0" )
  21. def save(sc: SparkContext, path: String): Unit

    Save this model to the given path.

    sc

    Spark context used to save model data.

    path

    Path specifying the directory in which to save this model. If the directory already exists, this method throws an exception.

    Definition Classes
    DecisionTreeModel → Saveable
    Annotations
    @Since( "1.3.0" )
  22. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  23. def toDebugString: String

    Print the full model to a string.

    Annotations
    @Since( "1.2.0" )
  24. def toString(): String

    Print a summary of the model.

    Definition Classes
    DecisionTreeModel → AnyRef → Any
  25. val topNode: Node
    Annotations
    @Since( "1.0.0" )
  26. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
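
Putting the members above together: a usage sketch that continues the training sketch near the top of this page (sc, data, and model as defined there; the feature values and save path below are placeholders). Note that the load counterpart of save lives on the companion object:

  import org.apache.spark.mllib.linalg.Vectors
  import org.apache.spark.mllib.tree.model.DecisionTreeModel

  // Single-point and bulk prediction (feature values are placeholders and
  // must match the trained model's feature dimension).
  val one: Double = model.predict(Vectors.dense(0.0, 1.0, 0.0))
  val all = model.predict(data.map(_.features)) // RDD[Double]

  // Tree shape and a printable description of the learned structure.
  println(s"depth = ${model.depth}, numNodes = ${model.numNodes}")
  println(model.toDebugString)

  // Persist, then reload via the companion object. save throws if the
  // target directory already exists.
  model.save(sc, "/tmp/dt-model")
  val reloaded = DecisionTreeModel.load(sc, "/tmp/dt-model")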
