org.apache.spark.mllib.tree

DecisionTree

object DecisionTree extends Serializable with Logging

Linear Supertypes
Logging, Serializable, Serializable, AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. def findBestSplits(input: RDD[TreePoint], parentImpurities: Array[Double], metadata: DecisionTreeMetadata, level: Int, nodes: Array[Node], splits: Array[Array[Split]], bins: Array[Array[Bin]], maxLevelForSingleGroup: Int, timer: TimeTracker = new TimeTracker): Array[(Split, InformationGainStats)]

    Returns an array of optimal splits for all nodes at a given level. Splits the task into multiple groups if the level-wise training task could lead to memory overflow.

    input

    Training data: RDD of org.apache.spark.mllib.tree.impl.TreePoint

    parentImpurities

    Impurities for all parent nodes for the current level

    metadata

    Learning and dataset metadata

    level

    Level of the tree

    splits

    possible splits for all features

    bins

    possible bins for all features

    maxLevelForSingleGroup

    the deepest level for single-group level-wise computation.

    returns

    Array (over nodes) of the best split for each node at the given level.

    Attributes
    protected[org.apache.spark.mllib.tree]
  12. def findSplitsBins(input: RDD[LabeledPoint], metadata: DecisionTreeMetadata): (Array[Array[Split]], Array[Array[Bin]])

    Returns splits and bins for decision tree calculation. Continuous and categorical features are handled differently.

    Continuous features: For each feature, there are numBins - 1 possible splits representing the possible binary decisions at each node in the tree.

    Categorical features: For each feature, there is 1 bin per split. Splits and bins are handled in two ways:

    (a) "Unordered features": For multiclass classification with a low-arity feature (i.e., if isMulticlass && isSpaceSufficientForAllCategoricalSplits), the feature is split based on subsets of categories. There are (1 << maxFeatureValue - 1) - 1 splits.

    (b) "Ordered features": For regression and binary classification, and for multiclass classification with a high-arity feature, there is one bin per category.

    input

    Training data: RDD of org.apache.spark.mllib.regression.LabeledPoint

    metadata

    Learning and dataset metadata

    returns

    A tuple of (splits, bins). Splits is an Array of org.apache.spark.mllib.tree.model.Split of size (numFeatures, numBins - 1). Bins is an Array of org.apache.spark.mllib.tree.model.Bin of size (numFeatures, numBins).

    Attributes
    protected[org.apache.spark.mllib.tree]
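    The split counts above can be checked with a small worked example. This is plain Scala, independent of Spark; the object name and the sample arities are illustrative, not part of this API:

    ```scala
    object SplitCounts {
      // Continuous feature: numBins bins yield numBins - 1 candidate thresholds.
      def continuousSplits(numBins: Int): Int = numBins - 1

      // Unordered categorical feature of arity M: each split is a subset of
      // categories, giving 2^(M-1) - 1 distinct binary partitions
      // (the (1 << maxFeatureValue - 1) - 1 formula above).
      def unorderedSplits(arity: Int): Int = (1 << (arity - 1)) - 1

      def main(args: Array[String]): Unit = {
        println(continuousSplits(100)) // 99 thresholds for 100 bins
        println(unorderedSplits(3))    // 3 partitions of {0, 1, 2}
        println(unorderedSplits(4))    // 7 partitions of {0, 1, 2, 3}
      }
    }
    ```

    For arity 3, the three partitions place {0}, {1}, or {2} on one side and the complement on the other; complements are not counted twice, hence 2^(M-1) - 1 rather than 2^M - 2.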
  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  15. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  16. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  17. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  18. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  19. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  20. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  21. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  22. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  23. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  24. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  25. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  26. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  27. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  28. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  30. final def notify(): Unit

    Definition Classes
    AnyRef
  31. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  32. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  33. def toString(): String

    Definition Classes
    AnyRef → Any
  34. def train(input: RDD[LabeledPoint], algo: Algo, impurity: Impurity, maxDepth: Int, numClassesForClassification: Int, maxBins: Int, quantileCalculationStrategy: QuantileStrategy, categoricalFeaturesInfo: Map[Int, Int]): DecisionTreeModel

    Method to train a decision tree model. The method supports binary and multiclass classification and regression.

    Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.

    input

    Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.

    algo

    classification or regression

    impurity

    criterion used for information gain calculation

    maxDepth

    Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.

    numClassesForClassification

    number of classes for classification. Default value of 2.

    maxBins

    maximum number of bins used for splitting features

    quantileCalculationStrategy

    algorithm for calculating quantiles

    categoricalFeaturesInfo

    Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.

    returns

    DecisionTreeModel that can be used for prediction
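    A usage sketch for this fully-parameterized overload. It assumes a Spark 1.x-era MLlib classpath and an RDD[LabeledPoint] named trainingData already in scope; the parameter values are illustrative:

    ```scala
    import org.apache.spark.mllib.tree.DecisionTree
    import org.apache.spark.mllib.tree.configuration.{Algo, QuantileStrategy}
    import org.apache.spark.mllib.tree.impurity.Entropy

    // trainingData: RDD[LabeledPoint] with labels in {0, 1, 2}, assumed in scope.
    val model = DecisionTree.train(
      input = trainingData,
      algo = Algo.Classification,
      impurity = Entropy,
      maxDepth = 5,
      numClassesForClassification = 3,
      maxBins = 100,
      quantileCalculationStrategy = QuantileStrategy.Sort,
      categoricalFeaturesInfo = Map.empty[Int, Int]) // all features continuous
    ```

    As the note above says, trainClassifier and trainRegressor are the recommended entry points; this overload is useful mainly when the quantile strategy or impurity object must be chosen explicitly.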

  35. def train(input: RDD[LabeledPoint], algo: Algo, impurity: Impurity, maxDepth: Int, numClassesForClassification: Int): DecisionTreeModel

    Method to train a decision tree model. The method supports binary and multiclass classification and regression.

    Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.

    input

    Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.

    algo

    algorithm, classification or regression

    impurity

    impurity criterion used for information gain calculation

    maxDepth

    Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.

    numClassesForClassification

    number of classes for classification. Default value of 2.

    returns

    DecisionTreeModel that can be used for prediction

  36. def train(input: RDD[LabeledPoint], algo: Algo, impurity: Impurity, maxDepth: Int): DecisionTreeModel

    Method to train a decision tree model. The method supports binary and multiclass classification and regression.

    Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.

    input

    Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.

    algo

    algorithm, classification or regression

    impurity

    impurity criterion used for information gain calculation

    maxDepth

    Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.

    returns

    DecisionTreeModel that can be used for prediction

  37. def train(input: RDD[LabeledPoint], strategy: Strategy): DecisionTreeModel

    Method to train a decision tree model. The method supports binary and multiclass classification and regression.

    Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.

    input

    Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.

    strategy

    The configuration parameters for the tree algorithm which specify the type of algorithm (classification, regression, etc.), feature type (continuous, categorical), depth of the tree, quantile calculation strategy, etc.

    returns

    DecisionTreeModel that can be used for prediction
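    A sketch of the Strategy-based overload. It assumes a Spark 1.x-era Strategy constructor with these named parameters and an RDD[LabeledPoint] named trainingData in scope; the values are illustrative:

    ```scala
    import org.apache.spark.mllib.tree.DecisionTree
    import org.apache.spark.mllib.tree.configuration.{Algo, Strategy}
    import org.apache.spark.mllib.tree.impurity.Gini

    // Bundle all tree parameters into one configuration object.
    val strategy = new Strategy(
      algo = Algo.Classification,
      impurity = Gini,
      maxDepth = 5,
      numClassesForClassification = 2,
      maxBins = 100,
      categoricalFeaturesInfo = Map(0 -> 4)) // feature 0: 4 categories

    val model = DecisionTree.train(trainingData, strategy)
    ```

    Passing a Strategy keeps the call site short when many parameters deviate from their defaults, and the same Strategy can be reused across training runs.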

  38. def trainClassifier(input: JavaRDD[LabeledPoint], numClassesForClassification: Int, categoricalFeaturesInfo: Map[Integer, Integer], impurity: String, maxDepth: Int, maxBins: Int): DecisionTreeModel

    Java-friendly API for org.apache.spark.mllib.tree.DecisionTree$#trainClassifier

  39. def trainClassifier(input: RDD[LabeledPoint], numClassesForClassification: Int, categoricalFeaturesInfo: Map[Int, Int], impurity: String, maxDepth: Int, maxBins: Int): DecisionTreeModel

    Method to train a decision tree model for binary or multiclass classification.

    input

    Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. Labels should take values {0, 1, ..., numClasses-1}.

    numClassesForClassification

    number of classes for classification.

    categoricalFeaturesInfo

    Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.

    impurity

    Criterion used for information gain calculation. Supported values: "gini" (recommended) or "entropy".

    maxDepth

    Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (suggested value: 4)

    maxBins

    maximum number of bins used for splitting features (suggested value: 100)

    returns

    DecisionTreeModel that can be used for prediction
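    A usage sketch for trainClassifier. It assumes a live SparkContext named sc and a Spark 1.x-era MLlib classpath; the data path and parameter values are illustrative:

    ```scala
    import org.apache.spark.mllib.tree.DecisionTree
    import org.apache.spark.mllib.util.MLUtils

    // Load LIBSVM-format data; labels must be in {0, ..., numClasses-1}.
    // The path is hypothetical.
    val data = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

    // Feature 0 is categorical with 4 categories; all others continuous.
    val model = DecisionTree.trainClassifier(
      input = data,
      numClassesForClassification = 2,
      categoricalFeaturesInfo = Map(0 -> 4),
      impurity = "gini",   // or "entropy"
      maxDepth = 4,        // suggested value per the docs above
      maxBins = 100)       // suggested value per the docs above

    // Predict the class of a single feature vector.
    val prediction = model.predict(data.first().features)
    ```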

  40. def trainRegressor(input: JavaRDD[LabeledPoint], categoricalFeaturesInfo: Map[Integer, Integer], impurity: String, maxDepth: Int, maxBins: Int): DecisionTreeModel

    Java-friendly API for org.apache.spark.mllib.tree.DecisionTree$#trainRegressor

  41. def trainRegressor(input: RDD[LabeledPoint], categoricalFeaturesInfo: Map[Int, Int], impurity: String, maxDepth: Int, maxBins: Int): DecisionTreeModel

    Method to train a decision tree model for regression.

    input

    Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. Labels are real numbers.

    categoricalFeaturesInfo

    Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.

    impurity

    Criterion used for information gain calculation. Supported values: "variance".

    maxDepth

    Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (suggested value: 4)

    maxBins

    maximum number of bins used for splitting features (suggested value: 100)

    returns

    DecisionTreeModel that can be used for prediction
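    A matching sketch for trainRegressor, under the same assumptions (live SparkContext sc, Spark 1.x-era MLlib, hypothetical data path and values):

    ```scala
    import org.apache.spark.mllib.tree.DecisionTree
    import org.apache.spark.mllib.util.MLUtils

    // Load LIBSVM-format data; labels are real-valued targets.
    val data = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

    val model = DecisionTree.trainRegressor(
      input = data,
      categoricalFeaturesInfo = Map[Int, Int](), // all features continuous
      impurity = "variance",                     // only supported value
      maxDepth = 4,
      maxBins = 100)

    // Mean squared error of the model on the training set.
    val mse = data.map { point =>
      val err = model.predict(point.features) - point.label
      err * err
    }.mean()
    ```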

  42. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  43. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  44. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
