Packages

  • package spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.
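
    A minimal sketch of those implicit conversions; the local-mode SparkConf and the sample values are assumptions for illustration:

      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("implicits").setMaster("local[*]"))

      // An RDD[(Int, Int)] picks up PairRDDFunctions automatically:
      val pairs = sc.parallelize(Seq((1, 2), (1, 3), (2, 4)))
      val grouped = pairs.groupByKey()   // groupByKey comes from PairRDDFunctions

      // An RDD[Double] picks up DoubleRDDFunctions the same way:
      val doubles = sc.parallelize(Seq(1.0, 2.0, 3.0))
      val stats = doubles.stats()        // mean, variance, etc.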

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

    Definition Classes
    apache
  • package mllib

    RDD-based machine learning APIs (in maintenance mode).

    The spark.mllib package is in maintenance mode as of the Spark 2.0.0 release to encourage migration to the DataFrame-based APIs under the org.apache.spark.ml package. While in maintenance mode,

    • no new features in the RDD-based spark.mllib package will be accepted, unless they block implementing new features in the DataFrame-based spark.ml package;
    • bug fixes in the RDD-based APIs will still be accepted.

    The developers will continue adding features to the DataFrame-based APIs in the 2.x series to reach feature parity with the RDD-based APIs. Once feature parity is reached, this package will be deprecated; a migration sketch follows below.

    Definition Classes
    spark
    See also

    SPARK-4591 to track the progress of feature parity
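
    A hedged sketch of the recommended DataFrame-based replacement under org.apache.spark.ml; the SparkSession setup and data path are assumptions for illustration, and "label"/"features" are the spark.ml default column names:

      import org.apache.spark.ml.classification.NaiveBayes
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().appName("nb-ml").master("local[*]").getOrCreate()

      // LIBSVM files load directly into the (label, features) schema spark.ml expects.
      val training = spark.read.format("libsvm").load("data/sample_libsvm_data.txt")

      val model = new NaiveBayes().setSmoothing(1.0).fit(training)
      val predictions = model.transform(training)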

  • package classification
    Definition Classes
    mllib
  • ClassificationModel
  • LogisticRegressionModel
  • LogisticRegressionWithLBFGS
  • LogisticRegressionWithSGD
  • NaiveBayes
  • NaiveBayesModel
  • SVMModel
  • SVMWithSGD
  • StreamingLogisticRegressionWithSGD

class NaiveBayes extends Serializable with Logging

Trains a Naive Bayes model given an RDD of (label, features) pairs.

This is Multinomial NB, which can handle all kinds of discrete data. For example, by converting documents into TF-IDF vectors, it can be used for document classification. By making every vector a 0-1 vector, it can also be used as Bernoulli NB. The input feature values must be nonnegative.
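
A minimal training sketch, assuming a SparkContext named sc is already in scope; the toy data is illustrative:

  import org.apache.spark.mllib.classification.NaiveBayes
  import org.apache.spark.mllib.linalg.Vectors
  import org.apache.spark.mllib.regression.LabeledPoint

  // Toy 0-1 feature vectors: label 1.0 when the first feature is set.
  val data = sc.parallelize(Seq(
    LabeledPoint(1.0, Vectors.dense(1.0, 0.0)),
    LabeledPoint(1.0, Vectors.dense(1.0, 1.0)),
    LabeledPoint(0.0, Vectors.dense(0.0, 1.0))
  ))

  val model = new NaiveBayes().setLambda(1.0).run(data)
  val prediction = model.predict(Vectors.dense(1.0, 0.0))  // expect 1.0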

Annotations
@Since("0.9.0")
Source
NaiveBayes.scala
Linear Supertypes
Logging, Serializable, AnyRef, Any

Instance Constructors

  1. new NaiveBayes()
    Annotations
    @Since("0.9.0")
  2. new NaiveBayes(lambda: Double)
    Annotations
    @Since("1.4.0")

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    Logging

Value Members

  1. def getLambda: Double

    Get the smoothing parameter.

    Annotations
    @Since("1.4.0")
  2. def getModelType: String

    Get the model type.

    Annotations
    @Since("1.4.0")
  3. def run(data: RDD[LabeledPoint]): NaiveBayesModel

    Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.

    data

    RDD of org.apache.spark.mllib.regression.LabeledPoint.

    Annotations
    @Since("0.9.0")
  4. def setLambda(lambda: Double): NaiveBayes

    Set the smoothing parameter. Default: 1.0.

    Annotations
    @Since("0.9.0")
  5. def setModelType(modelType: String): NaiveBayes

    Set the model type using a string (case-sensitive). Supported options: "multinomial" (default) and "bernoulli".

    Annotations
    @Since("1.4.0")