# Packages

• package spark

Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.
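For example (a minimal, self-contained sketch; the local master setting and sample data are illustrative), pair-specific operations become available on an RDD[(Int, Int)] without any explicit wrapping:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PairOpsExample {
  def main(args: Array[String]): Unit = {
    // Local SparkContext for illustration only.
    val sc = new SparkContext(new SparkConf().setAppName("pairs").setMaster("local[2]"))

    // An RDD[(Int, Int)]: operations such as reduceByKey and groupByKey are
    // added via the implicit conversion to PairRDDFunctions.
    val pairs = sc.parallelize(Seq((1, 2), (1, 3), (2, 4)))
    val sums = pairs.reduceByKey(_ + _)

    sums.collect().foreach(println) // (1,5), (2,4) in some order
    sc.stop()
  }
}
```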

Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

Definition Classes
apache
• package mllib

RDD-based machine learning APIs (in maintenance mode).

The spark.mllib package is in maintenance mode as of the Spark 2.0.0 release to encourage migration to the DataFrame-based APIs under the org.apache.spark.ml package. While in maintenance mode:

• no new features in the RDD-based spark.mllib package will be accepted, unless they block implementing new features in the DataFrame-based spark.ml package;
• bug fixes in the RDD-based APIs will still be accepted.

The developers will continue adding more features to the DataFrame-based APIs in the 2.x series to reach feature parity with the RDD-based APIs. Once feature parity is reached, this package will be deprecated.

Definition Classes
spark

See also SPARK-4591, which tracks the progress of feature parity.

• package optimization
Definition Classes
mllib
• Gradient
• GradientDescent
• HingeGradient
• L1Updater
• LBFGS
• LeastSquaresGradient
• LogisticGradient
• Optimizer
• SimpleUpdater
• SquaredL2Updater
• Updater
• package tree

This package contains the default implementation of the decision tree algorithm (a usage sketch follows this entry), which supports:

• binary classification,
• regression,
• information loss calculation with entropy and Gini for classification, and variance for regression,
• both continuous and categorical features.
Definition Classes
mllib
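As an illustration of this package's API (a hedged sketch: the SparkContext, file path, and parameter values are placeholders, not recommendations), a binary classifier can be trained roughly as follows:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.util.MLUtils

// Assumes an existing SparkContext `sc` and a LIBSVM-format file at an
// illustrative path.
def trainTree(sc: SparkContext) = {
  // RDD[LabeledPoint] loaded from a LIBSVM file.
  val data = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

  // Train a binary classification tree; all features treated as continuous.
  DecisionTree.trainClassifier(
    data,
    numClasses = 2,
    categoricalFeaturesInfo = Map.empty[Int, Int],
    impurity = "gini", // or "entropy"; variance is used for regression trees
    maxDepth = 5,
    maxBins = 32)
}
```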

# optimization 


### Type Members

1. abstract class Gradient extends Serializable

Class used to compute the gradient for a loss function, given a single data point.
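For example (a minimal sketch; the data point and weights are illustrative), a concrete Gradient such as LogisticGradient can be evaluated at a single data point:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.LogisticGradient

// Loss and gradient of the binary logistic loss at one (features, label) pair.
val logisticGradient = new LogisticGradient()
val features = Vectors.dense(1.0, 2.0) // illustrative data point
val weights = Vectors.dense(0.5, -0.5) // illustrative current weights
val (grad, loss) = logisticGradient.compute(features, 1.0 /* label */, weights)
```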

2. class GradientDescent extends Optimizer with Logging

Class used to solve an optimization problem using Gradient Descent.

3. class HingeGradient extends Gradient

Compute gradient and loss for a Hinge loss function, as used in SVM binary classification. See also the documentation for the precise formulation.

Note

This assumes that the labels are {0,1}

4. class L1Updater extends Updater

Updater for L1 regularized problems. R(w) = ||w||_1. Uses a step-size decreasing with the square root of the number of iterations.

Instead of subgradient of the regularizer, the proximal operator for the L1 regularization is applied after the gradient step. This is known to result in better sparsity of the intermediate solution.

The corresponding proximal operator for the L1 norm is the soft-thresholding function. That is, each weight component is shrunk towards 0 by shrinkageVal.

If w is greater than shrinkageVal, set the weight component to w - shrinkageVal. If w is less than -shrinkageVal, set the weight component to w + shrinkageVal. If w is in (-shrinkageVal, shrinkageVal), set the weight component to 0.

Equivalently, set the weight component to signum(w) * max(0.0, abs(w) - shrinkageVal).
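A standalone sketch of this soft-thresholding step (not the actual L1Updater source; in gradient descent, shrinkageVal would be the regularization parameter times the current step size):

```scala
import scala.math.{abs, max, signum}

// Proximal operator for the L1 norm: shrink each weight component towards 0.
def softThreshold(w: Double, shrinkageVal: Double): Double =
  signum(w) * max(0.0, abs(w) - shrinkageVal)

// e.g. softThreshold(0.7, 0.5) == 0.2; softThreshold(0.3, 0.5) == 0.0
```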

5. class LBFGS extends Optimizer with Logging

Class used to solve an optimization problem using Limited-memory BFGS. Reference: Wikipedia on Limited-memory BFGS.

6. class LeastSquaresGradient extends Gradient

Compute gradient and loss for a Least-squared loss function, as used in linear regression. This is correct for the averaged least squares loss function (mean squared error): L = 1/(2n) ||A weights - y||^2. See also the documentation for the precise formulation.

7. class LogisticGradient extends Gradient

Compute gradient and loss for a multinomial logistic loss function, as used in multi-class classification (it is also used in binary logistic regression).

In The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition, by Trevor Hastie, Robert Tibshirani, and Jerome Friedman (available from http://statweb.stanford.edu/~tibs/ElemStatLearn/), Eq. (4.17) on page 119 gives the formula for the multinomial logistic regression model. A simple calculation shows that

\begin{align} P(y=0|x, w) &= 1 / (1 + \sum_{i=1}^{K-1} \exp(x w_i)) \\ P(y=1|x, w) &= \exp(x w_1) / (1 + \sum_{i=1}^{K-1} \exp(x w_i)) \\ &\vdots \\ P(y=K-1|x, w) &= \exp(x w_{K-1}) / (1 + \sum_{i=1}^{K-1} \exp(x w_i)) \end{align}

for the K-class classification problem.

The model weights $w = (w_1, w_2, \ldots, w_{K-1})^T$ form a matrix of dimension $(K-1) \times (N+1)$ if the intercepts are added, or $(K-1) \times N$ if they are not.

As a result, the loss of objective function for a single instance of data can be written as

\begin{align} l(w, x) &= -\log P(y|x, w) = -\alpha(y) \log P(y=0|x, w) - (1-\alpha(y)) \log P(y|x, w) \\ &= \log(1 + \sum_{i=1}^{K-1}\exp(x w_i)) - (1-\alpha(y)) x w_y \\ &= \log(1 + \sum_{i=0}^{K-2}\exp(margins_i)) - (1-\alpha(y)) margins_{y-1} \end{align}

where $\alpha(y) = 1$ if $y = 0$, $\alpha(y) = 0$ otherwise, and $margins_i = x w_{i+1}$ for $i = 0, \ldots, K-2$ (0-indexed, matching the implementation).

For optimization, we have to calculate the first derivative of the loss function, and a simple calculation shows that

\begin{align} \frac{\partial l(w, x)}{\partial w_{(i+1)j}} &= (\exp(margins_i) / (1 + \sum_{k=0}^{K-2} \exp(margins_k)) - (1-\alpha(y)) \delta_{y, i+1}) \cdot x_j \\ &= multiplier_i \cdot x_j \end{align}

where $\delta_{i, j} = 1$ if $i = j$, $\delta_{i, j} = 0$ if $i \ne j$, and $multiplier_i = \exp(margins_i) / (1 + \sum_{k=0}^{K-2} \exp(margins_k)) - (1-\alpha(y)) \delta_{y, i+1}$.

If any of the margins is larger than 709.78, the numerical computation of the multiplier and the loss function will suffer from arithmetic overflow. This issue occurs when there are outliers in the data that are far away from the hyperplane, and it causes training to fail once infinity / infinity is introduced. Note that this is only a concern when max(margins) > 0.

Fortunately, when max(margins) = maxMargin > 0, the loss function and the multiplier can be easily rewritten into the following equivalent numerically stable formula.

\begin{align} l(w, x) &= \log(1 + \sum_{i=0}^{K-2}\exp(margins_i)) - (1-\alpha(y)) margins_{y-1} \\ &= \log(\exp(-maxMargin) + \sum_{i=0}^{K-2}\exp(margins_i - maxMargin)) + maxMargin - (1-\alpha(y)) margins_{y-1} \\ &= \log(1 + sum) + maxMargin - (1-\alpha(y)) margins_{y-1} \end{align}

where $sum = \exp(-maxMargin) + \sum_{i=0}^{K-2}\exp(margins_i - maxMargin) - 1$.

Note that each exponent $(margins_i - maxMargin)$ is at most zero; as a result, overflow will not happen with this formula.

A similar trick can be applied to the multiplier:

\begin{align} multiplier_i &= \exp(margins_i) / (1 + \sum_{k=0}^{K-2} \exp(margins_k)) - (1-\alpha(y)) \delta_{y, i+1} \\ &= \exp(margins_i - maxMargin) / (1 + sum) - (1-\alpha(y)) \delta_{y, i+1} \end{align}

where each exponent is again at most zero, so overflow is not a concern.

For the detailed mathematical derivation, see the reference at http://www.slideshare.net/dbtsai/2014-0620-mlor-36132297
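As an illustration of the overflow-safe formulation in the binary case (K = 2, a single margin), the loss reduces to the familiar log1p form. A standalone sketch, not the LogisticGradient source:

```scala
import scala.math.{exp, log1p}

// Overflow-safe log(1 + exp(margin)): when margin is large and positive,
// exp(margin) would overflow, so factor out the max term as derived above.
def log1pExp(margin: Double): Double =
  if (margin > 0) margin + log1p(exp(-margin))
  else log1p(exp(margin))

// Binary logistic loss for label y in {0, 1} with margin = x . w:
// l = log(1 + exp(margin)) - (1 - alpha(y)) * margin, where alpha(0) = 1.
def binaryLogisticLoss(margin: Double, y: Int): Double =
  log1pExp(margin) - (if (y == 1) margin else 0.0)
```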

8. trait Optimizer extends Serializable

Trait for optimization problem solvers.

9. class SimpleUpdater extends Updater

A simple updater for gradient descent *without* any regularization. Uses a step-size decreasing with the square root of the number of iterations.

10. class SquaredL2Updater extends Updater

Updater for L2 regularized problems. R(w) = 1/2 ||w||^2. Uses a step-size decreasing with the square root of the number of iterations.

11. abstract class Updater extends Serializable

Class used to perform steps (weight update) using Gradient Descent methods.

For general minimization problems, or for regularized problems of the form min L(w) + regParam * R(w), the compute function performs the actual update step when given some (e.g. stochastic) gradient direction for the loss L(w) and a desired step-size (learning rate).

The updater is also responsible for performing the update coming from the regularization term R(w) (if any regularization is used).
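The compute signature below is the real abstract method; the body is a minimal sketch in the spirit of SimpleUpdater (no regularization term), not the library source:

```scala
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.Updater

// A custom Updater: plain gradient step with a step size decaying as
// 1/sqrt(iter), and a zero regularization contribution.
class NoRegUpdater extends Updater {
  override def compute(
      weightsOld: Vector,
      gradient: Vector,
      stepSize: Double,
      iter: Int,
      regParam: Double): (Vector, Double) = {
    val thisIterStepSize = stepSize / math.sqrt(iter)
    val w = weightsOld.toArray.clone()
    val g = gradient.toArray
    var j = 0
    while (j < w.length) {
      w(j) -= thisIterStepSize * g(j)
      j += 1
    }
    // The second element is the regularization value regParam * R(w); 0 here.
    (Vectors.dense(w), 0.0)
  }
}
```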

### Value Members

1. object GradientDescent extends Logging with Serializable

Top-level method to run gradient descent.
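A hedged sketch of driving it directly (data and parameter values are illustrative): runMiniBatchSGD takes (label, features) pairs plus a Gradient and an Updater, and returns the final weights together with the loss history:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}

// Binary logistic regression via mini-batch SGD; values are illustrative.
def fitWithSGD(data: RDD[(Double, Vector)], numFeatures: Int): (Vector, Array[Double]) =
  GradientDescent.runMiniBatchSGD(
    data,
    new LogisticGradient(),
    new SquaredL2Updater(),
    1.0,   // stepSize
    100,   // numIterations
    0.01,  // regParam
    1.0,   // miniBatchFraction
    Vectors.zeros(numFeatures))
```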

2. object LBFGS extends Logging with Serializable

Top-level method to run L-BFGS.
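A hedged sketch analogous to the gradient descent example above (parameter values are illustrative, not recommendations):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}

// The same logistic regression problem solved with L-BFGS.
def fitWithLBFGS(data: RDD[(Double, Vector)], numFeatures: Int): (Vector, Array[Double]) =
  LBFGS.runLBFGS(
    data,
    new LogisticGradient(),
    new SquaredL2Updater(),
    10,    // numCorrections
    1e-4,  // convergenceTol
    50,    // maxNumIterations
    0.01,  // regParam
    Vectors.zeros(numFeatures))
```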