Class GradientDescent
Object
org.apache.spark.mllib.optimization.GradientDescent
- All Implemented Interfaces:
- Serializable, org.apache.spark.internal.Logging, Optimizer
Class used to solve an optimization problem using Gradient Descent.
- Parameters:
- gradient- Gradient function to be used.
- updater- Updater to be used to update weights after every iteration.
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
- 
Method Summary
- Vector optimize(RDD<scala.Tuple2<Object, Vector>> data, Vector initialWeights) Runs gradient descent on the given training data.
- scala.Tuple2<Vector,double[]> optimizeWithLossReturned(RDD<scala.Tuple2<Object, Vector>> data, Vector initialWeights) Runs gradient descent on the given training data.
- static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)
- static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
- static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
- static scala.Tuple2<Vector,double[]> runMiniBatchSGD(RDD<scala.Tuple2<Object, Vector>> data, Gradient gradient, Updater updater, double stepSize, int numIterations, double regParam, double miniBatchFraction, Vector initialWeights) Alias of runMiniBatchSGD with convergenceTol set to its default value of 0.001.
- static scala.Tuple2<Vector,double[]> runMiniBatchSGD(RDD<scala.Tuple2<Object, Vector>> data, Gradient gradient, Updater updater, double stepSize, int numIterations, double regParam, double miniBatchFraction, Vector initialWeights, double convergenceTol) Run stochastic gradient descent (SGD) in parallel using mini batches.
- setConvergenceTol(double tolerance) Set the convergence tolerance.
- setGradient(Gradient gradient) Set the gradient function (of the loss function of one single data example) to be used for SGD.
- setMiniBatchFraction(double fraction) Set the fraction of data to be used for each SGD iteration.
- setNumIterations(int iters) Set the number of iterations for SGD.
- setRegParam(double regParam) Set the regularization parameter.
- setStepSize(double step) Set the initial step size of SGD for the first step.
- setUpdater(Updater updater) Set the updater function to actually perform a gradient step in a given direction.
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging: initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logError, logInfo, logName, LogStringContext, logTrace, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
- 
Method Details- 
runMiniBatchSGD
public static scala.Tuple2<Vector,double[]> runMiniBatchSGD(RDD<scala.Tuple2<Object, Vector>> data, Gradient gradient, Updater updater, double stepSize, int numIterations, double regParam, double miniBatchFraction, Vector initialWeights, double convergenceTol)
Run stochastic gradient descent (SGD) in parallel using mini batches. In each iteration, a subset (fraction miniBatchFraction) of the total data is sampled to compute a gradient estimate. Sampling and averaging the subgradients over this subset is performed using one standard Spark map-reduce per iteration.
- Parameters:
- data- Input data for SGD. RDD of the set of data examples, each of the form (label, [feature values]).
- gradient- Gradient object (used to compute the gradient of the loss function of one single data example)
- updater- Updater function to actually perform a gradient step in a given direction.
- stepSize- initial step size for the first step
- numIterations- number of iterations that SGD should be run.
- regParam- regularization parameter
- miniBatchFraction- fraction of the input data set that should be used for one iteration of SGD. Default value 1.0.
- initialWeights- (undocumented)
- convergenceTol- Mini-batch iteration will end before numIterations if the relative difference between the current weights and the previous weights is less than this value. Convergence is measured using the L2 norm. Default value 0.001. Must be between 0.0 and 1.0 inclusive.
- Returns:
- A tuple containing two elements. The first element is a column matrix containing weights for every feature, and the second element is an array containing the stochastic loss computed for every iteration.
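Setting aside the distributed details, the per-iteration logic (sample a fraction of the data, average the subgradients, take a decaying step) can be sketched on a single machine. The following is a hedged plain-Java illustration with hypothetical names, not Spark's implementation; it assumes squared loss, no regularization, and the stepSize/sqrt(t) schedule described for setStepSize:

```java
import java.util.Random;

/** Single-machine sketch of mini-batch SGD for squared loss (illustrative only). */
public class MiniBatchSGDSketch {

    /** Returns the learned weights for y ~ w . x under loss 0.5*(pred - y)^2. */
    static double[] run(double[][] x, double[] y, double stepSize,
                        int numIterations, double miniBatchFraction, long seed) {
        int n = x.length, d = x[0].length;
        double[] w = new double[d];               // initial weights: all zeros
        Random rng = new Random(seed);
        for (int t = 1; t <= numIterations; t++) {
            double[] grad = new double[d];
            int sampled = 0;
            for (int i = 0; i < n; i++) {
                // Bernoulli sampling with probability miniBatchFraction
                if (rng.nextDouble() >= miniBatchFraction) continue;
                sampled++;
                double pred = 0.0;
                for (int j = 0; j < d; j++) pred += w[j] * x[i][j];
                double err = pred - y[i];          // dL/dpred for squared loss
                for (int j = 0; j < d; j++) grad[j] += err * x[i][j];
            }
            if (sampled == 0) continue;            // empty mini batch: skip the step
            double step = stepSize / Math.sqrt(t); // decaying step size
            for (int j = 0; j < d; j++) w[j] -= step * grad[j] / sampled;
        }
        return w;
    }

    public static void main(String[] args) {
        // Toy data: y = 2*x0 + 3*x1, no noise; fraction 1.0 = classical gradient descent.
        double[][] x = {{1, 0}, {0, 1}, {1, 1}, {2, 1}, {1, 2}, {3, 2}};
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) y[i] = 2 * x[i][0] + 3 * x[i][1];
        double[] w = run(x, y, 0.4, 2000, 1.0, 42L);
        System.out.printf("w0=%.3f w1=%.3f%n", w[0], w[1]); // close to 2 and 3
    }
}
```

The real method additionally evaluates the regularized loss via the Gradient and Updater objects and records it per iteration, which is what fills the double[] half of the returned tuple.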
 
- 
runMiniBatchSGD
public static scala.Tuple2<Vector,double[]> runMiniBatchSGD(RDD<scala.Tuple2<Object, Vector>> data, Gradient gradient, Updater updater, double stepSize, int numIterations, double regParam, double miniBatchFraction, Vector initialWeights)
Alias of runMiniBatchSGD with convergenceTol set to its default value of 0.001.
- Parameters:
- data- (undocumented)
- gradient- (undocumented)
- updater- (undocumented)
- stepSize- (undocumented)
- numIterations- (undocumented)
- regParam- (undocumented)
- miniBatchFraction- (undocumented)
- initialWeights- (undocumented)
- Returns:
- (undocumented)
 
- 
org$apache$spark$internal$Logging$$log_
public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
- 
org$apache$spark$internal$Logging$$log__$eq
public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
- 
LogStringContext
public static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)
- 
setStepSize
Set the initial step size of SGD for the first step. Default 1.0. In subsequent steps, the step size decreases as stepSize/sqrt(t), where t is the iteration number.
- Parameters:
- step- (undocumented)
- Returns:
- (undocumented)
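As a quick arithmetic illustration of this schedule (StepDecay is a hypothetical name for this sketch, not a Spark class):

```java
/** Sketch of the stepSize / sqrt(t) learning-rate schedule, t = 1-based iteration. */
public class StepDecay {
    static double stepAt(double stepSize, int t) {
        return stepSize / Math.sqrt(t);
    }

    public static void main(String[] args) {
        // With stepSize = 1.0: t=1 -> 1.0, t=4 -> 0.5, t=100 -> 0.1
        for (int t : new int[]{1, 4, 100}) {
            System.out.printf("t=%d step=%.2f%n", t, stepAt(1.0, t));
        }
    }
}
```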
 
- 
setMiniBatchFraction
Set the fraction of data to be used for each SGD iteration. Default 1.0 (corresponding to deterministic/classical gradient descent).
- Parameters:
- fraction- (undocumented)
- Returns:
- (undocumented)
 
- 
setNumIterations
Set the number of iterations for SGD. Default 100.
- Parameters:
- iters- (undocumented)
- Returns:
- (undocumented)
 
- 
setRegParam
Set the regularization parameter. Default 0.0.
- Parameters:
- regParam- (undocumented)
- Returns:
- (undocumented)
 
- 
setConvergenceTol
Set the convergence tolerance. Default 0.001. convergenceTol is the condition that decides iteration termination, based on the following logic:
- If the norm of the new solution vector is greater than 1, the difference between solution vectors is compared to a relative tolerance, i.e. normalized by the norm of the new solution vector.
- If the norm of the new solution vector is less than or equal to 1, the difference between solution vectors is compared to an absolute tolerance, without normalization.
Must be between 0.0 and 1.0 inclusive.
- Parameters:
- tolerance- (undocumented)
- Returns:
- (undocumented)
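The termination test described above can be sketched as follows; this is a hedged illustration with hypothetical names, not Spark's internal code, assuming the L2 norm mentioned under runMiniBatchSGD:

```java
/** Sketch of the relative/absolute convergence test described above. */
public class ConvergenceCheck {
    /** L2 (Euclidean) norm of a vector. */
    static double l2(double[] v) {
        double s = 0.0;
        for (double x : v) s += x * x;
        return Math.sqrt(s);
    }

    /** True if the step from prev to curr is within convergenceTol. */
    static boolean converged(double[] prev, double[] curr, double convergenceTol) {
        double[] diff = new double[curr.length];
        for (int i = 0; i < curr.length; i++) diff[i] = curr[i] - prev[i];
        double currNorm = l2(curr);
        double diffNorm = l2(diff);
        return currNorm > 1.0
                ? diffNorm < convergenceTol * currNorm  // relative tolerance
                : diffNorm < convergenceTol;            // absolute tolerance
    }

    public static void main(String[] args) {
        // curr has norm ~5 (> 1), so the diff of 0.004 is compared to 0.001 * 5.
        System.out.println(converged(new double[]{3.0, 4.0},
                                     new double[]{3.0, 4.004}, 0.001)); // true
    }
}
```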
 
- 
setGradient
Set the gradient function (of the loss function of one single data example) to be used for SGD.
- Parameters:
- gradient- (undocumented)
- Returns:
- (undocumented)
 
- 
setUpdater
Set the updater function to actually perform a gradient step in a given direction. The updater is also responsible for performing the update from the regularization term, and therefore determines what kind of regularization is used, if any.
- Parameters:
- updater- (undocumented)
- Returns:
- (undocumented)
 
- 
optimize
Runs gradient descent on the given training data.
- 
optimizeWithLossReturned
public scala.Tuple2<Vector,double[]> optimizeWithLossReturned(RDD<scala.Tuple2<Object, Vector>> data, Vector initialWeights)
Runs gradient descent on the given training data.
- Parameters:
- data- training data
- initialWeights- initial weights
- Returns:
- the solution vector and an array of the loss values computed at each iteration
 
 