class GradientDescent extends Optimizer with Logging
Class used to solve an optimization problem using Gradient Descent.
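A minimal end-to-end sketch of how the optimizer is configured and run. Everything here is illustrative: `sc`, the toy data, and the parameter values are assumptions, and since the constructor is package-private in Spark's source, application code usually reaches this class through higher-level MLlib APIs rather than instantiating it directly.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
import org.apache.spark.rdd.RDD

val sc: SparkContext = ???  // assumed to be available

// Toy training data: (label, features) pairs.
val data: RDD[(Double, Vector)] = sc.parallelize(Seq(
  (1.0, Vectors.dense(1.0, 2.0)),
  (0.0, Vectors.dense(-1.0, -0.5))
))

// Assumption: the constructor is private[spark], so this line only compiles
// from within Spark's own packages; it is shown here for illustration.
val optimizer = new GradientDescent(new LogisticGradient(), new SquaredL2Updater())
  .setStepSize(1.0)
  .setNumIterations(100)
  .setRegParam(0.01)
  .setMiniBatchFraction(1.0)
  .setConvergenceTol(0.001)

val weights: Vector = optimizer.optimize(data, Vectors.dense(0.0, 0.0))
```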
- Source: GradientDescent.scala
- Inheritance: GradientDescent → Logging → Optimizer → Serializable → AnyRef → Any
Value Members
- final def !=(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def ##(): Int
  Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  Definition Classes: Any
- def clone(): AnyRef
  Attributes: protected[lang]
  Definition Classes: AnyRef
  Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- def finalize(): Unit
  Attributes: protected[lang]
  Definition Classes: AnyRef
  Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  Definition Classes: AnyRef → Any
  Annotations: @native()
- def hashCode(): Int
  Definition Classes: AnyRef → Any
  Annotations: @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
  Attributes: protected
  Definition Classes: Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
  Attributes: protected
  Definition Classes: Logging
- final def isInstanceOf[T0]: Boolean
  Definition Classes: Any
- def isTraceEnabled(): Boolean
  Attributes: protected
  Definition Classes: Logging
- def log: Logger
  Attributes: protected
  Definition Classes: Logging
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected
  Definition Classes: Logging
- def logDebug(msg: ⇒ String): Unit
  Attributes: protected
  Definition Classes: Logging
- def logError(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected
  Definition Classes: Logging
- def logError(msg: ⇒ String): Unit
  Attributes: protected
  Definition Classes: Logging
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected
  Definition Classes: Logging
- def logInfo(msg: ⇒ String): Unit
  Attributes: protected
  Definition Classes: Logging
- def logName: String
  Attributes: protected
  Definition Classes: Logging
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected
  Definition Classes: Logging
- def logTrace(msg: ⇒ String): Unit
  Attributes: protected
  Definition Classes: Logging
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
  Attributes: protected
  Definition Classes: Logging
- def logWarning(msg: ⇒ String): Unit
  Attributes: protected
  Definition Classes: Logging
- final def ne(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- final def notify(): Unit
  Definition Classes: AnyRef
  Annotations: @native()
- final def notifyAll(): Unit
  Definition Classes: AnyRef
  Annotations: @native()
- def optimize(data: RDD[(Double, Vector)], initialWeights: Vector): Vector
  Runs gradient descent on the given training data.
  - data: training data
  - initialWeights: initial weights
  - returns: solution vector
  Definition Classes: GradientDescent → Optimizer
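For intuition, here is a simplified, self-contained sketch of what one iteration of the mini-batch SGD loop behind this method does. It is not Spark's actual implementation: it hard-codes a least-squares gradient, works on plain `Array[Double]` weights, and folds the update rule inline instead of delegating to a `Gradient`/`Updater` pair.

```scala
import org.apache.spark.rdd.RDD

// One conceptual mini-batch SGD iteration (illustrative sketch only;
// assumes the sampled batch is non-empty).
def sgdStep(data: RDD[(Double, Array[Double])],
            weights: Array[Double],
            t: Int,                      // 1-based iteration number
            stepSize: Double,
            miniBatchFraction: Double): Array[Double] = {
  // Sample a mini-batch of the training data for this iteration.
  val batch = data.sample(withReplacement = false, miniBatchFraction, seed = 42L + t)
  // Sum per-example gradients of 0.5 * (w.x - label)^2 at the current weights.
  val (gradSum, count) = batch.map { case (label, x) =>
    val pred = x.zip(weights).map { case (xi, wi) => xi * wi }.sum
    (x.map(_ * (pred - label)), 1L)
  }.reduce { case ((g1, c1), (g2, c2)) =>
    (g1.zip(g2).map { case (a, b) => a + b }, c1 + c2)
  }
  // Decaying step size (see setStepSize), applied to the averaged gradient.
  val thisStep = stepSize / math.sqrt(t)
  weights.zip(gradSum).map { case (w, g) => w - thisStep * g / count }
}
```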
- def optimizeWithLossReturned(data: RDD[(Double, Vector)], initialWeights: Vector): (Vector, Array[Double])
  Runs gradient descent on the given training data, also returning the recorded loss values.
  - data: training data
  - initialWeights: initial weights
  - returns: solution vector and the loss values in an array
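Continuing the sketch from the class description, the returned loss array can be used to inspect convergence behaviour (variable names are illustrative):

```scala
val (solution, lossHistory) = optimizer.optimizeWithLossReturned(data, Vectors.dense(0.0, 0.0))
lossHistory.zipWithIndex.foreach { case (loss, i) =>
  println(s"iteration ${i + 1}: loss = $loss")
}
println(s"final weights: $solution")
```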
- def setConvergenceTol(tolerance: Double): GradientDescent.this.type
  Set the convergence tolerance. Default 0.001. convergenceTol decides when the iterations terminate, based on the following logic:
  - If the norm of the new solution vector is greater than 1, the difference between solution vectors is compared to a relative tolerance, i.e. normalized by the norm of the new solution vector.
  - If the norm of the new solution vector is less than or equal to 1, the difference between solution vectors is compared to the absolute tolerance, without normalization.
  Must be between 0.0 and 1.0 inclusive; the test is sketched below.
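A self-contained sketch of the termination test described above, in plain Scala (illustrative only, not Spark's actual implementation):

```scala
// Relative tolerance when ||curr|| > 1, absolute tolerance otherwise.
def converged(prev: Array[Double], curr: Array[Double], tol: Double): Boolean = {
  def norm(v: Array[Double]): Double = math.sqrt(v.map(x => x * x).sum)
  val diffNorm = norm(prev.zip(curr).map { case (p, c) => p - c })
  val currNorm = norm(curr)
  if (currNorm > 1.0) diffNorm < tol * currNorm else diffNorm < tol
}
```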
- def setGradient(gradient: Gradient): GradientDescent.this.type
  Set the gradient function (of the loss function of one single data example) to be used for SGD.
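MLlib ships several Gradient implementations that can be plugged in here; the classes below live in org.apache.spark.mllib.optimization (the `optimizer` value is from the sketch in the class description):

```scala
import org.apache.spark.mllib.optimization.{HingeGradient, LeastSquaresGradient, LogisticGradient}

optimizer.setGradient(new LeastSquaresGradient()) // squared loss (linear regression)
optimizer.setGradient(new LogisticGradient())     // log loss (logistic regression)
optimizer.setGradient(new HingeGradient())        // hinge loss (linear SVM)
```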
- def setMiniBatchFraction(fraction: Double): GradientDescent.this.type
  Set the fraction of data to be used for each SGD iteration. Default 1.0 (corresponding to deterministic/classical gradient descent).
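For intuition about what the fraction means in practice (a hypothetical helper, not part of the API):

```scala
// Expected number of examples sampled per iteration for a dataset of n examples.
def expectedBatchSize(n: Long, fraction: Double): Double = n * fraction

expectedBatchSize(1000000L, 0.1) // ~100000 examples per iteration
expectedBatchSize(1000000L, 1.0) // full batch: classical gradient descent
```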
- def setNumIterations(iters: Int): GradientDescent.this.type
  Set the number of iterations for SGD. Default 100.
- def setRegParam(regParam: Double): GradientDescent.this.type
  Set the regularization parameter. Default 0.0.
- def setStepSize(step: Double): GradientDescent.this.type
  Set the initial step size of SGD for the first step. Default 1.0. In subsequent steps, the step size decreases as stepSize / sqrt(t).
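The decay schedule from the description, written out (a hypothetical helper, not part of the API):

```scala
// Effective step size at (1-based) iteration t under the default schedule.
def effectiveStepSize(initial: Double, t: Int): Double = initial / math.sqrt(t)

effectiveStepSize(1.0, 1)   // 1.0
effectiveStepSize(1.0, 4)   // 0.5
effectiveStepSize(1.0, 100) // 0.1
```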
- def setUpdater(updater: Updater): GradientDescent.this.type
  Set the updater function that actually performs a gradient step in a given direction. The updater is also responsible for performing the update from the regularization term, and therefore determines what kind of regularization is used, if any.
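Since the Updater is what applies the regularization term, choosing one selects the regularization. MLlib provides these implementations in org.apache.spark.mllib.optimization (continuing the class-level sketch):

```scala
import org.apache.spark.mllib.optimization.{L1Updater, SimpleUpdater, SquaredL2Updater}

optimizer.setUpdater(new SimpleUpdater())    // plain gradient step, no regularization
optimizer.setUpdater(new SquaredL2Updater()) // L2 regularization
optimizer.setUpdater(new L1Updater())        // L1 regularization (soft-thresholding)
```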
- final def synchronized[T0](arg0: ⇒ T0): T0
  Definition Classes: AnyRef
- def toString(): String
  Definition Classes: AnyRef → Any
- final def wait(): Unit
  Definition Classes: AnyRef
  Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  Definition Classes: AnyRef
  Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  Definition Classes: AnyRef
  Annotations: @throws( ... ) @native()