final class OnlineLDAOptimizer extends LDAOptimizer with Logging
An online optimizer for LDA. This optimizer implements the online variational Bayes LDA algorithm, which processes a subset of the corpus on each iteration and adaptively updates the term-topic distribution.
Original Online LDA paper: Hoffman, Blei and Bach, "Online Learning for Latent Dirichlet Allocation." NIPS, 2010.
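A minimal usage sketch, assuming an existing SparkContext `sc`; the toy corpus, topic count, and parameter values below are illustrative only:

```scala
import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

// Toy corpus of (document ID, term-count vector) pairs; real corpora are far larger.
val corpus: RDD[(Long, Vector)] = sc.parallelize(Seq(
  (0L, Vectors.dense(1.0, 2.0, 0.0, 5.0)),
  (1L, Vectors.dense(0.0, 1.0, 3.0, 2.0)),
  (2L, Vectors.dense(4.0, 0.0, 1.0, 0.0))
))

val optimizer = new OnlineLDAOptimizer()
  .setMiniBatchFraction(0.5) // large fraction for a tiny corpus; default is 0.05
  .setTau0(1024)             // downweight early iterations
  .setKappa(0.51)            // learning-rate decay; must be in (0.5, 1.0]

val model = new LDA()
  .setK(3)
  .setMaxIterations(10)      // 10 * 0.5 >= 1, so the corpus is covered in expectation
  .setOptimizer(optimizer)
  .run(corpus)
```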
- Annotations
- @Since("1.4.0")
- Source
- LDAOptimizer.scala
- By Inheritance
- OnlineLDAOptimizer
- Logging
- LDAOptimizer
- AnyRef
- Any
Instance Constructors
- new OnlineLDAOptimizer()
Type Members
- implicit class LogStringContext extends AnyRef
- Definition Classes
- Logging
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def getKappa: Double
Learning rate: exponential decay rate.
- Annotations
- @Since("1.4.0")
- def getMiniBatchFraction: Double
Mini-batch fraction, which sets the fraction of documents sampled and used in each iteration.
- Annotations
- @Since("1.4.0")
- def getOptimizeDocConcentration: Boolean
Indicates whether docConcentration (the Dirichlet parameter for the document-topic distribution) will be optimized during training.
- Annotations
- @Since("1.5.0")
- def getTau0: Double
A (positive) learning parameter that downweights early iterations. Larger values make early iterations count less.
- Annotations
- @Since("1.4.0")
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- def setKappa(kappa: Double): OnlineLDAOptimizer.this.type
Learning rate: exponential decay rate. This should be in (0.5, 1.0] to guarantee asymptotic convergence. Default: 0.51, based on the original Online LDA paper.
- Annotations
- @Since("1.4.0")
- def setMiniBatchFraction(miniBatchFraction: Double): OnlineLDAOptimizer.this.type
Mini-batch fraction in (0, 1], which sets the fraction of documents sampled and used in each iteration (see the sizing sketch after the note below).
- Annotations
- @Since("1.4.0")
- Note
This should be adjusted in sync with LDA.setMaxIterations() so the entire corpus is used. Specifically, set both so that maxIterations * miniBatchFraction is at least 1. Default: 0.05, i.e., 5% of total documents.
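A small sizing sketch of that rule; the fraction chosen is illustrative:

```scala
import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}

// Pick maxIterations so that maxIterations * miniBatchFraction >= 1,
// i.e. the entire corpus is sampled at least once in expectation.
val miniBatchFraction = 0.05
val maxIterations = math.ceil(1.0 / miniBatchFraction).toInt // = 20

val lda = new LDA()
  .setMaxIterations(maxIterations)
  .setOptimizer(new OnlineLDAOptimizer().setMiniBatchFraction(miniBatchFraction))
```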
- def setOptimizeDocConcentration(optimizeDocConcentration: Boolean): OnlineLDAOptimizer.this.type
Sets whether to optimize the docConcentration parameter during training (see the sketch below).
Default: false
- Annotations
- @Since("1.5.0")
- def setTau0(tau0: Double): OnlineLDAOptimizer.this.type
A (positive) learning parameter that downweights early iterations. Larger values make early iterations count less. Default: 1024, following the original Online LDA paper.
- Annotations
- @Since("1.4.0")
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def withLogContext(context: HashMap[String, String])(body: => Unit): Unit
- Attributes
- protected
- Definition Classes
- Logging
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)