class LDA extends Logging
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
Terminology:
- "word" = "term": an element of the vocabulary
- "token": instance of a term appearing in a document
- "topic": multinomial distribution over words representing some concept
References:
- Original LDA paper (journal version): Blei, Ng, and Jordan. "Latent Dirichlet Allocation." JMLR, 2003.
- Annotations
- @Since("1.3.0")
- Source
- LDA.scala
Instance Constructors
-    new LDA()
Constructs an LDA instance with default parameters.
- Annotations
- @Since("1.3.0")
 
Type Members
-   implicit  class LogStringContext extends AnyRef
- Definition Classes
- Logging
 
Value Members
-   final  def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any

-   final  def ##: Int
- Definition Classes
- AnyRef → Any

-   final  def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any

-    def MDC(key: LogKey, value: Any): MDC
- Attributes
- protected
- Definition Classes
- Logging

-   final  def asInstanceOf[T0]: T0
- Definition Classes
- Any

-    def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()

-   final  def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef

-    def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
 
-    def getAlpha: Double
Alias for getDocConcentration.
- Annotations
- @Since("1.3.0")
 
-    def getAsymmetricAlpha: Vector
Alias for getAsymmetricDocConcentration.
- Annotations
- @Since("1.5.0")
 
-    def getAsymmetricDocConcentration: Vector
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta"). This is the parameter to a Dirichlet distribution.
- Annotations
- @Since("1.5.0")
 
-    def getBeta: Double
Alias for getTopicConcentration.
- Annotations
- @Since("1.3.0")
 
-    def getCheckpointInterval: Int
Period (in iterations) between checkpoints.
- Annotations
- @Since("1.3.0")
 
-   final  def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
-    def getDocConcentration: Double
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta"). This method assumes the Dirichlet distribution is symmetric and can be described by a single Double parameter. It should fail if docConcentration is asymmetric.
- Annotations
- @Since("1.3.0")
 
-    def getK: Int
Number of topics to infer, i.e., the number of soft cluster centers.
- Annotations
- @Since("1.3.0")
 
-    def getMaxIterations: Int
Maximum number of iterations allowed.
- Annotations
- @Since("1.3.0")
 
-    def getOptimizer: LDAOptimizer
LDAOptimizer used to perform the actual calculation.
- Annotations
- @Since("1.4.0")
 
-    def getSeed: Long
Random seed for cluster initialization.
- Annotations
- @Since("1.3.0")
 
-    def getTopicConcentration: Double
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms. This is the parameter to a symmetric Dirichlet distribution.
- Annotations
- @Since("1.3.0")
- Note
- The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009. 
 
-    def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()

-    def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging

-    def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging

-   final  def isInstanceOf[T0]: Boolean
- Definition Classes
- Any

-    def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging

-    def log: Logger
- Attributes
- protected
- Definition Classes
- Logging

-    def logBasedOnLevel(level: Level)(f: => MessageWithContext): Unit
- Attributes
- protected
- Definition Classes
- Logging
 
-    def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logDebug(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logDebug(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logError(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logError(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logInfo(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logInfo(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logName: String
- Attributes
- protected
- Definition Classes
- Logging

-    def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logTrace(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logTrace(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logWarning(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logWarning(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging

-    def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging

-   final  def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef

-   final  def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()

-   final  def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-    def run(documents: JavaPairRDD[Long, Vector]): LDAModel
Java-friendly version of run().
- Annotations
- @Since("1.3.0")
 
-    def run(documents: RDD[(Long, Vector)]): LDAModel
Learn an LDA model using the given dataset.
- documents
- RDD of documents, which are term (word) count vectors paired with IDs. The term count vectors are "bags of words" with a fixed-size vocabulary (where the vocabulary size is the length of the vector). Document IDs must be unique and greater than or equal to 0. 
- returns
- Inferred LDA model 
 - Annotations
- @Since("1.3.0")
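As a sketch of the "bag of words" input format run() expects, the following uses a hypothetical toTermCounts helper (not part of Spark) in place of real tokenization; the Spark-specific wiring is shown only in comments:

```scala
// Hypothetical helper (not part of Spark) illustrating run()'s input: each
// document is a term-count vector over a fixed-size vocabulary, where the
// vocabulary size is the length of the vector.
def toTermCounts(tokens: Seq[String], vocab: Map[String, Int]): Array[Double] = {
  val counts = Array.fill(vocab.size)(0.0)
  tokens.foreach(t => vocab.get(t).foreach(i => counts(i) += 1.0))
  counts
}

val vocab = Map("graph" -> 0, "node" -> 1, "edge" -> 2)
val doc0 = toTermCounts(Seq("graph", "graph", "edge"), vocab) // Array(2.0, 0.0, 1.0)

// With Spark on the classpath, each array would be wrapped as a Vector and
// paired with a unique document ID >= 0 before training, roughly:
//   val corpus: RDD[(Long, Vector)] = sc.parallelize(Seq((0L, Vectors.dense(doc0))))
//   val model = new LDA().setK(3).run(corpus)
```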
 
-    def setAlpha(alpha: Double): LDA.this.type
Alias for setDocConcentration().
- Annotations
- @Since("1.3.0")
 
-    def setAlpha(alpha: Vector): LDA.this.type
Alias for setDocConcentration().
- Annotations
- @Since("1.5.0")
 
-    def setBeta(beta: Double): LDA.this.type
Alias for setTopicConcentration().
- Annotations
- @Since("1.3.0")
 
-    def setCheckpointInterval(checkpointInterval: Int): LDA.this.type
Set the period (in iterations) between checkpoints (greater than or equal to 1), or -1 to disable checkpointing. E.g., 10 means that the cache will get checkpointed every 10 iterations. Checkpointing helps with recovery (when nodes fail). It also helps with eliminating temporary shuffle files on disk, which can be important when LDA is run for many iterations. If the checkpoint directory is not set in org.apache.spark.SparkContext, this setting is ignored. (default = 10)
- Annotations
- @Since("1.3.0")
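The schedule this parameter implies can be sketched in plain Scala (assumption: checkpoints fall on iteration numbers divisible by the interval, counting from 1; this is an illustration, not Spark's internal bookkeeping):

```scala
// Sketch of the checkpointing schedule implied by checkpointInterval.
// Assumption: iterations count from 1; a non-positive interval (e.g. -1)
// disables checkpointing entirely.
def checkpointIterations(maxIterations: Int, interval: Int): Seq[Int] =
  if (interval <= 0) Seq.empty
  else (1 to maxIterations).filter(_ % interval == 0)

val schedule = checkpointIterations(30, 10) // iterations 10, 20, 30
```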
 
-    def setDocConcentration(docConcentration: Double): LDA.this.type
Replicates a Double docConcentration to create a symmetric prior.
- Annotations
- @Since("1.3.0")
 
-    def setDocConcentration(docConcentration: Vector): LDA.this.type
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta"). This is the parameter to a Dirichlet distribution, where larger values mean more smoothing (more regularization). If set to a singleton vector Vector(-1), then docConcentration is set automatically. If set to a singleton vector Vector(t) where t != -1, then t is replicated to a vector of length k during LDAOptimizer.initialize(). Otherwise, the docConcentration vector must be length k. (default = Vector(-1) = automatic)
Optimizer-specific parameter settings:
- EM
- Currently only supports symmetric distributions, so all values in the vector should be the same.
- Values should be greater than 1.0.
- default = uniformly (50 / k) + 1, where 50/k is common in LDA libraries and +1 follows from Asuncion et al. (2009), who recommend a +1 adjustment for EM.
- Online
- Values should be greater than or equal to 0.
- default = uniformly (1.0 / k), following the reference online variational Bayes implementation.
 
 - Annotations
- @Since("1.5.0")
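The documented defaults above can be computed directly; a small sketch, with values taken from the description rather than queried from Spark:

```scala
// Default symmetric docConcentration ("alpha") values per topic, as described above.
def emDefaultAlpha(k: Int): Double = 50.0 / k + 1.0 // EM: (50 / k) + 1, must be > 1.0
def onlineDefaultAlpha(k: Int): Double = 1.0 / k    // Online: 1.0 / k, must be >= 0

val k = 10
val emAlpha = emDefaultAlpha(k)         // 6.0
val onlineAlpha = onlineDefaultAlpha(k) // 0.1
```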
 
-    def setK(k: Int): LDA.this.type
Set the number of topics to infer, i.e., the number of soft cluster centers. (default = 10)
- Annotations
- @Since("1.3.0")
 
-    def setMaxIterations(maxIterations: Int): LDA.this.type
Set the maximum number of iterations allowed. (default = 20)
- Annotations
- @Since("1.3.0")
 
-    def setOptimizer(optimizerName: String): LDA.this.type
Set the LDAOptimizer used to perform the actual calculation by algorithm name. Currently "em" and "online" are supported.
- Annotations
- @Since("1.4.0")
 
-    def setOptimizer(optimizer: LDAOptimizer): LDA.this.type
Set the LDAOptimizer used to perform the actual calculation. (default = EMLDAOptimizer)
- Annotations
- @Since("1.4.0")
 
-    def setSeed(seed: Long): LDA.this.type
Set the random seed for cluster initialization.
- Annotations
- @Since("1.3.0")
 
-    def setTopicConcentration(topicConcentration: Double): LDA.this.type
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms. This is the parameter to a symmetric Dirichlet distribution. If set to -1, then topicConcentration is set automatically. (default = -1 = automatic)
Optimizer-specific parameter settings:
- EM
- Value should be greater than 1.0.
- default = 0.1 + 1, where 0.1 gives a small amount of smoothing and +1 follows Asuncion et al. (2009), who recommend a +1 adjustment for EM.
- Online
- Value should be greater than or equal to 0.
- default = (1.0 / k), following the reference online variational Bayes implementation.
- Annotations
- @Since("1.3.0")
- Note
- The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.
 
-   final  def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef

-    def toString(): String
- Definition Classes
- AnyRef → Any

-   final  def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])

-   final  def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()

-   final  def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])

-    def withLogContext(context: Map[String, String])(body: => Unit): Unit
- Attributes
- protected
- Definition Classes
- Logging
 
Deprecated Value Members
-    def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
- (Since version 9)