class LDA extends Logging
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
Terminology:
- "word" = "term": an element of the vocabulary
- "token": instance of a term appearing in a document
- "topic": multinomial distribution over words representing some concept
References:
- Original LDA paper (journal version): Blei, Ng, and Jordan. "Latent Dirichlet Allocation." JMLR, 2003.
- Annotations
- @Since( "1.3.0" )
- Source
- LDA.scala
- See also
- Latent Dirichlet allocation (Wikipedia)
- Inheritance
- LDA → Logging → AnyRef → Any
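For orientation, a minimal end-to-end sketch of the API documented below. This is illustrative only: it assumes an active SparkContext named sc, and the toy term counts are made up.

    import org.apache.spark.mllib.clustering.{LDA, LDAModel}
    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    import org.apache.spark.rdd.RDD

    // Each document is a unique ID (>= 0) paired with a term-count vector
    // over a fixed vocabulary (here, vocabulary size 4).
    val documents: RDD[(Long, Vector)] = sc.parallelize(Seq(
      (0L, Vectors.dense(1.0, 2.0, 0.0, 5.0)),
      (1L, Vectors.dense(0.0, 1.0, 3.0, 0.0)),
      (2L, Vectors.dense(4.0, 0.0, 1.0, 2.0))
    ))

    val model: LDAModel = new LDA()
      .setK(2)              // number of topics
      .setMaxIterations(50)
      .run(documents)

    // topicsMatrix is vocabSize x k; column j is topic j's distribution over terms.
    println(model.topicsMatrix)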
Instance Constructors
- new LDA()
Constructs an LDA instance with default parameters.
- Annotations
- @Since( "1.3.0" )
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##(): Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native() @IntrinsicCandidate()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def getAlpha: Double
Alias for getDocConcentration.
- Annotations
- @Since( "1.3.0" )
- def getAsymmetricAlpha: Vector
Alias for getAsymmetricDocConcentration.
- Annotations
- @Since( "1.5.0" )
- def getAsymmetricDocConcentration: Vector
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
This is the parameter to a Dirichlet distribution.
- Annotations
- @Since( "1.5.0" )
- def getBeta: Double
Alias for getTopicConcentration.
- Annotations
- @Since( "1.3.0" )
- def getCheckpointInterval: Int
Period (in iterations) between checkpoints.
- Annotations
- @Since( "1.3.0" )
- final def getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def getDocConcentration: Double
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
This method assumes the Dirichlet distribution is symmetric and can be described by a single Double parameter. It should fail if docConcentration is asymmetric.
- Annotations
- @Since( "1.3.0" )
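A small sketch of the symmetric assumption (values are illustrative):

    // A single Double sets a symmetric prior, so the scalar getter works:
    val lda = new LDA().setK(4).setDocConcentration(0.5)
    println(lda.getDocConcentration)  // 0.5
    // Per the description above, after an asymmetric setDocConcentration(Vector)
    // this getter should fail; use getAsymmetricDocConcentration instead.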
- def getK: Int
Number of topics to infer, i.e., the number of soft cluster centers.
- Annotations
- @Since( "1.3.0" )
- def getMaxIterations: Int
Maximum number of iterations allowed.
- Annotations
- @Since( "1.3.0" )
- def getOptimizer: LDAOptimizer
LDAOptimizer used to perform the actual calculation.
- Annotations
- @Since( "1.4.0" )
- def getSeed: Long
Random seed for cluster initialization.
- Annotations
- @Since( "1.3.0" )
- def getTopicConcentration: Double
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
This is the parameter to a symmetric Dirichlet distribution.
- Annotations
- @Since( "1.3.0" )
- Note
The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- def run(documents: JavaPairRDD[Long, Vector]): LDAModel
Java-friendly version of run().
- Annotations
- @Since( "1.3.0" )
- def run(documents: RDD[(Long, Vector)]): LDAModel
Learn an LDA model using the given dataset.
- documents
RDD of documents, which are term (word) count vectors paired with IDs. The term count vectors are "bags of words" with a fixed-size vocabulary (where the vocabulary size is the length of the vector). Document IDs must be unique and greater than or equal to 0.
- returns
Inferred LDA model
- Annotations
- @Since( "1.3.0" )
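A sketch of the documents contract using sparse bag-of-words vectors (the vocabulary size and counts are illustrative; assumes an active SparkContext sc):

    // Indices are term IDs in a vocabulary of size 5; values are token counts.
    // Document IDs are unique and >= 0, as required above.
    val docs: RDD[(Long, Vector)] = sc.parallelize(Seq(
      (0L, Vectors.sparse(5, Array(0, 3), Array(2.0, 1.0))),
      (1L, Vectors.sparse(5, Array(1, 2, 4), Array(1.0, 1.0, 3.0)))
    ))
    val ldaModel: LDAModel = new LDA().setK(2).run(docs)
    // Top terms per topic, as (termIndices, termWeights) arrays:
    ldaModel.describeTopics(maxTermsPerTopic = 3).foreach { case (terms, weights) =>
      println(terms.zip(weights).mkString(", "))
    }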
- def setAlpha(alpha: Double): LDA.this.type
Alias for setDocConcentration().
- Annotations
- @Since( "1.3.0" )
- def setAlpha(alpha: Vector): LDA.this.type
Alias for setDocConcentration().
- Annotations
- @Since( "1.5.0" )
- def setBeta(beta: Double): LDA.this.type
Alias for setTopicConcentration().
- Annotations
- @Since( "1.3.0" )
- def setCheckpointInterval(checkpointInterval: Int): LDA.this.type
Set the checkpoint interval (greater than or equal to 1), or disable checkpointing (-1). E.g., 10 means the cache is checkpointed every 10 iterations. Checkpointing helps with recovery when nodes fail, and it eliminates temporary shuffle files on disk, which can be important when LDA is run for many iterations. If the checkpoint directory is not set in org.apache.spark.SparkContext, this setting is ignored. (default = 10)
- Annotations
- @Since( "1.3.0" )
- See also
- org.apache.spark.SparkContext.setCheckpointDir
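A sketch (the checkpoint directory is a placeholder path; assumes an active SparkContext sc):

    // The interval only takes effect once a checkpoint directory is set.
    sc.setCheckpointDir("hdfs:///tmp/lda-checkpoints")  // placeholder path
    val lda = new LDA()
      .setCheckpointInterval(10)  // checkpoint every 10 iterations
      .setMaxIterations(200)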
- def setDocConcentration(docConcentration: Double): LDA.this.type
Replicates a Double docConcentration to create a symmetric prior.
- Annotations
- @Since( "1.3.0" )
- def setDocConcentration(docConcentration: Vector): LDA.this.type
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
This is the parameter to a Dirichlet distribution, where larger values mean more smoothing (more regularization).
If set to a singleton vector Vector(-1), then docConcentration is set automatically. If set to a singleton vector Vector(t) where t != -1, then t is replicated to a vector of length k during LDAOptimizer.initialize(). Otherwise, the docConcentration vector must have length k. (default = Vector(-1) = automatic) The three accepted forms are sketched in the example below.
Optimizer-specific parameter settings:
- EM
- Currently only supports symmetric distributions, so all values in the vector should be the same.
- Values should be greater than 1.0
- default = uniformly (50 / k) + 1, where 50/k is common in LDA libraries and +1 follows from Asuncion et al. (2009), who recommend a +1 adjustment for EM.
- Online
- Values should be greater than or equal to 0
- default = uniformly (1.0 / k), following the onlineldavb reference implementation (https://github.com/blei-lab/onlineldavb).
- Annotations
- @Since( "1.5.0" )
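The three accepted forms, as a sketch (concentration values are illustrative):

    val k = 10
    // Automatic (the default): singleton Vector(-1)
    val auto = new LDA().setK(k).setDocConcentration(Vectors.dense(-1.0))
    // Symmetric: singleton Vector(t), replicated to length k by the optimizer
    val symmetric = new LDA().setK(k).setDocConcentration(Vectors.dense(0.5))
    // Asymmetric: a full length-k vector (the EM optimizer currently requires
    // all values to be equal, per the settings above)
    val asymmetric = new LDA().setK(k)
      .setDocConcentration(Vectors.dense(Array.tabulate(k)(i => 0.1 + 0.05 * i)))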
- def setK(k: Int): LDA.this.type
Set the number of topics to infer, i.e., the number of soft cluster centers. (default = 10)
- Annotations
- @Since( "1.3.0" )
- def setMaxIterations(maxIterations: Int): LDA.this.type
Set the maximum number of iterations allowed. (default = 20)
- Annotations
- @Since( "1.3.0" )
- def setOptimizer(optimizerName: String): LDA.this.type
Set the LDAOptimizer used to perform the actual calculation by algorithm name. Currently "em" and "online" are supported.
- Annotations
- @Since( "1.4.0" )
- def setOptimizer(optimizer: LDAOptimizer): LDA.this.type
Set the LDAOptimizer used to perform the actual calculation. (default = EMLDAOptimizer)
- Annotations
- @Since( "1.4.0" )
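Both overloads, as a sketch (the OnlineLDAOptimizer tuning value is illustrative):

    import org.apache.spark.mllib.clustering.OnlineLDAOptimizer
    // By name:
    val emLda = new LDA().setK(5).setOptimizer("em")
    // By instance, which exposes optimizer-specific knobs:
    val onlineLda = new LDA().setK(5)
      .setOptimizer(new OnlineLDAOptimizer().setMiniBatchFraction(0.05))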
- def setSeed(seed: Long): LDA.this.type
Set the random seed for cluster initialization.
- Annotations
- @Since( "1.3.0" )
- def setTopicConcentration(topicConcentration: Double): LDA.this.type
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
This is the parameter to a symmetric Dirichlet distribution.
If set to -1, then topicConcentration is set automatically. (default = -1 = automatic) Both modes are sketched in the example below.
Optimizer-specific parameter settings:
- EM
- Value should be greater than 1.0
- default = 0.1 + 1, where 0.1 gives a small amount of smoothing and +1 follows Asuncion et al. (2009), who recommend a +1 adjustment for EM.
- Online
- Value should be greater than or equal to 0
- default = (1.0 / k), following the onlineldavb reference implementation (https://github.com/blei-lab/onlineldavb).
- Annotations
- @Since( "1.3.0" )
- Note
The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.
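A sketch of the two modes (values are illustrative):

    // Automatic (the default):
    val autoBeta = new LDA().setTopicConcentration(-1)
    // Explicit: EM expects a value > 1.0; online expects a value >= 0.
    val emBeta = new LDA().setOptimizer("em").setTopicConcentration(1.1)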
- final def synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] ) @Deprecated
- Deprecated