
class LDA extends Logging

Latent Dirichlet Allocation (LDA), a topic model designed for text documents.

Terminology:

  • "word" = "term": an element of the vocabulary
  • "token": instance of a term appearing in a document
  • "topic": multinomial distribution over words representing some concept

References:

  • Original LDA paper (journal version): Blei, Ng, and Jordan. "Latent Dirichlet Allocation." JMLR, 2003.
Annotations
@Since("1.3.0")
Source
LDA.scala
See also

Latent Dirichlet allocation (Wikipedia)
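
Example

A minimal usage sketch (not part of the original Scaladoc), assuming an existing SparkContext sc; the tiny in-memory corpus, the 4-term vocabulary, and all parameter values are illustrative placeholders:

    import org.apache.spark.mllib.clustering.LDA
    import org.apache.spark.mllib.linalg.Vectors

    // Each document is an (id, term-count vector) pair over a fixed vocabulary.
    val corpus = sc.parallelize(Seq(
      (0L, Vectors.dense(1.0, 2.0, 0.0, 5.0)),
      (1L, Vectors.dense(0.0, 1.0, 3.0, 0.0)),
      (2L, Vectors.dense(4.0, 0.0, 0.0, 2.0))
    ))

    val ldaModel = new LDA()
      .setK(2)               // number of topics
      .setMaxIterations(50)
      .setSeed(12345L)
      .run(corpus)

    // Each topic is a distribution over the vocabulary; print the top 3 terms per topic.
    ldaModel.describeTopics(3).foreach { case (termIndices, termWeights) =>
      println(termIndices.zip(termWeights).mkString(", "))
    }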

Linear Supertypes
Logging, AnyRef, Any

Instance Constructors

  1. new LDA()

    Constructs an LDA instance with default parameters.

    Annotations
    @Since("1.3.0")

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    Logging

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. def getAlpha: Double

    Alias for getDocConcentration

    Annotations
    @Since("1.3.0")
  9. def getAsymmetricAlpha: Vector

    Alias for getAsymmetricDocConcentration

    Annotations
    @Since("1.5.0")
  10. def getAsymmetricDocConcentration: Vector

    Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").

    This is the parameter to a Dirichlet distribution.

    Annotations
    @Since("1.5.0")
  11. def getBeta: Double

    Alias for getTopicConcentration

    Annotations
    @Since("1.3.0")
  12. def getCheckpointInterval: Int

    Period (in iterations) between checkpoints.

    Annotations
    @Since("1.3.0")
  13. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  14. def getDocConcentration: Double

    Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").

    This method assumes the Dirichlet distribution is symmetric and can be described by a single Double parameter. It should fail if docConcentration is asymmetric.

    Annotations
    @Since("1.3.0")
  15. def getK: Int

    Number of topics to infer, i.e., the number of soft cluster centers.

    Annotations
    @Since("1.3.0")
  16. def getMaxIterations: Int

    Maximum number of iterations allowed.

    Annotations
    @Since("1.3.0")
  17. def getOptimizer: LDAOptimizer

    LDAOptimizer used to perform the actual calculation.

    Annotations
    @Since("1.4.0")
  18. def getSeed: Long

    Random seed for cluster initialization.

    Annotations
    @Since("1.3.0")
  19. def getTopicConcentration: Double

    Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.

    This is the parameter to a symmetric Dirichlet distribution.

    Annotations
    @Since("1.3.0")
    Note

    The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.

  20. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  21. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  22. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  23. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  24. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  25. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  26. def logDebug(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logError(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logInfo(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logInfo(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  39. def logTrace(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logTrace(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logWarning(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logWarning(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  48. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  49. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  50. def run(documents: JavaPairRDD[Long, Vector]): LDAModel

    Java-friendly version of run()

    Annotations
    @Since("1.3.0")
  51. def run(documents: RDD[(Long, Vector)]): LDAModel

    Learn an LDA model using the given dataset.

    documents

    RDD of documents, which are term (word) count vectors paired with IDs. The term count vectors are "bags of words" with a fixed-size vocabulary (where the vocabulary size is the length of the vector). Document IDs must be unique and greater than or equal to 0.

    returns

    Inferred LDA model

    Annotations
    @Since("1.3.0")
  52. def setAlpha(alpha: Double): LDA.this.type

    Alias for setDocConcentration()

    Annotations
    @Since("1.3.0")
  53. def setAlpha(alpha: Vector): LDA.this.type

    Alias for setDocConcentration()

    Annotations
    @Since("1.5.0")
  54. def setBeta(beta: Double): LDA.this.type

    Alias for setTopicConcentration()

    Annotations
    @Since("1.3.0")
  55. def setCheckpointInterval(checkpointInterval: Int): LDA.this.type

    Set the checkpoint interval (greater than or equal to 1), or disable checkpointing (-1). E.g., 10 means the cache gets checkpointed every 10 iterations. Checkpointing helps with recovery when nodes fail, and it also helps eliminate temporary shuffle files on disk, which can be important when LDA is run for many iterations. If the checkpoint directory is not set in org.apache.spark.SparkContext, this setting is ignored. (default = 10)

    Annotations
    @Since("1.3.0")
    See also

    org.apache.spark.SparkContext#setCheckpointDir
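
    For context, a small sketch (not from the original docs; the directory path and parameter values are placeholders) of enabling checkpointing before running LDA:

      import org.apache.spark.mllib.clustering.LDA

      // setCheckpointInterval is ignored unless a checkpoint directory is set on the SparkContext.
      sc.setCheckpointDir("/tmp/lda-checkpoints")

      val lda = new LDA()
        .setK(20)
        .setMaxIterations(200)
        .setCheckpointInterval(10)   // checkpoint every 10 iterations; -1 disables checkpointing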

  56. def setDocConcentration(docConcentration: Double): LDA.this.type

    Replicates a Double docConcentration to create a symmetric prior.

    Annotations
    @Since("1.3.0")
  57. def setDocConcentration(docConcentration: Vector): LDA.this.type

    Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").

    This is the parameter to a Dirichlet distribution, where larger values mean more smoothing (more regularization).

    If set to a singleton vector Vector(-1), then docConcentration is set automatically. If set to a singleton vector Vector(t) where t != -1, then t is replicated to a vector of length k during LDAOptimizer.initialize(). Otherwise, the docConcentration vector must have length k. (default = Vector(-1) = automatic)

    Optimizer-specific parameter settings:

    • EM
      • Currently only supports symmetric distributions, so all values in the vector should be the same.
      • Values should be greater than 1.0
      • default = uniformly (50 / k) + 1, where 50/k is common in LDA libraries and +1 follows from Asuncion et al. (2009), who recommend a +1 adjustment for EM.
    • Online
      • Values should be greater than or equal to 0
      • default = uniformly (1.0 / k), following the onlineldavb reference implementation (https://github.com/Blei-Lab/onlineldavb).
    Annotations
    @Since("1.5.0")
  58. def setK(k: Int): LDA.this.type

    Set the number of topics to infer, i.e., the number of soft cluster centers. (default = 10)

    Annotations
    @Since("1.3.0")
  59. def setMaxIterations(maxIterations: Int): LDA.this.type

    Set the maximum number of iterations allowed. (default = 20)

    Annotations
    @Since("1.3.0")
  60. def setOptimizer(optimizerName: String): LDA.this.type

    Set the LDAOptimizer used to perform the actual calculation by algorithm name. Currently "em" and "online" are supported.

    Annotations
    @Since("1.4.0")
  61. def setOptimizer(optimizer: LDAOptimizer): LDA.this.type

    Set the LDAOptimizer used to perform the actual calculation. (default = EMLDAOptimizer)

    Annotations
    @Since("1.4.0")
  62. def setSeed(seed: Long): LDA.this.type

    Set the random seed for cluster initialization.

    Annotations
    @Since("1.3.0")
  63. def setTopicConcentration(topicConcentration: Double): LDA.this.type

    Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.

    This is the parameter to a symmetric Dirichlet distribution.

    Annotations
    @Since("1.3.0")
    Note

    The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.

    If set to -1, then topicConcentration is set automatically. (default = -1 = automatic)

    Optimizer-specific parameter settings:

    • EM
      • Value should be greater than 1.0
      • default = 0.1 + 1, where 0.1 gives a small amount of smoothing and +1 follows Asuncion et al. (2009), who recommend a +1 adjustment for EM.
    • Online
      • Value should be greater than or equal to 0
      • default = (1.0 / k), following the onlineldavb reference implementation (https://github.com/Blei-Lab/onlineldavb).
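
    A small sketch of setting topicConcentration explicitly versus leaving it automatic (not from the original docs; the values are illustrative only):

      import org.apache.spark.mllib.clustering.LDA

      // EM expects a value greater than 1.0; -1 requests the optimizer-specific default.
      val explicitBeta  = new LDA().setK(10).setOptimizer("em").setTopicConcentration(1.1)
      val automaticBeta = new LDA().setK(10).setTopicConcentration(-1)
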
  64. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  65. def toString(): String
    Definition Classes
    AnyRef → Any
  66. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  67. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  68. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  69. def withLogContext(context: HashMap[String, String])(body: => Unit): Unit
    Attributes
    protected
    Definition Classes
    Logging

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
