public abstract class LDAModel extends Model<LDAModel> implements LDAParams, Logging, MLWritable

Model fitted by LDA.

param: vocabSize Vocabulary size (number of terms or words in the vocabulary)
param: sparkSession Used to construct local DataFrames for returning query results
| Modifier and Type | Method and Description |
|---|---|
| Dataset<Row> | describeTopics() |
| Dataset<Row> | describeTopics(int maxTermsPerTopic) Return the topics described by their top-weighted terms. |
| Vector | estimatedDocConcentration() Value for docConcentration estimated from data. |
| abstract boolean | isDistributed() Indicates whether this instance is of type DistributedLDAModel. |
| double | logLikelihood(Dataset<?> dataset) Calculates a lower bound on the log likelihood of the entire corpus. |
| double | logPerplexity(Dataset<?> dataset) Calculate an upper bound on perplexity. |
| LDAModel | setFeaturesCol(String value) The features for LDA should be a Vector representing the word counts in a document. |
| LDAModel | setSeed(long value) |
| LDAModel | setTopicDistributionCol(String value) |
| Matrix | topicsMatrix() Inferred topics, where each topic is represented by a distribution over terms. |
| Dataset<Row> | transform(Dataset<?> dataset) Transforms the input dataset. |
| StructType | transformSchema(StructType schema) :: DeveloperApi :: |
| String | uid() An immutable unique ID for the object and its derivatives. |
| int | vocabSize() |
Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.clustering.LDAParams:
docConcentration, getDocConcentration, getK, getKeepLastCheckpoint, getLearningDecay, getLearningOffset, getOldDocConcentration, getOldOptimizer, getOldTopicConcentration, getOptimizeDocConcentration, getOptimizer, getSubsamplingRate, getTopicConcentration, getTopicDistributionCol, k, keepLastCheckpoint, learningDecay, learningOffset, optimizeDocConcentration, optimizer, subsamplingRate, supportedOptimizers, topicConcentration, topicDistributionCol, validateAndTransformSchema

Methods inherited from interface org.apache.spark.ml.param.shared.HasFeaturesCol:
featuresCol, getFeaturesCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasMaxIter:
getMaxIter, maxIter

Methods inherited from interface org.apache.spark.ml.param.shared.HasCheckpointInterval:
checkpointInterval, getCheckpointInterval

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copy, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.util.Identifiable:
toString

Methods inherited from interface org.apache.spark.internal.Logging:
initializeLogging, initializeLogIfNecessary, initializeLogIfNecessary, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save, write
public Dataset<Row> describeTopics(int maxTermsPerTopic)

Return the topics described by their top-weighted terms.

maxTermsPerTopic - Maximum number of terms to collect for each topic. Default value of 10.

public Vector estimatedDocConcentration()

Value for docConcentration estimated from data. If Online LDA was used and optimizeDocConcentration was set to false, then this returns the fixed (given) value for the docConcentration parameter.

public abstract boolean isDistributed()

Indicates whether this instance is of type DistributedLDAModel.
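describeTopics reports, for each topic, the indices and weights of its top-weighted terms. The selection step it performs can be illustrated without Spark; the class and method names below are mine, not part of the Spark API:

```java
import java.util.Arrays;
import java.util.Comparator;

// Spark-free sketch of the selection describeTopics performs for one topic:
// given a weight per vocabulary term, pick the indices of the
// maxTermsPerTopic highest-weighted terms, largest weight first.
class TopTerms {
    static int[] topTermIndices(double[] termWeights, int maxTermsPerTopic) {
        Integer[] idx = new Integer[termWeights.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort term indices by their weight, descending.
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> termWeights[i]).reversed());
        int n = Math.min(maxTermsPerTopic, idx.length);
        int[] top = new int[n];
        for (int i = 0; i < n; i++) top[i] = idx[i];
        return top;
    }

    public static void main(String[] args) {
        double[] topic = {0.05, 0.40, 0.25, 0.30};  // toy weights over a 4-term vocabulary
        System.out.println(Arrays.toString(topTermIndices(topic, 3)));  // [1, 3, 2]
    }
}
```

The real method does this per topic and returns the result as a local Dataset<Row> with termIndices and termWeights columns.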
public double logLikelihood(Dataset<?> dataset)

Calculates a lower bound on the log likelihood of the entire corpus. See Equation (16) in the Online LDA paper (Hoffman et al., 2010).

WARNING: If this model is an instance of DistributedLDAModel (produced when optimizer is set to "em"), this involves collecting a large topicsMatrix to the driver. This implementation may be changed in the future.

dataset - test corpus to use for calculating log likelihood

public double logPerplexity(Dataset<?> dataset)

Calculate an upper bound on perplexity.

WARNING: If this model is an instance of DistributedLDAModel (produced when optimizer is set to "em"), this involves collecting a large topicsMatrix to the driver. This implementation may be changed in the future.

dataset - test corpus to use for calculating perplexity

public LDAModel setFeaturesCol(String value)

The features for LDA should be a Vector representing the word counts in a document. The vector should be of length vocabSize, with counts for each term (word).

value - (undocumented)

public LDAModel setSeed(long value)

public LDAModel setTopicDistributionCol(String value)
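The logLikelihood and logPerplexity bounds above are two views of the same quantity: perplexity is conventionally the exponential of the negated per-token log likelihood. A Spark-free Java sketch of that arithmetic (the class name and numbers are illustrative, not the Spark implementation):

```java
// Conventional relation between a corpus log-likelihood (bound) and
// perplexity: perplexity = exp(-logLikelihood / totalTokenCount),
// so the log-perplexity is the negated per-token log likelihood.
class Perplexity {
    static double logPerplexity(double logLikelihoodBound, long totalTokenCount) {
        return -logLikelihoodBound / totalTokenCount;
    }

    public static void main(String[] args) {
        // A corpus of 1000 tokens with a log-likelihood bound of -2000 nats.
        double lp = logPerplexity(-2000.0, 1000);
        System.out.println(lp);            // 2.0 nats per token
        System.out.println(Math.exp(lp));  // perplexity bound, about 7.389
    }
}
```

Because the log likelihood is a lower bound, the derived perplexity is an upper bound, which matches the wording of the two methods above.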
public Matrix topicsMatrix()

Inferred topics, where each topic is represented by a distribution over terms.

WARNING: If this model is actually a DistributedLDAModel instance produced by the Expectation-Maximization ("em") optimizer, then this method could involve collecting a large amount of data to the driver (on the order of vocabSize x k).
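topicsMatrix is a vocabSize x k matrix with one column per topic, each column a distribution over terms. A small Spark-free Java sketch of what "each column is a distribution" means, shown via column normalization (class and method names are mine):

```java
// Sketch of the topicsMatrix layout: vocabSize rows (terms) by k columns
// (topics); normalizing each column makes it a distribution over terms,
// i.e. non-negative entries that sum to 1.
class TopicsMatrixSketch {
    static double[][] columnNormalize(double[][] raw) {
        int vocabSize = raw.length;
        int k = raw[0].length;
        double[][] m = new double[vocabSize][k];
        for (int j = 0; j < k; j++) {
            double sum = 0.0;
            for (int i = 0; i < vocabSize; i++) sum += raw[i][j];
            for (int i = 0; i < vocabSize; i++) m[i][j] = raw[i][j] / sum;
        }
        return m;
    }

    public static void main(String[] args) {
        // vocabSize = 3 terms, k = 2 topics, raw (unnormalized) term weights.
        double[][] m = columnNormalize(new double[][]{{2, 1}, {1, 1}, {1, 2}});
        System.out.println(m[0][0] + " " + m[2][1]);  // 0.5 0.5
    }
}
```

This also makes the warning above concrete: collecting the matrix costs on the order of vocabSize x k values on the driver.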
public Dataset<Row> transform(Dataset<?> dataset)

Transforms the input dataset.

WARNING: If this model is an instance of DistributedLDAModel (produced when optimizer is set to "em"), this involves collecting a large topicsMatrix to the driver. This implementation may be changed in the future.

Specified by:
transform in class Transformer

dataset - (undocumented)

public StructType transformSchema(StructType schema)

:: DeveloperApi ::

Description copied from class: PipelineStage

Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). A typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.

Specified by:
transformSchema in class PipelineStage

schema - (undocumented)

public String uid()

Description copied from interface: Identifiable

An immutable unique ID for the object and its derivatives.

Specified by:
uid in interface Identifiable
public int vocabSize()
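The features column that transform consumes (see setFeaturesCol above) should hold word-count vectors of length vocabSize(): entry i is the number of occurrences of vocabulary term i in the document. A minimal Spark-free Java sketch of building one such vector (helper names are illustrative; in Spark this is typically done by CountVectorizer):

```java
// Sketch of the feature layout LDA expects: a vector of length vocabSize
// whose i-th entry counts how often vocabulary term i appears in a document.
class WordCounts {
    static double[] termCountVector(int[] termIndices, int vocabSize) {
        double[] counts = new double[vocabSize];
        for (int t : termIndices) counts[t] += 1.0;  // one increment per token occurrence
        return counts;
    }

    public static void main(String[] args) {
        // A document whose tokens map to vocabulary indices 0, 2, 2, 4
        // under a vocabulary of 5 terms.
        double[] v = termCountVector(new int[]{0, 2, 2, 4}, 5);
        System.out.println(java.util.Arrays.toString(v));  // [1.0, 0.0, 2.0, 0.0, 1.0]
    }
}
```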