public class LocalLDAModel extends LDAModel
Local (non-distributed) model fitted by LDA.
This model stores the inferred topics only; it does not store info about the training dataset.
Modifier and Type | Method and Description
---|---
protected static <T> T | $(Param<T> param)
static IntParam | checkpointInterval()
IntParam | checkpointInterval()<br>Param for set checkpoint interval (>= 1) or disable checkpoint (-1).
static Params | clear(Param<?> param)
LocalLDAModel | copy(ParamMap extra)<br>Creates a copy of this instance with the same UID and some extra params.
protected static <T extends Params> | copyValues(T to, ParamMap extra)
protected static <T extends Params> | copyValues$default$2()
protected static <T extends Params> | defaultCopy(ParamMap extra)
static Dataset<Row> | describeTopics()
static Dataset<Row> | describeTopics(int maxTermsPerTopic)
static DoubleArrayParam | docConcentration()
DoubleArrayParam | docConcentration()<br>Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
static Vector | estimatedDocConcentration()
static java.lang.String | explainParam(Param<?> param)
static java.lang.String | explainParams()
static ParamMap | extractParamMap()
static ParamMap | extractParamMap(ParamMap extra)
static Param<java.lang.String> | featuresCol()
Param<java.lang.String> | featuresCol()<br>Param for features column name.
static <T> scala.Option<T> | get(Param<T> param)
static int | getCheckpointInterval()
int | getCheckpointInterval()
static <T> scala.Option<T> | getDefault(Param<T> param)
static double[] | getDocConcentration()
double[] | getDocConcentration()
static java.lang.String | getFeaturesCol()
java.lang.String | getFeaturesCol()
static int | getK()
int | getK()
static boolean | getKeepLastCheckpoint()
boolean | getKeepLastCheckpoint()
static double | getLearningDecay()
double | getLearningDecay()
static double | getLearningOffset()
double | getLearningOffset()
static int | getMaxIter()
int | getMaxIter()
protected LDAModel | getModel()<br>Returns underlying spark.mllib model, which may be local or distributed.
protected static Vector | getOldDocConcentration()
Vector | getOldDocConcentration()<br>Get docConcentration used by spark.mllib LDA.
LDAOptimizer | getOldOptimizer()
protected static double | getOldTopicConcentration()
double | getOldTopicConcentration()<br>Get topicConcentration used by spark.mllib LDA.
static boolean | getOptimizeDocConcentration()
boolean | getOptimizeDocConcentration()
static java.lang.String | getOptimizer()
java.lang.String | getOptimizer()
static <T> T | getOrDefault(Param<T> param)
static Param<java.lang.Object> | getParam(java.lang.String paramName)
static long | getSeed()
long | getSeed()
static double | getSubsamplingRate()
double | getSubsamplingRate()
static double | getTopicConcentration()
double | getTopicConcentration()
static java.lang.String | getTopicDistributionCol()
java.lang.String | getTopicDistributionCol()
static <T> boolean | hasDefault(Param<T> param)
static boolean | hasParam(java.lang.String paramName)
static boolean | hasParent()
protected static void | initializeLogIfNecessary(boolean isInterpreter)
static boolean | isDefined(Param<?> param)
boolean | isDistributed()<br>Indicates whether this instance is of type DistributedLDAModel.
static boolean | isSet(Param<?> param)
protected static boolean | isTraceEnabled()
static IntParam | k()
IntParam | k()<br>Param for the number of topics (clusters) to infer.
static BooleanParam | keepLastCheckpoint()
BooleanParam | keepLastCheckpoint()<br>For EM optimizer only: optimizer = "em".
static DoubleParam | learningDecay()
DoubleParam | learningDecay()<br>For Online optimizer only: optimizer = "online".
static DoubleParam | learningOffset()
DoubleParam | learningOffset()<br>For Online optimizer only: optimizer = "online".
static LocalLDAModel | load(java.lang.String path)
protected static org.slf4j.Logger | log()
protected static void | logDebug(scala.Function0<java.lang.String> msg)
protected static void | logDebug(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void | logError(scala.Function0<java.lang.String> msg)
protected static void | logError(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void | logInfo(scala.Function0<java.lang.String> msg)
protected static void | logInfo(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
static double | logLikelihood(Dataset<?> dataset)
protected static java.lang.String | logName()
static double | logPerplexity(Dataset<?> dataset)
protected static void | logTrace(scala.Function0<java.lang.String> msg)
protected static void | logTrace(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void | logWarning(scala.Function0<java.lang.String> msg)
protected static void | logWarning(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
static IntParam | maxIter()
IntParam | maxIter()<br>Param for maximum number of iterations (>= 0).
protected LocalLDAModel | oldLocalModel()<br>Underlying spark.mllib model.
static BooleanParam | optimizeDocConcentration()
BooleanParam | optimizeDocConcentration()<br>For Online optimizer only (currently): optimizer = "online".
static Param<java.lang.String> | optimizer()
Param<java.lang.String> | optimizer()<br>Optimizer or inference algorithm used to estimate the LDA model.
static Param<?>[] | params()
static void | parent_$eq(Estimator<M> x$1)
static Estimator<M> | parent()
static MLReader<LocalLDAModel> | read()
static void | save(java.lang.String path)
static LongParam | seed()
LongParam | seed()<br>Param for random seed.
static <T> Params | set(Param<T> param, T value)
protected static Params | set(ParamPair<?> paramPair)
protected static Params | set(java.lang.String param, java.lang.Object value)
protected static <T> Params | setDefault(Param<T> param, T value)
protected static Params | setDefault(scala.collection.Seq<ParamPair<?>> paramPairs)
static LDAModel | setFeaturesCol(java.lang.String value)
static M | setParent(Estimator<M> parent)
static LDAModel | setSeed(long value)
static DoubleParam | subsamplingRate()
DoubleParam | subsamplingRate()<br>For Online optimizer only: optimizer = "online".
static java.lang.String[] | supportedOptimizers()
java.lang.String[] | supportedOptimizers()<br>Supported values for Param optimizer.
static DoubleParam | topicConcentration()
DoubleParam | topicConcentration()<br>Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
static Param<java.lang.String> | topicDistributionCol()
Param<java.lang.String> | topicDistributionCol()<br>Output column with estimates of the topic mixture distribution for each document (often called "theta" in the literature).
static Matrix | topicsMatrix()
static java.lang.String | toString()
static Dataset<Row> | transform(Dataset<?> dataset)
static Dataset<Row> | transform(Dataset<?> dataset, ParamMap paramMap)
static Dataset<Row> | transform(Dataset<?> dataset, ParamPair<?> firstParamPair, ParamPair<?>... otherParamPairs)
static Dataset<Row> | transform(Dataset<?> dataset, ParamPair<?> firstParamPair, scala.collection.Seq<ParamPair<?>> otherParamPairs)
static StructType | transformSchema(StructType schema)
protected static StructType | transformSchema(StructType schema, boolean logging)
static java.lang.String | uid()
protected static StructType | validateAndTransformSchema(StructType schema)
StructType | validateAndTransformSchema(StructType schema)<br>Validates and transforms the input schema.
static void | validateParams()
static int | vocabSize()
MLWriter | write()<br>Returns an MLWriter instance for this ML instance.
Methods inherited from class LDAModel:
describeTopics, describeTopics, estimatedDocConcentration, logLikelihood, logPerplexity, setFeaturesCol, setSeed, topicsMatrix, transform, transformSchema, uid, vocabSize

Methods inherited from class Transformer:
transform, transform, transform

Methods inherited from class PipelineStage:
transformSchema

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface Params:
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn, validateParams

Methods inherited from interface Identifiable:
toString

Methods inherited from interface MLWritable:
save
public static MLReader<LocalLDAModel> read()
public static LocalLDAModel load(java.lang.String path)
public static java.lang.String toString()
public static Param<?>[] params()
public static void validateParams()
public static java.lang.String explainParam(Param<?> param)
public static java.lang.String explainParams()
public static final boolean isSet(Param<?> param)
public static final boolean isDefined(Param<?> param)
public static boolean hasParam(java.lang.String paramName)
public static Param<java.lang.Object> getParam(java.lang.String paramName)
protected static final Params set(java.lang.String param, java.lang.Object value)
public static final <T> scala.Option<T> get(Param<T> param)
public static final <T> T getOrDefault(Param<T> param)
protected static final <T> T $(Param<T> param)
public static final <T> scala.Option<T> getDefault(Param<T> param)
public static final <T> boolean hasDefault(Param<T> param)
public static final ParamMap extractParamMap()
protected static java.lang.String logName()
protected static org.slf4j.Logger log()
protected static void logInfo(scala.Function0<java.lang.String> msg)
protected static void logDebug(scala.Function0<java.lang.String> msg)
protected static void logTrace(scala.Function0<java.lang.String> msg)
protected static void logWarning(scala.Function0<java.lang.String> msg)
protected static void logError(scala.Function0<java.lang.String> msg)
protected static void logInfo(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logDebug(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logTrace(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logWarning(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logError(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static boolean isTraceEnabled()
protected static void initializeLogIfNecessary(boolean isInterpreter)
protected static StructType transformSchema(StructType schema, boolean logging)
public static Dataset<Row> transform(Dataset<?> dataset, ParamPair<?> firstParamPair, scala.collection.Seq<ParamPair<?>> otherParamPairs)
public static Dataset<Row> transform(Dataset<?> dataset, ParamPair<?> firstParamPair, ParamPair<?>... otherParamPairs)
public static Estimator<M> parent()
public static void parent_$eq(Estimator<M> x$1)
public static M setParent(Estimator<M> parent)
public static boolean hasParent()
public static final Param<java.lang.String> featuresCol()
public static final java.lang.String getFeaturesCol()
public static final IntParam maxIter()
public static final int getMaxIter()
public static final LongParam seed()
public static final long getSeed()
public static final IntParam checkpointInterval()
public static final int getCheckpointInterval()
public static final IntParam k()
public static int getK()
public static final DoubleArrayParam docConcentration()
public static double[] getDocConcentration()
protected static Vector getOldDocConcentration()
public static final DoubleParam topicConcentration()
public static double getTopicConcentration()
protected static double getOldTopicConcentration()
public static final java.lang.String[] supportedOptimizers()
public static final Param<java.lang.String> optimizer()
public static java.lang.String getOptimizer()
public static final Param<java.lang.String> topicDistributionCol()
public static java.lang.String getTopicDistributionCol()
public static final DoubleParam learningOffset()
public static double getLearningOffset()
public static final DoubleParam learningDecay()
public static double getLearningDecay()
public static final DoubleParam subsamplingRate()
public static double getSubsamplingRate()
public static final BooleanParam optimizeDocConcentration()
public static boolean getOptimizeDocConcentration()
public static final BooleanParam keepLastCheckpoint()
public static boolean getKeepLastCheckpoint()
protected static StructType validateAndTransformSchema(StructType schema)
public static void save(java.lang.String path) throws java.io.IOException
Throws:
java.io.IOException
public static java.lang.String uid()
public static int vocabSize()
public static LDAModel setFeaturesCol(java.lang.String value)
public static LDAModel setSeed(long value)
public static StructType transformSchema(StructType schema)
public static Vector estimatedDocConcentration()
public static Matrix topicsMatrix()
public static double logLikelihood(Dataset<?> dataset)
public static double logPerplexity(Dataset<?> dataset)
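logPerplexity is typically the negated logLikelihood bound normalized by the total token count of the corpus (lower is better). This relation is an assumption based on the standard definition of perplexity, and the helper below is a hypothetical standalone sketch, not Spark's implementation:

```java
// Hypothetical sketch of the usual relation: log-perplexity is the negated
// log-likelihood (variational bound) divided by the corpus token count.
class PerplexityRelation {
    static double logPerplexity(double logLikelihood, long tokenCount) {
        return -logLikelihood / tokenCount;
    }
}
```

For example, a corpus of 1000 tokens with a log-likelihood bound of -7000 would have a log-perplexity bound of 7.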
protected LocalLDAModel oldLocalModel()
Underlying spark.mllib model.
Specified by: oldLocalModel in class LDAModel
public LocalLDAModel copy(ParamMap extra)
Creates a copy of this instance with the same UID and some extra params.
Specified by: copy in interface Params
protected LDAModel getModel()
Returns underlying spark.mllib model, which may be local or distributed.
Specified by: getModel in class LDAModel
public boolean isDistributed()
Indicates whether this instance is of type DistributedLDAModel.
Specified by: isDistributed in class LDAModel
public MLWriter write()
Returns an MLWriter instance for this ML instance.
Specified by: write in interface MLWritable

public IntParam k()
Param for the number of topics (clusters) to infer.

public int getK()
public DoubleArrayParam docConcentration()
This is the parameter to a Dirichlet distribution, where larger values mean more smoothing (more regularization).
If not set by the user, then docConcentration is set automatically. If set to a singleton vector [alpha], then alpha is replicated to a vector of length k in fitting. Otherwise, the docConcentration vector must be length k.
(default = automatic)
Optimizer-specific parameter settings:
- EM
  - Currently only supports symmetric distributions, so all values in the vector should be the same.
  - Values should be > 1.0
  - default = uniformly (50 / k) + 1, where 50/k is common in LDA libraries and +1 follows from Asuncion et al. (2009), who recommend a +1 adjustment for EM.
- Online
  - Values should be >= 0
  - default = uniformly (1.0 / k), following the implementation from https://github.com/Blei-Lab/onlineldavb.
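The defaults above are simple functions of k, and the singleton-[alpha] case is replicated to length k during fitting. A standalone plain-Java sketch of that documented behavior (the class and method names here are hypothetical, not Spark API):

```java
// Standalone sketch of the documented docConcentration behavior; not Spark's code.
class DocConcentrationDefaults {
    // EM default: uniformly (50 / k) + 1.
    static double[] emDefault(int k) {
        double[] alpha = new double[k];
        java.util.Arrays.fill(alpha, 50.0 / k + 1.0);
        return alpha;
    }

    // Online default: uniformly (1.0 / k).
    static double[] onlineDefault(int k) {
        double[] alpha = new double[k];
        java.util.Arrays.fill(alpha, 1.0 / k);
        return alpha;
    }

    // A singleton [alpha] is replicated to length k; otherwise length must equal k.
    static double[] expand(double[] alpha, int k) {
        if (alpha.length == 1) {
            double[] full = new double[k];
            java.util.Arrays.fill(full, alpha[0]);
            return full;
        }
        if (alpha.length != k) {
            throw new IllegalArgumentException("docConcentration must have length 1 or k");
        }
        return alpha;
    }
}
```

For k = 10, the EM default is a vector of 6.0 values and the Online default a vector of 0.1 values.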
public double[] getDocConcentration()
public Vector getOldDocConcentration()
public DoubleParam topicConcentration()
This is the parameter to a symmetric Dirichlet distribution.
Note: The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.
If not set by the user, then topicConcentration is set automatically. (default = automatic)
Optimizer-specific parameter settings:
- EM
  - Value should be > 1.0
  - default = 0.1 + 1, where 0.1 gives a small amount of smoothing and +1 follows Asuncion et al. (2009), who recommend a +1 adjustment for EM.
- Online
  - Value should be >= 0
  - default = (1.0 / k), following the implementation from https://github.com/Blei-Lab/onlineldavb.
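As with docConcentration, the documented defaults can be stated directly. A plain-Java sketch with hypothetical names, not Spark's implementation:

```java
// Documented topicConcentration defaults; not Spark API.
class TopicConcentrationDefaults {
    // EM default: 0.1 + 1, i.e. a small amount of smoothing plus the +1 EM adjustment.
    static double emDefault() {
        return 0.1 + 1.0;
    }

    // Online default: 1.0 / k.
    static double onlineDefault(int k) {
        return 1.0 / k;
    }
}
```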
public double getTopicConcentration()
public double getOldTopicConcentration()
public java.lang.String[] supportedOptimizers()
Supported values for Param optimizer.

public Param<java.lang.String> optimizer()
Optimizer or inference algorithm used to estimate the LDA model.
For details, see the following papers:
- Online LDA:
Hoffman, Blei and Bach. "Online Learning for Latent Dirichlet Allocation."
Neural Information Processing Systems, 2010.
http://www.cs.columbia.edu/~blei/papers/HoffmanBleiBach2010b.pdf
- EM:
Asuncion et al. "On Smoothing and Inference for Topic Models."
Uncertainty in Artificial Intelligence, 2009.
http://arxiv.org/pdf/1205.2662.pdf
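The two optimizer values referenced throughout this page are "online" and "em". A minimal validity check, assuming case-insensitive matching (a hypothetical standalone helper, not Spark's code):

```java
// Hypothetical check against the optimizer values used on this page.
class OptimizerCheck {
    static final String[] SUPPORTED = {"online", "em"};

    static boolean isSupported(String name) {
        for (String s : SUPPORTED) {
            if (s.equalsIgnoreCase(name)) {
                return true;
            }
        }
        return false;
    }
}
```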
public java.lang.String getOptimizer()
public Param<java.lang.String> topicDistributionCol()
Output column with estimates of the topic mixture distribution for each document (often called "theta" in the literature). This uses a variational approximation following Hoffman et al. (2010), where the approximate distribution is called "gamma". Technically, this method returns this approximation "gamma" for each document.
public java.lang.String getTopicDistributionCol()
public DoubleParam learningOffset()
For Online optimizer only: optimizer = "online".
A (positive) learning parameter that downweights early iterations. Larger values make early iterations count less. This is called "tau0" in the Online LDA paper (Hoffman et al., 2010).
Default: 1024, following Hoffman et al.
public double getLearningOffset()
public DoubleParam learningDecay()
For Online optimizer only: optimizer = "online".
Learning rate, set as an exponential decay rate. This should be between (0.5, 1.0] to guarantee asymptotic convergence. This is called "kappa" in the Online LDA paper (Hoffman et al., 2010).
Default: 0.51, based on Hoffman et al.
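In Hoffman et al. (2010), learningOffset (tau0) and learningDecay (kappa) combine into the per-iteration step size rho_t = (tau0 + t)^(-kappa). A standalone sketch (not Spark's internals) showing how a large tau0 downweights early iterations:

```java
// Step size from the Online LDA paper: rho_t = (tau0 + t)^(-kappa).
class OnlineLdaStepSize {
    static double rho(double tau0, double kappa, long t) {
        return Math.pow(tau0 + t, -kappa);
    }
}
```

With the documented defaults tau0 = 1024 and kappa = 0.51, the first iteration already receives a small weight, whereas tau0 = 1 would give it full weight 1.0.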
public double getLearningDecay()
public DoubleParam subsamplingRate()
For Online optimizer only: optimizer = "online".
Fraction of the corpus to be sampled and used in each iteration of mini-batch gradient descent, in range (0, 1].
Note that this should be adjusted in sync with LDA.maxIter so the entire corpus is used. Specifically, set both so that maxIterations * miniBatchFraction >= 1.
Note: This is the same as the miniBatchFraction parameter in OnlineLDAOptimizer.
Default: 0.05, i.e., 5% of total documents.
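The constraint above (maxIterations * miniBatchFraction >= 1) can be checked directly. A hypothetical standalone helper, not part of the Spark API:

```java
// Checks the documented rule: maxIter * subsamplingRate >= 1, so the
// expected number of sampled documents covers the whole corpus.
class SubsamplingCheck {
    static boolean coversCorpus(int maxIter, double subsamplingRate) {
        return maxIter * subsamplingRate >= 1.0;
    }
}
```

With the default subsamplingRate of 0.05, this suggests a maxIter of at least 20.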
public double getSubsamplingRate()
public BooleanParam optimizeDocConcentration()
For Online optimizer only (currently): optimizer = "online".
Indicates whether the docConcentration (Dirichlet parameter for document-topic distribution) will be optimized during training. Setting this to true will make the model more expressive and fit the training data better.
Default: false
public boolean getOptimizeDocConcentration()
public BooleanParam keepLastCheckpoint()
For EM optimizer only: optimizer = "em".
If using checkpointing, this indicates whether to keep the last checkpoint. If false, then the checkpoint will be deleted. Deleting the checkpoint can cause failures if a data partition is lost, so set this bit with care. Note that checkpoints will be cleaned up via reference counting, regardless.
See DistributedLDAModel.getCheckpointFiles for getting remaining checkpoints and DistributedLDAModel.deleteCheckpointFiles for removing remaining checkpoints.
Default: true
public boolean getKeepLastCheckpoint()
public StructType validateAndTransformSchema(StructType schema)
Validates and transforms the input schema.
Parameters:
schema - input schema

public LDAOptimizer getOldOptimizer()
public Param<java.lang.String> featuresCol()
public java.lang.String getFeaturesCol()
public IntParam maxIter()
public int getMaxIter()
public LongParam seed()
public long getSeed()
public IntParam checkpointInterval()
public int getCheckpointInterval()