Class ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>
Object
org.apache.spark.ml.PipelineStage
org.apache.spark.ml.Transformer
org.apache.spark.ml.Model<M>
org.apache.spark.ml.PredictionModel<FeaturesType,M>
org.apache.spark.ml.classification.ClassificationModel<FeaturesType,M>
org.apache.spark.ml.classification.ProbabilisticClassificationModel<FeaturesType,M>
- Type Parameters:
FeaturesType - Type of input features. E.g., Vector
M - Concrete Model type
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, ClassifierParams, ProbabilisticClassifierParams, Params, HasFeaturesCol, HasLabelCol, HasPredictionCol, HasProbabilityCol, HasRawPredictionCol, HasThresholds, PredictorParams, Identifiable
- Direct Known Subclasses:
DecisionTreeClassificationModel, FMClassificationModel, GBTClassificationModel, LogisticRegressionModel, MultilayerPerceptronClassificationModel, NaiveBayesModel, RandomForestClassificationModel
public abstract class ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>
extends ClassificationModel<FeaturesType,M>
implements ProbabilisticClassifierParams
Model produced by a ProbabilisticClassifier.
Classes are indexed {0, 1, ..., numClasses - 1}.
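For orientation, a minimal end-to-end sketch in Scala, assuming a DataFrame named training with "label" and "features" columns (the DataFrame and column names are illustrative, not part of this class):

    import org.apache.spark.ml.classification.LogisticRegression

    // Fit a concrete subclass; the result (LogisticRegressionModel) is a
    // ProbabilisticClassificationModel and exposes the methods documented here.
    val lr = new LogisticRegression().setMaxIter(10)
    val model = lr.fit(training)

    // Labels are indexed 0, 1, ..., numClasses - 1.
    println(s"numClasses = ${model.numClasses}")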
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Constructor Summary
Constructors
ProbabilisticClassificationModel()
Method Summary
static void normalizeToProbabilitiesInPlace(v) - Normalize a vector of raw predictions to be a multinomial probability vector, in place.
predictProbability(FeaturesType features) - Predict the probability of each class given the features.
probabilityCol() - Param for Column name for predicted class conditional probabilities.
setProbabilityCol(String value)
setThresholds(double[] value)
thresholds() - Param for Thresholds in multi-class classification to adjust the probability of predicting each class.
transform(dataset) - Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters: predicted labels as PredictionModel.predictionCol() of type Double, raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector, and probability of each class as probabilityCol() of type Vector.
transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.

Methods inherited from class org.apache.spark.ml.classification.ClassificationModel
numClasses, predict, predictRaw, rawPredictionCol, setRawPredictionCol, transformImpl

Methods inherited from class org.apache.spark.ml.PredictionModel
featuresCol, labelCol, numFeatures, predictionCol, setFeaturesCol, setPredictionCol

Methods inherited from class org.apache.spark.ml.Transformer
transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage
params

Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.param.shared.HasFeaturesCol
featuresCol, getFeaturesCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasLabelCol
getLabelCol, labelCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasPredictionCol
getPredictionCol, predictionCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasProbabilityCol
getProbabilityCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasRawPredictionCol
getRawPredictionCol, rawPredictionCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasThresholds
getThresholds

Methods inherited from interface org.apache.spark.ml.util.Identifiable
toString, uid

Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.param.Params
clear, copy, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.classification.ProbabilisticClassifierParams
validateAndTransformSchema
-
Constructor Details
-
ProbabilisticClassificationModel
public ProbabilisticClassificationModel()
-
-
Method Details
-
normalizeToProbabilitiesInPlace
Normalize a vector of raw predictions to be a multinomial probability vector, in place. The input raw predictions should be nonnegative. The output vector sums to 1.
NOTE: This is NOT applicable to all models, only ones which effectively use class instance counts for raw predictions.
- Parameters:
v - (undocumented)
- Throws:
IllegalArgumentException - if the input vector is all zeros or contains negative values
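As a sketch of the semantics, assuming the static helper takes a mutable DenseVector of nonnegative raw scores (e.g. class counts) and rescales it in place:

    import org.apache.spark.ml.classification.ProbabilisticClassificationModel
    import org.apache.spark.ml.linalg.Vectors

    // Raw predictions that behave like class instance counts.
    val raw = Vectors.dense(2.0, 6.0, 2.0).toDense
    ProbabilisticClassificationModel.normalizeToProbabilitiesInPlace(raw)
    // raw now holds approximately [0.2, 0.6, 0.2] and sums to 1.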
-
thresholds
Description copied from interface: HasThresholds
Param for Thresholds in multi-class classification to adjust the probability of predicting each class. The array must have length equal to the number of classes, with values > 0, except that at most one value may be 0. The class with the largest value p/t is predicted, where p is the original probability of that class and t is the class's threshold.
- Specified by:
thresholds in interface HasThresholds
- Returns:
- (undocumented)
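As an illustration of the p/t rule with made-up numbers: given probabilities p = [0.3, 0.4, 0.3] and thresholds t = [0.5, 1.0, 0.75], the adjusted scores p/t are [0.6, 0.4, 0.4], so class 0 is predicted even though class 1 has the largest raw probability. A hedged sketch (model and testData are assumed to exist):

    // Set per-class thresholds on a fitted 3-class model; prediction picks
    // the class maximizing probability(k) / threshold(k).
    val adjusted = model.setThresholds(Array(0.5, 1.0, 0.75))
    val scored = adjusted.transform(testData)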
-
probabilityCol
Description copied from interface: HasProbabilityCol
Param for Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.
- Specified by:
probabilityCol in interface HasProbabilityCol
- Returns:
- (undocumented)
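A brief sketch of redirecting this output column via the setter; the column and DataFrame names are illustrative:

    // Write class conditional probabilities to a custom column instead of the
    // default "probability".
    val out = model
      .setProbabilityCol("classConfidence")
      .transform(testData)
    out.select("prediction", "classConfidence").show(5)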
-
setProbabilityCol
public M setProbabilityCol(String value)
-
setThresholds
public M setThresholds(double[] value)
-
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.
- Overrides:
transformSchema in class ClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>
- Parameters:
schema - (undocumented)
- Returns:
- (undocumented)
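A hedged sketch of schema-only validation before running a job; the field names are assumptions:

    import org.apache.spark.ml.linalg.VectorUDT
    import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

    // Derive the output schema without touching any rows; throws if parameter
    // values or the input schema are invalid.
    val inputSchema = StructType(Seq(
      StructField("label", DoubleType, nullable = false),
      StructField("features", new VectorUDT, nullable = false)
    ))
    val outputSchema = model.transformSchema(inputSchema)
    outputSchema.printTreeString()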
-
transform
Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters:
- predicted labels as PredictionModel.predictionCol() of type Double
- raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector
- probability of each class as probabilityCol() of type Vector
- Overrides:
transform in class ClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>
- Parameters:
dataset - input dataset
- Returns:
- transformed dataset
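As a short sketch of the appended columns with their default names (testData is an assumed input DataFrame):

    // prediction (Double), rawPrediction (Vector) and probability (Vector)
    // are appended alongside the original columns.
    val predicted = model.transform(testData)
    predicted.select("features", "rawPrediction", "probability", "prediction")
      .show(3, truncate = false)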
-
predictProbability
Predict the probability of each class given the features. These predictions are also called class conditional probabilities.
This internal method is used to implement transform() and output probabilityCol().
- Parameters:
features - (undocumented)
- Returns:
- Estimated class conditional probabilities
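A minimal sketch of scoring a single feature vector directly; the values are made up, and model is assumed to be a fitted model whose FeaturesType is Vector:

    import org.apache.spark.ml.linalg.Vectors

    // Returns the estimated class conditional probabilities for one example;
    // the resulting Vector sums to 1.
    val probs = model.predictProbability(Vectors.dense(1.5, -0.2, 3.1))
    println(s"probabilities = $probs")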
-