Class ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>  
Object
  org.apache.spark.ml.PipelineStage
    org.apache.spark.ml.Transformer
      org.apache.spark.ml.Model<M>
        org.apache.spark.ml.PredictionModel<FeaturesType,M>
          org.apache.spark.ml.classification.ClassificationModel<FeaturesType,M>
            org.apache.spark.ml.classification.ProbabilisticClassificationModel<FeaturesType,M>
Type Parameters:
- FeaturesType - Type of input features. E.g., Vector
- M - Concrete Model type

All Implemented Interfaces:
- Serializable, org.apache.spark.internal.Logging, ClassifierParams, ProbabilisticClassifierParams, Params, HasFeaturesCol, HasLabelCol, HasPredictionCol, HasProbabilityCol, HasRawPredictionCol, HasThresholds, PredictorParams, Identifiable

Direct Known Subclasses:
- DecisionTreeClassificationModel, FMClassificationModel, GBTClassificationModel, LogisticRegressionModel, MultilayerPerceptronClassificationModel, NaiveBayesModel, RandomForestClassificationModel
public abstract class ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>  
extends ClassificationModel<FeaturesType,M>
implements ProbabilisticClassifierParams 
Model produced by a ProbabilisticClassifier. Classes are indexed {0, 1, ..., numClasses - 1}.
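For orientation (not part of the original Javadoc), here is a minimal sketch that fits a LogisticRegressionModel, one of the direct subclasses listed above, and shows the probability column it appends. The toy data, column names, and local SparkSession are illustrative assumptions.

    import java.util.Arrays;
    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.classification.LogisticRegressionModel;
    import org.apache.spark.ml.linalg.VectorUDT;
    import org.apache.spark.ml.linalg.Vectors;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    public class ProbabilisticModelExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ProbabilisticModelExample").master("local[*]").getOrCreate();

        // Tiny illustrative training set: label in {0.0, 1.0}, features as a Vector.
        StructType schema = new StructType(new StructField[]{
            new StructField("label", DataTypes.DoubleType, false, Metadata.empty()),
            new StructField("features", new VectorUDT(), false, Metadata.empty())
        });
        Dataset<Row> training = spark.createDataFrame(Arrays.asList(
            RowFactory.create(0.0, Vectors.dense(0.0, 1.1)),
            RowFactory.create(1.0, Vectors.dense(2.0, 1.0)),
            RowFactory.create(0.0, Vectors.dense(0.1, 1.3)),
            RowFactory.create(1.0, Vectors.dense(1.9, 0.8))
        ), schema);

        // LogisticRegressionModel is a ProbabilisticClassificationModel subclass.
        LogisticRegressionModel model = new LogisticRegression().fit(training);

        // transform() appends predictionCol, rawPredictionCol and probabilityCol.
        model.transform(training)
            .select("features", "rawPrediction", "probability", "prediction")
            .show(false);

        spark.stop();
      }
    }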
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter

Constructor Summary

Constructors:
ProbabilisticClassificationModel()

Method Summary

- normalizeToProbabilitiesInPlace(v) (static void) - Normalize a vector of raw predictions to be a multinomial probability vector, in place.
- predictProbability(FeaturesType features) - Predict the probability of each class given the features.
- probabilityCol() - Param for Column name for predicted class conditional probabilities.
- setProbabilityCol(String value)
- setThresholds(double[] value)
- thresholds() - Param for Thresholds in multi-class classification to adjust the probability of predicting each class.
- transform(Dataset<?> dataset) - Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters: predicted labels as PredictionModel.predictionCol() of type Double; raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector; probability of each class as probabilityCol() of type Vector.
- transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.

Methods inherited from class org.apache.spark.ml.classification.ClassificationModel:
numClasses, predict, predictRaw, rawPredictionCol, setRawPredictionCol, transformImpl

Methods inherited from class org.apache.spark.ml.PredictionModel:
featuresCol, labelCol, numFeatures, predictionCol, setFeaturesCol, setPredictionCol

Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage:
params

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.param.shared.HasFeaturesCol:
featuresCol, getFeaturesCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasLabelCol:
getLabelCol, labelCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasPredictionCol:
getPredictionCol, predictionCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasProbabilityCol:
getProbabilityCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasRawPredictionCol:
getRawPredictionCol, rawPredictionCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasThresholds:
getThresholds

Methods inherited from interface org.apache.spark.ml.util.Identifiable:
toString, uid

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copy, copyValues, defaultCopy, defaultParamMap, estimateMatadataSize, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.classification.ProbabilisticClassifierParams:
validateAndTransformSchema
Constructor Details

ProbabilisticClassificationModel

public ProbabilisticClassificationModel()
 
Method Details
normalizeToProbabilitiesInPlace

Normalize a vector of raw predictions to be a multinomial probability vector, in place.

The input raw predictions should be nonnegative. The output vector sums to 1.

NOTE: This is NOT applicable to all models, only ones which effectively use class instance counts for raw predictions.

Parameters:
- v - (undocumented)

Throws:
- IllegalArgumentException - if the input vector is all zeros or contains negative values
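The parameter type is not documented above; in the sketch below it is assumed to be an org.apache.spark.ml.linalg.DenseVector. The sketch re-implements the documented contract by hand (scale nonnegative raw scores so they sum to 1, mutating the backing array); it illustrates the behavior rather than calling the method itself.

    import org.apache.spark.ml.linalg.DenseVector;

    public class NormalizeSketch {
      // Illustrative re-implementation of the documented contract.
      static void normalizeInPlace(DenseVector v) {
        double sum = 0.0;
        for (double x : v.values()) {
          if (x < 0.0) throw new IllegalArgumentException("Raw prediction must be nonnegative: " + x);
          sum += x;
        }
        if (sum == 0.0) throw new IllegalArgumentException("Raw prediction vector must not be all zeros");
        double[] values = v.values();
        for (int i = 0; i < values.length; i++) {
          values[i] /= sum;  // in place: the vector's backing array is mutated
        }
      }

      public static void main(String[] args) {
        // E.g., class instance counts from a tree-based model.
        DenseVector raw = new DenseVector(new double[]{3.0, 1.0, 4.0});
        normalizeInPlace(raw);
        System.out.println(raw);  // [0.375, 0.125, 0.5]
      }
    }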
 
thresholds

Description copied from interface: HasThresholds

Param for Thresholds in multi-class classification to adjust the probability of predicting each class. Array must have length equal to the number of classes, with values > 0, excepting that at most one value may be 0. The class with the largest value p/t is predicted, where p is the original probability of that class and t is the class's threshold.

Specified by:
- thresholds in interface HasThresholds

Returns:
- (undocumented)
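A small self-contained sketch of the selection rule just described, using made-up probabilities and thresholds: the class with the largest score p/t wins.

    public class ThresholdRuleSketch {
      public static void main(String[] args) {
        // Hypothetical 3-class probabilities and per-class thresholds.
        double[] probability = {0.40, 0.35, 0.25};
        double[] thresholds  = {0.50, 0.30, 0.30};

        // Predicted class = argmax over i of probability[i] / thresholds[i].
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < probability.length; i++) {
          double score = probability[i] / thresholds[i];
          if (score > bestScore) {
            bestScore = score;
            best = i;
          }
        }
        // 0.40/0.50 = 0.80, 0.35/0.30 ≈ 1.17, 0.25/0.30 ≈ 0.83  ->  class 1 is predicted.
        System.out.println("Predicted class: " + best);
      }
    }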
 
probabilityCol

Description copied from interface: HasProbabilityCol

Param for Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.

Specified by:
- probabilityCol in interface HasProbabilityCol

Returns:
- (undocumented)
 
setProbabilityCol

setProbabilityCol(String value) - sets the value of probabilityCol().

setThresholds

setThresholds(double[] value) - sets the value of thresholds().
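A minimal sketch of using both setters on a fitted model, assuming model is a LogisticRegressionModel (one of the concrete subclasses) and test is a Dataset<Row> with a "features" column; the output column name and threshold values are illustrative.

    import org.apache.spark.ml.classification.LogisticRegressionModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class SetterSketch {
      static void showWithCustomOutput(LogisticRegressionModel model, Dataset<Row> test) {
        model
            .setProbabilityCol("classProbabilities")  // rename the probability output column
            .setThresholds(new double[]{0.6, 0.4});   // per-class thresholds for a binary model

        model.transform(test)
            .select("classProbabilities", "prediction")
            .show(false);
      }
    }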
transformSchema

Description copied from class: PipelineStage

Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

Typical implementations should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.

Overrides:
- transformSchema in class ClassificationModel<FeaturesType, M extends ProbabilisticClassificationModel<FeaturesType, M>>

Parameters:
- schema - (undocumented)

Returns:
- (undocumented)
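A minimal sketch, assuming model is any fitted ProbabilisticClassificationModel subclass with default column names: calling transformSchema() on a hand-built input schema shows, without running a job, which columns transform() would append.

    import org.apache.spark.ml.classification.ProbabilisticClassificationModel;
    import org.apache.spark.ml.linalg.VectorUDT;
    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    public class TransformSchemaSketch {
      static void printOutputSchema(ProbabilisticClassificationModel<?, ?> model) {
        StructType inputSchema = new StructType(new StructField[]{
            new StructField("features", new VectorUDT(), false, Metadata.empty())
        });
        // Validates parameters against the input schema and returns the schema
        // that transform() would produce (prediction, rawPrediction, probability added).
        StructType outputSchema = model.transformSchema(inputSchema);
        System.out.println(outputSchema.treeString());
      }
    }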
 
transform

Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters:
- predicted labels as PredictionModel.predictionCol() of type Double
- raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector
- probability of each class as probabilityCol() of type Vector

Overrides:
- transform in class ClassificationModel<FeaturesType, M extends ProbabilisticClassificationModel<FeaturesType, M>>

Parameters:
- dataset - input dataset

Returns:
- transformed dataset
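A minimal sketch of inspecting the appended columns, assuming model is a fitted LogisticRegressionModel and test is an illustrative Dataset<Row> with a "features" column.

    import org.apache.spark.ml.classification.LogisticRegressionModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class TransformSketch {
      static void inspectOutput(LogisticRegressionModel model, Dataset<Row> test) {
        Dataset<Row> predictions = model.transform(test);

        // The appended columns match the description above:
        // prediction (Double), rawPrediction (Vector), probability (Vector).
        predictions.printSchema();
        predictions.select("prediction", "rawPrediction", "probability").show(5, false);
      }
    }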
 
predictProbability

Predict the probability of each class given the features. These predictions are also called class conditional probabilities.

This internal method is used to implement transform() and output probabilityCol().

Parameters:
- features - (undocumented)

Returns:
- Estimated class conditional probabilities
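A minimal single-row sketch, assuming model is a fitted binary LogisticRegressionModel; the feature values are illustrative.

    import org.apache.spark.ml.classification.LogisticRegressionModel;
    import org.apache.spark.ml.linalg.Vector;
    import org.apache.spark.ml.linalg.Vectors;

    public class PredictProbabilitySketch {
      static void singleRow(LogisticRegressionModel model) {
        Vector features = Vectors.dense(2.0, 1.0);
        // One probability per class, indexed 0 .. numClasses - 1, summing to 1.
        Vector probabilities = model.predictProbability(features);
        System.out.println(probabilities);
      }
    }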
 
 