MulticlassClassificationEvaluator¶
class pyspark.ml.evaluation.MulticlassClassificationEvaluator(*, predictionCol: str = 'prediction', labelCol: str = 'label', metricName: MulticlassClassificationEvaluatorMetricType = 'f1', weightCol: Optional[str] = None, metricLabel: float = 0.0, beta: float = 1.0, probabilityCol: str = 'probability', eps: float = 1e-15)[source]¶
- Evaluator for Multiclass Classification, which expects input columns: prediction, label, weight (optional) and probabilityCol (only for logLoss).
- New in version 1.5.0.

Examples

>>> scoreAndLabels = [(0.0, 0.0), (0.0, 1.0), (0.0, 0.0),
...     (1.0, 0.0), (1.0, 1.0), (1.0, 1.0), (1.0, 1.0), (2.0, 2.0), (2.0, 0.0)]
>>> dataset = spark.createDataFrame(scoreAndLabels, ["prediction", "label"])
>>> evaluator = MulticlassClassificationEvaluator()
>>> evaluator.setPredictionCol("prediction")
MulticlassClassificationEvaluator...
>>> evaluator.evaluate(dataset)
0.66...
>>> evaluator.evaluate(dataset, {evaluator.metricName: "accuracy"})
0.66...
>>> evaluator.evaluate(dataset, {evaluator.metricName: "truePositiveRateByLabel",
...     evaluator.metricLabel: 1.0})
0.75...
>>> evaluator.setMetricName("hammingLoss")
MulticlassClassificationEvaluator...
>>> evaluator.evaluate(dataset)
0.33...
>>> mce_path = temp_path + "/mce"
>>> evaluator.save(mce_path)
>>> evaluator2 = MulticlassClassificationEvaluator.load(mce_path)
>>> str(evaluator2.getPredictionCol())
'prediction'
>>> scoreAndLabelsAndWeight = [(0.0, 0.0, 1.0), (0.0, 1.0, 1.0), (0.0, 0.0, 1.0),
...     (1.0, 0.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0),
...     (2.0, 2.0, 1.0), (2.0, 0.0, 1.0)]
>>> dataset = spark.createDataFrame(scoreAndLabelsAndWeight, ["prediction", "label", "weight"])
>>> evaluator = MulticlassClassificationEvaluator(predictionCol="prediction",
...     weightCol="weight")
>>> evaluator.evaluate(dataset)
0.66...
>>> evaluator.evaluate(dataset, {evaluator.metricName: "accuracy"})
0.66...
>>> predictionAndLabelsWithProbabilities = [
...     (1.0, 1.0, 1.0, [0.1, 0.8, 0.1]), (0.0, 2.0, 1.0, [0.9, 0.05, 0.05]),
...     (0.0, 0.0, 1.0, [0.8, 0.2, 0.0]), (1.0, 1.0, 1.0, [0.3, 0.65, 0.05])]
>>> dataset = spark.createDataFrame(predictionAndLabelsWithProbabilities, ["prediction",
...     "label", "weight", "probability"])
>>> evaluator = MulticlassClassificationEvaluator(predictionCol="prediction",
...     probabilityCol="probability")
>>> evaluator.setMetricName("logLoss")
MulticlassClassificationEvaluator...
>>> evaluator.evaluate(dataset)
0.9682...

Methods

clear(param): Clears a param from the param map if it has been explicitly set.
copy([extra]): Creates a copy of this instance with the same uid and some extra params.
evaluate(dataset[, params]): Evaluates the output with optional parameters.
explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams(): Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getBeta(): Gets the value of beta or its default value.
getEps(): Gets the value of eps or its default value.
getLabelCol(): Gets the value of labelCol or its default value.
getMetricLabel(): Gets the value of metricLabel or its default value.
getMetricName(): Gets the value of metricName or its default value.
getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
getParam(paramName): Gets a param by its name.
getPredictionCol(): Gets the value of predictionCol or its default value.
getProbabilityCol(): Gets the value of probabilityCol or its default value.
getWeightCol(): Gets the value of weightCol or its default value.
hasDefault(param): Checks whether a param has a default value.
hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
isDefined(param): Checks whether a param is explicitly set by user or has a default value.
isLargerBetter(): Indicates whether the metric returned by evaluate() should be maximized (True, default) or minimized (False).
isSet(param): Checks whether a param is explicitly set by user.
load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
read(): Returns an MLReader instance for this class.
save(path): Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
set(param, value): Sets a parameter in the embedded param map.
setBeta(value): Sets the value of beta.
setEps(value): Sets the value of eps.
setLabelCol(value): Sets the value of labelCol.
setMetricLabel(value): Sets the value of metricLabel.
setMetricName(value): Sets the value of metricName.
setParams(self, *[, predictionCol, …]): Sets params for multiclass classification evaluator.
setPredictionCol(value): Sets the value of predictionCol.
setProbabilityCol(value): Sets the value of probabilityCol.
setWeightCol(value): Sets the value of weightCol.
write(): Returns an MLWriter instance for this ML instance.

Attributes

params: Returns all params ordered by name.

Methods Documentation
clear(param: pyspark.ml.param.Param) → None¶
- Clears a param from the param map if it has been explicitly set. 
 - 
copy(extra: Optional[ParamMap] = None) → JP¶
- Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied. - Parameters
- extra : dict, optional
- Extra parameters to copy to the new instance
 
- Returns
- JavaParams
- Copy of this instance 
 
 
 - 
evaluate(dataset: pyspark.sql.dataframe.DataFrame, params: Optional[ParamMap] = None) → float¶
- Evaluates the output with optional parameters. - New in version 1.4.0. - Parameters
- dataset : pyspark.sql.DataFrame
- a dataset that contains labels/observations and predictions
- params : dict, optional
- an optional param map that overrides embedded params
- Returns
- float
- metric 
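For illustration, the optional param map can switch the metric for a single call without reconfiguring the evaluator. A minimal sketch, assuming an existing SparkSession named spark and a hypothetical toy DataFrame:

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Toy (prediction, label) pairs; `spark` is assumed to be an active SparkSession.
df = spark.createDataFrame(
    [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0), (2.0, 2.0)],
    ["prediction", "label"],
)

evaluator = MulticlassClassificationEvaluator(metricName="f1")

f1 = evaluator.evaluate(df)                                       # embedded metricName ("f1")
acc = evaluator.evaluate(df, {evaluator.metricName: "accuracy"})  # one-off override
# The override applies to this call only; evaluator.getMetricName() still returns "f1".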
 
 
 - 
explainParam(param: Union[str, pyspark.ml.param.Param]) → str¶
- Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string. 
 - 
explainParams() → str¶
- Returns the documentation of all params with their optionally default values and user-supplied values. 
 - 
extractParamMap(extra: Optional[ParamMap] = None) → ParamMap¶
- Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra. - Parameters
- extra : dict, optional
- extra param values 
 
- Returns
- dict
- merged param map 
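A small sketch of that precedence, assuming an active SparkSession (the metric names are just examples):

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(metricName="accuracy")  # user-supplied value

# The extra map wins over the user-supplied value, which wins over the defaults.
merged = evaluator.extractParamMap({evaluator.metricName: "weightedRecall"})
merged[evaluator.metricName]  # 'weightedRecall' (from extra)
merged[evaluator.labelCol]    # 'label' (default; nothing overrides it)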
 
 
 - 
getLabelCol() → str¶
- Gets the value of labelCol or its default value. 
 - 
getMetricLabel() → float[source]¶
- Gets the value of metricLabel or its default value. - New in version 3.0.0. 
 - 
getMetricName() → MulticlassClassificationEvaluatorMetricType[source]¶
- Gets the value of metricName or its default value. - New in version 1.5.0. 
 - 
getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]¶
- Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set. 
 - 
getParam(paramName: str) → pyspark.ml.param.Param¶
- Gets a param by its name. 
 - 
getPredictionCol() → str¶
- Gets the value of predictionCol or its default value. 
 - 
getProbabilityCol() → str¶
- Gets the value of probabilityCol or its default value. 
 - 
getWeightCol() → str¶
- Gets the value of weightCol or its default value. 
 - 
hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool¶
- Checks whether a param has a default value. 
 - 
hasParam(paramName: str) → bool¶
- Tests whether this instance contains a param with a given (string) name. 
 - 
isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool¶
- Checks whether a param is explicitly set by user or has a default value. 
 - 
isLargerBetter() → bool¶
- Indicates whether the metric returned by evaluate() should be maximized (True, default) or minimized (False). A given evaluator may support multiple metrics which may be maximized or minimized. - New in version 1.5.0.
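A short sketch of how the flag might be consulted, assuming an active SparkSession:

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(metricName="logLoss")

# logLoss is a loss, so the flag is expected to be False here; model-selection tools
# such as CrossValidator consult it to decide whether to maximize or minimize the metric.
direction = "maximize" if evaluator.isLargerBetter() else "minimize"
print(direction)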
 - 
isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool¶
- Checks whether a param is explicitly set by user. 
 - 
classmethod load(path: str) → RL¶
- Reads an ML instance from the input path, a shortcut of read().load(path). 
 - 
classmethod read() → pyspark.ml.util.JavaMLReader[RL]¶
- Returns an MLReader instance for this class. 
 - 
save(path: str) → None¶
- Save this ML instance to the given path, a shortcut of ‘write().save(path)’. 
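A minimal persistence sketch, assuming an active SparkSession and a hypothetical path; save() fails if the path already exists, in which case write().overwrite().save(path) can be used instead:

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(metricName="weightedPrecision")

path = "/tmp/mce_demo"  # hypothetical location
evaluator.save(path)    # shorthand for evaluator.write().save(path)

restored = MulticlassClassificationEvaluator.load(path)
restored.getMetricName()  # 'weightedPrecision'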
 - 
set(param: pyspark.ml.param.Param, value: Any) → None¶
- Sets a parameter in the embedded param map. 
 - 
setBeta(value: float) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of beta. - New in version 3.0.0.
 - 
setEps(value: float) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of eps. - New in version 3.0.0.
 - 
setLabelCol(value: str) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of labelCol.
 - 
setMetricLabel(value: float) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of metricLabel. - New in version 3.0.0.
 - 
setMetricName(value: MulticlassClassificationEvaluatorMetricType) → MulticlassClassificationEvaluator[source]¶
- Sets the value of metricName. - New in version 1.5.0.
 - 
setParams(self, *, predictionCol="prediction", labelCol="label", metricName="f1", weightCol=None, metricLabel=0.0, beta=1.0, probabilityCol="probability", eps=1e-15)[source]¶
- Sets params for multiclass classification evaluator. - New in version 1.5.0. 
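For example, a sketch assuming an active SparkSession (the chosen values are arbitrary):

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator()

# setParams mirrors the constructor keywords and reconfigures an existing instance;
# here the evaluator is switched to the F-measure of class 2.0 with beta = 0.5.
evaluator.setParams(metricName="fMeasureByLabel", metricLabel=2.0, beta=0.5)
evaluator.getMetricName()   # 'fMeasureByLabel'
evaluator.getMetricLabel()  # 2.0
evaluator.getBeta()         # 0.5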
 - 
setPredictionCol(value: str) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of predictionCol.
 - 
setProbabilityCol(value: str) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of probabilityCol. - New in version 3.0.0.
 - 
setWeightCol(value: str) → pyspark.ml.evaluation.MulticlassClassificationEvaluator[source]¶
- Sets the value of weightCol. - New in version 3.0.0.
 - 
write() → pyspark.ml.util.JavaMLWriter¶
- Returns an MLWriter instance for this ML instance. 
 - Attributes Documentation - 
beta: pyspark.ml.param.Param[float] = Param(parent='undefined', name='beta', doc='The beta value used in weightedFMeasure|fMeasureByLabel. Must be > 0. The default value is 1.')¶
 - 
eps: pyspark.ml.param.Param[float] = Param(parent='undefined', name='eps', doc='log-loss is undefined for p=0 or p=1, so probabilities are clipped to max(eps, min(1 - eps, p)). Must be in range (0, 0.5). The default value is 1e-15.')¶
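To make the clipping rule concrete, a plain-Python illustration (not the evaluator's internal code):

eps = 1e-15

def clip(p: float) -> float:
    # Probabilities are pulled away from 0 and 1 so log(p) and log(1 - p) stay finite.
    return max(eps, min(1 - eps, p))

clip(0.0)  # 1e-15
clip(1.0)  # 1 - 1e-15
clip(0.3)  # 0.3; values away from the boundaries pass through unchanged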
 - 
labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')¶
 - 
metricLabel: pyspark.ml.param.Param[float] = Param(parent='undefined', name='metricLabel', doc='The class whose metric will be computed in truePositiveRateByLabel|falsePositiveRateByLabel|precisionByLabel|recallByLabel|fMeasureByLabel. Must be >= 0. The default value is 0.')¶
 - 
metricName: pyspark.ml.param.Param[MulticlassClassificationEvaluatorMetricType] = Param(parent='undefined', name='metricName', doc='metric name in evaluation (f1|accuracy|weightedPrecision|weightedRecall|weightedTruePositiveRate| weightedFalsePositiveRate|weightedFMeasure|truePositiveRateByLabel| falsePositiveRateByLabel|precisionByLabel|recallByLabel|fMeasureByLabel| logLoss|hammingLoss)')¶
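The same predictions can be scored under several of these names by overriding metricName per call; a minimal sketch, assuming an existing SparkSession named spark and hypothetical toy data:

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

df = spark.createDataFrame(
    [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (2.0, 2.0)],
    ["prediction", "label"],
)

evaluator = MulticlassClassificationEvaluator()
for name in ["f1", "accuracy", "weightedPrecision", "weightedRecall", "hammingLoss"]:
    print(name, evaluator.evaluate(df, {evaluator.metricName: name}))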
 - 
params¶
- Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
 - 
predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')¶
 - 
probabilityCol = Param(parent='undefined', name='probabilityCol', doc='Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.')¶
 - 
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')¶
 