org.apache.spark.ml.classification
Class LogisticRegressionModel

Object
  extended by org.apache.spark.ml.PipelineStage
      extended by org.apache.spark.ml.Transformer
          extended by org.apache.spark.ml.Model<LogisticRegressionModel>
              extended by org.apache.spark.ml.PredictionModel<Vector,LogisticRegressionModel>
                  extended by org.apache.spark.ml.classification.ClassificationModel<Vector,LogisticRegressionModel>
                      extended by org.apache.spark.ml.classification.LogisticRegressionModel
All Implemented Interfaces:
java.io.Serializable, Logging, Params

public class LogisticRegressionModel
extends ClassificationModel<Vector,LogisticRegressionModel>

:: Experimental :: Model produced by LogisticRegression.

See Also:
Serialized Form
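
A LogisticRegressionModel is not constructed directly; it is produced by fitting a LogisticRegression estimator. A minimal sketch of how such a model typically comes into being (the toy data, column names, and the sqlContext in scope are illustrative assumptions, not part of this API doc):

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.mllib.linalg.Vectors

// Toy training data: (label, features) pairs. A SQLContext named
// sqlContext is assumed to be available in scope.
val training = sqlContext.createDataFrame(Seq(
  (1.0, Vectors.dense(0.0, 1.1, 0.1)),
  (0.0, Vectors.dense(2.0, 1.0, -1.0)),
  (0.0, Vectors.dense(2.0, 1.3, 1.0)),
  (1.0, Vectors.dense(0.0, 1.2, -0.5))
)).toDF("label", "features")

val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.01)

// fit() returns the LogisticRegressionModel documented here.
val model = lr.fit(training)
```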

Method Summary
 LogisticRegressionModel copy(ParamMap extra)
          Creates a copy of this instance with the same UID and some extra params.
 double intercept()
          Intercept computed for this model.
 int numClasses()
          Number of classes (values which the label can take).
 LogisticRegressionModel setProbabilityCol(String value)
          Sets the column name for predicted class conditional probabilities.
 LogisticRegressionModel setThreshold(double value)
          Sets the threshold used in binary classification prediction, in range [0, 1].
 DataFrame transform(DataFrame dataset)
          Transforms dataset by reading from featuresCol and appending new columns as specified by parameters: predicted labels as predictionCol (Double), raw predictions (confidences) as rawPredictionCol (Vector), and the probability of each class as probabilityCol (Vector).
 String uid()
          An immutable unique ID for this object and its derivatives.
 StructType validateAndTransformSchema(StructType schema, boolean fitting, DataType featuresDataType)
          Validates and transforms the input schema with the provided param map.
 Vector weights()
          Weights computed for every feature.
 
Methods inherited from class org.apache.spark.ml.classification.ClassificationModel
setRawPredictionCol
 
Methods inherited from class org.apache.spark.ml.PredictionModel
setFeaturesCol, setPredictionCol, transformSchema
 
Methods inherited from class org.apache.spark.ml.Model
hasParent, parent, setParent
 
Methods inherited from class org.apache.spark.ml.Transformer
transform, transform, transform
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface org.apache.spark.ml.param.Params
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, setDefault, shouldOwn, validateParams
 
Methods inherited from interface org.apache.spark.Logging
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
 

Method Detail

uid

public String uid()
An immutable unique ID for this object and its derivatives.

weights

public Vector weights()
Weights computed for every feature.

intercept

public double intercept()
Intercept computed for this model.
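
The learned coefficients can be read back from a fitted model; a brief sketch (model is assumed to be a fitted LogisticRegressionModel):

```scala
// weights() holds one coefficient per feature; intercept() is the bias term.
val w: org.apache.spark.mllib.linalg.Vector = model.weights
println(s"weights = $w, intercept = ${model.intercept}")
```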

setThreshold

public LogisticRegressionModel setThreshold(double value)
Sets the threshold used in binary classification prediction, in range [0, 1].
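
In binary classification, the threshold decides how the predicted probability of class 1 is mapped to a 0/1 label. A sketch under stated assumptions (model and dataset are assumed to exist; 0.7 is an arbitrary illustrative value):

```scala
// Predict 1.0 only when P(label = 1) >= 0.7 instead of the default 0.5.
// Note: setThreshold mutates and returns this same model instance.
model.setThreshold(0.7)
val predictions = model.transform(dataset)
```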

numClasses

public int numClasses()
Description copied from class: ClassificationModel
Number of classes (values which the label can take).

Specified by:
numClasses in class ClassificationModel<Vector,LogisticRegressionModel>

copy

public LogisticRegressionModel copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly.

Specified by:
copy in interface Params
Specified by:
copy in class Model<LogisticRegressionModel>
Parameters:
extra - (undocumented)
Returns:
(undocumented)
See Also:
defaultCopy()
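
copy is typically used to derive a variant of a fitted model with some params overridden, without refitting. A sketch (model is assumed to be a fitted LogisticRegressionModel; the 0.8 threshold is illustrative):

```scala
import org.apache.spark.ml.param.ParamMap

// Same UID and learned weights as `model`, but a stricter decision threshold.
val stricter = model.copy(ParamMap(model.threshold -> 0.8))
```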

setProbabilityCol

public LogisticRegressionModel setProbabilityCol(String value)
Sets the column name for predicted class conditional probabilities.

transform

public DataFrame transform(DataFrame dataset)
Transforms dataset by reading from featuresCol and appending new columns as specified by parameters:
 - predicted labels as predictionCol of type Double
 - raw predictions (confidences) as rawPredictionCol of type Vector
 - probability of each class as probabilityCol of type Vector

Overrides:
transform in class ClassificationModel<Vector,LogisticRegressionModel>
Parameters:
dataset - input dataset
Returns:
transformed dataset
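
For example, with the default output column names, the appended columns can be inspected directly (model and a test DataFrame with a "features" column of type Vector are assumed):

```scala
val predictions = model.transform(test)

// Default output column names: "rawPrediction" (confidences),
// "probability" (class probabilities), "prediction" (predicted label).
predictions.select("features", "rawPrediction", "probability", "prediction").show()
```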

validateAndTransformSchema

public StructType validateAndTransformSchema(StructType schema,
                                             boolean fitting,
                                             DataType featuresDataType)
Validates and transforms the input schema with the provided param map.

Parameters:
schema - input schema
fitting - whether the schema is being validated during fitting (true) or during transformation (false)
featuresDataType - SQL DataType for FeaturesType. E.g., VectorUDT for vector features.
Returns:
output schema