class LinearRegression extends Regressor[Vector, LinearRegression, LinearRegressionModel] with LinearRegressionParams with DefaultParamsWritable with Logging
Linear regression.
The learning objective is to minimize the specified loss function, with regularization. Two kinds of loss are supported:
- squaredError (a.k.a. squared loss)
- huber (a hybrid of squared error for relatively small errors and absolute error for relatively large ones; the scale parameter is estimated from the training data)
This supports multiple types of regularization:
- none (a.k.a. ordinary least squares)
- L2 (ridge regression)
- L1 (Lasso)
- L2 + L1 (elastic net)
The squared error objective function is:
$$ \begin{align} \min_{w}\frac{1}{2n}{\sum_{i=1}^n(X_{i}w - y_{i})^{2} + \lambda\left[\frac{1-\alpha}{2}{||w||_{2}}^{2} + \alpha{||w||_{1}}\right]} \end{align} $$
The huber objective function is:
$$ \begin{align} \min_{w, \sigma}\frac{1}{2n}{\sum_{i=1}^n\left(\sigma + H_m\left(\frac{X_{i}w - y_{i}}{\sigma}\right)\sigma\right) + \frac{1}{2}\lambda {||w||_2}^2} \end{align} $$
where
$$ \begin{align} H_m(z) = \begin{cases} z^2, & \text {if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases} \end{align} $$
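The piecewise function $H_m$ above can be sanity-checked with a small, self-contained Scala sketch (the object and method names here are illustrative, not part of Spark's API):

```scala
// Illustrative sketch of the Huber criterion H_m defined above.
// epsilon plays the role of the shape parameter (Spark's default is 1.35).
object HuberSketch {
  def hm(z: Double, epsilon: Double = 1.35): Double =
    if (math.abs(z) < epsilon) z * z                              // quadratic for small residuals
    else 2.0 * epsilon * math.abs(z) - epsilon * epsilon          // linear for large residuals

  def main(args: Array[String]): Unit = {
    println(hm(0.5))   // small residual: 0.5^2 = 0.25
    println(hm(10.0))  // large residual: 2 * 1.35 * 10 - 1.35^2 = 25.1775
    // The two branches agree at |z| = epsilon, so H_m is continuous there.
    println(math.abs(hm(1.35) - 1.35 * 1.35) < 1e-12)
  }
}
```

The linear branch for large |z| is what makes the huber objective less sensitive to outliers than squared error.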
Since 3.1.0, it supports stacking instances into blocks and using GEMV for better performance. If param maxBlockSizeInMB is set to 0.0 (the default), a block size of 1.0 MB is used.
Note: Fitting with huber loss only supports none and L2 regularization.
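As a usage sketch (a configuration fragment, not a runnable program: it assumes an active SparkSession and a hypothetical DataFrame `training` with the default "label" and "features" columns):

```scala
import org.apache.spark.ml.regression.LinearRegression

// Elastic-net regularized least squares. All param setters return
// LinearRegression.this.type, so configuration chains fluently.
val lr = new LinearRegression()
  .setMaxIter(100)          // default is 100
  .setRegParam(0.1)         // lambda in the objective above
  .setElasticNetParam(0.5)  // alpha: mix of L1 and L2 penalties

// `training` is an assumed DataFrame[label: Double, features: Vector].
val model = lr.fit(training)
println(s"coefficients=${model.coefficients} intercept=${model.intercept}")
```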
- Annotations
- @Since("1.3.0")
- Source
- LinearRegression.scala
- By Inheritance
- LinearRegression
- DefaultParamsWritable
- MLWritable
- LinearRegressionParams
- HasMaxBlockSizeInMB
- HasLoss
- HasAggregationDepth
- HasSolver
- HasWeightCol
- HasStandardization
- HasFitIntercept
- HasTol
- HasMaxIter
- HasElasticNetParam
- HasRegParam
- Regressor
- Predictor
- PredictorParams
- HasPredictionCol
- HasFeaturesCol
- HasLabelCol
- Estimator
- PipelineStage
- Logging
- Params
- Serializable
- Identifiable
- AnyRef
- Any
Instance Constructors
- new LinearRegression()
- new LinearRegression(uid: String)
Type Members
- implicit class LogStringContext extends AnyRef
- Definition Classes
- Logging
 
Value Members
-   final  def !=(arg0: Any): Boolean- Definition Classes
- AnyRef → Any
 
-   final  def ##: Int- Definition Classes
- AnyRef → Any
 
- final def $[T](param: Param[T]): T
An alias for getOrDefault().
- Attributes
- protected
- Definition Classes
- Params
 
-   final  def ==(arg0: Any): Boolean- Definition Classes
- AnyRef → Any
 
-    def MDC(key: LogKey, value: Any): MDC- Attributes
- protected
- Definition Classes
- Logging
 
- final val aggregationDepth: IntParam
Param for suggested depth for treeAggregate (>= 2).
- Definition Classes
- HasAggregationDepth
 
-   final  def asInstanceOf[T0]: T0- Definition Classes
- Any
 
- final def clear(param: Param[_]): LinearRegression.this.type
Clears the user-supplied value for the input param.
- Definition Classes
- Params
 
-    def clone(): AnyRef- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
 
- def copy(extra: ParamMap): LinearRegression
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
- Definition Classes
- LinearRegression → Predictor → Estimator → PipelineStage → Params
- Annotations
- @Since("1.4.0")
 
- def copyValues[T <: Params](to: T, extra: ParamMap = ParamMap.empty): T
Copies param values from this instance to another instance for params shared by them. This handles default Params and explicitly set Params separately. Default Params are copied from and to defaultParamMap, and explicitly set Params are copied from and to paramMap. Warning: this implicitly assumes that this Params instance and the target instance share the same set of default Params.
- to
- the target instance, which should work with the same set of default Params as this source instance
- extra
- extra params to be copied to the target's paramMap
- returns
- the target instance with param values copied 
 - Attributes
- protected
- Definition Classes
- Params
 
- final def defaultCopy[T <: Params](extra: ParamMap): T
Default implementation of copy with extra params. It tries to create a new instance with the same UID. Then it copies the embedded and extra parameters over and returns the new instance.
- Attributes
- protected
- Definition Classes
- Params
 
- final val elasticNetParam: DoubleParam
Param for the ElasticNet mixing parameter, in range [0, 1]. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty.
- Definition Classes
- HasElasticNetParam
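The mixing behaviour of elasticNetParam can be illustrated with a self-contained Scala sketch of the regularization term from the squared-error objective, lambda * ((1 - alpha)/2 * ||w||_2^2 + alpha * ||w||_1) (the object name is illustrative, not part of Spark's API):

```scala
// Illustrative sketch of the elastic-net penalty term.
object ElasticNetSketch {
  def penalty(w: Array[Double], lambda: Double, alpha: Double): Double = {
    val l1 = w.map(math.abs).sum          // ||w||_1
    val l2sq = w.map(x => x * x).sum      // ||w||_2^2
    lambda * ((1.0 - alpha) / 2.0 * l2sq + alpha * l1)
  }

  def main(args: Array[String]): Unit = {
    val w = Array(3.0, -4.0)              // ||w||_1 = 7, ||w||_2^2 = 25
    println(penalty(w, 0.1, 0.0))         // alpha = 0, pure L2 (ridge): 0.1 * 25 / 2 = 1.25
    println(penalty(w, 0.1, 1.0))         // alpha = 1, pure L1 (lasso): 0.1 * 7 = 0.7
  }
}
```

Intermediate alpha values blend the two penalties linearly, which is why elastic net can both shrink and zero out coefficients.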
 
- final val epsilon: DoubleParam
The shape parameter to control the amount of robustness. Must be > 1.0. At larger values of epsilon, the huber criterion becomes more similar to least squares regression; for small values of epsilon, the criterion is more similar to L1 regression. Default is 1.35 to get as much robustness as possible while retaining 95% statistical efficiency for normally distributed data. It matches sklearn HuberRegressor and is "M" from A robust hybrid of lasso and ridge regression. Only valid when "loss" is "huber".
- Definition Classes
- LinearRegressionParams
- Annotations
- @Since("2.3.0")
 
-   final  def eq(arg0: AnyRef): Boolean- Definition Classes
- AnyRef
 
-    def equals(arg0: AnyRef): Boolean- Definition Classes
- AnyRef → Any
 
- def estimateModelSize(dataset: Dataset[_]): Long
For ml connect only. Estimate an upper-bound size of the model to be fitted, in bytes, based on the parameters and the dataset, e.g., using $(k) and numFeatures to estimate a k-means model size.
1. Both driver-side memory usage and the size of distributed objects (like DataFrame, RDD, Graph, Summary) are counted.
2. Lazy vals are not counted, e.g., an auxiliary object used in prediction.
3. If there is not enough information to get an accurate size, try to estimate the upper-bound size, e.g.:
   - Given a LogisticRegression estimator, assume the coefficients are dense, even though the actual fitted model might be sparse (by L1 penalty).
   - Given a tree model, assume all underlying trees are complete binary trees, even though some branches might be pruned or truncated.
4. For some models, such as tree models, estimating the model size before training is hard, and the estimateModelSize method is not supported.
 - Definition Classes
- LinearRegression → Estimator
 
- def explainParam(param: Param[_]): String
Explains a param.
- param
- input param, must belong to this instance. 
- returns
- a string that contains the input param name, doc, and optionally its default value and the user-supplied value 
 - Definition Classes
- Params
 
- def explainParams(): String
Explains all params of this instance. See explainParam().
- Definition Classes
- Params
 
- final def extractParamMap(): ParamMap
extractParamMap with no extra values.
- Definition Classes
- Params
 
- final def extractParamMap(extra: ParamMap): ParamMap
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values less than user-supplied values less than extra.
- Definition Classes
- Params
 
- final val featuresCol: Param[String]
Param for features column name.
- Definition Classes
- HasFeaturesCol
 
- def fit(dataset: Dataset[_]): LinearRegressionModel
Fits a model to the input data.

- def fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[LinearRegressionModel]
Fits multiple models to the input data with multiple sets of parameters. The default implementation uses a for loop on each parameter map. Subclasses could override this to optimize multi-model training.
- dataset
- input dataset 
- paramMaps
- An array of parameter maps. These values override any specified in this Estimator's embedded ParamMap. 
- returns
- fitted models, matching the input parameter maps 
 - Definition Classes
- Estimator
- Annotations
- @Since("2.0.0")
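The multi-model overload can be sketched as follows (a configuration fragment, not runnable standalone: it assumes an active SparkSession and a hypothetical DataFrame `training`):

```scala
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.regression.LinearRegression

val lr = new LinearRegression()

// Each ParamMap overrides the estimator's embedded params for one fit.
val grid = Seq(
  ParamMap(lr.regParam -> 0.01),
  ParamMap(lr.regParam -> 0.1, lr.elasticNetParam -> 1.0) // lasso
)

// Returns one fitted model per ParamMap, in the same order.
val models = lr.fit(training, grid)
```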
 
- def fit(dataset: Dataset[_], paramMap: ParamMap): LinearRegressionModel
Fits a single model to the input data with provided parameter map.
- dataset
- input dataset 
- paramMap
- Parameter map. These values override any specified in this Estimator's embedded ParamMap. 
- returns
- fitted model 
 - Definition Classes
- Estimator
- Annotations
- @Since("2.0.0")
 
- def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): LinearRegressionModel
Fits a single model to the input data with optional parameters.
- dataset
- input dataset 
- firstParamPair
- the first param pair, overrides embedded params 
- otherParamPairs
- other param pairs. These values override any specified in this Estimator's embedded ParamMap. 
- returns
- fitted model 
 - Definition Classes
- Estimator
- Annotations
- @Since("2.0.0") @varargs()
 
- final val fitIntercept: BooleanParam
Param for whether to fit an intercept term.
- Definition Classes
- HasFitIntercept
 
- final def get[T](param: Param[T]): Option[T]
Optionally returns the user-supplied value of a param.
- Definition Classes
- Params
 
-   final  def getAggregationDepth: Int- Definition Classes
- HasAggregationDepth
 
-   final  def getClass(): Class[_ <: AnyRef]- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
- final def getDefault[T](param: Param[T]): Option[T]
Gets the default value of a parameter.
- Definition Classes
- Params
 
-   final  def getElasticNetParam: Double- Definition Classes
- HasElasticNetParam
 
-    def getEpsilon: Double- Definition Classes
- LinearRegressionParams
- Annotations
- @Since("2.3.0")
 
-   final  def getFeaturesCol: String- Definition Classes
- HasFeaturesCol
 
-   final  def getFitIntercept: Boolean- Definition Classes
- HasFitIntercept
 
-   final  def getLabelCol: String- Definition Classes
- HasLabelCol
 
-   final  def getLoss: String- Definition Classes
- HasLoss
 
-   final  def getMaxBlockSizeInMB: Double- Definition Classes
- HasMaxBlockSizeInMB
 
-   final  def getMaxIter: Int- Definition Classes
- HasMaxIter
 
- final def getOrDefault[T](param: Param[T]): T
Gets the value of a param in the embedded param map or its default value. Throws an exception if neither is set.
- Definition Classes
- Params
 
- def getParam(paramName: String): Param[Any]
Gets a param by its name.
- Definition Classes
- Params
 
-   final  def getPredictionCol: String- Definition Classes
- HasPredictionCol
 
-   final  def getRegParam: Double- Definition Classes
- HasRegParam
 
-   final  def getSolver: String- Definition Classes
- HasSolver
 
-   final  def getStandardization: Boolean- Definition Classes
- HasStandardization
 
-   final  def getTol: Double- Definition Classes
- HasTol
 
-   final  def getWeightCol: String- Definition Classes
- HasWeightCol
 
- final def hasDefault[T](param: Param[T]): Boolean
Tests whether the input param has a default value set.
- Definition Classes
- Params
 
- def hasParam(paramName: String): Boolean
Tests whether this instance contains a param with a given name.
- Definition Classes
- Params
 
-    def hashCode(): Int- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
-    def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean- Attributes
- protected
- Definition Classes
- Logging
 
-    def initializeLogIfNecessary(isInterpreter: Boolean): Unit- Attributes
- protected
- Definition Classes
- Logging
 
- final def isDefined(param: Param[_]): Boolean
Checks whether a param is explicitly set or has a default value.
- Definition Classes
- Params
 
-   final  def isInstanceOf[T0]: Boolean- Definition Classes
- Any
 
- final def isSet(param: Param[_]): Boolean
Checks whether a param is explicitly set.
- Definition Classes
- Params
 
-    def isTraceEnabled(): Boolean- Attributes
- protected
- Definition Classes
- Logging
 
- final val labelCol: Param[String]
Param for label column name.
- Definition Classes
- HasLabelCol
 
-    def log: Logger- Attributes
- protected
- Definition Classes
- Logging
 
-    def logBasedOnLevel(level: Level)(f: => MessageWithContext): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logDebug(msg: => String, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logDebug(entry: LogEntry, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logDebug(entry: LogEntry): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logDebug(msg: => String): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logError(msg: => String, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logError(entry: LogEntry, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logError(entry: LogEntry): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logError(msg: => String): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logInfo(msg: => String, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logInfo(entry: LogEntry, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logInfo(entry: LogEntry): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logInfo(msg: => String): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logName: String- Attributes
- protected
- Definition Classes
- Logging
 
-    def logTrace(msg: => String, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logTrace(entry: LogEntry, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logTrace(entry: LogEntry): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logTrace(msg: => String): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logWarning(msg: => String, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logWarning(entry: LogEntry, throwable: Throwable): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logWarning(entry: LogEntry): Unit- Attributes
- protected
- Definition Classes
- Logging
 
-    def logWarning(msg: => String): Unit- Attributes
- protected
- Definition Classes
- Logging
 
- final val loss: Param[String]
The loss function to be optimized. Supported options: "squaredError" and "huber". Default: "squaredError"
- Definition Classes
- LinearRegressionParams → HasLoss
- Annotations
- @Since("2.3.0")
 
- final val maxBlockSizeInMB: DoubleParam
Param for maximum memory in MB for stacking input data into blocks. Data is stacked within partitions; if the remaining data size in a partition is smaller, the block size is adjusted to that data size. The default of 0.0 means an optimal value is chosen, depending on the specific algorithm. Must be >= 0.
- Definition Classes
- HasMaxBlockSizeInMB
 
- final val maxIter: IntParam
Param for maximum number of iterations (>= 0).
- Definition Classes
- HasMaxIter
 
-   final  def ne(arg0: AnyRef): Boolean- Definition Classes
- AnyRef
 
-   final  def notify(): Unit- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-   final  def notifyAll(): Unit- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
- lazy val params: Array[Param[_]]
Returns all params sorted by their names. The default implementation uses Java reflection to list all public methods that have no arguments and return Param.
- Definition Classes
- Params
- Note
- Developers should not use this method in constructors because we cannot guarantee that this variable gets initialized before other params.
 
- final val predictionCol: Param[String]
Param for prediction column name.
- Definition Classes
- HasPredictionCol
 
- final val regParam: DoubleParam
Param for regularization parameter (>= 0).
- Definition Classes
- HasRegParam
 
- def save(path: String): Unit
Saves this ML instance to the input path, a shortcut of write.save(path).
- Definition Classes
- MLWritable
- Annotations
- @Since("1.6.0") @throws("If the input path already exists but overwrite is not enabled.")
 
- final def set(paramPair: ParamPair[_]): LinearRegression.this.type
Sets a parameter in the embedded param map.
- Attributes
- protected
- Definition Classes
- Params
 
- final def set(param: String, value: Any): LinearRegression.this.type
Sets a parameter (by name) in the embedded param map.
- Attributes
- protected
- Definition Classes
- Params
 
- final def set[T](param: Param[T], value: T): LinearRegression.this.type
Sets a parameter in the embedded param map.
- Definition Classes
- Params
 
- def setAggregationDepth(value: Int): LinearRegression.this.type
Suggested depth for treeAggregate (greater than or equal to 2). If the dimensions of features or the number of partitions are large, this param could be adjusted to a larger size. Default is 2.
- Annotations
- @Since("2.1.0")
 
- final def setDefault(paramPairs: ParamPair[_]*): LinearRegression.this.type
Sets default values for a list of params. Note: Java developers should use the single-parameter setDefault. Annotating this with varargs can cause compilation failures due to a Scala compiler bug. See SPARK-9268.
- paramPairs
- a list of param pairs that specify params and their default values to set respectively. Make sure that the params are initialized before this method gets called. 
 - Attributes
- protected
- Definition Classes
- Params
 
- final def setDefault[T](param: Param[T], value: T): LinearRegression.this.type
Sets a default value for a param.

- def setElasticNetParam(value: Double): LinearRegression.this.type
Set the ElasticNet mixing parameter. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty. For alpha in (0, 1), the penalty is a combination of L1 and L2. Default is 0.0, which is an L2 penalty. Note: fitting with huber loss only supports None and L2 regularization, so an exception is thrown if this param has a non-zero value.
- Annotations
- @Since("1.4.0")
 
- def setEpsilon(value: Double): LinearRegression.this.type
Sets the value of param epsilon. Default is 1.35.
- Annotations
- @Since("2.3.0")
 
-    def setFeaturesCol(value: String): LinearRegression- Definition Classes
- Predictor
 
- def setFitIntercept(value: Boolean): LinearRegression.this.type
Set whether to fit the intercept. Default is true.
- Annotations
- @Since("1.5.0")
 
-    def setLabelCol(value: String): LinearRegression- Definition Classes
- Predictor
 
- def setLoss(value: String): LinearRegression.this.type
Sets the value of param loss. Default is "squaredError".
- Annotations
- @Since("2.3.0")
 
- def setMaxBlockSizeInMB(value: Double): LinearRegression.this.type
Sets the value of param maxBlockSizeInMB. Default is 0.0, in which case 1.0 MB is chosen.
- Annotations
- @Since("3.1.0")
 
- def setMaxIter(value: Int): LinearRegression.this.type
Set the maximum number of iterations. Default is 100.
- Annotations
- @Since("1.3.0")
 
-    def setPredictionCol(value: String): LinearRegression- Definition Classes
- Predictor
 
- def setRegParam(value: Double): LinearRegression.this.type
Set the regularization parameter. Default is 0.0.
- Annotations
- @Since("1.3.0")
 
- def setSolver(value: String): LinearRegression.this.type
Set the solver algorithm used for optimization. In case of linear regression, this can be "l-bfgs", "normal" or "auto".
- "l-bfgs" denotes Limited-memory BFGS, a limited-memory quasi-Newton optimization method.
- "normal" denotes using the Normal Equation as an analytical solution to the linear regression problem. This solver is limited to LinearRegression.MAX_FEATURES_FOR_NORMAL_SOLVER.
- "auto" (default) means that the solver algorithm is selected automatically. The Normal Equation solver is used when possible, with an automatic fallback to iterative optimization methods when needed.
Note: fitting with huber loss does not support the normal solver, so an exception is thrown if this param is set to "normal".
- Annotations
- @Since("1.6.0")
 
- def setStandardization(value: Boolean): LinearRegression.this.type
Whether to standardize the training features before fitting the model. The coefficients of models are always returned on the original scale, so this is transparent to users. Default is true.
- Annotations
- @Since("1.5.0")
- Note
- With or without standardization, the models should always converge to the same solution when no regularization is applied. In R's GLMNET package, the default behavior is true as well.
 
- def setTol(value: Double): LinearRegression.this.type
Set the convergence tolerance of iterations. A smaller value leads to higher accuracy at the cost of more iterations. Default is 1E-6.
- Annotations
- @Since("1.4.0")
 
- def setWeightCol(value: String): LinearRegression.this.type
Whether to over-/under-sample training instances according to the given weights in weightCol. If not set or empty, all instances are treated equally (weight 1.0). Default is not set, so all instances have weight one.
- Annotations
- @Since("1.6.0")
 
- final val solver: Param[String]
The solver algorithm for optimization. Supported options: "l-bfgs", "normal" and "auto". Default: "auto"
- Definition Classes
- LinearRegressionParams → HasSolver
- Annotations
- @Since("1.6.0")
 
- final val standardization: BooleanParam
Param for whether to standardize the training features before fitting the model.
- Definition Classes
- HasStandardization
 
-   final  def synchronized[T0](arg0: => T0): T0- Definition Classes
- AnyRef
 
-    def toString(): String- Definition Classes
- Identifiable → AnyRef → Any
 
- final val tol: DoubleParam
Param for the convergence tolerance for iterative algorithms (>= 0).
- Definition Classes
- HasTol
 
- def train(dataset: Dataset[_]): LinearRegressionModel
Train a model using the given dataset and parameters. Developers can implement this instead of fit() to avoid dealing with schema validation and copying parameters into the model.
- dataset
- Training dataset 
- returns
- Fitted model 
 - Attributes
- protected
- Definition Classes
- LinearRegression → Predictor
 
- def transformSchema(schema: StructType): StructType
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). Typical implementations should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
- Definition Classes
- Predictor → PipelineStage
 
- def transformSchema(schema: StructType, logging: Boolean): StructType
:: DeveloperApi :: Derives the output schema from the input schema and parameters, optionally with logging. This should be optimistic. If it is unclear whether the schema will be valid, then it should be assumed valid until proven otherwise.
- Attributes
- protected
- Definition Classes
- PipelineStage
- Annotations
- @DeveloperApi()
 
- val uid: String
An immutable unique ID for the object and its derivatives.
- Definition Classes
- LinearRegression → Identifiable
- Annotations
- @Since("1.3.0")
 
- def validateAndTransformSchema(schema: StructType, fitting: Boolean, featuresDataType: DataType): StructType
Validates and transforms the input schema with the provided param map.
- schema
- input schema 
- fitting
- whether this is in fitting 
- featuresDataType
- SQL DataType for FeaturesType, e.g., VectorUDT for vector features.
- returns
- output schema 
 - Attributes
- protected
- Definition Classes
- LinearRegressionParams → PredictorParams
 
-   final  def wait(arg0: Long, arg1: Int): Unit- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
-   final  def wait(arg0: Long): Unit- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
 
-   final  def wait(): Unit- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
- final val weightCol: Param[String]
Param for weight column name. If this is not set or empty, we treat all instance weights as 1.0.
- Definition Classes
- HasWeightCol
 
-    def withLogContext(context: Map[String, String])(body: => Unit): Unit- Attributes
- protected
- Definition Classes
- Logging
 
- def write: MLWriter
Returns an MLWriter instance for this ML instance.
- Definition Classes
- DefaultParamsWritable → MLWritable
 
Deprecated Value Members
-    def finalize(): Unit- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
- (Since version 9) 
 
Inherited from DefaultParamsWritable
Inherited from MLWritable
Inherited from LinearRegressionParams
Inherited from HasMaxBlockSizeInMB
Inherited from HasLoss
Inherited from HasAggregationDepth
Inherited from HasSolver
Inherited from HasWeightCol
Inherited from HasStandardization
Inherited from HasFitIntercept
Inherited from HasTol
Inherited from HasMaxIter
Inherited from HasElasticNetParam
Inherited from HasRegParam
Inherited from Regressor[Vector, LinearRegression, LinearRegressionModel]
Inherited from Predictor[Vector, LinearRegression, LinearRegressionModel]
Inherited from PredictorParams
Inherited from HasPredictionCol
Inherited from HasFeaturesCol
Inherited from HasLabelCol
Inherited from Estimator[LinearRegressionModel]
Inherited from PipelineStage
Inherited from Logging
Inherited from Params
Inherited from Serializable
Inherited from Identifiable
Inherited from AnyRef
Inherited from Any
Parameters
A list of (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively.
Parameter setters
Parameter getters
(expert-only) Parameters
A list of advanced, expert-only (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively.