public class Pipeline extends Estimator<PipelineModel>

A simple pipeline, which acts as an estimator. A Pipeline consists of a sequence of stages, each
of which is either an Estimator or a Transformer. When fit(org.apache.spark.sql.DataFrame) is
called, the stages are executed in order. If a stage is an Estimator, its
Estimator.fit(org.apache.spark.sql.DataFrame, org.apache.spark.ml.param.ParamPair<?>, org.apache.spark.ml.param.ParamPair<?>...)
method will be called on the input dataset to fit a model. Then the model, which is a
transformer, will be used to transform the dataset as the input to the next stage. If a stage is
a Transformer, its
Transformer.transform(org.apache.spark.sql.DataFrame, org.apache.spark.ml.param.ParamPair<?>, org.apache.spark.ml.param.ParamPair<?>...)
method will be called to produce the dataset for the next stage. The fitted model from a
Pipeline is a PipelineModel, which consists of fitted models and transformers corresponding to
the pipeline stages. If there are no stages, the pipeline acts as an identity transformer.

| Modifier and Type | Method and Description |
|---|---|
| Pipeline | copy(ParamMap extra): Creates a copy of this instance with the same UID and some extra params. |
| PipelineModel | fit(DataFrame dataset): Fits the pipeline to the input dataset with additional parameters. |
| PipelineStage[] | getStages() |
| Pipeline | setStages(PipelineStage[] value) |
| Param<PipelineStage[]> | stages(): Param for pipeline stages. |
| StructType | transformSchema(StructType schema): :: DeveloperApi :: |
| String | uid() |
| void | validateParams(): Validates parameter values stored internally. |
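The stage-execution semantics described above can be sketched without Spark. The following is a minimal, hypothetical model: here a "dataset" is just a list of strings, and the one-method Transformer and Estimator interfaces are stand-ins for the real Spark classes, not their API.

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineSketch {
    // Stand-ins for the Spark types; a "dataset" is a List<String> here.
    public interface Transformer { List<String> transform(List<String> data); }
    public interface Estimator { Transformer fit(List<String> data); }

    /** Runs the stages in order: an estimator is fit and its model then
     *  transforms the data that feeds the next stage; a transformer is
     *  applied directly. The fitted "pipeline model" is the list of
     *  resulting transformers. */
    public static List<Transformer> fit(List<Object> stages, List<String> data) {
        List<Transformer> fitted = new ArrayList<>();
        List<String> current = data;
        for (Object stage : stages) {
            Transformer t;
            if (stage instanceof Estimator) {
                t = ((Estimator) stage).fit(current); // fit, then reuse as a transformer
            } else {
                t = (Transformer) stage;
            }
            current = t.transform(current);           // input to the next stage
            fitted.add(t);
        }
        return fitted; // with no stages this is empty: an identity "model"
    }

    public static void main(String[] args) {
        Transformer lower = d -> {
            List<String> out = new ArrayList<>();
            for (String s : d) out.add(s.toLowerCase());
            return out;
        };
        // A toy "estimator" that learns a prefix from the training data size.
        Estimator prefixer = d -> {
            String p = d.size() + ":";
            return in -> {
                List<String> out = new ArrayList<>();
                for (String s : in) out.add(p + s);
                return out;
            };
        };
        List<Transformer> model = fit(List.of(lower, prefixer), List.of("A", "B"));
        List<String> result = List.of("C");
        for (Transformer t : model) result = t.transform(result);
        System.out.println(result); // [2:c]
    }
}
```

Note that, as in the description above, an empty stage list yields an empty model, which behaves as an identity transformer when applied.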
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public String uid()

public Param<PipelineStage[]> stages()
Param for pipeline stages.

public Pipeline setStages(PipelineStage[] value)

public PipelineStage[] getStages()

public void validateParams()
Validates parameter values stored internally.
Specified by: validateParams in interface Params
This only needs to check for interactions between parameters. Parameter value checks which do
not depend on other parameters are handled by Param.validate(). This method does not handle
input/output column parameters; those are checked during schema validation.
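That division of labor can be illustrated with a hypothetical, non-Spark sketch: single-parameter range checks run when a value is set (the role Param.validate() plays), while validateParams() checks only a constraint that couples two made-up parameters, minBins and maxBins.

```java
public class ParamsSketch {
    private int minBins = 2;
    private int maxBins = 32;

    public void setMinBins(int v) {
        // Per-parameter check: depends on no other parameter.
        if (v < 1) throw new IllegalArgumentException("minBins must be >= 1");
        minBins = v;
    }

    public void setMaxBins(int v) {
        if (v < 1) throw new IllegalArgumentException("maxBins must be >= 1");
        maxBins = v;
    }

    /** Only cross-parameter interactions are validated here. */
    public void validateParams() {
        if (minBins > maxBins)
            throw new IllegalArgumentException("minBins must not exceed maxBins");
    }
}
```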
public PipelineModel fit(DataFrame dataset)
Fits the pipeline to the input dataset with additional parameters. If a stage is an Estimator,
its
Estimator.fit(org.apache.spark.sql.DataFrame, org.apache.spark.ml.param.ParamPair<?>, org.apache.spark.ml.param.ParamPair<?>...)
method will be called on the input dataset to fit a model. Then the model, which is a
transformer, will be used to transform the dataset as the input to the next stage. If a stage is
a Transformer, its
Transformer.transform(org.apache.spark.sql.DataFrame, org.apache.spark.ml.param.ParamPair<?>, org.apache.spark.ml.param.ParamPair<?>...)
method will be called to produce the dataset for the next stage. The fitted model from a
Pipeline is a PipelineModel, which consists of fitted models and transformers corresponding to
the pipeline stages. If there are no stages, the output model acts as an identity transformer.
Specified by: fit in class Estimator<PipelineModel>
Parameters: dataset - input dataset

public Pipeline copy(ParamMap extra)
Creates a copy of this instance with the same UID and some extra params.
Specified by: copy in interface Params
Overrides: copy in class Estimator<PipelineModel>
Parameters: extra - (undocumented)

public StructType transformSchema(StructType schema)
:: DeveloperApi ::
Derives the output schema from the input schema.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
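For a pipeline, deriving the output schema amounts to threading the input schema through each stage's own schema transform, without touching any data. A minimal sketch, assuming a schema is just a list of column names (a stand-in for Spark's StructType, not the real API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class SchemaSketch {
    /** Applies each stage's schema transform in order; no data is involved. */
    public static List<String> transformSchema(List<UnaryOperator<List<String>>> stages,
                                               List<String> schema) {
        List<String> current = schema;
        for (UnaryOperator<List<String>> stage : stages) {
            current = stage.apply(current);
        }
        return current;
    }

    public static void main(String[] args) {
        // A stage that appends an output column, as a tokenizer might add "words".
        UnaryOperator<List<String>> addWords = s -> {
            List<String> out = new ArrayList<>(s);
            out.add("words");
            return out;
        };
        System.out.println(transformSchema(List.of(addWords), List.of("text")));
        // [text, words]
    }
}
```

With no stages the input schema is returned unchanged, matching the identity-transformer behavior described above.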