Class QuantileDiscretizer
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, QuantileDiscretizerBase, Params, HasHandleInvalid, HasInputCol, HasInputCols, HasOutputCol, HasOutputCols, HasRelativeError, DefaultParamsWritable, Identifiable, MLWritable
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins can be set using the numBuckets parameter. It is possible that the number of buckets actually used will be smaller than this value, for example, if there are too few distinct values of the input to create enough distinct quantiles.

Since 2.3.0, QuantileDiscretizer can map multiple columns at once by setting the inputCols parameter. If both the inputCol and inputCols parameters are set, an Exception will be thrown. To specify the number of buckets for each column, the numBucketsArray parameter can be set, or, if the number of buckets should be the same across columns, numBuckets can be set as a convenience. Note that in the multiple-columns case, the relative error is applied to all columns.
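The point that fewer buckets than numBuckets may be produced can be illustrated with a small sketch. This is not Spark's implementation (Spark uses an approximate quantile algorithm); it computes exact quantile split candidates and deduplicates them, which is the step where buckets collapse when the input has few distinct values. The class and method names here are invented for illustration.

```java
import java.util.Arrays;
import java.util.TreeSet;

// Illustrative sketch, not Spark code: derive candidate bucket boundaries
// from exact quantiles, then deduplicate. With heavily repeated input
// values, several quantiles coincide, so fewer than numBuckets buckets
// remain.
public class QuantileSplitsSketch {
    // Exact quantile of a sorted array (nearest-rank style).
    static double quantile(double[] sorted, double p) {
        int idx = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(idx, sorted.length - 1))];
    }

    // Candidate splits at the 1/n, 2/n, ..., (n-1)/n quantiles, deduplicated.
    static double[] splits(double[] data, int numBuckets) {
        double[] sorted = data.clone();
        Arrays.sort(sorted);
        TreeSet<Double> distinct = new TreeSet<>();
        for (int i = 1; i < numBuckets; i++) {
            distinct.add(quantile(sorted, (double) i / numBuckets));
        }
        return distinct.stream().mapToDouble(Double::doubleValue).toArray();
    }

    public static void main(String[] args) {
        // Mostly-constant data: asking for 4 buckets yields only 2, because
        // the interior quantiles all coincide at 1.0.
        double[] data = {1, 1, 1, 1, 1, 1, 1, 1, 1, 5};
        double[] s = splits(data, 4);
        System.out.println(Arrays.toString(s) + " -> " + (s.length + 1) + " buckets");
        // prints "[1.0] -> 2 buckets"
    }
}
```

The resulting split list has length + 1 buckets, which is why the class documentation warns the effective bucket count can be below numBuckets.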
NaN handling: null and NaN values will be ignored during QuantileDiscretizer fitting. Fitting produces a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket; for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].
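A minimal sketch of that "keep" behavior (an illustration of the documented rule, not Spark's Bucketizer code): with interior split points s1..sk, the outer bounds -Infinity and +Infinity are implicit, giving k+1 regular buckets, and NaN values land in one extra bucket whose index equals the regular bucket count.

```java
// Illustrative sketch of handleInvalid = "keep": NaN goes to an extra
// bucket indexed one past the last regular bucket.
public class NanBucketSketch {
    // splits are interior boundaries; a value below splits[0] is bucket 0,
    // a value in [splits[i-1], splits[i]) is bucket i, and so on.
    static int bucketize(double v, double[] splits) {
        int numBuckets = splits.length + 1;      // regular buckets 0..numBuckets-1
        if (Double.isNaN(v)) return numBuckets;  // the special NaN bucket
        int b = 0;
        while (b < splits.length && v >= splits[b]) b++;
        return b;
    }

    public static void main(String[] args) {
        double[] splits = {0.5, 1.5, 2.5};            // 4 regular buckets: 0..3
        System.out.println(bucketize(0.0, splits));        // prints 0
        System.out.println(bucketize(2.0, splits));        // prints 2
        System.out.println(bucketize(Double.NaN, splits)); // prints 4
    }
}
```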
Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. The lower and upper bin bounds will be -Infinity and +Infinity, covering all real values.
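The guarantee relativeError buys can be stated concretely: for a quantile probability p over N rows, the approximate algorithm may return any value whose rank is within relativeError * N of the target rank p * N. The sketch below checks that rank tolerance directly; it is an illustration of the contract, not the approximation algorithm itself.

```java
// Illustrative check of the approxQuantile precision contract: a candidate
// answer for quantile p is acceptable when its rank in the sorted data is
// within relErr * N of the target rank p * N.
public class RelativeErrorSketch {
    static boolean withinTolerance(double[] sorted, double candidate,
                                   double p, double relErr) {
        int rank = 0;
        while (rank < sorted.length && sorted[rank] <= candidate) rank++;
        double targetRank = p * sorted.length;
        return Math.abs(rank - targetRank) <= relErr * sorted.length;
    }

    public static void main(String[] args) {
        double[] sorted = new double[1000];
        for (int i = 0; i < 1000; i++) sorted[i] = i;  // ranks 0..999
        // With relativeError = 0.01, any value ranking within +/-10 of the
        // median's rank 500 is an acceptable answer for p = 0.5.
        System.out.println(withinTolerance(sorted, 505.0, 0.5, 0.01)); // true
        System.out.println(withinTolerance(sorted, 600.0, 0.5, 0.01)); // false
    }
}
```

Setting relativeError to 0 requests exact quantiles, at the cost of more memory and computation during fitting.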
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
-
Method Summary
- QuantileDiscretizer copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params.
- Bucketizer fit(Dataset<?> dataset) - Fits a model to the input data.
- Param<String> handleInvalid() - Param for how to handle invalid entries.
- Param<String> inputCol() - Param for input column name.
- final StringArrayParam inputCols() - Param for input column names.
- static QuantileDiscretizer load(String path)
- static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)
- IntParam numBuckets() - Number of buckets (quantiles, or categories) into which data points are grouped.
- IntArrayParam numBucketsArray() - Array of number of buckets (quantiles, or categories) into which data points are grouped.
- static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
- static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
- Param<String> outputCol() - Param for output column name.
- final StringArrayParam outputCols() - Param for output column names.
- static MLReader<T> read()
- final DoubleParam relativeError() - Param for the relative target precision for the approximate quantile algorithm.
- QuantileDiscretizer setHandleInvalid(String value)
- QuantileDiscretizer setInputCol(String value)
- QuantileDiscretizer setInputCols(String[] value)
- QuantileDiscretizer setNumBuckets(int value)
- QuantileDiscretizer setNumBucketsArray(int[] value)
- QuantileDiscretizer setOutputCol(String value)
- QuantileDiscretizer setOutputCols(String[] value)
- QuantileDiscretizer setRelativeError(double value)
- StructType transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.
- String uid() - An immutable unique ID for the object and its derivatives.
Methods inherited from class org.apache.spark.ml.PipelineStage
params
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable
write
Methods inherited from interface org.apache.spark.ml.param.shared.HasHandleInvalid
getHandleInvalid
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCol
getInputCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCols
getInputCols
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol
getOutputCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCols
getOutputCols
Methods inherited from interface org.apache.spark.ml.param.shared.HasRelativeError
getRelativeError
Methods inherited from interface org.apache.spark.ml.util.Identifiable
toString
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Methods inherited from interface org.apache.spark.ml.util.MLWritable
save
Methods inherited from interface org.apache.spark.ml.param.Params
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface org.apache.spark.ml.feature.QuantileDiscretizerBase
getNumBuckets, getNumBucketsArray
Constructor Details
- QuantileDiscretizer
public QuantileDiscretizer()

Method Details
- load
public static QuantileDiscretizer load(String path)
- read
public static MLReader<T> read()
- org$apache$spark$internal$Logging$$log_
public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
- org$apache$spark$internal$Logging$$log__$eq
public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
- LogStringContext
public static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)
numBuckets
Description copied from interface: QuantileDiscretizerBase
Number of buckets (quantiles, or categories) into which data points are grouped. Must be greater than or equal to 2. See also QuantileDiscretizerBase.handleInvalid(), which can optionally create an additional bucket for NaN values. Default: 2.
- Specified by:
numBuckets in interface QuantileDiscretizerBase
- Returns:
(undocumented)
-
numBucketsArray
Description copied from interface: QuantileDiscretizerBase
Array of number of buckets (quantiles, or categories) into which data points are grouped. Each value must be greater than or equal to 2. See also QuantileDiscretizerBase.handleInvalid(), which can optionally create an additional bucket for NaN values.
- Specified by:
numBucketsArray in interface QuantileDiscretizerBase
- Returns:
(undocumented)
-
handleInvalid
Description copied from interface: QuantileDiscretizerBase
Param for how to handle invalid entries. Options are 'skip' (filter out rows with invalid values), 'error' (throw an error), or 'keep' (keep invalid values in a special additional bucket). Note that in the multiple-columns case, the invalid handling is applied to all columns: 'error' will throw an error if any invalids are found in any column, 'skip' will drop rows with invalids in any column, and so on. Default: "error"
- Specified by:
handleInvalid in interface HasHandleInvalid
- Specified by:
handleInvalid in interface QuantileDiscretizerBase
- Returns:
(undocumented)
-
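The multi-column semantics above ('skip' drops a row if any column is invalid, 'error' throws on the first invalid value, 'keep' retains every row) can be sketched as follows. This is an illustration of the documented rule, not Spark code; rows are modeled as plain double arrays with NaN standing in for an invalid entry.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of multi-column handleInvalid semantics.
public class HandleInvalidSketch {
    static List<double[]> apply(List<double[]> rows, String mode) {
        List<double[]> out = new ArrayList<>();
        for (double[] row : rows) {
            boolean invalid = false;
            for (double v : row) if (Double.isNaN(v)) invalid = true;
            if (invalid) {
                if (mode.equals("error")) throw new RuntimeException("NaN in dataset");
                if (mode.equals("skip")) continue;  // drop the whole row
            }
            out.add(row);                           // "keep" retains invalid rows
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> rows = List.of(
            new double[]{1.0, 2.0},
            new double[]{Double.NaN, 3.0});         // invalid in one column only
        System.out.println(apply(rows, "skip").size()); // prints 1
        System.out.println(apply(rows, "keep").size()); // prints 2
    }
}
```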
relativeError
Description copied from interface: HasRelativeError
Param for the relative target precision for the approximate quantile algorithm. Must be in the range [0, 1].
- Specified by:
relativeError in interface HasRelativeError
- Returns:
(undocumented)
-
outputCols
Description copied from interface: HasOutputCols
Param for output column names.
- Specified by:
outputCols in interface HasOutputCols
- Returns:
(undocumented)
-
inputCols
Description copied from interface: HasInputCols
Param for input column names.
- Specified by:
inputCols in interface HasInputCols
- Returns:
(undocumented)
-
outputCol
Description copied from interface: HasOutputCol
Param for output column name.
- Specified by:
outputCol in interface HasOutputCol
- Returns:
(undocumented)
-
inputCol
Description copied from interface: HasInputCol
Param for input column name.
- Specified by:
inputCol in interface HasInputCol
- Returns:
(undocumented)
-
uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
- Specified by:
uid in interface Identifiable
- Returns:
(undocumented)
-
- setRelativeError
- setNumBuckets
- setInputCol
- setOutputCol
- setHandleInvalid
- setNumBucketsArray
- setInputCols
- setOutputCols
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.
- Specified by:
transformSchema in class PipelineStage
- Parameters:
schema - (undocumented)
- Returns:
(undocumented)
-
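One example of the parameter-interaction checks described above is the rule, stated in the class description, that setting both inputCol and inputCols throws an Exception. The sketch below shows that kind of mutual-exclusion check in isolation; the class and method names are hypothetical, not Spark's.

```java
// Illustrative sketch of a parameter-interaction check of the kind
// performed during transformSchema: a single input column and a list of
// input columns must not both be set.
public class ExclusiveParamsCheck {
    static void checkExclusive(String inputCol, String[] inputCols) {
        if (inputCol != null && inputCols != null) {
            throw new IllegalArgumentException(
                "Only one of inputCol and inputCols may be set");
        }
    }

    public static void main(String[] args) {
        checkExclusive("features", null);  // fine: only one is set
        try {
            checkExclusive("features", new String[]{"a", "b"});
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```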
fit
Description copied from class: Estimator
Fits a model to the input data.
- Specified by:
fit in class Estimator<Bucketizer>
- Parameters:
dataset - (undocumented)
- Returns:
(undocumented)
-
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
- Specified by:
copy in interface Params
- Specified by:
copy in class Estimator<Bucketizer>
- Parameters:
extra - (undocumented)
- Returns:
(undocumented)
-