Class QuantileDiscretizer

Object
org.apache.spark.ml.PipelineStage
org.apache.spark.ml.Estimator<Bucketizer>
org.apache.spark.ml.feature.QuantileDiscretizer
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, QuantileDiscretizerBase, Params, HasHandleInvalid, HasInputCol, HasInputCols, HasOutputCol, HasOutputCols, HasRelativeError, DefaultParamsWritable, Identifiable, MLWritable, scala.Serializable

public final class QuantileDiscretizer extends Estimator<Bucketizer> implements QuantileDiscretizerBase, DefaultParamsWritable
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins can be set using the numBuckets parameter. The number of buckets actually used may be smaller than this value, for example if there are too few distinct values of the input to create enough distinct quantiles. Since 2.3.0, QuantileDiscretizer can map multiple columns at once by setting the inputCols parameter (see the multi-column sketch at the end of this page). If both the inputCol and inputCols parameters are set, an Exception is thrown. To specify the number of buckets for each column, the numBucketsArray parameter can be set, or, if the number of buckets should be the same across columns, numBuckets can be set as a convenience. Note that in the multiple-column case, the relative error is applied to all columns.
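For illustration, a minimal single-column sketch; the SparkSession setup, the toy data, and the column names "hour" and "result" are assumptions for this example, not part of the API:

  import org.apache.spark.ml.feature.QuantileDiscretizer
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("QuantileDiscretizerExample").getOrCreate()

  // Toy data: five rows with a continuous "hour" feature.
  val data = Array((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
  val df = spark.createDataFrame(data).toDF("id", "hour")

  val discretizer = new QuantileDiscretizer()
    .setInputCol("hour")
    .setOutputCol("result")
    .setNumBuckets(3)

  // fit() computes approximate quantile splits and returns a Bucketizer model;
  // transform() then adds the binned "result" column.
  val result = discretizer.fit(df).transform(df)
  result.show()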

NaN handling: null and NaN values are ignored during QuantileDiscretizer fitting, which produces a Bucketizer model for making predictions. During transformation, the Bucketizer raises an error when it finds NaN values in the dataset, but the user can instead choose to keep or remove NaN values by setting handleInvalid. If the user chooses to keep NaN values, they are handled specially and placed into their own bucket: for example, if 4 buckets are used, non-NaN data will be put into buckets[0-3], while NaNs will be counted in a special bucket[4].
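A sketch of the "keep" behavior, reusing the spark setup from the example above; the data and column names are again illustrative assumptions:

  // One NaN row; with handleInvalid = "keep" it lands in the extra bucket 4.
  val dfWithNaN = spark.createDataFrame(Seq(
    (0, 1.0), (1, 2.0), (2, Double.NaN), (3, 4.0), (4, 5.0)
  )).toDF("id", "feature")

  val keeper = new QuantileDiscretizer()
    .setInputCol("feature")
    .setOutputCol("binned")
    .setNumBuckets(4)
    .setHandleInvalid("keep") // "skip" would drop the NaN row; "error" (the default) would throw

  keeper.fit(dfWithNaN).transform(dfWithNaN).show()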

Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. The lower and upper bin bounds will be -Infinity and +Infinity, covering all real values.
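A lower relativeError yields tighter quantile estimates at higher cost; per approxQuantile's contract, a value of 0.0 computes exact quantiles. A sketch, reusing the df from the first example:

  val precise = new QuantileDiscretizer()
    .setInputCol("hour")
    .setOutputCol("result")
    .setNumBuckets(10)
    .setRelativeError(0.0) // 0.0 requests exact (but more expensive) quantiles

  val exactModel = precise.fit(df)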

  • Constructor Details

    • QuantileDiscretizer

      public QuantileDiscretizer(String uid)
    • QuantileDiscretizer

      public QuantileDiscretizer()
  • Method Details

    • load

      public static QuantileDiscretizer load(String path)
    • read

      public static MLReader<T> read()
    • org$apache$spark$internal$Logging$$log_

      public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
    • org$apache$spark$internal$Logging$$log__$eq

      public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
    • numBuckets

      public IntParam numBuckets()
      Description copied from interface: QuantileDiscretizerBase
      Number of buckets (quantiles, or categories) into which data points are grouped. Must be greater than or equal to 2.

      See also QuantileDiscretizerBase.handleInvalid(), which can optionally create an additional bucket for NaN values.

      default: 2

      Specified by:
      numBuckets in interface QuantileDiscretizerBase
      Returns:
      (undocumented)
    • numBucketsArray

      public IntArrayParam numBucketsArray()
      Description copied from interface: QuantileDiscretizerBase
      Array of number of buckets (quantiles, or categories) into which data points are grouped. Each value must be greater than or equal to 2.

      See also QuantileDiscretizerBase.handleInvalid(), which can optionally create an additional bucket for NaN values.

      Specified by:
      numBucketsArray in interface QuantileDiscretizerBase
      Returns:
      (undocumented)
    • handleInvalid

      public Param<String> handleInvalid()
      Description copied from interface: QuantileDiscretizerBase
      Param for how to handle invalid entries. Options are 'skip' (filter out rows with invalid values), 'error' (throw an error), or 'keep' (keep invalid values in a special additional bucket). Note that in the multiple-column case, the invalid handling is applied to all columns: with 'error', an error is thrown if invalid values are found in any column; with 'skip', rows with invalid values in any column are skipped; and so on. Default: "error"
      Specified by:
      handleInvalid in interface HasHandleInvalid
      Specified by:
      handleInvalid in interface QuantileDiscretizerBase
      Returns:
      (undocumented)
    • relativeError

      public final DoubleParam relativeError()
      Description copied from interface: HasRelativeError
      Param for the relative target precision for the approximate quantile algorithm. Must be in the range [0, 1].
      Specified by:
      relativeError in interface HasRelativeError
      Returns:
      (undocumented)
    • outputCols

      public final StringArrayParam outputCols()
      Description copied from interface: HasOutputCols
      Param for output column names.
      Specified by:
      outputCols in interface HasOutputCols
      Returns:
      (undocumented)
    • inputCols

      public final StringArrayParam inputCols()
      Description copied from interface: HasInputCols
      Param for input column names.
      Specified by:
      inputCols in interface HasInputCols
      Returns:
      (undocumented)
    • outputCol

      public final Param<String> outputCol()
      Description copied from interface: HasOutputCol
      Param for output column name.
      Specified by:
      outputCol in interface HasOutputCol
      Returns:
      (undocumented)
    • inputCol

      public final Param<String> inputCol()
      Description copied from interface: HasInputCol
      Param for input column name.
      Specified by:
      inputCol in interface HasInputCol
      Returns:
      (undocumented)
    • uid

      public String uid()
      Description copied from interface: Identifiable
      An immutable unique ID for the object and its derivatives.
      Specified by:
      uid in interface Identifiable
      Returns:
      (undocumented)
    • setRelativeError

      public QuantileDiscretizer setRelativeError(double value)
    • setNumBuckets

      public QuantileDiscretizer setNumBuckets(int value)
    • setInputCol

      public QuantileDiscretizer setInputCol(String value)
    • setOutputCol

      public QuantileDiscretizer setOutputCol(String value)
    • setHandleInvalid

      public QuantileDiscretizer setHandleInvalid(String value)
    • setNumBucketsArray

      public QuantileDiscretizer setNumBucketsArray(int[] value)
    • setInputCols

      public QuantileDiscretizer setInputCols(String[] value)
    • setOutputCols

      public QuantileDiscretizer setOutputCols(String[] value)
    • transformSchema

      public StructType transformSchema(StructType schema)
      Description copied from class: PipelineStage
      Check transform validity and derive the output schema from the input schema.

      We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks that do not depend on other parameters are handled by Param.validate().

      A typical implementation should first verify schema changes and parameter validity, including complex parameter interaction checks.

      Specified by:
      transformSchema in class PipelineStage
      Parameters:
      schema - (undocumented)
      Returns:
      (undocumented)
    • fit

      public Bucketizer fit(Dataset<?> dataset)
      Description copied from class: Estimator
      Fits a model to the input data.
      Specified by:
      fit in class Estimator<Bucketizer>
      Parameters:
      dataset - (undocumented)
      Returns:
      (undocumented)
    • copy

      public QuantileDiscretizer copy(ParamMap extra)
      Description copied from interface: Params
      Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
      Specified by:
      copy in interface Params
      Specified by:
      copy in class Estimator<Bucketizer>
      Parameters:
      extra - (undocumented)
      Returns:
      (undocumented)
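As noted in the class description, QuantileDiscretizer can bin multiple columns at once (since 2.3.0). A hedged multi-column sketch; the column names and data are illustrative assumptions:

  // Two continuous columns binned in one pass, with per-column bucket counts.
  val multiDf = spark.createDataFrame(Seq(
    (1.0, 10.0), (2.0, 20.0), (3.0, 30.0), (4.0, 40.0), (5.0, 50.0)
  )).toDF("f1", "f2")

  val multi = new QuantileDiscretizer()
    .setInputCols(Array("f1", "f2"))
    .setOutputCols(Array("f1_binned", "f2_binned"))
    .setNumBucketsArray(Array(2, 3)) // 2 buckets for f1, 3 for f2

  // fit() returns a single Bucketizer that bins both columns at once.
  multi.fit(multiDf).transform(multiDf).show()

Setting both inputCol and inputCols on the same instance would throw, as described in the class overview.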