Class Bucketizer

All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasHandleInvalid, HasInputCol, HasInputCols, HasOutputCol, HasOutputCols, DefaultParamsWritable, Identifiable, MLWritable, scala.Serializable

Bucketizer maps a column of continuous features to a column of feature buckets.

Since 2.3.0, Bucketizer can map multiple columns at once by setting the inputCols parameter. Note that setting both the inputCol and inputCols parameters causes an Exception to be thrown. The splits parameter applies only to single-column usage, and splitsArray applies to multiple columns.

  • Constructor Details

    • Bucketizer

      public Bucketizer(String uid)
    • Bucketizer

      public Bucketizer()
  • Method Details

    • load

      public static Bucketizer load(String path)
    • read

      public static MLReader<T> read()
    • outputCols

      public final StringArrayParam outputCols()
      Description copied from interface: HasOutputCols
      Param for output column names.
      Specified by:
      outputCols in interface HasOutputCols
    • inputCols

      public final StringArrayParam inputCols()
      Description copied from interface: HasInputCols
      Param for input column names.
      Specified by:
      inputCols in interface HasInputCols
    • outputCol

      public final Param<String> outputCol()
      Description copied from interface: HasOutputCol
      Param for output column name.
      Specified by:
      outputCol in interface HasOutputCol
    • inputCol

      public final Param<String> inputCol()
      Description copied from interface: HasInputCol
      Param for input column name.
      Specified by:
      inputCol in interface HasInputCol
    • uid

      public String uid()
      Description copied from interface: Identifiable
      An immutable unique ID for the object and its derivatives.
      Specified by:
      uid in interface Identifiable
    • splits

      public DoubleArrayParam splits()
      Parameter for mapping continuous features into buckets. With n+1 splits, there are n buckets. A bucket defined by adjacent splits x and y holds values in the range [x, y), except the last bucket, which also includes y. Splits must contain at least three values and be strictly increasing. The values -inf and inf must be provided explicitly to cover all Double values; otherwise, values outside the specified splits are treated as errors.

      See also handleInvalid(), which can optionally create an additional bucket for NaN values.
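
      The bucket semantics above can be sketched in plain Java. This is an illustration of the documented behavior only, not Spark's implementation; the class and method names are hypothetical:

      ```java
      // Illustrative sketch of the documented splits semantics: with n+1 splits
      // there are n buckets; bucket i covers [splits[i], splits[i+1]), and the
      // last bucket also includes its upper split. Not Spark's implementation.
      public class SplitsSketch {

          // Returns the bucket index for value, or -1 if value lies outside the splits.
          public static int bucketIndex(double[] splits, double value) {
              for (int i = 0; i < splits.length - 1; i++) {
                  boolean last = (i == splits.length - 2);
                  if (value >= splits[i]
                          && (value < splits[i + 1] || (last && value <= splits[i + 1]))) {
                      return i;
                  }
              }
              return -1; // outside the provided splits
          }

          public static void main(String[] args) {
              double[] splits = {Double.NEGATIVE_INFINITY, -0.5, 0.0, 0.5, Double.POSITIVE_INFINITY};
              System.out.println(bucketIndex(splits, -0.7)); // 0
              System.out.println(bucketIndex(splits, 0.0));  // 2, since buckets are half-open [x, y)
              System.out.println(bucketIndex(splits, 99.0)); // 3
          }
      }
      ```

      Providing -inf and inf as the outermost splits, as in the example, guarantees every non-NaN Double falls into some bucket.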

    • getSplits

      public double[] getSplits()
    • setSplits

      public Bucketizer setSplits(double[] value)
    • setInputCol

      public Bucketizer setInputCol(String value)
    • setOutputCol

      public Bucketizer setOutputCol(String value)
    • handleInvalid

      public Param<String> handleInvalid()
      Param for how to handle invalid entries containing NaN values. Values outside the splits are always treated as errors. Options are 'skip' (filter out rows with invalid values), 'error' (throw an error), or 'keep' (keep invalid values in a special additional bucket). In the multiple-column case, invalid handling applies across all columns: 'error' throws if any column contains an invalid value, 'skip' drops a row if any of its columns contains an invalid value, and so on. Default: "error"
      Specified by:
      handleInvalid in interface HasHandleInvalid
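
      The three modes can be sketched as follows. This is a hedged illustration of the documented behavior, and the `bucketize` helper is hypothetical, not part of the Spark API:

      ```java
      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical illustration of the documented handleInvalid modes; not Spark code.
      // Only NaN counts as "invalid" here -- values outside the splits always raise an
      // error, exactly as the parameter documentation states.
      public class HandleInvalidSketch {

          public static List<Integer> bucketize(double[] values, double[] splits, String handleInvalid) {
              int nanBucket = splits.length - 1; // the special additional bucket used by 'keep'
              List<Integer> out = new ArrayList<>();
              for (double v : values) {
                  if (Double.isNaN(v)) {
                      switch (handleInvalid) {
                          case "keep": out.add(nanBucket); break;  // special extra bucket
                          case "skip": break;                      // drop the row
                          default: throw new IllegalArgumentException("NaN seen with handleInvalid=error");
                      }
                  } else {
                      int idx = bucketIndex(splits, v);
                      if (idx < 0) throw new IllegalArgumentException("Value outside splits: " + v);
                      out.add(idx);
                  }
              }
              return out;
          }

          // Same semantics as splits(): [x, y) buckets, last bucket also includes y.
          static int bucketIndex(double[] splits, double v) {
              for (int i = 0; i < splits.length - 1; i++) {
                  boolean last = (i == splits.length - 2);
                  if (v >= splits[i] && (v < splits[i + 1] || (last && v <= splits[i + 1]))) return i;
              }
              return -1;
          }

          public static void main(String[] args) {
              double[] splits = {Double.NEGATIVE_INFINITY, 0.0, Double.POSITIVE_INFINITY};
              double[] data = {-1.0, 2.0, Double.NaN};
              System.out.println(bucketize(data, splits, "keep")); // [0, 1, 2]
              System.out.println(bucketize(data, splits, "skip")); // [0, 1]
          }
      }
      ```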
    • setHandleInvalid

      public Bucketizer setHandleInvalid(String value)
    • splitsArray

      public DoubleArrayArrayParam splitsArray()
      Parameter for specifying multiple splits parameters. Each element in this array supplies the splits used to map the continuous features of the corresponding input column into buckets.
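
      A brief sketch of how per-column splits line up with multiple input columns. The helper below is hypothetical and only illustrates the documented element-to-column correspondence; it is not Spark's API:

      ```java
      // Hypothetical illustration: splitsArray[c] holds the splits for the c-th input
      // column, so each column of a row is bucketized with its own splits. Not Spark code.
      public class SplitsArraySketch {

          public static int[] bucketizeRow(double[] row, double[][] splitsArray) {
              int[] out = new int[row.length];
              for (int c = 0; c < row.length; c++) {
                  out[c] = bucketIndex(splitsArray[c], row[c]);
              }
              return out;
          }

          // Same per-column semantics as the single-column splits parameter.
          static int bucketIndex(double[] splits, double v) {
              for (int i = 0; i < splits.length - 1; i++) {
                  boolean last = (i == splits.length - 2);
                  if (v >= splits[i] && (v < splits[i + 1] || (last && v <= splits[i + 1]))) return i;
              }
              return -1;
          }

          public static void main(String[] args) {
              double[][] splitsArray = {
                  {Double.NEGATIVE_INFINITY, 0.0, Double.POSITIVE_INFINITY}, // column 0
                  {0.0, 10.0, 20.0, 30.0}                                    // column 1
              };
              int[] buckets = bucketizeRow(new double[]{-3.0, 25.0}, splitsArray);
              System.out.println(java.util.Arrays.toString(buckets)); // [0, 2]
          }
      }
      ```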

    • getSplitsArray

      public double[][] getSplitsArray()
    • setSplitsArray

      public Bucketizer setSplitsArray(double[][] value)
    • setInputCols

      public Bucketizer setInputCols(String[] value)
    • setOutputCols

      public Bucketizer setOutputCols(String[] value)
    • transform

      public Dataset<Row> transform(Dataset<?> dataset)
      Description copied from class: Transformer
      Transforms the input dataset.
      Specified by:
      transform in class Transformer
      dataset - (undocumented)
    • transformSchema

      public StructType transformSchema(StructType schema)
      Description copied from class: PipelineStage
      Check transform validity and derive the output schema from the input schema.

      We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

      A typical implementation should first verify schema compatibility and parameter validity, including checks on complex parameter interactions.

      Specified by:
      transformSchema in class PipelineStage
      schema - (undocumented)
    • copy

      public Bucketizer copy(ParamMap extra)
      Description copied from interface: Params
      Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
      Specified by:
      copy in interface Params
      Specified by:
      copy in class Model<Bucketizer>
      extra - (undocumented)
    • toString

      public String toString()
      Specified by:
      toString in interface Identifiable
      Overrides:
      toString in class Object