public class FeatureHasher extends Transformer implements HasInputCols, HasOutputCol, HasNumFeatures, DefaultParamsWritable
The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:

- Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns in categoricalCols.
- String columns: For categorical features, the hash value of the string "column_name=value" is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are "one-hot" encoded (similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as "column_name=true" or "column_name=false", with an indicator value of 1.0.

Null (missing) values are ignored (implicitly zero in the resulting feature vector).

The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.
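The term-to-index mapping described above can be sketched in plain Scala. This is a simplification using the standard library's scala.util.hashing.MurmurHash3; Spark's FeatureHasher uses its own seeded MurmurHash 3 implementation, so the indices produced here will not match Spark's actual output:

```scala
import scala.util.hashing.MurmurHash3

val numFeatures = 262144 // the default, 2^18 (a power of two)

// Map a hashed term to a vector index with a non-negative modulo,
// since the raw 32-bit hash can be negative.
def indexOf(term: String): Int = {
  val h = MurmurHash3.stringHash(term)
  ((h % numFeatures) + numFeatures) % numFeatures
}

// A numeric column hashes its column name; the feature value is placed
// at that index. A categorical column hashes "column_name=value" and
// places an indicator 1.0 at the resulting index.
val realIdx = indexOf("real")
val fooIdx  = indexOf("string=foo")
```

With a power-of-two numFeatures, the modulo uses the hash bits evenly; with other sizes the low buckets are slightly favored.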
val df = Seq(
(2.0, true, "1", "foo"),
(3.0, false, "2", "bar")
).toDF("real", "bool", "stringNum", "string")
val hasher = new FeatureHasher()
.setInputCols("real", "bool", "stringNum", "string")
.setOutputCol("features")
hasher.transform(df).show(false)
+----+-----+---------+------+------------------------------------------------------+
|real|bool |stringNum|string|features |
+----+-----+---------+------+------------------------------------------------------+
|2.0 |true |1 |foo |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
|3.0 |false|2 |bar |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
+----+-----+---------+------+------------------------------------------------------+
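To treat a numeric column as categorical, name it in categoricalCols. A hedged sketch reusing the df from the example above (requires an active SparkSession; the resulting indices are hash-dependent and not shown because they were not verified):

```scala
import org.apache.spark.ml.feature.FeatureHasher

// Declare the numeric "real" column categorical: each row now hashes the
// term "real=2.0" (or "real=3.0") with an indicator value of 1.0, instead
// of hashing the column name "real" and storing the raw numeric value.
val catHasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setCategoricalCols(Array("real"))
  .setOutputCol("features")

catHasher.transform(df).show(false)
```

Note that a column listed in categoricalCols must also appear in inputCols to take effect.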
| Constructor and Description |
|---|
| FeatureHasher() |
| FeatureHasher(String uid) |
| Modifier and Type | Method and Description |
|---|---|
| StringArrayParam | categoricalCols() Numeric columns to treat as categorical features. |
| FeatureHasher | copy(ParamMap extra) Creates a copy of this instance with the same UID and some extra params. |
| String[] | getCategoricalCols() |
| StringArrayParam | inputCols() Param for input column names. |
| static FeatureHasher | load(String path) |
| IntParam | numFeatures() Param for number of features. |
| Param<String> | outputCol() Param for output column name. |
| static MLReader<T> | read() |
| FeatureHasher | setCategoricalCols(String[] value) |
| FeatureHasher | setInputCols(scala.collection.Seq<String> values) |
| FeatureHasher | setInputCols(String[] value) |
| FeatureHasher | setNumFeatures(int value) |
| FeatureHasher | setOutputCol(String value) |
| String | toString() |
| Dataset<Row> | transform(Dataset<?> dataset) Transforms the input dataset. |
| StructType | transformSchema(StructType schema) Check transform validity and derive the output schema from the input schema. |
| String | uid() An immutable unique ID for the object and its derivatives. |
Methods inherited from class org.apache.spark.ml.Transformer: transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage: params

Methods inherited from interface HasInputCols: getInputCols

Methods inherited from interface HasOutputCol: getOutputCol

Methods inherited from interface HasNumFeatures: getNumFeatures

Methods inherited from interface org.apache.spark.ml.param.Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface DefaultParamsWritable: write

Methods inherited from interface org.apache.spark.ml.util.MLWritable: save

Methods inherited from interface org.apache.spark.internal.Logging: $init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public FeatureHasher(String uid)
public FeatureHasher()
public static FeatureHasher load(String path)
public static MLReader<T> read()
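Since FeatureHasher implements DefaultParamsWritable, a configured instance can be persisted and restored with load. A minimal sketch, assuming an active SparkSession and a hypothetical output path:

```scala
import org.apache.spark.ml.feature.FeatureHasher

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")

// Persist the transformer's uid and params to a (hypothetical) path...
hasher.write.overwrite().save("/tmp/feature-hasher")

// ...and restore an equivalent instance later, e.g. in another job.
val restored = FeatureHasher.load("/tmp/feature-hasher")
assert(restored.uid == hasher.uid)
```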
public final IntParam numFeatures()
Param for number of features.
Specified by: numFeatures in interface HasNumFeatures
public final Param<String> outputCol()
Param for output column name.
Specified by: outputCol in interface HasOutputCol
public final StringArrayParam inputCols()
Param for input column names.
Specified by: inputCols in interface HasInputCols
public String uid()
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
public StringArrayParam categoricalCols()
Numeric columns to treat as categorical features. The relevant columns must also be set in inputCols; categorical columns not set in inputCols will be listed in a warning.

public FeatureHasher setNumFeatures(int value)
public FeatureHasher setInputCols(scala.collection.Seq<String> values)
public FeatureHasher setInputCols(String[] value)
public FeatureHasher setOutputCol(String value)
public String[] getCategoricalCols()
public FeatureHasher setCategoricalCols(String[] value)
public Dataset<Row> transform(Dataset<?> dataset)
Transforms the input dataset.
Specified by: transform in class Transformer
Parameters: dataset - (undocumented)

public FeatureHasher copy(ParamMap extra)
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Transformer
Parameters: extra - (undocumented)

public StructType transformSchema(StructType schema)
Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.

Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)

public String toString()
Specified by: toString in interface Identifiable
Overrides: toString in class Object
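transformSchema can validate a configuration against a DataFrame's schema without running the full transform. A brief sketch, assuming the hasher and df from the usage example above:

```scala
// Derive the output schema up front; this raises an exception if the
// configured input columns are missing or have unsupported types.
val outSchema = hasher.transformSchema(df.schema)

// The result keeps the original fields and adds the "features" vector column.
assert(outSchema.fieldNames.contains("features"))
```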