Class FeatureHasher
- All Implemented Interfaces:
  Serializable, org.apache.spark.internal.Logging, Params, HasInputCols, HasNumFeatures,
  HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable
The FeatureHasher transformer operates on multiple columns. Each column may contain either
numeric or categorical features. Behavior and handling of column data types is as follows:
- Numeric columns: For numeric features, the hash value of the column name is used to map the
  feature value to its index in the feature vector. By default, numeric features are not
  treated as categorical (even when they are integers). To treat them as categorical, specify
  the relevant columns in categoricalCols.
- String columns: For categorical features, the hash value of the string "column_name=value"
  is used to map to the vector index, with an indicator value of 1.0. Thus, categorical
  features are "one-hot" encoded (similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is,
  boolean features are represented as "column_name=true" or "column_name=false", with an
  indicator value of 1.0.
Null (missing) values are ignored (implicitly zero in the resulting feature vector).
The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo
on the hashed value is used to determine the vector index, it is advisable to use a power of
two as the numFeatures parameter; otherwise the features will not be mapped evenly to the
vector indices.
import org.apache.spark.ml.feature.FeatureHasher
import spark.implicits._ // assumes an active SparkSession named `spark` for toDF

val df = Seq(
  (2.0, true, "1", "foo"),
  (3.0, false, "2", "bar")
).toDF("real", "bool", "stringNum", "string")

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")

hasher.transform(df).show(false)
+----+-----+---------+------+------------------------------------------------------+
|real|bool |stringNum|string|features |
+----+-----+---------+------+------------------------------------------------------+
|2.0 |true |1 |foo |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
|3.0 |false|2 |bar |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
+----+-----+---------+------+------------------------------------------------------+
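The index computation described above can be sketched in a few lines. This is an illustration
only, not Spark's internal code: it uses scala.util.hashing.MurmurHash3 as a stand-in for
Spark's own seeded MurmurHash 3, so the indices it produces will differ from those in the
output above.

import scala.util.hashing.MurmurHash3

// Hash the feature term and reduce it to a vector index with a simple modulo,
// keeping the index non-negative. For a categorical feature the term is
// "column_name=value"; for a numeric feature it is the column name itself.
def featureIndex(term: String, numFeatures: Int): Int = {
  val raw = MurmurHash3.stringHash(term) % numFeatures
  if (raw < 0) raw + numFeatures else raw
}

featureIndex("string=foo", 262144) // one-hot slot for string=foo
featureIndex("real", 262144)       // slot that accumulates the value of "real"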
- See Also:
  HashingTF
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
FeatureHasher()
-
Method Summary
Modifier and Type | Method | Description
StringArrayParam | categoricalCols() | Numeric columns to treat as categorical features.
FeatureHasher | copy(ParamMap extra) | Creates a copy of this instance with the same UID and some extra params.
String[] | getCategoricalCols() |
final StringArrayParam | inputCols() | Param for input column names.
static FeatureHasher | load(String path) |
final IntParam | numFeatures() | Param for Number of features.
final Param<String> | outputCol() | Param for output column name.
static MLReader<T> | read() |
FeatureHasher | setCategoricalCols(String[] value) |
FeatureHasher | setInputCols(String... values) |
FeatureHasher | setInputCols(scala.collection.immutable.Seq<String> values) |
FeatureHasher | setNumFeatures(int value) |
FeatureHasher | setOutputCol(String value) |
String | toString() |
Dataset<Row> | transform(Dataset<?> dataset) | Transforms the input dataset.
StructType | transformSchema(StructType schema) | Check transform validity and derive the output schema from the input schema.
String | uid() | An immutable unique ID for the object and its derivatives.
Methods inherited from class org.apache.spark.ml.Transformer
transform, transform, transform
Methods inherited from class org.apache.spark.ml.PipelineStage
params
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable
write
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCols
getInputCols
Methods inherited from interface org.apache.spark.ml.param.shared.HasNumFeatures
getNumFeatures
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol
getOutputCol
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Methods inherited from interface org.apache.spark.ml.util.MLWritable
save
Methods inherited from interface org.apache.spark.ml.param.Params
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
-
Constructor Details
-
FeatureHasher
public FeatureHasher()
-
Method Details
-
load
public static FeatureHasher load(String path)
-
read
public static MLReader<T> read()
-
numFeatures
public final IntParam numFeatures()
Description copied from interface: HasNumFeatures
Param for Number of features. Should be greater than 0.
- Specified by:
  numFeatures in interface HasNumFeatures
- Returns:
  (undocumented)
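A minimal sketch of setting this param; per the class description, a power of two is advisable
because a simple modulo on the hash determines the vector index. The default is 2^18 = 262144,
which matches the vector size in the example output above.

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")
  .setNumFeatures(1 << 18) // 262144; any power of two keeps the mapping even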
-
outputCol
public final Param<String> outputCol()
Description copied from interface: HasOutputCol
Param for output column name.
- Specified by:
  outputCol in interface HasOutputCol
- Returns:
  (undocumented)
-
inputCols
public final StringArrayParam inputCols()
Description copied from interface: HasInputCols
Param for input column names.
- Specified by:
  inputCols in interface HasInputCols
- Returns:
  (undocumented)
-
uid
public String uid()
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
- Specified by:
  uid in interface Identifiable
- Returns:
  (undocumented)
-
categoricalCols
public StringArrayParam categoricalCols()
Numeric columns to treat as categorical features. By default only string and boolean columns
are treated as categorical, so this param can be used to explicitly specify the numeric columns
to treat as categorical. Note that the relevant columns must also be set in inputCols;
categorical columns not set in inputCols will be listed in a warning.
- Returns:
  (undocumented)
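A minimal usage sketch, reusing the example DataFrame from the class description: marking the
numeric "real" column as categorical makes its values hash as "real=2.0" / "real=3.0" rather
than accumulating at the hash slot of the column name.

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setCategoricalCols(Array("real")) // must also appear in inputCols
  .setOutputCol("features")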
-
setNumFeatures
public FeatureHasher setNumFeatures(int value)
-
setInputCols
public FeatureHasher setInputCols(String... values)
-
setInputCols
public FeatureHasher setInputCols(scala.collection.immutable.Seq<String> values)
-
setOutputCol
public FeatureHasher setOutputCol(String value)
-
getCategoricalCols
public String[] getCategoricalCols()
-
setCategoricalCols
public FeatureHasher setCategoricalCols(String[] value)
-
transform
public Dataset<Row> transform(Dataset<?> dataset)
Description copied from class: Transformer
Transforms the input dataset.
- Specified by:
  transform in class Transformer
- Parameters:
  dataset - (undocumented)
- Returns:
  (undocumented)
-
copy
public FeatureHasher copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should
implement this method and set the return type properly. See defaultCopy().
- Specified by:
  copy in interface Params
- Specified by:
  copy in class Transformer
- Parameters:
  extra - (undocumented)
- Returns:
  (undocumented)
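A minimal sketch, assuming the hasher instance from the class example above; the copy keeps the
original UID while applying the extra param overrides.

import org.apache.spark.ml.param.ParamMap

val copied = hasher.copy(ParamMap(hasher.numFeatures -> 1024))
assert(copied.uid == hasher.uid) // same UID, overridden numFeatures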
-
transformSchema
public StructType transformSchema(StructType schema)
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and raise an
exception if any parameter value is invalid. Parameter value checks which do not depend on
other parameters are handled by Param.validate().

Typical implementation should first conduct verification on schema change and parameter
validity, including complex parameter interaction checks.
- Specified by:
  transformSchema in class PipelineStage
- Parameters:
  schema - (undocumented)
- Returns:
  (undocumented)
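For example, transformSchema can validate the configuration and preview the output schema
without running the transform (a sketch assuming the hasher from the class example above):

import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("real", DoubleType), StructField("bool", BooleanType),
  StructField("stringNum", StringType), StructField("string", StringType)))
hasher.transformSchema(schema) // input fields plus an appended "features" vector column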
-
toString
public String toString()
- Specified by:
  toString in interface Identifiable
- Overrides:
  toString in class Object
-