Class VectorIndexer
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, VectorIndexerParams, Params, HasHandleInvalid, HasInputCol, HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable
Class for indexing categorical feature columns in a dataset of Vector.
This has 2 usage modes:
- Automatically identify categorical features (default behavior; see the sketch after this description)
  - This helps process a dataset of unknown vectors into a dataset with some continuous features and some categorical features. The choice between continuous and categorical is based upon a maxCategories parameter.
  - Set maxCategories to the maximum number of categories any categorical feature should have.
  - E.g.: Feature 0 has unique values {-1.0, 0.0}, and feature 1 values {1.0, 3.0, 5.0}. If maxCategories = 2, then feature 0 will be declared categorical and use indices {0, 1}, and feature 1 will be declared continuous.
- Index all features, if all features are categorical
  - If maxCategories is set to be very large, then this will build an index of unique values for all features.
  - Warning: This can cause problems if features are continuous since this will collect ALL unique values to the driver.
  - E.g.: Feature 0 has unique values {-1.0, 0.0}, and feature 1 values {1.0, 3.0, 5.0}. If maxCategories is greater than or equal to 3, then both features will be declared categorical.
This returns a model which can transform categorical features to use 0-based indices.
Index stability:
- This is not guaranteed to choose the same category index across multiple runs.
- If a categorical feature includes value 0, then this is guaranteed to map value 0 to index 0. This maintains vector sparsity.
- More stability may be added in the future.
TODO: Future extensions:
- Preserve metadata in transform; if a feature's metadata is already present, do not recompute.
- Specify certain features to not index, either via a parameter or via existing metadata.
- Add warning if a categorical feature has only 1 category.
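For orientation, here is a minimal end-to-end sketch of the default mode (automatic identification of categorical features). The application name, input path, and column names are assumptions for illustration; the configure/fit/transform pattern itself is the Estimator contract documented on this page.

```java
import org.apache.spark.ml.feature.VectorIndexer;
import org.apache.spark.ml.feature.VectorIndexerModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class VectorIndexerSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("VectorIndexerSketch")   // assumed app name
        .getOrCreate();

    // Assumed input: a libsvm file whose "features" column holds Vectors.
    Dataset<Row> data = spark.read().format("libsvm")
        .load("data/sample_libsvm_data.txt");

    VectorIndexer indexer = new VectorIndexer()
        .setInputCol("features")
        .setOutputCol("indexedFeatures")
        .setMaxCategories(10);  // features with > 10 distinct values stay continuous

    // fit() scans the data and decides, per feature, categorical vs. continuous.
    VectorIndexerModel model = indexer.fit(data);

    // transform() rewrites categorical feature values as 0-based indices.
    Dataset<Row> indexed = model.transform(data);
    indexed.show(5, false);

    spark.stop();
  }
}
```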
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Constructor Summary
Constructors:
VectorIndexer()
Method Summary
Modifier and Type / Method / Description:
- VectorIndexer copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params.
- VectorIndexerModel fit(Dataset<?> dataset) - Fits a model to the input data.
- Param<String> handleInvalid() - Param for how to handle invalid data (unseen labels or NULL values).
- Param<String> inputCol() - Param for input column name.
- static VectorIndexer load(String path)
- IntParam maxCategories() - Threshold for the number of values a categorical feature can take.
- Param<String> outputCol() - Param for output column name.
- static MLReader<T> read()
- VectorIndexer setHandleInvalid(String value)
- VectorIndexer setInputCol(String value)
- VectorIndexer setMaxCategories(int value)
- VectorIndexer setOutputCol(String value)
- StructType transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.
- String uid() - An immutable unique ID for the object and its derivatives.

Methods inherited from class org.apache.spark.ml.PipelineStage:
params

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable:
write

Methods inherited from interface org.apache.spark.ml.param.shared.HasHandleInvalid:
getHandleInvalid

Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCol:
getInputCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol:
getOutputCol

Methods inherited from interface org.apache.spark.ml.util.Identifiable:
toString

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.feature.VectorIndexerParams:
getMaxCategories
Constructor Details
VectorIndexer
public VectorIndexer()
Method Details
load
public static VectorIndexer load(String path)

read
public static MLReader<VectorIndexer> read()
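As a brief illustration, an unfitted VectorIndexer can be round-tripped through these persistence methods; this sketch continues from the class-level example above, and the save path is an assumption.

```java
// Continuing with an `indexer` configured as in the class-level sketch.
// The path "/tmp/vector-indexer" is hypothetical.
indexer.write().overwrite().save("/tmp/vector-indexer");
VectorIndexer restored = VectorIndexer.load("/tmp/vector-indexer");
```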
handleInvalid
Description copied from interface: VectorIndexerParams
Param for how to handle invalid data (unseen labels or NULL values). Note: this param only applies to categorical features, not continuous ones. Options are:
- 'skip': filter out rows with invalid data.
- 'error': throw an error.
- 'keep': put invalid data in a special additional bucket, at index of the number of categories of the feature.
Default value: "error"
- Specified by: handleInvalid in interface HasHandleInvalid
- Specified by: handleInvalid in interface VectorIndexerParams
- Returns: (undocumented)
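For instance, a sketch of opting into the 'keep' behavior (column names assumed as in the class-level example):

```java
// 'keep' assigns unseen categorical values to one extra bucket per feature,
// at index == number of categories, instead of raising an error (the default).
VectorIndexer tolerant = new VectorIndexer()
    .setInputCol("features")
    .setOutputCol("indexedFeatures")
    .setHandleInvalid("keep");
```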
maxCategories
Description copied from interface: VectorIndexerParams
Threshold for the number of values a categorical feature can take. If a feature is found to have > maxCategories values, then it is declared continuous. Must be greater than or equal to 2. (default = 20)
- Specified by: maxCategories in interface VectorIndexerParams
- Returns: (undocumented)
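The feature 0 / feature 1 example from the class description can be reproduced directly. This sketch builds a tiny in-memory dataset (the `spark` session and all column names are assumed) and fits with maxCategories = 2, so only feature 0 is treated as categorical:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.spark.ml.feature.VectorIndexer;
import org.apache.spark.ml.feature.VectorIndexerModel;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Feature 0 takes values {-1.0, 0.0}; feature 1 takes values {1.0, 3.0, 5.0}.
List<Row> rows = Arrays.asList(
    RowFactory.create(Vectors.dense(-1.0, 1.0)),
    RowFactory.create(Vectors.dense(0.0, 3.0)),
    RowFactory.create(Vectors.dense(0.0, 5.0)));
StructType schema = new StructType(new StructField[]{
    new StructField("features", new VectorUDT(), false, Metadata.empty())});
Dataset<Row> tiny = spark.createDataFrame(rows, schema);  // `spark` as in the class-level sketch

VectorIndexerModel model = new VectorIndexer()
    .setInputCol("features")
    .setOutputCol("indexed")
    .setMaxCategories(2)
    .fit(tiny);

// Feature 0 (2 distinct values <= maxCategories) is indexed as categorical;
// feature 1 (3 distinct values > maxCategories) is left continuous.
System.out.println(model.categoryMaps());
```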
outputCol
Description copied from interface: HasOutputCol
Param for output column name.
- Specified by: outputCol in interface HasOutputCol
- Returns: (undocumented)
inputCol
Description copied from interface: HasInputCol
Param for input column name.
- Specified by: inputCol in interface HasInputCol
- Returns: (undocumented)
uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
- Specified by: uid in interface Identifiable
- Returns: (undocumented)
setMaxCategories
public VectorIndexer setMaxCategories(int value)

setInputCol
public VectorIndexer setInputCol(String value)

setOutputCol
public VectorIndexer setOutputCol(String value)

setHandleInvalid
public VectorIndexer setHandleInvalid(String value)
fit
Description copied from class: Estimator
Fits a model to the input data.
- Specified by: fit in class Estimator<VectorIndexerModel>
- Parameters: dataset - (undocumented)
- Returns: (undocumented)
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
- Specified by: transformSchema in class PipelineStage
- Parameters: schema - (undocumented)
- Returns: (undocumented)
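A sketch of the typical use: validating parameters against an input schema before running a job, with `indexer` and `data` assumed from the class-level example.

```java
// Derives the output schema without touching the data; throws if, for example,
// the configured input column is missing or is not a vector column.
StructType outputSchema = indexer.transformSchema(data.schema());
System.out.println(outputSchema.treeString());
```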
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
- Specified by: copy in interface Params
- Specified by: copy in class Estimator<VectorIndexerModel>
- Parameters: extra - (undocumented)
- Returns: (undocumented)
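A sketch of copying the estimator while overriding one param via a ParamMap, again assuming the `indexer` from the class-level example:

```java
import org.apache.spark.ml.param.ParamMap;

// The copy shares the UID of the original but applies the extra param.
ParamMap extra = new ParamMap().put(indexer.maxCategories().w(5));
VectorIndexer stricter = indexer.copy(extra);
```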