pyspark.ml package

Module Context

class pyspark.ml.Param(parent, name, doc, defaultValue=None)

A param with self-contained documentation and, optionally, a default value.

class pyspark.ml.Params

Components that take parameters. This also provides an internal param map to store parameter values attached to the instance.

params

Returns all params. The default implementation uses dir() to get all attributes of type Param.
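
For example, the params of an instance can be listed by name. A minimal sketch, using the Tokenizer transformer documented below:

>>> from pyspark.ml.feature import Tokenizer
>>> tokenizer = Tokenizer(inputCol="text", outputCol="words")
>>> [p.name for p in tokenizer.params]
['inputCol', 'outputCol']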

class pyspark.ml.Transformer

Abstract class for transformers that transform one dataset into another.

params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

transform(dataset, params={})

Transforms the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

transformed dataset
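
Concrete transformers such as Tokenizer and HashingTF (below) implement transform(). A custom transformer can be sketched by subclassing Transformer and overriding transform(); ColumnDropper and the 'text' column here are purely illustrative, not part of the API:

>>> from pyspark.ml import Transformer
>>> class ColumnDropper(Transformer):
...     """Illustrative transformer that drops the 'text' column."""
...     def transform(self, dataset, params={}):
...         # keep every column except 'text'
...         return dataset.select(*[c for c in dataset.columns if c != 'text'])
>>> # ColumnDropper().transform(df) would return df without its 'text' column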

class pyspark.ml.Estimator

Abstract class for estimators that fit models to data.

fit(dataset, params={})

Fits a model to the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

fitted model
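
The param map applies only to this fit() call and does not modify the estimator's embedded params. A minimal sketch, reusing the LogisticRegression estimator and the df DataFrame from the pyspark.ml.classification example below:

>>> lr = LogisticRegression(maxIter=5, regParam=0.01)
>>> model = lr.fit(df, {lr.maxIter: 10})  # trains with maxIter=10
>>> lr.getMaxIter()  # the embedded value is unchanged
5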

params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

class pyspark.ml.Pipeline(*args, **kwargs)

A simple pipeline, which acts as an estimator. A Pipeline consists of a sequence of stages, each of which is either an Estimator or a Transformer. When Pipeline.fit() is called, the stages are executed in order. If a stage is an Estimator, its Estimator.fit() method is called on the input dataset to fit a model. Then the model, which is a transformer, is used to transform the dataset and produce the input to the next stage. If a stage is a Transformer, its Transformer.transform() method is called to produce the dataset for the next stage. The fitted model from a Pipeline is a PipelineModel, which consists of fitted models and transformers corresponding to the pipeline stages. If there are no stages, the pipeline acts as an identity transformer.
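
A minimal end-to-end sketch chaining the Tokenizer, HashingTF, and LogisticRegression stages documented below (assuming the same doctest environment as the other examples, i.e. an active SparkContext sc and SQLContext):

>>> from pyspark.sql import Row
>>> from pyspark.ml import Pipeline
>>> from pyspark.ml.feature import Tokenizer, HashingTF
>>> from pyspark.ml.classification import LogisticRegression
>>> training = sc.parallelize([
...     Row(label=1.0, text="a b c"),
...     Row(label=0.0, text="d e f")]).toDF()
>>> tokenizer = Tokenizer(inputCol="text", outputCol="words")
>>> hashingTF = HashingTF(inputCol="words", outputCol="features")
>>> lr = LogisticRegression(maxIter=10, regParam=0.01)
>>> pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
>>> model = pipeline.fit(training)  # returns a PipelineModel
>>> # model.transform(training) appends 'words', 'features', and 'prediction' columns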

fit(dataset, params={})

Fits a model to the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

fitted model

getStages()

Get pipeline stages.

params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

setParams(self, stages=[])

Sets params for Pipeline.

setStages(value)

Set pipeline stages.

Parameters:
  • value – a list of transformers or estimators
Returns:

the pipeline instance

pyspark.ml.feature module

class pyspark.ml.feature.Tokenizer(*args, **kwargs)

A tokenizer that converts the input string to lowercase and then splits it on whitespace.

>>> from pyspark.sql import Row
>>> df = sc.parallelize([Row(text="a b c")]).toDF()
>>> tokenizer = Tokenizer(inputCol="text", outputCol="words")
>>> print tokenizer.transform(df).head()
Row(text=u'a b c', words=[u'a', u'b', u'c'])
>>> # Change a parameter.
>>> print tokenizer.setParams(outputCol="tokens").transform(df).head()
Row(text=u'a b c', tokens=[u'a', u'b', u'c'])
>>> # Temporarily modify a parameter.
>>> print tokenizer.transform(df, {tokenizer.outputCol: "words"}).head()
Row(text=u'a b c', words=[u'a', u'b', u'c'])
>>> print tokenizer.transform(df).head()
Row(text=u'a b c', tokens=[u'a', u'b', u'c'])
>>> # Must use keyword arguments to specify params.
>>> tokenizer.setParams("text")
Traceback (most recent call last):
    ...
TypeError: Method setParams forces keyword arguments.
getInputCol()

Gets the value of inputCol or its default value.

getOutputCol()

Gets the value of outputCol or its default value.

inputCol = Param(parent=undefined, name='inputCol', doc='input column name', defaultValue='input')
outputCol = Param(parent=undefined, name='outputCol', doc='output column name', defaultValue='output')
params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

setInputCol(value)

Sets the value of inputCol.

setOutputCol(value)

Sets the value of outputCol.

setParams(self, inputCol="input", outputCol="output")

Sets params for this Tokenizer.

transform(dataset, params={})

Transforms the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

transformed dataset

class pyspark.ml.feature.HashingTF(*args, **kwargs)

Maps a sequence of terms to their term frequencies using the hashing trick.

>>> from pyspark.sql import Row
>>> df = sc.parallelize([Row(words=["a", "b", "c"])]).toDF()
>>> hashingTF = HashingTF(numFeatures=10, inputCol="words", outputCol="features")
>>> print hashingTF.transform(df).head().features
(10,[7,8,9],[1.0,1.0,1.0])
>>> print hashingTF.setParams(outputCol="freqs").transform(df).head().freqs
(10,[7,8,9],[1.0,1.0,1.0])
>>> params = {hashingTF.numFeatures: 5, hashingTF.outputCol: "vector"}
>>> print hashingTF.transform(df, params).head().vector
(5,[2,3,4],[1.0,1.0,1.0])
getInputCol()

Gets the value of inputCol or its default value.

getNumFeatures()

Gets the value of numFeatures or its default value.

getOutputCol()

Gets the value of outputCol or its default value.

inputCol = Param(parent=undefined, name='inputCol', doc='input column name', defaultValue='input')
numFeatures = Param(parent=undefined, name='numFeatures', doc='number of features', defaultValue=262144)
outputCol = Param(parent=undefined, name='outputCol', doc='output column name', defaultValue='output')
params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

setInputCol(value)

Sets the value of inputCol.

setNumFeatures(value)

Sets the value of numFeatures.

setOutputCol(value)

Sets the value of outputCol.

setParams(self, numFeatures=1 << 18, inputCol="input", outputCol="output")

Sets params for this HashingTF.

transform(dataset, params={})

Transforms the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

transformed dataset

pyspark.ml.classification module

class pyspark.ml.classification.LogisticRegression(*args, **kwargs)

Logistic regression.

>>> from pyspark.sql import Row
>>> from pyspark.mllib.linalg import Vectors
>>> df = sc.parallelize([
...     Row(label=1.0, features=Vectors.dense(1.0)),
...     Row(label=0.0, features=Vectors.sparse(1, [], []))]).toDF()
>>> lr = LogisticRegression(maxIter=5, regParam=0.01)
>>> model = lr.fit(df)
>>> test0 = sc.parallelize([Row(features=Vectors.dense(-1.0))]).toDF()
>>> print model.transform(test0).head().prediction
0.0
>>> test1 = sc.parallelize([Row(features=Vectors.sparse(1, [0], [1.0]))]).toDF()
>>> print model.transform(test1).head().prediction
1.0
>>> lr.setParams("vector")
Traceback (most recent call last):
    ...
TypeError: Method setParams forces keyword arguments.
featuresCol = Param(parent=undefined, name='featuresCol', doc='features column name', defaultValue='features')
fit(dataset, params={})

Fits a model to the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

fitted model

getFeaturesCol()

Gets the value of featuresCol or its default value.

getLabelCol()

Gets the value of labelCol or its default value.

getMaxIter()

Gets the value of maxIter or its default value.

getPredictionCol()

Gets the value of predictionCol or its default value.

getRegParam()

Gets the value of regParam or its default value.

labelCol = Param(parent=undefined, name='labelCol', doc='label column name', defaultValue='label')
maxIter = Param(parent=undefined, name='maxIter', doc='max number of iterations', defaultValue=100)
params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent=undefined, name='predictionCol', doc='prediction column name', defaultValue='prediction')
regParam = Param(parent=undefined, name='regParam', doc='regularization constant', defaultValue=0.1)
setFeaturesCol(value)

Sets the value of featuresCol.

setLabelCol(value)

Sets the value of labelCol.

setMaxIter(value)

Sets the value of maxIter.

setParams(self, featuresCol="features", labelCol="label", predictionCol="prediction", maxIter=100, regParam=0.1)

Sets params for logistic regression.

setPredictionCol(value)

Sets the value of predictionCol.

setRegParam(value)

Sets the value of regParam.

class pyspark.ml.classification.LogisticRegressionModel(java_model)

Model fitted by LogisticRegression.

params

Returns all params. The default implementation uses dir() to get all attributes of type Param.

transform(dataset, params={})

Transforms the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame
  • params – an optional param map that overwrites embedded params
Returns:

transformed dataset
