pyspark.pandas.extensions.register_index_accessor(name: str) → Callable[[Type[T]], Type[T]]

Register a custom accessor with an Index


Parameters

name : str
    The name used when calling the accessor after it's registered.

Returns

callable
    A class decorator.

See also


register_dataframe_accessor
    Register a custom accessor on DataFrame objects.
register_series_accessor
    Register a custom accessor on Series objects.


Notes

When accessed, your accessor will be initialized with the pandas-on-Spark object the user is interacting with. The constructor's signature must be:

def __init__(self, pandas_on_spark_obj):
    # constructor logic

In the pandas API, if data passed to your accessor has an incorrect dtype, raising an AttributeError is recommended for consistency. In pandas-on-Spark, ValueError is more frequently used to signal that a value's datatype is unexpected for a given method or function.

Ultimately, you can structure this however you like, but pandas-on-Spark would likely do something like this:

>>> ps.Series(['a', 'b']).dt
Traceback (most recent call last):
ValueError: Cannot call DatetimeMethods on type StringType()
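Following that convention, an accessor can validate the wrapped object's dtype in its constructor. A minimal, Spark-free sketch (the `PositionAccessor` name and the dtype strings are illustrative assumptions, not part of the pyspark.pandas API):

```python
class PositionAccessor:
    """Hypothetical accessor that only supports numeric data."""

    def __init__(self, obj):
        # Validate the underlying dtype up front and raise ValueError,
        # mirroring the pandas-on-Spark convention shown above.
        dtype = getattr(obj, "dtype", None)
        if dtype not in ("int64", "float64"):
            raise ValueError(f"Cannot call PositionAccessor on dtype {dtype}")
        self._obj = obj
```

Because the check runs at construction time, a user who reaches for the accessor on unsupported data gets the error immediately, before calling any of its methods.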


Examples

In your library code:

from pyspark.pandas.extensions import register_index_accessor

@register_index_accessor("foo")
class CustomAccessor:

    def __init__(self, pandas_on_spark_obj):
        self._obj = pandas_on_spark_obj
        self.item = "baz"

    def bar(self):
        # return the stored item value
        return self.item

Then, in an IPython session:

>>> # Import if the accessor is defined in another file.
>>> # from my_ext_lib import CustomAccessor
>>> psdf = ps.DataFrame({"longitude": np.linspace(0,10),
...                     "latitude": np.linspace(0, 20)})
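Under the hood, registration amounts to attaching a descriptor to the target class that builds the accessor lazily on first attribute access. A minimal pure-Python sketch of that pattern (all names below are illustrative stand-ins, not the pyspark.pandas internals):

```python
class CachedAccessor:
    """Descriptor that instantiates the accessor on first access."""

    def __init__(self, name, accessor_cls):
        self._name = name
        self._accessor_cls = accessor_cls

    def __get__(self, obj, cls):
        if obj is None:
            # Accessed on the class itself: return the accessor class.
            return self._accessor_cls
        # Pass the wrapped object to the accessor's constructor,
        # then cache the instance so it is built only once.
        accessor = self._accessor_cls(obj)
        object.__setattr__(obj, self._name, accessor)
        return accessor


def register_accessor(name, cls):
    """Return a class decorator attaching the accessor under `name`."""
    def decorator(accessor_cls):
        setattr(cls, name, CachedAccessor(name, accessor_cls))
        return accessor_cls
    return decorator


class Index:  # stand-in for the pandas-on-Spark Index
    pass


@register_accessor("foo", Index)
class CustomAccessor:
    def __init__(self, obj):
        self._obj = obj
        self.item = "baz"

    def bar(self):
        return self.item
```

With this wiring, `Index().foo.bar()` evaluates to `"baz"`: accessing `.foo` constructs `CustomAccessor` with the index instance, exactly as the real decorator does for pandas-on-Spark objects.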