{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Quickstart: DataFrame\n",
    "\n",
    "This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of [RDD](https://spark.apache.org/docs/latest/rdd-programming-guide.html#overview)s. When Spark [transforms](https://spark.apache.org/docs/latest/rdd-programming-guide.html#transformations) data, it does not immediately compute the transformation but plans how to compute later. When [actions](https://spark.apache.org/docs/latest/rdd-programming-guide.html#actions) such as `collect()` are explicitly called, the computation starts.\n",
    "This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself in 'Live Notebook: DataFrame' at [the quickstart page](https://spark.apache.org/docs/latest/api/python/getting_started/index.html).\n",
    "\n",
    "There is also other useful information in Apache Spark documentation site, see the latest version of [Spark SQL and DataFrames](https://spark.apache.org/docs/latest/sql-programming-guide.html), [RDD Programming Guide](https://spark.apache.org/docs/latest/rdd-programming-guide.html), [Structured Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html), [Spark Streaming Programming Guide](https://spark.apache.org/docs/latest/streaming-programming-guide.html) and [Machine Learning Library (MLlib) Guide](https://spark.apache.org/docs/latest/ml-guide.html).\n",
    "\n",
    "PySpark applications start with initializing `SparkSession` which is the entry point of PySpark as below. In case of running it in PySpark shell via pyspark executable, the shell automatically creates the session in the variable spark for users."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyspark.sql import SparkSession\n",
    "\n",
    "spark = SparkSession.builder.getOrCreate()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## DataFrame Creation\n",
    "\n",
    "A PySpark DataFrame can be created via `pyspark.sql.SparkSession.createDataFrame` typically by passing a list of lists, tuples, dictionaries and `pyspark.sql.Row`s, a [pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) and an RDD consisting of such a list.\n",
    "`pyspark.sql.SparkSession.createDataFrame` takes the `schema` argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.\n",
    "\n",
    "Firstly, you can create a PySpark DataFrame from a list of rows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "DataFrame[a: bigint, b: double, c: string, d: date, e: timestamp]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from datetime import datetime, date\n",
    "import pandas as pd\n",
    "from pyspark.sql import Row\n",
    "\n",
    "df = spark.createDataFrame([\n",
    "    Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),\n",
    "    Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),\n",
    "    Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))\n",
    "])\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a PySpark DataFrame with an explicit schema."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "DataFrame[a: bigint, b: double, c: string, d: date, e: timestamp]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = spark.createDataFrame([\n",
    "    (1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),\n",
    "    (2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),\n",
    "    (3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))\n",
    "], schema='a long, b double, c string, d date, e timestamp')\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a PySpark DataFrame from a pandas DataFrame"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "DataFrame[a: bigint, b: double, c: string, d: date, e: timestamp]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pandas_df = pd.DataFrame({\n",
    "    'a': [1, 2, 3],\n",
    "    'b': [2., 3., 4.],\n",
    "    'c': ['string1', 'string2', 'string3'],\n",
    "    'd': [date(2000, 1, 1), date(2000, 2, 1), date(2000, 3, 1)],\n",
    "    'e': [datetime(2000, 1, 1, 12, 0), datetime(2000, 1, 2, 12, 0), datetime(2000, 1, 3, 12, 0)]\n",
    "})\n",
    "df = spark.createDataFrame(pandas_df)\n",
    "df"
   ]
  },
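  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As mentioned above, a PySpark DataFrame can also be created from an RDD. The cell below is a minimal sketch that reuses the same data, parallelizes it into an RDD of tuples, and passes only the column names as the `schema` argument so the column types are inferred."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: build an RDD of tuples and let PySpark infer the column types.\n",
    "rdd = spark.sparkContext.parallelize([\n",
    "    (1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),\n",
    "    (2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),\n",
    "    (3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))\n",
    "])\n",
    "# Only column names are given as the schema; types are inferred from the data.\n",
    "df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e'])\n",
    "df"
   ]
  },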
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The DataFrames created above all have the same results and schema."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---+-------+----------+-------------------+\n",
      "|  a|  b|      c|         d|                  e|\n",
      "+---+---+-------+----------+-------------------+\n",
      "|  1|2.0|string1|2000-01-01|2000-01-01 12:00:00|\n",
      "|  2|3.0|string2|2000-02-01|2000-01-02 12:00:00|\n",
      "|  3|4.0|string3|2000-03-01|2000-01-03 12:00:00|\n",
      "+---+---+-------+----------+-------------------+\n",
      "\n",
      "root\n",
      " |-- a: long (nullable = true)\n",
      " |-- b: double (nullable = true)\n",
      " |-- c: string (nullable = true)\n",
      " |-- d: date (nullable = true)\n",
      " |-- e: timestamp (nullable = true)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# All DataFrames above result same.\n",
    "df.show()\n",
    "df.printSchema()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Viewing Data\n",
    "\n",
    "The top rows of a DataFrame can be displayed using `DataFrame.show()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+---+-------+----------+-------------------+\n",
      "|  a|  b|      c|         d|                  e|\n",
      "+---+---+-------+----------+-------------------+\n",
      "|  1|2.0|string1|2000-01-01|2000-01-01 12:00:00|\n",
      "+---+---+-------+----------+-------------------+\n",
      "only showing top 1 row\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.show(1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alternatively, you can enable `spark.sql.repl.eagerEval.enabled` configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via `spark.sql.repl.eagerEval.maxNumRows` configuration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "
\n",
       "| a | b | c | d | e | 
|---|
\n",
       "| 1 | 2.0 | string1 | 2000-01-01 | 2000-01-01 12:00:00 | 
\n",
       "| 2 | 3.0 | string2 | 2000-02-01 | 2000-01-02 12:00:00 | 
\n",
       "| 3 | 4.0 | string3 | 2000-03-01 | 2000-01-03 12:00:00 | 
\n",
       "
\n"
      ],
      "text/plain": [
       "DataFrame[a: bigint, b: double, c: string, d: date, e: timestamp]"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "spark.conf.set('spark.sql.repl.eagerEval.enabled', True)\n",
    "df"
   ]
  },
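  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the `spark.sql.repl.eagerEval.maxNumRows` configuration mentioned above, the cell below limits the eagerly rendered output to 2 rows. This only affects how many rows are displayed; the DataFrame itself is unchanged."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Render at most 2 rows in the eager HTML output (display only; the data is unchanged).\n",
    "spark.conf.set('spark.sql.repl.eagerEval.maxNumRows', 2)\n",
    "df"
   ]
  },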
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The rows can also be shown vertically. This is useful when rows are too long to show horizontally."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-RECORD 0------------------\n",
      " a   | 1                   \n",
      " b   | 2.0                 \n",
      " c   | string1             \n",
      " d   | 2000-01-01          \n",
      " e   | 2000-01-01 12:00:00 \n",
      "only showing top 1 row\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.show(1, vertical=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can see the DataFrame's schema and column names as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['a', 'b', 'c', 'd', 'e']"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "root\n",
      " |-- a: long (nullable = true)\n",
      " |-- b: double (nullable = true)\n",
      " |-- c: string (nullable = true)\n",
      " |-- d: date (nullable = true)\n",
      " |-- e: timestamp (nullable = true)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.printSchema()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Show the summary of the DataFrame"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------+---+---+-------+\n",
      "|summary|  a|  b|      c|\n",
      "+-------+---+---+-------+\n",
      "|  count|  3|  3|      3|\n",
      "|   mean|2.0|3.0|   null|\n",
      "| stddev|1.0|1.0|   null|\n",
      "|    min|  1|2.0|string1|\n",
      "|    max|  3|4.0|string3|\n",
      "+-------+---+---+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "df.select(\"a\", \"b\", \"c\").describe().show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`DataFrame.collect()` collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Row(a=1, b=2.0, c='string1', d=datetime.date(2000, 1, 1), e=datetime.datetime(2000, 1, 1, 12, 0)),\n",
       " Row(a=2, b=3.0, c='string2', d=datetime.date(2000, 2, 1), e=datetime.datetime(2000, 1, 2, 12, 0)),\n",
       " Row(a=3, b=4.0, c='string3', d=datetime.date(2000, 3, 1), e=datetime.datetime(2000, 1, 3, 12, 0))]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to avoid throwing an out-of-memory exception, use `DataFrame.take()` or `DataFrame.tail()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Row(a=1, b=2.0, c='string1', d=datetime.date(2000, 1, 1), e=datetime.datetime(2000, 1, 1, 12, 0))]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df.take(1)"
   ]
  },
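  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`DataFrame.tail()` works similarly but returns rows from the end of the DataFrame, as in the minimal sketch below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Return only the last row; like take(), this avoids collecting the whole DataFrame.\n",
    "df.tail(1)"
   ]
  },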
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PySpark DataFrame also provides the conversion back to a [pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) to leverage pandas API. Note that `toPandas` also collects all data into the driver side that can easily cause an out-of-memory-error when the data is too large to fit into the driver side."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "\n",
       "
\n",
       "  \n",
       "    \n",
       "      | \n",
       " | a\n",
       " | b\n",
       " | c\n",
       " | d\n",
       " | e\n",
       " | 
\n",
       "  \n",
       "  \n",
       "    \n",
       "      | 0\n",
       " | 1\n",
       " | 2.0\n",
       " | string1\n",
       " | 2000-01-01\n",
       " | 2000-01-01 12:00:00\n",
       " | 
\n",
       "    \n",
       "      | 1\n",
       " | 2\n",
       " | 3.0\n",
       " | string2\n",
       " | 2000-02-01\n",
       " | 2000-01-02 12:00:00\n",
       " | 
\n",
       "    \n",
       "      | 2\n",
       " | 3\n",
       " | 4.0\n",
       " | string3\n",
       " | 2000-03-01\n",
       " | 2000-01-03 12:00:00\n",
       " | 
\n",
       "  \n",
       "
\n",
       "