public interface CreateTableWriter<T> extends WriteConfigMethods<CreateTableWriter<T>>
Modifier and Type | Method and Description |
---|---|
void | create() Create a new table from the contents of the data frame. |
void | createOrReplace() Create a new table or replace an existing table with the contents of the data frame. |
CreateTableWriter<T> | partitionedBy(Column column, scala.collection.Seq<Column> columns) Partition the output table created by create, createOrReplace, or replace using the given columns or transforms. |
void | replace() Replace an existing table with the contents of the data frame. |
CreateTableWriter<T> | tableProperty(String property, String value) Add a table property. |
CreateTableWriter<T> | using(String provider) Specifies a provider for the underlying output data source. |
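In practice, a CreateTableWriter is usually obtained through Dataset.writeTo, which returns a DataFrameWriterV2 implementing this interface. A minimal sketch, assuming a Spark 3.x session and a catalog that supports v2 table creation (the table identifier demo_db.demo_table is illustrative):

```scala
import org.apache.spark.sql.SparkSession

object CreateTableWriterDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CreateTableWriterDemo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // writeTo returns a DataFrameWriterV2, which implements CreateTableWriter;
    // the chained calls configure the new table before the terminal create().
    df.writeTo("demo_db.demo_table") // hypothetical table identifier
      .using("parquet")
      .create()

    spark.stop()
  }
}
```

Whether the default session catalog accepts v2 creates depends on the Spark version and catalog configuration; a configured v2 catalog (e.g. Iceberg or Delta) is assumed here.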
void create() throws org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException

Create a new table from the contents of the data frame. The new table's schema, partition layout, properties, and other configuration will be based on the configuration set on this writer.

If the output table exists, this operation will fail with TableAlreadyExistsException.

Throws:
org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException - if the table already exists

void createOrReplace()

Create a new table or replace an existing table with the contents of the data frame.
The output table's schema, partition layout, properties, and other configuration will be based on the contents of the data frame and the configuration set on this writer. If the table exists, its configuration and data will be replaced.
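The three terminal methods differ only in how they treat an existing table: create() fails if the table exists, replace() fails if it does not, and createOrReplace() succeeds either way. A sketch of falling back from create() to createOrReplace(), assuming a Spark 3.x session and a v2-capable catalog (demo_db.events is a hypothetical identifier, and the exact exception surfaced can vary by Spark version and catalog):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

try {
  // Fails with TableAlreadyExistsException if demo_db.events already exists.
  df.writeTo("demo_db.events").create()
} catch {
  case _: TableAlreadyExistsException =>
    // Replace the existing table's data and configuration instead.
    df.writeTo("demo_db.events").createOrReplace()
}
```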
CreateTableWriter<T> partitionedBy(Column column, scala.collection.Seq<Column> columns)

Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
When specified, the table data will be stored by these values for efficient reads.
For example, when a table is partitioned by day, it may be stored in a directory layout like:
table/day=2019-06-01/
table/day=2019-06-02/
Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
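In the Scala API, partition transforms such as days and bucket are exposed in org.apache.spark.sql.functions (since Spark 3.0). A sketch of partitioning by day as in the layout above, assuming a v2-capable catalog and an illustrative table identifier:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{bucket, col, days}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val events = Seq(
  (1L, java.sql.Timestamp.valueOf("2019-06-01 10:00:00")),
  (2L, java.sql.Timestamp.valueOf("2019-06-02 11:30:00"))
).toDF("id", "ts")

// Partition by the day of `ts` (one directory per day, e.g.
// table/day=2019-06-01/) and additionally bucket rows by `id`.
events.writeTo("demo_db.events") // hypothetical table identifier
  .partitionedBy(days(col("ts")), bucket(16, col("id")))
  .create()
```

The first argument to partitionedBy is the leading partition column or transform; any further transforms are passed as the varargs sequence.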
Parameters:
column - (undocumented)
columns - (undocumented)

void replace() throws org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException

Replace an existing table with the contents of the data frame.
The existing table's schema, partition layout, properties, and other configuration will be replaced with the contents of the data frame and the configuration set on this writer.
If the output table does not exist, this operation will fail with CannotReplaceMissingTableException.
Throws:
org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException - if the table does not exist

CreateTableWriter<T> tableProperty(String property, String value)

Add a table property.
Parameters:
property - (undocumented)
value - (undocumented)

CreateTableWriter<T> using(String provider)

Specifies a provider for the underlying output data source.
Parameters:
provider - (undocumented)
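Because tableProperty and using return the writer itself, configuration calls chain fluently before a terminal create, replace, or createOrReplace. A sketch putting the pieces together, assuming a v2-capable catalog; the table identifier and property key are illustrative:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

// Configure the provider and a table property, then create or replace.
df.writeTo("demo_db.items")         // hypothetical table identifier
  .using("parquet")                 // provider for the underlying data source
  .tableProperty("comment", "demo") // hypothetical property key/value
  .createOrReplace()
```

Which property keys are honored depends on the target catalog and data source.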