Reading from Greenplum Database into Spark
Reading a Greenplum Database table into Spark loads all of the table rows into a Spark
DataFrame. You can use the Spark Scala API or the
spark-shell interactive shell to read a Greenplum Database table that you created with the
CREATE TABLE SQL command.
The Greenplum-Spark Connector provides a Spark data source optimized for reading Greenplum Database data into Spark. To read a Greenplum Database table into Spark, you must identify the Greenplum-Spark Connector data source name and provide read options for the import.
A Spark data source provides an access point to structured data. Spark provides several pre-defined data sources to support specific file types and databases. You specify a Spark data source using either its fully qualified name or its short name.
The Greenplum-Spark Connector exposes a Spark data source named
greenplum to transfer data between Spark and Greenplum Database. The Greenplum-Spark Connector supports specifying the data source only with this short name.
You use the DataFrameReader.format(datasource: String) Scala method to identify the data source. You must provide the Greenplum-Spark Connector data source short name
greenplum to the
.format() method. For example:
The greenplum data source supports the read options identified in the table below. An option is required unless otherwise specified.
| Option Key | Value Description |
|---|---|
| url | The JDBC connection string URL; see Constructing the Greenplum Database JDBC URL. |
| dbschema | The name of the Greenplum Database schema in which the table resides. |
| dbtable | The name of the Greenplum Database table. This table must reside in the Greenplum Database schema identified in the dbschema option. |
| driver | The fully qualified class path of the custom JDBC driver. Optional; specify only when using a custom JDBC driver. |
| user | The Greenplum Database user/role name. |
| password | (Optional.) The Greenplum Database password for the user. You can omit the password if Greenplum Database is configured to not require a password for the specified user, or if you use Kerberos authentication and provide the required authentication properties in the JDBC connection string URL. |
| partitionColumn | The name of the Greenplum Database table column to use for Spark partitioning. This column must be one of the Greenplum Database integer data types. |
| partitionsPerSegment | The number of Spark partitions per Greenplum Database segment. Optional; the default value is 1 partition. |
For more information about Greenplum-Spark Connector options, including specifying the options to a data source, refer to About Connector Options.
When you read a Greenplum Database table into Spark, you identify the Greenplum-Spark Connector data source, provide the read options, and invoke the
DataFrameReader.load() method. For example:
```scala
val gscReadOptionMap = Map(
  "url" -> "jdbc:postgresql://gpdb-master:5432/testdb",
  "user" -> "bill",
  "password" -> "changeme",
  "dbschema" -> "myschema",
  "dbtable" -> "table1",
  "partitionColumn" -> "id"
)

val gpdf = spark.read.format("greenplum")
  .options(gscReadOptionMap)
  .load()
```
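Equivalently, you could supply each option individually with the DataFrameReader.option(key, value) method instead of a single Map; a sketch using the same illustrative connection values:

```scala
val gpdf = spark.read.format("greenplum")
  .option("url", "jdbc:postgresql://gpdb-master:5432/testdb")
  .option("user", "bill")
  .option("password", "changeme")
  .option("dbschema", "myschema")
  .option("dbtable", "table1")
  .option("partitionColumn", "id")
  .load()
```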
The .load() method returns a
DataFrame. A DataFrame is a set of rows, i.e. a Dataset[Row].
Note that the
.load() operation does not initiate the movement of data from Greenplum Database to Spark. Spark employs lazy evaluation for transformations; it does not compute the results until the application performs an action on the
DataFrame, such as displaying or filtering the data or counting the number of rows.
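For instance, a sketch showing that the read is deferred until an action runs, using the gpdf DataFrame created in the example above:

```scala
// gpdf holds only a logical plan at this point; no Greenplum rows have moved.
// Invoking an action such as count() triggers the actual data transfer.
val numRows = gpdf.count()
println(s"table1 contains $numRows rows")
```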
Actions and transformations you can perform on the returned DataFrame include:
- Viewing the contents of the table with .show()
- Counting the number of rows with .count()
- Filtering the data using .filter()
- Grouping and ordering the data using .groupBy() and .orderBy()
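A combined sketch of these operations on the gpdf DataFrame from the earlier example; the id column comes from that example, while the category column is purely illustrative:

```scala
gpdf.show()                            // view the first rows of the table
gpdf.count()                           // count the number of rows

gpdf.filter(gpdf("id") > 100).show()   // filter on a column value

gpdf.groupBy("category")               // group, aggregate, and order
    .count()
    .orderBy("category")
    .show()
```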
Refer to the Spark Dataset Scala API docs for additional information about this class and the other actions and transformations you can perform.
Note: By default, Spark recomputes a transformed
DataFrame each time you run an action on it. If you have a large data set on which you want to perform multiple transformations, you may choose to keep the
DataFrame in memory for performance reasons. You can use the
Dataset.persist() method for this purpose. Keep in mind that there are memory implications to persisting large data sets.
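A sketch of persisting a transformed DataFrame before running several actions on it; the filter predicate is illustrative:

```scala
import org.apache.spark.storage.StorageLevel

val filteredDf = gpdf.filter(gpdf("id") > 100)
filteredDf.persist(StorageLevel.MEMORY_ONLY)   // or simply filteredDf.cache()

filteredDf.count()     // first action computes and caches the result
filteredDf.show()      // later actions reuse the cached partitions

filteredDf.unpersist() // release the memory when you are done
```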