Using the Greenplum-Spark Connector

Prerequisites

Before using the Greenplum-Spark Connector, ensure that you can identify:

  • The hostname of your Greenplum Database master node.
  • The port on which your Greenplum Database master server process is running, if it is not running on the default port (5432).
  • The name of the Greenplum database to which you want to connect.
  • The names of the Greenplum Database schema and table that you want to access.
  • The Greenplum Database user/role name and password assigned to you. Ensure that this role has the required privileges, as described in Configuring Greenplum Database Role Privileges.

Downloading the Connector JAR File

The Greenplum-Spark Connector is available as a separate download for Greenplum Database 4.3.X or 5.X from Pivotal Network:

  1. Navigate to Pivotal Network, locate the Release Download directory named Pivotal Greenplum Connector, and download the JAR file.

    The format of the Greenplum-Spark Connector JAR file name is greenplum-spark_<spark-version>-<gsc-version>.jar. For example:

    greenplum-spark_2.11-1.1.0.jar
    
  2. Make note of the directory to which the JAR was downloaded.

Using spark-shell

You can run Spark interactively through spark-shell, a modified version of the Scala shell. Refer to the spark-shell Spark documentation for detailed information on using this command.

To try out the Greenplum-Spark Connector, run the spark-shell command providing a --jars option that identifies the file system path to the Greenplum-Spark Connector JAR file. For example:

spark-user@spark-node$ export GSC_JAR=/path/to/greenplum-spark_<spark-version>-<gsc-version>.jar
spark-user@spark-node$ spark-shell --jars $GSC_JAR
< ... spark-shell startup output messages ... >
scala>

When you run spark-shell, you enter the scala> interactive subsystem. A SparkSession is instantiated for you and is accessible via the local variable named spark:

scala> println(spark)
org.apache.spark.sql.SparkSession@4113d9ab

Your SparkSession is the entry point to the spark.read.format().load() method chain that you will use to transfer data from a Greenplum Database table into Spark.
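
For example, the following read is a minimal sketch. The format name and every option name and value shown here (url, user, password, dbtable, partitionColumn, and the hostnames and credentials) are illustrative assumptions; refer to Connector Read Options for the authoritative list of options. Use the :paste command to enter a multi-line snippet in spark-shell:

// Minimal sketch: load a Greenplum Database table into a Spark DataFrame.
// All option names and values below are assumptions for illustration only.
val gpdf = spark.read
  .format("greenplum")                                          // assumed connector format name
  .option("url", "jdbc:postgresql://gpdb-master:5432/testdb")   // see Constructing the Greenplum Database JDBC URL
  .option("user", "gpadmin")                                    // hypothetical role name
  .option("password", "changeme")                               // hypothetical password
  .option("dbtable", "otp_c")                                   // hypothetical table name
  .option("partitionColumn", "id")                              // hypothetical integer column
  .load()

gpdf.printSchema()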

Developing Applications with the Connector

If you are writing a stand-alone Spark application, you will bundle the Greenplum-Spark Connector along with your other application dependencies into an “uber” JAR. The Spark Self-Contained Applications and Bundling Your Application’s Dependencies documentation identifies additional considerations for stand-alone Spark application development.
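
For example, if you build with sbt, a minimal build.sbt might look like the following sketch. It assumes the sbt-assembly plugin and that you have copied the downloaded Connector JAR into the project's lib/ directory; the project name and version numbers are illustrative:

// build.sbt -- minimal sketch; names and versions are illustrative.
name := "greenplum-spark-app"
scalaVersion := "2.11.8"

// Spark is supplied by the cluster at run time, so mark it "provided"
// to keep it out of the "uber" JAR.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.1" % "provided"

// The Connector JAR in lib/ is an unmanaged dependency; the sbt-assembly
// assembly task bundles it into the "uber" JAR with your other dependencies.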

You can use the spark-submit command to launch a Spark application assembled with the Greenplum-Spark Connector. Alternatively, you can run spark-submit with a --jars option that identifies the file system path to the Greenplum-Spark Connector JAR file. The spark-submit Spark documentation describes using this command.
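
For example, the following command line submits a hypothetical application JAR, supplying the Connector JAR via --jars; the class name and file paths are illustrative:

spark-user@spark-node$ spark-submit --class com.example.MyGreenplumApp \
    --jars $GSC_JAR my-greenplum-app.jar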

Constructing the Greenplum Database JDBC URL

The Greenplum-Spark Connector uses a JDBC connection to communicate with the Greenplum Database master node. The PostgreSQL JDBC driver JAR file is bundled with the Greenplum-Spark Connector JAR file, so you do not need to manage this dependency. You may also use a custom JDBC driver with the Greenplum-Spark Connector.

You must provide a JDBC connection string URL when you use the Connector to transfer data between Greenplum Database and Spark. This URL must include the Greenplum Database master hostname and port, as well as the name of the database to which you want to connect.

Parameter Name    Description
<master>          Hostname or IP address of the Greenplum Database master node.
<port>            The port on which the Greenplum Database server process is listening. Optional, default is 5432.
<database_name>   The Greenplum database to which you want to connect.

Note: The Greenplum-Spark Connector requires that other connection options, including user name and password, be provided separately.

Using the Default PostgreSQL JDBC Driver

The JDBC connection string URL format for the default Greenplum-Spark Connector JDBC driver is:

jdbc:postgresql://<master>[:<port>]/<database_name>

For example:

jdbc:postgresql://gpdb-master:5432/testdb

The syntax and semantics of the default JDBC connection string URL are governed by the PostgreSQL JDBC driver. For additional information about this syntax, refer to Connecting to the Database in the PostgreSQL JDBC documentation.

Using a Custom JDBC Driver

The Greenplum-Spark Connector also supports custom JDBC drivers. To use a custom Greenplum Database JDBC driver, you must:

  • Construct a JDBC connection string URL for your custom driver that includes the Greenplum Database master hostname and port and the name of the database to which you want to connect.

  • Provide the JAR file for the custom JDBC driver via one of the following options:

    • Include a --jars <custom-jdbc-driver>.jar option on your spark-shell or spark-submit command line, identifying the full path to the custom JDBC driver JAR file.
    • Bundle the Greenplum-Spark Connector and custom JDBC JAR files along with your other application dependencies into an “uber” JAR.
    • Install the custom JDBC JAR file in a known, configured location on your Spark executor nodes.

You must also identify the fully qualified Java class name of the JDBC driver in a Greenplum-Spark Connector option (described in Connector Read Options).

For example, to use the Greenplum-Spark Connector in a spark-shell session with the Greenplum Database DataDirect JDBC driver that you have downloaded to /tmp/greenplum.jar, use this connection string URL format:

jdbc:pivotal:greenplum://<master>[:<port>];DatabaseName=<database_name>

Invoke spark-shell with the following command line:

$ spark-shell --jars /tmp/greenplum.jar,$GSC_JAR

Then specify the class name com.pivotal.jdbc.GreenplumDriver as the value of the Greenplum-Spark Connector JDBC driver option.
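
Putting these pieces together, a read that uses the custom driver might look like the following sketch. The option name driver and the other option names and values are illustrative assumptions; consult Connector Read Options for the supported option names:

// Sketch only: "driver" as the option name for the JDBC driver class is
// an assumption; all other values are hypothetical.
val gpdf = spark.read
  .format("greenplum")
  .option("url", "jdbc:pivotal:greenplum://gpdb-master:5432;DatabaseName=testdb")
  .option("driver", "com.pivotal.jdbc.GreenplumDriver")
  .option("dbtable", "otp_c")          // hypothetical table name
  .option("user", "gpadmin")           // hypothetical credentials
  .option("password", "changeme")
  .load()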

Configuring Spark Worker Port Numbers

You can specify the port number that the Greenplum-Spark Connector uses for data transfer to Spark worker nodes. If you choose to specify the port number, you must set the GPFDIST_PORT environment variable on each Spark worker node that you want to configure. You can configure a different GPFDIST_PORT value or list of values for each Spark worker node.

Set GPFDIST_PORT to a single port number or to a comma-separated list of port numbers. When you specify a list, the Greenplum-Spark Connector traverses the list and uses the first port that it can open successfully.

You must set GPFDIST_PORT on a Spark worker node before running the start-slave.sh command on that node. For example:

user@spark-worker$ GPFDIST_PORT="12900,12901,12902" start-slave.sh

You may also choose to set GPFDIST_PORT in your spark-env.sh file. For information about the Spark spark-env.sh file, refer to the Environment Variables section of the Spark Configuration documentation.
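
For example, a spark-env.sh entry might look like the following; the port numbers are illustrative:

# In $SPARK_HOME/conf/spark-env.sh on each Spark worker node
export GPFDIST_PORT="12900,12901,12902"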