Pivotal Greenplum-Spark Connector 2.0.0 Release Notes
The Pivotal Greenplum-Spark Connector provides high-speed, parallel data transfer between Greenplum Database and an Apache Spark cluster, using Spark's Scala API for programmatic access.
Refer to the Pivotal Greenplum Database documentation for detailed information about Pivotal Greenplum Database.
See the Apache Spark documentation for information about Apache Spark version 2.4.
The following table identifies the supported component versions for the Pivotal Greenplum-Spark Connector 2.0:
|Greenplum-Spark Connector Version|Greenplum Version|Spark Version|Scala Version|PostgreSQL JDBC Driver Version|
|---|---|---|---|---|
|2.0.0|5.x, 6.x|2.3.x, 2.4.x| |42.2.14|
The Greenplum-Spark Connector is certified against the Greenplum, Spark, and Scala versions listed above. The Connector is bundled with, and certified against, the listed PostgreSQL JDBC driver version.
Released: October 2, 2020
Greenplum-Spark Connector 2.0.0 includes new and changed features and bug fixes.
Pivotal Greenplum-Spark Connector 2.0.0 includes these new and changed features:
- The Greenplum-Spark Connector is certified against the Scala, Spark, and JDBC driver versions identified in Supported Platforms above.
- The Greenplum-Spark Connector is now bundled with the PostgreSQL JDBC driver version 42.2.14.
- The Greenplum-Spark Connector package that you download from Pivotal Network is now a `.tar.gz` file that includes the product open source license and the Connector JAR file.
- The `gpfdist` server connection activity timeout changes from 30 seconds to 5 minutes. A new `server.timeout` option is provided that a developer can use to specify the `gpfdist` server connection activity timeout.
- The Connector improves read performance from Greenplum Database by using the internal Greenplum table column named `gp_segment_id` as the default `partitionColumn` when the developer does not specify this option.
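As a rough illustration of these options, a hypothetical read configuration might look like the following sketch. The option names (`url`, `dbschema`, `dbtable`, `partitionColumn`, `server.timeout`) follow the Connector's documented read API, while the connection URL, table name, and timeout value are placeholder assumptions; the `server.timeout` unit is assumed to be milliseconds.

```scala
// Sketch of a Greenplum read configuration that sets the new server.timeout
// option and makes the default partitionColumn explicit. All values are
// placeholders, not recommendations.
val gscReadOptions: Map[String, String] = Map(
  "url"             -> "jdbc:postgresql://gpmaster.example.com:5432/testdb", // assumed master URL
  "dbschema"        -> "public",
  "dbtable"         -> "otp_flights",   // hypothetical table name
  "partitionColumn" -> "gp_segment_id", // the 2.0.0 default when omitted
  "server.timeout"  -> "600000"         // gpfdist activity timeout; unit assumed to be ms
)

// With a live SparkSession named `spark`, the options would be applied as:
//   val df = spark.read.format("greenplum").options(gscReadOptions).load()
```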
The following issues were resolved in Greenplum-Spark Connector version 2.0.0:
|Issue|Description|
|---|---|
|30731|Resolved an issue where the Greenplum-Spark Connector timed out with a serialization exception when writing aggregated results to Greenplum Database. The Connector now exposes the `server.timeout` option, which you can use to configure the `gpfdist` server connection activity timeout.|
|174495848|Resolved an issue where predicate pushdown was not working correctly because the Greenplum-Spark Connector did not use parentheses to join the predicates together when it constructed the filter string.|
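The parentheses fix can be illustrated with a small sketch (not the Connector's actual source): when individual pushed-down predicates are concatenated into one SQL filter string, each predicate must be wrapped in parentheses so that SQL operator precedence cannot regroup them.

```scala
// Illustrative only: join pushed-down predicates into a single WHERE-clause
// fragment, parenthesizing each predicate before AND-ing them together.
def combineFilters(filters: Seq[String]): String =
  filters.map(f => s"($f)").mkString(" AND ")

val filters = Seq("origin = 'SFO'", "delay > 30 OR cancelled = 1")

// Naive concatenation would yield
//   origin = 'SFO' AND delay > 30 OR cancelled = 1
// which SQL parses as (origin = 'SFO' AND delay > 30) OR cancelled = 1.
// The parenthesized form preserves the intended grouping:
val filterString = combineFilters(filters)
// "(origin = 'SFO') AND (delay > 30 OR cancelled = 1)"
```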
The Greenplum-Spark Connector version 2.x removes:
- Support for Greenplum Database 4.x.
- The `connector.port` option (deprecated in 1.6).
- The `partitionsPerSegment` option (deprecated in 1.5).
Known issues and limitations related to the 2.x release of the Pivotal Greenplum-Spark Connector include the following:
- The Connector cannot use the default `partitionColumn` (`gp_segment_id`) when reading data from Greenplum Database and mirroring is enabled in the Greenplum cluster.
- The Connector does not support reading from or writing to Greenplum Database when your Spark cluster is deployed on Kubernetes.
- The Connector supports basic data types such as Float, Integer, String, and Date/Time. The Connector does not yet support more complex types. See Greenplum Database <-> Spark Data Type Mapping for additional information.