Starting with Spark 1.0.0, the Spark project will follow the semantic versioning guidelines with a few deviations. These small differences account for Spark’s nature as a multi-module project.
Each Spark release will be versioned: `[MAJOR].[FEATURE].[MAINTENANCE]`

- MAJOR: All releases with the same major version number will have API compatibility. Major version numbers will remain stable over long periods of time.
- FEATURE: Feature releases will typically contain new features, improvements, and bug fixes.
- MAINTENANCE: Maintenance releases will occur more frequently and are designed to patch bugs.
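The compatibility promise implied by this scheme can be sketched as a small helper. This is an illustrative sketch, not a Spark API; the function names `parse_version` and `api_compatible` are made up for this example.

```python
def parse_version(version):
    """Split a 'MAJOR.FEATURE.MAINTENANCE' string into a tuple of ints."""
    major, feature, maintenance = (int(part) for part in version.split("."))
    return major, feature, maintenance

def api_compatible(a, b):
    """Releases sharing the same MAJOR number promise API compatibility."""
    return parse_version(a)[0] == parse_version(b)[0]

print(api_compatible("2.2.0", "2.3.0"))  # True: same major version
print(api_compatible("1.6.3", "2.0.0"))  # False: major version changed
```

Note that, per the definition below, this is source-level compatibility only; a matching major version does not by itself guarantee link-level compatibility.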
When new components are added to Spark, they may initially be marked as “alpha”. Alpha components do not have to abide by the above guidelines; however, they should try to do so to the maximum extent possible. Once they are marked “stable” they have to follow these guidelines.
An API is any public class or interface exposed in Spark that is not marked as “developer API” or “experimental”. Release A is API compatible with release B if code compiled against release A compiles cleanly against B. Currently, this does not guarantee that a compiled application linked against version A will link cleanly against version B without recompiling. Link-level compatibility is something we’ll try to guarantee in future releases.
Note, however, that even for “developer API” and “experimental” features, we strive to maintain maximum compatibility. Code should not be merged into the project as “experimental” if there is a plan to change the API later, because users expect maximum compatibility from all available APIs.
In general, feature (“minor”) releases occur about every 6 months. Hence, Spark 2.3.0 would generally be released about 6 months after 2.2.0. Maintenance releases happen as needed. A minor release usually sees 1-2 maintenance releases in the 6 months following its first release. Major releases do not happen according to a fixed schedule.
| Date | Event |
| --- | --- |
| Mid Nov 2017 | Code freeze. Release branch cut. |
| Late Nov 2017 | QA period. Focus on bug fixes, tests, stability and docs. Generally, no new features merged. |
| Early Dec 2017 | Release candidates (RC), voting, etc. until final release passes |