AWS EMR 5.x vs 6.x and corresponding Scala build.sbt settings.

貝里昂
2 min read · Oct 28, 2020

AWS support for Amazon Linux AMI 1 is scheduled to end on 2020/12/31, while Amazon Linux 2 gets strengthened support and Long Term Support. On the EMR side, 5.x runs on AMI 1 and 6.x runs on Amazon Linux 2. To get the fullest AMI support and assistance from AWS, my company has started upgrading to EMR 6.x. However, the Scala and Spark versions used by EMR 6.x differ considerably from those in EMR 5.x, so how do we keep our pipelines running smoothly through this transition period?

Versions of EMR, Spark, and Scala

According to the EMR documentation:
EMR 6.1.0 and later use Spark 3.x.
Releases before EMR 6.1.0 use Spark 2.x.

According to the Spark documentation:
Spark 3.x uses Scala 2.12 and no longer supports Scala 2.11:

Spark runs on Java 8/11, Scala 2.12, Python 2.7+/3.4+ and R 3.5+. Java 8 prior to version 8u92 support is deprecated as of Spark 3.0.0. Python 2 and Python 3 prior to version 3.6 support is deprecated as of Spark 3.0.0. For the Scala API, Spark 3.0.1 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x).

Spark releases from 2.4.0 onward build against Scala 2.12 but still support Scala 2.11:

Spark runs on Java 8, Python 2.7+/3.4+ and R 3.5+. For the Scala API, Spark 2.4.7 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x).

Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0. Support for Scala 2.10 was removed as of 2.3.0. Support for Scala 2.11 is deprecated as of Spark 2.4.1 and will be removed in Spark 3.0.

In practice, the conclusions so far are as follows (a quick way to verify the versions on a running cluster is sketched after the list):

  • When using Scala 2.11.x, use an EMR 5.x release.
  • When using Scala 2.12.x, use an EMR 6.x release.
  • If you use Spark 3.x, Scala must be 2.12; use EMR 6.1.0.
  • If you use Spark 2.x with Scala 2.11, use EMR 5.x.
  • If you use Spark 2.x with Scala 2.12, use EMR 6.0.0.
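To double-check what a given EMR release actually ships, a quick sanity check from spark-shell on the master node is enough. The snippet below is a minimal sketch; nothing EMR-specific is assumed beyond a working spark-shell session.

// Run inside spark-shell on the cluster (a minimal sketch).
// Prints the Spark version and the Scala version the shell runs on,
// so you can confirm which Scala binary suffix (_2.11 or _2.12) your jar needs.
println(s"Spark version: ${spark.version}")
println(s"Scala version: ${scala.util.Properties.versionString}")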

But during this transition period, can we build a single pipeline jar that runs on EMR 5.x and also runs without errors on EMR 6.x?
The answer is: no. Running a jar built against the wrong Scala version on EMR throws NoSuchMethodError: scala.Predef$.refArrayOps. So what do we do?
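For context, this failure does not require anything exotic: ordinary Array and collection code compiles down to calls into scala.Predef and the standard library, so a line like the hypothetical one below, compiled against one Scala binary version and run on a cluster that ships the other, can fail at runtime even though it looks perfectly innocent.

// Hypothetical example: compiled with Scala 2.11 but run on a Scala 2.12 cluster
// (or vice versa), seemingly harmless code can hit binary-incompatible Predef
// methods at runtime instead of failing at compile time.
val names: Array[String] = Array("emr-5.x", "emr-6.x")
println(names.mkString(", ")) // Array ops go through Predef implicit conversions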

crossScalaVersions feature in sbt

The most practical workaround at the moment is to have sbt produce a jar for each Scala version on every build, and then, when triggering EMR, automatically pick the Scala jar that matches the EMR release being run (a sketch of that selection logic appears at the end of the post). The build.sbt below shows the cross-build setup.

import Dependencies._

val ver = "1.0.1"
val org = "com.<company>.<team>.<project>"

lazy val scala212 = "2.12.10"
lazy val scala211 = "2.11.12"
lazy val supportedScalaVersions = Seq(scala211, scala212)

name := "<your_project_name>"
scalaVersion := scala212
crossScalaVersions := supportedScalaVersions

resolvers ++= resolversRepos
autoScalaLibrary := true

lazy val commonSettings = Seq(
  version := ver,
  organization := org,
  crossScalaVersions := supportedScalaVersions,
  test in assembly := {},
  parallelExecution in Test := false
)

lazy val publishSettings = Seq(
  credentials += Credentials(Path.userHome / ".artifactory" / ".credentials"),
  publishTo := {
    val artifactory = "<your_artifact_link>"
    if (version.value.trim.endsWith("SNAPSHOT"))
      Some("snapshots" at artifactory + "/<snapshot_folder>")
    else
      Some("releases" at artifactory + "/<release_folder>")
  }
)

lazy val root = (project in file(".")).
  settings(commonSettings: _*).
  settings(publishSettings: _*).
  settings(
    // assembly jar name setting
    assemblyJarName in assembly := s"${name.value}-${version.value}-Assembly.jar",
    // assembly merge strategy setting
    assemblyMergeStrategy in assembly := {
      case PathList(ps @ _*) if ps.last equals "license.properties" => MergeStrategy.first
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard
      case x => MergeStrategy.first
    }
  ).
  settings(
    // library dependencies and dependency overrides settings
    libraryDependencies ++=
      loggerDep ++
      sparkDeps ++
      awsSDKDep,
    dependencyOverrides ++= Set(scalaTest)
  ).
  settings(
    // publish the assembly jar alongside the regular artifact
    artifact in (Compile, assembly) := {
      val art = (artifact in (Compile, assembly)).value
      art.copy(classifier = Some("assembly"))
    },
    addArtifact(artifact in (Compile, assembly), assembly)
  )

Prefix the sbt tasks with a "+" and sbt will assemble both Scala versions automatically and publish both jars to the artifact repository.

sbt clean update test compile +assembly +publish
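Cross-built artifacts are published with the Scala binary version in their coordinates (_2.11 / _2.12), so the code that triggers the EMR step only has to map the target EMR release to the right suffix. The helper below is a minimal sketch of that selection logic; emrReleaseLabel, jarBaseName, and the jar-naming convention are assumptions to adapt to however your artifacts are actually published.

// Minimal sketch: pick the Scala-suffixed assembly jar for a given EMR release.
// The parameter names and the naming convention are hypothetical; adjust them
// to match your own artifact repository layout.
def assemblyJarFor(emrReleaseLabel: String, jarBaseName: String, version: String): String = {
  val scalaSuffix =
    if (emrReleaseLabel.startsWith("emr-6.")) "2.12" // EMR 6.x ships Scala 2.12 builds of Spark
    else "2.11"                                      // EMR 5.x ships Scala 2.11 builds of Spark
  s"${jarBaseName}_$scalaSuffix-$version-Assembly.jar"
}

// Example: assemblyJarFor("emr-6.1.0", "<your_project_name>", "1.0.1")
//   => "<your_project_name>_2.12-1.0.1-Assembly.jar"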
