Library compatibility issues when importing and using a fat jar in Zeppelin

We run a fat jar on EMR that holds all of our Spark jobs, plus helper methods for standardized reads and writes of datasets to S3.

While prototyping with Zeppelin, we want to make those helper methods available to our engineers and data scientists, so I made a few adjustments to import the jar directly into Zeppelin. I made sure we build against the same SparkSession and Spark version that Zeppelin uses (Spark 2.4.0), and the same hadoop-aws version, but I still hit this error whenever we run any groupBy on Zeppelin (selects and direct agg calls work fine):

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 6.0 failed 4 times, most recent failure: Lost task 1.3 in stage 6.0 (TID 286, ip-10-251-105-128.ec2.internal, executor 1): java.io.InvalidClassException: org.apache.spark.sql.execution.FileSourceScanExec; local class incompatible: stream classdesc serialVersionUID = 2742886231279821360, local class serialVersionUID = 4939607015738524857

My guess is that loading the fat jar causes a library mismatch between the executors and the driver. I'm just hunting for the culprit and hoping for another pair of eyes.
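One way to confirm that two different spark-sql builds are on the classpath is to compare the serialVersionUID each JVM computes for the failing class. The helper below is a sketch (the function name is mine): run it once in a Zeppelin paragraph and once in spark-shell on the EMR master, passing "org.apache.spark.sql.execution.FileSourceScanExec"; differing numbers confirm the mismatch.

```scala
import java.io.ObjectStreamClass

// Prints the serialVersionUID the local classpath computes for a class.
// Compare the result across Zeppelin and spark-shell on the cluster.
def localSerialVersionUID(className: String): Long =
  ObjectStreamClass.lookup(Class.forName(className)).getSerialVersionUID

// Example with a JDK class that is always on the classpath:
println(localSerialVersionUID("java.lang.String"))
```

On the cluster you would call it with the Spark class from the stack trace instead of java.lang.String.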

Here is what the EMR cluster runs:

Release label: emr-5.23.0
Hadoop distribution: Amazon 2.8.5
Applications: Ganglia 3.7.2, Spark 2.4.0, Zeppelin 0.8.1
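Given those versions, the build's `sparkVersion` variable (referenced throughout the build file below) should resolve to the cluster's exact release; a sketch:

```scala
val sparkVersion = "2.4.0"  // matches Spark on emr-5.23.0 and Zeppelin 0.8.1
```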

Here are the dependencies defined in my fat jar's build file:

libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "2.8.5"
libraryDependencies += "org.apache.spark" %% "spark-hive" % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVersion
libraryDependencies += "com.typesafe" % "config" % "1.3.0"
libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.11.189"
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-core" % "2.6.5"
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-databind" % "2.6.5"
libraryDependencies += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.6.5"
libraryDependencies += "com.clearspring.analytics" % "stream" % "2.9.5"
libraryDependencies += "com.company" % "company-shared" % "1.0.98-SNAPSHOT"
libraryDependencies += "com.company" % "companycalc" % "1.0.98-SNAPSHOT"
libraryDependencies ++= Seq(
  "org.slf4s" %% "slf4s-api" % "1.7.12",
  "ch.qos.logback" % "logback-classic" % "1.1.2" excludeAll ExclusionRule(organization = "io.netty")
)
libraryDependencies += "org.scalanlp" %% "breeze" % "0.13.2"
libraryDependencies += "org.scalatest" %% "scalatest-all" % "3.0.0-M11"

libraryDependencies += "org.apache.spark" %% "spark-mllib" % sparkVersion % "runtime"
libraryDependencies += "io.spray" %% "spray-json" % "1.3.5"

dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.6.5"
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.6.5"
dependencyOverrides += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.6.5"

libraryDependencies += "io.prometheus" % "simpleclient" % "0.6.0"
libraryDependencies += "io.prometheus" % "simpleclient_common" % "0.6.0"
libraryDependencies += "io.prometheus" % "simpleclient_pushgateway" % "0.6.0"
//libraryDependencies += "com.julianpeeters" %% "avrohugger-core" % "1.0.0-RC18"

libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % "test"

Things I have already tried:

Making spark-sql a provided dependency, but if I do that, Zeppelin actually crashes on load. I suspect this is the real culprit; I just need to figure out how to sync the versions.
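For reference, the usual fat-jar convention on EMR is to mark every Spark module as provided so the cluster's own Spark 2.4.0 classes are the only ones on the classpath. A sketch in the style of the build file above (the crash on load would still need to be diagnosed separately, e.g. a transitive dependency that leaks spark-sql back in):

```scala
// Sketch: bundle none of Spark itself; the EMR cluster supplies it at runtime.
libraryDependencies += "org.apache.spark" %% "spark-core"  % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql"   % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-hive"  % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % sparkVersion % "provided"
```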
