While running some simple SQL-related test programs (on Spark 1.3.1, which the project required), I hit the following exception, which seemed odd at first:
scala.reflect.internal.MissingRequirementError: class org.apache.spark.sql.catalyst.ScalaReflection in JavaMirror
    at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16)
    at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17)
    at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48)
And the program is pretty simple:
val x = sqlCtx.udf.register("cube", (num: Integer) => num * num * num)
sqlCtx.sql("select cube(4) c").show()
I then ran the same code in the spark-shell REPL, where it worked fine. So what is the difference between the REPL and the IDE?
My first suspect was the set of Spark jars on the classpath, so I ran the code again in a new project that references Spark's assembly jar. That also completed successfully, so the jars were clearly part of the story.
Looking at the exception stack trace again, I noticed that one of the frames in the call path is RootsBase.getModuleOrClass (shown above). After digging into the Spark source, the insight hit me: this is a class/classloader compatibility problem.
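To see how this error arises, here is a minimal scala-reflect sketch (the object name MissingClassDemo is mine, not from Spark): asking a JavaMirror for a class its classloader cannot see fails through exactly the getModuleOrClass path in the stack trace above.

```scala
import scala.reflect.runtime.universe._
import scala.reflect.internal.MissingRequirementError

object MissingClassDemo {
  // Ask a JavaMirror to resolve a fully qualified class name. If the mirror's
  // classloader cannot see the class, Mirrors$RootsBase.getModuleOrClass
  // signals MissingRequirementError -- the same frame as in the trace above.
  def lookup(fqcn: String): Either[String, ClassSymbol] = {
    val mirror = runtimeMirror(getClass.getClassLoader)
    try Right(mirror.staticClass(fqcn))
    catch { case e: MissingRequirementError => Left(e.getMessage) }
  }

  def main(args: Array[String]): Unit = {
    // Without Spark on the classpath, this reports a "not found" message:
    println(lookup("org.apache.spark.sql.catalyst.ScalaReflection"))
    // A class the classloader can see resolves normally:
    println(lookup("scala.collection.immutable.List"))
  }
}
```

The same failure mode applies even when Spark *is* on the application classpath, if the mirror being used was built from a different classloader (the IDE's) that cannot see it.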
After some googling, I found that this is a known DataFrame bug:
Registering table on RDD is giving MissingRequirementError [spark-5281]
ie. https://github.com/apache/spark/pull/5981
Replaced calls to typeOf with typeTag[T].in(mirror). The convenience method assumes all types can be found in the classloader that loaded scala-reflect (the primordial classloader). This assumption is not valid in all contexts (sbt console, Eclipse launchers).
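The PR's fix can be illustrated with a small sketch (names like MirrorDemo are mine): typeOf[T] resolves T against the classloader that loaded scala-reflect (the "primordial" classloader the PR mentions), whereas typeTag[T].in(mirror) resolves it in an explicitly chosen mirror.

```scala
import scala.reflect.runtime.universe._

object MirrorDemo {
  // typeOf[T] assumes T is visible to the classloader that loaded
  // scala-reflect. Under sbt console or an Eclipse launcher, application
  // classes may be invisible there, triggering MissingRequirementError.
  val viaTypeOf: Type = typeOf[List[Int]]

  // The SPARK-5281 approach: build a mirror from a classloader that can
  // actually see the application's classes, and resolve the tag in it.
  val mirror: Mirror = runtimeMirror(getClass.getClassLoader)
  val viaMirror: Type = typeTag[List[Int]].in(mirror).tpe

  def main(args: Array[String]): Unit =
    // In a plain JVM launch both resolve to the same type; the two paths
    // only diverge when the classloaders differ, as in the IDE case.
    println(viaTypeOf =:= viaMirror)
}
```

In a plain JVM launch both paths agree; the bug only surfaces in environments where the application classloader differs from the one that loaded scala-reflect.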
Solutions:
1. Replace the Scala jars the IDE ships with by the ones from your manually installed Scala distribution. (I found that all the Scala jars referenced by the project were located in the Eclipse plugins directory.)
2. Upgrade to Spark 1.4.1+.
In short, replacing either the Scala-related jars or the Spark jars does the trick.
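For option 2, the upgrade is a one-line dependency change. A minimal build.sbt sketch, assuming an sbt project on Scala 2.10 (the default build target for Spark 1.x; adjust versions to your setup):

```scala
// build.sbt -- bump Spark past 1.4.1, where SPARK-5281 is fixed
scalaVersion := "2.10.5"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.4.1"
```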