Hive on Spark configuration error: Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam



1. Running a SQL statement produces the following error:

hive> insert into table student values(1,'abc');
Query ID = atguigu_20200814150018_318272cf-ede4-420c-9f86-c5357b57aa11
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.

Cause: this setup pairs Hive 3.1.2 with Spark 3.0.0. Hive 3.1.2's Spark client still references org.apache.spark.AccumulatorParam, a deprecated API that Spark 3.0 removed, so the class can no longer be found at runtime. To use this pairing you have to compile Hive yourself.
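You can confirm this is the problem by checking whether the class exists in the Spark jars you installed. A minimal diagnostic sketch; the spark-core jar name below assumes the Spark 3.0.0 binary distribution built with Scala 2.12, so adjust it to whatever actually sits in your $SPARK_HOME/jars:

    # List the jar's contents and search for the class. On Spark 3.x this
    # prints nothing, because the deprecated accumulator API was removed
    # in Spark 3.0; on Spark 2.x it prints the AccumulatorParam classes.
    jar tf $SPARK_HOME/jars/spark-core_2.12-3.0.0.jar | grep AccumulatorParam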

It is best to stick to the Hive + Spark version pairings that are officially released together.

That is, install a Hive that was compiled against the corresponding Spark version. The pairings currently recommended on the official site are:

Hive Version    Spark Version
1.1.x           1.2.0
1.2.x           1.3.1
2.0.x           1.5.0
2.1.x           1.6.0
2.2.x           1.6.0
2.3.x           2.0.0
3.0.x           2.3.0
master          2.3.0
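If you do need a pairing outside this table (such as Hive 3.1.2 with Spark 3.0.0), the only option is to rebuild Hive from source against the newer Spark. A minimal sketch of that build, assuming you have checked out the Hive 3.1.2 source tree and already bumped the spark.version property in the top-level pom.xml from 2.3.0 to 3.0.0 (the removed accumulator API may also force small code fixes in the spark-client module):

    # Build the full Hive distribution without running tests.
    mvn clean package -Pdist -DskipTests -Dmaven.javadoc.skip=true
    # The rebuilt tarball is produced under packaging/target/.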

If the versions do match and the error still appears:

If HDFS HA is configured, hive-site.xml should contain:


<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://mycluster/spark-jars/*</value>
</property>
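
After adding the property, the jars actually have to exist at that HDFS path, or the Spark client will have nothing to load. A sketch of the upload, assuming Spark is installed at /opt/module/spark and the HA nameservice is mycluster as in the property above; adjust both to your layout:

    # Create the directory on HDFS and upload every Spark jar into it.
    hdfs dfs -mkdir -p hdfs://mycluster/spark-jars
    hdfs dfs -put /opt/module/spark/jars/* hdfs://mycluster/spark-jars/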

If none of the above is the cause:

Delete Hive and reinstall it. It may be that you ran mv on the extracted directory before the tarball had finished extracting, which leaves the lib directory with an incomplete set of jars.
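A sketch of a clean reinstall, assuming the tarball sits in /opt/software and Hive lives under /opt/module; substitute your own paths:

    # Remove the broken install and re-extract, letting tar finish
    # completely before moving the directory.
    rm -rf /opt/module/hive
    tar -zxvf /opt/software/apache-hive-3.1.2-bin.tar.gz -C /opt/module/
    mv /opt/module/apache-hive-3.1.2-bin /opt/module/hive
    # Sanity check: lib should hold several hundred jars, not a handful.
    ls /opt/module/hive/lib | wc -l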
