
spark-sql: Fixing the "java.lang.NoSuchFieldError: out" exception

Exception symptoms

        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:847)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:922)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:931)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoSuchFieldError: out
        at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:221)
        at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:127)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:314)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:433)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:326)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:219)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:219)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:219)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
        ... 59 more
【ERROR】spark.sql hql error
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.NoSuchFieldError: out;
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
        at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:218)
        at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:138)
        at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:126)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.org$apache$spark$sql$hive$HiveSessionStateBuilder$$externalCatalog(HiveSessionStateBuilder.scala:39)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$1.apply(HiveSessionStateBuilder.scala:54)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$1.apply(HiveSessionStateBuilder.scala:54)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:90)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:90)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.databaseExists(SessionCatalog.scala:243)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.org$apache$spark$sql$catalyst$catalog$SessionCatalog$$requireDbExists(SessionCatalog.scala:177)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:432)
        at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.getMetaData(ExternalCatalogUtils.scala:265

The Spark application

package org.example.spark;

import java.util.Base64;

import org.apache.spark.sql.SparkSession;

public class JavaSparkHiveExample {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();

        // args[0] carries the SQL statement, Base64-encoded.
        byte[] decodedBytes = Base64.getDecoder().decode(args[0]);
        String sql = new String(decodedBytes);
        System.out.println("sql:" + sql);

        // args[1] toggles spark.sql.hive.loadStagingDirectory.enabled.
        SparkSession spark = SparkSession.builder()
                .config("spark.sql.hive.loadStagingDirectory.enabled", args[1])
                .appName("Java Spark Hive Example")
                .enableHiveSupport()
                .getOrCreate();

        spark.sql(sql);

        long end = System.currentTimeMillis();
        System.out.println("cost time:" + (end - start));
    }
}
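
For reference, args[0] must be a Base64-encoded SQL string. A minimal sketch of how a caller might produce that argument (the example statement and the UTF-8 charset choice are assumptions, not taken from the original program):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncodeSqlArg {
    public static void main(String[] args) {
        // Hypothetical HQL statement to be executed by JavaSparkHiveExample.
        String sql = "SELECT count(*) FROM default.some_table";
        // Base64-encode it so it can be passed safely as the first program argument.
        String encoded = Base64.getEncoder()
                .encodeToString(sql.getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded);
    }
}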

Cause

Version incompatibility: the Spark version on the CDP cluster is 2.4.7, while the Java project depends on 2.4.0.
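
If it is unclear which Spark version the cluster actually runs, one quick check (a minimal sketch, not part of the original fix) is to print the runtime version reported by the SparkSession and compare it with the version declared in pom.xml:

import org.apache.spark.sql.SparkSession;

public class PrintSparkVersion {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("Print Spark Version")
                .getOrCreate();
        // spark.version() reports the version of the Spark jars actually on the
        // cluster classpath, which is what the project dependency must match.
        System.out.println("runtime spark version: " + spark.version());
        spark.stop();
    }
}

Running spark-submit --version on a cluster node reports the same information.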


Solution

Update the Spark dependencies in the Java project as follows:

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.11</artifactId>
        <version>2.4.7.7.1.7.2000-305</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.4.7.7.1.7.2000-305</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.4.7.7.1.7.2000-305</version>
    </dependency>
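
The version 2.4.7.7.1.7.2000-305 appears to be the Cloudera (CDP) build of Spark 2.4.7; depending on the vendor build rather than the vanilla 2.4.7 artifact keeps the classes the application compiles against aligned with the jars deployed on the cluster, which is what eliminates the NoSuchFieldError.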