  • How to fix org.apache.spark.sql.AnalysisException: Table or view not found

    20/07/16 17:24:59 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/data/shiseido/spark-warehouse
    20/07/16 17:24:59 INFO metastore.HiveMetaStore: 0: get_database: default
    20/07/16 17:24:59 INFO HiveMetaStore.audit: ugi=mip     ip=unknown-ip-addr      cmd=get_database: default
    20/07/16 17:24:59 INFO metastore.HiveMetaStore: 0: get_database: global_temp
    20/07/16 17:24:59 INFO HiveMetaStore.audit: ugi=mip     ip=unknown-ip-addr      cmd=get_database: global_temp
    20/07/16 17:24:59 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
    20/07/16 17:24:59 INFO metastore.HiveMetaStore: 0: get_database: dw_saas
    20/07/16 17:24:59 INFO HiveMetaStore.audit: ugi=mip     ip=unknown-ip-addr      cmd=get_database: dw_saas
    20/07/16 17:24:59 WARN metastore.ObjectStore: Failed to get database dw_saas, returning NoSuchObjectException
    Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found: `dw_saas`.`anticheat_log`; line 1 pos 33;
    'Project ['logtype, 'uuid, 'rules]
    +- 'Filter ('begindate = 2020-07-09)
       +- 'UnresolvedRelation `dw_saas`.`anticheat_log`
    
            at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
            at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:90)
            at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
            at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
            at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
            at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
            at scala.collection.immutable.List.foreach(List.scala:392)
            at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
            at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
            at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
            at scala.collection.immutable.List.foreach(List.scala:392)
            at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
            at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:85)
            at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
            at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:108)
            at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
            at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
            at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
            at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
            at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
            at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
            at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
            at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
            at com.ipinyou.mip.analyze.spark.base.adserving.task.ImpClk2Base$.anticheat(ImpClk2Base.scala:202)
            at com.ipinyou.mip.analyze.spark.base.adserving.task.ImpClk2Base$.execute(ImpClk2Base.scala:172)
            at com.ipinyou.mip.analyze.spark.base.adserving.task.ImpClk2Base$.main(ImpClk2Base.scala:222)
            at com.ipinyou.mip.analyze.spark.base.adserving.task.ImpClk2Base.main(ImpClk2Base.scala)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
            at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
            at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
            at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
            at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
            at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    20/07/16 17:24:59 INFO spark.SparkContext: Invoking stop() from shutdown hook
    20/07/16 17:24:59 INFO server.AbstractConnector: Stopped Spark@4a67b4ec{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}

    The log reveals the cause: the Hive client's warehouse location resolves to the local path file:/data/shiseido/spark-warehouse, and the metastore returns NoSuchObjectException for dw_saas, which means Spark is talking to a local embedded metastore rather than the real Hive metastore and therefore cannot see the dw_saas database. Check whether hive-site.xml exists in Spark's conf directory; if it does not, copy one over from the conf directory of the Hive installation, then resubmit the job (see the sketch below).
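
    For reference, a minimal sketch of the kind of job that hits this error; the object name AnticheatQuery and the appName are hypothetical, reconstructed from the query shown in the analyzer output above, not the author's actual code. The key point is enableHiveSupport(), which makes the SparkSession read the metastore connection from hive-site.xml on the classpath.

    import org.apache.spark.sql.SparkSession

    // Minimal sketch (hypothetical names, reconstructed from the log above).
    // Without hive-site.xml in Spark's conf directory, the session falls back to
    // a local embedded metastore, and dw_saas.anticheat_log cannot be resolved.
    object AnticheatQuery {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("anticheat")
          .enableHiveSupport() // read metastore settings from hive-site.xml
          .getOrCreate()

        // The same query the analyzer rejected in the stack trace above.
        spark.sql(
          "SELECT logtype, uuid, rules FROM dw_saas.anticheat_log WHERE begindate = '2020-07-09'"
        ).show()

        spark.stop()
      }
    }

    Alternatively, when launching through spark-submit, hive-site.xml can be shipped with the job via --files /path/to/hive-site.xml instead of editing Spark's conf directory.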

  • Original post: https://www.cnblogs.com/144823836yj/p/13324079.html