  • Connecting Spark to a Hive database

    When Hive runs a query, it fails with: java.lang.IllegalArgumentException: Wrong FS: hdfs://node1:9000/user/hive/warehouse/test1.db/t1, expected: hdfs://cluster1

    The cause: Hadoop was converted from an ordinary cluster to a high-availability (HA) cluster, but the HDFS path where Hive stores its warehouse was never updated to match.
    The fix is to edit the value of hive.metastore.warehouse.dir in hive-site.xml.

    Change the old hdfs://k200:9000/user/hive/warehouse to hdfs://k131/user/hive/warehouse.
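    The hive-site.xml listed below does not actually contain this property, so in that case it has to be added rather than edited. A minimal sketch, using the hostname from the example above:

```xml
<property>
    <name>hive.metastore.warehouse.dir</name>
    <!-- After the HA migration, use the nameservice URI (no NameNode port) -->
    <value>hdfs://k131/user/hive/warehouse</value>
</property>
```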

    (The hdfs://cluster1 here is the value specified by fs.defaultFS in Hadoop's core-site.xml.)
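    For reference, that setting in core-site.xml looks like the following sketch, assuming the HA nameservice is named cluster1 as in the error message:

```xml
<property>
    <name>fs.defaultFS</name>
    <!-- HA nameservice ID, resolved via dfs.nameservices in hdfs-site.xml -->
    <value>hdfs://cluster1</value>
</property>
```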

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://k131:3306/metastore?createDatabaseIfNotExist=true</value>
            <description>JDBC connect string for a JDBC metastore</description>
        </property>

        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
            <description>Driver class name for a JDBC metastore</description>
        </property>

        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>root</value>
            <description>username to use against metastore database</description>
        </property>

        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>root</value>
            <description>password to use against metastore database</description>
        </property>

        <property>
            <name>hive.cli.print.header</name>
            <value>true</value>
        </property>

        <property>
            <name>hive.cli.print.current.db</name>
            <value>true</value>
        </property>

        <property>
            <name>hive.exec.mode.local.auto</name>
            <value>true</value>
        </property>

        <property>
            <name>hive.zookeeper.quorum</name>
            <value>k131</value>
            <description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
        </property>

        <property>
            <name>hive.zookeeper.client.port</name>
            <value>2181</value>
            <description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
        </property>

    </configuration>
    hive-site.xml

    Spark cannot read the existing contents of the Hive tables; only newly created tables work:

    hive (default)> select * from emp;
    FAILED: SemanticException Unable to determine if hdfs://k200:9000/user/hive/warehouse/emp is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://k200:9000/user/hive/warehouse/emp, expected: hdfs://k131:9000
    hive (default)>
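    The post stops at recreating the tables, but the underlying issue is that the metastore records each table's LOCATION as an absolute URI with the old NameNode address. A hedged alternative, not from the original post: Hive ships a metatool that can rewrite those URIs in bulk (back up the metastore database first; hostnames below are taken from the error message above):

```
# Show the filesystem roots currently recorded in the metastore
hive --service metatool -listFSRoot

# Rewrite recorded locations from the old NameNode URI to the HA nameservice
hive --service metatool -updateLocation hdfs://cluster1 hdfs://k200:9000
```

    After this, existing tables should resolve against the new hdfs://cluster1 nameservice instead of failing with Wrong FS.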

  • Original post: https://www.cnblogs.com/Vowzhou/p/10882160.html