  • Hadoop Installation and Deployment (3): Installing Hive

    Install MySQL

    MySQL is installed on the master node.

    1) Remove the MySQL packages bundled with the system; uninstall only the packages whose names start with mysql.

    rpm -qa|grep -i mysql

    The -i flag makes the match case-insensitive.

    You can see two installed packages:

    MySQL-server-5.6.19-1.linux_glibc2.5.x86_64.rpm

    MySQL-client-5.6.19-1.linux_glibc2.5.x86_64.rpm

    Remove these two packages (drop the .rpm suffix):

    rpm -e MySQL-client-5.6.19-1.linux_glibc2.5.x86_64

    rpm -e MySQL-server-5.6.19-1.linux_glibc2.5.x86_64

    Check for leftover directories:

    whereis mysql

    Then delete the mysql directory:

    rm -rf /usr/lib64/mysql

    Delete the related files:

    rm -rf /usr/my.cnf

    rm -rf /root/.mysql_secret

    Most importantly:

    rm -rf /var/lib/mysql

    Note: also remove the mariadb packages bundled with CentOS 7:

    rpm -qa | grep mariadb

    rpm -e --nodeps mariadb-libs-5.5.44-2.el7.centos.x86_64

    2) Install the MySQL dependencies:

    yum install vim libaio net-tools

    3) Install the MySQL 5.5.39 RPM packages:

    rpm -ivh /opt/MySQL-server-5.5.39-2.el6.x86_64.rpm

    rpm -ivh /opt/MySQL-client-5.5.39-2.el6.x86_64.rpm

    4) Copy the configuration file:

    cp /usr/share/mysql/my-medium.cnf /etc/my.cnf

    5) Start the MySQL service:

    service mysql start

    6) Enable MySQL to start at boot:

    chkconfig mysql on

    7) Set the root user's login password:

    /usr/bin/mysqladmin -u root password 'root'

    8) Log in to MySQL as the root user:

    mysql -uroot -proot

    Install Hive

    Hive is installed on the master node.

    1) In MySQL, create the hive user, the hive database, and the related grants:

    insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));

    create database hive;

    grant all on hive.* to hive@'%' identified by 'hive';

    grant all on hive.* to hive@'localhost' identified by 'hive';

    flush privileges;

    2) Exit MySQL:

    exit
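
    A quick sanity check (not part of the original steps) is to confirm that the new hive account can actually connect with the grants issued above:

    # optional: verify that the hive user created above can log in
    mysql -uhive -phive -e "show databases;"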

    3) Add the environment variables for Hive.
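
    A minimal sketch, assuming Hive is unpacked at /opt/apache-hive-1.2.1-bin (the path used later in this post); append these lines to /etc/profile (or ~/.bashrc) and reload:

    # Hive environment variables (adjust the path if your install location differs)
    export HIVE_HOME=/opt/apache-hive-1.2.1-bin
    export PATH=$PATH:$HIVE_HOME/bin

    # make the new variables take effect in the current shell
    source /etc/profile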

    4) Edit hive-site.xml.
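
    If hive-site.xml does not exist yet, a common way to create it (an assumption, not spelled out in the original steps) is to copy the template shipped with Hive and then set the properties listed below:

    # create hive-site.xml from the bundled template before editing it
    cp /opt/apache-hive-1.2.1-bin/conf/hive-default.xml.template /opt/apache-hive-1.2.1-bin/conf/hive-site.xml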

    <property>

    <name>javax.jdo.option.ConnectionURL</name>

    <value>jdbc:mysql://localhost:3306/hive</value>

    <description>JDBC connect string for a JDBC metastore</description>

    </property>

    <property>

    <name>javax.jdo.option.ConnectionDriverName</name>

    <value>com.mysql.jdbc.Driver</value>

    <description>Driver class name for a JDBC metastore</description>

    </property>

    <property>

    <name>javax.jdo.option.ConnectionPassword</name>

    <value>hive</value>

    <description>password to use against metastore database</description>

    </property>

    <property>

    <name>hive.hwi.listen.port</name>

    <value>9999</value>

    <description>This is the port the Hive Web Interface will listen on</description>

    </property>

    <property>

    <name>datanucleus.autoCreateSchema</name>

    <value>true</value>

    <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>

    </property>

    <property>

    <name>datanucleus.fixedDatastore</name>

    <value>false</value>

    <description/>

    </property>

    <property>

    <name>javax.jdo.option.ConnectionUserName</name>

    <value>hive</value>

    <description>Username to use against metastore database</description>

    </property>

    <property>

    <name>hive.exec.local.scratchdir</name>

    <value>/opt/apache-hive-1.2.1-bin/iotmp</value>

    <description>Local scratch space for Hive jobs</description>

    </property>

    <property>

    <name>hive.downloaded.resources.dir</name>

    <value>/opt/apache-hive-1.2.1-bin/iotmp</value>

    <description>Temporary local directory for added resources in the remote file system.</description>

    </property>

    <property>

    <name>hive.querylog.location</name>

    <value>/opt/apache-hive-1.2.1-bin/iotmp</value>

    <description>Location of Hive run time structured log file</description>

    </property>

    5) Copy mysql-connector-java-5.1.6-bin.jar into Hive's lib directory:

    mv /home/hadoop/Desktop/mysql-connector-java-5.1.6-bin.jar /opt/apache-hive-1.2.1-bin/lib/

    6) Copy jline-2.12.jar into the corresponding Hadoop directory to replace jline-0.9.94.jar; otherwise Hive will fail to start:

    cp /opt/apache-hive-1.2.1-bin/lib/jline-2.12.jar /opt/hadoop-2.6.3/share/hadoop/yarn/lib/

    mv /opt/hadoop-2.6.3/share/hadoop/yarn/lib/jline-0.9.94.jar /opt/hadoop-2.6.3/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
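
    As a quick check (not from the original post), confirm that only the 2.12 version of jline remains active on Hadoop's YARN classpath:

    # only jline-2.12.jar (plus the renamed .bak file) should be listed
    ls /opt/hadoop-2.6.3/share/hadoop/yarn/lib/ | grep jline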

    7) Create the Hive temporary directory:

    mkdir /opt/apache-hive-1.2.1-bin/iotmp

    8) Start and test Hive

    After starting Hadoop, run the hive command:

    hive

    As a test, enter show databases;

    hive> show databases;

    OK

    default

    Time taken: 0.907 seconds, Fetched: 1 row(s)
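
    As a final check (an assumption, not shown in the original post), the first hive session should have created the metastore schema in MySQL, since datanucleus.autoCreateSchema is set to true; this can be confirmed from the shell on the master node:

    # the hive database should now contain metastore tables such as DBS and TBLS
    mysql -uhive -phive -e "use hive; show tables;"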
