- First, install a Hadoop cluster.
- Download the Hive release tarball.
- Extract it to the /opt/hadoop/hive/ directory.
- Configure the environment variables for Hive:
```shell
# jdk
export JAVA_HOME=/opt/jdk/jdk1.8.0_201
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib

# hadoop
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.5
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH

# hbase
export HBASE_HOME=/opt/hadoop/hbase/hbase-1.4.9
export PATH=$HBASE_HOME/bin:$PATH

# hive
export HIVE_HOME=/opt/hadoop/hive/apache-hive-2.3.4-bin
export PATH=$HIVE_HOME/bin:$PATH
export HIVE_CONF_DIR=$HIVE_HOME/conf/
export HIVE_AUX_JARS_PATH=$HIVE_HOME/lib/
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
```
- Enter the Hive configuration directory ($HIVE_HOME/conf):
```shell
sudo mv hive-default.xml.template hive-default.xml
```
Create a new hive-site.xml configuration file (sudo vim hive-site.xml) with the following content. Note that the `&` in the JDBC URL must be written as `&amp;` inside XML:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://mysqlIp:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
```
- Install MySQL; a Docker container is used here for easy testing:
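A common pitfall in the ConnectionURL above is leaving a bare `&` between the JDBC parameters, which makes hive-site.xml malformed XML and causes Hive to fail at startup while parsing its configuration. A quick way to check the escaping (a sketch using a throwaway file, not the real config):

```shell
# Sketch: confirm the JDBC URL escapes '&' as '&amp;'. A bare '&' would
# make hive-site.xml malformed XML. The file written here is a stand-in
# for the real $HIVE_HOME/conf/hive-site.xml.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<value>jdbc:mysql://mysqlIp:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
EOF

if grep -q '&amp;useSSL' "$conf"; then
  echo "JDBC URL is properly escaped"
else
  echo "WARNING: bare & found, write it as &amp;"
fi
```

Run the same grep against your real hive-site.xml after editing it.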
```shell
docker run --name some-mysql -p 3306:3306 \
  -v /opt/docker/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=lxr7293209 \
  -d mysql:5.7 \
  --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
```
- Upload mysql-connector-java-5.1.46-bin.jar to $HIVE_HOME/lib.
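Before moving on to schema initialization, it is worth confirming the connector jar actually landed in Hive's lib directory; without it, schematool fails with a ClassNotFoundException for the MySQL driver. A minimal sketch of the check (simulated here with a temp directory standing in for the real $HIVE_HOME):

```shell
# Sketch: verify the MySQL JDBC driver is present in Hive's lib directory.
# A temp dir stands in for the real install; in practice point HIVE_HOME
# at your actual installation and skip the mkdir/touch setup lines.
HIVE_HOME=$(mktemp -d)                                      # stand-in only
mkdir -p "$HIVE_HOME/lib"
touch "$HIVE_HOME/lib/mysql-connector-java-5.1.46-bin.jar"  # the uploaded jar

if ls "$HIVE_HOME/lib"/mysql-connector-java-*.jar >/dev/null 2>&1; then
  echo "MySQL connector found"
else
  echo "Missing connector: schematool will fail to load com.mysql.jdbc.Driver"
fi
```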
- Initialize the metastore schema: schematool -initSchema -dbType mysql (you can check the result afterwards with schematool -dbType mysql -info).
- Start the Hive CLI: ./hive
- Start HiveServer2: nohup ./hiveserver2 &
If you see the error "root is not allowed to impersonate hive", add the following to Hadoop's core-site.xml and restart HDFS:

```xml
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```
- Connect to the Hive service with a Thrift client: run ./beeline, then enter !connect jdbc:hive2://192.168.91.132:10000 hive hive
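The interactive !connect step can also be collapsed into a single command using Beeline's -u (URL), -n (username), and -p (password) flags. A dry-run sketch (it only prints the command, since executing it requires a live HiveServer2 at that address; host, port, and credentials are the example values from this guide):

```shell
# Sketch (dry run): the one-line equivalent of the interactive !connect
# step. Host/port/credentials are the example values used above.
HS2_URL="jdbc:hive2://192.168.91.132:10000"
echo "beeline -u $HS2_URL -n hive -p hive"
```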