Hadoop deployment reference: https://www.cnblogs.com/barneywill/p/10428098.html
1 Copy the Hive package to all servers and extract it
# ansible all-servers -m copy -a 'src=/src/path/to/apache-hive-2.3.4-bin.tar.gz dest=/dest/path/to/'
# ansible all-servers -m shell -a 'tar xvf /dest/path/to/apache-hive-2.3.4-bin.tar.gz -C /app/path'
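The commands above assume an existing Ansible inventory group named all-servers; a quick reachability check (optional sketch) before or after the copy:
# ansible all-servers -m ping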
2 Copy mysql-connector-java.jar into Hive's lib directory
# ansible all-servers -m shell -a 'cp /path/to/mysql-connector-java.jar /app/path/apache-hive-2.3.4-bin/lib/'
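Optionally confirm the driver jar landed on every node (a sketch using the same shell module):
# ansible all-servers -m shell -a 'ls -l /app/path/apache-hive-2.3.4-bin/lib/mysql-connector-java.jar'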
3 Prepare the configuration file
hive-site.xml
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://node0:3306/hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://node0:9083</value>
    </property>
</configuration>
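If you do not want the default warehouse location (/user/hive/warehouse on HDFS), hive-site.xml can also carry hive.metastore.warehouse.dir; an optional sketch, the path below is only an illustration:
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/hive/path/warehouse</value>
    </property>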
4 Sync the configuration to all servers
# ansible all-servers -m copy -a 'src=/path/to/config/ dest=/app/path/apache-hive-2.3.4-bin/conf/'
5 Initialize the metastore database
# echo "create database hive;grant all privileges on hive.* to hive@'%' identified by 'hive';"|mysql -uroot -proot
# su - hadoop
$ /app/path/apache-hive-2.3.4-bin/bin/schematool -dbType mysql -initSchema
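To confirm the schema was created, schematool can report the current metastore schema version:
$ /app/path/apache-hive-2.3.4-bin/bin/schematool -dbType mysql -info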
6 Start the metastore
# su - hadoop
$ /app/path/apache-hive-2.3.4-bin/bin/hive --service metastore
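The command above runs in the foreground; to keep the metastore running after logout, a common sketch is nohup plus a port check (9083 matches hive.metastore.uris above; the log path is only an example):
$ nohup /app/path/apache-hive-2.3.4-bin/bin/hive --service metastore > /tmp/metastore.log 2>&1 &
$ netstat -tnlp | grep 9083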
7 Start the Hive Thrift server (HiveServer2)
# su - hadoop
$ /app/path/apache-hive-2.3.4-bin/bin/hive --service hiveserver2
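HiveServer2 also runs in the foreground here (the same nohup pattern applies). To verify it accepts connections, beeline can run a test query; a sketch assuming the default port 10000 and no authentication configured:
$ /app/path/apache-hive-2.3.4-bin/bin/beeline -u jdbc:hive2://node0:10000 -n hadoop -e 'show databases;'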
If running SQL fails with Error: Java heap space, a mapper or reducer has run out of memory; the following parameters can be adjusted temporarily in the session:
set mapreduce.map.memory.mb=3072;
set mapreduce.map.java.opts=-Xmx2048m;
set mapreduce.reduce.memory.mb=3072;
set mapreduce.reduce.java.opts=-Xmx2048m;
The above settings can be made permanent in mapred-site.xml; see the sketch below.
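A hedged sketch of the equivalent mapred-site.xml entries, using the same values as the session settings above:
<configuration>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx2048m</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2048m</value>
    </property>
</configuration>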