1. Installing MySQL on CentOS 7
MySQL installation steps:
- 1. CentOS's default yum repositories do not include MySQL, so download the yum repo configuration file from the MySQL website. Download command:
wget https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
- 2. Then install the repo:
rpm -ivh mysql57-community-release-el7-9.noarch.rpm
After it completes, two repo files are generated under /etc/yum.repos.d/: mysql-community.repo and mysql-community-source.repo.
- 3. Use yum to complete the installation. Note: change into the /etc/yum.repos.d/ directory before running the commands below.
Install command: yum install mysql-server
Start MySQL: systemctl start mysqld    # start the MySQL service
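To confirm the service started and to find the temporary root password that a fresh MySQL 5.7 yum installation writes to its log on first start, a quick check (paths are the CentOS 7 defaults):
systemctl status mysqld                         # should report active (running)
grep 'temporary password' /var/log/mysqld.log   # shows the generated temporary root password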
- 4. To skip password authentication:
vim /etc/my.cnf
(Note: on Windows the file to edit is my.ini.) In the file, search for mysqld to locate the [mysqld] section (in vim, type /mysqld to search), and add "skip-grant-tables" on any line after [mysqld] to skip password authentication. Restart MySQL so the change takes effect: systemctl restart mysqld
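The relevant section of /etc/my.cnf then looks roughly like this (only the skip-grant-tables line is added; the other lines come from the default file shipped by the MySQL RPM):
[mysqld]
skip-grant-tables
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock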
- 5. Grant privileges to the root user. Enter
mysql -u root
to open the mysql prompt (no password is needed while skip-grant-tables is active), then run the following statements:
use mysql;
delete from user where 1=1;
FLUSH PRIVILEGES;    -- reload the grant tables first; GRANT is refused under skip-grant-tables until this is done
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;
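If the GRANT is rejected by MySQL 5.7's password-validation plugin (the short password 'root' may not meet the default policy), relax the policy and re-run the GRANT; you can also verify the account afterwards. A hedged sketch using MySQL 5.7 variable names:
set global validate_password_policy=0;      -- lowest policy: length check only
set global validate_password_length=4;      -- allow the 4-character password 'root'
select user, host from mysql.user;          -- should now list root with host %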
- 6. In the Xshell terminal, enter
mysql -u root
to open the mysql prompt again.
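After the password is set, it is safer to turn password authentication back on. A short sketch, assuming the password 'root' granted above:
vim /etc/my.cnf            # remove or comment out the skip-grant-tables line added earlier
systemctl restart mysqld
mysql -u root -proot       # log in with the new password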
2. Configuring Hive
- 1. Upload the package apache-hive-1.2.1-bin.tar.gz to the /opt/ directory.
- 2. Extract the package into /usr/local/. Extraction command:
tar -zxf apache-hive-1.2.1-bin.tar.gz -C /usr/local/
- 3. Go to the conf directory under the Hive installation directory and rename hive-env.sh.template to hive-env.sh:
mv hive-env.sh.template hive-env.sh
Edit hive-env.sh and append the following line at the end:
export HADOOP_HOME=/usr/local/hadoop-2.6.5
- 4. Place the hive-site.xml file into the /usr/local/apache-hive-1.2.1-bin/conf directory:
cp /opt/hive-site.xml /usr/local/apache-hive-1.2.1-bin/conf/
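The contents of hive-site.xml are not shown here; for reference, a minimal sketch of the metastore connection properties it typically contains, assuming MySQL runs on this node (localhost), a metastore database named hive, and the root/root credentials created above:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>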
- 5. Upload the MySQL JDBC driver mysql-connector-java-5.1.32-bin.jar to the /usr/local/apache-hive-1.2.1-bin/lib directory:
cp /opt/mysql-connector-java-5.1.32-bin.jar /usr/local/apache-hive-1.2.1-bin/lib/
- 6. Run the following commands in a terminal to replace Hadoop's bundled jline-0.9.94.jar with Hive's newer jline-2.12.jar (the old jline conflicts with Hive 1.2.1 and breaks the Hive CLI), and distribute it to the slave nodes:
mv /usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/jline-0.9.94.jar /usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
cp /usr/local/apache-hive-1.2.1-bin/lib/jline-2.12.jar /usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/
scp /usr/local/apache-hive-1.2.1-bin/lib/jline-2.12.jar slave1:/usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/
scp /usr/local/apache-hive-1.2.1-bin/lib/jline-2.12.jar slave2:/usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/
scp /usr/local/apache-hive-1.2.1-bin/lib/jline-2.12.jar slave3:/usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/
- 7. Set the environment variables: run vi /etc/profile and add:
export HIVE_HOME=/usr/local/apache-hive-1.2.1-bin
export PATH=$HIVE_HOME/bin:$PATH
Run source /etc/profile to make the configuration take effect.
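A quick check that the variables are in effect:
echo $HIVE_HOME    # should print /usr/local/apache-hive-1.2.1-bin
which hive         # should resolve to $HIVE_HOME/bin/hive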
- 8. Initialize the metastore database. Go to the bin directory of the Hive installation and run:
./schematool -dbType mysql -initSchema
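To confirm the metastore schema was created, either side can be checked; a hedged sketch, assuming the metastore database is named hive as in the sample hive-site.xml above:
./schematool -dbType mysql -info                  # prints the metastore connection info and schema version
mysql -u root -proot -e "use hive; show tables;"  # metastore tables such as DBS and TBLS should be listed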
- 9. With the Hadoop cluster already running, start Hive:
hive --service metastore &    # start the metastore service; jps then shows an extra RunJar process (e.g. 11834 RunJar)
nohup hive --service hiveserver2 &
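To check that both services came up and to connect to HiveServer2, a sketch assuming the default HiveServer2 port 10000:
jps                                               # should show two additional RunJar processes (metastore and hiveserver2)
beeline -u jdbc:hive2://localhost:10000 -n root   # connect to HiveServer2 with Beeline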
3. Using Hive
- Type the hive command to enter the Hive CLI; from there you can create databases and tables with SQL-like (HiveQL) statements:
use <database_name>;
hive> create table people(
> num int,
> name string,
> age int)
> row format delimited fields terminated by ','; # create the table; fields are delimited by ','
hive> load data inpath '/sparkdata/person.txt' into table people;    # load the data file from HDFS into the table
hive> select * from people;
The table directory can then be seen in HDFS under /user/hive/warehouse/.
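For the load data inpath statement above to work, the file must already exist in HDFS. A minimal sketch with made-up sample rows (num,name,age, comma-separated as declared in the table definition):
echo -e "1,zhangsan,20\n2,lisi,25\n3,wangwu,30" > person.txt    # hypothetical sample rows matching the people schema
hdfs dfs -mkdir -p /sparkdata
hdfs dfs -put person.txt /sparkdata/
Note that load data inpath (without LOCAL) moves the file from /sparkdata into the table's warehouse directory rather than copying it.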