  • Installing and uninstalling MySQL on Linux, setting the password, and testing Hive

    Uninstalling MySQL on a Linux system

        1. First remove MySQL and its related components via yum: yum remove mysql*
        2. Use rpm -qa | grep -i mysql to list any remaining MySQL-related packages on the system.
        3. Remove each remaining package with: sudo rpm -e --nodeps <package-name>
        4. Uninstalling does not remove /etc/my.cnf; delete it manually with rm -rf /etc/my.cnf.
        You also need to delete the configuration file /etc/my.cnf and the data directory /var/lib/mysql: rm -rf <file-or-directory>
        5. Finally, run rpm -qa | grep -i mysql again to confirm that no MySQL-related packages remain; if nothing is listed, the uninstall is clean (a combined sketch of these steps follows this list).
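        A minimal shell sketch of the uninstall sequence above, assuming a yum/rpm-based system and a root shell; the example package name is only an illustration, substitute whatever rpm -qa reports:

        # Remove MySQL packages installed via yum
        yum remove -y mysql*
        # List any remaining MySQL-related rpm packages
        rpm -qa | grep -i mysql
        # Remove leftovers one by one (example name, adjust to your output)
        # rpm -e --nodeps MySQL-server-5.6.30-1.linux_glibc2.5.x86_64
        # Delete the config file and the data directory
        rm -rf /etc/my.cnf /var/lib/mysql
        # Confirm nothing is left
        rpm -qa | grep -i mysql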
    

    Installing MySQL on a Linux system

        1. Download the Linux version of MySQL. Note: the downloaded archive must be uploaded to the Linux machine first, then extracted with tar -zxvf <archive-name>.
        2. After logging in to the Linux system, switch to the root user, who has the elevated privileges needed to manage system services: su - root, press Enter, then type the password.
        3. Check whether MySQL is already installed: rpm -qa | grep mysql   or   rpm -qa | grep -i mysql
        4. Install with: rpm -ivh <package-name>
        Both the MySQL server and the client need to be installed:
        rpm -ivh MySQL-server-5.6.30-1.linux_glibc2.5.x86_64.rpm
        rpm -ivh MySQL-client-5.6.30-1.linux_glibc2.5.x86_64.rpm
        Note: the client must be installed, otherwise you cannot enter MySQL from the Linux command line; running mysql will just report an error.
        Start the MySQL service: service mysql start
        5. After installation, use netstat -nat to inspect the listening ports on Linux and confirm that port 3306 is being listened on (see the quick check after this list).
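        A quick check that the install worked; note that netstat may require the net-tools package on minimal systems:

        # confirm the service is running and port 3306 is being listened on
        service mysql status
        netstat -nat | grep 3306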
    	
        Installation via yum install
    	
        1.1. Check whether MySQL is already installed: yum list installed mysql*
        1.2. Check whether an installation package is available: yum list mysql*
        1.3. Install the MySQL client: yum install mysql
        1.4. Install the MySQL server: yum install mysql-server
        1.5. Set the database character set by adding default-character-set=utf8 to the MySQL configuration file /etc/my.cnf (note: on MySQL 5.5 and later the server-side option is character-set-server=utf8; default-character-set only applies to the [client]/[mysql] sections)
        1.6. Start the MySQL service: service mysqld start   or   /etc/init.d/mysqld start
        1.7. Enable start at boot: sudo chkconfig mysqld on; verify with chkconfig --list | grep mysql*
        mysqld             0:off    1:off    2:on    3:on    4:on    5:on    6:off
        1.8. Stop the service: service mysqld stop
        1.9. Set the root administrator password on first use: mysqladmin -u root password 123456
        Log in with: mysql -u root -p and enter the password when prompted
        2.0. If you forget the root password (a consolidated sketch follows this list):
        service mysqld stop
        mysqld_safe --user=root --skip-grant-tables
        mysql -u root
        use mysql
        update user set password=password("new_pass") where user="root";
        flush privileges;
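        A hedged one-shot version of the same reset, run as root; it assumes the MySQL 5.6-era mysql.user.password column (on 5.7+ the column is authentication_string) and that mysqld_safe is available. new_pass is a placeholder:

        service mysqld stop
        # start the server without privilege checks, in the background
        mysqld_safe --user=root --skip-grant-tables &
        sleep 5   # give the server a moment to come up
        mysql -u root -e "UPDATE mysql.user SET password=PASSWORD('new_pass') WHERE user='root'; FLUSH PRIVILEGES;"
        # shut down the temporary instance and start the service normally
        mysqladmin -u root -pnew_pass shutdown
        service mysqld start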
    

    Changing the initial MySQL password

        Note: stop your MySQL service first: service mysql stop   or   /etc/init.d/mysqld stop

        1. If you cannot log in as root, you can change the initial password using something like safe mode: run mysqld_safe --skip-grant-tables &  (this starts the server in a kind of safe mode). The & runs it in the background; if you do not run it in the background, just open another terminal.
    	
    	<1># mysql
    	mysql> use mysql;
        mysql> UPDATE user SET password=password("123456") WHERE user='root';    (it will report Query OK)
    	mysql> flush privileges;
    	mysql> exit;
    	
        <2> Outside the mysql shell, use mysqladmin:
        # mysqladmin -u root -p password "test123"
        Enter password: [enter the old password]
    	
        <3> If you can log in to MySQL, change the password from inside the mysql shell:
        # mysql -uroot -p
        Enter password: [enter the old password]
    	mysql>use mysql;
    	mysql> update user set password=password("123456") where user='root';
    	mysql> flush privileges;
    	mysql> exit; 
    	
        2. Add MySQL to the system startup items: chkconfig mysql on; check that it was added with chkconfig --list | grep mysql
        3. Log in to MySQL: mysql -uroot -p, press Enter, then type your password.
        4. Add a system mysql group and mysql user: groupadd mysql and useradd -r -g mysql mysql
        5. Change the owner of the data directory to the mysql user: chown -R mysql:mysql data
        6. Put the mysql client on the default path: ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
        Note: use a symlink here rather than copying the binary directly, so that multiple MySQL versions can be installed side by side (a short verification sketch follows this list).
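        A minimal sketch tying steps 4-6 together and verifying the result; the /usr/local/mysql layout and data path are assumptions based on the steps above, so adjust them to your install:

        groupadd mysql
        useradd -r -g mysql mysql
        chown -R mysql:mysql /usr/local/mysql/data    # assumed data directory
        ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
        which mysql && mysql --version                # verify the client is on PATH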
    

    MySQL service status, start, stop, and restart commands

        service mysql start     or    /etc/init.d/mysql start
        service mysql stop      or    /etc/init.d/mysql stop
        service mysql restart   or    /etc/init.d/mysql restart
        service mysql status    or    /etc/init.d/mysql status
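        On newer systemd-based distributions these SysV-style commands may not exist; the rough systemctl equivalents are shown below (the unit may be named mysql, mysqld, or mariadb depending on the package):

        systemctl start mysqld
        systemctl stop mysqld
        systemctl restart mysqld
        systemctl status mysqld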
    

    Installing and configuring Hive

        1. Start and set up MySQL
        Start the MySQL service:
    	sudo service mysql start
    
        2. Enable it to start at boot
    	sudo chkconfig mysql on
    
        3. Set the root user's login password
    	sudo /usr/bin/mysqladmin -u root password 'root123'
    
        4. Log in to MySQL as the root user
    	mysql -uroot -proot123
    	
        5. Create the hive user, the hive database, and its grants
    	insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));
    	create database hive;
    	grant all on hive.* to hive@'%'  identified by 'hive';
    	grant all on hive.* to hive@'localhost'  identified by 'hive';
    	flush privileges; 
    
        6. Exit MySQL
    	exit
    	
        7. Verify the hive user
    	mysql -uhive -phive
    	show databases;
    	+--------------------+
    	| Database           |
    	+--------------------+
    	| information_schema |
    	| hive               |
    	| test               |
    	+--------------------+
    	3 rows in set (0.00 sec)
        Exit MySQL (a quick grants check follows below)
    	exit
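        To double-check that the grants created in step 5 took effect, a hedged one-liner using the root password set in step 3:

        mysql -uroot -proot123 -e "SHOW GRANTS FOR 'hive'@'localhost'; SHOW GRANTS FOR 'hive'@'%';"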
    

    Installing Hive

        1. Extract the installation package
    	cd  ~
    	tar -zxvf apache-hive-1.1.0-bin.tar.gz
        2. Create a symlink
    	ln -s apache-hive-1.1.0-bin hive
        3. Add environment variables
        vi  .bash_profile
        Add the following environment variables:
    	export HIVE_HOME=/home/hdpsrc/hive
    	export PATH=$PATH:$HIVE_HOME/bin
    
        Apply the changes (a quick verification follows):
    	source .bash_profile
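        A quick check that the variables took effect; hive --version is available in Hive 1.x:

        source ~/.bash_profile
        echo $HIVE_HOME        # should print /home/hdpsrc/hive
        hive --version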
    

    Configuring Hive
    4. Modify hive-site.xml
    cp hive/conf/hive-default.xml.template hive/conf/hive-site.xml
    Edit hive-site.xml

        The main parameters to change are:
        <property>
           <name>javax.jdo.option.ConnectionURL</name>
           <value>jdbc:mysql://Master:3306/hive</value>
        </property>

        <property>
           <name>javax.jdo.option.ConnectionDriverName</name>
           <value>com.mysql.jdbc.Driver</value>
        </property>

        <property>
           <name>javax.jdo.option.ConnectionPassword</name>
           <value>hive</value>
        </property>

        <property>
           <name>hive.hwi.listen.port</name>
           <value>9999</value>
           <description>This is the port the Hive Web Interface will listen on</description>
        </property>

        <property>
           <name>datanucleus.autoCreateSchema</name>
           <value>true</value>
        </property>

        <property>
           <name>datanucleus.fixedDatastore</name>
           <value>false</value>
        </property>
    
    	  <property>
    	    <name>javax.jdo.option.ConnectionUserName</name>
    	    <value>hive</value>
    	    <description>Username to use against metastore database</description>
    	  </property>
    
    	  <property>
    	    <name>hive.exec.local.scratchdir</name>
    	    <value>/home/hdpsrc/hive/iotmp</value>
    	    <description>Local scratch space for Hive jobs</description>
    	  </property>
    	  <property>
    	    <name>hive.downloaded.resources.dir</name>
    	    <value>/home/hdpsrc/hive/iotmp</value>
    	    <description>Temporary local directory for added resources in the remote file system.</description>
    	  </property>
    	  <property>
    	    <name>hive.querylog.location</name>
    	    <value>/home/hdpsrc/hive/iotmp</value>
    	    <description>Location of Hive run time structured log file</description>
    	  </property>
    	  
        5. Copy mysql-connector-java-5.1.6-bin.jar into Hive's lib directory:
        mv /home/hdpsrc/Desktop/mysql-connector-java-5.1.6-bin.jar /home/hdpsrc/hive/lib/
        (if your Hive is installed under /usr/hive instead, copy the connector jar into /usr/hive/lib)
    
        6. Copy jline-2.12.jar into the corresponding Hadoop directory to replace jline-0.9.94.jar, otherwise Hive will fail to start:
        cp /home/hdpsrc/hive/lib/jline-2.12.jar /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/
        mv /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
        7. Create the Hive temporary directory:
        mkdir /home/hdpsrc/hive/iotmp
    	
        8. Start and test Hive

        Initialize the Hive metastore schema. Run it from $HIVE_HOME/bin (a quick check of the result follows this step):
        cd $HIVE_HOME/bin
        ./schematool -initSchema -dbType mysql -userName hive -passWord hive
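        Optionally verify that schematool created the metastore tables in the hive database; a hedged check using the hive/hive credentials from above:

        mysql -uhive -phive -e "USE hive; SHOW TABLES;"
        # you should see metastore tables such as DBS, TBLS, VERSION, ...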
                
        After starting Hadoop, run the hive command:

        # hive

        Test by entering show databases;
    	hive> show databases;
    	OK
    	default
    	Time taken: 0.907 seconds, Fetched: 1 row(s)
    

    Location of the logs produced by Hive

    	<property>
    	      <name>hive.querylog.location</name>
    	      <value>${system:java.io.tmpdir}/${system:user.name}</value>
    	      <description>Location of Hive run time structured log file</description>
           </property>
       Modify the hive-log4j.properties configuration file:

      cp hive-log4j.properties.template  hive-log4j.properties
    
           # list of properties
          property.hive.log.level = INFO
          property.hive.root.logger = DRFA
          property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
          property.hive.log.file = hive.log
          property.hive.perflogger.log.level = INFO
          
    
        1) Create a hive user in MySQL and grant it sufficient privileges
    	[root@node01 mysql]# mysql -u root -p
    	Enter password:
    
    	mysql> create user 'hive' identified by 'hive';
    	Query OK, 0 rows affected (0.00 sec)
    
    	mysql> grant all privileges on *.* to 'hive' with grant option;
    	Query OK, 0 rows affected (0.00 sec)
    
    	mysql> flush privileges;
    	Query OK, 0 rows affected (0.01 sec)
    
        2) Test that the hive user can connect to MySQL, and create the hive database
    	[root@node01 mysql]# mysql -u hive -p
    	Enter password:
    
    	mysql> create database hive;
    	Query OK, 1 row affected (0.00 sec)
    
    	mysql> use hive;
    	Database changed
    	mysql> show tables;
    	Empty set (0.00 sec)
    
        3) Extract the Hive installation package
    	tar -xzvf hive-0.9.0.tar.gz
    	[hadoop@node01 ~]$ cd hive-0.9.0
    	[hadoop@node01 hive-0.9.0]$ ls
    	bin  conf  docs  examples  lib  LICENSE  NOTICE  README.txt  RELEASE_NOTES.txt  scripts  src
    
        4) Download the MySQL JDBC driver and copy it into Hive home's lib directory
    	[hadoop@node01 ~]$ mv mysql-connector-java-5.1.24-bin.jar ./hive-0.9.0/lib
    
        5) Modify the environment variables to add Hive to PATH
        In /etc/profile add:
    	export HIVE_HOME=/home/hadoop/hive-0.9.0
    	export PATH=$PATH:$HIVE_HOME/bin
    
        6) Modify hive-env.sh (a hypothetical minimal example follows)
    	[hadoop@node01 conf]$ cp hive-env.sh.template hive-env.sh
    	[hadoop@node01 conf]$ vi hive-env.sh
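        The post does not show what was changed in hive-env.sh; a hypothetical minimal version for this layout, with paths assumed from the Hadoop and Hive locations used elsewhere in the walkthrough, might be:

        # hive-env.sh (hypothetical; adjust paths to your environment)
        export HADOOP_HOME=/home/hadoop/hadoop-0.20.2
        export HIVE_CONF_DIR=/home/hadoop/hive-0.9.0/conf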
    
        7) Copy hive-default.xml and rename it hive-site.xml
        Change the four key settings below to match the MySQL configuration above
    	[hadoop@node01 conf]$ cp hive-default.xml.template hive-site.xml
    	[hadoop@node01 conf]$ vi hive-site.xml
    	<property>
    	  <name>javax.jdo.option.ConnectionURL</name>
    	  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    	  <description>JDBC connect string for a JDBC metastore</description>
    	</property>
    
    	<property>
    	  <name>javax.jdo.option.ConnectionDriverName</name>
    	  <value>com.mysql.jdbc.Driver</value>
    	  <description>Driver class name for a JDBC metastore</description>
    	</property>
    
    	<property>
    	  <name>javax.jdo.option.ConnectionUserName</name>
    	  <value>hive</value>
    	  <description>username to use against metastore database</description>
    	</property>
    
    	<property>
    	  <name>javax.jdo.option.ConnectionPassword</name>
    	  <value>hive</value>
    	  <description>password to use against metastore database</description>
    	</property>
    
        8) Start Hadoop and test from the hive shell (the records table used below must already exist; a hypothetical definition follows the start command)
    	[hadoop@node01 conf]$ start-all.sh
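        The load data statement below assumes a records table already exists in the default database; the original post does not show its definition, but a hypothetical one for a whitespace-delimited access log keyed by ip might look like this (run in the hive shell before the load):

        hive> CREATE TABLE records (ip STRING, request STRING)
            >   ROW FORMAT DELIMITED FIELDS TERMINATED BY ' ';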
    
    	hive> load data inpath 'hdfs://node01:9000/user/hadoop/access_log.txt'
    	    > overwrite into table records;
    	Loading data to table default.records
    	Moved to trash: hdfs://node01:9000/user/hive/warehouse/records
    	OK
    	Time taken: 0.526 seconds
    	hive> select ip, count(*) from records
    	    > group by ip;
    	Total MapReduce jobs = 1
    	Launching Job 1 out of 1
    	Number of reduce tasks not specified. Estimated from input data size: 1
    	In order to change the average load for a reducer (in bytes):
    	  set hive.exec.reducers.bytes.per.reducer=<number>
    	In order to limit the maximum number of reducers:
    	  set hive.exec.reducers.max=<number>
    	In order to set a constant number of reducers:
    	  set mapred.reduce.tasks=<number>
    	Starting Job = job_201304242001_0001, Tracking URL = http://node01:50030/jobdetails.jsp?jobid=job_201304242001_0001
    	Kill Command = /home/hadoop/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=192.168.231.131:9001 -kill job_201304242001_0001
    	Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    	2013-04-24 20:11:03,127 Stage-1 map = 0%,  reduce = 0%
    	2013-04-24 20:11:11,196 Stage-1 map = 100%,  reduce = 0%
    	2013-04-24 20:11:23,331 Stage-1 map = 100%,  reduce = 100%
    	Ended Job = job_201304242001_0001
    	MapReduce Jobs Launched:
    	Job 0: Map: 1  Reduce: 1   HDFS Read: 7118627 HDFS Write: 9 SUCCESS
    	Total MapReduce CPU Time Spent: 0 msec
    	OK
    	NULL    28134
    	Time taken: 33.273 seconds
    
        In HDFS, the records table is simply a file:
    	[hadoop@node01 home]$ hadoop fs -ls /user/hive/warehouse/records
    	Found 1 items
    	-rw-r--r--   2 hadoop supergroup    7118627 2013-04-15 20:06 /user/hive/warehouse/records/access_log.txt