    Hadoop Study Notes (1): Hadoop Cluster Setup

    I. Preparation

    1. Environment: CentOS 6.4 (64-bit), JDK 1.7 (64-bit)

      Clone four virtual machines from the base VM image: one as the Master and the other three as Slaves. On each of the four clones, run the following script to re-initialize the eth0 network device:

    /install/initNetwork.sh

    2. Configure the cluster network

    A. Master machine:

    # Set the hostname
    hostname master01
    vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=master01


    # Configure the static IP address
    vi /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    IPADDR=192.168.37.154
    NETMASK=255.255.255.0
    GATEWAY=192.168.37.2
    DNS1=192.168.37.2
    DNS2=8.8.8.8


    # Configure local hostname resolution
    [root@master01 ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.37.154 master01
    192.168.37.156 slave01
    192.168.37.157 slave02
    192.168.37.158 slave03


    # Reboot the server
    reboot

    B. Slave machines:

    # Set the hostname
    hostname slave01
    vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=slave01


    # Configure the static IP address
    vi /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    IPADDR=192.168.37.156
    NETMASK=255.255.255.0
    GATEWAY=192.168.37.2
    DNS1=192.168.37.2
    DNS2=8.8.8.8


    # Configure local hostname resolution
    [root@slave01 ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.37.154 master01
    192.168.37.156 slave01
    192.168.37.157 slave02
    192.168.37.158 slave03


    # Reboot the server
    reboot

    Note:
    The other two Slave VMs use the following hostnames and IP addresses:
    slave02 192.168.37.157
    slave03 192.168.37.158
    Once every VM's network is configured, you can connect to the machines with SecureCRT or Xshell.
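The hostname/IP bookkeeping above is easy to script once instead of editing /etc/hosts four times. A minimal sketch, assuming the node list above; HOSTS_FILE is a scratch path for illustration, and on a real node you would point it at /etc/hosts:

```shell
#!/bin/sh
# Append the cluster's name/IP mappings to a hosts file.
# HOSTS_FILE is a stand-in path; use /etc/hosts on the real machines.
HOSTS_FILE="${HOSTS_FILE:-/tmp/cluster-hosts}"

# One "ip hostname" pair per line, matching the cluster plan above.
NODES="192.168.37.154 master01
192.168.37.156 slave01
192.168.37.157 slave02
192.168.37.158 slave03"

echo "$NODES" | while read -r ip name; do
    # Skip entries that are already present so the script stays idempotent.
    grep -q "[[:space:]]$name\$" "$HOSTS_FILE" 2>/dev/null && continue
    echo "$ip $name" >> "$HOSTS_FILE"
done

cat "$HOSTS_FILE"
```

Running the same script on every node keeps the four hosts files identical, which matters later when the Hadoop daemons resolve each other by name.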

    3. Disable the firewall and SELinux on every VM


    [root@master01 ~]# service iptables stop
    [root@master01 software]# chkconfig iptables --level 35 off
    [root@master01 ~]# setenforce 0
    [root@master01 ~]# getenforce
    Permissive
    [root@master01 ~]# grep -Ev "^#|^$" /etc/sysconfig/selinux
    SELINUX=disabled
    SELINUXTYPE=targeted


    Note:

    If SELINUX still reports enforcing after the steps above, fix it as follows:
    edit the /etc/selinux/config file,
    change SELINUX=enforcing to SELINUX=disabled,
    and reboot the machine.
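The manual edit in the note can be done with a single sed command. A sketch, working on a scratch copy; on a real node CFG would be /etc/selinux/config, and a reboot is still required afterwards:

```shell
#!/bin/sh
# Permanently disable SELinux by rewriting its config file.
# CFG points at a scratch copy here; on a real node it is /etc/selinux/config.
CFG="${CFG:-/tmp/selinux-config}"

# Scratch copy standing in for /etc/selinux/config.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

# Rewrite the SELINUX line whatever its current value is.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
cat "$CFG"
```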

    4. Configure the NTP service

    A. Master machine:


    # Set the Master's clock
    [root@master01 ~]# date -s "2017-07-16 16:17:50"
    Sun Jul 16 16:17:50 CST 2017
    [root@master01 ~]# clock -w
    [root@master01 ~]# date '+%F %T %A'
    2017-07-16 16:20:31 Sunday


    # Check whether the ntp packages are installed; if not, install them with: yum install -y ntp
    [root@master01 ~]# rpm -qa|grep ntp
    fontpackages-filesystem-1.41-1.1.el6.noarch
    ntp-4.2.4p8-3.el6.centos.x86_64
    ntpdate-4.2.4p8-3.el6.centos.x86_64


    # Configure the time service (the fudge directive must reference the
    # local-clock driver address 127.127.1.0, not the host's LAN IP)
    [root@master01 ~]# grep -Ev "^#|^$" /etc/ntp.conf
    #server 0.centos.pool.ntp.org
    #server 1.centos.pool.ntp.org
    #server 2.centos.pool.ntp.org
    driftfile /var/lib/ntp/drift
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1
    server 127.127.1.0
    fudge 127.127.1.0 stratum 10
    includefile /etc/ntp/crypto/pw
    keys /etc/ntp/keys


    # Start the time service
    service ntpd start


    # Verify that ntpd is listening on UDP port 123
    [root@master01 ~]# lsof -i:123
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    ntpd 1694 ntp 16u IPv4 14631 0t0 UDP *:ntp
    ntpd 1694 ntp 17u IPv6 14632 0t0 UDP *:ntp
    ntpd 1694 ntp 18u IPv6 14636 0t0 UDP localhost:ntp
    ntpd 1694 ntp 19u IPv6 14637 0t0 UDP [fe80::20c:29ff:feaf:886]:ntp
    ntpd 1694 ntp 20u IPv4 14638 0t0 UDP localhost:ntp
    ntpd 1694 ntp 21u IPv4 14639 0t0 UDP 192.168.37.154:ntp


    # Check the time server's peer status
    [root@master01 ~]# ntpq -p
    remote refid st t when poll reach delay offset jitter
    ==============================================================================
    *LOCAL(0) .LOCL. 5 l 30 64 377 0.000 0.000 0.000

    B. Slave machines:


    # First set the Slave's clock
    [root@slave01 ~]# date -s "2017-07-16 16:17:50"
    Sun Jul 16 16:17:50 CST 2017
    [root@slave01 ~]# clock -w
    [root@slave01 ~]# date '+%F %T %A'
    2017-07-16 16:20:31 Sunday


    # Manually sync the time once against the Master
    [root@slave01 ~]# ntpdate 192.168.37.154
    16 Jul 16:39:18 ntpdate[1834]: step time server 192.168.37.154 offset 23.818568 sec


    # Edit the configuration file to use the Master as the time source
    # (a fudge line is not needed here: fudge only applies to local reference clocks)
    [root@slave01 ~]# grep -Ev "^#|^$" /etc/ntp.conf
    #server 0.centos.pool.ntp.org
    #server 1.centos.pool.ntp.org
    #server 2.centos.pool.ntp.org
    driftfile /var/lib/ntp/drift
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1
    server 192.168.37.154
    includefile /etc/ntp/crypto/pw
    keys /etc/ntp/keys


    # Start the time service
    service ntpd start


    # Verify that ntpd is listening on UDP port 123 (output is analogous to the Master's)
    [root@slave01 ~]# lsof -i:123


    # Check the client's peer status
    [root@slave01 ~]# ntpq -p
    remote refid st t when poll reach delay offset jitter
    ==============================================================================
    192.168.37.154 LOCAL(0) 6 u 44 64 7 0.756 -0.646 0.122

    # Check the Slave's synchronization status
    [root@slave01 ~]# ntpstat
    unsynchronised
    time server re-starting
    polling server every 64 s

    Note: configure the time service on the other two Slave nodes the same way as on slave01.
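A quick way to see how far a Slave has drifted is to parse the offset that ntpdate reports. A sketch that extracts the offset field from a line captured in the session above; on a live node you would obtain the line with `ntpdate -q 192.168.37.154 | tail -1` instead of hard-coding it:

```shell
#!/bin/sh
# Parse the clock offset (seconds) from an ntpdate report line and
# flag drift larger than one second.
# Sample line captured above; live version: line=$(ntpdate -q 192.168.37.154 | tail -1)
line='16 Jul 16:39:18 ntpdate[1834]: step time server 192.168.37.154 offset 23.818568 sec'

# The offset value is the field right after the word "offset".
offset=$(echo "$line" | awk '{for (i = 1; i < NF; i++) if ($i == "offset") print $(i + 1)}')
echo "offset: $offset"

# awk does the floating-point comparison that plain sh cannot.
needs_resync=$(echo "$offset" | awk '{ if ($1 > 1 || $1 < -1) print "yes"; else print "no" }')
echo "needs resync: $needs_resync"
```

A drift of 23.8 seconds, as in the captured line, would clearly call for the manual ntpdate step above before starting ntpd.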

    5. Configure passwordless SSH login for the hadoop user

    According to the official recommendation, only one-way passwordless login from the Master to each Slave node is required.

    A. Master machine

    # Switch to the hadoop user
    [root@master01 software]# su -l hadoop


    # Generate a passphrase-less RSA key pair
    [hadoop@master01 ~]$ ssh-keygen -t rsa -P ''
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
    Created directory '/home/hadoop/.ssh'.
    Your identification has been saved in /home/hadoop/.ssh/id_rsa.
    Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
    The key fingerprint is:
    3a:3c:57:36:31:67:f4:d8:2f:df:f9:88:47:45:c1:26 hadoop@master01
    The key's randomart image is:
    +--[ RSA 2048]----+
    | . ...|
    | . E o.|
    | o + =. |
    | = ..|
    | S + ...|
    | . . o . .oo|
    | = . . .o|
    | + ....|
    | ... .|
    +-----------------+


    # First copy the public key to the Master itself to enable passwordless login to itself
    # (ssh-copy-id appends the key to authorized_keys and automatically sets
    # authorized_keys to mode 600 and the .ssh directory to 700)
    [hadoop@master01 ~]$ ssh-copy-id master01
    The authenticity of host 'master01 (192.168.37.154)' can't be established.
    RSA key fingerprint is 82:72:60:05:6d:dc:3e:bf:f7:aa:2d:f5:08:c1:59:3a.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'master01,192.168.37.154' (RSA) to the list of known hosts.
    hadoop@master01's password:
    Now try logging into the machine, with "ssh 'master01'", and check in:

    .ssh/authorized_keys

    to make sure we haven't added extra keys that you weren't expecting.


    # Test passwordless login to the Master itself
    [hadoop@master01 ~]$ ssh master01
    [hadoop@master01 ~]$ exit
    logout
    Connection to master01 closed.


    # Copy the public key to the first Slave
    [hadoop@master01 ~]$ ssh-copy-id slave01
    The authenticity of host 'slave01 (192.168.37.156)' can't be established.
    RSA key fingerprint is 82:72:60:05:6d:dc:3e:bf:f7:aa:2d:f5:08:c1:59:3a.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'slave01,192.168.37.156' (RSA) to the list of known hosts.
    hadoop@slave01's password:
    Now try logging into the machine, with "ssh 'slave01'", and check in:

    .ssh/authorized_keys

    to make sure we haven't added extra keys that you weren't expecting.


    # Test passwordless login to the first Slave
    [hadoop@master01 ~]$ ssh slave01
    # Return to the original shell
    [hadoop@slave01 ~]$ exit
    logout
    Connection to slave01 closed.

    Note: repeat the steps above to copy the generated public key to the other two Slave machines.
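The "repeat for each Slave" step lends itself to a loop. A sketch that by default only prints the commands it would run (DRY_RUN is an illustrative flag; set DRY_RUN=0 on the real master to actually invoke ssh-copy-id, which will prompt once per host for the hadoop password):

```shell
#!/bin/sh
# Distribute the master's public key to every node in one loop.
# DRY_RUN=1 (the default here) prints instead of executing.
DRY_RUN="${DRY_RUN:-1}"
HOSTS="master01 slave01 slave02 slave03"

done_hosts=""
for h in $HOSTS; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: ssh-copy-id $h"
    else
        ssh-copy-id "$h"    # prompts for the hadoop password on each host
    fi
    done_hosts="$done_hosts $h"
done
echo "processed:$done_hosts"
```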

    B. Slave machines (needed only for two-way passwordless communication between nodes):

    # No configuration required here (skipped)


    II. Building the Hadoop Cluster

    1. Master machine:

    A. Upload and extract


    [hadoop@master01 install]$ ls
    hadoop-2.7.3.tar.gz initNetwork.sh
    [hadoop@master01 install]$ tar -zxvf hadoop-2.7.3.tar.gz -C /software/
    [hadoop@master01 install]$ ls /software/
    hadoop-2.7.3 jdk1.7.0_79
    [hadoop@master01 install]$ rm -rf hadoop-2.7.3.tar.gz


    B. Configure environment variables


    [hadoop@master01 install]$ su -lc "vi /etc/profile"
    Password: 123456
    JAVA_HOME=/software/jdk1.7.0_79
    HADOOP_HOME=/software/hadoop-2.7.3
    PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/lib:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    export PATH JAVA_HOME HADOOP_HOME
    [hadoop@master01 hadoop-2.7.3]$ source /etc/profile


    C. Configure Hadoop

    [hadoop@master01 hadoop-2.7.3]$ cd /software/hadoop-2.7.3/etc/hadoop/
    # Configure hadoop-env.sh (JAVA_HOME must be set here even if it is already set in /etc/profile)
    [hadoop@master01 hadoop]$ vi hadoop-env.sh
    export JAVA_HOME=/software/jdk1.7.0_79


    # Configure core-site.xml
    [hadoop@master01 hadoop]$ vi core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master01:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/software/hadoop-2.7.3/work</value>
        </property>
    </configuration>


    # Configure hdfs-site.xml
    [hadoop@master01 hadoop]$ vi hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
    </configuration>


    # Configure mapred-site.xml
    [hadoop@master01 hadoop]$ mv mapred-site.xml.template mapred-site.xml
    [hadoop@master01 hadoop]$ vi mapred-site.xml
    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>


    # Configure yarn-site.xml
    [hadoop@master01 hadoop]$ vi yarn-site.xml
    <configuration>
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>master01</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>


    # Configure the slaves file
    [hadoop@master01 hadoop]$ vi slaves
    slave01
    slave02
    slave03

    2. Slave machines:

    # Copy the Hadoop installation directory from the Master to each Slave node
    [hadoop@master01 software]$ cd /software/
    [hadoop@master01 software]$ scp -r hadoop-2.7.3 slave01:/software/
    [hadoop@master01 software]$ scp -r hadoop-2.7.3 slave02:/software/
    [hadoop@master01 software]$ scp -r hadoop-2.7.3 slave03:/software/


    # Copy /etc/profile from the Master to each Slave node
    [root@master01 ~]# scp /etc/profile slave01:/etc/
    [root@master01 ~]# scp /etc/profile slave02:/etc/
    [root@master01 ~]# scp /etc/profile slave03:/etc/
    # On each Slave node, source the file so the settings take effect
    [hadoop@slave01 ~]$ source /etc/profile
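The per-slave copies above can likewise be generated in a loop. A dry-run sketch that only prints the scp commands (on the real master you could pipe its output to sh, or drop the echo; each slave would still need a fresh login or a manual `source /etc/profile` afterwards):

```shell
#!/bin/sh
# Generate the copy commands for all three slaves instead of typing
# them one by one.
SLAVES="slave01 slave02 slave03"

count=0
for h in $SLAVES; do
    echo "scp -r /software/hadoop-2.7.3 $h:/software/"
    echo "scp /etc/profile $h:/etc/"
    count=$((count + 1))
done
echo "generated commands for $count slaves"
```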


    3. Start and test the HDFS cluster


    # Check the Hadoop version
    [hadoop@master01 software]$ hadoop version
    Hadoop 2.7.3
    Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
    Compiled by root on 2016-08-18T01:41Z
    Compiled with protoc 2.5.0
    From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
    This command was run using /software/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar


    # Format the filesystem
    [hadoop@master01 software]$ hdfs namenode -format
    # Formatting succeeded if the following exit status of 0 appears:
    INFO util.ExitUtil: Exiting with status 0


    # Start the HDFS cluster
    [hadoop@master01 hadoop]$ start-dfs.sh
    Starting namenodes on [master01]
    master01: starting namenode, logging to /software/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master01.out
    slave01: starting datanode, logging to /software/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave01.out
    slave02: starting datanode, logging to /software/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave02.out
    slave03: starting datanode, logging to /software/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave03.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /software/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-master01.out


    # Check the NameNode (NN) process on the Master
    [hadoop@master01 hadoop]$ jps
    5640 NameNode
    5847 SecondaryNameNode
    5974 Jps


    # Check the DataNode (DN) process on each Slave node
    [hadoop@slave01 software]$ jps
    3032 DataNode
    3111 Jps
    [hadoop@slave02 software]$ jps
    3181 Jps
    3102 DataNode
    [hadoop@slave03 software]$ jps
    3119 Jps
    3040 DataNode
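Eyeballing jps on every node gets tedious; a small helper can check the output for the expected daemons. A sketch run here against the slave01 output captured above (on a live cluster you would feed it something like `check_daemons "$(ssh slave01 jps)" DataNode` per host):

```shell
#!/bin/sh
# Verify that a node's jps output contains all the expected daemons.
check_daemons() {
    out="$1"; shift
    for d in "$@"; do
        echo "$out" | grep -q "$d" || { echo "MISSING: $d"; return 1; }
    done
    echo "all daemons present"
}

# jps output captured from slave01 above.
jps_out="3032 DataNode
3111 Jps"

check_daemons "$jps_out" DataNode
```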


    # Basic HDFS command-line operations

    # List the root of the distributed filesystem
    [hadoop@master01 hadoop]$ hdfs dfs -ls /
    # Create a single-level directory
    [hadoop@master01 hadoop]$ hdfs dfs -mkdir /bnyw


    # Create nested directories
    [hadoop@master01 hadoop]$ hdfs dfs -mkdir -p /test/lyf
    [hadoop@master01 hadoop]$ hdfs dfs -ls /
    Found 2 items
    drwxr-xr-x - hadoop supergroup 0 2017-07-16 23:22 /bnyw
    drwxr-xr-x - hadoop supergroup 0 2017-07-16 23:24 /test


    # Remove a directory tree (single- or multi-level) and its files
    [hadoop@master01 hadoop]$ hdfs dfs -rm -r /test/lyf
    17/07/16 23:28:25 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
    Deleted /test/lyf


    # Upload a file
    [hadoop@master01 install]$ ls /install/
    initNetwork.sh test.sh
    [hadoop@master01 install]$ hdfs dfs -put initNetwork.sh /test/
    [hadoop@master01 install]$ hdfs dfs -ls /test/
    Found 1 items
    -rw-r--r-- 3 hadoop supergroup 492 2017-07-16 23:33 /test/initNetwork.sh


    # View a file's contents
    [hadoop@master01 install]$ hdfs dfs -cat /test/initNetwork.sh
    #!/bin/sh
    declare -a checkEth0=`ifconfig -a|awk '/eth0/{print $1}'`
    declare -a eth0Device='/etc/sysconfig/network-scripts/ifcfg-eth0'
    declare -a deviceFile='/etc/udev/rules.d/70-persistent-net.rules'
    [ -z $checkEth0 ] && {
    [ -f $deviceFile ] && printf "">$deviceFile
    [ -f $eth0Device ] && sed -ri -e "/HWADDR/d" -e "/UUID/d" $eth0Device
    echo "init eth0 device finished....."
    echo "reboot current unix system....."
    sleep 1
    reboot
    } || echo "eth0 already exists,no require init..."


    # Download a file
    [hadoop@master01 install]$ mkdir -p test && cd test
    [hadoop@master01 test]$ pwd
    /install/test
    [hadoop@master01 test]$ hdfs dfs -get /test/initNetwork.sh /install/test/
    [hadoop@master01 test]$ ls /install/test/
    initNetwork.sh


    # Delete a file
    [hadoop@master01 test]$ hdfs dfs -rm -r /test/initNetwork.sh
    17/07/16 23:40:24 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
    Deleted /test/initNetwork.sh
    [hadoop@master01 test]$ hdfs dfs -ls /test/


    # Copy a file
    [hadoop@CloudDeskTop hive]$ hdfs dfs -mkdir -p /test/test1
    [hadoop@CloudDeskTop hive]$ hdfs dfs -mkdir -p /test/test2
    [hadoop@CloudDeskTop hive]$ hdfs dfs -put user02 /test/test1
    [hadoop@CloudDeskTop hive]$ hdfs dfs -ls /test/test1
    Found 1 items
    -rw-r--r-- 3 hadoop supergroup 53 2017-07-30 03:27 /test/test1/user02
    [hadoop@CloudDeskTop hive]$ hdfs dfs -cp /test/test1/user02 /test/test2
    [hadoop@CloudDeskTop hive]$ hdfs dfs -ls /test/test2
    Found 1 items
    -rw-r--r-- 3 hadoop supergroup 53 2017-07-30 03:29 /test/test2/user02
    [hadoop@CloudDeskTop hive]$ hdfs dfs -cat /test/test2/user02
    20    chenhualiang    90
    21    chaochunhong    80
    22    guaxixi    100


    # Rename or move files


    # Rename a file
    [hadoop@CloudDeskTop hive]$ hdfs dfs -mv /test/test2/user02 /test/test2/user03
    [hadoop@CloudDeskTop hive]$ hdfs dfs -ls /test/test2/
    Found 1 items
    -rw-r--r-- 3 hadoop supergroup 53 2017-07-30 03:29 /test/test2/user03
    [hadoop@CloudDeskTop hive]$ hdfs dfs -cat /test/test2/user03
    20    chenhualiang    90
    21    chaochunhong    80
    22    guaxixi    100


    # Move a file
    [hadoop@CloudDeskTop hive]$ hdfs dfs -mv /test/test2/user03 /test/test1
    [hadoop@CloudDeskTop hive]$ hdfs dfs -ls /test/test2/
    [hadoop@CloudDeskTop hive]$ hdfs dfs -ls /test/test1/
    Found 2 items
    -rw-r--r-- 3 hadoop supergroup 53 2017-07-30 03:27 /test/test1/user02
    -rw-r--r-- 3 hadoop supergroup 53 2017-07-30 03:29 /test/test1/user03


    # Check the overall health of the HDFS cluster
    [hadoop@master01 test]$ hdfs dfsadmin -report
    Configured Capacity: 128831840256 (119.98 GB)
    Present Capacity: 114848382976 (106.96 GB)
    DFS Remaining: 114848260096 (106.96 GB)
    DFS Used: 122880 (120 KB)
    DFS Used%: 0.00%
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0

    -------------------------------------------------
    Live datanodes (3):

    Name: 192.168.37.158:50010 (slave03)
    Hostname: slave03
    Decommission Status : Normal
    Configured Capacity: 42943946752 (39.99 GB)
    DFS Used: 40960 (40 KB)
    Non DFS Used: 4660379648 (4.34 GB)
    DFS Remaining: 38283526144 (35.65 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 89.15%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Sun Jul 16 23:49:36 CST 2017


    Name: 192.168.37.156:50010 (slave01)
    Hostname: slave01
    Decommission Status : Normal
    Configured Capacity: 42943946752 (39.99 GB)
    DFS Used: 40960 (40 KB)
    Non DFS Used: 4662685696 (4.34 GB)
    DFS Remaining: 38281220096 (35.65 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 89.14%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Sun Jul 16 23:49:39 CST 2017


    Name: 192.168.37.157:50010 (slave02)
    Hostname: slave02
    Decommission Status : Normal
    Configured Capacity: 42943946752 (39.99 GB)
    DFS Used: 40960 (40 KB)
    Non DFS Used: 4660391936 (4.34 GB)
    DFS Remaining: 38283513856 (35.65 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 89.15%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Sun Jul 16 23:49:38 CST 2017


    4. Start and test the YARN cluster


    # Start the YARN cluster
    [hadoop@master01 test]$ start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /software/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-master01.out
    slave02: starting nodemanager, logging to /software/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave02.out
    slave01: starting nodemanager, logging to /software/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave01.out
    slave03: starting nodemanager, logging to /software/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave03.out


    # Check the ResourceManager (RM) process on the Master node
    [hadoop@master01 test]$ jps
    7297 Jps
    5640 NameNode
    5847 SecondaryNameNode
    7032 ResourceManager


    # Check the NodeManager (NM) process on each Slave node
    [hadoop@slave01 software]$ jps
    3032 DataNode
    3178 NodeManager
    3291 Jps
    [hadoop@slave02 software]$ jps
    3250 NodeManager
    3102 DataNode
    3363 Jps
    [hadoop@slave03 software]$ jps
    3299 Jps
    3040 DataNode
    3186 NodeManager


    5. Web-portal monitoring

    # View HDFS cluster status
    http://192.168.37.154:50070
    # View YARN cluster status
    http://192.168.37.154:8088
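The two URLs can be derived from the master's address so they stay consistent if the IP changes. A sketch; the curl probe is commented out since it needs the live cluster to respond:

```shell
#!/bin/sh
# Build the monitoring URLs for the HDFS (NameNode) and YARN
# (ResourceManager) web UIs from one master address.
MASTER_IP="192.168.37.154"
HDFS_UI="http://$MASTER_IP:50070"   # NameNode web UI (Hadoop 2.x default port)
YARN_UI="http://$MASTER_IP:8088"    # ResourceManager web UI

for url in "$HDFS_UI" "$YARN_UI"; do
    echo "monitor: $url"
    # From a machine that can reach the cluster:
    # curl -s -o /dev/null -w "%{http_code}\n" "$url"
done
```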


    # Stop the YARN cluster
    [hadoop@master01 test]$ stop-yarn.sh
    stopping yarn daemons
    stopping resourcemanager
    slave02: stopping nodemanager
    slave01: stopping nodemanager
    slave03: stopping nodemanager
    no proxyserver to stop


    # Stop the HDFS cluster
    [hadoop@master01 test]$ stop-dfs.sh
    Stopping namenodes on [master01]
    master01: stopping namenode
    slave03: stopping datanode
    slave02: stopping datanode
    slave01: stopping datanode
    Stopping secondary namenodes [0.0.0.0]
    0.0.0.0: stopping secondarynamenode
    [hadoop@master01 test]$ jps
    7972 Jps
    [hadoop@slave01 software]$ jps
    3432 Jps


     

    Original article: https://www.cnblogs.com/hzcya1995/p/13313311.html