  • Building a hadoop-2.6.0.tar.gz cluster (3 nodes) (without the ZooKeeper cluster installation)

    A few questions and lessons learned!

    a. NAT, bridged, or host-only networking?

       Answer: host-only, bridged, and NAT all work.

    b. A static IP, or DHCP?

    Answer: static.

    c. Don't dismiss snapshots and clones as unimportant; used more flexibly than most people use them, these small tricks save a lot of time and greatly reduce mistakes.

    d. Reuse scripting languages, such as Python or shell.

     For example, use scp -r, or a deploy.conf (configuration file), deploy.sh (a shell script that copies files to the nodes), and runRemoteCmd.sh (a shell script that runs commands on remote nodes); a minimal sketch follows.
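     As an illustration only (these scripts are not included in the original; the file name and the one-hostname-per-line deploy.conf format are assumptions), a minimal deploy.sh might look like this:

        #!/bin/bash
        # deploy.sh -- minimal sketch: copy a file or directory to every node
        # listed in deploy.conf (assumed format: one hostname per line).
        # Usage: ./deploy.sh <source_path> <dest_path>
        if [ $# -lt 2 ]; then
            echo "Usage: $0 <source_path> <dest_path>"
            exit 1
        fi
        SRC=$1
        DEST=$2
        while read -r host; do
            [ -z "$host" ] && continue       # skip blank lines
            echo "copying $SRC to $host:$DEST"
            scp -r "$SRC" "$host:$DEST"
        done < deploy.conf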

    e. The VMware Tools enhancement package is important; alternatively, use rz to upload and sz to download files.

    f. Most people commonly use

       Xmanager Enterprise (see its installation steps)

     Software required:

        1. VMware-workstation-full-11.1.2.61471.1437365244.exe

      2. CentOS-6.5-x86_64-bin-DVD1.iso

      3. jdk-7u79-linux-x64.tar.gz

      4. hadoop-2.6.0.tar.gz

      

    Machine plan:

      192.168.80.31   ----------------  master

      192.168.80.32   ----------------  slave1

      192.168.80.33   ----------------  slave2

    Directory plan:

      Where the NameNode keeps its data                                                              /data/dfs/name

      Where the DataNodes keep their data                                                            /data/dfs/data

    Step 1: Install the VMware Workstation hypervisor; I use VMware Workstation 11 here.

           For details, see ->

         Downloading VMware Workstation 11

              Installing VMware Workstation 11

              Configuring VMware Workstation 11 after installation

      Step 2: Install the CentOS operating system; I use version 6.5 here. Recommended (it is common in production environments).

        For details, see ->

              CentOS 6.5 installation, explained in detail

              Network configuration after installing CentOS 6.5

              Setting a static IP on CentOS 6.5 (works for both NAT and bridged networking)

              Switching CentOS between the command-line and graphical interfaces

              The mystery of the network interfaces eth0, eth1, ..., ethN

              Removing OpenJDK and installing Sun's JDK and its environment variables on CentOS 6.5

     

          Step 3: Install the VMware Tools enhancement package.

        For details, see ->

          Installing VMware Tools for Ubuntukylin-14.04-desktop in VMware, with illustrations

      

      Step 4: Prepare a few small adjustments (learn to use snapshots and clones, and take snapshots at sensible points for your own situation).

        For details, see ->

         CentOS common commands, snapshots, and clones demystified

        Creating and deleting user groups, users, and user passwords (works on CentOS and Ubuntu)

      

       Step 1: Building a 3-node hadoop distributed mini-cluster -- preparation (network connectivity, static IP addresses, snapshots, and remote access for master, slave1, and slave2)

    master

    slave1

    slave2

     

    For master: the address was successfully changed from 192.168.80.140 (obtained via DHCP) to the static 192.168.80.31.

    For slave1: the address was successfully changed from 192.168.80.141 (obtained via DHCP) to the static 192.168.80.32.

    For slave2: the address was successfully changed from 192.168.80.142 (obtained via DHCP) to the static 192.168.80.33.
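    For reference, a hedged sketch of what the static configuration in /etc/sysconfig/network-scripts/ifcfg-eth0 might look like on master under CentOS 6 (the GATEWAY and DNS1 values are assumptions; adjust them to your own network):

        DEVICE=eth0
        TYPE=Ethernet
        ONBOOT=yes
        BOOTPROTO=static         # static instead of dhcp
        IPADDR=192.168.80.31     # master; use .32 on slave1 and .33 on slave2
        NETMASK=255.255.255.0
        GATEWAY=192.168.80.2     # assumed VMware NAT gateway
        DNS1=192.168.80.2        # assumed

    After editing, "service network restart" applies the change.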

    Next, on the Windows host, open C:\Windows\System32\drivers\etc and append the cluster's entries to the hosts file, as shown below.
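    These are the same mappings as in the machine plan above:

        192.168.80.31  master
        192.168.80.32  slave1
        192.168.80.33  slave2

    With those in place, each VM can be reached by name: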

    ssh master

    ssh slave1

    ssh slave2

    Step 2: Building a 3-node hadoop distributed mini-cluster -- preparation (user plan and directory plan for master, slave1, and slave2)

    1 User plan

      On master, slave1, and slave2 in turn, create a hadoop user group and a hadoop user.

      Create the group first, then the user.

    See: Creating and deleting user groups, users, and user passwords (works on CentOS and Ubuntu)

    [root@master ~]# groupadd hadoop
    [root@master ~]# useradd -g hadoop hadoop    (commonly recommended: useradd -g hadoop -m hadoop, so a home directory is created)
    [root@master ~]# passwd hadoop
    [root@master ~]# cd /home/
    [root@master home]# ls -al
    [root@master home]# su hadoop
    [hadoop@master home]$ cd
    [hadoop@master ~]$ pwd
    [hadoop@master ~]$ ls
    [hadoop@master ~]$

    [root@slave1 ~]# groupadd hadoop
    [root@slave1 ~]# useradd -g hadoop hadoop
    [root@slave1 ~]# passwd hadoop
    [root@slave1 ~]# cd /home/
    [root@slave1 home]# ls -al
    [root@slave1 home]# su hadoop
    [hadoop@slave1 home]$ cd
    [hadoop@slave1 ~]$ pwd
    [hadoop@slave1 ~]$ ls
    [hadoop@slave1 ~]$

    [root@slave2 ~]# groupadd hadoop
    [root@slave2 ~]# useradd -g hadoop hadoop
    [root@slave2 ~]# passwd hadoop
    [root@slave2 ~]# cd /home/
    [root@slave2 home]# ls -al
    [root@slave2 home]# su hadoop
    [hadoop@slave2 home]$ cd
    [hadoop@slave2 ~]$ pwd
    [hadoop@slave2 ~]$ ls
    [hadoop@slave2 ~]$

    2 Directory plan

      On master, slave1, and slave2 in turn, plan the directories:

    Name                                                                         Path

    Directory for all cluster software                                           /home/hadoop/app/

    Directory for all temporary files                                            /tmp

    Note: the system's default temporary directory is under /tmp, and that directory is wiped on every reboot, after which you would have to re-run the format or hadoop will fail with errors.

     

      If, instead, we wanted to use the layout below,

      we would have to create /data/tmp as the root user before doing any configuration.

    (That layout is not used here for now.)

    Where the NameNode keeps its data                                            /data/dfs/name

    Where the DataNodes keep their data                                          /data/dfs/data
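    If you do adopt that layout, a hedged sketch of the root-side preparation on each node might be:

        [root@master ~]# mkdir -p /data/tmp /data/dfs/name /data/dfs/data
        [root@master ~]# chown -R hadoop:hadoop /data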

    [hadoop@master ~]$ mkdir app 

    [hadoop@slave1 ~]$ mkdir app

    [hadoop@slave2 ~]$ mkdir app 

    The three machines are assigned the following IP addresses and roles:

    192.168.80.31-----master,namenode,jobtracker

    192.168.80.32-----slave1,datanode,tasktracker

    192.168.80.33----- slave2,datanode,tasktracker

    Step 3: Building a 3-node hadoop distributed mini-cluster -- preparation (environment checks on master, slave1, and slave2)

     Environment checks before installing the cluster

    Before installing the cluster, we need to check its environment.

    Clock synchronization

      3.1 master

    [root@master hadoop]# date
    [root@master hadoop]# cd /usr/share/zoneinfo/
    [root@master zoneinfo]# ls
    [root@master zoneinfo]# cd Asia/
    [root@master Asia]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    cp: overwrite "/etc/localtime"? y
    [root@master Asia]#

    We need the ntp tools to synchronize the clocks.

    [root@master Asia]# pwd
    [root@master Asia]# yum -y install ntp

    [root@master Asia]# ntpdate pool.ntp.org

      3.2  slave1

    [root@slave1 hadoop]# date
    [root@slave1 hadoop]# cd /usr/share/zoneinfo/Asia/
    [root@slave1 Asia]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    cp: overwrite "/etc/localtime"? y
    [root@slave1 Asia]# pwd
    /usr/share/zoneinfo/Asia
    [root@slave1 Asia]# yum -y install ntp

    [root@slave1 Asia]# ntpdate pool.ntp.org

      3.3 slave2

    [root@slave2 hadoop]# date
    [root@slave2 hadoop]# cd /usr/share/zoneinfo/Asia/
    [root@slave2 Asia]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    cp: overwrite "/etc/localtime"? y
    [root@slave2 Asia]# pwd
    /usr/share/zoneinfo/Asia
    [root@slave2 Asia]# yum -y install ntp

    [root@slave2 Asia]# ntpdate pool.ntp.org
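    To keep the clocks from drifting apart again, one option (a hedged addition, not in the original) is a root cron entry on each node that re-syncs every hour:

        [root@master ~]# crontab -e
        0 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1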

    hosts file check

      On master, slave1, and slave2 in turn.

      master

    [root@master Asia]# vi /etc/hosts

    Before:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.31  master

    After:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.31  master
    192.168.80.32  slave1
    192.168.80.33  slave2

      slave1

    Before:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.32  slave1

    After:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.32  slave1
    192.168.80.31  master
    192.168.80.33  slave2

    slave2

    Before:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.33  slave2

    After:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.33  slave2
    192.168.80.31  master
    192.168.80.32  slave1

    Disable the firewall

      On master, slave1, and slave2 in turn.

    I consulted a number of references and people about this; just disable it permanently.

    http://www.cnblogs.com/baiboy/p/4639474.html

    master

    [root@master Asia]# chkconfig iptables off
    [root@master Asia]# service iptables status

    slave1

    [root@slave1 Asia]# chkconfig iptables off
    [root@slave1 Asia]# service iptables status

    slave2

    [root@slave2 Asia]# chkconfig iptables off
    [root@slave2 Asia]# service iptables status
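    Note that chkconfig iptables off only keeps the firewall from starting on subsequent boots; to stop the running service immediately as well, you can additionally run on each node:

        [root@master Asia]# service iptables stop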

    Step 4: Building a 3-node hadoop distributed mini-cluster -- preparation (passwordless SSH between master, slave1, and slave2)

      Configuring passwordless SSH

    1. Passwordless access of each machine to itself

    master

    [root@master Asia]# su hadoop
    [hadoop@master Asia]$ cd
    [hadoop@master ~]$ cd .ssh
    [hadoop@master ~]$ mkdir .ssh
    [hadoop@master ~]$ ssh-keygen -t rsa
     (/home/hadoop/.ssh/id_rsa): (press Enter)
    Enter passphrase (empty for no passphrase): (press Enter)
    Enter same passphrase again: (press Enter)

    [hadoop@master ~]$ pwd
    [hadoop@master ~]$ cd .ssh
    [hadoop@master .ssh]$ ls
    [hadoop@master .ssh]$ cat id_rsa.pub >> authorized_keys
    [hadoop@master .ssh]$ ls
    [hadoop@master .ssh]$ cat authorized_keys
    ssh-rsa
    [hadoop@master .ssh]$ cd ..
    [hadoop@master ~]$ chmod 700 .ssh
    [hadoop@master ~]$ chmod 600 .ssh/*
    [hadoop@master ~]$ ls -al

    [hadoop@master ~]$ ssh master
    [hadoop@master ~]$ su root
    Password:
    [root@master hadoop]# yum -y install openssh-clients

    [root@master hadoop]# su hadoop
    [hadoop@master ~]$ ssh master
    Are you sure you want to continue connecting (yes/no)? yes
    [hadoop@master ~]$ ssh master

    slave1

    [root@slave1 Asia]# su hadoop
    [hadoop@slave1 Asia]$ cd
    [hadoop@slave1 ~]$ cd .ssh
    [hadoop@slave1 ~]$ mkdir .ssh
    [hadoop@slave1 ~]$ ssh-keygen -t rsa
     (/home/hadoop/.ssh/id_rsa): (press Enter)
    Enter passphrase (empty for no passphrase): (press Enter)
    Enter same passphrase again: (press Enter)

    [hadoop@slave1 ~]$ pwd
    [hadoop@slave1 ~]$ cd .ssh
    [hadoop@slave1 .ssh]$ ls
    [hadoop@slave1 .ssh]$ cat id_rsa.pub >> authorized_keys
    [hadoop@slave1 .ssh]$ ls
    [hadoop@slave1 .ssh]$ cat authorized_keys
    ssh-rsa
    [hadoop@slave1 .ssh]$ cd ..
    [hadoop@slave1 ~]$ chmod 700 .ssh
    [hadoop@slave1 ~]$ chmod 600 .ssh/*
    [hadoop@slave1 ~]$ ls -al

    [hadoop@slave1 ~]$ ssh slave1
    [hadoop@slave1 ~]$ su root
    [root@slave1 hadoop]# yum -y install openssh-clients

    [root@slave1 hadoop]# su hadoop
    [hadoop@slave1 ~]$ ssh slave1
    Are you sure you want to continue connecting (yes/no)? yes
    [hadoop@slave1 ~]$ ssh slave1

    slave2

    [root@slave2 Asia]# su hadoop
    [hadoop@slave2 Asia]$ cd
    [hadoop@slave2 ~]$ cd .ssh
    [hadoop@slave2 ~]$ mkdir .ssh
    [hadoop@slave2 ~]$ ssh-keygen -t rsa
    (/home/hadoop/.ssh/id_rsa): (press Enter)
    Enter passphrase (empty for no passphrase): (press Enter)
    Enter same passphrase again: (press Enter)

    [hadoop@slave2 ~]$ pwd
    [hadoop@slave2 ~]$ cd .ssh
    [hadoop@slave2 .ssh]$ ls
    [hadoop@slave2 .ssh]$ cat id_rsa.pub >> authorized_keys
    [hadoop@slave2 .ssh]$ ls
    [hadoop@slave2 .ssh]$ cat authorized_keys
    ssh-rsa
    [hadoop@slave2 .ssh]$ cd ..
    [hadoop@slave2 ~]$ chmod 700 .ssh
    [hadoop@slave2 ~]$ chmod 600 .ssh/*
    [hadoop@slave2 ~]$ ls -al

    [hadoop@slave2 ~]$ ssh slave2
    [hadoop@slave2 ~]$ su root
    [root@slave2 hadoop]# yum -y install openssh-clients

    [root@slave2 hadoop]# su hadoop
    [hadoop@slave2 ~]$ ssh slave2
    Are you sure you want to continue connecting (yes/no)? yes
    [hadoop@slave2 ~]$ ssh slave2

      At this point, each machine's passwordless access to itself has been set up successfully.

    2. Passwordless access between the machines

      2.1 Connecting to master

        Complete slave1 with master, and slave2 with master.

      2.1.1 slave1 with master

     

    [hadoop@slave1 ~]$ cat ~/.ssh/id_rsa.pub | ssh hadoop@master 'cat >> ~/.ssh/authorized_keys'
    Are you sure you want to continue connecting (yes/no)? yes
    hadoop@master's password: (the hadoop user's password on master)

    [hadoop@master ~]$ cd .ssh
    [hadoop@master .ssh]$ ls
    [hadoop@master .ssh]$ cat authorized_keys

      2.1.2 slave2 with master

    [hadoop@slave2 ~]$ cat ~/.ssh/id_rsa.pub | ssh hadoop@master 'cat >> ~/.ssh/authorized_keys'

    Distribute master's authorized_keys to slave1:

    [hadoop@master .ssh]$ scp -r authorized_keys hadoop@slave1:~/.ssh/
    Are you sure you want to continue connecting (yes/no)? yes
    hadoop@slave1's password: (the password is hadoop)

    Check:

    [hadoop@slave1 ~]$ cd .ssh
    [hadoop@slave1 .ssh]$ ls
    [hadoop@slave1 .ssh]$ cat authorized_keys

    Distribute master's authorized_keys to slave2:

    [hadoop@master .ssh]$ scp -r authorized_keys hadoop@slave2:~/.ssh/
    Are you sure you want to continue connecting (yes/no)? yes
    hadoop@slave2's password: (the password is hadoop)

      At this point, slave1 with master and slave2 with master are complete.

    Now let's test it in every direction.

    Starting from master:

    [hadoop@master .ssh]$ ssh slave1
    [hadoop@master .ssh]$ ssh slave2

    [hadoop@slave1 .ssh]$ ssh master
    [hadoop@master ~]$ exit
    [hadoop@slave1 .ssh]$ ssh slave2
    Are you sure you want to continue connecting (yes/no)? yes
    [hadoop@slave2 ~]$ exit
    [hadoop@slave1 .ssh]$ ssh slave2
    [hadoop@slave2 ~]$ exit
    [hadoop@slave1 .ssh]$

    [hadoop@slave2 .ssh]$ ssh master
    [hadoop@master ~]$ exit
    [hadoop@slave2 .ssh]$ ssh slave1
    Are you sure you want to continue connecting (yes/no)? yes
    [hadoop@slave1 ~]$ exit
    [hadoop@slave2 .ssh]$ ssh slave1
    [hadoop@slave1 ~]$ exit

      At this point, the passwordless SSH configuration among the 3 nodes master, slave1, and slave2 is complete.

        In two parts:

          Part one: each machine's passwordless access to itself.

          Part two: passwordless access between the machines.
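    As a quick sanity check (a hedged addition, not in the original), this loop run from any node should print all three hostnames without ever prompting for a password:

        [hadoop@master ~]$ for h in master slave1 slave2; do ssh $h hostname; done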

    Step 5: Building a 3-node hadoop distributed mini-cluster -- preparation (JDK installation and configuration on master, slave1, and slave2)

    master

    [hadoop@master ~]$ ls
    [hadoop@master ~]$ cd app/
    [hadoop@master app]$ ls
    [hadoop@master app]$ rz
    [hadoop@master app]$ su root
    [root@master app]# yum -y install lrzsz

    [hadoop@master app]$ pwd
    [hadoop@master app]$ tar -zxvf jdk-7u79-linux-x64.tar.gz

    [hadoop@master app]$ ls
    [hadoop@master app]$ rm jdk-7u79-linux-x64.tar.gz
    [hadoop@master app]$ ls
    [hadoop@master app]$ su root
    [root@master app]# vi /etc/profile

    JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
    CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
    PATH=$JAVA_HOME/bin:$PATH
    export JAVA_HOME CLASSPATH PATH

    [root@master app]# source /etc/profile
    [root@master app]# java -version

    slave1

    [hadoop@slave1 ~]$ ls
    [hadoop@slave1 ~]$ cd app/
    [hadoop@slave1 app]$ ls
    [hadoop@slave1 app]$ rz
    [hadoop@slave1 app]$ su root
    [root@slave1 app]# yum -y install lrzsz

    [root@master app]# su hadoop
    [hadoop@master app]$ ls
    [hadoop@master app]$ scp -r ./jdk1.7.0_79 slave1:/home/hadoop/app/

    [hadoop@master app]$ ls
    [hadoop@master app]$ scp -r ./jdk1.7.0_79 slave2:/home/hadoop/app/

    After the distribution, configure /etc/profile separately on slave1 and slave2:

    [hadoop@slave1 app]$ su root
    [root@slave1 app]# vi /etc/profile

    JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
    CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
    PATH=$JAVA_HOME/bin:$PATH
    export JAVA_HOME CLASSPATH PATH

     

    [root@slave2 app]# vi /etc/profile

    JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
    CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
    PATH=$JAVA_HOME/bin:$PATH
    export JAVA_HOME CLASSPATH PATH

    [root@slave1 app]# source /etc/profile
    [root@slave1 app]# su hadoop
    [hadoop@slave1 app]$ ls
    jdk1.7.0_79
    [hadoop@slave1 app]$ java -version

    [root@slave2 app]# source /etc/profile
    [root@slave2 app]# su hadoop
    [hadoop@slave2 app]$ ls
    jdk1.7.0_79
    [hadoop@slave2 app]$ java -version

    Step 6: Building a 3-node hadoop distributed mini-cluster -- preparation (hadoop tarball installation and configuration on master, slave1, and slave2)

    http://www.cnblogs.com/lanxuezaipiao/p/3525554.html   

    master

     

    [hadoop@master app]$ tar -zxvf hadoop-2.6.0.tar.gz

    [hadoop@master app]$ ls
    [hadoop@master app]$ rm hadoop-2.6.0.tar.gz
    [hadoop@master app]$ ls

    [hadoop@master app]$ su root
    [root@master app]# vi /etc/profile

    JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
    HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0
    CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
    PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
    export JAVA_HOME HADOOP_HOME CLASSPATH PATH

    [root@master app]# source /etc/profile

    Check the hadoop version:

    [root@master app]# hadoop version

    [root@master app]# su hadoop
    [hadoop@master app]$ cd hadoop-2.6.0
    [hadoop@master hadoop-2.6.0]$ cd etc/hadoop
    [hadoop@master hadoop]$ vi hadoop-env.sh

    export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

    [hadoop@master hadoop]$ vi core-site.xml

           

      <property>
                    <name>fs.default.name</name>
                    <value>hdfs://master:9000</value>
                    <description>The name of the default file system, using 9000 port.</description>
            </property>
            <property>
                    <name>hadoop.tmp.dir</name>
                    <value>/tmp</value>
                    <description>A base for other temporary directories.</description>
            </property>

      Note: the scheme is hdfs:// (the screenshot in the original source had an error here). Also, in Hadoop 2.x the key fs.default.name is deprecated in favor of fs.defaultFS, though the old name still works.

    [hadoop@master hadoop]$ vi hdfs-site.xml

         

      <property>
                    <name>dfs.namenode.rpc-address</name>
                    <value>master:9000</value>
            </property>
            <property>
                    <name>dfs.replication</name>
                    <value>2</value>
                    <description>Set to 1 for pseudo-distributed mode; set to 2 or 3 for a distributed cluster.</description>
            </property>

    [hadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml
    [hadoop@master hadoop]$ vi mapred-site.xml

            <property>
                    <name>mapred.job.tracker</name>
                    <value>master:9001</value>
            </property>
            <property>
                    <name>mapreduce.framework.name</name>
                    <value>yarn</value>
            </property>

     

    The yarn.nodemanager.aux-services property below belongs in yarn-site.xml:

    [hadoop@master hadoop]$ vi yarn-site.xml

    <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
    </property>

    [hadoop@master hadoop]$ vi masters

    master        (the masters file lists the host for the SecondaryNameNode; presumably master here)

    [hadoop@master hadoop]$ vi slaves

    slave1
    slave2

      At this point, the hadoop configuration on master is complete.

    Distribute the hadoop directory and its configuration to slave1 and slave2:

    [hadoop@master hadoop]$ cd /home/hadoop/app/

    [hadoop@master app]$ ls

    [hadoop@master app]$ scp -r ./hadoop-2.6.0 slave1:/home/hadoop/app/

    [hadoop@master app]$ scp -r ./hadoop-2.6.0 slave2:/home/hadoop/app/

    At this point, the hadoop directory is in place on master, slave1, and slave2.

    Next, configure the hadoop path in /etc/profile on each slave:

    [hadoop@slave1 app]$ su root

    [root@slave1 app]# vi /etc/profile

    JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

    HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0

    CLASSPATH=$JAVA_HOME/lib:$CLASSPATH

    PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

    export JAVA_HOME HADOOP_HOME CLASSPATH PATH

     

    [root@slave1 app]# source /etc/profile

    [root@slave1 app]# hadoop version

    [root@slave2 app]# vi /etc/profile

    JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

    HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0

    CLASSPATH=$JAVA_HOME/lib:$CLASSPATH

    PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

    export JAVA_HOME HADOOP_HOME CLASSPATH PATH

    [root@slave2 app]# source /etc/profile

    [root@slave2 app]# hadoop version

      At this point, the hadoop installation and configuration on all 3 nodes is complete.

    Step 7: Building a 3-node hadoop distributed mini-cluster -- preparation (formatting HDFS and starting the processes on master, slave1, and slave2)

    Check the running processes before the first format.

    Format the master node, i.e. run the format on the namenode:

    [hadoop@master bin]$ pwd
    /home/hadoop/app/hadoop-2.6.0/bin
    [hadoop@master bin]$ ./hadoop namenode -format

    Or, equivalently:

    [hadoop@master hadoop-2.6.0]$ bin/hadoop namenode -format
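    The hadoop namenode command still works in 2.6.0 but prints a deprecation warning; the current form of the same command is:

        [hadoop@master hadoop-2.6.0]$ bin/hdfs namenode -format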

    Start the hadoop cluster on the namenode:

    [hadoop@master sbin]$ ./start-all.sh        (note: start-all.sh lives in sbin, not bin)

    Or:

    [hadoop@master hadoop-2.6.0]$ sbin/start-all.sh
    Are you sure you want to continue connecting (yes/no)? yes
    [hadoop@master hadoop-2.6.0]$ jps
    1659 NameNode
    1964 ResourceManager
    2027 Jps
    1830 SecondaryNameNode
    [hadoop@master hadoop-2.6.0]$
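    On the slaves, jps should show the worker daemons. A hedged example of what to expect (the PIDs will differ):

        [hadoop@slave1 ~]$ jps
        1423 DataNode
        1536 NodeManager
        1621 Jps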

    Step 8: Building a 3-node hadoop distributed mini-cluster -- preparation (checking the cluster status of master, slave1, and slave2)

    If the NameNode's web interface does not come up as expected, it is likely because the NameNode HTTP address still needs to be configured. Add the following to hdfs-site.xml, with the same value on master, slave1, and slave2:

           <property>
                    <name>dfs.namenode.http-address</name>
                    <value>master:50070</value>
            </property>

    Worth pointing out: since hadoop-2.*, the new yarn architecture serves its web interface on port 8088 by default.

    Does hadoop 2.* still have port 50030? No. There is no jobtracker and no tasktracker any more;

    they have been replaced by the ResourceManager and the NodeManager.
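    As a hedged sanity check (not in the original), you can probe both web interfaces from the command line and expect an HTTP 200 from each:

        [hadoop@master ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://master:50070    # HDFS NameNode UI
        [hadoop@master ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://master:8088     # YARN ResourceManager UI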
