Hadoop distributed cluster installation

     
    CentOS 7 install Hadoop 2.7 - preparation
     
    Three machines (each with more than 2 GB of RAM); on each one, write the hosts entries and set the hostname (a sketch follows the list below):
     
    172.7.15.113 master
     
    172.7.15.114 slave1
     
    172.7.15.115 slave2
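     
    A minimal sketch for CentOS 7: set the name on each machine with hostnamectl, and append the same three entries to /etc/hosts on all of them:
     
    hostnamectl set-hostname master    # on 172.7.15.113; use slave1/slave2 on the other two
    cat >> /etc/hosts << EOF
    172.7.15.113 master
    172.7.15.114 slave1
    172.7.15.115 slave2
    EOF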
     
    Disable SELinux:
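     
    On CentOS 7 this is typically done by turning SELinux off immediately and keeping it off across reboots:
     
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config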
     
    Disable firewalld and switch to iptables-services:
     
    systemctl disable firewalld
     
    systemctl stop firewalld
     
    yum install -y iptables-services
     
    systemctl enable iptables
     
    systemctl start iptables
     
    service iptables save
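     
    Note that iptables-services ships a default ruleset that only allows SSH, which would block the ports Hadoop needs (9000, 50070, 8088 and others). A minimal sketch, assuming the three machines sit alone on the trusted 172.7.15.0/24 subnet, is to accept all traffic between cluster members and save the rules:
     
    iptables -I INPUT -s 172.7.15.0/24 -j ACCEPT
    service iptables save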
    

       

     
    CentOS 7 install Hadoop 2.7 - key-based SSH login
     
    master must be able to log in to itself and to both slaves with an SSH key.
     
    Generate a key pair on master:
     
    ssh-keygen    // press Enter at every prompt
     
    Append the contents of ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on this machine and on both slaves.
     
    Set the permissions of ~/.ssh/authorized_keys to 600 on this machine and on both slaves:
     
    chmod 600 ~/.ssh/authorized_keys
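     
    Alternatively, ssh-copy-id (from the openssh-clients package) appends the key and sets the permissions in one step; a sketch, assuming root is the login user:
     
    ssh-copy-id root@master
    ssh-copy-id root@slave1
    ssh-copy-id root@slave2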
     
     
    On master:
     
    ssh master
     
    ssh slave1
     
    ssh slave2
     
    Each should log in directly, without a password prompt.
     
     
     
    2.1 Install Hadoop - install the JDK
     
    Hadoop 2.7 needs JDK 1.7 installed.
    Download from:
    http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
     
    Unpack the archive: tar zxf jdk1.7.0_79.tar.gz
     
    mv jdk1.7.0_79 /usr/local/
     
    Create the environment configuration with vim /etc/profile.d/java.sh and add:
     
    export JAVA_HOME=/usr/local/jdk1.7.0_79
    export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$PATH:$JAVA_HOME/bin
     
    source /etc/profile.d/java.sh
     
    Run java -version to check that the settings took effect.
     
    Repeat the steps above on slave1 and slave2.
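     
    With key login already in place, the JDK and the profile script can instead be pushed from master in one go; a sketch, assuming rsync is installed on the slaves:
     
    for h in slave1 slave2; do
        rsync -av /usr/local/jdk1.7.0_79 $h:/usr/local/
        rsync -av /etc/profile.d/java.sh $h:/etc/profile.d/
    done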
     
     
    2.2 Install Hadoop - download Hadoop
     
    Perform the following on master.
    Download page: http://hadoop.apache.org/releases.html  mirror: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.1/
     
    Download the 2.7.1 binary release:  wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
     
     
    Unpack it: tar zxf hadoop-2.7.1.tar.gz
     
    mv hadoop-2.7.1 /usr/local/hadoop
     
    cd /usr/local/hadoop
     
    mkdir tmp dfs dfs/data dfs/name
     
    Copy the /usr/local/hadoop directory to both slaves (rsync must be installed on slave1 and slave2):
     
    rsync -av /usr/local/hadoop  slave1:/usr/local/
     
    rsync -av /usr/local/hadoop  slave2:/usr/local/
     
    

       

     
    2.3 Install Hadoop - configure Hadoop
     
    On master:
    vim /usr/local/hadoop/etc/hadoop/core-site.xml
    <configuration>
        <!-- NameNode RPC endpoint; HDFS clients connect here -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://172.7.15.113:9000</value>
        </property>
        <!-- Base directory for Hadoop's temporary files -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/hadoop/tmp</value>
        </property>
        <!-- Read/write buffer size in bytes -->
        <property>
            <name>io.file.buffer.size</name>
            <value>131702</value>
        </property>
    </configuration>
     
    

      

    On master:
     
    vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
    <configuration>
        <!-- Where the NameNode stores the filesystem image -->
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/hadoop/dfs/name</value>
        </property>
        <!-- Where DataNodes store block data -->
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/hadoop/dfs/data</value>
        </property>
        <!-- Two replicas: one per DataNode in this two-slave cluster -->
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
        <!-- Secondary NameNode HTTP address -->
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>172.7.15.113:9001</value>
        </property>
        <!-- Enable the WebHDFS REST API -->
        <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
        </property>
    </configuration>
     
    

      

     
    On master (a stock 2.7.1 tarball ships only mapred-site.xml.template, so create the file from the template first):
     
    cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
    vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
    <configuration>
        <!-- Run MapReduce jobs on YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <!-- JobHistory server RPC and web addresses -->
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>172.7.15.113:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>172.7.15.113:19888</value>
        </property>
    </configuration>
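     
    One note on the jobhistory addresses above: the start scripts used later do not launch the JobHistory server, so for ports 10020 and 19888 to answer, it has to be started separately on master:
     
    /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver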
    

      

     
    On master:
     
    vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
    <configuration>
        <!-- Auxiliary service that serves map output to reducers -->
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <!-- Class implementing the shuffle service -->
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <!-- ResourceManager endpoints, all on master -->
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>172.7.15.113:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>172.7.15.113:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>172.7.15.113:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>172.7.15.113:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>172.7.15.113:8088</value>
        </property>
        <!-- Memory (MB) each NodeManager offers to containers -->
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>768</value>
        </property>
    </configuration>
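     
    A caveat on the 768 MB setting above: YARN's default yarn.scheduler.minimum-allocation-mb is 1024 MB, and in Hadoop 2.x the ResourceManager refuses to register a NodeManager that offers less memory than the minimum allocation. A sketch of a fix for these small machines, assuming the other scheduler defaults are in effect, is to lower the minimum in the same file:
     
        <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>256</value>
        </property>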
     
    

      

     
    Perform the following on master:
     
    cd /usr/local/hadoop/etc/hadoop
    vi hadoop-env.sh    // set JAVA_HOME
    export JAVA_HOME=/usr/local/jdk1.7.0_79
    vi yarn-env.sh      // set JAVA_HOME
    export JAVA_HOME=/usr/local/jdk1.7.0_79
    vi slaves           // replace its contents with:
    172.7.15.114
    172.7.15.115
    

      

     
    Sync the etc directory on master to both slaves:
    rsync -av /usr/local/hadoop/etc/ slave1:/usr/local/hadoop/etc/
    rsync -av /usr/local/hadoop/etc/ slave2:/usr/local/hadoop/etc/
    

      

     
    CentOS 7 install Hadoop 2.7 - start Hadoop
     
    Operating on master is enough; the daemons on both slaves are started automatically.
     
    Initialize (format the NameNode):
    /usr/local/hadoop/bin/hdfs namenode -format
    Start all services:
    /usr/local/hadoop/sbin/start-all.sh
    Stop all services:
    /usr/local/hadoop/sbin/stop-all.sh
    Web UIs:
    Open http://172.7.15.113:8088/ in a browser (YARN ResourceManager)
    Open http://172.7.15.113:50070/ in a browser (HDFS NameNode)
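     
    If startup succeeded, running jps (shipped with the JDK) on each host should list the expected daemons for this layout: NameNode, SecondaryNameNode and ResourceManager on master; DataNode and NodeManager on each slave:
     
    jps    // run on master, then on slave1 and slave2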
    

      

     
    CentOS 7 install Hadoop 2.7 - test Hadoop
     
    Perform the following on master:
    cd /usr/local/hadoop
    Create a test directory:   bin/hdfs dfs -mkdir /123
    If this fails with "copyFromLocal: Cannot create directory /123/. Name node is in safe mode.",
    the NameNode is in safe mode; leave it with:
    bin/hdfs dfsadmin -safemode leave
    Copy LICENSE.txt from the current directory into HDFS:
    bin/hdfs dfs -copyFromLocal ./LICENSE.txt  /123
    List the files under /123:   bin/hdfs dfs -ls /123
    Analyze LICENSE.txt with wordcount:   bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /123/LICENSE.txt /output/123
    bin/hdfs dfs -ls /output/123   // list the analysis output
    bin/hdfs dfs -cat /output/123/part-r-00000   // view the result
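     
    wordcount emits one word and its count per line, separated by a tab, so the most frequent words can be pulled out with standard shell tools; a usage sketch:
     
    bin/hdfs dfs -cat /output/123/part-r-00000 | sort -k2 -rn | head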
Original article: https://www.cnblogs.com/weifeng1463/p/10885974.html