  • Hadoop 3.1.1 HA High-Availability Distributed Cluster Installation and Deployment

     

    1. Environment Overview

    Download link for the software used below: https://pan.baidu.com/s/1hpcXUSJe85EsU9ara48MsQ

    Servers: CentOS 6.8, with 2 namenodes and 3 datanodes

    ZooKeeper cluster: 192.168.67.11:2181,192.168.67.12:2181

    JDK: jdk-8u191-linux-x64.tar.gz

    Hadoop: hadoop-3.1.1.tar.gz

    Node information:

    Node       IP              namenode  datanode  resourcemanager  journalnode
    namenode1  192.168.67.101  √                   √                √
    namenode2  192.168.67.102  √                   √                √
    datanode1  192.168.67.103            √                          √
    datanode2  192.168.67.104            √                          √
    datanode3  192.168.67.105            √                          √

    2. Configure Passwordless SSH Login

    2.1 Run ssh-keygen -t rsa on every machine.

    2.2 Collect the public keys (~/.ssh/id_rsa.pub) from all machines into a single authorized_keys file and distribute it to every machine (see the sketch after step 2.3).

    2.3 Set permissions: chmod 600 ~/.ssh/authorized_keys
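
    A minimal sketch of steps 2.2 and 2.3, run from namenode1 as root after every machine has generated its key pair. It assumes the /etc/hosts entries from section 3 are already in place (otherwise substitute the IPs), and each ssh/scp here still prompts for a password since trust is not yet established:

    # append the local public key, then every remote machine's public key
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    for h in namenode2 datanode1 datanode2 datanode3; do
        ssh $h "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
    done
    # distribute the combined file back to every machine and fix permissions
    for h in namenode2 datanode1 datanode2 datanode3; do
        scp ~/.ssh/authorized_keys $h:~/.ssh/
        ssh $h "chmod 600 ~/.ssh/authorized_keys"
    done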

    3. Configure hosts:

    vim /etc/hosts
    
    # Add the following entries
    192.168.67.101 namenode1
    192.168.67.102 namenode2
    192.168.67.103 datanode1
    192.168.67.104 datanode2
    192.168.67.105 datanode3
    # Distribute the hosts file to the other machines
    scp /etc/hosts namenode2:/etc/hosts
    scp /etc/hosts datanode1:/etc/hosts
    scp /etc/hosts datanode2:/etc/hosts
    scp /etc/hosts datanode3:/etc/hosts
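
    With the keys and hosts entries in place, a quick loop verifies that name resolution and passwordless login both work (each line should print the remote hostname without a password prompt):

    for h in namenode2 datanode1 datanode2 datanode3; do
        ssh $h hostname
    done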

    4. Disable the Firewall

    service iptables stop
    chkconfig iptables off

    5. Install the JDK

    tar -zxvf /usr/local/soft/jdk-8u191-linux-x64.tar.gz -C /usr/local/
    
    vim /etc/profile
    
    # Add the JDK environment variables
    export JAVA_HOME=/usr/local/jdk1.8.0_191
    export JRE_HOME=${JAVA_HOME}/jre
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
    export PATH=${JAVA_HOME}/bin:$PATH
    # Make the variables take effect
    source /etc/profile
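
    To confirm the JDK is on the PATH (the output should mention version 1.8.0_191):

    java -version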

    6. Install Hadoop

    tar -zxvf /usr/local/soft/hadoop-3.1.1.tar.gz -C /usr/local/
    vim /etc/profile
    
    # Add the Hadoop environment variables
    export HADOOP_HOME=/usr/local/hadoop-3.1.1
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
    # Make the variables take effect
    source /etc/profile
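    # Quick check that the Hadoop binaries are reachable (should print Hadoop 3.1.1)
    hadoop version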
    # Edit start-dfs.sh and stop-dfs.sh, adding the following
    vim /usr/local/hadoop-3.1.1/sbin/start-dfs.sh
    vim /usr/local/hadoop-3.1.1/sbin/stop-dfs.sh
    
    # Add the startup users
    HDFS_DATANODE_USER=root
    HDFS_DATANODE_SECURE_USER=root
    HDFS_NAMENODE_USER=root
    HDFS_SECONDARYNAMENODE_USER=root
    HDFS_JOURNALNODE_USER=root
    HDFS_ZKFC_USER=root
     
    # Edit start-yarn.sh and stop-yarn.sh, adding the following
    vim /usr/local/hadoop-3.1.1/sbin/start-yarn.sh
    vim /usr/local/hadoop-3.1.1/sbin/stop-yarn.sh
    
    # Add the startup users
    YARN_RESOURCEMANAGER_USER=root
    HDFS_DATANODE_SECURE_USER=root
    YARN_NODEMANAGER_USER=root
    vim /usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
    
    # Add the following
    export JAVA_HOME=/usr/local/jdk1.8.0_191
    export HADOOP_HOME=/usr/local/hadoop-3.1.1
    # Edit the workers file
    vim /usr/local/hadoop-3.1.1/etc/hadoop/workers

    # Replace the contents with the three datanode hostnames, one per line:
    datanode1
    datanode2
    datanode3
     
    vim /usr/local/hadoop-3.1.1/etc/hadoop/core-site.xml
    
    # Change the configuration to the following
    <configuration>
        <!-- Set the HDFS nameservice to mycluster -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://mycluster/</value>
        </property>
    
        <!-- Hadoop temporary directory -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/hadoop-3.1.1/hdfs/temp</value>
        </property>
    
        <!-- ZooKeeper quorum (matching the cluster from section 1) -->
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>192.168.67.11:2181,192.168.67.12:2181</value>
        </property>
    </configuration>
     
    vim /usr/local/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
    
    # Change the configuration to the following
    <configuration>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/hadoop-3.1.1/hdfs/name</value>
        </property>
        
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/hadoop-3.1.1/hdfs/data</value>
        </property>
        
        <property>
            <name>dfs.nameservices</name>
            <value>mycluster</value>
        </property>
        
        <property>
            <name>dfs.ha.namenodes.mycluster</name>
            <value>nn1,nn2</value>
        </property>
        
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn1</name>
            <value>namenode1:9000</value>
        </property>
        
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn2</name>
            <value>namenode2:9000</value>
        </property>
        
        <property>
            <name>dfs.namenode.http-address.mycluster.nn1</name>
            <value>namenode1:50070</value>
        </property>
        
        <property>
            <name>dfs.namenode.http-address.mycluster.nn2</name>
            <value>namenode2:50070</value>
        </property>
        
        <!-- Enable automatic HA failover -->
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
        
        <!-- journalnode configuration -->
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://namenode1:8485;namenode2:8485;datanode1:8485;datanode2:8485;datanode3:8485/mycluster</value>
        </property>
        
        <property>
            <name>dfs.client.failover.proxy.provider.mycluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        
        <!-- When a failover occurs, the standby node must fence the old active
             NameNode (kill the unhealthy NameNode process) before taking over.
             The sshfence method does this by SSHing to the active node and using
             fuser to find and kill the NameNode process. Here shell(/bin/true)
             is used instead: it always reports success, effectively skipping
             real fencing (the SSH settings below only matter with sshfence). -->
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>shell(/bin/true)</value>
        </property>
        
        <!-- SSH private key (used by sshfence) -->
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_rsa</value>
        </property>
        
        <!-- SSH connection timeout -->
        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
        </property>
        
        <!-- JournalNode edits storage directory -->
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/usr/local/hadoop-3.1.1/hdfs/journaldata</value>
        </property>
        
        <property>
            <name>dfs.qjournal.write-txns.timeout.ms</name>
            <value>60000</value>
        </property>
    </configuration>
    vim /usr/local/hadoop-3.1.1/etc/hadoop/mapred-site.xml
    
    # Change the configuration to the following
    <configuration>
        <!-- Run the MapReduce framework on YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>
    vim /usr/local/hadoop-3.1.1/etc/hadoop/yarn-site.xml
    
    # Change the configuration to the following
    <configuration>
        <!-- Site specific YARN configuration properties -->
        <!-- Enable ResourceManager HA -->
        <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
        </property>
    
        <!-- ResourceManager cluster id -->
        <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>yrc</value>
        </property>
    
        <!-- Logical ids of the two ResourceManagers -->
        <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
        </property>
    
        <!-- Hostname of each ResourceManager -->
        <property>
            <name>yarn.resourcemanager.hostname.rm1</name>
            <value>namenode1</value>
        </property>
    
        <property>
            <name>yarn.resourcemanager.hostname.rm2</name>
            <value>namenode2</value>
        </property>
    
        <!-- ZooKeeper cluster address -->
        <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>192.168.67.11:2181,192.168.67.12:2181</value>
        </property>
    
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>
    # Distribute these modified files to the other 4 servers (a sketch follows the list)
    /usr/local/hadoop-3.1.1/sbin/start-dfs.sh
    /usr/local/hadoop-3.1.1/sbin/stop-dfs.sh
    /usr/local/hadoop-3.1.1/sbin/start-yarn.sh
    /usr/local/hadoop-3.1.1/sbin/stop-yarn.sh
    /usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
    /usr/local/hadoop-3.1.1/etc/hadoop/workers
    /usr/local/hadoop-3.1.1/etc/hadoop/core-site.xml
    /usr/local/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
    /usr/local/hadoop-3.1.1/etc/hadoop/mapred-site.xml
    /usr/local/hadoop-3.1.1/etc/hadoop/yarn-site.xml
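
    A sketch of the distribution step, assuming identical install paths on every node:

    for h in namenode2 datanode1 datanode2 datanode3; do
        scp /usr/local/hadoop-3.1.1/sbin/{start-dfs.sh,stop-dfs.sh,start-yarn.sh,stop-yarn.sh} \
            $h:/usr/local/hadoop-3.1.1/sbin/
        scp /usr/local/hadoop-3.1.1/etc/hadoop/{hadoop-env.sh,workers,core-site.xml,hdfs-site.xml,mapred-site.xml,yarn-site.xml} \
            $h:/usr/local/hadoop-3.1.1/etc/hadoop/
    done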
     
    First-time startup sequence
    1. Make sure the configured ZooKeeper servers are running.
    2. On every journalnode machine, start the journalnode: hdfs --daemon start journalnode
    3. On namenode1, format the ZKFC state in ZooKeeper: hdfs zkfc -formatZK
    4. On namenode1, format the primary namenode: hdfs namenode -format
    5. On namenode1, start the primary namenode: hdfs --daemon start namenode
    6. On namenode2, sync the standby namenode from the formatted primary: hdfs namenode -bootstrapStandby
    7. Start the whole cluster: start-all.sh
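
    After start-all.sh, running jps on each node should show roughly the following daemons (a rough expectation derived from the roles configured above, not authoritative output):

    # namenode1 / namenode2:
    #   NameNode, DFSZKFailoverController, ResourceManager, JournalNode
    # datanode1..3:
    #   DataNode, NodeManager, JournalNode
    jps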
     

    7. Verification

    7.1 Web UIs:

    http://192.168.67.101:50070/

    http://192.168.67.102:50070/

    http://192.168.67.101:8088/

    http://192.168.67.102:8088/

    7.2 Shut down the server whose namenode is currently active and verify that the other namenode's state changes from standby to active.
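
    The HA state can also be checked from the command line (nn1/nn2 and rm1/rm2 are the ids configured in hdfs-site.xml and yarn-site.xml above):

    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
    yarn rmadmin -getServiceState rm1
    yarn rmadmin -getServiceState rm2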

     