  • Configuring Hadoop HA with ZooKeeper

    See the official documentation: http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

    Architecture diagram (image not reproduced in this copy)

    hdfs-site.xml configuration

    <configuration>
        <!-- Logical Nameservice and its NameNodes -->
        <property>
            <name>dfs.nameservices</name>
            <value>ns1</value>
        </property>
        <property>
            <name>dfs.ha.namenodes.ns1</name>
            <value>nn1,nn2</value>
        </property>
        <!-- NameNode RPC ADDRESS -->
        <property>
            <name>dfs.namenode.rpc-address.ns1.nn1</name>
            <value>hadoop-senior1.jason.com:8020</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.ns1.nn2</name>
            <value>hadoop-senior2.jason.com:8020</value>
        </property>
        <!-- NameNode HTTP WEB ADDRESS -->
        <property>
            <name>dfs.namenode.http-address.ns1.nn1</name>
            <value>hadoop-senior1.jason.com:50070</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.ns1.nn2</name>
            <value>hadoop-senior2.jason.com:50070</value>
        </property>
        <!-- NameNode Shared Edits Address -->
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://hadoop-senior1.jason.com:8485;hadoop-senior2.jason.com:8485;hadoop-senior3.jason.com:8485/ns1</value>
        </property>
        <!-- Client Failover Proxy Provider -->
        <property>
            <name>dfs.client.failover.proxy.provider.ns1</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- Fencing Method -->
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/home/jason/.ssh/id_rsa</value>
        </property>
        
        <!-- DIR Location -->
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/opt/app/hadoop-2.5.0/data/dfs/jn</value>
        </property>
        
         <property>
           <name>dfs.ha.automatic-failover.enabled</name>
           <value>true</value>
         </property>
        
    </configuration>
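
    A quick way to verify the nameservice wiring is a small Java client that targets the
    logical nameservice. This is only a sketch: the class name is made up, and the
    conf.set(...) calls simply mirror the properties above (in a real deployment the two
    XML files sit on the classpath and no programmatic configuration is needed). The
    point is that the client only ever sees hdfs://ns1; the ConfiguredFailoverProxyProvider
    decides which NameNode is currently active and retries after a failover.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative class name; not part of the Hadoop distribution.
    public class Ns1ClientCheck {
        public static void main(String[] args) throws Exception {
            // Mirror the HA-related entries from hdfs-site.xml / core-site.xml.
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://ns1");
            conf.set("dfs.nameservices", "ns1");
            conf.set("dfs.ha.namenodes.ns1", "nn1,nn2");
            conf.set("dfs.namenode.rpc-address.ns1.nn1", "hadoop-senior1.jason.com:8020");
            conf.set("dfs.namenode.rpc-address.ns1.nn2", "hadoop-senior2.jason.com:8020");
            conf.set("dfs.client.failover.proxy.provider.ns1",
                    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

            // The URI names the logical nameservice, not a specific NameNode host.
            FileSystem fs = FileSystem.get(URI.create("hdfs://ns1"), conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }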

    core-site.xml configuration

    <configuration>    
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://ns1</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/app/hadoop-2.5.0/data/tmp</value>
        </property>
        <property>
            <name>fs.trash.interval</name>
            <value>420</value>
        </property>
        <property>
           <name>ha.zookeeper.quorum</name>
           <value>hadoop-senior1.jason.com:2181,hadoop-senior2.jason.com:2181,hadoop-senior3.jason.com:2181</value>
        </property>
    </configuration>
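
    With dfs.ha.automatic-failover.enabled set to true and the quorum above, each ZKFC
    keeps its election znodes in ZooKeeper once `hdfs zkfc -formatZK` has been run on a
    NameNode host. The following sketch (class name illustrative; the /hadoop-ha/ns1
    path assumes the default ha.zookeeper.parent-znode) simply lists those znodes to
    confirm the failover controllers have registered.

    import java.util.List;
    import java.util.concurrent.CountDownLatch;

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative class name; needs the ZooKeeper client jar on the classpath.
    public class HaZnodeCheck {
        public static void main(String[] args) throws Exception {
            // Same quorum as ha.zookeeper.quorum in core-site.xml.
            String quorum = "hadoop-senior1.jason.com:2181,"
                    + "hadoop-senior2.jason.com:2181,"
                    + "hadoop-senior3.jason.com:2181";

            // Wait for the session to reach SyncConnected before issuing reads.
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper(quorum, 5000, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                }
            });
            connected.await();

            // Expect entries such as ActiveStandbyElectorLock / ActiveBreadcrumb
            // once the ZKFCs are running and an active NameNode has been elected.
            List<String> children = zk.getChildren("/hadoop-ha/ns1", false);
            System.out.println("znodes under /hadoop-ha/ns1: " + children);

            zk.close();
        }
    }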
  • Original article: https://www.cnblogs.com/xdlaoliu/p/7337229.html