  • HDFS high availability: hdfs-site.xml configuration and notes; see the official documentation for more detail

    <configuration>
    <!-- default block replication factor -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <!--the logical name for this new nameservice -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <!--unique identifiers for each NameNode in the nameservice  -->
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <!--the fully-qualified RPC address for each NameNode to listen on -->
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>bigdatastorm:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>bigdataspark:8020</value>
    </property>
    <!--the fully-qualified HTTP address for each NameNode to listen on -->
    <property>
      <name>dfs.namenode.http-address.mycluster.nn1</name>
      <value>bigdatastorm:50070</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn2</name>
      <value>bigdataspark:50070</value>
    </property>
    <!-- the Java class that HDFS clients use to contact the Active NameNode -->
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!--the URI which identifies the group of JNs where the NameNodes will write/read edits -->
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://bigdatastorm:8485;bigdataspark:8485;bigdatacloud:8485/mycluster</value>
    </property>
    <!--a list of scripts or Java classes which will be used to fence the Active NameNode during a failover-->
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>

    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/root/.ssh/id_dsa</value>
    </property>
    <!--the path where the JournalNode daemon will store its local state -->
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/opt/hadoop-2.5.1/data</value>
    </property>
    <!-- enable automatic failover via the ZKFailoverController -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>

    </configuration>
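With `dfs.ha.automatic-failover.enabled` set to `true`, clients also need to resolve the logical nameservice, and the ZKFailoverController needs a ZooKeeper quorum; both settings live in core-site.xml rather than hdfs-site.xml. A minimal companion sketch, assuming a ZooKeeper ensemble runs on the same three hosts at the default port 2181 (the quorum addresses are an assumption, not taken from the original post):

```xml
<configuration>
  <!-- point clients at the logical nameservice defined in hdfs-site.xml -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- ZooKeeper quorum for the ZKFailoverController; hosts and port are assumed -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>bigdatastorm:2181,bigdataspark:2181,bigdatacloud:2181</value>
  </property>
</configuration>
```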



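Bringing the HA pair up for the first time follows a fixed order: JournalNodes before the first NameNode format, and the standby bootstrapped from the active rather than formatted again. A hedged sketch of that sequence, assuming Hadoop 2.5.x, passwordless SSH, and a running ZooKeeper ensemble (each command runs on the host noted in its comment; these must target a live cluster):

```shell
# On each JournalNode host (bigdatastorm, bigdataspark, bigdatacloud)
hadoop-daemon.sh start journalnode

# On nn1 (bigdatastorm): format HDFS and start the first NameNode
hdfs namenode -format
hadoop-daemon.sh start namenode

# On nn2 (bigdataspark): copy nn1's metadata instead of formatting again
hdfs namenode -bootstrapStandby

# On either NameNode host: create the failover znode in ZooKeeper (run once)
hdfs zkfc -formatZK

# Start the remaining daemons and check which NameNode is active
start-dfs.sh
hdfs haadmin -getServiceState nn1
```

The order matters: `hdfs namenode -format` writes the initial edits to the JournalNodes, so they must be running first, and `-bootstrapStandby` copies that state to nn2 so the two NameNodes agree on the namespace.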
  • Original post: https://www.cnblogs.com/TendToBigData/p/10501452.html