  • Spark-14: Reading and writing HDFS from Spark when Hadoop runs in high-availability (HA) mode

    Method 1: set the HA properties programmatically on the SparkContext's Hadoop configuration

        val conf = new SparkConf().setAppName("Spark HA HDFS")
        val sc = new SparkContext(conf)

        // Point fs.defaultFS at the logical nameservice, then describe its two
        // NameNodes and the failover proxy provider the HDFS client uses to
        // locate the active one.
        sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://cluster1")
        sc.hadoopConfiguration.set("dfs.nameservices", "cluster1")
        sc.hadoopConfiguration.set("dfs.ha.namenodes.cluster1", "nn1,nn2")
        sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn1", "namenode001:8020")
        sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn2", "namenode002:8020")
        sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.cluster1",
          "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

    Method 2: load the cluster's HA-enabled configuration files

        val conf = new SparkConf().setAppName("Spark Word Count")
        val sc = new SparkContext(conf)
        // Load the Hadoop config files that already carry the HA settings.
        sc.hadoopConfiguration.addResource("cluster1/core-site.xml")
        sc.hadoopConfiguration.addResource("cluster1/hdfs-site.xml")

  • Original article: https://www.cnblogs.com/nucdy/p/6917701.html