  • Setting the HDFS and HBase replication factor (hadoop 2.5.2, hbase 0.98.6)

    HDFS replication and a basic write.

    core-site.xml
    hdfs-site.xml

    Copy both files from /etc/hdfs1/conf into the workspace so the client picks up the cluster configuration.
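
    For reference, the cluster-wide default replication factor is the standard `dfs.replication` property in hdfs-site.xml; a minimal fragment might look like this (the value 3 is the usual Hadoop default — the Java code below overrides it per file):

    ```xml
    <configuration>
      <!-- cluster-wide default; fs.create(path, (short) 2) below overrides it per file -->
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>
    ```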


    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // hadoop 2.5.2
    public class CopyOfHadoopDFSFileReadWrite {

        static void printAndExit(String str) {
            System.err.println(str);
            System.exit(1);
        }

        public static void main(String[] argv) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // path hardcoded for the demo, ignoring any command-line argument
            argv = new String[]{"/tmp/hello.txt"};
            Path outFile = new Path(argv[0]);
            if (fs.exists(outFile))
                printAndExit("Output already exists");
            // create the file with a per-file replication factor of 2
            FSDataOutputStream out = fs.create(outFile, (short) 2);
            try {
                out.write("hello blah blah blah".getBytes());
            } catch (IOException e) {
                System.out.println("Error while writing file");
            } finally {
                out.close();
            }
        }
    }
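
    To confirm that the per-file replication factor took effect, the HDFS shell can report it (a sketch assuming a running cluster and the path written above; note `-stat %r` only exists on newer Hadoop releases, so on 2.5.x read the second column of `-ls` instead):

    ```shell
    # second column of -ls is the replication factor (expect 2 here)
    hdfs dfs -ls /tmp/hello.txt

    # on newer Hadoop releases, -stat %r prints the replication factor directly
    hdfs dfs -stat %r /tmp/hello.txt

    # replication of an existing file can also be changed after the fact
    hdfs dfs -setrep -w 2 /tmp/hello.txt
    ```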

    hbase-site.xml
    Copy it from /etc/hyperbase1/conf.
    http://192.168.146.128:8180/#/dashboard — confirm the hyperbase1 service is running.


    import java.io.File;
    import java.util.UUID;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;
    import com.google.common.io.Files;

    public class DemoReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin ha = new HBaseAdmin(conf);
            String tableName = "demoReplication";
            HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName));
            HColumnDescriptor hcd = new HColumnDescriptor("f")
                    .setMaxVersions(30)
                    .setCompressionType(Algorithm.SNAPPY)    // compression
                    .setBloomFilterType(BloomType.ROW);
            htd.addFamily(hcd);
            ha.createTable(htd);
            HTable htable = new HTable(conf, tableName);
            // fixed row key so the demo is repeatable; a random UUID would also work
            String uuid = UUID.randomUUID().toString();
            uuid = "uuidsaaaabbbbuuide1";
            Put put = new Put(Bytes.toBytes(uuid));
            // store a binary file as the cell value (Files.toByteArray is Guava)
            File file = new File("C:\\Users\\microsoft\\workspace\\hbase\\hyper\\hbase-client-0.98.6-transwarp-tdh464.jar");
            put.add("f".getBytes(), Bytes.toBytes("q1"), Files.toByteArray(file));
            htable.put(put);
            htable.flushCommits();
            ha.flush(tableName);    // flush the memstore so an HFile lands on HDFS
            htable.close();
            ha.close();
        }
    }
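
    A sketch of reading the cell back, under the same assumptions (hbase 0.98 client API, a reachable cluster, and the table, family, qualifier and row key used in the write above):

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DemoReplicationRead {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable htable = new HTable(conf, "demoReplication");
            // same row key the write used
            Get get = new Get(Bytes.toBytes("uuidsaaaabbbbuuide1"));
            Result result = htable.get(get);
            // value stored under family "f", qualifier "q1"
            byte[] value = result.getValue(Bytes.toBytes("f"), Bytes.toBytes("q1"));
            System.out.println("read " + (value == null ? 0 : value.length) + " bytes");
            htable.close();
        }
    }
    ```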

     

    In the hbase shell, run:

    describeInJson 'demoReplication' ,'true','/tmp/demo'

    Then, outside the hbase shell, open /tmp/demo with vi and add special.column.replication at the position shown below to set the replication factor to 2:

    {
      "tableName" : "demoReplication",
      "base" : {
        "families" : [ {
          "FAMILY" : "f",
          "special.column.replication" : "2",
    Save /tmp/demo, then update the table definition (in the hbase shell):

    alterUseJson 'demoReplication' ,'/tmp/demo'

    Check the replication factor with ls:

    hdfs dfs -ls /hyperbase1/data/default/demoReplication/c38f234712a99d45797ef1bdd6c3b09a/f

    (Screenshot omitted.) Before the change the replication factor was 3; after, it is 2.
  • Original post: https://www.cnblogs.com/wifi0/p/6767847.html