  • Common HDFS operations: Java examples

    When learning Hadoop, the most common programming task is writing MapReduce programs. Sometimes, though, we also use Java programs for common HDFS operations: deleting a directory, creating a file, uploading a local file to HDFS, or even appending content to an existing HDFS file.

    This post walks through Java examples of these common HDFS operations to help deepen our understanding of HDFS. It is split into eight parts:

    1. Create a directory
    2. Create a file and write content to it
    3. Read a file's content
    4. Rename a path
    5. Get a file's last modification time
    6. Copy a local file to HDFS
    7. Append content to a file
    8. Delete a path

    These examples correspond to eight methods. Here is the full source code:

    package com.xxx.hadoop.hdfs;
    import java.io.IOException;
    import java.util.Date;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    /**
     * Common HDFS operations
     * @author hadoop
     *
     */
    public class HDFSMain {
        
        private static final Logger log = LoggerFactory.getLogger(HDFSMain.class);
     
        /**
         * Create a directory
         * @param fs
         */
        public static void mkdir(FileSystem fs) {
            Path path = new Path("/user/root/hdfs");
            try {
                boolean exists = fs.exists(path);
                if(!exists) {
                    log.info("path /user/root/hdfs doesn't exists.");
                    boolean status = fs.mkdirs(path);
                    log.info("mkdir success : "+status);
                }else {
                    log.info("path /user/root/hdfs exists.");
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        
        /**
         * Create a file and write content to it
         * @param fs
         */
        public static void createFile(FileSystem fs) {
            byte[] data = "hello,hdfs!
    ".getBytes();
            try {
                Path path = new Path("/user/root/hdfs/20190830.txt");
                FSDataOutputStream output = fs.create(path);
                output.write(data);
                output.close();
                log.info("createFile done.");
            }catch (IOException e) {
                e.printStackTrace();
            }
        }
        
        /**
         * Read a file's content
         * @param fs
         */
        public static void readFile(FileSystem fs) {
            Path path = new Path("/user/root/hdfs/20190830.txt");
            try {
                FSDataInputStream input = fs.open(path);
                byte[] buffer = new byte[1024];
                int result = input.read(buffer);
                if(result!=-1) {
                    log.info("file content is : "+new String(buffer));
                }
                input.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        
        /**
         * Rename a path
         * @param fs
         */
        public static void rename(FileSystem fs) {
            Path path = new Path("/user/root/hdfs");
            Path newPath = new Path("/user/root/2019");
            try {
                boolean status = fs.rename(path, newPath);
                log.info("path /user/root/hdfs rename success "+status+".");;
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        
        /**
         * Get a file's last modification time
         * @param fs
         */
        public static void getLastModificationTime(FileSystem fs) {
            Path file = new Path("/user/root/2019/20190830.txt");
            try {
                FileStatus status = fs.getFileStatus(file);
                long time = status.getModificationTime();
                log.info("file "+file.getName()+" last modification time :"+new Date(time));
            } catch (IOException e) {
                e.printStackTrace();
            }    
        }
        
        /**
         * Copy a local file to HDFS
         * @param fs
         */
        public static void copyToHDFS(FileSystem fs) {
            //Path src = new Path("/root/test.txt");
            Path src = new Path("D:\push.txt");
            Path dst = new Path("/user/root/2019/");
            try {
                fs.copyFromLocalFile(src, dst);
                log.info("copyFromLocal done.");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        
        /**
         * Append content to a file
         * @param fs
         */
        public static void appendToFile(FileSystem fs) {
            byte[] data = "append for hdfs!
    ".getBytes();
            Path path = new Path("/user/root/2019/20190830.txt");
            try {
                FSDataOutputStream output = fs.append(path);
                output.write(data);
                output.close();
                log.info("append to file ok.");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        
        /**
         * Delete a path
         * @param fs
         */
        public static void delete(FileSystem fs) {
            Path path = new Path("/user/root/2019");
            try {
                boolean exists = fs.exists(path);
                if(exists) {
                    boolean success = fs.delete(path,true);
                    log.info("path "+path.getName()+" delete successfully : "+success+".");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        
        public static void main(String[] args) {
            try {
                System.setProperty("HADOOP_USER_NAME", "root");
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://192.168.56.202:9000");
                conf.setBoolean("dfs.support.append", true);
                conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
                FileSystem fs = FileSystem.get(conf);
                //1 mkdir
                //mkdir(fs);
                //2 createFile
                //createFile(fs);
                //3 readFile
                //readFile(fs);
                //4 rename
                //rename(fs);
                //5 getLastModificationTime
                //getLastModificationTime(fs);
                //6 copyToHDFS
                //copyToHDFS(fs);
                //7 appendToFile
                appendToFile(fs);
                //8 delete
                //delete(fs);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
     
    }

    Before running the examples, note the initial layout of the HDFS directory.

    Run the first example; the console logs the following:

    2019-08-30 23:29:27 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] path /user/root/hdfs doesn't exist.
    2019-08-30 23:29:27 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] mkdir success : true

    Checking HDFS confirms that the hdfs directory was created successfully.

    The second example creates a text file under the hdfs directory and writes content to it. The console logs:

    2019-08-30 23:34:31 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] createFile done.

    Refreshing the DFS Locations tree in Eclipse shows the new file and its content.

    Run the third example; the console logs:

    2019-08-30 23:36:29 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] file content is : hello,hdfs!
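
    The readFile method above reads at most 1024 bytes in a single read() call, so longer files would be truncated. As a sketch of a more robust variant (assuming only the standard org.apache.hadoop.io.IOUtils helper that ships with Hadoop), the whole stream can be copied out instead:

    import java.io.ByteArrayOutputStream;
    import org.apache.hadoop.io.IOUtils;

    /**
     * Read an entire HDFS file into a String, regardless of its size.
     */
    public static String readWholeFile(FileSystem fs, Path path) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        FSDataInputStream input = fs.open(path);
        try {
            // copyBytes loops over the stream internally; 4096 is only the copy
            // buffer size, and 'false' leaves closing the streams to us.
            IOUtils.copyBytes(input, out, 4096, false);
        } finally {
            input.close();
        }
        return out.toString("UTF-8");
    }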

    Run the fourth example; the console logs:

    2019-08-30 23:38:29 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] path /user/root/hdfs rename success true.

    Refreshing the DFS Locations tree shows the renamed directory.

    Run the fifth example; the console logs:

    2019-08-30 23:40:17 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] file 20190830.txt last modification time :Fri Aug 30 23:36:36 CST 2019
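
    Relatedly, fs.listStatus of the standard FileSystem API reports the same metadata for every entry in a directory. A minimal sketch that prints the modification time of each entry under /user/root/2019:

    // List each entry of the directory with its last modification time.
    for (FileStatus status : fs.listStatus(new Path("/user/root/2019"))) {
        log.info(status.getPath().getName() + " last modified " + new Date(status.getModificationTime()));
    }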

    Run the sixth example; the console logs:

    2019-08-30 23:41:33 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] copyFromLocal done.

    Refreshing the DFS Locations tree shows the uploaded file.

    When this example runs, the local file it references lives on the disk of the machine running Eclipse, not on the machine where Hadoop is deployed. Since the client here runs on Windows, the path is D:\push.txt. Don't mix these up: the path must not point to a file on the Hadoop machine.
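
    The four-argument overload of copyFromLocalFile makes the source-deletion and overwrite behavior explicit. A minimal sketch using the standard FileSystem API:

    // delSrc=false keeps the local file; overwrite=true replaces any existing
    // copy at the destination instead of failing.
    Path src = new Path("D:\\push.txt");
    Path dst = new Path("/user/root/2019/");
    fs.copyFromLocalFile(false, true, src, dst);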

    Run the seventh example; the console logs:

    2019-08-30 23:43:21 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] append to file ok.

    The HDFS file now contains the appended line.

    Run the eighth example; the console logs:

    2019-08-30 23:45:14 [INFO ]  [main]  [com.xxx.hadoop.hdfs.HDFSMain] path 2019 delete successfully : true.

    The directory is gone after refreshing the DFS Locations tree.
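
    Note that the second argument to delete controls recursion. With recursive set to false, HDFS refuses to delete a non-empty directory, which makes a useful safety check; a short sketch:

    // With recursive=false, deleting a non-empty directory throws an IOException
    // ("... is non empty") instead of silently removing its contents.
    try {
        fs.delete(new Path("/user/root/2019"), false);
    } catch (IOException e) {
        log.info("refusing to delete non-empty directory: " + e.getMessage());
    }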

        Note that the append example requires setting the property dfs.client.block.write.replace-datanode-on-failure.policy to NEVER. Without it, running the example throws an exception whose message includes: because lease recovery is in progress.
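
    For reference, these are the two append-related settings from the main() method above; the first enables append support and the second relaxes the pipeline-recovery policy, which is needed on small test clusters with few datanodes:

    conf.setBoolean("dfs.support.append", true);
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");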

        Also, all of these examples run in Eclipse on a physical Windows machine while Hadoop is deployed in a virtual machine, so we are effectively issuing commands remotely. After constructing the Configuration we must set fs.defaultFS; the cluster here runs Hadoop 2.7.0, so the value is hdfs://192.168.56.202:9000.

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://192.168.56.202:9000");

    Reposted from: https://blog.csdn.net/feinifi/article/details/100167793

  • Original post: https://www.cnblogs.com/it-deepinmind/p/14292342.html