  • hadoop (11) An introduction to HDFS and its common commands

    HDFS Background

    As data volumes grow, a single operating system can no longer hold everything, so the data has to be spread across disks managed by multiple operating systems. That makes administration and maintenance inconvenient, so a system is urgently needed to manage files across many machines: this is a distributed file system.

    The Concept of HDFS

    HDFS (Hadoop Distributed File System) is a distributed file system for storing files; files are located through a directory tree. It is distributed in the sense that many servers jointly provide its functionality, and each server in the cluster has its own role.
    HDFS is designed for write-once, read-many scenarios and does not support modifying files. It is well suited to data analysis.

    Advantages and Disadvantages of HDFS

    Advantages

    1) High fault tolerance
    (1) Data is automatically saved as multiple replicas; adding replicas improves fault tolerance.
    (2) If a replica is lost, it can be recovered automatically.
    2) Suited to big-data processing
    (1) Data scale: it can handle data at the GB, TB, or even PB level.
    (2) File scale: it can handle file counts in the millions, which is a very large number.
    3) Streaming data access, which keeps data consistent.
    4) It can be built on cheap machines; the multi-replica mechanism improves reliability.
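The multi-replica idea behind points 1) and 4) can be sketched in a few lines of Python. This is a toy illustration only, not HDFS's actual placement policy (which is rack-aware); the datanode names dn1…dn4 and block ids blk_1…blk_3 are made up for the example.

```python
# Toy sketch of HDFS-style replication (NOT the real rack-aware policy):
# each block is placed on `replicas` distinct datanodes, so losing any
# single node still leaves live copies of every block.
def place_blocks(block_ids, datanodes, replicas=3):
    placement = {}
    for i, block in enumerate(block_ids):
        # round-robin over the datanodes, taking `replicas` distinct ones
        placement[block] = [datanodes[(i + r) % len(datanodes)]
                            for r in range(replicas)]
    return placement

placement = place_blocks(["blk_1", "blk_2", "blk_3"],
                         ["dn1", "dn2", "dn3", "dn4"])
failed = "dn2"
# Every block still has at least one surviving replica after dn2 fails.
assert all(any(dn != failed for dn in dns) for dns in placement.values())
```

Because each block lives on several nodes, a lost replica can simply be re-copied from a surviving one, which is how HDFS recovers automatically.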

    Disadvantages

    1) Not suited to low-latency data access; millisecond-level access, for example, is not achievable.
    2) Cannot store large numbers of small files efficiently.
    (1) Storing many small files consumes a large amount of Namenode memory for file, directory, and block metadata. This is undesirable because Namenode memory is always limited.
    (2) For small files, seek time exceeds read time, which violates HDFS's design goals.
    3) No support for concurrent writes or random file modification.
    (1) A file can have only one writer at a time; multiple threads cannot write to it simultaneously.
    (2) Only data append is supported; random modification of files is not.
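The single-writer, append-only rule in point 3) can be modeled with a tiny Python class. This is a conceptual sketch of the semantics, not HDFS client code; the class and its methods are invented for illustration.

```python
# Toy model of HDFS's single-writer, append-only semantics:
# at most one open writer per file, and only appends -- no in-place edits.
class AppendOnlyFile:
    def __init__(self):
        self.data = b""
        self.writer_open = False

    def open_for_append(self):
        if self.writer_open:
            # HDFS allows only one writer per file at a time
            raise IOError("file already open for write")
        self.writer_open = True

    def append(self, chunk: bytes):
        self.data += chunk  # appends only; there is no seek-and-overwrite

    def close(self):
        self.writer_open = False

f = AppendOnlyFile()
f.open_for_append()
f.append(b"hello ")
f.append(b"world")
f.close()
```

Note there is deliberately no method for writing at an arbitrary offset: that mirrors the design restriction, not a missing feature of the sketch.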

    Architecture


    This architecture consists of three main components: the Namenode, the Datanodes, and the SecondaryNamenode.

    Namenode:

    Acts as the Master: it is the supervisor and manager.
    (1) Manages the HDFS namespace;
    (2) Manages block mapping information;
    (3) Configures the replication policy;
    (4) Handles client read and write requests.
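The two kinds of metadata listed above can be pictured as two small maps. This is a minimal sketch of the idea, not the Namenode's actual data structures; the paths, block ids, and datanode names are hypothetical.

```python
# Toy sketch of Namenode metadata:
#  - the namespace maps a file path to its ordered list of block ids
#  - the block map records which datanodes hold a replica of each block
namespace = {"/shaozhiqi/temp/test.txt": ["blk_1001", "blk_1002"]}
block_map = {"blk_1001": ["dn1", "dn2", "dn3"],
             "blk_1002": ["dn2", "dn3", "dn4"]}

def locate(path):
    """Answer a client read request: which datanodes hold each block?"""
    return [(blk, block_map[blk]) for blk in namespace[path]]
```

Serving `locate`-style lookups from memory is why the Namenode handles client requests quickly, and also why metadata for millions of small files exhausts its memory.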

    Datanode:

    The Slave: the Namenode issues commands, and the Datanodes carry out the actual operations.
    (1) Store the actual data blocks;
    (2) Perform block read/write operations.

    Secondary Namenode:

    Not a hot standby for the Namenode: when the Namenode goes down, it cannot immediately take over and serve requests.
    (1) Assists the Namenode and shares part of its workload;
    (2) Periodically merges the Fsimage and Edits files and pushes the result to the Namenode;
    (3) In an emergency, it can help recover the Namenode.
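The Fsimage/Edits merge in point (2) can be sketched as replaying an operation log against a snapshot. This is a conceptual toy, not the real checkpoint protocol; the operation names and paths are invented for the example.

```python
# Toy sketch of the checkpoint the SecondaryNamenode performs:
# replay the edit log against the last fsimage to produce a fresh fsimage.
def checkpoint(fsimage, edits):
    merged = dict(fsimage)  # start from the last saved snapshot
    for op, path in edits:
        if op == "mkdir":
            merged[path] = "dir"
        elif op == "create":
            merged[path] = "file"
        elif op == "delete":
            merged.pop(path, None)
    return merged  # becomes the new fsimage; the edit log can then shrink

fsimage = {"/": "dir"}
edits = [("mkdir", "/shaozhiqi"),
         ("create", "/shaozhiqi/test.txt"),
         ("delete", "/shaozhiqi/test.txt")]
new_fsimage = checkpoint(fsimage, edits)
```

Doing this merge off the Namenode keeps the edit log from growing without bound, which is the "shares part of its workload" point above.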

    HDFS Block Size

    Files in HDFS are physically stored in blocks. The block size is set by the configuration parameter dfs.blocksize; the default is 128 MB in Hadoop 2.x, and was 64 MB in older versions.
    Question: why should the block size be neither too small nor too large?
    HDFS blocks are larger than disk blocks in order to minimize seek overhead. If the block is large enough, the time spent transferring data from disk clearly dominates the time spent locating the start of the block, so the time to transfer a file made of many blocks is governed by the disk transfer rate. If seek time is about 10 ms and the transfer rate is 100 MB/s, then for seeks to take only 1% of transfer time the block size should be about 10 ms × 100 MB/s ÷ 1% = 100 MB. The default block size is 128 MB, as shown in Figure 3-2.
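The 1%-rule arithmetic above works out as follows (the seek time and transfer rate are the rough figures assumed in the text, not measured values):

```python
# Pick a block size large enough that a ~10 ms seek is only ~1%
# of the time spent transferring the block.
seek_time_s = 0.010          # ~10 ms to locate the start of a block
transfer_rate_mb_s = 100     # ~100 MB/s sequential disk transfer
target_seek_fraction = 0.01  # seek should be ~1% of total time

transfer_time_s = seek_time_s / target_seek_fraction  # 1 second
block_size_mb = transfer_rate_mb_s * transfer_time_s  # 100 MB
# The 128 MB default is the nearby power of two above this figure.
```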

    Hadoop Command-Line Operations

    [shaozhiqi@hadoop102 ~]$ cd /opt/module/
    [shaozhiqi@hadoop102 module]$ cd hadoop-3.1.2/
    [shaozhiqi@hadoop102 hadoop-3.1.2]$ ls
    bin include lib LICENSE.txt output sbin wcinput
    etc input libexec NOTICE.txt README.txt share wcoutput

    View the help

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop --help
     Client Commands:
    checknative check native Hadoop and compression libraries availability
    classpath prints the class path needed to get the Hadoop jar and the required libraries
    conftest validate configuration XML files
    credential interact with credential providers
    dtutil   operations related to delegation tokens
    envvars display computed Hadoop environment variables
    fs run a generic filesystem user client
    jar <jar> run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this command.
    jnipath prints the java.library.path
    kdiag Diagnose Kerberos Problems
    kerbname show auth_to_local principal conversion
    key manage keys via the KeyProvider
    trace view and modify Hadoop tracing settings
    version print the version

    List the available fs subcommands

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs
    Usage: hadoop fs [generic options]
     [-appendToFile <localsrc> ... <dst>]
     [-cat [-ignoreCrc] <src> ...]
     [-checksum <src> ...]
     [-chgrp [-R] GROUP PATH...]
     [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
     [-chown [-R] [OWNER][:[GROUP]] PATH...]
     [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
     [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
     [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
     [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
     [-createSnapshot <snapshotDir> [<snapshotName>]]
     [-deleteSnapshot <snapshotDir> <snapshotName>]
     [-df [-h] [<path> ...]]
     [-du [-s] [-h] [-v] [-x] <path> ...]
     [-expunge]
     [-find <path> ... <expression> ...]
     [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
     [-getfacl [-R] <path>]
     [-getfattr [-R] {-n name | -d} [-e en] <path>]
     [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
     [-head <file>]
     [-help [cmd ...]]
     [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
     [-mkdir [-p] <path> ...]
     [-moveFromLocal <localsrc> ... <dst>]
     [-moveToLocal <src> <localdst>]
     [-mv <src> ... <dst>]
     [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
     [-renameSnapshot <snapshotDir> <oldName> <newName>]
     [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
     [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
     [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
     [-setfattr {-n name [-v value] | -x name} <path>]
     [-setrep [-R] [-w] <rep> <path> ...]
     [-stat [format] <path> ...]
     [-tail [-f] <file>]
     [-test -[defsz] <path>]
     [-text [-ignoreCrc] <src> ...]
     [-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
     [-touchz <path> ...]
     [-truncate [-w] <length> <path> ...]
     [-usage [cmd ...]]
    Generic options supported are:
    -conf <configuration file> specify an application configuration file
    -D <property=value> define a value for a given property
    -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
    -jt <local|resourcemanager:port> specify a ResourceManager
    -files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster
    -libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath
    -archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines
    The general command line syntax is:
    command [genericOptions] [commandOptions]

    List the HDFS directory with -ls. It fails, so we need to start our Hadoop cluster first

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -ls
    ls: Call From hadoop102/192.168.1.102 to hadoop102:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    [shaozhiqi@hadoop102 hadoop-3.1.2]$

    List the directory again with -ls

    This is a freshly started cluster, with no files uploaded and no directories created yet

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -ls
    ls: `.': No such file or directory
    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -ls /
    [shaozhiqi@hadoop102 hadoop-3.1.2]$

    Create a directory with -mkdir

    -p means create recursively; use it for multi-level paths

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -mkdir -p /shaozhiqi/temp
    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -ls /
    Found 1 items
    drwxr-xr-x - shaozhiqi supergroup 0 2019-06-29 19:40 /shaozhiqi
    [shaozhiqi@hadoop102 hadoop-3.1.2]$

    Move a local file into HDFS with -moveFromLocal

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ vim test.txt
    [shaozhiqi@hadoop102 hadoop-3.1.2]$
    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -moveFromLocal test.txt /shaozhiqi/temp
    [shaozhiqi@hadoop102 hadoop-3.1.2]$

    Check that the upload succeeded

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -ls -r /shaozhiqi/temp
    Found 1 items
    -rw-r--r-- 3 shaozhiqi supergroup 18 2019-06-29 19:48 /shaozhiqi/temp/test.txt
    [shaozhiqi@hadoop102 hadoop-3.1.2]$

    Append to the end of test.txt with -appendToFile

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -appendToFile test2.txt /shaozhiqi/temp/test.txt

    View file contents with -cat

    [shaozhiqi@hadoop102 hadoop-3.1.2]$ hadoop fs -cat /shaozhiqi/temp/test.txt
    tetete
    sdfd
    
    ddd
    [shaozhiqi@hadoop102 hadoop-3.1.2]$

    Other common commands, to be filled in later

    -tail: display the end of a file
    -chgrp, -chmod, -chown: change a file's group, permissions, and owner, just as in Linux
    -copyFromLocal: copy a local file to HDFS
    -copyToLocal: copy from HDFS to the local filesystem
    -cp: copy from one HDFS path to another HDFS path
    -mv: move files within HDFS
    -get: equivalent to -copyToLocal
    -getmerge: merge and download multiple files

  • Original source: https://www.cnblogs.com/shaozhiqi/p/11534875.html