  • Hadoop 2.x installation: testing the installation result

    Note: this procedure applies only to Hadoop 2.x.

    Even if jps lists the expected processes after installation, that alone does not prove the installation really works, so we verify it by running an actual job. Make sure the firewall is turned off and the Hadoop cluster is started (a quick sketch of that is given below), then run the simple test that follows:
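
    Before running the test the cluster has to be up. Here is a minimal sketch of bringing it up and checking the daemons; the paths assume the same hadoop/hadoop-2.7.2 layout used in the session below, and the firewall command assumes CentOS 6, so adjust both to your environment.

    ## as root: stop the firewall (CentOS 6 style; use the equivalent on your distribution)
    service iptables stop
    ## start HDFS and YARN
    ~/hadoop/hadoop-2.7.2/sbin/start-dfs.sh
    ~/hadoop/hadoop-2.7.2/sbin/start-yarn.sh
    ## on the master, jps should list NameNode, SecondaryNameNode and ResourceManager;
    ## on the slaves, DataNode and NodeManager
    jps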

    [grid2@tiny1 ~]$ ## Create two input files
    [grid2@tiny1 ~]$ cd ~
    [grid2@tiny1 ~]$ mkdir input
    [grid2@tiny1 ~]$ cd input
    [grid2@tiny1 input]$ echo "hello world" >test1.txt
    [grid2@tiny1 input]$ echo "hello hadoop" >test2.txt
    [grid2@tiny1 input]$ cat test1.txt
    hello world
    [grid2@tiny1 input]$ cat test2.txt
    hello hadoop
    
    [grid2@tiny1 input]$ ## Copy the two files into HDFS
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -ls /
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -put ~/input /in
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -ls /
    Found 1 items
    drwxr-xr-x   - grid2 supergroup          0 2017-03-13 05:24 /in
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -ls /in/*
    Found 2 items
    -rw-r--r--   1 grid supergroup         12 2017-06-21 20:28 /in/test1.txt
    -rw-r--r--   1 grid supergroup         13 2017-06-21 20:28 /in/test2.txt
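
    If you want to double-check that the upload arrived intact before running a job, hadoop fs -cat prints the files straight from HDFS. The two commands below are a suggestion and were not part of the original session:

    hadoop/hadoop-2.7.2/bin/hadoop fs -cat /in/test1.txt   # expect "hello world"
    hadoop/hadoop-2.7.2/bin/hadoop fs -cat /in/test2.txt   # expect "hello hadoop"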
    
    [grid2@tiny1 ~]$ ## Run the MapReduce wordcount example
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop jar hadoop/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /in /out
    17/03/13 05:54:10 INFO client.RMProxy: Connecting to ResourceManager at tiny1/192.168.132.101:8032
    17/03/13 05:54:16 INFO input.FileInputFormat: Total input paths to process : 2
    17/03/13 05:54:16 INFO mapreduce.JobSubmitter: number of splits:2
    17/03/13 05:54:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489354711598_0002
    17/03/13 05:54:17 INFO impl.YarnClientImpl: Submitted application application_1489354711598_0002
    17/03/13 05:54:18 INFO mapreduce.Job: The url to track the job: http://tiny1:8088/proxy/application_1489354711598_0002/
    17/03/13 05:54:18 INFO mapreduce.Job: Running job: job_1489354711598_0002
    17/03/13 05:54:34 INFO mapreduce.Job: Job job_1489354711598_0002 running in uber mode : false
    17/03/13 05:54:34 INFO mapreduce.Job:  map 0% reduce 0%
    17/03/13 05:55:55 INFO mapreduce.Job:  map 100% reduce 0%
    17/03/13 05:56:46 INFO mapreduce.Job:  map 100% reduce 100%
    17/03/13 05:56:49 INFO mapreduce.Job: Job job_1489354711598_0002 completed successfully
    17/03/13 05:56:49 INFO mapreduce.Job: Counters: 49
            File System Counters
                    FILE: Number of bytes read=55
                    FILE: Number of bytes written=353368
                    FILE: Number of read operations=0
                    FILE: Number of large read operations=0
                    FILE: Number of write operations=0
                    HDFS: Number of bytes read=215
                    HDFS: Number of bytes written=25
                    HDFS: Number of read operations=9
                    HDFS: Number of large read operations=0
                    HDFS: Number of write operations=2
            Job Counters
                    Launched map tasks=2
                    Launched reduce tasks=1
                    Data-local map tasks=2
                    Total time spent by all maps in occupied slots (ms)=176701
                    Total time spent by all reduces in occupied slots (ms)=32951
                    Total time spent by all map tasks (ms)=176701
                    Total time spent by all reduce tasks (ms)=32951
                    Total vcore-milliseconds taken by all map tasks=176701
                    Total vcore-milliseconds taken by all reduce tasks=32951
                    Total megabyte-milliseconds taken by all map tasks=180941824
                    Total megabyte-milliseconds taken by all reduce tasks=33741824
            Map-Reduce Framework
                    Map input records=2
                    Map output records=4
                    Map output bytes=41
                    Map output materialized bytes=61
                    Input split bytes=190
                    Combine input records=4
                    Combine output records=4
                    Reduce input groups=3
                    Reduce shuffle bytes=61
                    Reduce input records=4
                    Reduce output records=3
                    Spilled Records=8
                    Shuffled Maps =2
                    Failed Shuffles=0
                    Merged Map outputs=2
                    GC time elapsed (ms)=4814
                    CPU time spent (ms)=5120
                    Physical memory (bytes) snapshot=324501504
                    Virtual memory (bytes) snapshot=6172119040
                    Total committed heap usage (bytes)=253939712
            Shuffle Errors
                    BAD_ID=0
                    CONNECTION=0
                    IO_ERROR=0
                    WRONG_LENGTH=0
                    WRONG_MAP=0
                    WRONG_REDUCE=0
            File Input Format Counters
                    Bytes Read=25
            File Output Format Counters
                    Bytes Written=25
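
    While the job runs, its progress can be followed in the ResourceManager web UI at the tracking URL printed above (http://tiny1:8088/...), or from the command line. The two yarn commands below are a suggestion, not part of the original session; the application ID is the one reported at submission time:

    ## list YARN applications known to the ResourceManager
    hadoop/hadoop-2.7.2/bin/yarn application -list
    ## show state, progress and tracking URL of this particular job
    hadoop/hadoop-2.7.2/bin/yarn application -status application_1489354711598_0002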
                    
    [grid2@tiny1 ~]$ ## View the output
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -ls /
    Found 3 items
    drwxr-xr-x   - grid2 supergroup          0 2017-03-13 05:51 /in
    drwxr-xr-x   - grid2 supergroup          0 2017-03-13 05:56 /out
    drwx------   - grid2 supergroup          0 2017-03-13 05:54 /tmp
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -ls /out/*
    -rw-r--r--   1 grid2 supergroup          0 2017-03-13 05:56 /out/_SUCCESS
    -rw-r--r--   1 grid2 supergroup         25 2017-03-13 05:56 /out/part-r-00000
    [grid2@tiny1 ~]$ hadoop/hadoop-2.7.2/bin/hadoop fs -cat /out/part-r-00000
    hadoop  1
    hello   2
    world   1
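
    As a quick sanity check, the same counts can be reproduced locally from the input files with standard shell tools (a suggestion, not part of the original session):

    ## split on spaces, sort, and count duplicates: 1 hadoop, 2 hello, 1 world
    cat ~/input/*.txt | tr ' ' '\n' | sort | uniq -c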
    

    The test succeeded.
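
    Two follow-up tips, using standard HDFS shell commands (again only a sketch; the local file name is just an example): wordcount will refuse to start if the output directory already exists, so remove /out before rerunning, and the result can be copied back to the local filesystem with -get.

    ## delete the old output directory before rerunning the example
    hadoop/hadoop-2.7.2/bin/hadoop fs -rm -r /out
    ## copy the result file from HDFS to a local file (example name)
    hadoop/hadoop-2.7.2/bin/hadoop fs -get /out/part-r-00000 ~/wordcount-result.txt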

  • Original post: https://www.cnblogs.com/erygreat/p/7224056.html