Prerequisite: install Hadoop first. Installation steps: https://www.jianshu.com/p/2ce9775aeb6e
Official single-node setup documentation: http://hadoop.apache.org/docs/r3.1.2/

Single-node examples

The documentation describes three modes: local (standalone), pseudo-distributed, and fully distributed.

Example 1
Local (standalone) operation: prepare the data source. Here the input directory holds the data to process; do not create the output directory in advance, or the job will fail because it already exists.
Run the bundled examples jar: it greps input against a regular expression and writes the matching data to output.
Regex: matches strings that start with dfs, followed by one or more lowercase letters or dots
$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar grep input output 'dfs[a-z.]+'

View the files generated by the job:

$ cat output/*
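To sanity-check the regex locally before running the Hadoop job, the same pattern can be tried with plain grep (this is just a local sketch on made-up sample lines, not the Hadoop grep example itself; -o prints only the matched portion):

```shell
# Sample lines: the first two contain a lowercase "dfs" run, the last two do not
printf 'dfs.replication\ndfsadmin\nDFSClient\nnothing here\n' \
  | grep -oE 'dfs[a-z.]+'
# → dfs.replication
#   dfsadmin
```

Note that "DFSClient" does not match: the regex only matches lowercase dfs followed by lowercase letters or dots.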

Example 2
WordCount example
Data source content:
hadoop yarn
hadoop mapreduce
shaozhiqi
shaozhiqi
Goal: count the occurrences of each word
[shaozhiqi@hadoop101 hadoop-3.1.2]$ mkdir wcinput
[shaozhiqi@hadoop101 hadoop-3.1.2]$ ls
bin etc include input lib libexec LICENSE.txt NOTICE.txt output README.txt sbin share wcinput
[shaozhiqi@hadoop101 hadoop-3.1.2]$ cd wcinput/
[shaozhiqi@hadoop101 wcinput]$ vim wc.input    # paste in the data source content above
[shaozhiqi@hadoop101 wcinput]$ cd ..
[shaozhiqi@hadoop101 hadoop-3.1.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar wordcount wcinput/ wcoutput
Result (view it with cat wcoutput/*):
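The expected counts can be sanity-checked with standard shell tools. This is a local sketch that mimics what the wordcount job computes, not MapReduce itself:

```shell
# Recreate the sample data source
printf 'hadoop yarn\nhadoop mapreduce\nshaozhiqi\nshaozhiqi\n' > wc.input
# Split lines into words, sort, and count duplicates — the same
# word/count pairs the wordcount job produces
tr ' ' '\n' < wc.input | sort | uniq -c | awk '{print $2, $1}'
# → hadoop 2
#   mapreduce 1
#   shaozhiqi 2
#   yarn 1
```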
