1. Install eclipse-jee-kepler-SR2-linux-gtk.tar.gz on Linux and create a desktop shortcut for it.
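One way to create the shortcut, assuming Eclipse was extracted to /usr/local/eclipse (the install path and icon path below are assumptions), is a freedesktop .desktop entry saved as /usr/share/applications/eclipse.desktop:
[Desktop Entry]
Name=Eclipse
Comment=Eclipse IDE for Java EE Developers
Exec=/usr/local/eclipse/eclipse
Icon=/usr/local/eclipse/icon.xpm
Type=Application
Terminal=false
Categories=Development;IDE;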
2. Extract m2.tar.gz to /root/.
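For example, assuming m2.tar.gz contains a pre-populated .m2 local Maven repository directory:
tar -zxvf m2.tar.gz -C /root/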
3. Add the following to the <dependencies> section of the Maven project's pom.xml: the Hadoop libraries, plus the JDK tools jar.
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <!-- tools.jar is not in any public repository, so reference it from the local JDK.
         Note: Maven resolves environment variables only through the env. prefix. -->
    <groupId>jdk.tools</groupId>
    <artifactId>jdk.tools</artifactId>
    <version>1.7</version>
    <scope>system</scope>
    <systemPath>${env.JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>
4. Write DataCount. We implement the two phases of the job: the Mapper reads each input record, extracts the useful fields (phone number, upstream and downstream traffic), and writes them into the output stream as a DataBean; the Reducer then aggregates them per phone number.
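For reference, the imports the complete DataCount class needs (Hadoop 2.x new API):
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;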
Map phase: 1. receive the data; 2. pass the data on.
public static class DCMapper extends Mapper<LongWritable, Text, Text, DataBean> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. Receive the data: one line of the input file per call
        String line = value.toString();
        String[] fields = line.split(" ");
        String telNo = fields[1];            // phone number
        long up = Long.parseLong(fields[8]); // upstream traffic
        long down = Long.parseLong(fields[9]); // downstream traffic
        // 2. Pass the data on, keyed by phone number
        DataBean bean = new DataBean(telNo, up, down);
        context.write(new Text(telNo), bean);
    }
}
Reduce phase: sum the upstream and downstream traffic for each phone number.
public static class DCReducer extends Reducer<Text, DataBean, Text, DataBean> {

    @Override
    protected void reduce(Text key, Iterable<DataBean> v2s, Context context)
            throws IOException, InterruptedException {
        long up_sum = 0;
        long down_sum = 0;
        // Accumulate the traffic of every record for this phone number
        for (DataBean bean : v2s) {
            up_sum += bean.getUpPayLoad();
            down_sum += bean.getDownPayLoad();
        }
        DataBean bean = new DataBean("", up_sum, down_sum);
        context.write(key, bean);
    }
}
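The DataBean class used above is not shown in these notes. Because it is shuffled between the map and reduce phases, it must implement Hadoop's Writable interface. A minimal sketch, with field names inferred from the constructor and getter calls above and the toString format an assumption:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class DataBean implements Writable {
    private String telNo;
    private long upPayLoad;
    private long downPayLoad;

    // Hadoop requires a no-arg constructor to deserialize instances
    public DataBean() {}

    public DataBean(String telNo, long upPayLoad, long downPayLoad) {
        this.telNo = telNo;
        this.upPayLoad = upPayLoad;
        this.downPayLoad = downPayLoad;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // serialization order must match readFields
        out.writeUTF(telNo);
        out.writeLong(upPayLoad);
        out.writeLong(downPayLoad);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.telNo = in.readUTF();
        this.upPayLoad = in.readLong();
        this.downPayLoad = in.readLong();
    }

    public long getUpPayLoad() { return upPayLoad; }
    public long getDownPayLoad() { return downPayLoad; }

    @Override
    public String toString() {
        // determines how each record appears in the job's output files
        return upPayLoad + "\t" + downPayLoad + "\t" + (upPayLoad + downPayLoad);
    }
}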
5. The main method: configure the job and feed it the input and output paths.
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf);
    job.setJarByClass(DataCount.class);

    job.setMapperClass(DCMapper.class);
    // The map output types (k2/v2) are the same as the final output types (k3/v3),
    // so these two calls can be omitted:
    // job.setMapOutputKeyClass(Text.class);
    // job.setMapOutputValueClass(DataBean.class);
    FileInputFormat.setInputPaths(job, new Path(args[0]));

    job.setReducerClass(DCReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DataBean.class);
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.waitForCompletion(true);
}
6. Package the program as a jar (here /root/examples.jar), then upload the input data file to HDFS: hadoop fs -put HTTP_20130313143750.dat /data.doc
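If the jar is built with Maven rather than exported from Eclipse, a sketch of the packaging step, run from the project root (the jar lands in target/):
mvn clean package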
7. Run the Hadoop job: hadoop jar /root/examples.jar cn.itcast.hadoop.mr.dc.DataCount /data.doc /dataout
Note: if this step fails with errors, check whether the YARN daemons are running; if YARN has not been started, start it.
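A quick way to start YARN and verify it is up, assuming $HADOOP_HOME points at the Hadoop 2.2.0 installation:
$HADOOP_HOME/sbin/start-yarn.sh
jps    # should now list ResourceManager and NodeManager alongside the HDFS daemons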