  • Eclipse remote connection to a Hadoop cluster: no log output in the console

    The Eclipse plugin's "Run on Hadoop" does not actually run the job on the Hadoop cluster nodes
    References:

    http://f.dataguru.cn/thread-250980-1-1.html

    http://f.dataguru.cn/thread-249738-1-1.html
    (Source: Dataguru 炼数成金)

    Three problems (I added the second one myself):

    1. The Eclipse console prints no job logs at runtime.

    2. Jobs launched from Eclipse against the remote Hadoop cluster kept running locally instead; it took me two days to sort out. Make sure passwordless SSH login works between the local machine and the cluster's Master, and of course configure /etc/hostname properly first (a sketch of the key setup follows below).
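    The usual key setup looks roughly like this (a sketch, not from the original post; it assumes the cluster user is hadoop and the master resolves as Master, as elsewhere in this post):

    # on the local machine (192.168.2.51)
    ssh-keygen -t rsa              # accept the defaults, empty passphrase
    ssh-copy-id hadoop@Master      # append the public key to Master's authorized_keys
    ssh hadoop@Master              # should now log in without a password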

     

    Also pay attention to how the connection code is written. When the job really talks to the remote cluster, the console looks like my output below (my remote master's IP is 192.168.2.35, the local machine is 192.168.2.51): the log shows the data being read from HDFS on the master at .35, which proves the connection works. To verify further, temporarily stop Hadoop on the remote master with stop-all.sh, and you get the error below:

    15/06/27 12:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    15/06/27 12:27:16 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
    15/06/27 12:27:16 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
    Exception in thread "main" java.net.ConnectException: Call From One/192.168.2.51 to Master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)

    If you restart Hadoop, delete the generated output directory on HDFS, and run it from Eclipse again, you get the following output, showing the job is indeed reading its input from the remote cluster's HDFS (note, though, the job_local… IDs: the map/reduce work itself still runs in the local LocalJobRunner, which is exactly what problem 3 below is about):

    15/06/27 12:34:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    15/06/27 12:34:18 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
    15/06/27 12:34:18 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
    15/06/27 12:34:18 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
    15/06/27 12:34:19 INFO input.FileInputFormat: Total input paths to process : 2
    15/06/27 12:34:19 INFO mapreduce.JobSubmitter: number of splits:15
    15/06/27 12:34:19 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
    15/06/27 12:34:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1331725738_0001
    15/06/27 12:34:19 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    15/06/27 12:34:19 INFO mapreduce.Job: Running job: job_local1331725738_0001
    15/06/27 12:34:19 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    15/06/27 12:34:19 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    15/06/27 12:34:19 INFO mapred.LocalJobRunner: Waiting for map tasks
    15/06/27 12:34:19 INFO mapred.LocalJobRunner: Starting task: attempt_local1331725738_0001_m_000000_0
    15/06/27 12:34:19 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
    15/06/27 12:34:19 INFO mapred.MapTask: Processing split: hdfs://192.168.2.35:9000/user/hadoop/input/1.txt:0+134217728
    15/06/27 12:34:19 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
    15/06/27 12:34:19 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
    15/06/27 12:34:19 INFO mapred.MapTask: soft limit at 83886080
    15/06/27 12:34:19 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
    15/06/27 12:34:19 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
    15/06/27 12:34:19 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    15/06/27 12:34:20 INFO mapreduce.Job: Job job_local1331725738_0001 running in uber mode : false
    15/06/27 12:34:20 INFO mapreduce.Job:  map 0% reduce 0%
    15/06/27 12:34:21 INFO mapred.MapTask: Spilling map output
    15/06/27 12:34:21 INFO mapred.MapTask: bufstart = 0; bufend = 32177692; bufvoid = 104857600
    15/06/27 12:34:21 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 13287304(53149216); length = 12927093/6553600
    15/06/27 12:34:21 INFO mapred.MapTask: (EQUATOR) 42663446 kvi 10665856(42663424)
    15/06/27 12:34:25 INFO mapred.LocalJobRunner: map > map
    15/06/27 12:34:26 INFO mapred.MapTask: Finished spill 0
    15/06/27 12:34:26 INFO mapred.MapTask: (RESET) equator 42663446 kv 10665856(42663424) kvi 8044428(32177712)
    15/06/27 12:34:26 INFO mapreduce.Job:  map 1% reduce 0%
    15/06/27 12:34:27 INFO mapred.MapTask: Spilling map output
    15/06/27 12:34:27 INFO mapred.MapTask: bufstart = 42663446; bufend = 74841169; bufvoid = 104857600
    15/06/27 12:34:27 INFO mapred.MapTask: kvstart = 10665856(42663424); kvend = 23953172(95812688); length = 12927085/6553600
    15/06/27 12:34:27 INFO mapred.MapTask: (EQUATOR) 85326920 kvi 21331724(85326896)
    15/06/27 12:34:28 INFO mapred.LocalJobRunner: map > map
    15/06/27 12:34:31 INFO mapred.MapTask: Finished spill 1
    15/06/27 12:34:31 INFO mapred.MapTask: (RESET) equator 85326920 kv 21331724(85326896) kvi 18710300(74841200)
    15/06/27 12:34:31 INFO mapred.LocalJobRunner: map > map
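    Independently of the full job, the HDFS connection itself can be sanity-checked with a few lines of client code (a sketch, not from the original post, assuming the same addresses as above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsPing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Same NameNode address as in the WordCount code below.
            conf.set("fs.defaultFS", "hdfs://192.168.2.35:9000/");
            FileSystem fs = FileSystem.get(conf);
            // Throws java.net.ConnectException (as in the log above) if the
            // remote cluster is stopped; otherwise lists the input files.
            for (FileStatus s : fs.listStatus(new Path("/user/hadoop/input"))) {
                System.out.println(s.getPath());
            }
        }
    }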

    The WordCount source; note how the remote connection is set up:

     

    package test;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class WordCount {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the default filesystem at the remote master's HDFS; this
            // prefix is prepended to relative paths such as "input" below,
            // so they resolve to hdfs://192.168.2.35:9000/user/hadoop/input.
            conf.set("fs.defaultFS", "hdfs://192.168.2.35:9000/");
            // Alternatives tried along the way ("Master" can be mapped to the
            // IP in /etc/hosts):
            //conf.set("mapred.job.tracker", "192.168.2.35:9001");
            //conf.set("hadoop.job.user", "hadoop");

            String[] ars = new String[]{"input", "out"};
            String[] otherArgs = new GenericOptionsParser(conf, ars).getRemainingArgs();
            if (otherArgs.length != 2) {
                System.err.println("Usage: wordcount <in> <out>");
                System.exit(2);
            }
            Job job = new Job(conf, "wordcount");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

    3. When running a cluster job from Eclipse (connected to the server's Hadoop through the HDFS plugin), jps on the Hadoop nodes shows no task processes while the program executes (see the sketch at the end of this post for why, and for the settings that address it).

    Solution to the first problem. When run from Eclipse, log4j complains:

    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

    Copy log4j.properties into the project's bin directory, and mind the file permissions after copying:

    cp /usr/hadoop2.5/etc/hadoop/log4j.properties /home/hadoop/workspace/WordCount/bin/
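    If that file is not at hand, a minimal log4j.properties along these lines also works (not from the original post; the pattern matches the console output shown above):

    log4j.rootLogger=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n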

    The second problem: the connection settings and the verification walkthrough are the log sections and WordCount source above. The third problem has the same root cause: with only fs.defaultFS set, the job runs in the local JVM via LocalJobRunner (the job_local… IDs above), so nothing ever starts on the cluster nodes.
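    A hedged sketch of the extra settings that push the job onto the cluster (assumes Hadoop 2.x with YARN; the jar path is hypothetical). In main() of the WordCount above, the configuration section becomes the following; with it, jps on the nodes shows MRAppMaster/YarnChild processes while the job runs:

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://192.168.2.35:9000/");
    // Submit to YARN instead of the default LocalJobRunner.
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.hostname", "192.168.2.35");

    Job job = Job.getInstance(conf, "wordcount");
    // Ship the compiled job jar explicitly; this also silences the
    // "No job jar file set" warning seen in the log above.
    // (Hypothetical path -- point it at your exported jar.)
    job.setJar("/home/hadoop/workspace/WordCount/wordcount.jar");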

  • Original post: https://www.cnblogs.com/canyangfeixue/p/4599767.html