  • Failed to rename HdfsNamedFileStatus

    Error message:

    21/03/29 14:05:08 WARN TaskSetManager: Lost task 53.0 in stage 6.0 (TID 580, test-bdp06, executor 7): org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.io.IOException: Failed to rename HdfsNamedFileStatus{path=hdfs://mycluster/data/autohome-log/1614524400000.tmp/_temporary/0/_temporary/attempt_20210329140504_0016_m_000053_0/5; isDirectory=false; length=2859909; replication=3; blocksize=134217728; modification_time=1616997908781; access_time=1616997908596; owner=root; group=hdfs; permission=rw-r--r--; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} to hdfs://mycluster/data/autohome-log/1614524400000.tmp/5
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:473)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:486)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:597)
        at org.apache.hadoop.mapred.FileOutputCommitter.commitTask(FileOutputCommitter.java:172)
        at org.apache.hadoop.mapred.OutputCommitter.commitTask(OutputCommitter.java:343)
        at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
        at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
        at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:138)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more

    Buggy code:

    // Key each log line by its second pipe-delimited field (the domain).
    // BUG: split() takes a regex, and an unescaped "|" does not match a literal pipe.
    JavaPairRDD<String, String> domainLogRdd = logRdd.mapToPair(log ->
            new Tuple2<>(log.split("|")[1], log));

    // Concatenate all lines sharing a domain, separated by newlines.
    JavaPairRDD<String, String> domainRdd =
            domainLogRdd.reduceByKey((log1, log2) -> log1.concat("\n").concat(log2));

    domainLogRdd.saveAsHadoopFile(path, String.class, String.class, RDDMultipleTextOutputFormat.class);

    Solution:

    The delimiter was not escaped. String.split() interprets its argument as a regular expression, so changing log.split("|") to log.split("\\|") makes the pipe match literally and resolves the error.
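    A minimal sketch of the corrected mapping, reusing logRdd from the excerpt above (index 1 as the domain field is carried over from the original code):

    // Escaping the pipe makes split() treat it as a literal delimiter
    // rather than the regex alternation operator.
    JavaPairRDD<String, String> domainLogRdd = logRdd.mapToPair(log ->
            new Tuple2<>(log.split("\\|")[1], log));

    // An equivalent, more explicit alternative is java.util.regex.Pattern.quote:
    // log.split(Pattern.quote("|"))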

    Cause of the error:

    In Java, String.split() interprets its argument as a regular expression. An unescaped "|" is the alternation metacharacter and matches the empty string, so log.split("|") breaks the line between every character and log.split("|")[1] returns the line's second character rather than the domain field. Because RDDMultipleTextOutputFormat names output files by key, these degraded single-character keys lead many tasks to write files with the same name, which is the likely source of the rename collision in FileOutputCommitter.mergePaths shown above.
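    A self-contained demonstration of the two split behaviors (the sample log line is hypothetical):

    import java.util.Arrays;

    public class SplitDemo {
        public static void main(String[] args) {
            String log = "2021-03-29|www.autohome.com.cn|GET /index";

            // "|" is regex alternation and matches the empty string,
            // so the line is split between every character:
            System.out.println(Arrays.toString(log.split("|")));
            // [2, 0, 2, 1, -, 0, 3, -, 2, 9, |, w, w, w, ...]

            // "\\|" escapes the pipe and splits on the actual delimiter:
            System.out.println(Arrays.toString(log.split("\\|")));
            // [2021-03-29, www.autohome.com.cn, GET /index]
        }
    }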
