DataNode fails to start when starting the Hadoop cluster

    The error log is as follows:

    ************************************************************/
    2018-03-07 18:57:35,121 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
    2018-03-07 18:57:35,296 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/java/data/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2018-03-07 18:57:36,059 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2018-03-07 18:57:36,153 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2018-03-07 18:57:36,153 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
    2018-03-07 18:57:36,155 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is Centpy
    2018-03-07 18:57:36,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
    2018-03-07 18:57:36,213 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
    2018-03-07 18:57:36,264 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    2018-03-07 18:57:36,296 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2018-03-07 18:57:36,298 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
    2018-03-07 18:57:36,298 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
    2018-03-07 18:57:36,298 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    2018-03-07 18:57:36,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
    2018-03-07 18:57:36,302 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
    2018-03-07 18:57:36,302 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
    2018-03-07 18:57:36,302 INFO org.mortbay.log: jetty-6.1.26
    2018-03-07 18:57:36,592 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
    2018-03-07 18:57:36,942 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
    2018-03-07 18:57:36,962 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
    2018-03-07 18:57:36,971 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
    2018-03-07 18:57:36,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
    2018-03-07 18:57:36,988 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/java/data/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
    2018-03-07 18:57:36,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (storage id unknown) service to Centpy/192.168.86.127:9000 starting to offer service
    2018-03-07 18:57:37,000 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2018-03-07 18:57:37,003 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
    2018-03-07 18:57:37,305 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/java/data/dfs/data/in_use.lock acquired by nodename 7541@Centpy
    2018-03-07 18:57:37,306 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-7509759-192.168.86.127-1520419708605 (storage id ) service to Centpy/192.168.86.127:9000
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/java/data/dfs/data is in an inconsistent state: node type is incompatible with others.
        at org.apache.hadoop.hdfs.server.common.Storage.setStorageType(Storage.java:1051)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:304)
        at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:372)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
        at java.lang.Thread.run(Thread.java:745)
    2018-03-07 18:57:37,307 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-7509759-192.168.86.127-1520419708605 (storage id ) service to Centpy/192.168.86.127:9000
    2018-03-07 18:57:37,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-7509759-192.168.86.127-1520419708605 (storage id )
    2018-03-07 18:57:39,313 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
    2018-03-07 18:57:39,315 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
    2018-03-07 18:57:39,322 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at Centpy/192.168.86.127
    ************************************************************/

    The log shows that the DataNode's clusterID does not match the NameNode's clusterID. This typically happens after the NameNode has been reformatted: formatting generates a new clusterID for the NameNode, while the DataNode's storage directory still carries the old one.
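
    To confirm the mismatch, compare the clusterID recorded on each side. A minimal check, assuming the storage layout from the log above (adjust the paths if your configured name and data directories differ):

    grep clusterID /usr/java/data/dfs/name/current/VERSION
    grep clusterID /usr/java/data/dfs/data/current/VERSION

    If the two values differ, the DataNode will keep refusing to register with the NameNode.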

    Solution: go to the storage path shown in the log (/usr/java/data/dfs/data). Its parent directory, /usr/java/data/dfs, should contain two subdirectories, data and name. Copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old value so the two match. Alternatively, delete the contents of both the data and name directories and restart; note that after wiping the name directory the NameNode must be reformatted, which destroys all data stored in HDFS.
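
    A sketch of the first option, assuming the same paths as above (back up both VERSION files first, and run as the user that owns the Hadoop storage directories):

    # Read the NameNode's clusterID from its VERSION file
    NN_CID=$(grep '^clusterID=' /usr/java/data/dfs/name/current/VERSION | cut -d= -f2)
    # Overwrite the DataNode's clusterID in place so the two sides match
    sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /usr/java/data/dfs/data/current/VERSION

    # Second option: wipe both storage directories and reformat.
    # WARNING: this destroys all data stored in HDFS.
    # rm -rf /usr/java/data/dfs/name/* /usr/java/data/dfs/data/*
    # bin/hdfs namenode -format   (bin/hadoop namenode -format on older releases)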

    Note: before restarting, be sure to stop all Hadoop processes first (sbin/stop-all.sh), or other errors will follow; see the sketch below.
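
    Putting it together, the restart sequence looks roughly like this (run from the Hadoop installation directory):

    sbin/stop-all.sh     # stop every Hadoop daemon first
    # ...apply one of the two fixes above...
    sbin/start-all.sh    # bring HDFS and YARN back up
    jps                  # verify the daemons are running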

    Now, after restarting, running jps shows the following processes:

    8862 NameNode
    8982 DataNode
    9131 SecondaryNameNode
    9506 Jps
    9374 NodeManager
    9269 ResourceManager

     That is the main content of this section. It comes from my own learning process, and I hope it can offer some guidance. If it helped you, please leave a vote of support; if not, I hope you will bear with me, and please point out any mistakes. Follow me to get updates as soon as they are published. Thanks!

     

     Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission.
