hadoop error: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    19/06/14 10:44:58 WARN common.Util: Path /opt/hadoopdata/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    19/06/14 10:44:58 WARN common.Util: Path /opt/hadoopdata/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    19/06/14 10:44:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    1 Fixing the NativeCodeLoader warning

    WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    Enable DEBUG logging and check again:

    [root@hadoop1 conf]# sed -i '$a export  HADOOP_ROOT_LOGGER=DEBUG,console' /etc/profile
    [root@hadoop1 conf]# source /etc/profile
    [hadoop@hadoop1 sbin]$ hadoop fs -ls /
    19/06/14 11:04:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Found 2 items
    drwxrwx---   - hadoop supergroup          0 2019-05-31 16:26 /tmp
    drwxr-xr-x   - hadoop supergroup          0 2019-05-31 16:20 /user
    ## check whether the native library matches the system architecture
    [root@hadoop1 native]# file /opt/hadoop/lib/native/libhadoop.so.1.0.0
    libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
    [hadoop@hadoop2 sbin]$ uname -i
    x86_64
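    The architecture comparison above can be wrapped in a small helper. This is a sketch; `arch_matches` is a hypothetical name introduced here, and it assumes GNU `file`/`uname` output conventions (`uname -m` prints `x86_64` while `file` prints `x86-64`, hence the `tr`):

```shell
# Sketch: does the ELF arch in `file` output match the kernel arch?
# arch_matches is a hypothetical helper, not part of Hadoop.
arch_matches() {  # $1 = one line of `file` output for the library
  printf '%s\n' "$1" | grep -q "$(uname -m | tr _ -)"
}
# usage: arch_matches "$(file /opt/hadoop/lib/native/libhadoop.so.1.0.0)" && echo "arch OK"
```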

    Even with DEBUG enabled, running the Hadoop command still showed no detailed log output.

    At first I tried the steps commonly suggested online:

    vim /opt/hadoop/etc/hadoop/hadoop-env.sh   # add the exports below here as well
    vim /etc/profile
    export HADOOP_HOME=/opt/hadoop/
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
    source /etc/profile
    vim ~/.bashrc                              # same exports here
    source ~/.bashrc
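    A quick way to confirm these exports actually landed in the current shell (a sketch; `check_env` is a hypothetical helper, and the variable names are the ones set above):

```shell
# Fail fast if HADOOP_HOME is unset/empty, then show the derived native dir.
check_env() {
  : "${HADOOP_HOME:?HADOOP_HOME is not set}"
  echo "native dir: ${HADOOP_COMMON_LIB_NATIVE_DIR:-<unset>}"
}
# usage (after `source /etc/profile`): check_env
```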

    After all of that, the warning was still there.

    So I checked which GLIBC versions the system libc provides:

    [root@hadoop1 build]# strings /lib64/libc.so.6 | grep GLIBC
    GLIBC_2.2.5
    GLIBC_2.2.6
    GLIBC_2.3
    GLIBC_2.3.2
    GLIBC_2.3.3
    GLIBC_2.3.4
    GLIBC_2.4
    GLIBC_2.5
    GLIBC_2.6
    GLIBC_2.7
    GLIBC_2.8
    GLIBC_2.9
    GLIBC_2.10
    GLIBC_2.11
    GLIBC_2.12
    GLIBC_PRIVATE

    The listing stops at GLIBC_2.12, so support for 2.14 is likely what's missing. Build and install glibc 2.14:
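    To see exactly which glibc release a binary needs, you can pull the highest versioned symbol reference out of it. A sketch (`max_glibc` is a hypothetical helper; it assumes binutils `strings` and GNU `sort -V` are available):

```shell
# Print the highest GLIBC_x.y symbol version a binary references
# (GLIBC_PRIVATE is excluded by the digit in the grep pattern).
max_glibc() {
  strings "$1" | grep -o 'GLIBC_[0-9][0-9.]*' | sort -uV | tail -n1
}
# usage: max_glibc /opt/hadoop/lib/native/libhadoop.so.1.0.0
```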

    wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
    tar -zxvf glibc-2.14.tar.gz
    cd glibc-2.14 && mkdir build && cd build
    ../configure --prefix=/opt/glibc-2.14
    ## if configure fails with:
    ##   configure: error: in `/opt/glibc-2.14/build':
    ##   configure: error: no acceptable C compiler found in $PATH
    ## install the build tools first:
    yum install -y gcc gcc-c++ make cmake
    make -j4
    make install
    [root@hadoop1 build]# mkdir /opt/glibc-2.14/etc/
    [root@hadoop1 build]# cp /etc/ld.so.c* /opt/glibc-2.14/etc/
    cp: omitting directory `/etc/ld.so.conf.d'
    [root@hadoop1 build]# ln -sf /opt/glibc-2.14/lib/libc-2.14.so /lib64/libc.so.6 
    [root@hadoop1 build]# strings /lib64/libc.so.6 | grep GLIBC
    GLIBC_2.2.5
    GLIBC_2.2.6
    GLIBC_2.3
    GLIBC_2.3.2
    GLIBC_2.3.3
    GLIBC_2.3.4
    GLIBC_2.4
    GLIBC_2.5
    GLIBC_2.6
    GLIBC_2.7
    GLIBC_2.8
    GLIBC_2.9
    GLIBC_2.10
    GLIBC_2.11
    GLIBC_2.12
    GLIBC_2.13
    GLIBC_2.14
    GLIBC_PRIVATE
    [hadoop@hadoop1 sbin]$ hadoop fs -ls /   ### the warning is gone
    Found 2 items
    drwxrwx---   - hadoop supergroup          0 2019-05-31 16:26 /tmp
    drwxr-xr-x   - hadoop supergroup          0 2019-05-31 16:20 /user
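    One caution about the `ln -sf` step above: every dynamically linked program on the box resolves through /lib64/libc.so.6, so if the freshly built glibc is broken, even `ln` and `cp` stop working. A dry-run sketch of a safer sequence (the commands are only echoed here; the `libc-2.12.so` fallback path is an assumption for a CentOS 6-era system, and `sln` is the statically linked `ln` that glibc installs):

```shell
# 1) Exercise the new libc via its own loader BEFORE touching the symlink:
echo /opt/glibc-2.14/lib/ld-2.14.so --library-path /opt/glibc-2.14/lib /bin/true
# 2) If the symlink swap does break the shell, restore it with the
#    statically linked sln (needs no dynamic loader):
echo sln /lib64/libc-2.12.so /lib64/libc.so.6
```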

    2 Fixing the hdfs-site.xml path warnings

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
            <property>
                    <name>dfs.namenode.secondary.http-address</name>
                    <value>hadoop2:9001</value>
            </property>
            <property>
                    <name>dfs.namenode.name.dir</name>
                    <value>/opt/hadoopdata/hdfs/name</value>
            </property>
            <property>
                    <name>dfs.datanode.data.dir</name>
                    <value>/opt/hadoopdata/hdfs/data</value>
            </property>
            <property>
                    <name>dfs.namenode.checkpoint.dir</name>
                    <value>/opt/hadoopdata/hdfs/snn</value>
            </property>
            <property>
                    <name>dfs.namenode.checkpoint.period</name>
                    <value>3600</value>
            </property>
            <property>
                    <name>dfs.replication</name>
                    <value>2</value>
            </property>
            <!-- ==== properties added after enabling HA ==== -->
            <property>
                    <name>dfs.webhdfs.enabled</name>
                    <value>true</value>
            </property>
            <property>
                    <name>dfs.nameservices</name>
                    <value>ns1</value>
            </property>
            <property>
                    <name>dfs.ha.namenodes.ns1</name>
                    <value>nn1,nn2</value>
            </property>
            <property>
                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                    <value>hadoop1:8020</value>
            </property>
            <property>
                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                    <value>hadoop2:8020</value>
            </property>
            <property>
                    <name>dfs.namenode.servicerpc-address.ns1.nn1</name>
                    <value>hadoop1:8040</value>
            </property>
            <property>
                    <name>dfs.namenode.servicerpc-address.ns1.nn2</name>
                    <value>hadoop2:8040</value>
            </property>
            <property>
                    <name>dfs.namenode.http-address.ns1.nn1</name>
                    <value>hadoop1:50070</value>
            </property>
            <property>
                    <name>dfs.namenode.http-address.ns1.nn2</name>
                    <value>hadoop2:50070</value>
            </property>
            <property>
                    <name>dfs.ha.automatic-failover.enabled</name>
                    <value>true</value>
            </property>
            <property>
                    <name>dfs.namenode.shared.edits.dir</name>
                    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns1</value>
            </property>
            <property>
                    <name>dfs.client.failover.proxy.provider.ns1</name>
                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
            </property>
            <property>
                    <name>dfs.journalnode.edits.dir</name>
                    <value>/opt/hadoopdata/hdfs/journal</value>
            </property>
            <property>
                    <name>dfs.ha.fencing.methods</name>
                    <value>sshfence</value>
            </property>
            <property>
                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                    <value>/home/hadoop/.ssh/id_rsa</value>
            </property>
    </configuration>
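    Since a malformed hdfs-site.xml makes the NameNode fail at startup, it is worth validating the file before distributing it. A sketch; `check_conf` is a hypothetical helper that shells out to Python's stdlib XML parser:

```shell
# Parse an hdfs-site.xml and report how many <property> entries it contains;
# exits non-zero if the XML is not well-formed.
check_conf() {
  python3 -c 'import sys, xml.etree.ElementTree as ET
t = ET.parse(sys.argv[1])
print("ok:", len(t.getroot().findall("property")), "properties")' "$1"
}
# usage: check_conf /opt/hadoop/etc/hadoop/hdfs-site.xml
```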

    Change the local paths to file:// URIs (which resolves the "should be specified as a URI" warning) and apply the same edit on every node:

            <property>
                    <name>dfs.namenode.secondary.http-address</name>
                    <value>hadoop2:9001</value>
            </property>
            <property>
                    <name>dfs.namenode.name.dir</name>
                    <value>file:///opt/hadoopdata/hdfs/name</value>
            </property>
            <property>
                    <name>dfs.datanode.data.dir</name>
                    <value>file:///opt/hadoopdata/hdfs/data</value>
            </property>
            <property>
                    <name>dfs.replication</name>
                    <value>2</value>
            </property>
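    The same file then has to reach every node. A dry-run sketch (hostnames are the ones from this cluster; drop the `echo` to actually copy):

```shell
# Print the scp commands that would sync hdfs-site.xml to the other nodes.
CONF=/opt/hadoop/etc/hadoop/hdfs-site.xml
for host in hadoop2 hadoop3; do
  echo scp "$CONF" "$host:/opt/hadoop/etc/hadoop/"
done
```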

    Start the cluster:

    [hadoop@hadoop1 hadoop]$ /opt/hadoop/sbin/start-dfs.sh 
    Starting namenodes on [hadoop1 hadoop2]
    hadoop2: starting namenode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-namenode-hadoop2.out
    hadoop1: starting namenode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-namenode-hadoop1.out
    hadoop2: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-hadoop2.out
    hadoop1: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-hadoop1.out
    hadoop3: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-hadoop3.out
    Starting journal nodes [hadoop1 hadoop2 hadoop3]
    hadoop1: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-hadoop1.out
    hadoop2: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-hadoop2.out
    hadoop3: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-hadoop3.out
    Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
    hadoop1: starting zkfc, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-zkfc-hadoop1.out
    hadoop2: starting zkfc, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-zkfc-hadoop2.out
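    Once everything is up, `hdfs haadmin -getServiceState` shows which NameNode is active. A dry-run sketch (nn1/nn2 are the IDs from `dfs.ha.namenodes.ns1` above; drop the `echo` on a live cluster):

```shell
# Print the haadmin commands that report each NameNode's HA state.
for nn in nn1 nn2; do
  echo hdfs haadmin -getServiceState "$nn"
done
```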
Original post: https://www.cnblogs.com/yhq1314/p/11023250.html