    Setting up a Hadoop Cluster

    1. Switch Ubuntu 14.04 to the Aliyun apt mirror

    At first I chose NAT mode for the VM network, so the guest could reach the Internet but could not be pinged from the host. I mainly wanted to install MySQL from the repositories, because installing MySQL by hand is too much trouble. Afterwards I switched to host-only mode, so I could connect to the VM over SSH from my local machine.

    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak   # back up the original list

    sudo vim /etc/apt/sources.list   # edit the source list
    sudo apt-get update              # refresh the package index

    Aliyun mirror entries. Delete all of the default entries first; I left the deb-src (source-code) lines commented out:

    deb http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ trusty-security main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ trusty-updates main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ trusty-proposed main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ trusty-backports main restricted universe multiverse
    #deb-src http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
    #deb-src http://mirrors.aliyun.com/ubuntu/ trusty-security main restricted universe multiverse
    #deb-src http://mirrors.aliyun.com/ubuntu/ trusty-updates main restricted universe multiverse
    #deb-src http://mirrors.aliyun.com/ubuntu/ trusty-proposed main restricted universe multiverse
    #deb-src http://mirrors.aliyun.com/ubuntu/ trusty-backports main restricted universe multiverse
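
    Alternatively, a one-liner can swap the mirror host in place (a sketch, assuming the stock sources.list still points at the default Ubuntu mirrors):

    # rewrite the mirror hosts and keep a backup as sources.list.bak
    sudo sed -i.bak -e 's|archive.ubuntu.com|mirrors.aliyun.com|g' \
                    -e 's|security.ubuntu.com|mirrors.aliyun.com|g' /etc/apt/sources.list
    sudo apt-get update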

    2. Install MySQL

    sudo apt-get install mysql-server
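
    A quick sanity check after the install (on 14.04 the service is registered as mysql; the root password is whatever you entered during installation):

    sudo service mysql status                  # the daemon should be running
    mysql -u root -p -e 'SELECT VERSION();'    # confirm you can log in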

    3. Configure a static IP

    Edit /etc/network/interfaces:

    auto eth0
    iface eth0 inet static
    address 192.168.10.100
    netmask 255.255.255.0
    gateway 192.168.10.1
    dns-nameservers 223.5.5.5 223.6.6.6
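
    Apply the change without a full reboot by bouncing the interface:

    sudo ifdown eth0 && sudo ifup eth0
    ip addr show eth0    # confirm the new address is assigned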

    4. Install the JDK and set environment variables

    sudo mkdir /usr/local/java
    sudo tar -zxvf jdk-8u121-linux-x64.tar.gz -C /usr/local/java/

    Configure the environment variables:

    sudo vim /etc/profile   # append the two lines below at the end of the file
    export JAVA_HOME=/usr/local/java/jdk1.8.0_121
    export PATH=$JAVA_HOME/bin:$PATH
    Reload the profile: source /etc/profile
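
    A quick check that the JDK is picked up (assuming the archive unpacked to jdk1.8.0_121 as above):

    java -version      # should report java version "1.8.0_121"
    echo $JAVA_HOME    # should print /usr/local/java/jdk1.8.0_121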

    5. Unpack Hadoop

    sudo tar -zxvf hadoop-2.7.1.tar.gz -C /opt/

    Set the environment variables:

    sudo vim /etc/profile
    export JAVA_HOME=/usr/local/java/jdk1.8.0_121
    export HADOOP_HOME=/opt/hadoop-2.7.1
    export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
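
    After reloading the profile, confirm that the Hadoop binaries resolve:

    source /etc/profile
    hadoop version    # should print Hadoop 2.7.1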

    6. Configure Hadoop, following http://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-common/ClusterSetup.html

    The available settings are documented in core-default.xml, hdfs-default.xml, yarn-default.xml and mapred-default.xml; the site-specific overrides go in the corresponding core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml under $HADOOP_HOME/etc/hadoop, and those are the four files I modified.
    Also edit hadoop-env.sh, which holds the Hadoop environment settings, and set JAVA_HOME there to the JDK path.
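
    A minimal sketch of the two core files, assuming the hostnames from step 7 below; the HDFS scratch directory and replication factor here are my own assumptions, not from the original notes:

    <!-- etc/hadoop/core-site.xml -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>        <!-- NameNode RPC address -->
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.1/tmp</value>     <!-- assumed scratch directory -->
      </property>
    </configuration>

    <!-- etc/hadoop/hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>                         <!-- three DataNodes in this cluster -->
      </property>
    </configuration>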

    7. Edit /etc/hosts

    192.168.1.100 master
    192.168.1.101 slave1
    192.168.1.102 slave2
    192.168.1.103 slave3
    Comment out the 127.0.1.1 line; otherwise the master will later fail to recognize the DataNodes, because the hostname resolves to the loopback address.
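
    On each machine the hostnames should now resolve to the LAN addresses rather than the loopback:

    ping -c 1 slave1    # should get a reply from 192.168.1.101, not 127.x.x.x
    ping -c 1 master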

    8. Clone the master VM and rename the clones to slaveX

    Change the hostname: sudo vim /etc/hostname, and sudo vim /etc/hosts to update the name on the 127.0.1.1 line.
    Change the static IP: sudo vim /etc/network/interfaces
    This produces three machines, slave1, slave2 and slave3, renamed as sketched below.
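
    On each clone the rename is just a couple of edits (shown here for slave1; the address comes from the hosts table in step 7):

    echo slave1 | sudo tee /etc/hostname    # new hostname, takes effect after reboot
    sudo vim /etc/network/interfaces        # change "address" to 192.168.1.101
    sudo reboot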

    9. Edit the Hadoop slaves file to list the slave nodes

    On the master machine:
    vim slaves and add three lines: slave1, slave2, slave3 (one hostname per line), as shown below.
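
    The resulting $HADOOP_HOME/etc/hadoop/slaves file:

    slave1
    slave2
    slave3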

    10. Set up passwordless SSH login

    On master, generate a key pair with ssh-keygen -t rsa; the keys end up in ~/.ssh.
    Distribute the public key to all four hosts (master, slave1, slave2, slave3) with ssh-copy-id, as sketched below.
    Test the passwordless login with ssh slave1, and log out again with logout.
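
    A compact way to push the key to every node (it will still prompt once for each host's login password):

    ssh-keygen -t rsa                          # accept the defaults, empty passphrase
    for h in master slave1 slave2 slave3; do
        ssh-copy-id "$h"                       # appends id_rsa.pub to that host's authorized_keys
    done
    ssh slave1 hostname                        # should print "slave1" without a password prompt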

    11. Format the NameNode on master and start the cluster

    hadoop namenode -format
    start-all.sh

    Use jps to check whether the cluster started successfully.

    On master, jps shows:

    NodeManager
    SecondaryNameNode
    ResourceManager
    NameNode

    On the slave nodes, jps shows:

    NodeManager
    DataNode
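
    Beyond jps, the cluster state can be confirmed from the command line and the standard Hadoop 2.7 web UIs:

    hdfs dfsadmin -report    # should list three live DataNodes
    yarn node -list          # should list three running NodeManagers
    # web UIs: http://master:50070 (HDFS NameNode), http://master:8088 (YARN ResourceManager)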

    That completes the Hadoop setup; these are today's notes.
