  • Hadoop Automated Deployment Scripts

    Source: http://www.wangsenfeng.com/articles/2016/10/27/1477556261953.html

    1 Overview

    I recently wrote a set of Hadoop automated deployment scripts, including one for deploying a full Hadoop cluster and one for adding a single node to an existing cluster. If you need to stand up a Hadoop cluster quickly, feel free to use them. I have tested these scripts on 5 virtual machines; if you run into bugs, please report them. This article covers the cluster deployment script; the Hadoop version installed is 2.6.0.

    2 Dependencies

    A Hadoop 2.6.0 cluster depends on the JDK and ZooKeeper. This article installs JDK jdk-7u60-linux-x64 and ZooKeeper zookeeper-3.4.6.

    3 Files and Configuration

    The deployment scripts consist of two parts: scripts executed as the root user, and scripts executed as the Hadoop startup user. All of them only need to be run on one server, and the server that runs them becomes the Hadoop master. Each part is described below.

    3.1 The root scripts

    The root scripts are laid out as follows:

    • conf — configuration file directory 
      • init.conf
    • expect — expect script directory 
      • password.expect
      • scp.expect
      • otherInstall.expect
    • file — installation package directory 
      • hadoop-2.6.0.tar.gz
      • jdk-7u60-linux-x64.tar.gz
      • zookeeper-3.4.6.tar.gz
    • installRoot.sh — the script to execute

    3.1.1 The conf directory

    The init.conf file in this directory is the configuration file used by the root script; edit it before running the script. Its content is as follows:

    #jdk file and version
    JDK_FILE_TAR=jdk-7u60-linux-x64.tar.gz
    
    #jdk unpack name
    JDK_FILE=jdk1.7.0_60
    
    #java home
    JAVAHOME=/usr/java
    
    #Whether to install dependency packages: 0 means no, 1 means yes
    IF_INSTALL_PACKAGE=1
    
    #host conf
    ALLHOST="hadoop1master hadoop1masterha hadoop1slave1 hadoop1slave2 hadoop1slave3"
    ALLIP="192.168.0.180 192.168.0.184 192.168.0.181 192.168.0.182 192.168.0.183"
    
    #zookeeper conf
    ZOOKEEPER_TAR=zookeeper-3.4.6.tar.gz
    ZOOKEEPERHOME=/usr/local/zookeeper-3.4.6
    SLAVELIST="hadoop1slave1 hadoop1slave2 hadoop1slave3" 
    
    #hadoop conf
    HADOOP_TAR=hadoop-2.6.0.tar.gz
    HADOOPHOME=/usr/local/hadoop-2.6.0
    HADOOP_USER=hadoop2
    HADOOP_PASSWORD=hadoop2
    
    #root conf: $MASTER_HA $SLAVE1 $SLAVE2 $SLAVE3
    ROOT_PASSWORD="hadoop hadoop hadoop hadoop"

    Notes on individual parameters:

    1. ALLHOST holds the hostnames of all servers in the Hadoop cluster, separated by spaces; ALLIP holds their IP addresses, also space-separated. ALLHOST and ALLIP must correspond one-to-one (see the sanity-check sketch after this list).
    2. SLAVELIST holds the hostnames of the servers that will run the zookeeper ensemble.
    3. ROOT_PASSWORD holds the root passwords of every server except the master, separated by spaces. (In practice, the root password may differ from server to server.)
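
    Because ALLHOST and ALLIP must stay in lockstep, a pre-flight length check can catch a mismatch before anything is installed. This is a suggested addition, not part of the original scripts; it assumes it is run from the root script directory so that conf/init.conf resolves:

    #!/bin/bash
    source conf/init.conf
    # split the space-separated config values into arrays, exactly as installRoot.sh does
    hostArr=( $ALLHOST ); ipArr=( $ALLIP )
    if [ ${#hostArr[@]} -ne ${#ipArr[@]} ]; then
        echo "[ERROR]: ALLHOST has ${#hostArr[@]} entries, ALLIP has ${#ipArr[@]}"
        exit 1
    fi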

    3.1.2 The expect directory

    This directory contains three files: password.expect, scp.expect, and otherInstall.expect. password.expect sets the password of the Hadoop startup user; scp.expect copies files to remote servers; otherInstall.expect runs installRoot.sh remotely on the other servers. All three are invoked from installRoot.sh. 
    password.expect is as follows:

    #!/usr/bin/expect -f
    set user [lindex $argv 0]
    set password [lindex $argv 1]
    spawn passwd $user
    expect "New password:"
    send "$password\r"
    expect "Retype new password:"
    send "$password\r"
    expect eof
    

    Both argv 0 and argv 1 are passed in by installRoot.sh; the argv * values in the other two files are passed the same way. 
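
    For reference, installRoot.sh (section 3.1.4 below) invokes password.expect like this, with the user and password taken from init.conf:

    $PROGDIR/expect/password.expect $HADOOP_USER $HADOOP_PASSWORD >/dev/null 2>&1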
    scp.expect is as follows:

    #!/usr/bin/expect -f
    # set dir, host, user, password
    set dir [lindex $argv 0]
    set host [lindex $argv 1]
    set user [lindex $argv 2]
    set password [lindex $argv 3]
    set timeout -1
    spawn scp -r $dir $user@$host:/root/
    expect {
        "(yes/no)?" {
            send "yes\r"
            expect "*assword:" { send "$password\r" }
        }
        "*assword:" {
            send "$password\r"
        }
    }
    expect eof

    otherInstall.expect is as follows:

    #!/usr/bin/expect -f
    # set dir, name, host, user, password
    set dir [lindex $argv 0]
    set name [lindex $argv 1]
    set host [lindex $argv 2]
    set user [lindex $argv 3]
    set password [lindex $argv 4]
    set timeout -1
    spawn ssh -q $user@$host "$dir/$name"
    expect {
        "(yes/no)?" {
            send "yes\r"
            expect "*assword:" { send "$password\r" }
        }
        "*assword:" {
            send "$password\r"
        }
    }
    expect eof
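
    Both scp.expect and otherInstall.expect are invoked near the end of installRoot.sh, once per remote server, with the arguments mapped positionally onto the set lines above:

    $PROGDIR/expect/scp.expect $PROGDIR $node $USER ${rootPasswdArr[$i]}
    $PROGDIR/expect/otherInstall.expect $PROGDIR $PROGNAME $node $USER ${rootPasswdArr[$i]}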

    3.1.3 The file directory

    This directory holds the installation packages for the Hadoop cluster and its dependencies.

    3.1.4 The installRoot.sh script

    This script is run as the root user; its content is as follows:

    #!/bin/bash
    
    if [ $USER != "root" ]; then
        echo "[ERROR]:Must run as root";  exit 1
    fi
    # Get absolute path and name of this shell
    readonly PROGDIR=$(readlink -m $(dirname $0))
    readonly PROGNAME=$(basename $0)
    hostname=`hostname`
    
    source /etc/profile
    # import init.conf
    source $PROGDIR/conf/init.conf
    echo "install start..."
    # install package for dependence
    if [ $IF_INSTALL_PACKAGE -eq 1 ]; then
        yum -y install expect >/dev/null 2>&1
        echo "expect install successful."
        # yum install openssh-clients #scp
    fi
    
    #stop iptables or open ports, now stop iptables
    service iptables stop
    chkconfig iptables off
    FF_INFO=`service iptables status`
    if [ -n "`echo $FF_INFO | grep "Firewall is not running"`" ]; then
        echo "Firewall is already stop."
    else
        echo "[ERROR]:Failed to shut down the firewall. Exit shell."
        exit 1
    fi
    #stop selinux
    setenforce 0
    SL_INFO=`getenforce`
    if [ "$SL_INFO" == "Permissive" -o "$SL_INFO" == "Disabled" ]; then
        echo "selinux is already stop."
    else    
        echo "[ERROR]:Failed to shut down the selinux. Exit shell."
        exit 1
    fi
    
    #host config
    hostArr=( $ALLHOST )
    IpArr=( $ALLIP )
    for (( i=0; i<${#hostArr[@]}; i++ )); do
        if [ -z "`grep "${hostArr[i]}" /etc/hosts`" -o -z "`grep "${IpArr[i]}" /etc/hosts`" ]; then
            echo "${IpArr[i]} ${hostArr[i]}" >> /etc/hosts
        fi
    done
    
    #user config
    groupadd $HADOOP_USER && useradd -g $HADOOP_USER $HADOOP_USER && $PROGDIR/expect/password.expect $HADOOP_USER $HADOOP_PASSWORD >/dev/null 2>&1
    
    # check jdk
    checkOpenJDK=`rpm -qa | grep java`
    # if openJDK is already installed, uninstall it
    if [ -n "$checkOpenJDK" ]; then
        rpm -e --nodeps $checkOpenJDK
        echo "uninstall openJDK successful"
    fi
    # A way of exception handling: if `java -version` fails, run the commands after ||.
    `java -version` || (
        [ ! -d $JAVAHOME ] && ( mkdir $JAVAHOME )
        tar -zxf $PROGDIR/file/$JDK_FILE_TAR -C $JAVAHOME
        echo "export JAVA_HOME=$JAVAHOME/$JDK_FILE" >> /etc/profile
        echo 'export JAVA_BIN=$JAVA_HOME/bin' >> /etc/profile
        echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
        echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/profile
        echo 'export JAVA_HOME JAVA_BIN PATH CLASSPATH' >> /etc/profile
        echo "sun jdk done"
    )
    
    # check zookeeper: install only on the hosts listed in SLAVELIST
    slaveArr=( $SLAVELIST )
    if [[ "${slaveArr[@]}" =~ $hostname ]]; then
        `zkServer.sh status` || [ -d $ZOOKEEPERHOME ] || (
            tar -zxf $PROGDIR/file/$ZOOKEEPER_TAR -C /usr/local/
            chown -R $HADOOP_USER:$HADOOP_USER $ZOOKEEPERHOME
            echo "export ZOOKEEPER_HOME=$ZOOKEEPERHOME" >> /etc/profile
            echo 'PATH=$PATH:$ZOOKEEPER_HOME/bin' >> /etc/profile
            echo "zookeeper done"
        )
    fi
    
    # check hadoop2
    `hadoop version` || [ -d $HADOOPHOME ] || (
        tar -zxf $PROGDIR/file/$HADOOP_TAR -C /usr/local/
        chown -R $HADOOP_USER:$HADOOP_USER $HADOOPHOME
        echo "export HADOOP_HOME=$HADOOPHOME" >> /etc/profile
        echo 'PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
        echo 'HADOOP_HOME_WARN_SUPPRESS=1' >> /etc/profile
        echo "hadoop2 done"
    )
    source /etc/profile
    
    #ssh config
    sed -i "s/^#RSAAuthentication yes/RSAAuthentication yes/g" /etc/ssh/sshd_config
    sed -i "s/^#PubkeyAuthentication yes/PubkeyAuthentication yes/g" /etc/ssh/sshd_config
    sed -i "s/^#AuthorizedKeysFile/AuthorizedKeysFile/g" /etc/ssh/sshd_config
    sed -i "s/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config
    sed -i "s/^#UseDNS yes/UseDNS no/g" /etc/ssh/sshd_config
    service sshd restart
    
    # install other servers
    rootPasswdArr=( $ROOT_PASSWORD )
    if [ $hostname == ${hostArr[0]} ]; then
        i=0
        for node in $ALLHOST; do
            if [ $hostname == $node ]; then
                echo "this server, do nothing"
            else
                # copy install dir to other server and run installRoot.sh there
                $PROGDIR/expect/scp.expect $PROGDIR $node $USER ${rootPasswdArr[$i]}
                $PROGDIR/expect/otherInstall.expect $PROGDIR $PROGNAME $node $USER ${rootPasswdArr[$i]}
                i=$(($i+1)) #i++
                echo "$node install successful."
            fi
        done
        # Let the environment variables take effect
        su - root
    fi
    

    This script does the following:

    1. If IF_INSTALL_PACKAGE=1 is set in the configuration file, install expect; installing expect is the default. If expect is already present on the servers, set IF_INSTALL_PACKAGE=0.
    2. Shut down the firewall and disable selinux.
    3. Write the host/ip mapping of every machine in the cluster into /etc/hosts.
    4. Create the Hadoop startup user and its group.
    5. Install the jdk, zookeeper, and hadoop, and set the environment variables.
    6. Edit the ssh configuration file /etc/ssh/sshd_config.
    7. If the machine running the script is the master, copy the root scripts to the other machines and run them there. 
      Note: before running this script, make sure every server in the Hadoop cluster can execute scp; if one cannot, install openssh-clients on it with: yum -y install openssh-clients. A one-line check is shown below.
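
      A one-line pre-flight check, run on each server (a suggested addition, not part of the original scripts):

      command -v scp >/dev/null 2>&1 || yum -y install openssh-clients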

    3.2 The hadoop scripts

    The hadoop scripts are laid out as follows:

    • bin — script directory 
      • config_hadoop.sh
      • config_ssh.sh
      • config_zookeeper.sh
      • ssh_nopassword.expect
      • start_all.sh
    • conf — configuration file directory 
      • init.conf
    • template — configuration file template directory 
      • core-site.xml
      • hadoop-env.sh
      • hdfs-site.xml
      • mapred-site.xml
      • mountTable.xml
      • myid
      • slaves
      • yarn-env.sh
      • yarn-site.xml
      • zoo.cfg
    • installCluster.sh — the script to execute

    3.2.1 The bin directory

    This directory contains all of the scripts called by installCluster.sh; each is described below.

    3.2.1.1 config_hadoop.sh

    This script creates the directories Hadoop needs and fills in the configuration files; all of its parameters come from init.conf.

    #!/bin/bash
    
    # Get absolute path of this shell
    readonly PROGDIR=$(readlink -m $(dirname $0))
    # import init.conf
    source $PROGDIR/../conf/init.conf
    
    for node in $ALL; do
        # create dirs
        ssh -q $HADOOP_USER@$node "
            mkdir -p $HADOOPDIR_CONF/hadoop2/namedir
            mkdir -p $HADOOPDIR_CONF/hadoop2/datadir
            mkdir -p $HADOOPDIR_CONF/hadoop2/jndir
            mkdir -p $HADOOPDIR_CONF/hadoop2/tmp
            mkdir -p $HADOOPDIR_CONF/hadoop2/hadoopmrsys
            mkdir -p $HADOOPDIR_CONF/hadoop2/hadoopmrlocal
            mkdir -p $HADOOPDIR_CONF/hadoop2/nodemanagerlocal
            mkdir -p $HADOOPDIR_CONF/hadoop2/nodemanagerlogs
        "
        echo "$node create dir done."
        for conffile in $CONF_FILE; do
            # copy
            scp $PROGDIR/../template/$conffile $HADOOP_USER@$node:$HADOOPHOME/etc/hadoop
            # update
            ssh -q $HADOOP_USER@$node "
                sed -i 's%MASTER_HOST%${MASTER_HOST}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%MASTER_HA_HOST%${MASTER_HA_HOST}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%SLAVE1%${SLAVE1}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%SLAVE2%${SLAVE2}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%SLAVE3%${SLAVE3}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%HDFS_CLUSTER_NAME%${HDFS_CLUSTER_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%VIRTUAL_PATH%${VIRTUAL_PATH}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%DFS_NAMESERVICES%${DFS_NAMESERVICES}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%NAMENODE1_NAME%${NAMENODE1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%NAMENODE2_NAME%${NAMENODE2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%NAMENODE_JOURNAL%${NAMENODE_JOURNAL}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%HADOOPDIR_CONF%${HADOOPDIR_CONF}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%ZOOKEEPER_ADDRESS%${ZOOKEEPER_ADDRESS}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%YARN1_NAME%${YARN1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%YARN2_NAME%${YARN2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%HADOOPHOME%${HADOOPHOME}%g' $HADOOPHOME/etc/hadoop/$conffile
                sed -i 's%JAVAHOME%${JAVAHOME}%g' $HADOOPHOME/etc/hadoop/$conffile
                # update yarn.resourcemanager.ha.id for yarn_ha
                if [ $conffile == 'yarn-site.xml' ]; then
                    if [ $node == $MASTER_HA_HOST ]; then
                        sed -i 's%YARN_ID%${YARN2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                    else
                        sed -i 's%YARN_ID%${YARN1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                    fi
                fi
            "
        done
        echo "$node copy hadoop template done."
    done
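
    The template files contain uppercase placeholders that the sed calls above rewrite in place; % is used as the sed delimiter because many of the substituted values (paths such as /usr/local/hadoop-2.6.0) contain slashes. A standalone demonstration of the mechanism, with a hypothetical XML fragment (the real templates are not reproduced in this article):

    # demo: substitute a MASTER_HOST placeholder the same way config_hadoop.sh does
    MASTER_HOST=hadoop1master
    echo '<value>hdfs://MASTER_HOST:9000</value>' > /tmp/demo.xml
    sed -i "s%MASTER_HOST%${MASTER_HOST}%g" /tmp/demo.xml
    cat /tmp/demo.xml    # -> <value>hdfs://hadoop1master:9000</value>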
    

    3.2.1.2 config_ssh.sh和ssh_nopassword.expect

    These two files set up passwordless ssh login; ssh_nopassword.expect is called by config_ssh.sh. 
    config_ssh.sh is as follows:

    #!/bin/bash
    
    # Get absolute path of this shell
    readonly PROGDIR=$(readlink -m $(dirname $0))
    # import init.conf
    source $PROGDIR/../conf/init.conf
    # Get hostname
    HOSTNAME=`hostname`
    
    # Config ssh nopassword login
    echo "Config ssh on master"
    # If the directory "~/.ssh" is not exist, then execute mkdir and chmod
    [ ! -d ~/.ssh ] && ( mkdir ~/.ssh ) && ( chmod 700 ~/.ssh )
    # If the file "~/.ssh/id_rsa.pub" is not exist, then execute ssh-keygen and chmod
    [ ! -f ~/.ssh/id_rsa.pub ] && ( yes|ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa ) && ( chmod 600 ~/.ssh/id_rsa.pub )
    
    echo "Config ssh nopassword for cluster"
    # For all node, including master and slaves
    for node in $ALL; do
        # execute bin/ssh_nopassword.expect
        $PROGDIR/ssh_nopassword.expect $node $HADOOP_USER $HADOOP_PASSWORD $HADOOPDIR_CONF/.ssh/id_rsa.pub >/dev/null 2>&1
        echo "$node done."
    done
    echo "Config ssh successful."
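
    Once config_ssh.sh has run, passwordless login can be verified without risking an interactive hang. This is a suggested check, not part of the original scripts; source conf/init.conf first so that $ALL and $HADOOP_USER are set. The -o BatchMode=yes option makes ssh fail instead of prompting for a password:

    for node in $ALL; do
        ssh -o BatchMode=yes $HADOOP_USER@$node hostname || echo "$node: key login not working"
    done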

    ssh_nopassword.expect is as follows:

    #!/usr/bin/expect -f
    
    set host [lindex $argv 0]
    set user [lindex $argv 1]
    set password [lindex $argv 2]
    set dir [lindex $argv 3]
    spawn ssh-copy-id -i $dir $user@$host
    expect {
        "yes/no" {
            send "yes\r"; exp_continue
        }
        -nocase "password:" {
            send "$password\r"
        }
    }
    expect eof

    3.2.1.3 config_zookeeper.sh

    This script configures zookeeper; its content is as follows:

    #!/bin/bash
    
    # Get absolute path of this shell
    readonly PROGDIR=$(readlink -m $(dirname $0))
    # import init.conf
    source $PROGDIR/../conf/init.conf
    
    #update conf
    sed -i "s%ZOOKEEPERHOME%${ZOOKEEPERHOME}%g" $PROGDIR/../template/zoo.cfg
    sed -i "s%ZOOKEEPER_SLAVE1%${ZOOKEEPER_SLAVE1}%g" $PROGDIR/../template/zoo.cfg
    sed -i "s%ZOOKEEPER_SLAVE2%${ZOOKEEPER_SLAVE2}%g" $PROGDIR/../template/zoo.cfg
    sed -i "s%ZOOKEEPER_SLAVE3%${ZOOKEEPER_SLAVE3}%g" $PROGDIR/../template/zoo.cfg
    
    zookeeperArr=( "$ZOOKEEPER_SLAVE1" "$ZOOKEEPER_SLAVE2" "$ZOOKEEPER_SLAVE3" )
    myid=1
    for node in ${zookeeperArr[@]}; do
        scp $PROGDIR/../template/zoo.cfg $HADOOP_USER@$node:$ZOOKEEPERHOME/conf
        echo $myid > $PROGDIR/../template/myid
        ssh -q $HADOOP_USER@$node "
            [ ! -d $ZOOKEEPERHOME/data ] && ( mkdir $ZOOKEEPERHOME/data )
            [ ! -d $ZOOKEEPERHOME/log ] && ( mkdir $ZOOKEEPERHOME/log )        
        "
        scp $PROGDIR/../template/myid $HADOOP_USER@$node:$ZOOKEEPERHOME/data
        myid=`expr $myid + 1` #i++
        echo "$node copy zookeeper template done."
    done
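
    After the loop completes, each zookeeper node should hold a distinct myid (1, 2, 3). A quick confirmation, assuming init.conf has been sourced (a suggested check, not part of the original scripts):

    for node in $ZOOKEEPER_SLAVE1 $ZOOKEEPER_SLAVE2 $ZOOKEEPER_SLAVE3; do
        echo -n "$node: "
        ssh -q $HADOOP_USER@$node "cat $ZOOKEEPERHOME/data/myid"
    done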

    3.2.1.4 start_all.sh

    This script starts zookeeper and all of the Hadoop components; its content is as follows:

    #!/bin/bash
    
    source /etc/profile
    # Get absolute path of this shell
    readonly PROGDIR=$(readlink -m $(dirname $0))
    # import init.conf
    source $PROGDIR/../conf/init.conf
    
    # start zookeeper
    zookeeperArr=( "$ZOOKEEPER_SLAVE1" "$ZOOKEEPER_SLAVE2" "$ZOOKEEPER_SLAVE3" )
    for znode in ${zookeeperArr[@]}; do
        ssh -q $HADOOP_USER@$znode "
            source /etc/profile
            $ZOOKEEPERHOME/bin/zkServer.sh start
        "
        echo "$znode zookeeper start done."
    done
    
    # start journalnode
    journalArr=( $JOURNALLIST )
    for jnode in ${journalArr[@]}; do
        ssh -q $HADOOP_USER@$jnode "
            source /etc/profile
            $HADOOPHOME/sbin/hadoop-daemon.sh start journalnode
        "
        echo "$jnode journalnode start done."
    done
    
    # format zookeeper
    $HADOOPHOME/bin/hdfs zkfc -formatZK
    
    # format hdfs
    $HADOOPHOME/bin/hdfs namenode -format -clusterId $DFS_NAMESERVICES
    
    # start namenode
    $HADOOPHOME/sbin/hadoop-daemon.sh start namenode
    
    # sign in master_ha, sync from namenode to namenode_ha
    ssh -q $HADOOP_USER@$MASTER_HA_HOST "
        $HADOOPHOME/bin/hdfs namenode -bootstrapStandby
    "
    
    # start zkfc on master
    $HADOOPHOME/sbin/hadoop-daemon.sh start zkfc
    
    # start namenode_ha and datanode
    $HADOOPHOME/sbin/start-dfs.sh
    
    # start yarn
    $HADOOPHOME/sbin/start-yarn.sh
    
    # start yarn_ha
    ssh -q $HADOOP_USER@$MASTER_HA_HOST "
        source /etc/profile
        $HADOOPHOME/sbin/yarn-daemon.sh start resourcemanager
    "
    echo "start all done."
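
    When start_all.sh finishes, jps on each node should list the expected daemons: QuorumPeerMain for zookeeper, NameNode/DataNode/JournalNode/DFSZKFailoverController for HDFS, and ResourceManager/NodeManager for YARN. A suggested spot check (assumes init.conf has been sourced):

    for node in $ALL; do
        echo "== $node =="
        ssh -q $HADOOP_USER@$node "source /etc/profile; jps"
    done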

    4 Automated Cluster Deployment Procedure

    4.1 Running the root script

    Pick one server as the Hadoop 2.6.0 master node and run the following as the root user.

    1. Make sure every server in the cluster can run scp: try scp on each server, and if the command is not found, install it with: yum -y install openssh-clients.
    2. Do the following. When the last step finishes, check the /etc/hosts and /etc/profile configuration, and run java -version and hadoop version to verify the jdk and Hadoop installations; if the java or hadoop command cannot be found, log in to the server again and re-check. 
      1. Run cd ~ to enter /root
      2. Pack the directory containing the root scripts into a tar archive (say root_install.tar.gz), run rz -y, and upload root_install.tar.gz (if rz cannot be found, install it with: yum -y install lrzsz)
      3. Run tar -zxvf root_install.tar.gz to unpack it
      4. Run cd root_install to enter the root_install directory
      5. Run ./installRoot.sh to install the jdk, zookeeper, and Hadoop, and wait for the installation to finish

    4.2 Running the hadoop script

    On the master node, run this as the Hadoop startup user (the user created by the root script; below it is assumed to be hadoop2):

    1. From the root user, switch straight into the hadoop2 user with su - hadoop2
    2. Do the following. Afterwards, check the zookeeper and Hadoop startup logs to confirm the installation succeeded, and use Hadoop's built-in monitoring pages to check the state of the cluster. 
      1. Run cd ~ to enter /home/hadoop2
      2. Pack the directory containing the hadoop scripts into a tar archive (say hadoop_install.tar.gz), run rz -y, and upload hadoop_install.tar.gz (if rz cannot be found, install it with: yum -y install lrzsz)
      3. Run tar -zxvf hadoop_install.tar.gz to unpack it
      4. Run cd hadoop_install to enter the hadoop_install directory
      5. Run ./installCluster.sh to configure and start zookeeper and Hadoop, and wait for the script to finish
    3. Finally, based on the fs.viewfs.mounttable.hCluster.link./tmp setting in mountTable.xml, create the directory that this link points to: 
      hdfs dfs -mkdir hdfs://hadoop-cluster1/tmp 
      If you skip this, hdfs dfs -ls /tmp will complain that the directory cannot be found. An illustrative fragment of the mount entry follows.
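
    For context, this is roughly what the viewfs mount entry behind that step pairs together, reconstructed for illustration from the property name quoted above (the actual mountTable.xml template is not reproduced in this article):

    # mountTable.xml entry (illustrative, shown here as comments):
    #   <name>fs.viewfs.mounttable.hCluster.link./tmp</name>
    #   <value>hdfs://hadoop-cluster1/tmp</value>
    hdfs dfs -mkdir hdfs://hadoop-cluster1/tmp   # create the physical directory behind the link
    hdfs dfs -ls /tmp                            # resolves through the viewfs mount once the directory exists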

    5 Summary

    The Hadoop 2.6.0 deployment scripts still have rough edges: the configuration file carries many parameters, some of them redundant, and the scripts themselves could be written better. Take this as a starting point, and please point out any mistakes. Thanks.
