  • A script to sync Hive across the cluster

    A programmer's job is to turn everything done by hand into something the computer does, so the programmer can be a little lazy.

    Below is a very crude Hive folder sync program. For clusters with more than 100 or 1,000 nodes, the repetition can be replaced with a loop (see the sketch after the script).

    #!/bin/sh
    
    #=========== Hive installation sync ==========#
    # This script syncs the hive folder from the  #
    # name node to the data nodes. When the hive  #
    # installation changes, the data nodes must   #
    # be re-synced; otherwise oozie shell jobs    #
    # that call hive fail because the assigned    #
    # node's hive installation is out of date.    #
    #==============================================#
    
    # 1. Remove the old hive installation from each data node
    ssh -t hadoop@dwprod-dataslave1 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave2 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave3 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave4 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave5 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave6 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave7 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave8 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave9 rm -r /opt/local/hive
    ssh -t hadoop@dwprod-dataslave10 rm -r /opt/local/hive
    
    # 2. Copy the new hive installation to each data node
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave1:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave2:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave3:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave4:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave5:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave6:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave7:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave8:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave9:/opt/local/
    scp -r -q /opt/local/hive hadoop@dwprod-dataslave10:/opt/local/
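
    As mentioned above, the ten near-identical ssh/scp pairs can be collapsed into a loop once the node count grows. A minimal sketch, assuming the dwprod-dataslaveN hostname pattern carries over unchanged; NODE_COUNT is a placeholder to set for your cluster:

    #!/bin/sh

    # Sync the hive folder from the name node to every data node.
    # NODE_COUNT is an assumption; set it to your number of data nodes.
    NODE_COUNT=10

    i=1
    while [ "$i" -le "$NODE_COUNT" ]; do
        node="dwprod-dataslave${i}"
        # 1. Remove the old hive installation
        ssh -t hadoop@"$node" rm -r /opt/local/hive
        # 2. Copy the new hive installation
        scp -r -q /opt/local/hive hadoop@"$node":/opt/local/
        i=$((i + 1))
    done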
