  hadoop+zookeeper+hbase distributed installation

    Preliminary server configuration##

    1. Edit /etc/hosts and add the entries below (when the servers expose these IPs directly)
      119.23.163.113 master
      120.79.116.198 slave1
      120.79.116.23 slave2
      If the servers sit inside a security group, look up each internal IP with ip a and add those internal IPs to /etc/hosts instead;

    2. Confirm that the three servers can ping each other
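
      The reachability check can be scripted. The hostnames below assume the /etc/hosts entries from step 1; on a machine without those entries the loop simply reports them unreachable instead of failing:

      ```shell
      # check_hosts: ping each listed host once and report reachability.
      # Hostnames are assumed to resolve via the /etc/hosts entries from step 1.
      check_hosts() {
        for h in "$@"; do
          if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
            echo "$h ok"
          else
            echo "$h unreachable"
          fi
        done
      }
      check_hosts master slave1 slave2
      ```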

    3. Generate SSH key files on the three machines

      1. Run the following command on all three machines
        ssh-keygen
      2. Build the shared public-key file: generate it on the master server first, then copy it to the other two servers
        1. Standard passwordless-SSH setup
        • touch /root/.ssh/authorized_keys
        • cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
        • vim /root/.ssh/authorized_keys
          • Paste the contents of /root/.ssh/id_rsa.pub from the other two machines into authorized_keys
        • chmod 600 /root/.ssh/authorized_keys
        • chmod 700 /root/.ssh/
        2. Mutual passwordless SSH for machines inside the same security group
        • ssh-copy-id -i ~/.ssh/id_rsa.pub $ip
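
        The security-group variant can be wrapped in a small loop. This is a dry-run sketch: it only prints the commands so they can be reviewed before running (the root user and the host list are assumptions taken from this guide):

        ```shell
        # Print the ssh-copy-id command for every node; remove the echo to
        # actually run them (each host will prompt for its password once).
        for ip in master slave1 slave2; do
          echo ssh-copy-id -i ~/.ssh/id_rsa.pub "root@$ip"
        done
        ```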
    4. Unpack the hadoop, zookeeper, and hbase tarballs

    5. Rename the unpacked directories
      mv hadoop-2.6.0-cdh5.6.0/ hadoop/
      mv hbase-1.0.0-cdh5.6.0/ hbase
      mv zookeeper-3.4.5-cdh5.6.0/ zookeeper

    6. Install the Java environment

      1. Unpack the JDK tarball
      2. Configure /etc/profile
        export JAVA_HOME=/opt/cdh/jdk1.8.0_144
        export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib/
        export PATH=$PATH:$JAVA_HOME/bin
      3. Apply the changes: source /etc/profile
      4. Verify: java -version
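
      A small check after editing /etc/profile catches a mistyped JAVA_HOME early. The path is this guide's assumed layout; on a machine without it the script reports the problem instead of failing:

      ```shell
      # Verify the JAVA_HOME from /etc/profile exists and contains a java binary.
      JAVA_HOME=/opt/cdh/jdk1.8.0_144
      if [ -x "$JAVA_HOME/bin/java" ]; then
        "$JAVA_HOME/bin/java" -version
        msg="JAVA_HOME looks good"
      else
        msg="check JAVA_HOME: $JAVA_HOME/bin/java not found"
      fi
      echo "$msg"
      ```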

    hadoop distributed deployment##

    1. Enter the configuration directory (cd hadoop/etc/hadoop/) and edit the configuration files
      1. vim core-site.xml
    	<property>
    	    <name>hadoop.tmp.dir</name>
    	    <value>/opt/cdh/hadoop-env/tmp</value>
    	    <description>A base for other temporary directories.</description>
    	</property>
    	<property>
    	    <name>fs.default.name</name>
    	    <value>hdfs://master:9000</value>
    	</property>
    
    2. vim hadoop-env.sh
    
    export JAVA_HOME=/opt/cdh/jdk1.8.0_144
    
    3. vim hdfs-site.xml
    
    <property>
       <name>dfs.name.dir</name>
       <value>/opt/cdh/hadoop-env/dfs/name</value>
   <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
    </property>
    <property>
       <name>dfs.data.dir</name>
       <value>/opt/cdh/hadoop-env/dfs/data</value>
   <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
    </property>
    <property>
       <name>dfs.replication</name>
       <value>2</value>
    </property>
    <property>
          <name>dfs.permissions</name>
          <value>true</value>
          <description>Enable HDFS permission checking.</description>
    </property>
    
    4. cp mapred-site.xml.template mapred-site.xml  
       vim mapred-site.xml
    
    	<property>
    	    <name>mapred.job.tracker</name>
    	    <value>master:49001</value>
    	</property>
    	<property>
    	    <name>mapred.local.dir</name>
    	    <value>/opt/cdh/hadoop-env/var</value>
    	</property>
    	<property>
    	    <name>mapreduce.framework.name</name>
    	    <value>yarn</value>
    	</property>
    
    5. vim slaves
    
    	slave1
    	slave2
    
    6. vim yarn-site.xml
    
       <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>master</value>
       </property>
       <property>
            <description>The address of the applications manager interface in the RM.</description>
            <name>yarn.resourcemanager.address</name>
            <value>${yarn.resourcemanager.hostname}:8032</value>
       </property>
       <property>
            <description>The address of the scheduler interface.</description>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>${yarn.resourcemanager.hostname}:8030</value>
       </property>
       <property>
            <description>The http address of the RM web application.</description>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>${yarn.resourcemanager.hostname}:8088</value>
       </property>
       <property>
        <description>The https address of the RM web application.</description>
            <name>yarn.resourcemanager.webapp.https.address</name>
            <value>${yarn.resourcemanager.hostname}:8090</value>
       </property>
       <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>${yarn.resourcemanager.hostname}:8031</value>
       </property>
       <property>
            <description>The address of the RM admin interface.</description>
            <name>yarn.resourcemanager.admin.address</name>
            <value>${yarn.resourcemanager.hostname}:8033</value>
       </property>
       <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
       </property>
       <property>
            <name>yarn.scheduler.maximum-allocation-mb</name>
            <value>8182</value>
        <description>Maximum allocation for every container request, in MB (the YARN default is 8192).</description>
       </property>
       <property>
            <name>yarn.nodemanager.vmem-pmem-ratio</name>
            <value>2.1</value>
       </property>
       <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>8182</value>
       </property>
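
   With XML fragments this long, an unbalanced <property> block is a common paste error. A crude portable check (plain grep, no XML tooling assumed) on a sample file — point it at the real yarn-site.xml on the servers:

   ```shell
   # Count opening vs closing property tags; a mismatch means a paste error.
   # Demonstrated here on a temp file rather than the live config.
   cat > /tmp/conf-check.xml <<'EOF'
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
   EOF
   open=$(grep -c '<property>' /tmp/conf-check.xml)
   close=$(grep -c '</property>' /tmp/conf-check.xml)
   if [ "$open" -eq "$close" ]; then
     echo "balanced ($open property blocks)"
   else
     echo "MISMATCH: $open opening vs $close closing tags"
   fi
   ```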
    

    zookeeper deployment##

    1. Enter the zookeeper configuration directory (cd /opt/cdh/zookeeper/conf)
      • cp zoo_sample.cfg zoo.cfg
      • vim zoo.cfg
    dataDir=/opt/cdh/zookeeper-env
    dataLogDir=/opt/cdh/zookeeper-env/logs
    
    server.1=master:2888:3888
    server.2=slave1:2888:3888
    server.3=slave2:2888:3888
    

    2. Configure each server's zookeeper ID
    + Go into dataDir on every server; each server gets a different ID. The example below is echo 1; the other servers use echo 2 and echo 3
    cd /opt/cdh/zookeeper-env
    echo 1 > myid
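
    The myid values must line up with the server.N entries in zoo.cfg. A sketch that derives the ID from the hostname — run on each node; here the hostname and a local data directory are stubbed so the logic can be exercised in isolation:

    ```shell
    # Map hostname -> myid according to the server.N lines in zoo.cfg.
    # On a real node use host=$(hostname) and datadir=/opt/cdh/zookeeper-env.
    host=master
    datadir=./zk-env-demo
    mkdir -p "$datadir"
    case "$host" in
      master) echo 1 > "$datadir/myid" ;;
      slave1) echo 2 > "$datadir/myid" ;;
      slave2) echo 3 > "$datadir/myid" ;;
      *) echo "unknown host: $host" ;;
    esac
    cat "$datadir/myid"
    ```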

    hbase deployment##

    1. Enter the hbase configuration directory (cd /opt/cdh/hbase/conf)
      1. vim hbase-env.sh
    	export JAVA_HOME=/opt/cdh/jdk1.8.0_144
    	export HBASE_CLASSPATH=/opt/cdh/hbase/conf
    	export HBASE_MANAGES_ZK=false        # false: HBase does not manage its own ZooKeeper; it uses the standalone ZooKeeper cluster set up above.
    	export HBASE_HOME=/opt/cdh/hbase
    	export HADOOP_HOME=/opt/cdh/hadoop
    	export HBASE_LOG_DIR=/opt/cdh/hbase-env/logs    # HBase log directory
    
    2. vim hbase-site.xml
    
       <property>
         <name>hbase.rootdir</name>
         <value>hdfs://master:9000/hbase</value>
        </property>
        <property>
         <name>hbase.cluster.distributed</name>
         <value>true</value>
        </property>
        <property>
        <name>hbase.master</name>
        <value>master:60000</value>
        </property>
        <property>
         <name>hbase.zookeeper.quorum</name>
         <value>slave1,slave2</value>
        </property>
    
    3. vim regionservers
    
    	slave1
    	slave2
    

    Starting the cluster##

    1. Start hadoop (on the master only)
      1. /opt/cdh/hadoop/bin/hadoop namenode -format
      2. /opt/cdh/hadoop/sbin/start-all.sh
    2. Start zookeeper: start it on the slave servers first, then start zookeeper on the master
      1. /opt/cdh/zookeeper/bin/zkServer.sh start
    3. Start hbase (on the master only)
      1. /opt/cdh/hbase/bin/start-hbase.sh
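
    Once everything is up, jps (shipped with the JDK) should show the expected daemons on each node. A small helper makes that check scriptable; the daemon names are the stock ones for this Hadoop/HBase layout, and the demo below feeds in canned output so the logic is visible without a live cluster:

    ```shell
    # check_daemons: succeed only if every expected daemon name appears in the
    # supplied jps output. On a real master node call it as:
    #   check_daemons "$(jps)" NameNode ResourceManager QuorumPeerMain HMaster
    # and on a slave:
    #   check_daemons "$(jps)" DataNode NodeManager QuorumPeerMain HRegionServer
    check_daemons() {
      out=$1; shift
      for d in "$@"; do
        case "$out" in
          *"$d"*) ;;
          *) echo "missing: $d"; return 1 ;;
        esac
      done
      echo "all expected daemons running"
    }
    # Demo with canned jps output instead of a running cluster:
    check_daemons "1234 NameNode
    5678 ResourceManager" NameNode ResourceManager
    ```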
  Original post: https://www.cnblogs.com/fengzzi/p/10033008.html