  • CentOS 7: A Detailed Guide to Single-Node Hadoop Installation and Configuration

    I. Preparation

    1. Software versions

    OS: CentOS 7
    JDK: 1.8
    Hadoop: 2.6.0

    2. Disable the system firewall!

    [root@Master ~]# cd /tools/

    # Permanently disable the firewall (takes effect on the next boot)
    [root@Master tools]# systemctl disable firewalld

    # Stop the firewall immediately (for the current session only)
    [root@Master tools]# systemctl stop firewalld

    3. Set the hostname and map it to the host's IP
    [root@Master tools]# hostname
    Master

    [root@Master tools]# vi /etc/hostname
    Master

    [root@Master tools]# vi /etc/hosts
    192.168.0.3 Master
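    The hosts entry can also be added non-interactively. A small sketch — the IP and hostname are the ones used in this guide, so substitute your own:

    ```shell
    # Append the mapping only if it is not already present (idempotent).
    grep -q '192\.168\.0\.3 Master' /etc/hosts || echo '192.168.0.3 Master' >> /etc/hosts
    ```

    Re-running the line is safe; the grep guard prevents duplicate entries.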

    4. Set up passwordless SSH login
    ssh-keygen -t dsa

    Press Enter at each prompt to accept the default key path and an empty passphrase:
    [root@Master tools]# ssh-keygen -t dsa
    Generating public/private dsa key pair.
    Enter file in which to save the key (/root/.ssh/id_dsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_dsa.
    Your public key has been saved in /root/.ssh/id_dsa.pub.
    The key fingerprint is:
    SHA256:xKYiupj804dqQ2V/4kNa3SF0Iz5JPfQ4df1TYtk2c8s root@Master
    The key's randomart image is printed here (ASCII art omitted; the pattern varies per key).
    [root@Master tools]#
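    Generating the key pair alone is not enough for passwordless login: the public key must also be appended to authorized_keys on the target host — here the same machine, since start-all.sh later ssh-es to localhost. A minimal sketch; it generates an RSA key if none exists (recent OpenSSH releases reject DSA keys), so adjust the filenames if you keep the DSA key from above:

    ```shell
    # Authorize this host's own key so ssh to it needs no password.
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    [ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    ```

    After this, "ssh Master date" should run without a password prompt.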

    II. Install the Java environment
    1. Installing the JDK itself is not covered here; there are plenty of guides online.

    2. Verify the installation
    java -version

    If version information is printed, the JDK is installed correctly:
    [root@Master tools]# java -version
    java version "1.8.0_131"
    Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
    Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
    [root@Master tools]#
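    Whichever install route you take, both your shell and Hadoop need JAVA_HOME set. A typical addition to /etc/profile — the path below is the one used later in this guide, so substitute wherever your JDK actually lives — followed by "source /etc/profile":

    ```shell
    # /etc/profile additions -- adjust the path to your JDK install.
    export JAVA_HOME=/usr/java/jdk1.8.0_131
    export PATH=$JAVA_HOME/bin:$PATH
    ```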

    III. Set up the Hadoop environment
    [root@Master tools]# mv hadoop-2.6.0.tar.gz /usr/local/

    1. Change to /usr/local/
    [root@Master tools]# cd /usr/local/
    [root@Master local]# ll
    total 191416
    drwxr-xr-x. 2 root root 6 Apr 11 2018 bin
    drwxr-xr-x. 2 root root 6 Apr 11 2018 etc
    drwxr-xr-x. 2 root root 6 Apr 11 2018 games
    drwxr-xr-x 10 grafana grafana 183 Jul 31 10:52 grafana
    drwxr-xr-x 9 grafana grafana 158 Jul 31 10:08 grafana.bak
    -rw-r--r-- 1 root root 195257604 May 23 2016 hadoop-2.6.0.tar.gz
    drwxr-xr-x. 2 root root 6 Apr 11 2018 include
    drwxr-xr-x 6 root root 53 Oct 12 15:39 keepalived
    drwxrwxr-x 9 nagios nagios 4096 Oct 12 15:38 keepalived-1.4.2
    -rw-r--r-- 1 root root 738096 Jun 23 18:02 keepalived-1.4.2.tar.gz
    drwxr-xr-x. 2 root root 6 Apr 11 2018 lib
    drwxr-xr-x. 2 root root 6 Apr 11 2018 lib64
    drwxr-xr-x. 2 root root 6 Apr 11 2018 libexec
    drwxr-xr-x 5 root root 164 Sep 24 14:18 mongodb
    drwxr-xr-x 9 root root 94 Jul 28 16:33 nagios
    drwxr-xr-x 12 root root 198 Oct 12 15:23 nginx
    drwxrwxrwx 2 3434 3434 56 Jun 5 2019 node_exporter
    -rw------- 1 root root 3417 Jul 31 14:11 nohup.out
    drwxr-xr-x 5 prometheus prometheus 170 Jul 31 14:49 prometheus
    drwxr-xr-x. 2 root root 6 Apr 11 2018 sbin
    drwxr-xr-x. 5 root root 49 Jul 28 09:18 share
    drwxr-xr-x. 2 root root 6 Apr 11 2018 src
    [root@Master local]#

    2. Unpack the Hadoop tarball
    [root@Master local]# tar -zxvf hadoop-2.6.0.tar.gz

    3. Create the working directories
    Create the following directories under /usr/local/hadoop-2.6.0 by copying and running the commands below:

    [root@Master hadoop-2.6.0]# pwd
    /usr/local/hadoop-2.6.0
    [root@Master hadoop-2.6.0]# ll
    total 28
    drwxr-xr-x 2 20000 20000 194 Nov 14 2014 bin
    drwxr-xr-x 3 20000 20000 20 Nov 14 2014 etc
    drwxr-xr-x 2 20000 20000 106 Nov 14 2014 include
    drwxr-xr-x 3 20000 20000 20 Nov 14 2014 lib
    drwxr-xr-x 2 20000 20000 239 Nov 14 2014 libexec
    -rw-r--r-- 1 20000 20000 15429 Nov 14 2014 LICENSE.txt
    -rw-r--r-- 1 20000 20000 101 Nov 14 2014 NOTICE.txt
    -rw-r--r-- 1 20000 20000 1366 Nov 14 2014 README.txt
    drwxr-xr-x 2 20000 20000 4096 Nov 14 2014 sbin
    drwxr-xr-x 4 20000 20000 31 Nov 14 2014 share

    [root@Master hadoop-2.6.0]# mkdir /usr/local/hadoop-2.6.0/hadoop
    [root@Master hadoop-2.6.0]# mkdir /usr/local/hadoop-2.6.0/hadoop/tmp
    [root@Master hadoop-2.6.0]# mkdir /usr/local/hadoop-2.6.0/hadoop/var
    [root@Master hadoop-2.6.0]# mkdir /usr/local/hadoop-2.6.0/hadoop/dfs
    [root@Master hadoop-2.6.0]# mkdir /usr/local/hadoop-2.6.0/hadoop/dfs/name
    [root@Master hadoop-2.6.0]# mkdir /usr/local/hadoop-2.6.0/hadoop/dfs/data
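    Equivalently, the six commands collapse into a single mkdir -p call, which creates missing parents and ignores directories that already exist:

    ```shell
    # Same directory layout as above, in one command.
    BASE=/usr/local/hadoop-2.6.0/hadoop
    mkdir -p "$BASE"/tmp "$BASE"/var "$BASE"/dfs/name "$BASE"/dfs/data
    ```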

    4. Edit the configuration files under /usr/local/hadoop-2.6.0/etc/hadoop

    4.1 Change into that directory and list the files in it
    [root@Master hadoop]# cd /usr/local/hadoop-2.6.0/etc/hadoop
    [root@Master hadoop]# ll
    total 152
    -rw-r--r-- 1 20000 20000 4436 Nov 14 2014 capacity-scheduler.xml
    -rw-r--r-- 1 20000 20000 1335 Nov 14 2014 configuration.xsl
    -rw-r--r-- 1 20000 20000 318 Nov 14 2014 container-executor.cfg
    -rw-r--r-- 1 20000 20000 774 Nov 14 2014 core-site.xml
    -rw-r--r-- 1 20000 20000 3670 Nov 14 2014 hadoop-env.cmd
    -rw-r--r-- 1 20000 20000 4224 Nov 14 2014 hadoop-env.sh
    -rw-r--r-- 1 20000 20000 2598 Nov 14 2014 hadoop-metrics2.properties
    -rw-r--r-- 1 20000 20000 2490 Nov 14 2014 hadoop-metrics.properties
    -rw-r--r-- 1 20000 20000 9683 Nov 14 2014 hadoop-policy.xml
    -rw-r--r-- 1 20000 20000 775 Nov 14 2014 hdfs-site.xml
    -rw-r--r-- 1 20000 20000 1449 Nov 14 2014 httpfs-env.sh
    -rw-r--r-- 1 20000 20000 1657 Nov 14 2014 httpfs-log4j.properties
    -rw-r--r-- 1 20000 20000 21 Nov 14 2014 httpfs-signature.secret
    -rw-r--r-- 1 20000 20000 620 Nov 14 2014 httpfs-site.xml
    -rw-r--r-- 1 20000 20000 3523 Nov 14 2014 kms-acls.xml
    -rw-r--r-- 1 20000 20000 1325 Nov 14 2014 kms-env.sh
    -rw-r--r-- 1 20000 20000 1631 Nov 14 2014 kms-log4j.properties
    -rw-r--r-- 1 20000 20000 5511 Nov 14 2014 kms-site.xml
    -rw-r--r-- 1 20000 20000 11291 Nov 14 2014 log4j.properties
    -rw-r--r-- 1 20000 20000 938 Nov 14 2014 mapred-env.cmd
    -rw-r--r-- 1 20000 20000 1383 Nov 14 2014 mapred-env.sh
    -rw-r--r-- 1 20000 20000 4113 Nov 14 2014 mapred-queues.xml.template
    -rw-r--r-- 1 20000 20000 758 Nov 14 2014 mapred-site.xml.template
    -rw-r--r-- 1 20000 20000 10 Nov 14 2014 slaves
    -rw-r--r-- 1 20000 20000 2316 Nov 14 2014 ssl-client.xml.example
    -rw-r--r-- 1 20000 20000 2268 Nov 14 2014 ssl-server.xml.example
    -rw-r--r-- 1 20000 20000 2237 Nov 14 2014 yarn-env.cmd
    -rw-r--r-- 1 20000 20000 4567 Nov 14 2014 yarn-env.sh
    -rw-r--r-- 1 20000 20000 690 Nov 14 2014 yarn-site.xml
    [root@Master hadoop]#

    4.2 Edit core-site.xml
    [root@Master hadoop]# vi core-site.xml

    Add the following properties inside the <configuration> element:
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.6.0/hadoop/tmp</value>
      </property>

      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Master:9000</value>
      </property>
    </configuration>

    **Note: the host in hdfs://Master:9000 must match the hostname configured earlier.**

    4.3 Edit hadoop-env.sh
    [root@Master hadoop]# echo $JAVA_HOME
    /usr/java/jdk1.8.0_131

    [root@Master hadoop]# vi hadoop-env.sh

    Change:
    export JAVA_HOME=${JAVA_HOME}

    to:
    export JAVA_HOME=/usr/java/jdk1.8.0_131

    Note: use your own JDK path. Hadoop daemons are launched over ssh and do not inherit the interactive shell's environment, so ${JAVA_HOME} must be replaced with an explicit path; it can point at the system JDK or a dedicated one.
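    The edit can also be scripted instead of done in vi. A sed sketch, demonstrated on a scratch copy so it is safe to paste as-is; in practice point it at /usr/local/hadoop-2.6.0/etc/hadoop/hadoop-env.sh, and substitute your own JDK path:

    ```shell
    # Create a scratch copy with the stock line, then patch JAVA_HOME in place.
    printf 'export JAVA_HOME=${JAVA_HOME}\n' > /tmp/hadoop-env.sh
    sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_131|' /tmp/hadoop-env.sh
    grep '^export JAVA_HOME' /tmp/hadoop-env.sh
    ```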

    4.4 Edit hdfs-site.xml
    [root@Master hadoop]# vi hdfs-site.xml
    Add the following properties inside the <configuration> element:
    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop-2.6.0/hadoop/dfs/name</value>
      </property>

      <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop-2.6.0/hadoop/dfs/data</value>
      </property>

      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    (In Hadoop 2.x, dfs.name.dir and dfs.data.dir are deprecated aliases for dfs.namenode.name.dir and dfs.datanode.data.dir; both spellings still work.)

    4.5 Create and edit mapred-site.xml
    Hadoop reads mapred-site.xml, not the .template file, so copy the template first:

    [root@Master hadoop]# cp mapred-site.xml.template mapred-site.xml
    [root@Master hadoop]# vi mapred-site.xml

    Add the following properties inside the <configuration> element:

    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>Master:49001</value>
      </property>

      <property>
        <name>mapred.local.dir</name>
        <value>/usr/local/hadoop-2.6.0/hadoop/var</value>
      </property>

      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

    (mapred.job.tracker is an MRv1 setting and is ignored once mapreduce.framework.name is set to yarn; leaving it in is harmless.)

    4.6 Edit yarn-site.xml
    [root@Master hadoop]# vi yarn-site.xml
    Add the following properties inside the <configuration> element:
    <configuration>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Master</value>
      </property>

      <property>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
      </property>

      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
      </property>

      <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
      </property>

      <property>
        <name>yarn.resourcemanager.webapp.https.address</name>
        <value>${yarn.resourcemanager.hostname}:8090</value>
      </property>

      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
      </property>

      <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
      </property>

      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>

      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
      </property>

      <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
      </property>

      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
      </property>

      <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
      </property>
    </configuration>

    IV. Start Hadoop
    1. Change to /usr/local/hadoop-2.6.0/bin
    [root@Master hadoop]# cd /usr/local/hadoop-2.6.0/bin
    [root@Master bin]# pwd
    /usr/local/hadoop-2.6.0/bin
    [root@Master bin]# ll
    total 440
    -rwxr-xr-x 1 20000 20000 159183 Nov 14 2014 container-executor
    -rwxr-xr-x 1 20000 20000 5479 Nov 14 2014 hadoop
    -rwxr-xr-x 1 20000 20000 8298 Nov 14 2014 hadoop.cmd
    -rwxr-xr-x 1 20000 20000 11142 Nov 14 2014 hdfs
    -rwxr-xr-x 1 20000 20000 6923 Nov 14 2014 hdfs.cmd
    -rwxr-xr-x 1 20000 20000 5205 Nov 14 2014 mapred
    -rwxr-xr-x 1 20000 20000 5949 Nov 14 2014 mapred.cmd
    -rwxr-xr-x 1 20000 20000 1776 Nov 14 2014 rcc
    -rwxr-xr-x 1 20000 20000 201659 Nov 14 2014 test-container-executor
    -rwxr-xr-x 1 20000 20000 11380 Nov 14 2014 yarn
    -rwxr-xr-x 1 20000 20000 10895 Nov 14 2014 yarn.cmd
    [root@Master bin]#

    2. Format the NameNode
    (In Hadoop 2.x the hadoop namenode form is deprecated; ./hdfs namenode -format is equivalent and avoids the warning.)
    [root@Master bin]# ./hadoop namenode -format
    20/11/11 11:42:19 INFO namenode.NNConf: ACLs enabled? false
    20/11/11 11:42:19 INFO namenode.NNConf: XAttrs enabled? true
    20/11/11 11:42:19 INFO namenode.NNConf: Maximum size of an xattr: 16384
    20/11/11 11:42:19 INFO namenode.FSImage: Allocated new BlockPoolId: BP-163737534-192.168.0.3-1605066139267
    20/11/11 11:42:19 INFO common.Storage: Storage directory /usr/local/hadoop-2.6.0/hadoop/dfs/name has been successfully formatted.
    20/11/11 11:42:19 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    20/11/11 11:42:19 INFO util.ExitUtil: Exiting with status 0
    20/11/11 11:42:19 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at Master/192.168.0.3
    ************************************************************/
    [root@Master bin]#

    3. Start the daemons
    [root@Master bin]# pwd
    /usr/local/hadoop-2.6.0/bin
    [root@Master bin]# find / -name 'start-all.sh'
    /usr/local/hadoop-2.6.0/sbin/start-all.sh

    Run the start-all.sh script found above:
    [root@Master bin]# cd /usr/local/hadoop-2.6.0/sbin
    [root@Master sbin]# ll
    total 120
    -rwxr-xr-x 1 20000 20000 2752 Nov 14 2014 distribute-exclude.sh
    -rwxr-xr-x 1 20000 20000 6452 Nov 14 2014 hadoop-daemon.sh
    -rwxr-xr-x 1 20000 20000 1360 Nov 14 2014 hadoop-daemons.sh
    -rwxr-xr-x 1 20000 20000 1640 Nov 14 2014 hdfs-config.cmd
    -rwxr-xr-x 1 20000 20000 1427 Nov 14 2014 hdfs-config.sh
    -rwxr-xr-x 1 20000 20000 2291 Nov 14 2014 httpfs.sh
    -rwxr-xr-x 1 20000 20000 2059 Nov 14 2014 kms.sh
    -rwxr-xr-x 1 20000 20000 4080 Nov 14 2014 mr-jobhistory-daemon.sh
    -rwxr-xr-x 1 20000 20000 1648 Nov 14 2014 refresh-namenodes.sh
    -rwxr-xr-x 1 20000 20000 2145 Nov 14 2014 slaves.sh
    -rwxr-xr-x 1 20000 20000 1779 Nov 14 2014 start-all.cmd
    -rwxr-xr-x 1 20000 20000 1471 Nov 14 2014 start-all.sh
    -rwxr-xr-x 1 20000 20000 1128 Nov 14 2014 start-balancer.sh
    -rwxr-xr-x 1 20000 20000 1401 Nov 14 2014 start-dfs.cmd
    -rwxr-xr-x 1 20000 20000 3705 Nov 14 2014 start-dfs.sh
    -rwxr-xr-x 1 20000 20000 1357 Nov 14 2014 start-secure-dns.sh
    -rwxr-xr-x 1 20000 20000 1571 Nov 14 2014 start-yarn.cmd
    -rwxr-xr-x 1 20000 20000 1347 Nov 14 2014 start-yarn.sh
    -rwxr-xr-x 1 20000 20000 1770 Nov 14 2014 stop-all.cmd
    -rwxr-xr-x 1 20000 20000 1462 Nov 14 2014 stop-all.sh
    -rwxr-xr-x 1 20000 20000 1179 Nov 14 2014 stop-balancer.sh
    -rwxr-xr-x 1 20000 20000 1455 Nov 14 2014 stop-dfs.cmd
    -rwxr-xr-x 1 20000 20000 3206 Nov 14 2014 stop-dfs.sh
    -rwxr-xr-x 1 20000 20000 1340 Nov 14 2014 stop-secure-dns.sh
    -rwxr-xr-x 1 20000 20000 1642 Nov 14 2014 stop-yarn.cmd
    -rwxr-xr-x 1 20000 20000 1340 Nov 14 2014 stop-yarn.sh
    -rwxr-xr-x 1 20000 20000 4295 Nov 14 2014 yarn-daemon.sh
    -rwxr-xr-x 1 20000 20000 1353 Nov 14 2014 yarn-daemons.sh
    [root@Master sbin]# ./start-all.sh

    The first run of start-all.sh asks you to confirm each host key; type yes at the prompts. Once everything is up, running jps (bundled with the JDK) should list the NameNode, SecondaryNameNode, DataNode, ResourceManager, and NodeManager processes.

    4. Verify access in a browser
    NameNode web UI: http://192.168.0.3:50070

    YARN ResourceManager web UI: http://192.168.0.3:8088

  • Original source: https://www.cnblogs.com/zhangkaimin/p/13958951.html