  • NO.2 Installation and Configuration

    Check which JDKs are currently installed on the system:
    [root@Centos 桌面]# rpm -qa | grep java
    tzdata-java-2012j-1.el6.noarch
    java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64
    java-1.6.0-openjdk-1.6.0.0-1.50.1.11.5.el6_3.x86_64
     
    Uninstall all currently installed JDKs:
    [root@Centos 桌面]# rpm -e --nodeps tzdata-java-2012j-1.el6.noarch
    [root@Centos 桌面]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64
    [root@Centos 桌面]# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.50.1.11.5.el6_3.x86_64
     
    Re-check that removal is complete; no output means the uninstall is clean:
    [root@Centos 桌面]# rpm -qa | grep java
    [root@Centos 桌面]#
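    The three removals above can also be done in one pass. This is a hedged sketch, not part of the original transcript: java_pkgs is a made-up helper name, and it assumes an RPM-based system with root access.

    ```shell
    # Hedged sketch: remove every Java-related RPM in one pass (run as root
    # on an RPM-based system). java_pkgs is a hypothetical helper name, not
    # a standard command; --nodeps skips dependency checks as above.
    java_pkgs() { rpm -qa | grep -i -e java -e jdk; }
    # java_pkgs | xargs -r rpm -e --nodeps    # uncomment to actually remove
    ```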
     
     
     
    Install the JDK you downloaded yourself and configure the environment variables; for details see "linux学习/NO.7" (Linux study notes, No. 7).
     
     
    ······················································· JDK installation complete ······························································
     
    ······················································· Hadoop installation begins ··················································
     
    1. First unpack Hadoop, then configure hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml in its conf directory.
                1.1 Point Hadoop at the JDK in hadoop-env.sh
                ---------------------------------------------
                [root@Centos ~]# cd hadoop-1.2.1/
                [root@Centos hadoop-1.2.1]# cd conf
                [root@Centos conf]# vi hadoop-env.sh
                ---------------------------------------------
                Configuration:
                    export JAVA_HOME=/root/jdk1.8.0_65   
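                A bad JAVA_HOME is a common cause of startup failures, so it can be worth checking the path before writing it in. This is a hedged sketch: check_java_home is a hypothetical helper, and /root/jdk1.8.0_65 is simply the path used in this guide.

                ```shell
                # Hedged sketch: verify a JAVA_HOME candidate before writing it
                # into hadoop-env.sh (check_java_home is a hypothetical helper).
                check_java_home() { [ -x "$1/bin/java" ]; }
                # check_java_home /root/jdk1.8.0_65 || echo "not a JDK directory"
                ```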
     
                1.2 Set the HDFS address and port in core-site.xml
                ------------------------------------------------
                [root@Centos conf]# vi core-site.xml
                ------------------------------------------------
                Configuration:
                    <configuration>
                        <property>
                            <name>fs.default.name</name>
                            <value>hdfs://localhost:9000</value>
                        </property>
                    </configuration>
     
                1.3 Set the HDFS replication factor in hdfs-site.xml
                -------------------------------------------------
                [root@Centos conf]# vi hdfs-site.xml
                -------------------------------------------------
                Configuration:
                    <configuration>
                        <property>
                        <name>dfs.replication</name>
                        <value>1</value>
                        </property>
                    </configuration>
     
                1.4 Set the JobTracker address in mapred-site.xml
                -------------------------------------------------
                [root@Centos conf]# vi mapred-site.xml
                --------------------------------------------
                Configuration:
                    <configuration>
                        <property>         
                          <name>mapred.job.tracker</name>
                          <value>localhost:9001</value>
                        </property>
                    </configuration>
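    All three site files above share the same one-property shape, so they can also be generated from the shell instead of edited in vi. This is a hedged sketch: write_conf is a hypothetical helper, not part of Hadoop, and the relative paths assume you run it from the hadoop-1.2.1 directory.

    ```shell
    # Hedged sketch: generate a one-property Hadoop site file.
    # write_conf is a hypothetical helper, not a Hadoop command.
    write_conf() {  # $1 = file, $2 = property name, $3 = value
    cat > "$1" <<EOF
    <configuration>
        <property>
            <name>$2</name>
            <value>$3</value>
        </property>
    </configuration>
    EOF
    }
    # write_conf conf/core-site.xml   fs.default.name     hdfs://localhost:9000
    # write_conf conf/hdfs-site.xml   dfs.replication     1
    # write_conf conf/mapred-site.xml mapred.job.tracker  localhost:9001
    ```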
     
    2. Passwordless SSH login
    --------------------------------------------------------------------
    [root@Centos conf]# cd /root
    [root@Centos ~]# ssh-keygen -t rsa
    Output:
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    ed:48:64:29:62:37:c1:e9:3d:84:bf:ad:4e:50:5e:66 root@Centos
    The key's randomart image is:
    +--[ RSA 2048]----+
    |     ..o         |
    |      +...       |
    |    o.++= E      |
    |   . o.B+=       |
    |      . S+.      |
    |       o.o.      |
    |        o..      |
    |       ..        |
    |       ..        |
    +-----------------+
    [root@Centos ~]# cd .ssh
    [root@Centos .ssh]# ls
    id_rsa  id_rsa.pub
    [root@Centos .ssh]# cp id_rsa.pub  authorized_keys
    [root@Centos .ssh]# ls
    authorized_keys  id_rsa  id_rsa.pub
    [root@Centos .ssh]# ssh localhost
    The authenticity of host 'localhost (::1)' can't be established.
    RSA key fingerprint is 3f:84:db:2f:53:a9:09:a6:61:a2:3a:82:80:6c:af:1a.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
    -------------------------------------------------------------------------------
        Verify passwordless login
    -------------------------------------------------------------------------------
    [root@Centos ~]# ssh localhost
    Last login: Sun Apr  3 23:19:51 2016 from localhost
     
    [root@Centos ~]# ssh localhost
    Last login: Sun Apr  3 23:20:12 2016 from localhost
     
    Connection to localhost closed.
    [root@Centos ~]#
    ----------------------------Passwordless SSH login configured successfully----------------------------
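    The interactive session above can be scripted. This is a hedged sketch: setup_key is a hypothetical helper, it appends to authorized_keys rather than copying over it (so existing keys survive), and it tightens permissions, which sshd checks when StrictModes is on.

    ```shell
    # Hedged sketch: the same key setup non-interactively. setup_key is a
    # hypothetical helper; $1 is the key directory (normally ~/.ssh).
    setup_key() {
        mkdir -p "$1" && chmod 700 "$1"
        [ -f "$1/id_rsa" ] || ssh-keygen -q -t rsa -N '' -f "$1/id_rsa"
        cat "$1/id_rsa.pub" >> "$1/authorized_keys"   # append rather than cp
        chmod 600 "$1/authorized_keys"   # sshd rejects loose permissions
    }
    # setup_key ~/.ssh && ssh localhost true
    ```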
     
     
    Format HDFS:
    [root@Centos ~]# cd  /root/hadoop-1.2.1/
    [root@Centos hadoop-1.2.1]#  bin/hadoop namenode -format
    Output:
    16/04/03 23:24:12 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = java.net.UnknownHostException: Centos: Centos: unknown error
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.2.1
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
    STARTUP_MSG:   java = 1.8.0_65
    ************************************************************/
    16/04/03 23:24:13 INFO util.GSet: Computing capacity for map BlocksMap
    16/04/03 23:24:13 INFO util.GSet: VM type       = 64-bit
    16/04/03 23:24:13 INFO util.GSet: 2.0% max memory = 1013645312
    16/04/03 23:24:13 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    16/04/03 23:24:13 INFO util.GSet: recommended=2097152, actual=2097152
    16/04/03 23:24:15 INFO namenode.FSNamesystem: fsOwner=root
    16/04/03 23:24:15 INFO namenode.FSNamesystem: supergroup=supergroup
    16/04/03 23:24:15 INFO namenode.FSNamesystem: isPermissionEnabled=true
    16/04/03 23:24:15 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    16/04/03 23:24:15 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    16/04/03 23:24:15 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
    16/04/03 23:24:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
    16/04/03 23:24:17 INFO common.Storage: Image file /tmp/hadoop-root/dfs/name/current/fsimage of size 110 bytes saved in 0 seconds.
    16/04/03 23:24:18 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-root/dfs/name/current/edits
    16/04/03 23:24:18 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-root/dfs/name/current/edits
    16/04/03 23:24:18 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
    16/04/03 23:24:18 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: Centos: Centos: unknown error
    ************************************************************/
    -----------------------------------------------------------------------------
    The NameNode format failed with "Centos: unknown error"; fix it with the following configuration
    --------------------------------------------------------------------------
    [root@Centos hadoop-1.2.1]# vi /etc/hosts
        Configuration:
        127.0.0.1   localhost Centos
    -------------------------------------------------------------------------
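    Before re-running the format, you can confirm that the hostname now resolves, which is exactly what the UnknownHostException above was complaining about. A hedged sketch; resolves() is a hypothetical helper built on getent:

    ```shell
    # Hedged sketch: check that a name resolves via the system resolver
    # (/etc/hosts included). resolves() is a hypothetical helper.
    resolves() { getent hosts "$1" >/dev/null; }
    # resolves "$(hostname)" || echo "add $(hostname) to /etc/hosts"
    ```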
    Format again
    --------------------------------------------------------------------------
    [root@Centos hadoop-1.2.1]# vi /etc/hosts
    [root@Centos hadoop-1.2.1]# bin/hadoop namenode -format
    (The successful format output was shown as a screenshot in the original post.)
     
    ---------------------------NameNode formatted successfully------------------------------
    Start Hadoop:
            Disable the firewall:      # service iptables stop
            Start the Hadoop cluster:  # start-all.sh
            Stop the Hadoop cluster:   # stop-all.sh
    Disable the firewall: # service iptables stop
    Start the Hadoop cluster:
    [root@Centos hadoop-1.2.1]# bin/start-all.sh
    Output:
    starting namenode, logging to /root/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-Centos.out
    localhost: starting datanode, logging to /root/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-Centos.out
    localhost: starting secondarynamenode, logging to /root/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-Centos.out
    starting jobtracker, logging to /root/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-Centos.out
    localhost: starting tasktracker, logging to /root/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-Centos.out
     
    Verify that the cluster started correctly: startup succeeded if all five daemons appear in the list.
    Check the running processes again:
    [root@Centos hadoop-1.2.1]# cd  /root/mahout-distribution-0.6/
    [root@Centos mahout-distribution-0.6]# jps
    30692 SecondaryNameNode
    30437 NameNode
    31382 Jps
    30903 TaskTracker
    30775 JobTracker
    30553 DataNode
    [root@Centos mahout-distribution-0.6]# jps
    30692 SecondaryNameNode
    31477 Jps
    30437 NameNode
    30903 TaskTracker
    30775 JobTracker
    30553 DataNode
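    Instead of eyeballing the jps listing, the check can be scripted. A hedged sketch; count_daemons is a hypothetical helper that counts how many of the five expected daemons appear:

    ```shell
    # Hedged sketch: count the five expected daemons in jps output
    # (count_daemons is a hypothetical helper, not a standard command).
    count_daemons() {  # $1 = the output of jps
        n=0
        for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
            if printf '%s\n' "$1" | grep -qw "$d"; then n=$((n+1)); fi
        done
        echo "$n"
    }
    # [ "$(count_daemons "$(jps)")" -eq 5 ] && echo "all five daemons are up"
    ```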
    [root@Centos mahout-distribution-0.6]# cd  /home/hadoop-1.2.1
    Stop the Hadoop cluster:
    [root@Centos hadoop-1.2.1]# bin/stop-all.sh
    stopping jobtracker
    localhost: stopping tasktracker
    stopping namenode
    localhost: stopping datanode
    localhost: stopping secondarynamenode
    [root@Centos hadoop-1.2.1]#
     
    ------------------------Hadoop pseudo-distributed installation successful------------------------
    Installing Mahout
                1. Unpack Mahout into the Hadoop directory:
                    [root@Centos hadoop-1.2.1]#  tar zxvf mahout-distribution-0.6.tar.gz
                2. Configure the environment variables:
                        export HADOOP_HOME=/root/hadoop-1.2.1
                        export HADOOP_CONF_DIR=/root/hadoop-1.2.1/conf
                        export MAHOUT_HOME=/root/hadoop-1.2.1/mahout-distribution-0.6
                        export MAHOUT_CONF_DIR=/root/hadoop-1.2.1/mahout-distribution-0.6/conf
                        export PATH=$PATH:$MAHOUT_HOME/conf:$MAHOUT_HOME/bin
                3. Test that Mahout starts.
    (The environment-variable setup was shown as a screenshot in the original post.)
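    As a sketch of step 3, the variables can be exported in the current shell and the layout checked before calling bin/mahout. The unpack location below is the one assumed by this guide and may differ on your machine.

    ```shell
    # Hedged sketch: export the Mahout variables and verify the layout
    # before running bin/mahout; paths are this guide's assumptions.
    export HADOOP_HOME=/root/hadoop-1.2.1
    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
    export MAHOUT_HOME=$HADOOP_HOME/mahout-distribution-0.6
    export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
    export PATH=$PATH:$MAHOUT_HOME/conf:$MAHOUT_HOME/bin
    # [ -x "$MAHOUT_HOME/bin/mahout" ] && mahout --help
    ```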
     

    --------------------------------------Mahout installation successful--------------------------------------

     
     
     
  • Original post: https://www.cnblogs.com/panweiwei/p/8127660.html