  • Hadoop series: deploying Hadoop 0.20.1 on Linux

    Two test virtual machines, both running RHEL 5.3 x64, with the latest JDK installed and passwordless SSH login set up between them.
    Server 1: 192.168.56.101 dev1
    Server 2: 192.168.56.102 dev2
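    Name resolution and passwordless SSH are assumed by the steps below; a minimal sketch of setting them up (assuming the root account runs Hadoop and /etc/hosts is used for name resolution) is to run the following on both machines:
    # echo "192.168.56.101 dev1" >> /etc/hosts
    # echo "192.168.56.102 dev2" >> /etc/hosts
    and then, on dev1:
    # ssh-keygen -t rsa
    # ssh-copy-id root@dev1
    # ssh-copy-id root@dev2
    Afterwards, "ssh dev2" from dev1 should log in without prompting for a password.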
    Download hadoop-0.20.1.tar.gz from http://apache.freelamp.com/hadoop/core/hadoop-0.20.1/ and copy it to the /usr/software/hadoop directory on dev1. Log in to dev1 and run the following commands:
    # cd /usr/software/hadoop
    # tar zxvf hadoop-0.20.1.tar.gz
    # cp -a hadoop-0.20.1 /usr/hadoop
    # cd /usr/hadoop/conf
    Edit the Hadoop environment configuration file hadoop-env.sh:
    # vi hadoop-env.sh
    Add the following line:
    export JAVA_HOME=/usr/java/jdk1.6.0_16
    Edit the main Hadoop configuration file core-site.xml:
    # vi core-site.xml
    Add the following content (the values can be adjusted as needed; the dfs.* properties below conventionally live in hdfs-site.xml in 0.20.x, but core-site.xml is read by every daemon, so placing them here also works):
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://dev1</value>
        <description>The name of the default file system. Either the literal string "local" or a host:port for DFS.</description>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>

      <property>
        <name>dfs.name.dir</name>
        <value>/usr/hadoop/filesystem/name</value>
        <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. </description>
      </property>

      <property>
        <name>dfs.data.dir</name>
        <value>/usr/hadoop/filesystem/data</value>
        <description>
          Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
        </description>
      </property>

      <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.</description>
      </property>
    </configuration>
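    The directories referenced above are on the local filesystem of whichever node uses them: dfs.name.dir on the namenode (dev1), dfs.data.dir on the datanodes (dev2), and hadoop.tmp.dir on both. Hadoop will generally create them on demand, but they can also be created up front (a sketch using the paths from this configuration):
    # mkdir -p /usr/hadoop/tmp /usr/hadoop/filesystem/name /usr/hadoop/filesystem/data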
    Edit Hadoop's mapred-site.xml file:
    # vi mapred-site.xml
    Add the following content:
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>dev1:9001</value>
        <description>
          The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and
          reduce task.
        </description>
      </property>

    </configuration>
    Edit the masters file (in 0.20.x this file actually lists the hosts that run the SecondaryNameNode; the NameNode itself runs on the node where start-dfs.sh is invoked):
    # vi masters
    Add the following line:
    dev1
    Edit the slaves file, which lists the datanode (and tasktracker) hosts:
    # vi slaves
    Add the following line:
    dev2

    Install Hadoop on dev2 following the same steps as above.
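    Since the configuration is identical on both nodes, one shortcut is to copy the finished installation from dev1 instead of repeating the steps (a sketch, assuming root SSH access and the same directory layout on dev2):
    # scp -r /usr/hadoop root@dev2:/usr/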
    Format the namenode on dev1:
    # cd /usr/hadoop/bin
    # ./hadoop namenode -format
    At this point, installation and configuration are complete.
    Run the following commands on dev1 to start Hadoop:
    # cd /usr/hadoop/bin
    # ./start-all.sh
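    A quick way to confirm which daemons came up is the JDK's jps tool (a sketch; with the masters/slaves files above, dev1 should show NameNode, SecondaryNameNode and JobTracker, while dev2 should show DataNode and TaskTracker):
    # /usr/java/jdk1.6.0_16/bin/jps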
    Once startup completes, you can run the following command to check the basic status of the cluster:
    # ./hadoop dfsadmin -report
    Or open http://192.168.56.101:50070/dfshealth.jsp in a browser to view the NameNode status page.
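    To verify that MapReduce works end to end, one option is to run one of the example jobs shipped with the release (a sketch; the jar name and paths assume the standard 0.20.1 tarball layout, and the job's progress can also be watched in the JobTracker web UI at http://192.168.56.101:50030/):
    # cd /usr/hadoop
    # bin/hadoop fs -mkdir input
    # bin/hadoop fs -put conf/*.xml input
    # bin/hadoop jar hadoop-0.20.1-examples.jar grep input output 'dfs[a-z.]+'
    # bin/hadoop fs -cat 'output/*'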
