  • Hadoop 2.x Integration Manual [1] -- Hadoop 2.x Installation and Configuration

    Prerequisites

    1. This guide assumes prior experience installing and deploying a Hadoop 1.x cluster.
    2. As in 1.x, passwordless SSH must be configured and the firewall turned off, and a working Java environment must be in place.
    3. Keep the Hadoop cluster network-isolated; service requests and data requests should not reach the cluster directly.

    Installation Steps

    I. Core configuration files and settings (the configuration below is taken from the official documentation; an example snippet follows each table)

    • conf/core-site.xml
      Parameter | Value | Notes
      fs.defaultFS | NameNode URI | hdfs://host:port/
      io.file.buffer.size | 131072 | Size of read/write buffer used in SequenceFiles.
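
      A minimal core-site.xml sketch based on the two parameters above; the hostname namenode.example.com and port 8020 are placeholders, not values from this article:

      <configuration>
        <!-- URI of the default filesystem (the NameNode); placeholder host and port -->
        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://namenode.example.com:8020/</value>
        </property>
        <!-- 128 KB read/write buffer for SequenceFiles -->
        <property>
          <name>io.file.buffer.size</name>
          <value>131072</value>
        </property>
      </configuration>
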
    • conf/hdfs-site.xml
      • Configurations for NameNode:
        Parameter | Value | Notes
        dfs.namenode.name.dir | Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently. | If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
        dfs.namenode.hosts / dfs.namenode.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable DataNodes.
        dfs.blocksize | 268435456 | HDFS block size of 256 MB for large file-systems.
        dfs.namenode.handler.count | 100 | More NameNode server threads to handle RPCs from a large number of DataNodes.
      • Configurations for DataNode:
        Parameter | Value | Notes
        dfs.datanode.data.dir | Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.
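
      A matching hdfs-site.xml sketch; the local paths under /data/hadoop are placeholder assumptions, while the numeric values are the samples from the tables above:

      <configuration>
        <!-- NameNode: where the namespace and edit logs are persisted -->
        <property>
          <name>dfs.namenode.name.dir</name>
          <value>/data/hadoop/name</value>
        </property>
        <!-- 256 MB block size for large files -->
        <property>
          <name>dfs.blocksize</name>
          <value>268435456</value>
        </property>
        <!-- more NameNode handler threads for RPCs from many DataNodes -->
        <property>
          <name>dfs.namenode.handler.count</name>
          <value>100</value>
        </property>
        <!-- DataNode: comma-separated block storage directories, ideally on different disks -->
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>/data/hadoop/data1,/data/hadoop/data2</value>
        </property>
      </configuration>
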
    • conf/yarn-site.xml
      • Configurations for ResourceManager and NodeManager:
        Parameter | Value | Notes
        yarn.acl.enable | true / false | Enable ACLs? Defaults to false.
        yarn.admin.acl | Admin ACL | ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.
        yarn.log-aggregation-enable | false | Configuration to enable or disable log aggregation.
      • Configurations for ResourceManager:
        Parameter | Value | Notes
        yarn.resourcemanager.address | ResourceManager host:port for clients to submit jobs. | host:port
        yarn.resourcemanager.scheduler.address | ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. | host:port
        yarn.resourcemanager.resource-tracker.address | ResourceManager host:port for NodeManagers. | host:port
        yarn.resourcemanager.admin.address | ResourceManager host:port for administrative commands. | host:port
        yarn.resourcemanager.webapp.address | ResourceManager web-ui host:port. | host:port
        yarn.resourcemanager.scheduler.class | ResourceManager Scheduler class. | CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler
        yarn.scheduler.minimum-allocation-mb | Minimum limit of memory to allocate to each container request at the ResourceManager. | In MBs
        yarn.scheduler.maximum-allocation-mb | Maximum limit of memory to allocate to each container request at the ResourceManager. | In MBs
        yarn.resourcemanager.nodes.include-path / yarn.resourcemanager.nodes.exclude-path | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers.
      • Configurations for NodeManager:
        Parameter | Value | Notes
        yarn.nodemanager.resource.memory-mb | Resource, i.e. available physical memory, in MB, for the given NodeManager | Defines the total resources on the NodeManager made available to running containers.
        yarn.nodemanager.vmem-pmem-ratio | Maximum ratio by which virtual memory usage of tasks may exceed physical memory | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.
        yarn.nodemanager.local-dirs | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o.
        yarn.nodemanager.log-dirs | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o.
        yarn.nodemanager.log.retain-seconds | 10800 | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.
        yarn.nodemanager.remote-app-log-dir | /logs | HDFS directory where the application logs are moved on application completion. Needs appropriate permissions. Only applicable if log-aggregation is enabled.
        yarn.nodemanager.remote-app-log-dir-suffix | logs | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled.
        yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for MapReduce applications.
      • Configurations for History Server (Needs to be moved elsewhere):
        Parameter | Value | Notes
        yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregated logs before deleting them. -1 disables. Be careful: setting this too small will spam the name node.
        yarn.log-aggregation.retain-check-interval-seconds | -1 | Time between checks for aggregated log retention. If set to 0 or a negative value, the value is computed as one-tenth of the aggregated log retention time. Be careful: setting this too small will spam the name node.
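
      A yarn-site.xml sketch that pulls a few of the key parameters above together; the host rm.example.com, the port, and the memory figures are placeholder assumptions rather than recommendations from this article:

      <configuration>
        <!-- ResourceManager endpoint for client job submission; placeholder host:port -->
        <property>
          <name>yarn.resourcemanager.address</name>
          <value>rm.example.com:8032</value>
        </property>
        <!-- CapacityScheduler, the recommended scheduler class -->
        <property>
          <name>yarn.resourcemanager.scheduler.class</name>
          <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        </property>
        <!-- per-container memory limits at the ResourceManager, in MB (placeholder sizes) -->
        <property>
          <name>yarn.scheduler.minimum-allocation-mb</name>
          <value>1024</value>
        </property>
        <property>
          <name>yarn.scheduler.maximum-allocation-mb</name>
          <value>8192</value>
        </property>
        <!-- physical memory the NodeManager offers to containers, in MB (placeholder size) -->
        <property>
          <name>yarn.nodemanager.resource.memory-mb</name>
          <value>8192</value>
        </property>
        <!-- shuffle service required by MapReduce applications -->
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>
      </configuration>
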
    • conf/mapred-site.xml
      • Configurations for MapReduce Applications:
        Parameter | Value | Notes
        mapreduce.framework.name | yarn | Execution framework set to Hadoop YARN.
        mapreduce.map.memory.mb | 1536 | Larger resource limit for maps.
        mapreduce.map.java.opts | -Xmx1024M | Larger heap-size for child jvms of maps.
        mapreduce.reduce.memory.mb | 3072 | Larger resource limit for reduces.
        mapreduce.reduce.java.opts | -Xmx2560M | Larger heap-size for child jvms of reduces.
        mapreduce.task.io.sort.mb | 512 | Higher memory limit while sorting data, for efficiency.
        mapreduce.task.io.sort.factor | 100 | More streams merged at once while sorting files.
        mapreduce.reduce.shuffle.parallelcopies | 50 | Higher number of parallel copies run by reduces to fetch outputs from a very large number of maps.
      • Configurations for MapReduce JobHistory Server:
        Parameter | Value | Notes
        mapreduce.jobhistory.address | MapReduce JobHistory Server host:port | Default port is 10020.
        mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server Web UI host:port | Default port is 19888.
        mapreduce.jobhistory.intermediate-done-dir | /mr-history/tmp | Directory where history files are written by MapReduce jobs.
        mapreduce.jobhistory.done-dir | /mr-history/done | Directory where history files are managed by the MR JobHistory Server.
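
      Finally, a mapred-site.xml sketch using the sample values above; the JobHistory host jobhistory.example.com is a placeholder assumption:

      <configuration>
        <!-- run MapReduce jobs on YARN -->
        <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
        </property>
        <!-- container sizes and child JVM heaps for map and reduce tasks -->
        <property>
          <name>mapreduce.map.memory.mb</name>
          <value>1536</value>
        </property>
        <property>
          <name>mapreduce.map.java.opts</name>
          <value>-Xmx1024M</value>
        </property>
        <property>
          <name>mapreduce.reduce.memory.mb</name>
          <value>3072</value>
        </property>
        <property>
          <name>mapreduce.reduce.java.opts</name>
          <value>-Xmx2560M</value>
        </property>
        <!-- JobHistory Server endpoints (defaults: 10020 for RPC, 19888 for the web UI); placeholder host -->
        <property>
          <name>mapreduce.jobhistory.address</name>
          <value>jobhistory.example.com:10020</value>
        </property>
        <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>jobhistory.example.com:19888</value>
        </property>
      </configuration>
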

    • The slaves and masters files are also required. For easier operations, use machine names rather than IP addresses in them; if the LAN uses dynamic IPs and hostnames cannot be resolved reliably, also add matching entries to each node's hosts file.

    II. System commands:

    Start commands:

    Format the DFS filesystem:

    $ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>

    Start HDFS by launching the NameNode:

    $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode

    Start the DataNode instances on all slaves:

    $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode

    Start YARN by launching the ResourceManager:

    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

    Start the NodeManagers on all slaves:

    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager

    Start the WebAppProxy server:

    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start proxyserver --config $HADOOP_CONF_DIR

    Start the MapReduce JobHistory Server:

    $ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
    The stop commands mirror the start commands:
    $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
    $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh stop proxyserver --config $HADOOP_CONF_DIR
    $ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh stop historyserver --config $HADOOP_CONF_DIR

    III. Monitoring:

    HDFS status: http://<master-address>:50070

    Job history monitoring: http://<master-address>:19888

    Application and node monitoring (ResourceManager web UI, default port 8088): http://<master-address>:8088/cluster
