  • Monitoring Hadoop and HBase with Ganglia

    Ganglia is open-source software for monitoring servers and clusters. It charts CPU load, memory, network, disk, and other metrics for a server or a whole cluster over the last hour, day, week, month, or year.

    Ganglia's strength lies in its hierarchy: a ganglia server can collect the data of every client on the same network segment through a single client, and a ganglia cluster server can collect the data of all its subordinate clients through a single server. This layered design means one server can manage tens of thousands of machines, something mrtg, nagios, and cacti cannot match.

    Ganglia is an open-source real-time monitoring project started at UC Berkeley. It measures thousands of nodes, providing cloud computing systems with static system data as well as key performance metrics.

    A Ganglia system consists of three main parts.

    Gmond: runs on every monitored machine; it collects and sends that host's metric data (processor speed, memory usage, and so on).

    Gmetad: runs on one host in the cluster; it either acts as the web server or communicates with the web server.

    Ganglia web frontend: displays Ganglia's metric charts.

    Hadoop and HBase have very good built-in support for Ganglia. With a little configuration we can have key Hadoop and HBase metrics charted on Ganglia's web console, which is a great help in understanding the internal state of both systems.

     How Ganglia works, and its default ports:

    gmetad ports: 8651, 8652, ...

    gmond port: 8649

    I. Installing ganglia:

    1. Set up the EPEL repository, then install everything on the master first:

    # yum install ganglia ganglia-gmetad ganglia-gmond ganglia-web -y

    2. Install ganglia-gmond on each slave:

    # yum install ganglia-gmond -y

    II. Configuring ganglia:

    1. vim /etc/ganglia/gmetad.conf  # collects the data from each node and stores it in RRDtool

        data_source "myhadoop" 10.0.10.60
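    For reference, a data_source line may also carry an optional polling interval (in seconds) and several gmond addresses for redundancy; gmetad falls back to the next address when the first stops answering. The second cluster and extra hostnames below are placeholders, not part of this setup:

```
# hypothetical example: poll every 15 s, with a redundant gmond per cluster
data_source "myhadoop" 15 10.0.10.60:8649 10.0.10.61:8649
data_source "another-cluster" host-a:8649
```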

    2. vim /etc/ganglia/gmond.conf

    
    

     cluster {
      name = "myhadoop"
      owner = "unspecified"
      latlong = "unspecified"
      url = "unspecified"
      }

      .............

    udp_send_channel {         # multicast or unicast
      #bind_hostname = yes # Highly recommended, soon to be default.
                           # This option tells gmond to use a source address
                           # that resolves to the machine's hostname.  Without
                           # this, the metrics may appear to come from any
                           # interface and the DNS names associated with
                           # those IPs will be used to create the RRDs.
      mcast_join = 10.0.10.60  # multicast mode only
      port = 8649
      ttl = 1
    }
    
    /* You can specify as many udp_recv_channels as you like as well. */
    udp_recv_channel {
      #mcast_join = 239.2.11.71
      port = 8649
      #bind = 239.2.11.71
      retry_bind = true
      # Size of the UDP buffer. If you are handling lots of metrics you really
      # should bump it up to e.g. 10MB or even higher.
      # buffer = 10485760
    }

    Change the default multicast address to the master's address, and comment out the two IPs in udp_recv_channel.
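    The two edits above can also be scripted. The awk sketch below is only an illustration: it works on a demo stand-in for the channel sections rather than touching /etc/ganglia/gmond.conf, and MASTER is an assumed address you would substitute for your own.

```shell
#!/bin/sh
# Sketch only: rewrites a demo copy, not /etc/ganglia/gmond.conf.
# MASTER is an assumption -- substitute your gmetad host's IP.
MASTER=10.0.10.60

# Stand-in for the stock gmond.conf channel sections.
cat > gmond.conf <<'EOF'
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8649
  ttl = 1
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
  bind = 239.2.11.71
  retry_bind = true
}
EOF

awk -v master="$MASTER" '
  /^udp_send_channel/ { chan = "send" }
  /^udp_recv_channel/ { chan = "recv" }
  /^}/                { chan = "" }
  # point the send channel at the master instead of the multicast group
  chan == "send" && /mcast_join/ { sub(/=.*/, "= " master) }
  # comment out the multicast lines in the receive channel
  chan == "recv" && /mcast_join|bind/ && !/retry_bind/ { $0 = "  #" $0 }
  { print }
' gmond.conf > gmond.conf.new

cat gmond.conf.new
```

    Reviewing gmond.conf.new before copying the changes into the real file keeps the edit reversible.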

    3. vim /etc/httpd/conf.d/ganglia.conf

    #
    # Ganglia monitoring system php web frontend
    #
    
    Alias /ganglia /usr/share/ganglia
    
    <Location /ganglia>
      Order deny,allow
      Allow from all
      Allow from 127.0.0.1
      Allow from ::1
      # Allow from .example.com
    </Location>

    Change Deny from all to Allow from all, otherwise you will get a permission error when opening the page.

    4. vim /etc/php.ini

       date.timezone = Asia/Shanghai

    5. Start ganglia:

    # ln -s /usr/local/rrdtool/bin/rrdtool /usr/bin/rrdtool

    # service httpd start

    # service gmond start 

    # service gmetad start

    6. Open http://10.0.10.60/ganglia in a browser.

    A few things to note:

    1. The data gmetad collects is stored under /var/lib/ganglia/rrds/.

    2. You can check whether data is being transmitted with:

    # tcpdump port 8649  

    3. Error logs are in /var/log/httpd/error_log.

    III. Configuring Hadoop and HBase

    1. Hadoop has two metrics configuration files, hadoop-metrics.properties and hadoop-metrics2.properties:

    hadoop-metrics.properties: for integrating Hadoop with Ganglia versions before 3.1 (the message format changed incompatibly between Ganglia 3.0 and 3.1).

    hadoop-metrics2.properties: for integrating Hadoop with Ganglia 3.1 and later (the file used in this article).

    # vim hadoop-metrics2.properties

    *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
    *.period=10
    *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
    *.sink.ganglia.period=10
    *.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
    *.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
    namenode.sink.ganglia.servers=master60:8649
    datanode.sink.ganglia.servers=slave1:8649,slave62:8649

    The complete file is as follows:

    # syntax: [prefix].[source|sink].[instance].[options]
    # See javadoc of package-info.java for org.apache.hadoop.metrics2 for details
    
    *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
    # default sampling period, in seconds
    *.period=10
    
    # The namenode-metrics.out will contain metrics from all context
    #namenode.sink.file.filename=namenode-metrics.out
    # Specifying a special sampling period for namenode:
    #namenode.sink.*.period=8
    
    #datanode.sink.file.filename=datanode-metrics.out
    
    #resourcemanager.sink.file.filename=resourcemanager-metrics.out
    
    #nodemanager.sink.file.filename=nodemanager-metrics.out
    
    #mrappmaster.sink.file.filename=mrappmaster-metrics.out
    
    #jobhistoryserver.sink.file.filename=jobhistoryserver-metrics.out
    
    # the following example split metrics of different
    # context to different sinks (in this case files)
    #nodemanager.sink.file_jvm.class=org.apache.hadoop.metrics2.sink.FileSink
    #nodemanager.sink.file_jvm.context=jvm
    #nodemanager.sink.file_jvm.filename=nodemanager-jvm-metrics.out
    #nodemanager.sink.file_mapred.class=org.apache.hadoop.metrics2.sink.FileSink
    #nodemanager.sink.file_mapred.context=mapred
    #nodemanager.sink.file_mapred.filename=nodemanager-mapred-metrics.out
    
    #
    # Below are for sending metrics to Ganglia
    #
    # for Ganglia 3.0 support
    # *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30
    #
    # for Ganglia 3.1 support
    *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
    
    *.sink.ganglia.period=10
    
    # default for supportsparse is false
    # *.sink.ganglia.supportsparse=true
    
    *.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
    *.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
    
    # Tag values to use for the ganglia prefix. If not defined no tags are used.
    # If '*' all tags are used. If specifiying multiple tags separate them with 
    # commas. Note that the last segment of the property name is the context name.
    #
    #*.sink.ganglia.tagsForPrefix.jvm=ProcesName
    #*.sink.ganglia.tagsForPrefix.dfs=
    #*.sink.ganglia.tagsForPrefix.rpc=
    #*.sink.ganglia.tagsForPrefix.mapred=
    
    namenode.sink.ganglia.servers=master60:8649
    
    datanode.sink.ganglia.servers=slave1:8649,slave62:8649
    
    #resourcemanager.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649
    
    #nodemanager.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649
    
    #mrappmaster.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649
    
    #jobhistoryserver.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649

    2. Restart Hadoop; on the primary NameNode run:

    # /usr/local/hadoop/sbin/stop-all.sh

    # /usr/local/hadoop/sbin/start-all.sh

    3. Restart gmond on every node:

    # service gmond restart 

    A detailed reference for the gmond.conf parameters: http://book.2cto.com/201309/32329.html

    IV. Monitoring Spark

    The sink class that supports Ganglia: GangliaSink

    Because of license restrictions it is not included in the default build; if you need it, you have to build Spark yourself.

    Name     Default                                     Description
    class    org.apache.spark.metrics.sink.GangliaSink  Sink class
    host     NONE                                        Hostname or multicast group of the Ganglia server
    port     NONE                                        Port of the Ganglia server
    period   10                                          Polling period
    unit     seconds                                     Unit of the polling period
    ttl      1                                           TTL of messages sent by Ganglia
    mode     multicast                                   Ganglia network mode ('unicast' or 'multicast')
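    Putting those options together, a conf/metrics.properties fragment for unicast delivery to this article's master might look like the sketch below. It assumes a Spark build that includes the Ganglia sink (e.g. built with the spark-ganglia-lgpl profile); adjust host and port to your gmond:

```
# hypothetical metrics.properties fragment for GangliaSink
*.sink.ganglia.class=org.apache.spark.metrics.sink.GangliaSink
*.sink.ganglia.host=10.0.10.60
*.sink.ganglia.port=8649
*.sink.ganglia.period=10
*.sink.ganglia.unit=seconds
*.sink.ganglia.ttl=1
*.sink.ganglia.mode=unicast
```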

    Reference: http://huaxin.blog.51cto.com/903026/1841208

  • Original post: https://www.cnblogs.com/wjoyxt/p/5671221.html