  • Feeding In Your Data from Ganglia to Graphite

    Getting your data into Graphite is very flexible. There are three main methods for sending data to Graphite: Plaintext, Pickle, and AMQP.

    It’s worth noting that data sent to Graphite is actually received by Carbon (or by a Carbon relay), which then manages the data. The Graphite web interface reads this data back out, either from the cache or straight off disk.

    Choosing the right transfer method depends on how you want to build the application or script that sends the data:

    • There are some tools and APIs which can help you get your data into Carbon.
    • For a singular script, or for test data, the plaintext protocol is the most straightforward method.
    • For sending large amounts of data, you’ll want to batch this data up and send it to Carbon’s pickle receiver.
    • Finally, Carbon can listen to a message bus, via AMQP.

    Existing tools and APIs

    The plaintext protocol

    The plaintext protocol is the most straightforward protocol supported by Carbon.

    The data sent must be in the following format: <metric path> <metric value> <metric timestamp>. Carbon will then help translate this line of text into a metric that the web interface and Whisper understand.

    On Unix, the nc program can be used to create a socket and send data to Carbon (by default, the plaintext receiver listens on port 2003):

    PORT=2003
    SERVER=graphite.your.org
    echo "local.random.diceroll 4 `date +%s`" | nc -q0 ${SERVER} ${PORT}
    

    The -q0 parameter instructs nc to close the socket once the data is sent. Without this option, some nc versions would keep the connection open.
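    The same thing can be done from a script. The following is a minimal Python sketch; the server name and the local.random.diceroll metric are just placeholders taken from the example above, and the helper names are my own.

```python
import socket
import time

def format_plaintext(path, value, timestamp):
    """Build one line of Carbon's plaintext protocol:
    '<metric path> <metric value> <metric timestamp>\\n'."""
    return "%s %s %d\n" % (path, value, timestamp)

def send_plaintext(server, port, path, value, timestamp=None):
    """Open a socket, send a single metric line, and close it
    (the equivalent of the nc -q0 invocation above)."""
    if timestamp is None:
        timestamp = int(time.time())
    line = format_plaintext(path, value, timestamp)
    sock = socket.create_connection((server, port))
    try:
        sock.sendall(line.encode("ascii"))
    finally:
        sock.close()

# send_plaintext("graphite.your.org", 2003, "local.random.diceroll", 4)
```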

    The pickle protocol

    The pickle protocol is a much more efficient take on the plaintext protocol, and supports sending batches of metrics to Carbon in one go.

    The general idea is that the pickled data forms a list of multi-level tuples:

    [(path, (timestamp, value)), ...]
    

    Once you’ve formed a list of sufficient size (don’t go too big!), send the data over a socket to Carbon’s pickle receiver (by default, port 2004). You’ll need to pack your pickled data into a packet containing a simple header:

    import pickle
    import struct

    payload = pickle.dumps(listOfMetricTuples, protocol=2)
    header = struct.pack("!L", len(payload))
    message = header + payload
    

    You would then send the message object through a network socket.
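    Putting the pieces together, a minimal sender might look like the sketch below. The server name and metric are assumptions; the framing (a 4-byte big-endian length header followed by the pickled payload) is exactly as described above.

```python
import pickle
import socket
import struct

def pack_metrics(metric_tuples):
    """Pickle a [(path, (timestamp, value)), ...] list and prepend
    the 4-byte big-endian length header Carbon expects."""
    # Protocol 2 keeps the payload readable by older (Python 2) carbon daemons.
    payload = pickle.dumps(metric_tuples, protocol=2)
    header = struct.pack("!L", len(payload))
    return header + payload

def send_pickled(server, port, metric_tuples):
    """Send one framed batch to Carbon's pickle receiver."""
    message = pack_metrics(metric_tuples)
    sock = socket.create_connection((server, port))
    try:
        sock.sendall(message)
    finally:
        sock.close()

# send_pickled("graphite.your.org", 2004,
#              [("local.random.diceroll", (1405700000, 4))])
```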

    Using AMQP

    When AMQP_METRIC_NAME_IN_BODY is set to True in your carbon.conf file, the message body should have the same format as the plaintext protocol, e.g. local.random.diceroll 4 <timestamp>. When AMQP_METRIC_NAME_IN_BODY is set to False, you should omit ‘local.random.diceroll’ from the body (the metric name is then taken from the message’s routing key).
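    As an illustration, here is a hedged sketch using the pika client. The exchange name "graphite" and the broker address are assumptions and must match the AMQP settings in your carbon.conf; the helper names are my own. The format_amqp_metric helper shows how the body changes with AMQP_METRIC_NAME_IN_BODY.

```python
import time

def format_amqp_metric(path, value, timestamp, name_in_body):
    """Return (routing_key, body) for one metric.

    With AMQP_METRIC_NAME_IN_BODY = True the body looks exactly like a
    plaintext-protocol line; with False the metric name travels only in
    the routing key and is omitted from the body.
    """
    if name_in_body:
        return (path, "%s %s %d" % (path, value, timestamp))
    return (path, "%s %d" % (value, timestamp))

def publish_metric(channel, exchange, path, value, timestamp=None,
                   name_in_body=True):
    """Publish one metric through an already-open pika channel."""
    if timestamp is None:
        timestamp = int(time.time())
    routing_key, body = format_amqp_metric(path, value, timestamp, name_in_body)
    channel.basic_publish(exchange=exchange, routing_key=routing_key, body=body)

# Example wiring (requires a running broker and the pika package):
# import pika
# conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
# publish_metric(conn.channel(), "graphite", "local.random.diceroll", 4)
```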

    The Carbon Daemons

    When we talk about “Carbon” we mean one or more of the various daemons that make up the storage backend of a Graphite installation. In simple installations, there is typically only one daemon, carbon-cache.py. This document gives a brief overview of what each daemon does and how you can use them to build a more sophisticated storage backend.

    All of the carbon daemons listen for time-series data and can accept it over a common set of protocols. However, they differ in what they do with the data once they receive it.

    carbon-cache.py

    carbon-cache.py accepts metrics over various protocols and writes them to disk as efficiently as possible. This requires caching metric values in RAM as they are received, and flushing them to disk on an interval using the underlying whisper library.

    carbon-cache.py requires some basic configuration files to run:

    carbon.conf
    The [cache] section tells carbon-cache.py what ports (2003/2004/7002), protocols (newline delimited, pickle) and transports (TCP/UDP) to listen on.
    storage-schemas.conf
    Defines a retention policy for incoming metrics based on regex patterns. This policy is passed to whisper when the .wsp file is pre-allocated, and dictates how long data is stored for.
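    For illustration, a minimal storage-schemas.conf might look like the fragment below. The section names and retention values are assumptions; the first pattern that matches a new metric wins.

```ini
[carbon]
pattern = ^carbon\.
retentions = 60:90d

[default]
pattern = .*
retentions = 60s:1d,5m:30d,1h:2y
```

    Here, Carbon’s own internal metrics keep one datapoint per minute for 90 days, while everything else is stored at one-minute resolution for a day, five-minute resolution for 30 days, and hourly resolution for two years.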

    As the number of incoming metrics increases, one carbon-cache.py instance may not be enough to handle the I/O load. To scale out, simply run multiple carbon-cache.py instances (on one or more machines) behind a carbon-aggregator.py or carbon-relay.py.

    Warning

    If clients connecting to the carbon-cache.py daemon are experiencing errors such as connection refused by the daemon, a common reason is a shortage of file descriptors.

    If you find messages such as the following in the console.log file:

    Could not accept new connection (EMFILE)

    or

    exceptions.IOError: [Errno 24] Too many open files: '/var/lib/graphite/whisper/systems/somehost/something.wsp'

    the number of files carbon-cache.py can open will need to be increased. Many systems default to a max of 1024 file descriptors. A value of 8192 or more may be necessary depending on how many clients are simultaneously connecting to the carbon-cache.py daemon.

    On Linux, the system-wide file descriptor maximum can be set via sysctl; per-process limits are set via ulimit. See your operating system distribution’s documentation for details on how to set these values.
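    As an illustration only, on a distribution using pam_limits, raising the per-process limit for a hypothetical carbon user might look like the following /etc/security/limits.conf fragment (the user name and the 8192 value are assumptions; check your distribution’s documentation):

```
# /etc/security/limits.conf (fragment)
carbon  soft  nofile  8192
carbon  hard  nofile  8192
```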

    How to integrate Ganglia 3.3.x+ and Graphite 0.9.x?

    As of Ganglia 3.3.x, the Graphite Integration plugin is built into gmetad. If configured properly, gmetad writes the data it gathers to RRDs and also sends the metrics on to Graphite.

    Now, you might ask why you would want to integrate with Graphite.
    Better graphs; more calculation options, such as standard deviation and moving average; and the many other tools that integrate with Graphite, such as Tattle or dynamic graphing tools, are all reasons you might want to.

    Why not simply go straight to Graphite and skip Ganglia?
    Well, in our case we needed Ganglia because we’re using HBase and Cloudera, and unfortunately Cloudera provides integration with Ganglia and, I believe, JMX. But if you have the option of going directly to Graphite, then go for it. The Graphite website provides pretty good instructions for the install.

    Finally, to integrate Ganglia 3.3.x+ with Graphite, simply add the following lines to your gmetad.conf file:
    carbon_server  server_name/ip
    carbon_port 2003
    graphite_prefix "ganglia"
    
    • carbon_port is not mandatory and can be omitted. The default port for Carbon is 2003, so if you haven’t changed it in your Graphite settings you can skip it.
    • graphite_prefix is the name that all your clusters and metrics will reside under; it’s just a directory structure.
    • You must restart gmetad service for changes to take effect.
      service gmetad restart
    • After restarting the service, monitor /var/log/messages for any errors.

    Carbon

    Graphite uses a stats collector / listener called Carbon. In a typical scenario Carbon listens on a TCP port, and clients report stats by connecting to it. Carbon stores the stats in the database (Whisper), which Graphite then uses to display and query the information.

    Given the characteristics above, it’s easy to see why using Carbon to collect our data might not be the ideal choice. Why?

    • Carbon requires leaving a TCP connection open. If the Carbon server, the network connection, or anything along the path breaks, it can not only stop the gathering of monitored data but also slow down our application.
    • Making sure a connection is ‘alive’ requires techniques such as connection pooling and is generally quite resource-intensive.
    • TCP has overhead that might not be necessary and slows things down.

    So a fire-and-forget mechanism is much better for this purpose.

  • Original article: https://www.cnblogs.com/sanquanfeng/p/3873078.html