RabbitMQ Usage (4)

     

    In the previous tutorial we built a simple logging system. We were able to broadcast log messages to many receivers.

    In this tutorial we're going to add a feature to it - we're going to make it possible to subscribe only to a subset of the messages. For example, we will be able to direct only critical error messages to the log file (to save disk space), while still being able to print all of the log messages on the console.

    Bindings

    In previous examples we were already creating bindings. You may recall code like:

    channel.queue_bind(exchange=exchange_name,
                       queue=queue_name)

    A binding is a relationship between an exchange and a queue. This can be simply read as: the queue is interested in messages from this exchange.

    Bindings can take an extra routing_key parameter. To avoid confusion with a basic_publish parameter we're going to call it a binding key. This is how we could create a binding with a key:

    channel.queue_bind(exchange=exchange_name,
                       queue=queue_name,
                       routing_key='black')

    The meaning of a binding key depends on the exchange type. The fanout exchange, which we used previously, simply ignores its value.
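
    To see this concretely, here is a minimal sketch (the exchange name demo_fanout is made up for illustration): with a fanout exchange, the routing_key passed to queue_bind has no effect on routing.

    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    channel.exchange_declare(exchange='demo_fanout', exchange_type='fanout')
    result = channel.queue_declare(queue='', exclusive=True)
    queue_name = result.method.queue

    # A fanout exchange silently ignores this binding key...
    channel.queue_bind(exchange='demo_fanout', queue=queue_name,
                       routing_key='black')

    # ...so this message is still delivered to the queue, even though
    # its routing key does not match the binding key above.
    channel.basic_publish(exchange='demo_fanout', routing_key='anything',
                          body='still delivered')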

    Direct exchange

    Our logging system from the previous tutorial broadcasts all messages to all consumers. We want to extend that to allow filtering messages based on their severity. For example, we may want the script which writes log messages to disk to receive only critical errors, and not waste disk space on warning or info log messages.

    We were using a fanout exchange, which doesn't give us too much flexibility - it's only capable of mindless broadcasting.

    We will use a direct exchange instead. The routing algorithm behind a direct exchange is simple - a message goes to the queues whose binding key exactly matches the routing key of the message.

    To illustrate that, consider the following setup: a direct exchange X with two queues bound to it. The first queue is bound with binding key orange, and the second has two bindings, one with binding key black and the other one with green.

    In such a setup a message published to the exchange with a routing key orange will be routed to queue Q1. Messages with a routing key of black or green will go to Q2. All other messages will be discarded.
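
    As a sketch of that setup (the queue names Q1 and Q2 and the exchange name X are taken from the illustration; in practice you would pick real names):

    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    channel.exchange_declare(exchange='X', exchange_type='direct')

    # Q1 has a single binding, with key 'orange'.
    channel.queue_declare(queue='Q1')
    channel.queue_bind(exchange='X', queue='Q1', routing_key='orange')

    # Q2 has two bindings, 'black' and 'green'.
    channel.queue_declare(queue='Q2')
    channel.queue_bind(exchange='X', queue='Q2', routing_key='black')
    channel.queue_bind(exchange='X', queue='Q2', routing_key='green')

    channel.basic_publish(exchange='X', routing_key='orange', body='to Q1')
    channel.basic_publish(exchange='X', routing_key='black', body='to Q2')
    # 'blue' matches no binding, so the exchange discards this message.
    channel.basic_publish(exchange='X', routing_key='blue', body='dropped')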

    Multiple bindings

    It is perfectly legal to bind multiple queues with the same binding key. In our example we could add a binding between X and Q1 with binding key black. In that case, the direct exchange will behave like fanout and will broadcast the message to all the matching queues. A message with routing key black will be delivered to both Q1 and Q2.
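
    Continuing the sketch above (same connection, exchange X and queues Q1 and Q2), a single extra queue_bind call is enough:

    # With this additional binding, routing key 'black' now matches
    # both Q1 and Q2, so a 'black' message is delivered to both.
    channel.queue_bind(exchange='X', queue='Q1', routing_key='black')
    channel.basic_publish(exchange='X', routing_key='black',
                          body='to Q1 and Q2')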

    Emitting logs

    We'll use this model for our logging system. Instead of fanout we'll send messages to a direct exchange. We will supply the log severity as a routing key. That way the receiving script will be able to select the severity it wants to receive. Let's focus on emitting logs first.

    Like always we need to create an exchange first:

    channel.exchange_declare(exchange='direct_logs',
                             exchange_type='direct')

    And we're ready to send a message:

    channel.basic_publish(exchange='direct_logs',
                          routing_key=severity,
                          body=message)

    To simplify things we will assume that 'severity' can be one of 'info', 'warning', 'error'.
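
    If you want to guard against typos in the severity, a small hypothetical helper (not part of the tutorial code) could normalize it before publishing:

    SEVERITIES = ('info', 'warning', 'error')

    def emit_log(channel, severity, message):
        # Fall back to 'info' for any severity outside the known set.
        if severity not in SEVERITIES:
            severity = 'info'
        channel.basic_publish(exchange='direct_logs',
                              routing_key=severity,
                              body=message)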

    Subscribing

    Receiving messages will work just like in the previous tutorial, with one exception - we're going to create a new binding for each severity we're interested in.

    result = channel.queue_declare(queue='', exclusive=True)
    queue_name = result.method.queue

    for severity in severities:
        channel.queue_bind(exchange='direct_logs',
                           queue=queue_name,
                           routing_key=severity)

    Putting it all together

    The code for emit_log_direct.py:

     

    #!/usr/bin/env python
    import pika
    import sys

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    channel.exchange_declare(exchange='direct_logs',
                             exchange_type='direct')

    severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
    message = ' '.join(sys.argv[2:]) or 'Hello World!'
    channel.basic_publish(exchange='direct_logs',
                          routing_key=severity,
                          body=message)
    print(" [x] Sent %r:%r" % (severity, message))
    connection.close()

    The code for receive_logs_direct.py:

     

    #!/usr/bin/env python
    import pika
    import sys

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    channel.exchange_declare(exchange='direct_logs',
                             exchange_type='direct')

    result = channel.queue_declare(queue='', exclusive=True)
    queue_name = result.method.queue

    severities = sys.argv[1:]
    if not severities:
        sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
        sys.exit(1)

    for severity in severities:
        channel.queue_bind(exchange='direct_logs',
                           queue=queue_name,
                           routing_key=severity)

    print(' [*] Waiting for logs. To exit press CTRL+C')

    def callback(ch, method, properties, body):
        print(" [x] %r:%r" % (method.routing_key, body))

    channel.basic_consume(queue=queue_name,
                          on_message_callback=callback,
                          auto_ack=True)

    channel.start_consuming()
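
    For example, to save only 'warning' and 'error' log messages to a file, and then emit an error message, you could run something like:

    python receive_logs_direct.py warning error > logs_from_rabbit.log
    python emit_log_direct.py error "Run. Run. Or it will explode."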