  • Python with RabbitMQ, Redis, Memcached, and SQLAlchemy

    Memcached

    Memcached is a high-performance, distributed memory object caching system intended to speed up dynamic web applications by relieving database load. It reduces the number of database reads by caching data and objects in RAM, making dynamic, database-driven sites faster. Internally, Memcached is a hashmap of key/value pairs. The daemon is written in C, but clients can be written in any language and talk to the daemon over the memcached protocol.

    Installing and Using Memcached

    Installing Memcached:

    wget http://memcached.org/latest
    tar -zxvf memcached-1.x.x.tar.gz
    cd memcached-1.x.x
    ./configure && make && make test && sudo make install

    Note: Memcached depends on libevent:
        yum install libevent-devel      # CentOS/RHEL
        apt-get install libevent-dev    # Debian/Ubuntu

    Starting Memcached

    memcached -d -m 10 -u root -l 192.168.1.1 -p 12000 -c 256 -P /tmp/memcached.pid

    Options:
        -d  run as a daemon
        -m  amount of memory allocated to Memcached, in MB
        -u  user to run Memcached as
        -l  server IP address to listen on
        -p  TCP port to listen on; a port above 1024 is recommended
        -c  maximum number of concurrent connections (default 1024); tune to your server's load
        -P  file to write the Memcached pid to
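    If you launch memcached from Python rather than a shell, the flags above can be assembled into an argument list for subprocess. This is a sketch using the same illustrative values as the command line above (the helper name is mine, not part of any library):

    ```python
    import subprocess

    def build_memcached_cmd(memory_mb=10, user="root", host="192.168.1.1",
                            port=12000, max_conns=256,
                            pidfile="/tmp/memcached.pid"):
        """Assemble the memcached command line described above."""
        return [
            "memcached",
            "-d",                      # daemonize
            "-m", str(memory_mb),      # memory limit in MB
            "-u", user,                # run as this user
            "-l", host,                # listen address
            "-p", str(port),           # listen port
            "-c", str(max_conns),      # max concurrent connections
            "-P", pidfile,             # pid file location
        ]

    cmd = build_memcached_cmd()
    # subprocess.run(cmd, check=True)  # uncomment on a host with memcached installed
    ```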

    Memcached Commands

    Storage commands:   set / add / replace / append / prepend / cas
    Retrieval commands: get / gets
    Other commands:     delete / stats ...
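    These commands are plain text lines on the wire. As a sketch of the protocol (no client library involved), a storage request such as set is a header line followed by a data block, each terminated by \r\n:

    ```python
    def encode_set(key, value, flags=0, exptime=0):
        """Build the raw memcached text-protocol bytes for a `set` command.

        Wire format:
            set <key> <flags> <exptime> <bytes>\r\n
            <data>\r\n
        """
        data = value.encode("utf-8")
        header = "set %s %d %d %d\r\n" % (key, flags, exptime, len(data))
        return header.encode("ascii") + data + b"\r\n"

    cmd = encode_set("foo", "bar")
    # b'set foo 0 0 3\r\nbar\r\n'
    ```

    Sending these bytes over a socket to a running server and reading back "STORED\r\n" is all a minimal client does; libraries like python-memcached add hashing across servers, pickling, and compression on top.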

    Using Memcached from Python

    Installing the API

    Python talks to Memcached through the python-memcached module.
    Download and install: https://pypi.python.org/pypi/python-memcached

    Common Operations

    import memcache

    mc = memcache.Client(['10.211.55.4:12000'], debug=True)
    mc.set("foo", "bar")
    ret = mc.get('foo')
    print(ret)
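    Beyond set/get, the client exposes the other storage commands listed earlier (mc.add, mc.replace, and so on). Since a live server may not be at hand, the sketch below mimics their differing semantics with a plain dict stand-in (the helper names are mine; the real calls are methods on the Client object):

    ```python
    # Dict-based stand-in illustrating memcached storage-command semantics.
    cache = {}

    def mc_set(key, val):
        """set: store unconditionally."""
        cache[key] = val
        return 1

    def mc_add(key, val):
        """add: store only if the key does NOT already exist."""
        if key in cache:
            return 0
        cache[key] = val
        return 1

    def mc_replace(key, val):
        """replace: store only if the key DOES already exist."""
        if key not in cache:
            return 0
        cache[key] = val
        return 1

    mc_set("k", "v1")            # stored
    mc_add("k", "v2")            # fails: key exists, value unchanged
    mc_replace("k", "v3")        # succeeds: key exists, value replaced
    mc_replace("missing", "x")   # fails: nothing stored
    ```

    The real client follows the same convention of returning nonzero on success and 0 on failure, which is why checking the return value of add is a common idiom for "insert if absent".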
    class Client(threading.local):
        """Object representing a pool of memcache servers.
    
        See L{memcache} for an overview.
    
        In all cases where a key is used, the key can be either:
            1. A simple hashable type (string, integer, etc.).
            2. A tuple of C{(hashvalue, key)}.  This is useful if you want
            to avoid making this module calculate a hash value.  You may
            prefer, for example, to keep all of a given user's objects on
            the same memcache server, so you could use the user's unique
            id as the hash value.
    
    
        @group Setup: __init__, set_servers, forget_dead_hosts,
        disconnect_all, debuglog
        @group Insertion: set, add, replace, set_multi
        @group Retrieval: get, get_multi
        @group Integers: incr, decr
        @group Removal: delete, delete_multi
        @sort: __init__, set_servers, forget_dead_hosts, disconnect_all,
               debuglog, set, set_multi, add, replace, get, get_multi,
               incr, decr, delete, delete_multi
        """
        _FLAG_PICKLE = 1 << 0
        _FLAG_INTEGER = 1 << 1
        _FLAG_LONG = 1 << 2
        _FLAG_COMPRESSED = 1 << 3
    
        _SERVER_RETRIES = 10  # how many times to try finding a free server.
    
        # exceptions for Client
        class MemcachedKeyError(Exception):
            pass
    
        class MemcachedKeyLengthError(MemcachedKeyError):
            pass
    
        class MemcachedKeyCharacterError(MemcachedKeyError):
            pass
    
        class MemcachedKeyNoneError(MemcachedKeyError):
            pass
    
        class MemcachedKeyTypeError(MemcachedKeyError):
            pass
    
        class MemcachedStringEncodingError(Exception):
            pass
    
        def __init__(self, servers, debug=0, pickleProtocol=0,
                     pickler=pickle.Pickler, unpickler=pickle.Unpickler,
                     compressor=zlib.compress, decompressor=zlib.decompress,
                     pload=None, pid=None,
                     server_max_key_length=None, server_max_value_length=None,
                     dead_retry=_DEAD_RETRY, socket_timeout=_SOCKET_TIMEOUT,
                     cache_cas=False, flush_on_reconnect=0, check_keys=True):
            """Create a new Client object with the given list of servers.
    
            @param servers: C{servers} is passed to L{set_servers}.
            @param debug: whether to display error messages when a server
            can't be contacted.
            @param pickleProtocol: number to mandate protocol used by
            (c)Pickle.
            @param pickler: optional override of default Pickler to allow
            subclassing.
            @param unpickler: optional override of default Unpickler to
            allow subclassing.
            @param pload: optional persistent_load function to call on
            pickle loading.  Useful for cPickle since subclassing isn't
            allowed.
            @param pid: optional persistent_id function to call on pickle
            storing.  Useful for cPickle since subclassing isn't allowed.
            @param dead_retry: number of seconds before retrying a
            blacklisted server. Default to 30 s.
            @param socket_timeout: timeout in seconds for all calls to a
            server. Defaults to 3 seconds.
            @param cache_cas: (default False) If true, cas operations will
            be cached.  WARNING: This cache is not expired internally, if
            you have a long-running process you will need to expire it
            manually via client.reset_cas(), or the cache can grow
            unlimited.
            @param server_max_key_length: (default SERVER_MAX_KEY_LENGTH)
            Data that is larger than this will not be sent to the server.
            @param server_max_value_length: (default
            SERVER_MAX_VALUE_LENGTH) Data that is larger than this will
            not be sent to the server.
            @param flush_on_reconnect: optional flag which prevents a
            scenario that can cause stale data to be read: If there's more
            than one memcached server and the connection to one is
            interrupted, keys that mapped to that server will get
            reassigned to another. If the first server comes back, those
            keys will map to it again. If it still has its data, get()s
            can read stale data that was overwritten on another
            server. This flag is off by default for backwards
            compatibility.
            @param check_keys: (default True) If True, the key is checked
            to ensure it is the correct length and composed of the right
            characters.
            """
            super(Client, self).__init__()
            self.debug = debug
            self.dead_retry = dead_retry
            self.socket_timeout = socket_timeout
            self.flush_on_reconnect = flush_on_reconnect
            self.set_servers(servers)
            self.stats = {}
            self.cache_cas = cache_cas
            self.reset_cas()
            self.do_check_key = check_keys
    
            # Allow users to modify pickling/unpickling behavior
            self.pickleProtocol = pickleProtocol
            self.pickler = pickler
            self.unpickler = unpickler
            self.compressor = compressor
            self.decompressor = decompressor
            self.persistent_load = pload
            self.persistent_id = pid
            self.server_max_key_length = server_max_key_length
            if self.server_max_key_length is None:
                self.server_max_key_length = SERVER_MAX_KEY_LENGTH
            self.server_max_value_length = server_max_value_length
            if self.server_max_value_length is None:
                self.server_max_value_length = SERVER_MAX_VALUE_LENGTH
    
            #  figure out the pickler style
            file = BytesIO()
            try:
                pickler = self.pickler(file, protocol=self.pickleProtocol)
                self.picklerIsKeyword = True
            except TypeError:
                self.picklerIsKeyword = False
    
        def _encode_key(self, key):
            if isinstance(key, tuple):
                if isinstance(key[1], six.text_type):
                    return (key[0], key[1].encode('utf8'))
            elif isinstance(key, six.text_type):
                return key.encode('utf8')
            return key
    
        def _encode_cmd(self, cmd, key, headers, noreply, *args):
            cmd_bytes = cmd.encode() if six.PY3 else cmd
            fullcmd = [cmd_bytes, b' ', key]
    
            if headers:
                if six.PY3:
                    headers = headers.encode()
                fullcmd.append(b' ')
                fullcmd.append(headers)
    
            if noreply:
                fullcmd.append(b' noreply')
    
            if args:
                fullcmd.append(b' ')
                fullcmd.extend(args)
            return b''.join(fullcmd)
    
        def reset_cas(self):
            """Reset the cas cache.
    
            This is only used if the Client() object was created with
            "cache_cas=True".  If used, this cache does not expire
            internally, so it can grow unbounded if you do not clear it
            yourself.
            """
            self.cas_ids = {}
    
        def set_servers(self, servers):
            """Set the pool of servers used by this client.
    
            @param servers: an array of servers.
            Servers can be passed in two forms:
                1. Strings of the form C{"host:port"}, which implies a
                default weight of 1.
                2. Tuples of the form C{("host:port", weight)}, where
                C{weight} is an integer weight value.
    
            """
            self.servers = [_Host(s, self.debug, dead_retry=self.dead_retry,
                                  socket_timeout=self.socket_timeout,
                                  flush_on_reconnect=self.flush_on_reconnect)
                            for s in servers]
            self._init_buckets()
    
        def get_stats(self, stat_args=None):
            """Get statistics from each of the servers.
    
            @param stat_args: Additional arguments to pass to the memcache
                "stats" command.
    
            @return: A list of tuples ( server_identifier,
                stats_dictionary ).  The dictionary contains a number of
                name/value pairs specifying the name of the status field
                and the string value associated with it.  The values are
                not converted from strings.
            """
            data = []
            for s in self.servers:
                if not s.connect():
                    continue
                if s.family == socket.AF_INET:
                    name = '%s:%s (%s)' % (s.ip, s.port, s.weight)
                elif s.family == socket.AF_INET6:
                    name = '[%s]:%s (%s)' % (s.ip, s.port, s.weight)
                else:
                    name = 'unix:%s (%s)' % (s.address, s.weight)
                if not stat_args:
                    s.send_cmd('stats')
                else:
                    s.send_cmd('stats ' + stat_args)
                serverData = {}
                data.append((name, serverData))
                readline = s.readline
                while 1:
                    line = readline()
                    if not line or line.strip() == 'END':
                        break
                    stats = line.split(' ', 2)
                    serverData[stats[1]] = stats[2]
    
            return(data)
    
        def get_slabs(self):
            data = []
            for s in self.servers:
                if not s.connect():
                    continue
                if s.family == socket.AF_INET:
                    name = '%s:%s (%s)' % (s.ip, s.port, s.weight)
                elif s.family == socket.AF_INET6:
                    name = '[%s]:%s (%s)' % (s.ip, s.port, s.weight)
                else:
                    name = 'unix:%s (%s)' % (s.address, s.weight)
                serverData = {}
                data.append((name, serverData))
                s.send_cmd('stats items')
                readline = s.readline
                while 1:
                    line = readline()
                    if not line or line.strip() == 'END':
                        break
                    item = line.split(' ', 2)
                    # 0 = STAT, 1 = ITEM, 2 = Value
                    slab = item[1].split(':', 2)
                    # 0 = items, 1 = Slab #, 2 = Name
                    if slab[1] not in serverData:
                        serverData[slab[1]] = {}
                    serverData[slab[1]][slab[2]] = item[2]
            return data
    
        def flush_all(self):
            """Expire all data in memcache servers that are reachable."""
            for s in self.servers:
                if not s.connect():
                    continue
                s.flush()
    
        def debuglog(self, str):
            if self.debug:
            sys.stderr.write("MemCached: %s\n" % str)
    
        def _statlog(self, func):
            if func not in self.stats:
                self.stats[func] = 1
            else:
                self.stats[func] += 1
    
        def forget_dead_hosts(self):
            """Reset every host in the pool to an "alive" state."""
            for s in self.servers:
                s.deaduntil = 0
    
        def _init_buckets(self):
            self.buckets = []
            for server in self.servers:
                for i in range(server.weight):
                    self.buckets.append(server)
    
        def _get_server(self, key):
            if isinstance(key, tuple):
                serverhash, key = key
            else:
                serverhash = serverHashFunction(key)
    
            if not self.buckets:
                return None, None
    
            for i in range(Client._SERVER_RETRIES):
                server = self.buckets[serverhash % len(self.buckets)]
                if server.connect():
                    # print("(using server %s)" % server,)
                    return server, key
                serverhash = serverHashFunction(str(serverhash) + str(i))
            return None, None
    
        def disconnect_all(self):
            for s in self.servers:
                s.close_socket()
    
        def delete_multi(self, keys, time=0, key_prefix='', noreply=False):
            """Delete multiple keys in the memcache doing just one query.
    
            >>> notset_keys = mc.set_multi({'a1' : 'val1', 'a2' : 'val2'})
            >>> mc.get_multi(['a1', 'a2']) == {'a1' : 'val1','a2' : 'val2'}
            >>> mc.delete_multi(['key1', 'key2'])
            >>> mc.get_multi(['key1', 'key2']) == {}
    
            This method is recommended over iterated regular L{delete}s as
            it reduces total latency, since your app doesn't have to wait
            for each round-trip of L{delete} before sending the next one.
    
            @param keys: An iterable of keys to clear
            @param time: number of seconds any subsequent set / update
            commands should fail. Defaults to 0 for no delay.
            @param key_prefix: Optional string to prepend to each key when
                sending to memcache.  See docs for L{get_multi} and
                L{set_multi}.
            @param noreply: optional parameter instructs the server to not send the
                reply.
            @return: 1 if no failure in communication with any memcacheds.
            @rtype: int
            """
    
            self._statlog('delete_multi')
    
            server_keys, prefixed_to_orig_key = self._map_and_prefix_keys(
                keys, key_prefix)
    
            # send out all requests on each server before reading anything
            dead_servers = []
    
            rc = 1
            for server in six.iterkeys(server_keys):
                bigcmd = []
                write = bigcmd.append
                extra = ' noreply' if noreply else ''
                if time is not None:
                    for key in server_keys[server]:  # These are mangled keys
                    write("delete %s %d%s\r\n" % (key, time, extra))
                else:
                    for key in server_keys[server]:  # These are mangled keys
                    write("delete %s%s\r\n" % (key, extra))
                try:
                    server.send_cmds(''.join(bigcmd))
                except socket.error as msg:
                    rc = 0
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
                    dead_servers.append(server)
    
            # if noreply, just return
            if noreply:
                return rc
    
            # if any servers died on the way, don't expect them to respond.
            for server in dead_servers:
                del server_keys[server]
    
            for server, keys in six.iteritems(server_keys):
                try:
                    for key in keys:
                        server.expect("DELETED")
                except socket.error as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
                    rc = 0
            return rc
    
        def delete(self, key, time=0, noreply=False):
            '''Deletes a key from the memcache.
    
            @return: Nonzero on success.
            @param time: number of seconds any subsequent set / update commands
        should fail. Defaults to 0 for no delay.
            @param noreply: optional parameter instructs the server to not send the
                reply.
            @rtype: int
            '''
            return self._deletetouch([b'DELETED', b'NOT_FOUND'], "delete", key,
                                     time, noreply)
    
        def touch(self, key, time=0, noreply=False):
            '''Updates the expiration time of a key in memcache.
    
            @return: Nonzero on success.
            @param time: Tells memcached the time which this value should
                expire, either as a delta number of seconds, or an absolute
                unix time-since-the-epoch value. See the memcached protocol
                docs section "Storage Commands" for more info on <exptime>. We
                default to 0 == cache forever.
            @param noreply: optional parameter instructs the server to not send the
                reply.
            @rtype: int
            '''
            return self._deletetouch([b'TOUCHED'], "touch", key, time, noreply)
    
        def _deletetouch(self, expected, cmd, key, time=0, noreply=False):
            key = self._encode_key(key)
            if self.do_check_key:
                self.check_key(key)
            server, key = self._get_server(key)
            if not server:
                return 0
            self._statlog(cmd)
            if time is not None and time != 0:
                fullcmd = self._encode_cmd(cmd, key, str(time), noreply)
            else:
                fullcmd = self._encode_cmd(cmd, key, None, noreply)
    
            try:
                server.send_cmd(fullcmd)
                if noreply:
                    return 1
                line = server.readline()
                if line and line.strip() in expected:
                    return 1
                self.debuglog('%s expected %s, got: %r'
                              % (cmd, ' or '.join(expected), line))
            except socket.error as msg:
                if isinstance(msg, tuple):
                    msg = msg[1]
                server.mark_dead(msg)
            return 0
    
        def incr(self, key, delta=1, noreply=False):
            """Increment value for C{key} by C{delta}
    
            Sends a command to the server to atomically increment the
            value for C{key} by C{delta}, or by 1 if C{delta} is
            unspecified.  Returns None if C{key} doesn't exist on server,
            otherwise it returns the new value after incrementing.
    
            Note that the value for C{key} must already exist in the
            memcache, and it must be the string representation of an
            integer.
    
            >>> mc.set("counter", "20")  # returns 1, indicating success
            >>> mc.incr("counter")
            >>> mc.incr("counter")
    
            Overflow on server is not checked.  Be aware of values
            approaching 2**32.  See L{decr}.
    
            @param delta: Integer amount to increment by (should be zero
            or greater).
    
            @param noreply: optional parameter instructs the server to not send the
            reply.
    
        @return: New value after incrementing, or None for noreply or error.
            @rtype: int
            """
            return self._incrdecr("incr", key, delta, noreply)
    
        def decr(self, key, delta=1, noreply=False):
            """Decrement value for C{key} by C{delta}
    
            Like L{incr}, but decrements.  Unlike L{incr}, underflow is
            checked and new values are capped at 0.  If server value is 1,
            a decrement of 2 returns 0, not -1.
    
            @param delta: Integer amount to decrement by (should be zero
            or greater).
    
            @param noreply: optional parameter instructs the server to not send the
            reply.
    
            @return: New value after decrementing,  or None for noreply or error.
            @rtype: int
            """
            return self._incrdecr("decr", key, delta, noreply)
    
        def _incrdecr(self, cmd, key, delta, noreply=False):
            key = self._encode_key(key)
            if self.do_check_key:
                self.check_key(key)
            server, key = self._get_server(key)
            if not server:
                return None
            self._statlog(cmd)
            fullcmd = self._encode_cmd(cmd, key, str(delta), noreply)
            try:
                server.send_cmd(fullcmd)
                if noreply:
                    return
                line = server.readline()
                if line is None or line.strip() == b'NOT_FOUND':
                    return None
                return int(line)
            except socket.error as msg:
                if isinstance(msg, tuple):
                    msg = msg[1]
                server.mark_dead(msg)
                return None
    
        def add(self, key, val, time=0, min_compress_len=0, noreply=False):
            '''Add new key with value.
    
            Like L{set}, but only stores in memcache if the key doesn't
            already exist.
    
            @return: Nonzero on success.
            @rtype: int
            '''
            return self._set("add", key, val, time, min_compress_len, noreply)
    
        def append(self, key, val, time=0, min_compress_len=0, noreply=False):
            '''Append the value to the end of the existing key's value.
    
            Only stores in memcache if key already exists.
            Also see L{prepend}.
    
            @return: Nonzero on success.
            @rtype: int
            '''
            return self._set("append", key, val, time, min_compress_len, noreply)
    
        def prepend(self, key, val, time=0, min_compress_len=0, noreply=False):
            '''Prepend the value to the beginning of the existing key's value.
    
            Only stores in memcache if key already exists.
            Also see L{append}.
    
            @return: Nonzero on success.
            @rtype: int
            '''
            return self._set("prepend", key, val, time, min_compress_len, noreply)
    
        def replace(self, key, val, time=0, min_compress_len=0, noreply=False):
            '''Replace existing key with value.
    
            Like L{set}, but only stores in memcache if the key already exists.
            The opposite of L{add}.
    
            @return: Nonzero on success.
            @rtype: int
            '''
            return self._set("replace", key, val, time, min_compress_len, noreply)
    
        def set(self, key, val, time=0, min_compress_len=0, noreply=False):
            '''Unconditionally sets a key to a given value in the memcache.
    
        The C{key} can optionally be a tuple, with the first element
            being the server hash value and the second being the key.  If
            you want to avoid making this module calculate a hash value.
            You may prefer, for example, to keep all of a given user's
            objects on the same memcache server, so you could use the
            user's unique id as the hash value.
    
            @return: Nonzero on success.
            @rtype: int
    
            @param time: Tells memcached the time which this value should
            expire, either as a delta number of seconds, or an absolute
            unix time-since-the-epoch value. See the memcached protocol
            docs section "Storage Commands" for more info on <exptime>. We
            default to 0 == cache forever.
    
            @param min_compress_len: The threshold length to kick in
            auto-compression of the value using the compressor
            routine. If the value being cached is a string, then the
            length of the string is measured, else if the value is an
            object, then the length of the pickle result is measured. If
        the resulting attempt at compression yields a larger string
        than the input, then it is discarded. For backwards
        compatibility, this parameter defaults to 0, indicating don't
            ever try to compress.
    
            @param noreply: optional parameter instructs the server to not
            send the reply.
            '''
            return self._set("set", key, val, time, min_compress_len, noreply)
    
        def cas(self, key, val, time=0, min_compress_len=0, noreply=False):
            '''Check and set (CAS)
    
            Sets a key to a given value in the memcache if it hasn't been
            altered since last fetched. (See L{gets}).
    
        The C{key} can optionally be a tuple, with the first element
            being the server hash value and the second being the key.  If
            you want to avoid making this module calculate a hash value.
            You may prefer, for example, to keep all of a given user's
            objects on the same memcache server, so you could use the
            user's unique id as the hash value.
    
            @return: Nonzero on success.
            @rtype: int
    
            @param time: Tells memcached the time which this value should
            expire, either as a delta number of seconds, or an absolute
            unix time-since-the-epoch value. See the memcached protocol
            docs section "Storage Commands" for more info on <exptime>. We
            default to 0 == cache forever.
    
            @param min_compress_len: The threshold length to kick in
            auto-compression of the value using the compressor
            routine. If the value being cached is a string, then the
            length of the string is measured, else if the value is an
            object, then the length of the pickle result is measured. If
        the resulting attempt at compression yields a larger string
        than the input, then it is discarded. For backwards
        compatibility, this parameter defaults to 0, indicating don't
            ever try to compress.
    
            @param noreply: optional parameter instructs the server to not
            send the reply.
            '''
            return self._set("cas", key, val, time, min_compress_len, noreply)
    
        def _map_and_prefix_keys(self, key_iterable, key_prefix):
            """Compute the mapping of server (_Host instance) -> list of keys to
            stuff onto that server, as well as the mapping of prefixed key
            -> original key.
            """
            key_prefix = self._encode_key(key_prefix)
            # Check it just once ...
            key_extra_len = len(key_prefix)
            if key_prefix and self.do_check_key:
                self.check_key(key_prefix)
    
            # server (_Host) -> list of unprefixed server keys in mapping
            server_keys = {}
    
            prefixed_to_orig_key = {}
            # build up a list for each server of all the keys we want.
            for orig_key in key_iterable:
                if isinstance(orig_key, tuple):
                    # Tuple of hashvalue, key ala _get_server(). Caller is
                    # essentially telling us what server to stuff this on.
                    # Ensure call to _get_server gets a Tuple as well.
                    serverhash, key = orig_key
    
                    key = self._encode_key(key)
                    if not isinstance(key, six.binary_type):
                        # set_multi supports int / long keys.
                        key = str(key)
                        if six.PY3:
                            key = key.encode('utf8')
                    bytes_orig_key = key
    
                    # Gotta pre-mangle key before hashing to a
                    # server. Returns the mangled key.
                    server, key = self._get_server(
                        (serverhash, key_prefix + key))
    
                    orig_key = orig_key[1]
                else:
                    key = self._encode_key(orig_key)
                    if not isinstance(key, six.binary_type):
                        # set_multi supports int / long keys.
                        key = str(key)
                        if six.PY3:
                            key = key.encode('utf8')
                    bytes_orig_key = key
                    server, key = self._get_server(key_prefix + key)
    
                #  alert when passed in key is None
                if orig_key is None:
                    self.check_key(orig_key, key_extra_len=key_extra_len)
    
                # Now check to make sure key length is proper ...
                if self.do_check_key:
                    self.check_key(bytes_orig_key, key_extra_len=key_extra_len)
    
                if not server:
                    continue
    
                if server not in server_keys:
                    server_keys[server] = []
                server_keys[server].append(key)
                prefixed_to_orig_key[key] = orig_key
    
            return (server_keys, prefixed_to_orig_key)
    
        def set_multi(self, mapping, time=0, key_prefix='', min_compress_len=0,
                      noreply=False):
            '''Sets multiple keys in the memcache doing just one query.
    
            >>> notset_keys = mc.set_multi({'key1' : 'val1', 'key2' : 'val2'})
            >>> mc.get_multi(['key1', 'key2']) == {'key1' : 'val1',
            ...                                    'key2' : 'val2'}
    
    
            This method is recommended over regular L{set} as it lowers
            the number of total packets flying around your network,
            reducing total latency, since your app doesn't have to wait
            for each round-trip of L{set} before sending the next one.
    
            @param mapping: A dict of key/value pairs to set.
    
            @param time: Tells memcached the time which this value should
                expire, either as a delta number of seconds, or an
                absolute unix time-since-the-epoch value. See the
                memcached protocol docs section "Storage Commands" for
                more info on <exptime>. We default to 0 == cache forever.
    
            @param key_prefix: Optional string to prepend to each key when
                sending to memcache. Allows you to efficiently stuff these
                keys into a pseudo-namespace in memcache:
    
                >>> notset_keys = mc.set_multi(
                ...     {'key1' : 'val1', 'key2' : 'val2'},
                ...     key_prefix='subspace_')
                >>> len(notset_keys) == 0
                True
                >>> mc.get_multi(['subspace_key1',
                ...               'subspace_key2']) == {'subspace_key1': 'val1',
                ...                                     'subspace_key2' : 'val2'}
                True
    
                Causes key 'subspace_key1' and 'subspace_key2' to be
                set. Useful in conjunction with a higher-level layer which
                applies namespaces to data in memcache.  In this case, the
                return result would be the list of notset original keys,
                prefix not applied.
    
            @param min_compress_len: The threshold length to kick in
                auto-compression of the value using the compressor
                routine. If the value being cached is a string, then the
                length of the string is measured, else if the value is an
                object, then the length of the pickle result is
                measured. If the resulting attempt at compression yeilds a
                larger string than the input, then it is discarded. For
                backwards compatability, this parameter defaults to 0,
                indicating don't ever try to compress.
    
            @param noreply: optional parameter instructs the server to not
                send the reply.
    
            @return: List of keys which failed to be stored [ memcache out
               of memory, etc. ].
    
            @rtype: list
            '''
            self._statlog('set_multi')
    
            server_keys, prefixed_to_orig_key = self._map_and_prefix_keys(
                six.iterkeys(mapping), key_prefix)
    
            # send out all requests on each server before reading anything
            dead_servers = []
            notstored = []  # original keys.
    
            for server in six.iterkeys(server_keys):
                bigcmd = []
                write = bigcmd.append
                try:
                    for key in server_keys[server]:  # These are mangled keys
                        store_info = self._val_to_store_info(
                            mapping[prefixed_to_orig_key[key]],
                            min_compress_len)
                        if store_info:
                            flags, len_val, val = store_info
                            headers = "%d %d %d" % (flags, time, len_val)
                        fullcmd = self._encode_cmd('set', key, headers,
                                                   noreply,
                                                   b'\r\n', val, b'\r\n')
                            write(fullcmd)
                        else:
                            notstored.append(prefixed_to_orig_key[key])
                    server.send_cmds(b''.join(bigcmd))
                except socket.error as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
                    dead_servers.append(server)
    
            # if noreply, just return early
            if noreply:
                return notstored
    
            # if any servers died on the way, don't expect them to respond.
            for server in dead_servers:
                del server_keys[server]
    
            #  short-circuit if there are no servers, just return all keys
            if not server_keys:
                return(mapping.keys())
    
            for server, keys in six.iteritems(server_keys):
                try:
                    for key in keys:
                        if server.readline() == 'STORED':
                            continue
                        else:
                            # un-mangle.
                            notstored.append(prefixed_to_orig_key[key])
                except (_Error, socket.error) as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
            return notstored
    
        def _val_to_store_info(self, val, min_compress_len):
            """Transform val to a storable representation.
    
            Returns a tuple of the flags, the length of the new value, and
            the new value itself.
            """
            flags = 0
            if isinstance(val, six.binary_type):
                pass
            elif isinstance(val, six.text_type):
                val = val.encode('utf-8')
            elif isinstance(val, int):
                flags |= Client._FLAG_INTEGER
                val = '%d' % val
                if six.PY3:
                    val = val.encode('ascii')
                # force no attempt to compress this silly string.
                min_compress_len = 0
            elif six.PY2 and isinstance(val, long):
                flags |= Client._FLAG_LONG
                val = str(val)
                if six.PY3:
                    val = val.encode('ascii')
                # force no attempt to compress this silly string.
                min_compress_len = 0
            else:
                flags |= Client._FLAG_PICKLE
                file = BytesIO()
                if self.picklerIsKeyword:
                    pickler = self.pickler(file, protocol=self.pickleProtocol)
                else:
                    pickler = self.pickler(file, self.pickleProtocol)
                if self.persistent_id:
                    pickler.persistent_id = self.persistent_id
                pickler.dump(val)
                val = file.getvalue()
    
            lv = len(val)
            # We should try to compress if min_compress_len > 0
            # and this string is longer than our min threshold.
            if min_compress_len and lv > min_compress_len:
                comp_val = self.compressor(val)
                # Only retain the result if the compression result is smaller
                # than the original.
                if len(comp_val) < lv:
                    flags |= Client._FLAG_COMPRESSED
                    val = comp_val
    
            #  silently do not store if value length exceeds maximum
            if (self.server_max_value_length != 0 and
                    len(val) > self.server_max_value_length):
                return(0)
    
            return (flags, len(val), val)
    
        def _set(self, cmd, key, val, time, min_compress_len=0, noreply=False):
            key = self._encode_key(key)
            if self.do_check_key:
                self.check_key(key)
            server, key = self._get_server(key)
            if not server:
                return 0
    
            def _unsafe_set():
                self._statlog(cmd)
    
                if cmd == 'cas' and key not in self.cas_ids:
                    return self._set('set', key, val, time, min_compress_len,
                                     noreply)
    
                store_info = self._val_to_store_info(val, min_compress_len)
                if not store_info:
                    return(0)
                flags, len_val, encoded_val = store_info
    
                if cmd == 'cas':
                    headers = ("%d %d %d %d"
                               % (flags, time, len_val, self.cas_ids[key]))
                else:
                    headers = "%d %d %d" % (flags, time, len_val)
            fullcmd = self._encode_cmd(cmd, key, headers, noreply,
                                       b'\r\n', encoded_val)
    
                try:
                    server.send_cmd(fullcmd)
                    if noreply:
                        return True
                    return(server.expect(b"STORED", raise_exception=True)
                           == b"STORED")
                except socket.error as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
                return 0
    
            try:
                return _unsafe_set()
            except _ConnectionDeadError:
                # retry once
                try:
                    if server._get_socket():
                        return _unsafe_set()
                except (_ConnectionDeadError, socket.error) as msg:
                    server.mark_dead(msg)
                return 0
    
        def _get(self, cmd, key):
            key = self._encode_key(key)
            if self.do_check_key:
                self.check_key(key)
            server, key = self._get_server(key)
            if not server:
                return None
    
            def _unsafe_get():
                self._statlog(cmd)
    
                try:
                    cmd_bytes = cmd.encode() if six.PY3 else cmd
                    fullcmd = b''.join((cmd_bytes, b' ', key))
                    server.send_cmd(fullcmd)
                    rkey = flags = rlen = cas_id = None
    
                    if cmd == 'gets':
                        rkey, flags, rlen, cas_id, = self._expect_cas_value(
                            server, raise_exception=True
                        )
                        if rkey and self.cache_cas:
                            self.cas_ids[rkey] = cas_id
                    else:
                        rkey, flags, rlen, = self._expectvalue(
                            server, raise_exception=True
                        )
    
                    if not rkey:
                        return None
                    try:
                        value = self._recv_value(server, flags, rlen)
                    finally:
                        server.expect(b"END", raise_exception=True)
                except (_Error, socket.error) as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
                    return None
    
                return value
    
            try:
                return _unsafe_get()
            except _ConnectionDeadError:
                # retry once
                try:
                    if server.connect():
                        return _unsafe_get()
                    return None
                except (_ConnectionDeadError, socket.error) as msg:
                    server.mark_dead(msg)
                return None
    
        def get(self, key):
            '''Retrieves a key from the memcache.
    
            @return: The value or None.
            '''
            return self._get('get', key)
    
        def gets(self, key):
            '''Retrieves a key from the memcache. Used in conjunction with 'cas'.
    
            @return: The value or None.
            '''
            return self._get('gets', key)
    
        def get_multi(self, keys, key_prefix=''):
            '''Retrieves multiple keys from the memcache doing just one query.
    
            >>> success = mc.set("foo", "bar")
            >>> success = mc.set("baz", 42)
            >>> mc.get_multi(["foo", "baz", "foobar"]) == {
            ...     "foo": "bar", "baz": 42
            ... }
            >>> mc.set_multi({'k1' : 1, 'k2' : 2}, key_prefix='pfx_') == []
    
            This looks up keys 'pfx_k1', 'pfx_k2', ... . Returned dict
            will just have unprefixed keys 'k1', 'k2'.
    
            >>> mc.get_multi(['k1', 'k2', 'nonexist'],
            ...              key_prefix='pfx_') == {'k1' : 1, 'k2' : 2}
    
            get_mult [ and L{set_multi} ] can take str()-ables like ints /
            longs as keys too. Such as your db pri key fields.  They're
            rotored through str() before being passed off to memcache,
            with or without the use of a key_prefix.  In this mode, the
            key_prefix could be a table name, and the key itself a db
            primary key number.
    
            >>> mc.set_multi({42: 'douglass adams',
            ...               46: 'and 2 just ahead of me'},
            ...              key_prefix='numkeys_') == []
            >>> mc.get_multi([46, 42], key_prefix='numkeys_') == {
            ...     42: 'douglass adams',
            ...     46: 'and 2 just ahead of me'
            ... }
    
            This method is recommended over regular L{get} as it lowers
            the number of total packets flying around your network,
            reducing total latency, since your app doesn't have to wait
            for each round-trip of L{get} before sending the next one.
    
            See also L{set_multi}.
    
            @param keys: An array of keys.
    
            @param key_prefix: A string to prefix each key when we
            communicate with memcache.  Facilitates pseudo-namespaces
            within memcache. Returned dictionary keys will not have this
            prefix.
    
            @return: A dictionary of key/value pairs that were
            available. If key_prefix was provided, the keys in the retured
            dictionary will not have it present.
            '''
    
            self._statlog('get_multi')
    
            server_keys, prefixed_to_orig_key = self._map_and_prefix_keys(
                keys, key_prefix)
    
            # send out all requests on each server before reading anything
            dead_servers = []
            for server in six.iterkeys(server_keys):
                try:
                    fullcmd = b"get " + b" ".join(server_keys[server])
                    server.send_cmd(fullcmd)
                except socket.error as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
                    dead_servers.append(server)
    
            # if any servers died on the way, don't expect them to respond.
            for server in dead_servers:
                del server_keys[server]
    
            retvals = {}
            for server in six.iterkeys(server_keys):
                try:
                    line = server.readline()
                    while line and line != b'END':
                        rkey, flags, rlen = self._expectvalue(server, line)
                        #  Bo Yang reports that this can sometimes be None
                        if rkey is not None:
                            val = self._recv_value(server, flags, rlen)
                            # un-prefix returned key.
                            retvals[prefixed_to_orig_key[rkey]] = val
                        line = server.readline()
                except (_Error, socket.error) as msg:
                    if isinstance(msg, tuple):
                        msg = msg[1]
                    server.mark_dead(msg)
            return retvals
    
        def _expect_cas_value(self, server, line=None, raise_exception=False):
            if not line:
                line = server.readline(raise_exception)
    
            if line and line[:5] == b'VALUE':
                resp, rkey, flags, len, cas_id = line.split()
                return (rkey, int(flags), int(len), int(cas_id))
            else:
                return (None, None, None, None)
    
        def _expectvalue(self, server, line=None, raise_exception=False):
            if not line:
                line = server.readline(raise_exception)
    
            if line and line[:5] == b'VALUE':
                resp, rkey, flags, len = line.split()
                flags = int(flags)
                rlen = int(len)
                return (rkey, flags, rlen)
            else:
                return (None, None, None)
    
        def _recv_value(self, server, flags, rlen):
            rlen += 2  # include \r\n
    
            buf = server.recv(rlen)
            if len(buf) != rlen:
                raise _Error("received %d bytes when expecting %d"
                             % (len(buf), rlen))
    
            if len(buf) == rlen:
                buf = buf[:-2]  # strip \r\n
    
    
            if flags & Client._FLAG_COMPRESSED:
                buf = self.decompressor(buf)
                flags &= ~Client._FLAG_COMPRESSED
    
            if flags == 0:
                # Bare string
                if six.PY3:
                    val = buf.decode('utf8')
                else:
                    val = buf
            elif flags & Client._FLAG_INTEGER:
                val = int(buf)
            elif flags & Client._FLAG_LONG:
                if six.PY3:
                    val = int(buf)
                else:
                    val = long(buf)
            elif flags & Client._FLAG_PICKLE:
                try:
                    file = BytesIO(buf)
                    unpickler = self.unpickler(file)
                    if self.persistent_load:
                        unpickler.persistent_load = self.persistent_load
                    val = unpickler.load()
                except Exception as e:
                    self.debuglog('Pickle error: %s\n' % e)
                    return None
            else:
                self.debuglog("unknown flags on get: %x\n" % flags)
                raise ValueError('Unknown flags on get: %x' % flags)
    
            return val
    
        def check_key(self, key, key_extra_len=0):
            """Checks sanity of key.
    
                Fails if:
    
                Key length is > SERVER_MAX_KEY_LENGTH (Raises MemcachedKeyLength).
                Contains control characters  (Raises MemcachedKeyCharacterError).
                Is not a string (Raises MemcachedStringEncodingError)
                Is an unicode string (Raises MemcachedStringEncodingError)
                Is not a string (Raises MemcachedKeyError)
                Is None (Raises MemcachedKeyError)
            """
            if isinstance(key, tuple):
                key = key[1]
            if key is None:
                raise Client.MemcachedKeyNoneError("Key is None")
            if key is '':
                if key_extra_len is 0:
                    raise Client.MemcachedKeyNoneError("Key is empty")
    
                #  key is empty but there is some other component to key
                return
    
            if not isinstance(key, six.binary_type):
                raise Client.MemcachedKeyTypeError("Key must be a binary string")
    
            if (self.server_max_key_length != 0 and
                    len(key) + key_extra_len > self.server_max_key_length):
                raise Client.MemcachedKeyLengthError(
                    "Key length is > %s" % self.server_max_key_length
                )
            if not valid_key_chars_re.match(key):
                raise Client.MemcachedKeyCharacterError(
                    "Control/space characters not allowed (key=%r)" % key)

    1. First use

    import memcache

    mc = memcache.Client(['10.211.55.4:12000'], debug=True)
    mc.set("foo", "bar")
    ret = mc.get('foo')
    print ret

    Ps: debug=True makes the client print error details when something goes wrong; remove the argument once the code goes to production.

    2、天生支持集群

    python-memcached模块原生支持集群操作,其原理是在内存维护一个主机列表,且集群中主机的权重值和主机在列表中重复出现的次数成正比

    1      主机    权重
    2     1.1.1.1   1
    3     1.1.1.2   2
    4     1.1.1.3   1
    5  
    6 那么在内存中主机列表为:
    7     host_list = ["1.1.1.1", "1.1.1.2", "1.1.1.2", "1.1.1.3", ]

    如果用户根据如果要在内存中创建一个键值对(如:k1 = "v1"),那么要执行一下步骤:

    • 根据算法将 k1 转换成一个数字
    • 将数字和主机列表长度求余数,得到一个值 N( 0 <= N < 列表长度 )
    • 在主机列表中根据 第2步得到的值为索引获取主机,例如:host_list[N]
    • 连接 将第3步中获取的主机,将 k1 = "v1" 放置在该服务器的内存中

    代码实现如下:

    1 mc = memcache.Client([('1.1.1.1:12000', 1), ('1.1.1.2:12000', 2), ('1.1.1.3:12000', 1)], debug=True)
    2  
    3 mc.set('k1', 'v1')
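    The four selection steps above can be sketched in plain Python. This is an illustration only: crc32 stands in for python-memcached's actual hash function, and the host addresses are the made-up ones from the table.

```python
import zlib

def build_host_list(weighted_hosts):
    """Expand (host, weight) pairs: each host appears `weight` times."""
    host_list = []
    for host, weight in weighted_hosts:
        host_list.extend([host] * weight)
    return host_list

def pick_server(host_list, key):
    """Hash the key to a number, then index the list by the remainder."""
    n = zlib.crc32(key.encode('utf-8')) % len(host_list)
    return host_list[n]

hosts = build_host_list([('1.1.1.1:12000', 1),
                         ('1.1.1.2:12000', 2),
                         ('1.1.1.3:12000', 1)])
server = pick_server(hosts, 'k1')   # the same key always maps to the same host
```

    Because the hash is deterministic, every client configured with the same weighted host list sends a given key to the same server.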

    3. add

    Add a key/value pair; if the key already exists, a repeated add fails.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)
    mc.add('k1', 'v1')
    # mc.add('k1', 'v2')  # fails: adding a key that already exists does not store

    4. replace

    replace modifies the value of an existing key; if the key does not exist, it fails.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)
    # If 'kkkk' exists in memcache, it is replaced; otherwise the call fails
    mc.replace('kkkk', '999')

    5. set and set_multi

    set          sets one key/value pair: creates the key if missing, modifies it otherwise
    set_multi    sets multiple key/value pairs with the same create-or-modify semantics

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)

    mc.set('key0', 'weibinf')

    mc.set_multi({'key1': 'val1', 'key2': 'val2'})

    6. delete and delete_multi

    delete          removes one specified key/value pair from Memcached
    delete_multi    removes multiple specified key/value pairs from Memcached

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)

    mc.delete('key0')
    mc.delete_multi(['key1', 'key2'])

    7. get and get_multi

    get          fetches one value
    get_multi    fetches multiple values in a single request

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)

    val = mc.get('key0')
    item_dict = mc.get_multi(["key1", "key2", "key3"])

    8. append and prepend

    append     appends content to the end of the specified key's value
    prepend    inserts content at the front of the specified key's value

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)
    # k1 = "v1"

    mc.append('k1', 'after')
    # k1 = "v1after"

    mc.prepend('k1', 'before')
    # k1 = "beforev1after"

    9. incr and decr

    incr    increments a value stored in Memcached by N (default 1)
    decr    decrements a value stored in Memcached by N (default 1)

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache

    mc = memcache.Client(['192.168.1.1:12000'], debug=True)
    mc.set('k1', '777')

    mc.incr('k1')
    # k1 = 778

    mc.incr('k1', 10)
    # k1 = 788

    mc.decr('k1')
    # k1 = 787

    mc.decr('k1', 10)
    # k1 = 777

    10. gets and cas

    Consider a shop's remaining stock count kept in memcache: product_count = 900.
    User A loads the page and reads product_count = 900.
    User B loads the page and reads product_count = 900.

    Both users then buy the item:

    User A writes product_count = 899
    User B writes product_count = 899

    The cached value is now wrong: two items were sold, yet the count only dropped to 899.
    Plain set and get behave exactly like this.

    To avoid the race, use gets and cas instead:

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import memcache
    mc = memcache.Client(['192.168.1.1:12000'], debug=True, cache_cas=True)

    v = mc.gets('product_count')
    # ...
    # If someone modified product_count between the gets and the cas,
    # the following write fails and raises, preventing the inconsistent update
    mc.cas('product_count', "899")

    Ps: Under the hood, every gets also fetches a version token from memcache. When cas writes the value back, it sends that token along, and memcache compares it with the current one: if they match, the write is accepted; if they do not, someone else touched the value between your gets and cas, so the write is rejected to keep the data consistent.
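    The version-token comparison can be illustrated without a server. The class below is a toy in-process model of gets/cas semantics (an illustration only, not the memcached wire protocol or python-memcached internals):

```python
class TinyCasStore:
    """Toy in-process model of gets/cas version checking."""

    def __init__(self):
        self._data = {}     # key -> (value, version)

    def set(self, key, value):
        _, version = self._data.get(key, (None, 0))
        self._data[key] = (value, version + 1)

    def gets(self, key):
        """Return (value, version); cas() must present this version."""
        return self._data.get(key, (None, 0))

    def cas(self, key, value, version):
        """Store only if `version` is still current; return success."""
        _, current = self._data.get(key, (None, 0))
        if version != current:
            return False        # someone wrote in between -> reject
        self._data[key] = (value, current + 1)
        return True

store = TinyCasStore()
store.set('product_count', 900)
val, ver = store.gets('product_count')          # a buyer reads 900
store.set('product_count', 899)                 # another buyer writes first
ok = store.cas('product_count', val - 1, ver)   # rejected: version moved on
```

    In the sketch the second cas returns False, which is exactly the failure the memcache client surfaces so the application can re-read and retry.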

    Is Memcached really obsolete?

    Redis

    Redis is a key-value store. Like Memcached, it keeps data in memory for speed, but it supports a richer set of value types: string, list, set, zset (sorted set), and hash. All of these support push/pop, add/remove, intersection, union, difference, and many more operations, and every operation is atomic. On top of that, Redis offers several kinds of sorting. Unlike Memcached, Redis periodically writes updated data to disk, or appends modifications to a log file, and on that basis it implements master-slave replication.

    Installing and using Redis

    wget http://download.redis.io/releases/redis-3.0.6.tar.gz
    tar xzf redis-3.0.6.tar.gz
    cd redis-3.0.6
    make
    make install

    Start the server

    redis-server

    Start the client

    redis-cli
    redis> set foo bar
    OK
    redis> get foo
    "bar"

    Python and Redis

    Installing the API

    sudo pip install redis
    or
    sudo easy_install redis
    or
    install from source

    See: https://github.com/WoLpH/redis-py

    Common operations

    The redis-py API can be grouped into:

    Connections
    Connection pools
    Operations
    String operations
    Hash operations
    List operations
    Set operations
    Sorted Set operations
    Pipelines
    Publish/subscribe

    1. Basic usage

    redis-py provides two classes, Redis and StrictRedis, that implement the Redis commands. StrictRedis implements most of the official commands using the official syntax; Redis is a subclass of StrictRedis kept for backward compatibility with older versions of redis-py.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    import redis
     
    r = redis.Redis(host='192.168.1.1', port=6379)
    r.set('foo', 'Bar')
    print r.get('foo')

    2. Connection pools

    redis-py uses a connection pool to manage all connections to a Redis server, avoiding the cost of creating and tearing down a connection on every request. By default, each Redis instance maintains its own pool; you can also create a pool explicitly and pass it to Redis, so that multiple Redis instances share one pool.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    import redis
     
    pool = redis.ConnectionPool(host='192.168.1.1', port=6379)
     
    r = redis.Redis(connection_pool=pool)
    r.set('foo', 'Bar')
    print r.get('foo')

    3. Operations

    String operations. A Redis string maps one name to one value in memory.

    set(name, value, ex=None, px=None, nx=False, xx=False)
    Set a value in Redis; by default the key is created if it does not exist and modified if it does.
    Parameters:
         ex: expiry time in seconds
         px: expiry time in milliseconds
         nx: if True, the set is performed only when name does not already exist
         xx: if True, the set is performed only when name already exists

    setnx(name, value)

    Set the value only when name does not exist (i.e. an add)

    setex(name, value, time)

    # Set a value with an expiry
    # Parameters:
        # time: expiry time (seconds as a number, or a timedelta object)

    psetex(name, time_ms, value)

    # Set a value with an expiry
    # Parameters:
        # time_ms: expiry time (milliseconds as a number, or a timedelta object)

    mset(*args, **kwargs)

    Set multiple values at once
    e.g.:
        mset(k1='v1', k2='v2')
        or
        mset({'k1': 'v1', 'k2': 'v2'})

    get(name)

    Get a value

    mget(keys, *args)

    Get multiple values
    e.g.:
        mget('ylr', 'weibinf')
        or
        r.mget(['ylr', 'weibinf'])

    getset(name, value)

    Set a new value and return the old one

    getrange(key, start, end)

    # Get a substring of the value (indexed by bytes, not characters)
    # Parameters:
        # key: the Redis name
        # start: starting position (in bytes)
        # end: ending position (in bytes, inclusive)
    # e.g. for "武沛齐", the byte range 0-2 is "武" (each UTF-8 Chinese character is 3 bytes)
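    The byte-vs-character distinction can be checked locally, without Redis, by encoding the same string:

```python
# Each of these Chinese characters occupies 3 bytes in UTF-8,
# which is why getrange counts bytes rather than characters.
s = "武沛齐"
raw = s.encode('utf-8')

assert len(s) == 3        # three characters...
assert len(raw) == 9      # ...but nine bytes

# getrange(key, 0, 2) is an inclusive byte range -> the first 3 bytes,
# which decode back to the first character
first_char = raw[0:3].decode('utf-8')
```

    Slicing mid-character (e.g. raw[0:2]) would return bytes that do not decode cleanly, which is the pitfall the byte-based indexing note warns about.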

    setrange(name, offset, value)

    # Overwrite part of the string starting at the given byte offset
    # (if the new value runs past the end, the string is extended)
    # Parameters:
        # offset: byte index into the string (a UTF-8 Chinese character is 3 bytes)
        # value: the value to write

    setbit(name, offset, value)

    # Operate on individual bits of the binary representation of name's value

    # Parameters:
        # name: the Redis name
        # offset: bit index (into the binary form of the value)
        # value: must be 1 or 0

    # Note: if Redis holds n1 = "foo",
            the binary form of "foo" is 01100110 01101111 01101111.
        Executing setbit('n1', 7, 1) sets bit 7 to 1,
            giving 01100111 01101111 01101111, i.e. "goo".

    # Extra: printing the binary representation:

        # source = "武沛齐"
        source = "foo"

        for i in source:
            num = ord(i)
            print bin(num).replace('b', '')

        What about a Chinese string such as "樊伟彬"?
        In UTF-8 every Chinese character takes 3 bytes, so "樊伟彬" occupies 9 bytes.
        Iterating over the byte string yields one byte at a time; convert each byte to its decimal value, then to binary:
            11100110 10101101 10100110 11100110 10110010 10011011 11101001 10111101 10010000
            -------------------------- ----------------------------- -----------------------------
                        樊                         伟                           彬
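    The "foo" → "goo" example above can be reproduced in plain Python. This is a local sketch of SETBIT's bit numbering (bit 0 is the most significant bit of the first byte), not a Redis call:

```python
def set_bit(data, offset, value):
    """Set bit `offset` of a byte string, numbering bits from the most
    significant bit of the first byte, as Redis SETBIT does."""
    byte_index, bit_index = divmod(offset, 8)
    buf = bytearray(data)
    mask = 1 << (7 - bit_index)
    if value:
        buf[byte_index] |= mask
    else:
        buf[byte_index] &= ~mask
    return bytes(buf)

# 'f' is 01100110; setting bit 7 (the least significant bit of byte 0)
# gives 01100111, i.e. 'g' -> the whole value becomes b'goo'
result = set_bit(b'foo', 7, 1)
```

    Clearing the same bit (value 0) turns the string back into b'foo', mirroring what setbit('n1', 7, 0) would do on the server.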

    getbit(name, offset)

    # Return the bit (0 or 1) at the given offset of name's value

    bitcount(key, start=None, end=None)

    # Count the 1 bits in the binary representation of name's value
    # Parameters:
        # key: the Redis name
        # start: starting bit position
        # end: ending bit position

    bitop(operation, dest, *keys)

    # Combine several values with a bitwise operation and store the result under a new name

    # Parameters:
        # operation: AND, OR, NOT, or XOR
        # dest: the new Redis name for the result
        # *keys: the Redis names to read

    # e.g.:
        bitop("AND", 'new_name', 'n1', 'n2', 'n3')
        # Reads the values of n1, n2 and n3, ANDs them together bit by bit,
        # and stores the result under new_name
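    What BITOP AND does to the raw bytes can be sketched locally (a simplification: Redis also zero-pads inputs of unequal length, which is omitted here):

```python
def bitop_and(*values):
    """AND equal-length byte strings together, byte by byte."""
    result = bytearray(values[0])
    for val in values[1:]:
        for i, b in enumerate(val):
            result[i] &= b
    return bytes(result)

# 0xff & 0x0f = 0x0f and 0x0f & 0xff = 0x0f
combined = bitop_and(b'\xff\x0f', b'\x0f\xff')   # -> b'\x0f\x0f'
```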

    strlen(name)

    # Return the byte length of name's value (a UTF-8 Chinese character is 3 bytes)

    incr(self, name, amount=1)

    # Increment name's value; if name does not exist, it is created as name=amount

    # Parameters:
        # name: the Redis name
        # amount: the increment (must be an integer)

    # Note: same as incrby

    incrbyfloat(self, name, amount=1.0)

    # Increment name's value; if name does not exist, it is created as name=amount

    # Parameters:
        # name: the Redis name
        # amount: the increment (a float)

    decr(self, name, amount=1)

    # Decrement name's value; if name does not exist, it is created as 0 and then decremented (yielding -amount)

    # Parameters:
        # name: the Redis name
        # amount: the decrement (an integer)

    append(key, value)

    # Append content to name's value

    # Parameters:
        key: the Redis name
        value: the string to append

    Hash operations. A Redis hash maps one name to a set of key/value pairs in memory.

    hset(name, key, value)
    # Set one key/value pair in the hash stored at name (create it if missing, otherwise modify it)

    # Parameters:
        # name: the Redis name
        # key: a key within the hash
        # value: the value for that key

    # Note:
        # hsetnx(name, key, value) creates the pair only when key does not yet exist in the hash (i.e. an add)

    hmset(name, mapping)

    # Set multiple key/value pairs in the hash stored at name

    # Parameters:
        # name: the Redis name
        # mapping: a dict, e.g. {'k1': 'v1', 'k2': 'v2'}

    # e.g.:
        # r.hmset('xx', {'k1': 'v1', 'k2': 'v2'})

    hget(name, key)

    # Get the value of key from the hash stored at name

    hmget(name, keys, *args)

    # Get multiple keys from the hash stored at name

    # Parameters:
        # name: the Redis name
        # keys: a list of keys to fetch, e.g. ['k1', 'k2', 'k3']
        # *args: keys as positional arguments, e.g. k1, k2, k3

    # e.g.:
        # r.hmget('xx', ['k1', 'k2'])
        # or
        # print r.hmget('xx', 'k1', 'k2')

    hgetall(name)

    Get every key/value pair of the hash stored at name

    hlen(name)

    # Number of key/value pairs in the hash stored at name

    hkeys(name)

    # All keys of the hash stored at name

    hvals(name)

    # All values of the hash stored at name

    hexists(name, key)

    # Whether the hash stored at name contains the given key

    hdel(name, *keys)

    # Delete the key/value pairs for the given keys from the hash stored at name

    hincrby(name, key, amount=1)

    # Increment the value of key in the hash stored at name; if key is missing, create it as key=amount
    # Parameters:
        # name: the Redis name
        # key: a key within the hash
        # amount: the increment (an integer)

    hincrbyfloat(name, key, amount=1.0)

    # Increment the value of key in the hash stored at name; if key is missing, create it as key=amount

    # Parameters:
        # name: the Redis name
        # key: a key within the hash
        # amount: the increment (a float)

    hscan(name, cursor=0, match=None, count=None)

    # Incremental iteration: fetches the hash in slices rather than all at once,
    # which matters for large hashes because it keeps memory use bounded

    # Parameters:
        # name: the Redis name
        # cursor: the iteration cursor (batches are fetched cursor by cursor)
        # match: only return keys matching this pattern; None means all keys
        # count: minimum number of entries per slice; None uses Redis's default

    # e.g.:
        # first call:  cursor1, data1 = r.hscan('xx', cursor=0, match=None, count=None)
        # second call: cursor2, data2 = r.hscan('xx', cursor=cursor1, match=None, count=None)
        # ...
        # when the returned cursor is 0, every slice has been fetched

    hscan_iter(name, match=None, count=None)

    # A generator built on hscan, fetching data from Redis batch by batch

    # Parameters:
        # match: only return keys matching this pattern; None means all keys
        # count: minimum number of entries per slice; None uses Redis's default

    # e.g.:
        # for item in r.hscan_iter('xx'):
        #     print item
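    The cursor loop that hscan_iter wraps can be sketched as a generator. This is a local sketch of the pattern (redis-py's real implementation differs in details); the fake two-slice "server" below exists only for illustration:

```python
def scan_iter(scan_func, name):
    """Yield items batch by batch until the cursor comes back as 0.
    `scan_func(name, cursor)` must return (next_cursor, dict_of_items),
    mirroring the shape of r.hscan(name, cursor=...)."""
    cursor = 0
    while True:
        cursor, data = scan_func(name, cursor)
        for item in data.items():
            yield item
        if cursor == 0:
            break

# Stand-in for a server holding two slices: cursor 0 -> 17 -> 0
pages = {0: (17, {'k1': 'v1'}), 17: (0, {'k2': 'v2'})}
items = dict(scan_iter(lambda name, cur: pages[cur], 'xx'))
```

    At no point does the generator hold more than one slice in memory, which is the whole point of the incremental-iteration commands.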

    List operations. A Redis list maps one name to one list in memory.

    lpush(name, values)
    # Prepend elements to the list stored at name; each new element lands at the far left

    # e.g.:
        # r.lpush('oo', 11, 22, 33)
        # stored order: 33, 22, 11

    # See also:
        # rpush(name, values) pushes from the right instead

    lpushx(name, value)

    # Prepend an element to the list stored at name, but only if name already exists

    # See also:
        # rpushx(name, value) pushes from the right instead

    llen(name)

    # Number of elements in the list stored at name

    linsert(name, where, refvalue, value)

    # Insert a new value before or after a given value in the list stored at name

    # Parameters:
        # name: the Redis name
        # where: BEFORE or AFTER
        # refvalue: the pivot value; the new data is inserted next to it
        # value: the value to insert

    r.lset(name, index, value)

    # Overwrite the element at the given index of the list stored at name

    # Parameters:
        # name: the Redis name
        # index: index into the list
        # value: the value to set

    r.lrem(name, value, num)

    # Remove occurrences of a value from the list stored at name

    # Parameters:
        # name: the Redis name
        # value: the value to remove
        # num:   num=0  removes every occurrence of the value;
               # num=2  removes 2 occurrences, scanning front to back;
               # num=-2 removes 2 occurrences, scanning back to front

    lpop(name)

    # Remove and return the leftmost element of the list stored at name

    # See also:
        # rpop(name) pops from the right instead

    lindex(name, index)

    # Return the element at the given index of the list stored at name

    lrange(name, start, end)

    # Return a slice of the list stored at name
    # Parameters:
        # name: the Redis name
        # start: starting index
        # end: ending index

    ltrim(name, start, end)

    # Remove every element outside the start-end index range of the list stored at name
    # Parameters:
        # name: the Redis name
        # start: starting index
        # end: ending index

    rpoplpush(src, dst)

    # Pop the rightmost element of one list and push it onto the far left of another
    # Parameters:
        # src: the list to take data from
        # dst: the list to add data to

    blpop(keys, timeout)

    # Given several lists, pop elements checking the lists from left to right

    # Parameters:
        # keys: a collection of Redis names
        # timeout: how long (seconds) to block waiting for data once every list is empty; 0 blocks forever

    # See also:
        # r.brpop(keys, timeout) pops from the right instead

    brpoplpush(src, dst, timeout=0)

    # Pop an element from the right side of one list and push it onto the left side of another

    # Parameters:
        # src: the list the element is popped (and removed) from
        # dst: the list the element is pushed onto
        # timeout: how long (seconds) to block when src is empty; 0 blocks forever

    Custom incremental iteration

    # The redis library offers no incremental iteration over list elements, so looping over
    # every element of the list stored at name would require:
        # 1. fetching the whole list
        # 2. looping over it
    # With a very large list, step 1 alone could exhaust the program's memory,
    # so it is worth writing an incremental iterator:

    def list_iter(name):
        """
        Custom incremental iteration over a Redis list
        :param name: the Redis name of the list to iterate
        :return: yields list elements
        """
        list_count = r.llen(name)
        for index in xrange(list_count):
            yield r.lindex(name, index)

    # usage
    for item in list_iter('pp'):
        print item

    Set operations; a Set is simply a list that allows no duplicates

    sadd(name, values)

    # Add elements to the set stored at name

    scard(name)

    # Get the number of elements in the set stored at name

    sdiff(keys, *args)

    # The set of elements that are in the first named set but in none of the other named sets

    sdiffstore(dest, keys, *args)

    # Compute the difference as above, then store the result in the set stored at dest

    sinter(keys, *args)

    # Get the intersection of multiple named sets

    sinterstore(dest, keys, *args)

    # Compute the intersection of multiple named sets, then store the result in the set stored at dest

    sismember(name, value)

    # Check whether value is a member of the set stored at name

    smembers(name)

    # Get all members of the set stored at name

    smove(src, dst, value)

    # Move a member from one set into another

    spop(name)

    # Remove a random member from the set and return it

    srandmember(name, numbers)

    # Get numbers random elements from the set stored at name

    srem(name, values)

    # Remove the given values from the set stored at name

    sunion(keys, *args)

    # Get the union of multiple named sets

    sunionstore(dest, keys, *args)

    # Compute the union of multiple named sets, and store the result in the set stored at dest

    sscan(name, cursor=0, match=None, count=None)
    sscan_iter(name, match=None, count=None)

    # Same as the string versions; used for incremental, batched iteration to avoid excessive memory use
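    The semantics of sdiff/sinter/sunion map directly onto Python's own set operators, which makes them easy to sanity-check without a server:

```python
s1 = {'a', 'b', 'c'}
s2 = {'b', 'c', 'd'}

diff = s1 - s2   # sdiff:  in s1 but not in s2
inter = s1 & s2  # sinter: in both sets
union = s1 | s2  # sunion: in either set

assert diff == {'a'}
assert inter == {'b', 'c'}
assert union == {'a', 'b', 'c', 'd'}
```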

    Sorted sets: on top of plain sets, every element is kept sorted. The ordering is determined by a separate value, so each element of a sorted set carries two values, the member itself and a score, and the score is what the sorting is based on.

    zadd(name, *args, **kwargs)
    # Add elements to the sorted set stored at name
    # (note: redis-py 3.0+ changed the signature to zadd(name, mapping), e.g. zadd('zz', {'n1': 1, 'n2': 2}))
    # e.g.:
         # zadd('zz', 'n1', 1, 'n2', 2)
         # or
         # zadd('zz', n1=11, n2=22)

    zcard(name)

    # Get the number of elements in the sorted set stored at name

    zcount(name, min, max)

    # Count the elements of the sorted set stored at name whose score lies in [min, max]

    zincrby(name, value, amount)

    # Increment the score of member value in the sorted set stored at name by amount
    # (note: redis-py 3.0+ changed the argument order to zincrby(name, amount, value))

    r.zrange(name, start, end, desc=False, withscores=False, score_cast_func=float)

    # Get elements of the sorted set stored at name by index range
     
    # Parameters:
        # name, the redis key
        # start, start index into the sorted set (an index, not a score)
        # end, end index into the sorted set (an index, not a score)
        # desc, sort order; by default scores are sorted ascending
        # withscores, whether to also return each element's score; by default only the members are returned
        # score_cast_func, function used to convert the returned scores
     
    # More:
        # sorted descending:
        # zrevrange(name, start, end, withscores=False, score_cast_func=float)
     
        # get elements of the sorted set stored at name by score range:
        # zrangebyscore(name, min, max, start=None, num=None, withscores=False, score_cast_func=float)
        # sorted descending:
        # zrevrangebyscore(name, max, min, start=None, num=None, withscores=False, score_cast_func=float)

    zrank(name, value)

    # Get the rank of a value in the sorted set stored at name (starting from 0)
     
    # More:
        # zrevrank(name, value), rank when sorted descending

    zrangebylex(name, min, max, start=None, num=None)

    # When every member of a sorted set has the same score, the elements are ordered by their member
    # values (lexicographical ordering), and this command returns the members of the sorted set at key
    # whose values lie between min and max
    # Members are compared byte by byte (byte-by-byte compare) and returned in ascending order. When
    # two strings share a common prefix, the longer string is considered greater than the shorter one
     
    # Parameters:
        # name, the redis key
        # min, lower bound (a member value). + means positive infinity; - means negative infinity; ( marks an open interval; [ marks a closed interval
        # max, upper bound (a member value)
        # start, slice the result: starting index
        # num, slice the result: number of elements after that index
     
    # e.g.:
        # ZADD myzset 0 aa 0 ba 0 ca 0 da 0 ea 0 fa 0 ga
        # r.zrangebylex('myzset', "-", "[ca") returns: ['aa', 'ba', 'ca']
     
    # More:
        # sorted descending:
        # zrevrangebylex(name, max, min, start=None, num=None)
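    The byte-by-byte comparison that zrangebylex relies on is the same ordering Python applies when sorting byte strings, so its behaviour can be previewed locally:

```python
members = [b'ga', b'aa', b'ca', b'ea', b'ba', b'fa', b'da']

# lexicographic (byte-by-byte) order, as used by a same-score sorted set
ordered = sorted(members)

# equivalent of r.zrangebylex('myzset', "-", "[ca"): everything up to and including b'ca'
upto_ca = [m for m in ordered if m <= b'ca']
assert upto_ca == [b'aa', b'ba', b'ca']

# a longer string compares greater than its own prefix
assert b'a' < b'aa'
```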

    zrem(name, values)

    # Remove the members given in values from the sorted set stored at name
     
    # e.g.: zrem('zz', ['s1', 's2'])

    zremrangebyrank(name, min, max)

    # Remove by rank range

    zremrangebyscore(name, min, max)

    # Remove by score range

    zremrangebylex(name, min, max)

    # Remove by lexicographic value range

    zscore(name, value)

    # Get the score of member value in the sorted set stored at name

    zinterstore(dest, keys, aggregate=None)

    # Compute the intersection of two sorted sets; when the same member appears with different scores,
    # the scores are combined according to aggregate
    # aggregate is one of:  SUM  MIN  MAX

    zunionstore(dest, keys, aggregate=None)

    # Compute the union of two sorted sets; when the same member appears with different scores,
    # the scores are combined according to aggregate
    # aggregate is one of:  SUM  MIN  MAX
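    How aggregate combines the scores of shared members can be sketched in plain Python (an illustration of the semantics only, not how the server computes it):

```python
def zinterstore_sketch(z1, z2, aggregate='SUM'):
    """Intersect two {member: score} dicts, combining scores of shared members per aggregate."""
    combine = {'SUM': lambda a, b: a + b, 'MIN': min, 'MAX': max}[aggregate]
    return {m: combine(z1[m], z2[m]) for m in z1.keys() & z2.keys()}

z1 = {'n1': 1.0, 'n2': 2.0}
z2 = {'n2': 20.0, 'n3': 30.0}

# only 'n2' appears in both sorted sets
assert zinterstore_sketch(z1, z2, 'SUM') == {'n2': 22.0}
assert zinterstore_sketch(z1, z2, 'MIN') == {'n2': 2.0}
assert zinterstore_sketch(z1, z2, 'MAX') == {'n2': 20.0}
```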

    zscan(name, cursor=0, match=None, count=None, score_cast_func=float)
    zscan_iter(name, match=None, count=None, score_cast_func=float)

    # Same as for strings, except for the extra score_cast_func used to convert the scores

    Other common operations

    delete(*names)
    # Delete any redis value, of any data type, by name

    exists(name)

    # Check whether the redis key name exists

    keys(pattern='*')

    # Get redis keys matching a pattern
     
    # More:
        # KEYS * matches every key in the database.
        # KEYS h?llo matches hello, hallo, hxllo, etc.
        # KEYS h*llo matches hllo, heeeeello, etc.
        # KEYS h[ae]llo matches hello and hallo, but not hillo
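    The glob patterns accepted by keys behave much like the stdlib's fnmatch patterns, which is a convenient way to preview a pattern before running it against a live database (the two are not identical in every corner case, but ?, * and [..] match the same way):

```python
from fnmatch import fnmatchcase

pattern = 'h?llo'
assert fnmatchcase('hello', pattern)       # '?' matches exactly one character
assert fnmatchcase('hallo', pattern)
assert not fnmatchcase('hillllo', pattern)

assert fnmatchcase('heeeeello', 'h*llo')   # '*' matches any run of characters
assert fnmatchcase('hallo', 'h[ae]llo')    # '[ae]' matches one character from the class
assert not fnmatchcase('hillo', 'h[ae]llo')
```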

    expire(name, time)

    # Set a timeout on the given redis key

    rename(src, dst)

    # Rename the redis key src to dst

    move(name, db)

    # Move a redis value to the given db

    randomkey()

    # Get a random redis key (without deleting it)

    type(name)

    # Get the type of the value stored at name

    scan(cursor=0, match=None, count=None)
    scan_iter(match=None, count=None)

    # Same as for strings; used for incremental iteration over keys

    4. Pipelines

    By default redis-py performs a connect (borrow a connection from the pool) and disconnect (return it to the pool) for every request. To issue multiple commands in a single request, use a pipeline; by default a pipeline executes as one atomic operation.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    import redis
     
    pool = redis.ConnectionPool(host='192.168.1.1', port=6379)
     
    r = redis.Redis(connection_pool=pool)
     
    # pipe = r.pipeline(transaction=False)
    pipe = r.pipeline(transaction=True)
     
    pipe.set('name', 'alex')
    pipe.set('role', 'sb')
     
    pipe.execute()

    5. Publish/subscribe

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    
    import redis
    
    
    class RedisHelper:
    
        def __init__(self):
            self.__conn = redis.Redis(host='192.168.1.1')
            self.chan_sub = 'fm104.5'
            self.chan_pub = 'fm104.5'
    
        def public(self, msg):
            self.__conn.publish(self.chan_pub, msg)
            return True
    
        def subscribe(self):
            pub = self.__conn.pubsub()
            pub.subscribe(self.chan_sub)
            pub.parse_response()  # consume the subscribe confirmation
            return pub

    Subscriber:

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    from monitor.RedisHelper import RedisHelper
     
    obj = RedisHelper()
    redis_sub = obj.subscribe()
     
    while True:
        msg = redis_sub.parse_response()
        print msg

    Publisher:

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    from monitor.RedisHelper import RedisHelper
     
    obj = RedisHelper()
    obj.public('hello')

    See also: https://github.com/andymccurdy/redis-py/

    RabbitMQ

    RabbitMQ is a complete, reusable enterprise messaging system built on AMQP. It is released under the Mozilla Public License.

    MQ stands for Message Queue; a message queue is an application-to-application communication method. Applications communicate by reading and writing messages (application data) to and from queues, with no dedicated connection linking them. Messaging means programs communicate by sending data in messages rather than by calling each other directly, as direct calls are typical of technologies such as remote procedure calls. Queuing means applications communicate through queues, which removes the requirement that the sending and receiving applications run at the same time.

    Installing RabbitMQ

    # install and configure the EPEL repository
       $ rpm -ivh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
     
    # install Erlang
       $ yum -y install erlang
     
    # install RabbitMQ
       $ yum -y install rabbitmq-server

    Note: service rabbitmq-server start/stop

    Installing the API

    pip install pika
    or
    easy_install pika
    or
    install from source
     
    https://pypi.python.org/pypi/pika

    Operating RabbitMQ through the API

    A producer/consumer model built on Queue

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import Queue
    import threading
    
    
    message = Queue.Queue(10)
    
    
    def producer(i):
        while True:
            message.put(i)
    
    
    def consumer(i):
        while True:
            msg = message.get()
    
    
    for i in range(12):
        t = threading.Thread(target=producer, args=(i,))
        t.start()
    
    for i in range(10):
        t = threading.Thread(target=consumer, args=(i,))
        t.start()

    With RabbitMQ, producing and consuming no longer target an in-memory Queue object but a message queue implemented by a RabbitMQ server on some machine.

    # ######################### producer #########################
    #!/usr/bin/env python
    import pika
     
     
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='localhost'))
    channel = connection.channel()
     
    channel.queue_declare(queue='hello')
     
    channel.basic_publish(exchange='',
                          routing_key='hello',
                          body='Hello World!')
    print(" [x] Sent 'Hello World!'")
    connection.close()
    
    
    # ########################## consumer ##########################
    #!/usr/bin/env python
    import pika
     
     
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='localhost'))
    channel = connection.channel()
     
    channel.queue_declare(queue='hello')
     
    def callback(ch, method, properties, body):
        print(" [x] Received %r" % body)
     
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=True)
     
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

    1. acknowledgment: no message loss

    With no-ack = False, if the consumer dies (its channel is closed, connection is closed, or TCP connection is lost) before acknowledging, RabbitMQ re-queues the task.

    import pika
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='192.168.1.1'))
    channel = connection.channel()
    
    channel.queue_declare(queue='hello')
    
    def callback(ch, method, properties, body):
        print(" [x] Received %r" % body)
        import time
        time.sleep(10)
        print 'ok'
        ch.basic_ack(delivery_tag = method.delivery_tag)
    
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=False)
    
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()
    
    Consumer

    2. durable: no message loss

    #!/usr/bin/env python
    import pika
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='192.168.1.1'))
    channel = connection.channel()
    
    # make message persistent
    channel.queue_declare(queue='hello', durable=True)
    
    channel.basic_publish(exchange='',
                          routing_key='hello',
                          body='Hello World!',
                          properties=pika.BasicProperties(
                              delivery_mode=2, # make message persistent
                          ))
    print(" [x] Sent 'Hello World!'")
    connection.close()
    
    Producer

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import pika
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='192.168.1.1'))
    channel = connection.channel()
    
    # make message persistent
    channel.queue_declare(queue='hello', durable=True)
    
    
    def callback(ch, method, properties, body):
        print(" [x] Received %r" % body)
        import time
        time.sleep(10)
        print 'ok'
        ch.basic_ack(delivery_tag = method.delivery_tag)
    
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=False)
    
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()
    
    Consumer

    3. Message dispatch order

    By default, messages in the queue are handed out to consumers in round-robin order; for example, consumer 1 gets the odd-numbered tasks from the queue and consumer 2 gets the even-numbered ones.

    channel.basic_qos(prefetch_count=1) means whoever is free takes the next message, instead of the fixed odd/even rotation.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import pika
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='192.168.1.1'))
    channel = connection.channel()
    
    channel.queue_declare(queue='hello')
    
    
    def callback(ch, method, properties, body):
        print(" [x] Received %r" % body)
        import time
        time.sleep(10)
        print 'ok'
        ch.basic_ack(delivery_tag = method.delivery_tag)
    
    channel.basic_qos(prefetch_count=1)
    
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=False)
    
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()
    
    Consumer

    4. Publish/subscribe

    Publish/subscribe differs from a plain message queue in that a published message is delivered to every subscriber, whereas a queued message disappears once it has been consumed. So when RabbitMQ implements publish/subscribe, it creates a queue for every subscriber, and when a publisher publishes a message, the message is placed on all the bound queues.

     exchange type = fanout

    #!/usr/bin/env python
    import pika
    import sys
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='localhost'))
    channel = connection.channel()
    
    channel.exchange_declare(exchange='logs',
                             type='fanout')
    
    message = ' '.join(sys.argv[1:]) or "info: Hello World!"
    channel.basic_publish(exchange='logs',
                          routing_key='',
                          body=message)
    print(" [x] Sent %r" % message)
    connection.close()
    
    Publisher

    #!/usr/bin/env python
    import pika
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='localhost'))
    channel = connection.channel()
    
    channel.exchange_declare(exchange='logs',
                             type='fanout')
    
    result = channel.queue_declare(exclusive=True)
    queue_name = result.method.queue
    
    channel.queue_bind(exchange='logs',
                       queue=queue_name)
    
    print(' [*] Waiting for logs. To exit press CTRL+C')
    
    def callback(ch, method, properties, body):
        print(" [x] %r" % body)
    
    channel.basic_consume(callback,
                          queue=queue_name,
                          no_ack=True)
    
    channel.start_consuming()
    
    Subscriber

    5. Routing by keyword

     exchange type = direct

    The earlier examples sent messages by naming a queue explicitly. RabbitMQ also supports routing by keyword: queues are bound with routing keys, the sender publishes data with a routing key to an exchange, and the exchange uses that key to decide which queue(s) should receive the data.

    #!/usr/bin/env python
    import pika
    import sys
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='192.168.1.1'))
    channel = connection.channel()
    
    channel.exchange_declare(exchange='direct_logs',
                             type='direct')
    
    result = channel.queue_declare(exclusive=True)
    queue_name = result.method.queue
    
    severities = sys.argv[1:]
    if not severities:
        sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
        sys.exit(1)
    
    for severity in severities:
        channel.queue_bind(exchange='direct_logs',
                           queue=queue_name,
                           routing_key=severity)
    
    print(' [*] Waiting for logs. To exit press CTRL+C')
    
    def callback(ch, method, properties, body):
        print(" [x] %r:%r" % (method.routing_key, body))
    
    channel.basic_consume(callback,
                          queue=queue_name,
                          no_ack=True)
    
    channel.start_consuming()
    
    Consumer

    #!/usr/bin/env python
    import pika
    import sys
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='192.168.1.1'))
    channel = connection.channel()
    
    channel.exchange_declare(exchange='direct_logs',
                             type='direct')
    
    severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
    message = ' '.join(sys.argv[2:]) or 'Hello World!'
    channel.basic_publish(exchange='direct_logs',
                          routing_key=severity,
                          body=message)
    print(" [x] Sent %r:%r" % (severity, message))
    connection.close()
    
    Producer

    6. Pattern matching

     exchange type = topic

    With a topic exchange, a queue can be bound with wildcard binding keys; the sender publishes data to the exchange, the exchange matches the incoming routing key against the binding keys, and on a successful match delivers the data to the bound queue.

    • # matches zero or more words
    • * matches exactly one word

    sender's routing key       binding key
    old.boy.python             old.*  -- no match
    old.boy.python             old.#  -- match
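    These matching rules can be sketched in a few lines of Python (an illustration of the semantics only, not RabbitMQ's actual matcher):

```python
def topic_match(binding_key, routing_key):
    """Return True if routing_key matches binding_key ('#' = 0+ words, '*' = exactly 1 word)."""
    def match(pat, words):
        if not pat:
            return not words
        head, rest = pat[0], pat[1:]
        if head == '#':
            # '#' may absorb zero or more words
            return any(match(rest, words[i:]) for i in range(len(words) + 1))
        if not words:
            return False
        if head == '*' or head == words[0]:
            return match(rest, words[1:])
        return False
    return match(binding_key.split('.'), routing_key.split('.'))

assert not topic_match('old.*', 'old.boy.python')  # '*' cannot span two words
assert topic_match('old.#', 'old.boy.python')      # '#' spans any number of words
assert topic_match('old.*', 'old.boy')
```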
    #!/usr/bin/env python
    import pika
    import sys
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='192.168.1.1'))
    channel = connection.channel()
    
    channel.exchange_declare(exchange='topic_logs',
                             type='topic')
    
    result = channel.queue_declare(exclusive=True)
    queue_name = result.method.queue
    
    binding_keys = sys.argv[1:]
    if not binding_keys:
        sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
        sys.exit(1)
    
    for binding_key in binding_keys:
        channel.queue_bind(exchange='topic_logs',
                           queue=queue_name,
                           routing_key=binding_key)
    
    print(' [*] Waiting for logs. To exit press CTRL+C')
    
    def callback(ch, method, properties, body):
        print(" [x] %r:%r" % (method.routing_key, body))
    
    channel.basic_consume(callback,
                          queue=queue_name,
                          no_ack=True)
    
    channel.start_consuming()
    
    Consumer

    #!/usr/bin/env python
    import pika
    import sys
    
    connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='192.168.1.1'))
    channel = connection.channel()
    
    channel.exchange_declare(exchange='topic_logs',
                             type='topic')
    
    routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
    message = ' '.join(sys.argv[2:]) or 'Hello World!'
    channel.basic_publish(exchange='topic_logs',
                          routing_key=routing_key,
                          body=message)
    print(" [x] Sent %r:%r" % (routing_key, message))
    connection.close()
    
    Producer

    SQLAlchemy

    SQLAlchemy is an ORM framework for the Python programming language. It sits on top of the database API and performs database operations through relational-object mapping; in short, it converts objects into SQL, then executes that SQL through the database API and fetches the results.

    The Dialect talks to the database API, calling a different database API depending on configuration, and thereby carries out the database operations, e.g.:

    MySQL-Python
        mysql+mysqldb://<user>:<password>@<host>[:<port>]/<dbname>
     
    pymysql
        mysql+pymysql://<username>:<password>@<host>/<dbname>[?<options>]
     
    MySQL-Connector
        mysql+mysqlconnector://<user>:<password>@<host>[:<port>]/<dbname>
     
    cx_Oracle
        oracle+cx_oracle://user:pass@host:port/dbname[?key=value&key=value...]
     
    More: http://docs.sqlalchemy.org/en/latest/dialects/index.html

    Step one:

    Perform database operations with Engine/ConnectionPooling/Dialect: the Engine uses ConnectionPooling to connect to the database, then executes the SQL statements through the Dialect.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    from sqlalchemy import create_engine
     
     
    engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)
     
    engine.execute(
        "INSERT INTO ts_test (a, b) VALUES ('2', 'v1')"
    )
     
    engine.execute(
         "INSERT INTO ts_test (a, b) VALUES (%s, %s)",
        ((555, "v1"), (666, "v1"),)
    )
    engine.execute(
        "INSERT INTO ts_test (a, b) VALUES (%(id)s, %(name)s)",
        id=999, name="v1"
    )
     
    result = engine.execute('select * from ts_test')
    result.fetchall()

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    
    from sqlalchemy import create_engine
    
    
    engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)
    
    
    # transactions
    with engine.begin() as conn:
        conn.execute("insert into table (x, y, z) values (1, 2, 3)")
        conn.execute("my_special_procedure(5)")
        
        
    conn = engine.connect()
    # transactions
    with conn.begin():
           conn.execute("some statement", {'x':5, 'y':10})
    
    Transactions

    Note: to inspect database connections: show status like 'Threads%';
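    The same Engine pattern can be tried without a MySQL server by pointing create_engine at an in-memory SQLite database. This is a sketch using the SQLAlchemy 1.4+/2.0-style textual API (text() plus an explicit connection); it reuses the ts_test table from the example above for the demo:

```python
from sqlalchemy import create_engine, text

# an in-memory SQLite engine needs no running database server
engine = create_engine("sqlite:///:memory:")

with engine.begin() as conn:  # begin() wraps the block in a transaction
    conn.execute(text("CREATE TABLE ts_test (a INTEGER, b VARCHAR(20))"))
    conn.execute(
        text("INSERT INTO ts_test (a, b) VALUES (:a, :b)"),
        [{"a": 555, "b": "v1"}, {"a": 666, "b": "v1"}],  # executemany form
    )

with engine.connect() as conn:
    rows = conn.execute(text("SELECT a, b FROM ts_test ORDER BY a")).fetchall()
# [tuple(r) for r in rows] == [(555, 'v1'), (666, 'v1')]
```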

    Step two:

    Perform database operations with Schema Type/SQL Expression Language/Engine/ConnectionPooling/Dialect: the Engine uses Schema Types to create specific structure objects, the SQL Expression Language converts those objects into SQL statements, ConnectionPooling connects to the database, and the Dialect then executes the SQL and fetches the results.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
     
    metadata = MetaData()
     
    user = Table('user', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(20)),
    )
     
    color = Table('color', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(20)),
    )
    engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)
     
    metadata.create_all(engine)
    # metadata.clear()
    # metadata.remove()

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    
    from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
    
    metadata = MetaData()
    
    user = Table('user', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(20)),
    )
    
    color = Table('color', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(20)),
    )
    engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)
    
    conn = engine.connect()
    
    # build the SQL statement: INSERT INTO "user" (id, name) VALUES (:id, :name)
    conn.execute(user.insert(), {'id': 7, 'name': 'seven'})
    conn.close()
    
    # sql = user.insert().values(id=123, name='wu')
    # conn.execute(sql)
    # conn.close()
    
    # sql = user.delete().where(user.c.id > 1)
    
    # sql = user.update().values(fullname=user.c.name)
    # sql = user.update().where(user.c.name == 'jack').values(name='ed')
    
    # sql = select([user, ])
    # sql = select([user.c.id, ])
    # sql = select([user.c.name, color.c.name]).where(user.c.id==color.c.id)
    # sql = select([user.c.name]).order_by(user.c.name)
    # sql = select([user]).group_by(user.c.name)
    
    # result = conn.execute(sql)
    # print result.fetchall()
    # conn.close()
    
    CRUD

    More details:

        http://www.jianshu.com/p/e6bba189fcbd   ---- Chinese

        http://docs.sqlalchemy.org/en/latest/core/expression_api.html --- official, English

    Note: SQLAlchemy cannot alter an existing table's structure; if you need that, use Alembic, another open-source tool from the SQLAlchemy developers.

    Step three:

    Operate on data with the full stack: ORM/Schema Type/SQL Expression Language/Engine/ConnectionPooling/Dialect. Objects are created from classes, the objects are converted into SQL, and the SQL is executed.

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
     
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy import create_engine
     
    engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)
     
    Base = declarative_base()
     
     
    class User(Base):
        __tablename__ = 'users'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))
     
    # find every subclass of Base and create a matching table in the database for each
    # Base.metadata.create_all(engine)
     
    Session = sessionmaker(bind=engine)
    session = Session()
     
     
    # ########## create ##########
    # u = User(id=2, name='sb')
    # session.add(u)
    # session.add_all([
    #     User(id=3, name='sb'),
    #     User(id=4, name='sb')
    # ])
    # session.commit()
     
    # ########## delete ##########
    # session.query(User).filter(User.id > 2).delete()
    # session.commit()
     
    # ########## update ##########
    # session.query(User).filter(User.id > 2).update({'cluster_id' : 0})
    # session.commit()
    # ########## query ##########
    # ret = session.query(User).filter_by(name='sb').first()
     
    # ret = session.query(User).filter_by(name='sb').all()
    # print ret
     
    # ret = session.query(User).filter(User.name.in_(['sb','bb'])).all()
    # print ret
     
    # ret = session.query(User.name.label('name_label')).all()
    # print ret,type(ret)
     
    # ret = session.query(User).order_by(User.id).all()
    # print ret
     
    # ret = session.query(User).order_by(User.id)[1:3]
    # print ret
    # session.commit()

    For more features, see the official documentation.

  • Original post: https://www.cnblogs.com/fanweibin/p/5138630.html