  • [Source code] OpenResty rate limiting (lua-resty-limit-traffic)

    Summary:

    1. Counting happens at the incoming-request/connection stage, and there is a matching clearing (decrement) step.

    There are 3 parameters (illustrated in the sketch below):

    max
    burst
    unit_delay
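    A minimal sketch of how these three parameters map onto resty.limit.conn (the shm zone name "my_limit_conn_store" and the concrete numbers are assumptions for illustration; the zone must be declared with lua_shared_dict in nginx.conf):

    local limit_conn = require "resty.limit.conn"

    -- max = 150 concurrent requests per key, burst = 20 extra,
    -- unit_delay = 0.5s initial delay for each excessive connection;
    -- "my_limit_conn_store" is a hypothetical lua_shared_dict zone
    local lim, err = limit_conn.new("my_limit_conn_store", 150, 20, 0.5)
    if not lim then
        ngx.log(ngx.ERR, "failed to instantiate resty.limit.conn: ", err)
        return ngx.exit(500)
    end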

    https://github.com/openresty/lua-resty-limit-traffic/blob/master/README.md

    -- limit the requests under 200 req/sec with a burst of 100 req/sec,
    -- that is, we delay requests under 300 req/sec and above 200
    -- req/sec, and reject any requests exceeding 300 req/sec.
    -- "my_limit_req_store" must be declared as a lua_shared_dict zone
    -- in the nginx.conf http block.
    local limit_req = require "resty.limit.req"
    local lim, err = limit_req.new("my_limit_req_store", 200, 100)
    if not lim then
        ngx.log(ngx.ERR,
                "failed to instantiate a resty.limit.req object: ", err)
        return ngx.exit(500)
    end

    -- the following call must be per-request.
    -- here we use the remote (IP) address as the limiting key
    local key = ngx.var.binary_remote_addr
    local delay, err = lim:incoming(key, true)
    if not delay then
        if err == "rejected" then
            return ngx.exit(503)
        end
        ngx.log(ngx.ERR, "failed to limit req: ", err)
        return ngx.exit(500)
    end

    if delay >= 0.001 then
        -- the 2nd return value holds the number of excess requests
        -- per second for the specified key. for example, number 31
        -- means the current request rate is at 231 req/sec for the
        -- specified key.
        local excess = err

        -- the request exceeds the 200 req/sec limit but stays below
        -- 300 req/sec, so we intentionally delay it here a bit to
        -- conform to the 200 req/sec rate.
        ngx.sleep(delay)
    end

    delay: the delay in seconds (the caller should sleep before processing the current request)

    https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/count.md


    syntax: delay, err = obj:incoming(key, commit)

    Fires a new request incoming event and calculates the delay needed (if any) for the current request upon the specified key or whether the user should reject it immediately.

    This method accepts the following arguments:

    key is the user specified key to limit the rate.

    For example, one can use the host name (or server zone) as the key so that we limit the rate per host name. Alternatively, we can use the Authorization header value as the key so that we can set a rate per individual user.

    Please note that this module does not prefix nor suffix the user key, so it is the user's responsibility to ensure the key is unique in the lua_shared_dict shm zone.

    commit is a boolean value. If set to true, the object will actually record the event in the shm zone backing the current object; otherwise it would just be a "dry run" (which is the default).
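    The commit flag behaves the same way across the resty.limit.* modules. For example, a dry-run sketch reusing the lim object and key from the limit.req example above; with commit left false, the call only inspects the current state and records nothing:

    -- dry run: commit is false (the default), nothing is written to the shm zone
    local delay, err = lim:incoming(key, false)
    if not delay and err == "rejected" then
        -- this request would be rejected if it were actually committed
        ngx.log(ngx.WARN, "over limit for key: ", key)
    end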

    The return values depend on the following cases:

    If the request does not exceed the count value specified in the new method, then this method returns 0 as the delay and the remaining count of allowed requests at the current time (as the 2nd return value).

    If the request exceeds the count limit specified in the new method then this method returns nil and the error string "rejected".

    If an error occurred (like failures when accessing the lua_shared_dict shm zone backing the current object), then this method returns nil and a string describing the error.
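    A short sketch of handling those three cases with resty.limit.count (the zone name "my_limit_count_store" and the 5000-requests-per-hour quota are illustrative assumptions):

    local limit_count = require "resty.limit.count"

    -- allow at most 5000 requests per key within a 3600-second window;
    -- "my_limit_count_store" is an assumed lua_shared_dict zone
    local lim, err = limit_count.new("my_limit_count_store", 5000, 3600)
    if not lim then
        ngx.log(ngx.ERR, "failed to instantiate resty.limit.count: ", err)
        return ngx.exit(500)
    end

    local key = ngx.var.binary_remote_addr
    local delay, err = lim:incoming(key, true)
    if not delay then
        if err == "rejected" then
            -- case 2: the quota for this key is used up
            return ngx.exit(503)
        end
        -- case 3: shm access (or similar) error
        ngx.log(ngx.ERR, "failed to limit count: ", err)
        return ngx.exit(500)
    end

    -- case 1: allowed; delay is 0 and the 2nd value is the remaining quota
    local remaining = err
    ngx.header["X-RateLimit-Remaining"] = remaining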

    Granularity: per NGINX instance, shared by all its workers
    The limiting works on the granularity of an individual NGINX server instance (including all its worker processes). Thanks to the shm mechanism, we can share state cheaply across all the workers in a single NGINX server instance.
    In other words, the limit is enforced per NGINX instance rather than per worker. With several NGINX instances behind a load balancer, the effective cluster-wide ceiling is roughly the per-instance limit multiplied by the number of instances, so the impact on overall QPS needs to be analyzed and quantified.

    Clearing / decrementing the counter
    https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/req.lua
    -- we do not handle changing rate values specifically. the excess value
    -- can get automatically adjusted by the following formula with new rate
    -- values rather quickly anyway.
    excess = max(tonumber(rec.excess) - rate * abs(elapsed) / 1000 + 1000, 0)
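    In the full req.lua source, rate is stored as req/sec * 1000 and elapsed is measured in milliseconds, so excess is tracked in thousandths of a request and the "+ 1000" term accounts for the current request. A standalone sketch (plain Lua, function name chosen for illustration) of how the excess value leaks away over time:

    -- leaky-bucket decay as in resty.limit.req (standalone sketch)
    local function decay_excess(excess, rate, elapsed_ms)
        -- excess and rate use "milli" units: 1 request == 1000,
        -- 200 req/sec == 200000; "+ 1000" counts the current request
        return math.max(excess - rate * math.abs(elapsed_ms) / 1000 + 1000, 0)
    end

    -- previous excess = 31 requests (31000), rate = 200 req/sec (200000),
    -- 10 ms elapsed leaks 200000 * 10 / 1000 = 2000 milli-requests:
    print(decay_excess(31000, 200 * 1000, 10))  --> 30000, i.e. 30 excess requests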


    In conn.lua, when the incremented counter exceeds max + self.burst, the just-added connection is immediately decremented back out and the request is rejected:

    if conn > max + self.burst then
        conn, err = dict:incr(key, -1)
        if not conn then
            return nil, err
        end
        return nil, "rejected"
    end
    self.committed = true
    https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/conn.lua

    function _M.incoming(self, key, commit)
        local dict = self.dict
        local max = self.max
    
        self.committed = false
    
        local conn, err
        if commit then
            conn, err = dict:incr(key, 1, 0)
            if not conn then
                return nil, err
            end
    
            if conn > max + self.burst then
                conn, err = dict:incr(key, -1)
                if not conn then
                    return nil, err
                end
                return nil, "rejected"
            end
            self.committed = true
    
        else
            conn = (dict:get(key) or 0) + 1
            if conn > max + self.burst then
                return nil, "rejected"
            end
        end
    
        if conn > max then
            -- make the excessive connections wait
            return self.unit_delay * floor((conn - 1) / max), conn
        end
    
        -- we return a 0 delay by default
        return 0, conn
    end
    
    
    function _M.is_committed(self)
        return self.committed
    end
    
    
    function _M.leaving(self, key, req_latency)
        assert(key)
        local dict = self.dict
    
        local conn, err = dict:incr(key, -1)
        if not conn then
            return nil, err
        end
    
        if req_latency then
            local unit_delay = self.unit_delay
            self.unit_delay = (req_latency + unit_delay) / 2
        end
    
        return conn
    end
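    Putting incoming(), is_committed() and leaving() together, a sketch of the typical wiring per the conn.md documentation (the zone name "my_limit_conn_store" and the numbers are assumptions): the access phase counts a connection in and possibly delays it, and the log phase counts it out again, feeding the measured latency back into unit_delay.

    -- access_by_lua_block: admit, delay or reject the request
    local limit_conn = require "resty.limit.conn"
    local lim, err = limit_conn.new("my_limit_conn_store", 150, 20, 0.5)
    if not lim then
        ngx.log(ngx.ERR, "failed to instantiate resty.limit.conn: ", err)
        return ngx.exit(500)
    end

    local key = ngx.var.binary_remote_addr
    local delay, err = lim:incoming(key, true)
    if not delay then
        if err == "rejected" then
            return ngx.exit(503)
        end
        ngx.log(ngx.ERR, "failed to limit conn: ", err)
        return ngx.exit(500)
    end

    if lim:is_committed() then
        -- remember the state so the log phase can call leaving()
        ngx.ctx.limit_conn = lim
        ngx.ctx.limit_conn_key = key
        ngx.ctx.limit_conn_delay = delay
    end

    if delay >= 0.001 then
        ngx.sleep(delay)
    end

    -- log_by_lua_block: count the connection out again
    local lim = ngx.ctx.limit_conn
    if lim then
        -- the observed request latency (minus the artificial delay) feeds
        -- the running average that updates unit_delay inside leaving()
        local latency = tonumber(ngx.var.request_time) - ngx.ctx.limit_conn_delay
        local conn, err = lim:leaving(ngx.ctx.limit_conn_key, latency)
        if not conn then
            ngx.log(ngx.ERR, "failed to record the connection leaving: ", err)
        end
    end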
    

      

  • Original post: https://www.cnblogs.com/rsapaper/p/10957760.html