  • nginx module development (18): log analysis

    1. Log overview

    nginx has two main kinds of logs: the access log and the error log. The access log records every client request made to nginx, and its format can be customized; the error log records errors that occur while nginx serves clients, and its format cannot be customized. Either log can be selectively turned off.

    From the access log you can learn where users come from, which pages referred them, what client software they use, how often a given URL is hit, and so on; from the error log you can spot performance bottlenecks in a particular service or server. Used well, the logs yield a lot of valuable information.
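    For reference, the two logs are controlled by the access_log and error_log directives. The snippet below is only a minimal sketch: the file paths and the warn level are illustrative, not taken from this post.

        http {
            # access log: one line per request, optionally using a named log_format (see section 2)
            access_log  /var/log/nginx/access.log;

            # error log: file plus minimum severity (debug|info|notice|warn|error|crit);
            # the message format itself is fixed by nginx
            error_log   /var/log/nginx/error.log  warn;

            server {
                # the access log can be switched off selectively, e.g. for a health-check vhost
                access_log  off;
            }
        }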

    2. Access log

    [Access.log]

    log_format  main  '$remote_addr $remote_user [$time_local] "$request" $http_host '
                      '$status $upstream_status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $ssl_protocol $ssl_cipher $upstream_addr '
                      '$request_time $upstream_response_time';

    Variable                    Description (with an example value)
    $remote_addr                client address, e.g. 1.1.1.1
    $remote_user                client user name, e.g. -
    $time_local                 time of the request and its time zone, e.g. 18/Jul/2012:17:00:01 +0800
    $request                    request URI and HTTP protocol, e.g. "GET /XX HTTP/1.1"
    $http_host                  host requested by the client (IP or domain name), e.g. hello.world.com or 2.2.2.2
    $status                     HTTP response status, e.g. 200
    $upstream_status            upstream response status, e.g. 200
    $body_bytes_sent            size of the response body sent to the client, e.g. 547
    $http_referer               referrer, i.e. the page the visitor came from, e.g. "https://hello.cj.com.../"
    $http_user_agent            client user agent, e.g. "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; GTB7.0; .NET4.0C;
    $ssl_protocol               SSL protocol version, e.g. TLSv1
    $ssl_cipher                 cipher used for the exchange, e.g. RC4-SHA
    $upstream_addr              address of the upstream server that actually served the request, e.g. 3.3.3.3:80
    $request_time               total time spent on the whole request, e.g. 0.205
    $upstream_response_time     upstream response time during the request, e.g. 0.002

    Example from production:

    1.1.1.1 - [02/Aug/2012:14:47:12 +0800] "GET /images/XX/20100324752729.png HTTP/1.1" hello.world.com 200 200 2038 "https://hello.cj.com/XX/PaymentResult.htm" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Tablet PC 2.0; 360SE)" TLSv1 AES128-SHA 3.3.3.3:80 0.198 0.001
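    For completeness, lines like the one above are produced by referencing the "main" format from an access_log directive; a minimal sketch (the log path is an example, not from the post):

        # apply the "main" log_format defined above; the path is illustrative
        access_log  /var/log/nginx/access.log  main;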

    3. Error log

    Error message / explanation:

    "upstream prematurely closed connection"
        Caused by the upstream closing the connection prematurely.

    "recv() failed (104: Connection reset by peer)"
        1) The server's concurrent connections exceeded what it can handle and it dropped some of them;
        2) the client closed the browser while the server was still sending it data;
        3) the user pressed Stop in the browser.

    "(111: Connection refused) while connecting to upstream"
        The upstream was down or unreachable when nginx tried to connect to it.

    "(111: Connection refused) while reading response header from upstream"
        The upstream went down or became unreachable after the connection succeeded, while nginx was reading the response header.

    "(111: Connection refused) while sending request to upstream"
        The upstream went down or became unreachable after nginx connected to it, while the request was being sent.

    "(110: Connection timed out) while connecting to upstream"
        nginx timed out while connecting to the upstream.

    "(110: Connection timed out) while reading upstream"
        nginx timed out while reading the response from the upstream.

    "(110: Connection timed out) while reading response header from upstream"
        nginx timed out while reading the response header from the upstream.

    "(104: Connection reset by peer) while connecting to upstream"
        The upstream sent an RST and reset the connection.

    "upstream sent invalid header while reading response header from upstream"
        The upstream sent an invalid response header.

    "upstream sent no valid HTTP/1.0 header while reading response header from upstream"
        The upstream sent an invalid response header.

    "client intended to send too large body"
        The client sent a request body larger than the configured maximum allowed size (client_max_body_size, 1M by default).

    "reopening logs"
        Someone sent the nginx process a kill -USR1 signal.

    "gracefully shutting down"
        Someone sent the nginx process a kill -WINCH signal.

    "no servers are inside upstream"
        No server is configured inside the upstream block.

    "no live upstreams while connecting to upstream"
        All of the servers in the upstream are down.

    "SSL_do_handshake() failed"
        The SSL handshake failed.

    "SSL_write() failed (SSL:) while sending to client"
        -

    "(13: Permission denied) while reading upstream"
        -

    "(98: Address already in use) while connecting to upstream"
        -

    "(99: Cannot assign requested address) while connecting to upstream"
        -

    "ngx_slab_alloc() failed: no memory in SSL session shared cache"
        Typically caused by ssl_session_cache being too small.

    "could not add new SSL session to the session cache while SSL handshaking"
        Typically caused by ssl_session_cache being too small.

    "send() failed (111: Connection refused)"
        -

     

     
