Generating Logstash-standard JSON logs with Logback or Log4j in Spring Boot

I. Dependencies

Since the configuration emits JSON-formatted log output, the following dependency must be added (shown as a Gradle coordinate and as the equivalent Maven dependency):

    "net.logstash.logback:logstash-logback-encoder:4.11",

<!-- https://mvnrepository.com/artifact/net.logstash.logback/logstash-logback-encoder -->
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>4.11</version>
</dependency>

II. Configuration Notes

1. The log output path:

    <property name="LOG_PATH" value="phantom-log" />

2. Read properties from the Spring context; here, the application name and the IP of the server the service runs on:

    <springProperty scope="context" name="appName" source="spring.application.name" />
    <springProperty scope="context" name="ip" source="spring.cloud.client.ipAddress" />
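For these lookups to resolve, the source properties must actually be present in the Spring environment; otherwise Logback prints appName_IS_UNDEFINED / ip_IS_UNDEFINED, exactly as in the sample output further below. A minimal application.yml sketch (the service name is illustrative; spring.cloud.client.ipAddress is contributed automatically when Spring Cloud is on the classpath, and is spelled spring.cloud.client.ip-address on newer Spring Cloud versions):

    # application.yml -- minimal sketch, illustrative value
    spring:
      application:
        name: data-collector   # resolves ${appName} above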

3. Define the log pattern:

    <property name="CONSOLE_LOG_PATTERN"
                value="[%d{yyyy-MM-dd HH:mm:ss.SSS} ${ip} ${appName} %highlight(%-5level) %yellow(%X{X-B3-TraceId}),%green(%X{X-B3-SpanId}),%blue(%X{X-B3-ParentSpanId}) %yellow(%thread) %green(%logger) %msg%n"/>
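With this pattern a console line comes out roughly like the following (the ip and appName values are illustrative; the %highlight/%yellow/%green/%blue conversions only wrap the same text in ANSI color codes, and the opening [ has no closing counterpart in the pattern, so it prints as-is):

    [2019-03-14 15:02:15.645 192.168.1.10 data-collector INFO  5af38b0b210d2f32,1bd1374f69a8cb9e, main org.data.collector.Application Started Application in 5.971 seconds (JVM running for 19.807)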

4. Add an appender that writes to a rolling file:

    <appender name="FILEERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">

5. Specify the output file location (the leading ../ makes it relative to the process working directory):

    <file>../${LOG_PATH}/${appName}/${appName}-error.log</file>

6. Configure the rolling policy: files are split by day, and additionally whenever a file exceeds 2 MB:

    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>../${LOG_PATH}/${appName}/${appName}-error-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>2MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
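Under this policy, assuming ${appName} resolved to a hypothetical data-collector, one day's error logs would be laid out like:

    ../phantom-log/data-collector/data-collector-error.log                  (file currently being written)
    ../phantom-log/data-collector/data-collector-error-2019-03-14.0.log     (first rolled chunk; the %i index starts at 0)
    ../phantom-log/data-collector/data-collector-error-2019-03-14.1.log     (created once the 2 MB cap is exceeded again)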


7. The complete file below defines four file appenders, FILEERROR, FILEWARN, FILEINFO, and logstash, plus a console appender, STDOUT.

FILEERROR, FILEWARN, and FILEINFO are essentially the same appender repeated, each accepting a different log level.
The logstash appender produces the JSON-formatted log file, which makes integration with an ELK logging stack straightforward.
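As a minimal sketch of where these JSON fields come from at runtime: the X-B3-* keys are normally put into the MDC by Spring Cloud Sleuth; the example below sets them by hand purely to illustrate the mapping (the class name and all values are made up):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class LoggingDemo {
        private static final Logger log = LoggerFactory.getLogger(LoggingDemo.class);

        public static void main(String[] args) {
            // Sleuth normally manages these MDC keys; set them manually here for illustration only.
            MDC.put("X-B3-TraceId", "5af38b0b210d2f32"); // -> "trace" in the logstash appender
            MDC.put("X-B3-SpanId", "1bd1374f69a8cb9e");  // -> "span"
            try {
                log.info("order accepted");              // -> "message", with "level": "INFO"
                log.error("payment failed",
                        new IllegalStateException("gateway timeout")); // -> "stack_trace" via %exception{10}
            } finally {
                MDC.clear(); // avoid leaking trace context into reused threads
            }
        }
    }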

III. Full Configuration

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <contextName>${HOSTNAME}</contextName>
      <property name="LOG_PATH" value="phantom-log" />
      <springProperty scope="context" name="appName" source="spring.application.name" />
      <springProperty scope="context" name="ip" source="spring.cloud.client.ipAddress" />
      <property name="CONSOLE_LOG_PATTERN"
                value="[%d{yyyy-MM-dd HH:mm:ss.SSS} ${ip} ${appName} %highlight(%-5level) %yellow(%X{X-B3-TraceId}),%green(%X{X-B3-SpanId}),%blue(%X{X-B3-ParentSpanId}) %yellow(%thread) %green(%logger) %msg%n"/>
    
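  <!-- ERROR log file: the LevelFilter accepts only ERROR events and denies everything else -->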
      <appender name="FILEERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>../${LOG_PATH}/${appName}/${appName}-error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>../${LOG_PATH}/${appName}/${appName}-error-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>2MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <append>true</append>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
          <pattern>${CONSOLE_LOG_PATTERN}</pattern>
          <charset>utf-8</charset>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
          <level>error</level>
          <onMatch>ACCEPT</onMatch>
          <onMismatch>DENY</onMismatch>
        </filter>
      </appender>
    
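  <!-- WARN log file: same structure as FILEERROR, but accepts only WARN events -->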
      <appender name="FILEWARN" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>../${LOG_PATH}/${appName}/${appName}-warn.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>../${LOG_PATH}/${appName}/${appName}-warn-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>2MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <append>true</append>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
          <pattern>${CONSOLE_LOG_PATTERN}</pattern>
          <charset>utf-8</charset>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
          <level>warn</level>
          <onMatch>ACCEPT</onMatch>
          <onMismatch>DENY</onMismatch>
        </filter>
      </appender>
    
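  <!-- INFO log file: same structure again, accepting only INFO events -->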
      <appender name="FILEINFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>../${LOG_PATH}/${appName}/${appName}-info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>../${LOG_PATH}/${appName}/${appName}-info-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>2MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <append>true</append>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
          <pattern>${CONSOLE_LOG_PATTERN}</pattern>
          <charset>utf-8</charset>
        </encoder>
    
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
          <level>info</level>
          <onMatch>ACCEPT</onMatch>
          <onMismatch>DENY</onMismatch>
        </filter>
      </appender>
    
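  <!-- JSON file for Logstash/ELK: rolls daily, keeps 7 days of history -->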
      <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>../${LOG_PATH}/${appName}/${appName}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>../${LOG_PATH}/${appName}/${appName}-%d{yyyy-MM-dd}.json</fileNamePattern>
          <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
          <providers>
            <timestamp>
              <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
              <pattern>
                {
                "ip": "${ip}",
                "app": "${appName}",
                "level": "%level",
                "trace": "%X{X-B3-TraceId:-}",
                "span": "%X{X-B3-SpanId:-}",
                "parent": "%X{X-B3-ParentSpanId:-}",
                "thread": "%thread",
                "class": "%logger{40}",
                "message": "%message",
                "stack_trace": "%exception{10}"
                }
              </pattern>
            </pattern>
          </providers>
        </encoder>
      </appender>
    
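  <!-- Console output with a DEBUG threshold (logger/root levels are applied before the filter) -->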
      <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>${CONSOLE_LOG_PATTERN}</pattern>
          <charset>utf-8</charset>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
          <level>debug</level>
        </filter>
      </appender>
    
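  <!-- Per-package levels; more specific loggers override the root level -->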
      <logger name="org.springframework" level="INFO" />
      <logger name="org.hibernate" level="INFO" />
      <logger name="com.kingboy.repository" level="DEBUG" />
    
      <root level="INFO">
        <appender-ref ref="FILEERROR" />
        <appender-ref ref="FILEWARN" />
        <appender-ref ref="FILEINFO" />
        <appender-ref ref="logstash" />
        <appender-ref ref="STDOUT" />
      </root>
    </configuration>

The generated log file looks like this (ip_IS_UNDEFINED and appName_IS_UNDEFINED appear because Logback renders any unresolved variable as name_IS_UNDEFINED; once the Spring properties are set, the real values show up):

    {"@timestamp":"2019-03-14T07:02:15.318+00:00","ip":"ip_IS_UNDEFINED","app":"appName_IS_UNDEFINED","level":"INFO","trace":"","span":"","parent":"","thread":"main","class":"o.apache.coyote.http11.Http11NioProtocol","message":"Starting ProtocolHandler ["https-jsse-nio-8443"]","stack_trace":""}
    {"@timestamp":"2019-03-14T07:02:15.621+00:00","ip":"ip_IS_UNDEFINED","app":"appName_IS_UNDEFINED","level":"INFO","trace":"","span":"","parent":"","thread":"main","class":"o.apache.tomcat.util.net.NioSelectorPool","message":"Using a shared selector for servlet write/read","stack_trace":""}
    {"@timestamp":"2019-03-14T07:02:15.633+00:00","ip":"ip_IS_UNDEFINED","app":"appName_IS_UNDEFINED","level":"INFO","trace":"","span":"","parent":"","thread":"main","class":"o.apache.coyote.http11.Http11NioProtocol","message":"Starting ProtocolHandler ["http-nio-80"]","stack_trace":""}
    {"@timestamp":"2019-03-14T07:02:15.642+00:00","ip":"ip_IS_UNDEFINED","app":"appName_IS_UNDEFINED","level":"INFO","trace":"","span":"","parent":"","thread":"main","class":"o.s.b.w.embedded.tomcat.TomcatWebServer","message":"Tomcat started on port(s): 8443 (https) 80 (http) with context path ''","stack_trace":""}
    {"@timestamp":"2019-03-14T07:02:15.645+00:00","ip":"ip_IS_UNDEFINED","app":"appName_IS_UNDEFINED","level":"INFO","trace":"","span":"","parent":"","thread":"main","class":"org.data.collector.Application","message":"Started Application in 5.971 seconds (JVM running for 19.807)","stack_trace":""}

    filebeat.yml

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

- input_type: log
  enabled: true  # logs were not shipped for half a day -- this setting was the culprit!!!

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/*access*.log
  json.keys_under_root: true  # the json.* settings must live here, inside the prospector (input) section!!!
  json.overwrite_keys: true

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["ip:port","ip:port"]
  index: "filebeat_server_nginx_%{+YYYY-MM}"

Two things to note here:
json.keys_under_root: defaults to false, meaning the parsed JSON fields are nested under a json key; set it to true and all keys are promoted to the root of the event.
json.overwrite_keys: whether existing keys may be overwritten. This is the critical pairing: once keys_under_root is true, also setting overwrite_keys to true lets the fields from your JSON logs override Filebeat's default keys.

Other available options:
json.add_error_key: adds a json_error key to the event, recording the error when JSON parsing fails.
json.message_key: the JSON key whose value Filebeat treats as the log message (used for line filtering and multiline handling); often set to log or message.
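A sketch of how those two options would sit next to the existing json.* settings in the same prospector (values illustrative):

    json.add_error_key: true
    json.message_key: log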

    logback-spring.xml

    <?xml version="1.0" encoding="UTF-8"?>
    
    <configuration debug="false" scan="true" scanPeriod="600000">
<!-- Define where log files are stored. Do not use relative paths in Logback configuration. -->
        <property name="LOG_HOME" value="/var/log" />  
        <contextName>${HOSTNAME}</contextName>
        <springProperty scope="context" name="appName" source="spring.application.name" />
          <springProperty scope="context" name="ip" source="spring.cloud.client.ipAddress" />
          
<!-- Formatted output: %d = date, %thread = thread name, %-5level = level left-padded to 5 chars, %msg = log message, %n = newline -->
      <property name="CONSOLE_LOG_PATTERN"
                value="[%d{yyyy-MM-dd HH:mm:ss.SSS} ${ip} ${appName} %highlight(%-5level) %yellow(%X{X-B3-TraceId}),%green(%X{X-B3-SpanId}),%blue(%X{X-B3-ParentSpanId}) %yellow(%thread) %green(%logger) %msg%n"/>
        
        
        <!-- <logger name="org.springframework.web" level="DEBUG" /> -->
    
    
<!-- show parameters for hibernate sql (Hibernate-specific loggers) -->
        <!--<logger name="org.hibernate.type.descriptor.sql.BasicBinder"  level="TRACE" />-->
        <!--<logger name="org.hibernate.type.descriptor.sql.BasicExtractor"  level="DEBUG" />-->
        <!--<logger name="org.hibernate.engine.QueryParameters" level="DEBUG" />-->
        <!--<logger name="org.hibernate.engine.query.HQLQueryPlan" level="DEBUG" />-->
        
        <!-- <logger name="org.hibernate.SQL" level="DEBUG" />  -->
<logger name="com.italktv.platform" level="info" />
    
<!-- Console output -->
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>${CONSOLE_LOG_PATTERN}</pattern>
                <charset>utf-8</charset>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
                <level>debug</level>
            </filter>
        </appender>
        
<!-- Generate a new log file each day -->
        <appender name="FILE"  class="ch.qos.logback.core.rolling.RollingFileAppender">  
<!-- Path and name of the log file currently being written -->
            <file>${LOG_HOME}/bigdata/data-api.log</file> 
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!-- File name pattern for rolled-over log files -->
                <FileNamePattern>${LOG_HOME}/bigdata/data-api.%d{yyyy-MM-dd}.%i.log</FileNamePattern> 
<!-- Number of days of log files to keep -->
                <MaxHistory>30</MaxHistory>
                <maxFileSize>1MB</maxFileSize> 
                <totalSizeCap>10MB</totalSizeCap> 
            </rollingPolicy>   
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
          <providers>
            <timestamp>
              <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
              <pattern>
                {
                "ip": "${ip}",
                "app": "${appName}",
                "level": "%level",
                "trace": "%X{X-B3-TraceId:-}",
                "span": "%X{X-B3-SpanId:-}",
                "parent": "%X{X-B3-ParentSpanId:-}",
                "thread": "%thread",
                "class": "%logger{40}",
                "message": "%message",
                "stack_trace": "%exception{10}"
                }
              </pattern>
            </pattern>
          </providers>
        </encoder> 
<!-- Maximum log file size (alternative triggering policy, left disabled)
           <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
             <MaxFileSize>10KB</MaxFileSize>
           </triggeringPolicy>
           -->
        </appender> 
          
<!-- Log output level -->
        <root level="INFO">
<!-- Do not write stdout logs in production -->
            <!--appender-ref ref="STDOUT" /-->
            
            <appender-ref ref="FILE" />
        </root> 
    
    </configuration>

    ____________________________________________

application.yml (pointing Spring Boot at the config; a file named logback-spring.xml on the classpath is normally also picked up automatically):

logging:
  config: logback-spring.xml

    _____________________________

Log4j field values inside Tomcat:

    https://www.jianshu.com/p/a26da0c55255

Original article: https://www.cnblogs.com/bigben0123/p/10530446.html