Local Log Output and Distributed (Full-Chain) Tracing Log Configuration

    In recent interviews I've been asked a lot about log management, so today let's talk about local log output and full-chain (distributed) tracing log configuration.
     
    Goal: introduce the local logging system and the full-chain tracing log system used in our framework. For local logging we use Logback, Spring's recommended logger, in an essentially zero-configuration setup. Full-chain log tracing, also called distributed logging, uses Zipkin. The principle is simple:
             1. Each module integrates Zipkin's log collection and writes trace data to Kafka (producer);
             2. The Zipkin server consumes from Kafka (consumer) and stores the data in Elasticsearch;
             3. The Zipkin UI or Kibana displays the results; in the projects I've worked on we used both.
     
     一. Local Log Output Configuration
     
            1. Add the log configuration file [src/main/resources/logback-spring.xml]
                      You must use this exact configuration; it was agreed with Ops in advance. The log file format, storage path, retention period, naming and so on all follow their requirements, so don't roll your own config that Ops won't accept.
                      Note that the variable ${app.dir} is used here and must be given a value first.
                      There are two ways to do that. The first is to include our pre-packaged project module in pom.xml (currently version 0.0.2).
                      The other is to write the code yourself: add the following Java to your startup class.
            log.info("Initializing System.setProperty(\"app.dir\")");
            String userDir = System.getProperty("user.dir");
            System.setProperty("app.dir",
                    userDir.substring(userDir.lastIndexOf(File.separator)));
    The contents of logback-spring.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <property name="CONSOLE_LOG_PATTERN"
                  value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:[%L]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
        <property name="FILE_LOG_PATTERN"
                  value="${FILE_LOG_PATTERN:-%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} :[%L] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
        <property name="LOG_PATH" value="/log/web/${app.dir}"/>
        <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/${app.dir}-info}"/>
        <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
        <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
     
        <appender name="FILE"
                  class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${LOG_FILE}.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- roll over hourly -->
                <fileNamePattern>${LOG_FILE}-%d{yyyy-MM-dd-HH00}.%i.log</fileNamePattern>
                <!-- each file should be at most 100MB, keep 15 days worth of history, but at most 20GB -->
                <maxFileSize>100MB</maxFileSize>
                <maxHistory>360</maxHistory>
                <totalSizeCap>20GB</totalSizeCap>
            </rollingPolicy>
            <encoder>
                <pattern>${FILE_LOG_PATTERN}</pattern>
            </encoder>
        </appender>
     
        <logger name="org.springframework.security" level="DEBUG"/>
     
        <logger name="org.springframework.cloud.sleuth.instrument.web.client.feign.TraceFeignClient" level="DEBUG"/>
        <logger name="org.springframework.web.servlet.DispatcherServlet" level="DEBUG"/>
        <logger name="org.springframework.cloud.sleuth.instrument.web.TraceFilter" level="DEBUG"/>
        <logger name="com.jarvis.cache" level="DEBUG"/>
        <logger name="com.youli" level="DEBUG"/>
     
        <logger name="org.springframework.security.web.util.matcher" level="INFO"/>
     
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE"/>
        </root>
     
    </configuration>
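A note on how the rolling-policy numbers above fit together: the %d pattern in fileNamePattern resolves down to the hour, so the appender rolls hourly, and maxHistory=360 therefore keeps 360 hourly files, i.e. 15 days of history (subject to the 20GB total cap). As a quick sanity check:

```java
public class RetentionMath {
    // With hourly rollover, maxHistory counts hourly files,
    // so days of history = maxHistory / 24.
    static int daysOfHistory(int maxHistoryHours) {
        return maxHistoryHours / 24;
    }

    public static void main(String[] args) {
        System.out.println(daysOfHistory(360)); // prints 15, matching the comment in the config
    }
}
```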
    2. Add the system-property setup code to the Spring Boot startup class
     
        public static void main(String[] args) {
            //Just add these two lines to capture the directory the jar runs from; log files will be written under /log/web/{app.dir} (requires import java.io.File)
            String userDir = System.getProperty("user.dir");
            System.setProperty("app.dir", userDir.substring(userDir.lastIndexOf(File.separator)));
            //End
            SpringApplication.run(AppGatewayApplication.class, args);
        }
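To make the substring trick concrete: user.dir is the working directory the JVM was launched from, and the code keeps everything from the last path separator onward. A small sketch, using a hypothetical path (/opt/apps/order-service is made up for illustration):

```java
public class AppDirDemo {
    // Same derivation as in the startup class, with the separator passed in
    // so the example is platform-independent.
    static String appDir(String userDir, char separator) {
        return userDir.substring(userDir.lastIndexOf(separator));
    }

    public static void main(String[] args) {
        // Hypothetical launch directory on Linux:
        System.out.println(appDir("/opt/apps/order-service", '/')); // prints "/order-service"
    }
}
```

Note that the derived value keeps the leading separator.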
    3. Code logic: the Lombok way.
     
    Lombok is a very handy toolkit (annotations such as @Setter and @Getter). It's a big topic; if you're new to it, look up an introduction.
    Installation is simple too: download the jar from the official site and run it locally with java -jar.
     
    Add the Lombok dependency to pom.xml
     
        <!-- Lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    Add the @Slf4j annotation to the class, and you can happily log away.
     
    package com.youli.demo.controller;
     
    import javax.servlet.http.HttpServletRequest;
     
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;
     
    import lombok.extern.slf4j.Slf4j;
     
    @RestController
    @Slf4j // add this annotation
    @RequestMapping(value = "/demo/log")
    public class LogController {
        // This field is no longer needed:
        // private final Logger logger = LoggerFactory.getLogger(this.getClass());
     
        @RequestMapping(value = "/logMe")
        public String logMe(HttpServletRequest request) {
            String a = request.getParameter("a");
            String b = request.getParameter("b");
            // Never build log messages by string concatenation; use placeholders,
            // so the formatting cost is not paid when the level is disabled.
            // logger.debug("params: a={}, b={}", a, b);
            // logger is gone; use the log field generated by @Slf4j:
            log.debug("params: a={}, b={}", a, b);
            return "ok";
        }
    }
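The comment about placeholders deserves a concrete illustration: with string concatenation the message is assembled before the logger can check the level, so the formatting cost is paid even when DEBUG is off, whereas a level check before formatting skips the work. A toy sketch (the toy logger below is mine, not SLF4J's internals, though SLF4J's placeholder style achieves the same effect):

```java
public class PlaceholderDemo {
    static int formatCalls = 0;

    // Simulates an expensive formatting/toString() step.
    static String expensive(String v) {
        formatCalls++;
        return v;
    }

    static final boolean DEBUG_ENABLED = false;

    // Concatenation style: expensive() runs even though DEBUG is off.
    static void logConcat(String v) {
        String msg = "params: a=" + expensive(v);
        if (DEBUG_ENABLED) System.out.println(msg);
    }

    // Placeholder style: formatting only happens when the level is enabled.
    static void logPlaceholder(String v) {
        if (DEBUG_ENABLED) System.out.println("params: a=" + expensive(v));
    }

    public static void main(String[] args) {
        logConcat("x");
        System.out.println(formatCalls); // prints 1: cost paid despite DEBUG being off
        logPlaceholder("x");
        System.out.println(formatCalls); // still prints 1: cost skipped
    }
}
```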
     
    After the application starts you can find the logs under D:\log\web\[your project directory name].
     
    4. Logging scenarios
     
    Scenario: an API call returns 404
     
    Modify the log configuration file [src/main/resources/logback-spring.xml] and add a logger entry that sets the RequestMappingHandlerMapping class to DEBUG output:
     
     <logger name="org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping" level="DEBUG" />
    Then, when a 404 occurs, the server log shows output like this:
        2017-12-14 11:35:17.691 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Looking up handler method for path /abcdsdfsdf
        2017-12-14 11:35:17.694 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Did not find handler method for [/abcdsdfsdf]
        2017-12-14 11:35:17.699 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Looking up handler method for path /error
        2017-12-14 11:35:17.701 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Returning handler method [public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)]
    二. Full-Chain Tracing Log Configuration
     
    1. Goal
     
    Two simple configuration steps and you're done. The sign of success is that log output carries a global trace ID, like the lines below ([bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] contains the global trace ID and span ID):
     
    2017-12-15 15:07:55.570  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] s.c.a.AnnotationConfigApplicationContext :[583] Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@40c35b01: startup date [Fri Dec 15 15:07:55 CST 2017]; parent: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@162be91c
    2017-12-15 15:07:55.636  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] f.a.AutowiredAnnotationBeanPostProcessor :[155] JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
    2017-12-15 15:07:55.920  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.netflix.config.ChainedDynamicProperty  :[115] Flipping property: common-external-platform.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
    2017-12-15 15:07:55.945  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.n.u.concurrent.ShutdownEnabledTimer    :[58] Shutdown hook installed for: NFLoadBalancer-PingTimer-common-external-platform
    2017-12-15 15:07:55.965  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.netflix.loadbalancer.BaseLoadBalancer  :[192] Client: common-external-platform instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=common-external-platform,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
    2017-12-15 15:07:55.971  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.n.l.DynamicServerListLoadBalancer      :[214] Using serverListUpdater PollingServerListUpdater
    2017-12-15 15:07:55.999  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.netflix.config.ChainedDynamicProperty  :[115] Flipping property: common-external-platform.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
    2017-12-15 15:07:56.001  INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.n.l.DynamicServerListLoadBalancer      :[150] DynamicServerListLoadBalancer for client common-external-platform initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=common-external-platform,current list of Servers=[10.18.2.82:9050, 10.18.2.81:9050],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone;   Instance count:2;   Active connections count: 0;    Circuit breaker tripped count: 0;   Active connections per server: 0.0;]
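For reference, the bracketed segment that Sleuth prepends has four comma-separated fields: the application name, the trace ID, the span ID, and whether the span is exported to Zipkin. A minimal sketch of splitting such a prefix apart (the parse helper is hypothetical, just for illustration):

```java
public class TraceFieldsDemo {
    // Splits a Sleuth log prefix like "[bootstrap,948cb6650eb020a6,948cb6650eb020a6,true]"
    // into its four fields: application name, trace ID, span ID, export flag.
    static String[] parse(String prefix) {
        return prefix.substring(1, prefix.length() - 1).split(",");
    }

    public static void main(String[] args) {
        String[] f = parse("[bootstrap,948cb6650eb020a6,948cb6650eb020a6,true]");
        System.out.println(f[0]); // application name: bootstrap
        System.out.println(f[1]); // trace ID, shared across the whole call chain
        System.out.println(f[2]); // span ID (equals the trace ID for the root span)
        System.out.println(f[3]); // whether the span is exported to Zipkin
    }
}
```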
    2. Steps
     
    1. Modify the configuration file [pom.xml] to add the Zipkin dependencies
     
        <!-- Full-chain tracing: Zipkin, collected via Kafka -->
 
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-stream-kafka</artifactId>
        </dependency>
    2. Modify the configuration file [src/main/resources/application.yml] to set the Kafka broker and ZooKeeper node addresses (note: the values below are for the SIT environment) and set the Sleuth sampling rate to 1 (100% sampling):
    spring:
      cloud:
        stream:
          kafka:
            binder:
              brokers: 192.167.1.3:9092,192.168.1.4:9092,192.168.1.5:9092
              zkNodes: 192.168.1.3:2181
      sleuth:
        sampler:
          percentage: 1.0
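If a project uses application.properties instead of YAML, the same settings would flatten to the keys below (same SIT addresses as above; the property form is my assumption, since our modules use the YAML file):

```properties
spring.cloud.stream.kafka.binder.brokers=192.167.1.3:9092,192.168.1.4:9092,192.168.1.5:9092
spring.cloud.stream.kafka.binder.zkNodes=192.168.1.3:2181
spring.sleuth.sampler.percentage=1.0
```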
    That's it. Simple, right?
Original article: https://www.cnblogs.com/haoliyou/p/10000550.html