  • Hey! How about creating rolling logs for your application?

      A server-side application must produce proper logs; otherwise, how would you ever troubleshoot problems?

      How to write those logs is a craft in itself; otherwise Java would not have so many vendors competing to provide logging frameworks!

      Log rolling, in turn, is usually a hard requirement, since nobody can otherwise guarantee the volume and readability of the logs. There are two main directions for implementing it:

        1. Let the application (that is, its logging framework) print and roll the logs itself, deciding entirely where they go. The upside is that it ships with the application and needs no external processing; the downside is that it depends completely on that application, costs some of its performance, and if the framework has a bug the guarantee is gone. (I will illustrate this later with logback's rolling.)

        2. Use a third-party tool, usually hooked in via the console (a pipe) or an agent. The rolling logic stays independent and does not intrude on the code; if it ever misbehaves you can simply kill it, and it will not break because of the application's own bugs, so you always keep enough evidence for troubleshooting. (I will illustrate this later with cronolog.)

      

    Concrete log-rolling implementations

    1. Application-side rolling: logback's RollingPolicy, for example, comes with rolling built in, but it has plenty of pitfalls!

      1.1. First, let's look at the rolling configuration (in logback.xml):

        <!-- output to file -->
        <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${log_path}/api.ln.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy" >
                <fileNamePattern>${log_path}/api.%d{yyyy-MM-dd_HH}.log</fileNamePattern>
                <!-- keep 10 days' worth of history capped at 8GB total size -->
                <maxHistory>10</maxHistory>
                <totalSizeCap>8GB</totalSizeCap>
            </rollingPolicy>
            <encoder>
                <pattern>%d{MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>

      This configures time-based rolling, once per hour. maxHistory is 10 and totalSizeCap is 8GB; note that maxHistory counts rollover periods, so with the hourly pattern above it means 10 hours rather than the 10 days the comment suggests (more on that later). Let's see how it actually behaves.
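
      To put the configuration to work, here is a minimal usage sketch, assuming this logback.xml is on the classpath, ${log_path} is defined, and the "file" appender is attached to the root logger (the <root> and <property> parts are omitted above); RollingDemo is just an illustrative class name:

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class RollingDemo {
            private static final Logger log = LoggerFactory.getLogger(RollingDemo.class);

            public static void main(String[] args) throws InterruptedException {
                for (int i = 0; i < 100; i++) {
                    // every write below also doubles as a rollover check, as we will see later
                    log.info("heartbeat {}", i);
                    Thread.sleep(1000);
                }
            }
        }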

    1.2. Now let's look at the rolling code.

      First, one would expect log rolling to have a dedicated thread running at all times (whether the rolling is implemented by the application or by a third-party tool; how else could the rollover moment be detected at any time?).

       In EventPlayer there is a play() method that walks the SAX events parsed from the configuration; when it hits an EndEvent, the end actions are fired, and that is where the rolling policy eventually gets started.

        // ch.qos.logback.core.joran.spi.EventPlayer
        public void play(List<SaxEvent> aSaxEventList) {
            eventList = aSaxEventList;
            SaxEvent se;
            for (currentIndex = 0; currentIndex < eventList.size(); currentIndex++) {
                se = eventList.get(currentIndex);
    
                if (se instanceof StartEvent) {
                    interpreter.startElement((StartEvent) se);
                    // invoke fireInPlay after startElement processing
                    interpreter.getInterpretationContext().fireInPlay(se);
                }
                if (se instanceof BodyEvent) {
                    // invoke fireInPlay before characters processing
                    interpreter.getInterpretationContext().fireInPlay(se);
                    interpreter.characters((BodyEvent) se);
                }
                // the rollingPolicy ends up being started from here (EndEvent handling)
                if (se instanceof EndEvent) {
                    // invoke fireInPlay before endElement processing
                    interpreter.getInterpretationContext().fireInPlay(se);
                    interpreter.endElement((EndEvent) se);
                }
    
            }
        }

      Then, after a few hops, we land in the Interpreter, whose callEndAction() loops over the applicable actions and invokes each one's end() callback:

        // ch.qos.logback.core.joran.spi.Interpreter
        private void callEndAction(List<Action> applicableActionList, String tagName) {
            if (applicableActionList == null) {
                return;
            }
    
            // logger.debug("About to call end actions on node: [" + localName + "]");
            Iterator<Action> i = applicableActionList.iterator();
    
            while (i.hasNext()) {
                Action action = i.next();
                // now let us invoke the end method of the action. We catch and report
                // any eventual exceptions
                try {
                    action.end(interpretationContext, tagName);
                } catch (ActionException ae) {
                    // at this point endAction, there is no point in skipping children as
                    // they have been already processed
                    cai.addError("ActionException in Action for tag [" + tagName + "]", ae);
                } catch (RuntimeException e) {
                    // no point in setting skip
                    cai.addError("RuntimeException in Action for tag [" + tagName + "]", e);
                }
            }
        }

      Finally, the RollingPolicy's start() gets called; in our case that is TimeBasedRollingPolicy.

        // ch.qos.logback.core.rolling.TimeBasedRollingPolicy
        public void start() {
            // set the LR for our utility object
            renameUtil.setContext(this.context);
    
            // find out period from the filename pattern
            if (fileNamePatternStr != null) {
                fileNamePattern = new FileNamePattern(fileNamePatternStr, this.context);
                determineCompressionMode();
            } else {
                addWarn(FNP_NOT_SET);
                addWarn(CoreConstants.SEE_FNP_NOT_SET);
                throw new IllegalStateException(FNP_NOT_SET + CoreConstants.SEE_FNP_NOT_SET);
            }
    
            compressor = new Compressor(compressionMode);
            compressor.setContext(context);
    
            // wcs : without compression suffix
            fileNamePatternWithoutCompSuffix = new FileNamePattern(Compressor.computeFileNameStrWithoutCompSuffix(fileNamePatternStr, compressionMode), this.context);
    
            addInfo("Will use the pattern " + fileNamePatternWithoutCompSuffix + " for the active file");
    
            if (compressionMode == CompressionMode.ZIP) {
                String zipEntryFileNamePatternStr = transformFileNamePattern2ZipEntry(fileNamePatternStr);
                zipEntryFileNamePattern = new FileNamePattern(zipEntryFileNamePatternStr, context);
            }
            // by default, DefaultTimeBasedFileNamingAndTriggeringPolicy is used for the rolling
            if (timeBasedFileNamingAndTriggeringPolicy == null) {
                timeBasedFileNamingAndTriggeringPolicy = new DefaultTimeBasedFileNamingAndTriggeringPolicy<E>();
            }
            timeBasedFileNamingAndTriggeringPolicy.setContext(context);
            timeBasedFileNamingAndTriggeringPolicy.setTimeBasedRollingPolicy(this);
            timeBasedFileNamingAndTriggeringPolicy.start();
    
            if (!timeBasedFileNamingAndTriggeringPolicy.isStarted()) {
                addWarn("Subcomponent did not start. TimeBasedRollingPolicy will not start.");
                return;
            }
    
            // the maxHistory property is given to TimeBasedRollingPolicy instead of to
            // the TimeBasedFileNamingAndTriggeringPolicy. This makes it more convenient
            // for the user at the cost of inconsistency here.
            if (maxHistory != UNBOUND_HISTORY) {
                archiveRemover = timeBasedFileNamingAndTriggeringPolicy.getArchiveRemover();
                archiveRemover.setMaxHistory(maxHistory);
                archiveRemover.setTotalSizeCap(totalSizeCap.getSize());
                if (cleanHistoryOnStart) {
                    addInfo("Cleaning on start up");
                    Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
                    cleanUpFuture = archiveRemover.cleanAsynchronously(now);
                }
            } else if (!isUnboundedTotalSizeCap()) {
                addWarn("'maxHistory' is not set, ignoring 'totalSizeCap' option with value ["+totalSizeCap+"]");
            }
            // call super.start() to set the started flag, so initialization cannot run more than once
            super.start();
        }
        // DefaultTimeBasedFileNamingAndTriggeringPolicy: its start() mostly delegates to the base TimeBasedFileNamingAndTriggeringPolicy, while itself handling a few error cases and creating the archive remover used by the concrete implementation
        @Override
        public void start() {
            super.start();
            if (!super.isErrorFree())
                return;
            if(tbrp.fileNamePattern.hasIntegerTokenCOnverter()) {
                addError("Filename pattern ["+tbrp.fileNamePattern+"] contains an integer token converter, i.e. %i, INCOMPATIBLE with this configuration. Remove it.");
                return;
            }
            
            archiveRemover = new TimeBasedArchiveRemover(tbrp.fileNamePattern, rc);
            archiveRemover.setContext(context);
            started = true;
        }
        // the base TimeBasedFileNamingAndTriggeringPolicy start(): this is where the actual time-based rolling state gets set up
        public void start() {
            DateTokenConverter<Object> dtc = tbrp.fileNamePattern.getPrimaryDateTokenConverter();
            if (dtc == null) {
                throw new IllegalStateException("FileNamePattern [" + tbrp.fileNamePattern.getPattern() + "] does not contain a valid DateToken");
            }
    
            if (dtc.getTimeZone() != null) {
                rc = new RollingCalendar(dtc.getDatePattern(), dtc.getTimeZone(), Locale.getDefault());
            } else {
                rc = new RollingCalendar(dtc.getDatePattern());
            }
            addInfo("The date pattern is '" + dtc.getDatePattern() + "' from file name pattern '" + tbrp.fileNamePattern.getPattern() + "'.");
            rc.printPeriodicity(this);
    
            if (!rc.isCollisionFree()) {
                addError("The date format in FileNamePattern will result in collisions in the names of archived log files.");
                addError(CoreConstants.MORE_INFO_PREFIX + COLLIDING_DATE_FORMAT_URL);
                withErrors();
                return;
            }
    
            setDateInCurrentPeriod(new Date(getCurrentTime()));
            if (tbrp.getParentsRawFileProperty() != null) {
                File currentFile = new File(tbrp.getParentsRawFileProperty());
                if (currentFile.exists() && currentFile.canRead()) {
                    setDateInCurrentPeriod(new Date(currentFile.lastModified()));
                }
            }
            addInfo("Setting initial period to " + dateInCurrentPeriod);
            computeNextCheck();
        }

      After all that initialization, notice that no polling thread has been started, which is a bit counter-intuitive. Either way, let's keep going and look at the append() logic of RollingFileAppender, since that is the real entry point for every log event.

       // ch.qos.logback.core.rolling.RollingFileAppender; the entry point is UnsynchronizedAppenderBase.doAppend()
       // ch.qos.logback.core.OutputStreamAppender
        @Override
        protected void append(E eventObject) {
            if (!isStarted()) {
                return;
            }
            // delegates to the RollingFileAppender implementation
            subAppend(eventObject);
        }
       // ch.qos.logback.core.rolling.RollingFileAppender
        @Override
        protected void subAppend(E event) {
            // The roll-over check must precede actual writing. This is the
            // only correct behavior for time driven triggers.
    
            // We need to synchronize on triggeringPolicy so that only one rollover
            // occurs at a time
            synchronized (triggeringPolicy) {
                if (triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)) {
                    rollover();
                }
            }
    
            super.subAppend(event);
        }

      Here, rollover() is the actual rolling logic.

      So there you have it: file rolling is driven by incoming writes, and it is synchronized so that writes stay thread safe and the file stays intact.

      In other words, when the rollover moment arrives, the file gets rolled only if something is written at that point; logback never rolls it proactively. If no log is ever written, no rolling ever happens.

      Let's look at the trigger condition first: triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)

        // ch.qos.logback.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy
        public boolean isTriggeringEvent(File activeFile, final E event) {
            long time = getCurrentTime();
            if (time >= nextCheck) {
                Date dateOfElapsedPeriod = dateInCurrentPeriod;
                addInfo("Elapsed period: " + dateOfElapsedPeriod);
                elapsedPeriodsFileName = tbrp.fileNamePatternWithoutCompSuffix.convert(dateOfElapsedPeriod);
                setDateInCurrentPeriod(time);
                computeNextCheck();
                return true;
            } else {
                return false;
            }
        }

      The check simply compares the current time with the scheduled rollover time; once that time has passed it returns true and computes the next rollover time for later use.
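
      To make the mechanism concrete, here is a standalone sketch of the same idea (this is not logback code; TriggerSketch and the hourly window are assumptions matching the config above): keep a nextCheck timestamp at the top of the next hour, and let a write whose timestamp has passed it trigger the rollover and advance the window.

        import java.time.LocalDateTime;
        import java.time.temporal.ChronoUnit;

        public class TriggerSketch {
            // start of the next period; rolling is due once "now" reaches it
            private LocalDateTime nextCheck = LocalDateTime.now()
                    .truncatedTo(ChronoUnit.HOURS).plusHours(1);

            boolean isTriggeringEvent(LocalDateTime now) {
                if (!now.isBefore(nextCheck)) {              // now >= nextCheck
                    nextCheck = now.truncatedTo(ChronoUnit.HOURS).plusHours(1);
                    return true;                             // the caller performs the actual rollover
                }
                return false;
            }

            public static void main(String[] args) {
                TriggerSketch t = new TriggerSketch();
                System.out.println(t.isTriggeringEvent(LocalDateTime.now()));              // false
                System.out.println(t.isTriggeringEvent(LocalDateTime.now().plusHours(2))); // true
            }
        }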

    Next, the actual file-rolling implementation. There are two main steps: 1. roll the current file by renaming it; 2. create a fresh target file so that subsequent writes have somewhere to go.

        /**
         * Implemented by delegating most of the rollover work to a rolling policy.
         */
        public void rollover() {
            // lock is a ReentrantLock, i.e. a mutex: only one thread can be in here at a time
            lock.lock();
            try {
                // Note: This method needs to be synchronized because it needs exclusive
                // access while it closes and then re-opens the target file.
                //
                // make sure to close the hereto active log file! Renaming under windows
                // does not work for open files.
                this.closeOutputStream();
                attemptRollover();
                attemptOpenFile();
            } finally {
                lock.unlock();
            }
        }
        // the rolling itself delegates to the configured policy; here that is TimeBasedRollingPolicy
        private void attemptRollover() {
            try {
                rollingPolicy.rollover();
            } catch (RolloverFailure rf) {
                addWarn("RolloverFailure occurred. Deferring roll-over.");
                // we failed to roll-over, let us not truncate and risk data loss
                this.append = true;
            }
        }
        // ch.qos.logback.core.rolling.TimeBasedRollingPolicy rollover
        public void rollover() throws RolloverFailure {
    
            // when rollover is called the elapsed period's file has
            // been already closed. This is a working assumption of this method.
    
            String elapsedPeriodsFileName = timeBasedFileNamingAndTriggeringPolicy.getElapsedPeriodsFileName();
    
            String elapsedPeriodStem = FileFilterUtil.afterLastSlash(elapsedPeriodsFileName);
    
            if (compressionMode == CompressionMode.NONE) {
                if (getParentsRawFileProperty() != null) {
                    renameUtil.rename(getParentsRawFileProperty(), elapsedPeriodsFileName);
                } // else { nothing to do if CompressionMode == NONE and parentsRawFileProperty == null }
            } else {
                if (getParentsRawFileProperty() == null) {
                    compressionFuture = compressor.asyncCompress(elapsedPeriodsFileName, elapsedPeriodsFileName, elapsedPeriodStem);
                } else {
                    compressionFuture = renameRawAndAsyncCompress(elapsedPeriodsFileName, elapsedPeriodStem);
                }
            }
    
            if (archiveRemover != null) {
                Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
                this.cleanUpFuture = archiveRemover.cleanAsynchronously(now);
            }
        }

      TimeBasedRollingPolicy rolls simply by renaming: it takes the currently active file (the configured <file>), derives a new path from the file name pattern, and renames the file to it. The rename itself has a few subtleties, so it is worth a look at its implementation:

        
        // ch.qos.logback.core.rolling.helper.RenameUtil
        /**
         * A relatively robust file renaming method which in case of failure due to
         * src and target being on different volumes, falls back onto
         * renaming by copying.
         *
         * @param src
         * @param target
         * @throws RolloverFailure
         */
        public void rename(String src, String target) throws RolloverFailure {
            if (src.equals(target)) {
                addWarn("Source and target files are the same [" + src + "]. Skipping.");
                return;
            }
            File srcFile = new File(src);
    
            if (srcFile.exists()) {
                // missing target directories are created first, so you can roll to another location without caring about the directory (permissions aside)
                File targetFile = new File(target);
                createMissingTargetDirsIfNecessary(targetFile);
    
                addInfo("Renaming file [" + srcFile + "] to [" + targetFile + "]");
    
                boolean result = srcFile.renameTo(targetFile);
    
                // if the plain rename fails and the two paths are on different volumes, fall back to renaming by copying: copy the file to the new path, then delete the original
                if (!result) {
                    addWarn("Failed to rename file [" + srcFile + "] as [" + targetFile + "].");
                    Boolean areOnDifferentVolumes = areOnDifferentVolumes(srcFile, targetFile);
                    if (Boolean.TRUE.equals(areOnDifferentVolumes)) {
                        addWarn("Detected different file systems for source [" + src + "] and target [" + target + "]. Attempting rename by copying.");
                        renameByCopying(src, target);
                        return;
                    } else {
                        addWarn("Please consider leaving the [file] option of " + RollingFileAppender.class.getSimpleName() + " empty.");
                        addWarn("See also " + RENAMING_ERROR_URL);
                    }
                }
            } else {
                throw new RolloverFailure("File [" + src + "] does not exist.");
            }
        }
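
      For comparison, here is a hedged java.nio sketch of the same rename-with-fallback idea (RenameSketch is illustrative, not logback's RenameUtil): try the rename first, and fall back to copy-then-delete when an atomic move is not supported, e.g. across volumes.

        import java.io.IOException;
        import java.nio.file.AtomicMoveNotSupportedException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;

        public class RenameSketch {
            static void rename(Path src, Path target) throws IOException {
                if (target.getParent() != null) {
                    Files.createDirectories(target.getParent());    // create missing target dirs
                }
                try {
                    Files.move(src, target, StandardCopyOption.ATOMIC_MOVE);
                } catch (AtomicMoveNotSupportedException e) {        // e.g. different volumes
                    Files.copy(src, target, StandardCopyOption.REPLACE_EXISTING);
                    Files.delete(src);                               // copy first, then delete the original
                }
            }

            public static void main(String[] args) throws IOException {
                rename(Paths.get(args[0]), Paths.get(args[1]));
            }
        }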

      After the rename-based rollover there is one more possible task: deleting expired logs. That is the job of the archiveRemover, the instance created earlier in DefaultTimeBasedFileNamingAndTriggeringPolicy, via archiveRemover.cleanAsynchronously(now):

        public Future<?> cleanAsynchronously(Date now) {
            ArhiveRemoverRunnable runnable = new ArhiveRemoverRunnable(now);
            ExecutorService executorService = context.getScheduledExecutorService();
            Future<?> future = executorService.submit(runnable);
            return future;
        }

      To delete expired logs, an ExecutorService is obtained first and the deletion runs asynchronously; by default this executor keeps 8 resident threads for such housekeeping work.

      Running the deletion asynchronously avoids impacting the business traffic. The cleanup looks like this:

        public class ArhiveRemoverRunnable implements Runnable {
            Date now;
    
            ArhiveRemoverRunnable(Date now) {
                this.now = now;
            }
    
            @Override
            public void run() {
                // first clean the elapsed period(s), then enforce the configured total size cap
                clean(now);
                if (totalSizeCap != UNBOUNDED_TOTAL_SIZE_CAP && totalSizeCap > 0) {
                    capTotalSize(now);
                }
            }
        }
        public void clean(Date now) {
     
            long nowInMillis = now.getTime();
            // for a live appender periodsElapsed is expected to be 1
            int periodsElapsed = computeElapsedPeriodsSinceLastClean(nowInMillis);
            lastHeartBeat = nowInMillis;
            if (periodsElapsed > 1) {
                addInfo("Multiple periods, i.e. " + periodsElapsed + " periods, seem to have elapsed. This is expected at application start.");
            }
            for (int i = 0; i < periodsElapsed; i++) {
                // the offset is derived from maxHistory (minus 1); only periodsElapsed past periods are cleaned in this loop
                int offset = getPeriodOffsetForDeletionTarget() - i;
                Date dateOfPeriodToClean = rc.getEndOfNextNthPeriod(now, offset);
                cleanPeriod(dateOfPeriodToClean);
            }
        }
        public void cleanPeriod(Date dateOfPeriodToClean) {
            // list the files to delete and remove them one by one; if that empties the parent directory, remove the directory as well
            File[] matchingFileArray = getFilesInPeriod(dateOfPeriodToClean);
    
            for (File f : matchingFileArray) {
                addInfo("deleting " + f);
                f.delete();
            }
    
            if (parentClean && matchingFileArray.length > 0) {
                File parentDir = getParentDir(matchingFileArray[0]);
                removeFolderIfEmpty(parentDir);
            }
        }
        // match the files to delete according to the file name pattern
        protected File[] getFilesInPeriod(Date dateOfPeriodToClean) {
            String filenameToDelete = fileNamePattern.convert(dateOfPeriodToClean);
            File file2Delete = new File(filenameToDelete);
    
            if (fileExistsAndIsFile(file2Delete)) {
                return new File[] { file2Delete };
            } else {
                return new File[0];
            }
        }
        // size-cap cleanup of history: note that totalSizeCap must be set, otherwise this automatic cleanup never runs
        void capTotalSize(Date now) {
            long totalSize = 0;
            long totalRemoved = 0;
            for (int offset = 0; offset < maxHistory; offset++) {
                Date date = rc.getEndOfNextNthPeriod(now, -offset);
                File[] matchingFileArray = getFilesInPeriod(date);
                descendingSortByLastModified(matchingFileArray);
                for (File f : matchingFileArray) {
                    long size = f.length();
                    if (totalSize + size > totalSizeCap) {
                        addInfo("Deleting [" + f + "]" + " of size " + new FileSize(size));
                        totalRemoved += size;
                        f.delete();
                    }
                    totalSize += size;
                }
            }
            addInfo("Removed  " + new FileSize(totalRemoved) + " of files");
        }

      That is the whole expired-log deletion logic. The key points:

        1. Only maxHistory periods of logs are cleaned, i.e. the cleanup only looks back n periods;
        2. The size-cap pass only deletes files once the accumulated size exceeds totalSizeCap (and it depends heavily on the ordering of the file list, which here is by last-modified time);
        3. maxHistory is not a maximum number of retention days, whatever misleading docs suggest; it counts rollover periods and bounds the cleanup scan, although the previous step does use it once as well (see the sketch below).
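
      A tiny sketch of point 3 (MaxHistoryDemo is just an illustrative name, not a logback class): maxHistory counts rollover periods, so with the hourly fileNamePattern above, 10 means roughly the last 10 hours of archives, not 10 days.

        import java.time.LocalDateTime;
        import java.time.temporal.ChronoUnit;

        public class MaxHistoryDemo {
            public static void main(String[] args) {
                int maxHistory = 10;                         // from the config above
                LocalDateTime currentPeriod = LocalDateTime.now().truncatedTo(ChronoUnit.HOURS);
                // with hourly rolling, archives older than this become candidates for removal
                LocalDateTime oldestKeptPeriod = currentPeriod.minusHours(maxHistory);
                System.out.println("Hourly rolling, maxHistory = " + maxHistory);
                System.out.println("Oldest archive period still kept: " + oldestKeptPeriod);
            }
        }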

    One more detail deserves a look: when exactly does it roll, by day, by hour, by minute?

        // rollover period computation
        // ch.qos.logback.core.rolling.helper.RollingCalendar
        public Date getEndOfNextNthPeriod(Date now, int periods) {
            return innerGetEndOfNextNthPeriod(this, this.periodicityType, now, periods);
        }
        static private Date innerGetEndOfNextNthPeriod(Calendar cal, PeriodicityType periodicityType, Date now, int numPeriods) {
            cal.setTime(now);
            switch (periodicityType) {
            case TOP_OF_MILLISECOND:
                cal.add(Calendar.MILLISECOND, numPeriods);
                break;
    
            case TOP_OF_SECOND:
                cal.set(Calendar.MILLISECOND, 0);
                cal.add(Calendar.SECOND, numPeriods);
                break;
    
            case TOP_OF_MINUTE:
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);
                cal.add(Calendar.MINUTE, numPeriods);
                break;
    
            case TOP_OF_HOUR:
                cal.set(Calendar.MINUTE, 0);
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);
                cal.add(Calendar.HOUR_OF_DAY, numPeriods);
                break;
    
            case TOP_OF_DAY:
                cal.set(Calendar.HOUR_OF_DAY, 0);
                cal.set(Calendar.MINUTE, 0);
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);
                cal.add(Calendar.DATE, numPeriods);
                break;
    
            case TOP_OF_WEEK:
                cal.set(Calendar.DAY_OF_WEEK, cal.getFirstDayOfWeek());
                cal.set(Calendar.HOUR_OF_DAY, 0);
                cal.set(Calendar.MINUTE, 0);
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);
                cal.add(Calendar.WEEK_OF_YEAR, numPeriods);
                break;
    
            case TOP_OF_MONTH:
                cal.set(Calendar.DATE, 1);
                cal.set(Calendar.HOUR_OF_DAY, 0);
                cal.set(Calendar.MINUTE, 0);
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);
                cal.add(Calendar.MONTH, numPeriods);
                break;
    
            default:
                throw new IllegalStateException("Unknown periodicity type.");
            }
    
            return cal.getTime();
        }

      So the available granularities are TOP_OF_MILLISECOND / TOP_OF_SECOND / TOP_OF_MINUTE / TOP_OF_HOUR / TOP_OF_DAY / TOP_OF_WEEK / TOP_OF_MONTH, which is actually quite fine-grained; whether the finest ones are genuinely useful is another matter.

      To summarize logback's rolling behavior:

        1. The rollover check runs at write time; when the moment is right, the file is rolled;
        2. The rollover itself is synchronized, keeping it thread safe;
        3. Rolling is done by renaming the file; if the rename fails across different volumes, a copy-based rename is attempted once;
        4. Expired-log deletion happens in two passes: the first looks at the n periods preceding the current one and deletes any matching files;
        5. The second, when a total size cap is configured, checks the accumulated size within the allowed periods and deletes the earliest-modified files once the cap is exceeded;
        6. Deletion runs asynchronously after a rollover is triggered, so it normally does not affect the business.

    2. Third-party tools: the classic cronolog, or the more fashionable logrotate (more hassle)

      cronolog is a very old log-rolling tool (probably no longer maintained). It takes the application's log output on standard input and stores it according to a template, for example splitting files by year, month, day, hour, minute or second.

      There is not much material about it left online, and plenty of people rack their brains just to find an installation package. I also provide a convenient pre-built package: download it here;

      Its GitHub project lives at https://github.com/fordmason/cronolog , and you can of course grab the full source there and install it yourself.

      Still, let me mention the other ways to install it:

        1. Install directly from the yum repositories (you probably need the EPEL repo first) (recommended)

    yum install cronolog -y

        2. Use the package downloaded above and simply extract it

    tar -zxvf cronolog-bin.tar.gz -C /

        3. Build from source packages found online

    hehe...

      All of that is just setup; the point is to use it, so how do you hook it up to the application?

      You only need to append the following to your application's original start command:

    $> | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out

      A complete example looks like this:

    exec nohup java -jar /www/aproj.jar 2>&1 | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out >> /dev/null &

      That is how most people online write it, but in some situations it misbehaves: for example, when I start the service remotely (say over ssh), the call never returns. Why? Most likely because cronolog's own stderr is still attached to the session, so the remote shell keeps waiting. Adding one more output redirection after cronolog, the trailing 2>&1 below, makes it work perfectly.

    exec nohup java -jar /www/aproj.jar 2>&1 | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out >> /dev/null 2>&1 &

      So, compared with letting the application write its own rolled logs, what do we gain, and how is cronolog implemented?

      The benefits were covered above: no intrusion into the code and more flexible control.

      Its principle is simple: take a standard input stream and write it to the appropriate file. It does not delete anything, so expiring old files still requires a separate script.

      Its core source looks like this:

        
        /* Loop, waiting for data on standard input */
        for (;;)
        {
            /** 
             * Read a buffer's worth of log file data, exiting on errors
             * or end of file.
             */
            n_bytes_read = read(0, read_buf, sizeof read_buf);
            if (n_bytes_read == 0)
            {
                exit(3);
            }
            if (errno == EINTR)
            {
                continue;
            }
            else if (n_bytes_read < 0)
            {
                exit(4);
            }
    
            time_now = time(NULL) + time_offset;
            
            /**
             * If the current period has finished and there is a log file
             * open, close the log file
             */
            if ((time_now >= next_period) && (log_fd >= 0))
            {
                close(log_fd);
                log_fd = -1;
            }
            
            /** 
             * If there is no log file open then open a new one.
             */
            if (log_fd < 0)
            {
                log_fd = new_log_file(template, linkname, linktype, prevlinkname,
                          periodicity, period_multiple, period_delay,
                          filename, sizeof (filename), time_now, &next_period);
            }
    
            DEBUG(("%s (%d): wrote message; next period starts at %s (%d) in %d secs
    ",
                   timestamp(time_now), time_now, 
                   timestamp(next_period), next_period,
                   next_period - time_now));
    
            /**
             * Write out the log data to the current log file.
             */
            if (write(log_fd, read_buf, n_bytes_read) != n_bytes_read)
            {
                perror(filename);
                exit(5);
            }
        }

      Roughly, it works like this:

        1. Once started, the cronolog process loops forever, until it hits an error such as the application shutting down;
        2. It blocks reading from standard input, and only after a read does it touch the file system;
        3. After each read it checks whether a new rolling period has begun; if so, it closes the old file and opens (creating if needed) a fresh one for writing;
        4. It then simply writes the buffered content into the currently open file;
        5. All input arrives through the pipe, simple and practical (a Java sketch of the same loop follows below).
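
      For illustration only, here is a hedged Java sketch of the same pipe-and-roll loop (PipeRoller and the hourly file name are assumptions; this is not cronolog itself): read raw bytes from stdin and append them to an hourly-named file, re-opening the target whenever the hour changes. You would use it the same way, e.g. java -jar app.jar | java PipeRoller.

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.time.LocalDateTime;
        import java.time.format.DateTimeFormatter;
        import java.time.temporal.ChronoUnit;

        public class PipeRoller {
            public static void main(String[] args) throws IOException {
                DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd-HH");
                byte[] buf = new byte[8192];
                OutputStream out = null;
                LocalDateTime currentPeriod = null;
                int n;
                while ((n = System.in.read(buf)) != -1) {           // block until the app writes something
                    LocalDateTime period = LocalDateTime.now().truncatedTo(ChronoUnit.HOURS);
                    if (out == null || !period.equals(currentPeriod)) {
                        if (out != null) {
                            out.close();                            // period elapsed: close the old file
                        }
                        out = new FileOutputStream("app." + fmt.format(period) + ".log", true);
                        currentPeriod = period;
                    }
                    out.write(buf, 0, n);                           // append the chunk as-is
                    out.flush();
                }
                if (out != null) {
                    out.close();
                }
            }
        }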

      cronolog's loop really does look that simple. Could anything go wrong? Probably not; it has stood the test of time, and the simpler something is, the more reliable it tends to be.

      Looking at that code, someone is bound to say: anyone could write this, I could knock out a shell script for it in no time. Leaving aside whether your shell script would be as reliable, a shell implementation and a C implementation are hardly in the same league.

      Finally, there is one more issue to handle: cleaning up expired logs.

      This little tool will not do that for you (at least I have not found such a feature), so you have to clean up with your own script. One line does it:

        # vim clean_log.sh
            find /var/logs -mtime +8 -name "ai.*out" -exec rm -rf {} \;
        # then add a schedule for it in crontab, typically once a day:
            0 0  * * * sh clean_log.sh

      Done!

    Of course, you can also make it a bit more complete:

    #!/bin/bash
    
    log_path_prefix=/opt/springboot/logs
    expire_hours=3;
    
    expire_minutes=$(( expire_hours * 60 ));
    now_time=`date "+%Y-%m-%d %H:%M:%S"`
    
    echo "-At $now_time";
    
    # del function
    function del_expire_logs() {
        find_cmd="find $1 -mmin +${2} -type f "
        if [ "$3" != "" ]; then
            find_cmd="$find_cmd -name '$3'";
        fi;
        echo " -Cmd: $find_cmd";
        f_expired_files=`eval $find_cmd`;
        echo " -Find result: $f_expired_files";
        if [ "$f_expired_files" != "" ]; then
            file_list=($f_expired_files);
            for item in ${file_list[@]};
            do
                echo " -Del file: $item";
                rm -rf $item;        
            done;
        fi;
    }
    
    del_expire_logs $log_path_prefix $expire_minutes "*.out";
    
    log_path_prefix2=/opt/logs
    expire_minutes2=2880;        # 2 days
    
    del_expire_logs $log_path_prefix2 $expire_minutes2;

      That covers these log-rolling implementations and how they work under the hood. Feels a lot clearer now, doesn't it? Haha.

      Things are really not as hard as they look!
