    Serilog.Sinks.Elasticsearch

    This repository contains two NuGet packages: Serilog.Sinks.Elasticsearch and Serilog.Formatting.Elasticsearch.

    What is this sink

    The Serilog Elasticsearch sink project is a sink (basically a writer) for the Serilog logging framework. Structured log events are written to sinks, and each sink is responsible for writing them to its own backend, database, store, etc. This sink delivers the data to Elasticsearch, a NoSQL search engine. It does this in a structure similar to Logstash's, which makes it easy to use Kibana for visualizing your logs.
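
    For orientation, a minimal programmatic setup looks roughly like this (a sketch, assuming the Serilog and Serilog.Sinks.Elasticsearch packages are installed and a node is reachable at http://localhost:9200):

    using System;
    using Serilog;
    using Serilog.Sinks.Elasticsearch;

    // Send structured events straight to Elasticsearch.
    Log.Logger = new LoggerConfiguration()
        .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
        {
            IndexFormat = "logstash-{0:yyyy.MM.dd}" // the sink's default index format, shown here for clarity
        })
        .CreateLogger();

    Log.Information("Logged {Answer} as a structured property", 42);
    Log.CloseAndFlush();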

    This example shows the options that are currently available when using the appSettings reader.

    <appSettings>
        <add key="serilog:using" value="Serilog.Sinks.Elasticsearch"/>
        <add key="serilog:write-to:Elasticsearch.nodeUris" value="http://localhost:9200;http://remotehost:9200"/>
        <add key="serilog:write-to:Elasticsearch.indexFormat" value="custom-index-{0:yyyy.MM}"/>
        <add key="serilog:write-to:Elasticsearch.templateName" value="myCustomTemplate"/>
        <add key="serilog:write-to:Elasticsearch.typeName" value="myCustomLogEventType"/>
        <add key="serilog:write-to:Elasticsearch.pipelineName" value="myCustomPipelineName"/>
        <add key="serilog:write-to:Elasticsearch.batchPostingLimit" value="50"/>
        <add key="serilog:write-to:Elasticsearch.period" value="2"/>
        <add key="serilog:write-to:Elasticsearch.inlineFields" value="true"/>
        <add key="serilog:write-to:Elasticsearch.restrictedToMinimumLevel" value="Warning"/>
        <add key="serilog:write-to:Elasticsearch.bufferBaseFilename" value="C:TempSerilogElasticBuffer"/>
        <add key="serilog:write-to:Elasticsearch.bufferFileSizeLimitBytes" value="5242880"/>
        <add key="serilog:write-to:Elasticsearch.bufferLogShippingInterval" value="5000"/>
        <add key="serilog:write-to:Elasticsearch.bufferRetainedInvalidPayloadsLimitBytes" value="5000"/>
        <add key="serilog:write-to:Elasticsearch.bufferFileCountLimit " value="31"/>
        <add key="serilog:write-to:Elasticsearch.connectionGlobalHeaders" value="Authorization=Bearer SOME-TOKEN;OtherHeader=OTHER-HEADER-VALUE" />
        <add key="serilog:write-to:Elasticsearch.connectionTimeout" value="5" />
        <add key="serilog:write-to:Elasticsearch.emitEventFailure" value="WriteToSelfLog" />
        <add key="serilog:write-to:Elasticsearch.queueSizeLimit" value="100000" />
        <add key="serilog:write-to:Elasticsearch.autoRegisterTemplate" value="true" />
        <add key="serilog:write-to:Elasticsearch.autoRegisterTemplateVersion" value="ESv2" />
        <add key="serilog:write-to:Elasticsearch.overwriteTemplate" value="false" />
        <add key="serilog:write-to:Elasticsearch.registerTemplateFailure" value="IndexAnyway" />
        <add key="serilog:write-to:Elasticsearch.deadLetterIndexName" value="deadletter-{0:yyyy.MM}" />
        <add key="serilog:write-to:Elasticsearch.numberOfShards" value="20" />
        <add key="serilog:write-to:Elasticsearch.numberOfReplicas" value="10" />
        <add key="serilog:write-to:Elasticsearch.formatProvider" value="My.Namespace.MyFormatProvider, My.Assembly.Name" />
        <add key="serilog:write-to:Elasticsearch.connection" value="My.Namespace.MyConnection, My.Assembly.Name" />
        <add key="serilog:write-to:Elasticsearch.serializer" value="My.Namespace.MySerializer, My.Assembly.Name" />
        <add key="serilog:write-to:Elasticsearch.connectionPool" value="My.Namespace.MyConnectionPool, My.Assembly.Name" />
        <add key="serilog:write-to:Elasticsearch.customFormatter" value="My.Namespace.MyCustomFormatter, My.Assembly.Name" />
        <add key="serilog:write-to:Elasticsearch.customDurableFormatter" value="My.Namespace.MyCustomDurableFormatter, My.Assembly.Name" />
        <add key="serilog:write-to:Elasticsearch.failureSink" value="My.Namespace.MyFailureSink, My.Assembly.Name" />
      </appSettings>
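
    To have these keys picked up at runtime, the logger is built through the Serilog appSettings reader; a minimal sketch, assuming the Serilog.Settings.AppSettings package is also installed:

    using Serilog;

    // ReadFrom.AppSettings() reads every serilog:* key from App.config/Web.config,
    // including the serilog:write-to:Elasticsearch.* entries shown above.
    Log.Logger = new LoggerConfiguration()
        .ReadFrom.AppSettings()
        .CreateLogger();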

    Handling errors

    From version 5.5 you have the option to specify how to handle issues with Elasticsearch. Since the sink delivers events in batches, it is possible that one or more events in a batch cannot actually be stored in Elasticsearch, for example because of a mapping issue, and it is hard to find out what happened in that case. A new option called EmitEventFailure, a flags enum, offers the following choices:

    Four ways to handle the problem:

    • WriteToSelfLog, the default option in which the errors are written to the SelfLog.
    • WriteToFailureSink, the failed events are sent to another sink. Make sure to configure this one by setting the FailureSink option.
    • ThrowException, in which an exception is raised.
    • RaiseCallback, the failure callback function will be called when the event cannot be submitted to Elasticsearch. Make sure to set the FailureCallback option to handle the event.

    An example:

    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
                    {
                        FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
                        EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog |
                                           EmitEventFailureHandling.WriteToFailureSink |
                                           EmitEventFailureHandling.RaiseCallback,
                        FailureSink = new FileSink("./failures.txt", new JsonFormatter(), null)
                    })

    With the AutoRegisterTemplate option the sink will write a default template to Elasticsearch. When this template is not there, you might not want to index at all, as indexing without it can affect the data quality. Since version 5.5 you can use the RegisterTemplateFailure option (a configuration sketch follows the list):

    • IndexAnyway; the default option, the events will be sent to the server anyway.
    • IndexToDeadletterIndex; using the dead letter index format, it will write the events to the dead letter index. Once you fix your template mapping, you can copy the data into the right index.
    • FailSink; this will simply fail the sink by raising an exception.
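
    A sketch of the second option, pairing RegisterTemplateFailure with a custom dead letter index name (the values shown are illustrative):

    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
                    {
                        AutoRegisterTemplate = true,
                        // If the template cannot be registered, divert events to the dead letter
                        // index so they can be re-indexed once the template mapping is fixed.
                        RegisterTemplateFailure = RegisterTemplateRecovery.IndexToDeadletterIndex,
                        DeadLetterIndexName = "deadletter-{0:yyyy.MM}"
                    })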

    Since version 7 you can specify an action to take when a log row is denied by Elasticsearch because of its data (payload), provided a durable buffer file is specified, e.g.:

    // Requires Newtonsoft.Json (JObject.Parse and JsonConvert.SerializeObject) to rewrite the failing payload.
    BufferCleanPayload = (failingEvent, statuscode, exception) =>
                        {
                            dynamic e = JObject.Parse(failingEvent);
                            return JsonConvert.SerializeObject(new Dictionary<string, object>()
                            {
                                { "@timestamp",e["@timestamp"]},
                                { "level","Error"},
                                { "message","Error: "+e.message},
                                { "messageTemplate",e.messageTemplate},
                                { "failingStatusCode", statuscode},
                                { "failingException", exception}
                            });
                        },

    The IndexDecider did not work well when a durable buffer file was specified, so the BufferIndexDecider option was added. The datatype of logEvent is string, e.g.:

     BufferIndexDecider = (logEvent, offset) => "log-serilog-" + (new Random().Next(0, 2)),

    The BufferFileCountLimit option was added: the maximum number of buffer log files that will be retained, including the current log file; for unlimited retention, pass null (the default is 31). The BufferFileSizeLimitBytes option was also added: the maximum size, in bytes, to which the buffer log file for a specific date will be allowed to grow; by default 100L * 1024 * 1024 is applied.
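
    A sketch that combines these durable-buffer options (the buffer path and index name are illustrative values):

    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
                    {
                        // Writing to a disk buffer first makes the sink durable across restarts.
                        BufferBaseFilename = "./buffer/log-serilog",
                        BufferFileCountLimit = 31,                     // keep at most 31 buffer files (the default); null = unlimited
                        BufferFileSizeLimitBytes = 100L * 1024 * 1024, // let each buffer file grow to at most 100 MB (the default)
                        BufferIndexDecider = (logEvent, offset) => "log-serilog-buffered"
                    })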

    Serilog.Sinks.Elasticsearch.Sample

    https://github.com/serilog/serilog-sinks-elasticsearch/blob/dev/sample/Serilog.Sinks.Elasticsearch.Sample/Program.cs

     class Program
        {
            private static IConfiguration Configuration { get; } = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json", true, true)
                .AddEnvironmentVariables()
                .Build();
           
            static void Main(string[] args)
            {
    
                // Enable the selflog output
                SelfLog.Enable(Console.Error);
                Log.Logger = new LoggerConfiguration()
                    .MinimumLevel.Debug()
                    .WriteTo.Console(theme: SystemConsoleTheme.Literate)
                    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(Configuration.GetConnectionString("elasticsearch"))) // for the docker-compose implementation
                    {
                        AutoRegisterTemplate = true,
                        OverwriteTemplate = true,
                        DetectElasticsearchVersion = true,
                        AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv7,
                        NumberOfReplicas = 1,
                        NumberOfShards = 2,
                        //BufferBaseFilename = "./buffer",
                        RegisterTemplateFailure = RegisterTemplateRecovery.FailSink,
                        FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
                        EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog |
                                           EmitEventFailureHandling.WriteToFailureSink |
                                           EmitEventFailureHandling.RaiseCallback,
                        FailureSink = new FileSink("./fail-{Date}.txt", new JsonFormatter(), null, null)
                    })
                    .CreateLogger();
    
                Log.Information("Hello, world!");
             
                int a = 10, b = 0;
                try
                {
                    Log.Debug("Dividing {A} by {B}", a, b);
                    Console.WriteLine(a / b);
                }
                catch (Exception ex)
                {
                    Log.Error(ex, "Something went wrong");
                }
    
                // Introduce a failure by storing a field as a different type
                Log.Debug("Reusing {A} by {B}", "string", true);
    
                Log.CloseAndFlush();
                Console.WriteLine("Press any key to continue...");
                while (!Console.KeyAvailable)
                {
                    Thread.Sleep(500);
                }
            }
    
          
        }

    Default configuration

    https://github.com/serilog/serilog-sinks-elasticsearch/blob/dev/src/Serilog.Sinks.Elasticsearch/Sinks/ElasticSearch/ElasticsearchSinkOptions.cs#L273

        /// <summary>
        /// Configures the elasticsearch sink defaults
        /// </summary>
            public ElasticsearchSinkOptions()
            {
                this.IndexFormat = "logstash-{0:yyyy.MM.dd}";
                this.DeadLetterIndexName = "deadletter-{0:yyyy.MM.dd}";
                this.TypeName = DefaultTypeName;
                this.Period = TimeSpan.FromSeconds(2);
                this.BatchPostingLimit = 50;
                this.SingleEventSizePostingLimit = null;
                this.TemplateName = "serilog-events-template";
                this.ConnectionTimeout = TimeSpan.FromSeconds(5);
                this.EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog;
                this.RegisterTemplateFailure = RegisterTemplateRecovery.IndexAnyway;
                this.QueueSizeLimit = 100000;
                this.BufferFileCountLimit = 31;
                this.BufferFileSizeLimitBytes = 100L * 1024 * 1024;
                this.FormatStackTraceAsArray = false;
                this.ConnectionPool = new SingleNodeConnectionPool(_defaultNode);
            }
    Original source: https://www.cnblogs.com/chucklu/p/13272240.html