
    Getting Started with InfluxDB

    1. Introduction to InfluxDB

    A time series database (TSDB) is a relatively new class of database whose defining characteristic is that every record carries a time column.

    Where do time series databases actually get used? The classic answer: monitoring systems.

    InfluxDB is a popular time series database. It is written in Go, has no external dependencies, and is easy to install and configure, which makes it a good fit for building the monitoring layer of a large distributed system.

    2. Installation

    2.1 Linux

    Package: influxdb-1.7.7.x86_64.rpm

    Install command: yum localinstall influxdb-1.7.7.x86_64.rpm
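    After installation, the service can be started and checked as follows. This is a minimal sketch that assumes the RPM ships the usual systemd unit named influxdb and that the HTTP API listens on its default port 8086:

    # Start the service and enable it at boot (assumes a systemd unit named "influxdb")
    sudo systemctl start influxdb
    sudo systemctl enable influxdb

    # Sanity check: the /ping endpoint of the HTTP API should answer with HTTP 204
    curl -sI http://localhost:8086/ping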

    2.2 Common commands

    /usr/bin/influxd          the InfluxDB server

    /usr/bin/influx           the InfluxDB command-line client

    /usr/bin/influx_inspect   inspection tool for on-disk data (see the example after this list)

    /usr/bin/influx_stress    stress-testing tool

    /usr/bin/influx_tsm       database conversion tool (converts shards from the b1 or bz1 format to tsm1)
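    As an illustration of the inspection tool, the commands below summarize the TSM data and export it to line protocol. Subcommands and flags vary somewhat between versions, so treat these as a sketch and consult influx_inspect's own help output:

    # Summarize the TSM files under the data directory
    influx_inspect report /var/lib/influxdb/data

    # Export data to line protocol (the output path is just an example)
    influx_inspect export -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal -out /tmp/influxdb_export.txt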

    2.3 Common directories

    /var/lib/influxdb/data    final stored data; files end in .tsm

    /var/lib/influxdb/meta    database metadata

    /var/lib/influxdb/wal     write-ahead log (WAL) files (see the layout sketch after this list)
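    Both the data and wal directories are organized as <database>/<retention policy>/<shard id>. The listing below is a sketch that assumes a database named testdb using the autogen retention policy:

    # TSM files for the shards of testdb
    ls /var/lib/influxdb/data/testdb/autogen/

    # Metadata is kept in a single meta.db file
    ls /var/lib/influxdb/meta/

    # WAL segments mirror the same layout with *.wal files
    ls /var/lib/influxdb/wal/testdb/autogen/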

    2.4 Configuration file

    Path: /etc/influxdb/influxdb.conf

      1 ### Welcome to the InfluxDB configuration file.
      2 
      3 # The values in this file override the default values used by the system if
      4 # a config option is not specified. The commented out lines are the configuration
      5 # field and the default value used. Uncommenting a line and changing the value
      6 # will change the value used at runtime when the process is restarted.
      7 
      8 # Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
      9 # The data includes a random ID, os, arch, version, the number of series and other
     10 # usage data. No data from user databases is ever transmitted.
     11 # Change this option to true to disable reporting.
     12 # Reports anonymous usage data to InfluxData; set reporting-disabled to true to disable it; default: false
     13 # reporting-disabled = false
     14 
     15 # Bind address to use for the RPC service for backup and restore.
     16 # RPC bind address used for backup and restore; default: "127.0.0.1:8088"
     17 # bind-address = "127.0.0.1:8088"
     18 
     19 ###
     20 ### [meta]
     21 ###
     22 ### Controls the parameters for the Raft consensus group that stores metadata
     23 ### about the InfluxDB cluster.
     24 ###
     25 
     26 [meta]
     27   # Where the metadata/raft database is stored
     28   # Directory where the metadata/raft database is stored; default: /var/lib/influxdb/meta
     29   dir = "/var/lib/influxdb/meta"
     30 
     31   # Automatically create a default retention policy when creating a database.
     32   # Controls the default retention policy; when a database is created, an "autogen" retention policy is created automatically; default: true
     33   # retention-autocreate = true
     34 
     35   # If log messages are printed for the meta service
     36   # Whether log messages are printed for the meta service; default: true
     37   # logging-enabled = true
     38 
     39 ###
     40 ### [data]
     41 ###
     42 ### Controls where the actual shard data for InfluxDB lives and how it is
     43 ### flushed from the WAL. "dir" may need to be changed to a suitable place
     44 ### for your system, but the WAL settings are an advanced configuration. The
     45 ### defaults should work for most systems.
     46 ###
     47 
     48 [data]
     49   # The directory where the TSM storage engine stores TSM files.
     50   # Directory where final data (TSM files) is stored; default: /var/lib/influxdb/data
     51   dir = "/var/lib/influxdb/data"
     52 
     53   # The directory where the TSM storage engine stores WAL files.
     54   # Directory where write-ahead log (WAL) files are stored; default: /var/lib/influxdb/wal
     55   wal-dir = "/var/lib/influxdb/wal"
     56 
     57   # The amount of time that a write will wait before fsyncing.  A duration
     58   # greater than 0 can be used to batch up multiple fsync calls.  This is useful for slower
     59   # disks or when WAL write contention is seen.  A value of 0s fsyncs every write to the WAL.
     60   # Values in the range of 0-100ms are recommended for non-SSD disks.
     61   # wal-fsync-delay = "0s"
     62 
     63 
     64   # The type of shard index to use for new shards.  The default is an in-memory index that is
     65   # recreated at startup.  A value of "tsi1" will use a disk based index that supports higher
     66   # cardinality datasets.
     67   # index-version = "inmem"
     68 
     69   # Trace logging provides more verbose output around the tsm engine. Turning
     70   # this on can provide more useful output for debugging tsm engine issues.
     71   # Whether trace logging is enabled; default: false
     72   # trace-logging-enabled = false
     73 
     74   # Whether queries should be logged before execution. Very useful for troubleshooting, but will
     75   # log any sensitive data contained within a query.
     76   # query-log-enabled = true
     77 
     78   # Validates incoming writes to ensure keys only have valid unicode characters.
     79   # This setting will incur a small overhead because every key must be checked.
     80   # validate-keys = false
     81 
     82   # Settings for the TSM engine
     83 
     84   # CacheMaxMemorySize is the maximum size a shard's cache can
     85   # reach before it starts rejecting writes.
     86   # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
     87   # Values without a size suffix are in bytes.
     88   # Maximum size a shard's cache may reach before writes are rejected; default:
     89   # DefaultCacheMaxMemorySize = 1024 * 1024 * 1024 // 1GB
     90   # cache-max-memory-size = "1g"
     91 
     92   # CacheSnapshotMemorySize is the size at which the engine will
     93   # snapshot the cache and write it to a TSM file, freeing up memory
     94   # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
     95   # Values without a size suffix are in bytes.
     96   # Cache size at which the engine snapshots the cache and flushes it to a TSM file; default:
     97   # DefaultCacheSnapshotMemorySize = 25 * 1024 * 1024 // 25MB
     98   # cache-snapshot-memory-size = "25m"
     99 
    100   # CacheSnapshotWriteColdDuration is the length of time at
    101   # which the engine will snapshot the cache and write it to
    102   # a new TSM file if the shard hasn't received writes or deletes
    103   # How long a shard can go without writes or deletes before the tsm1 engine snapshots the cache to disk; default: 10m
    104   # cache-snapshot-write-cold-duration = "10m"
    105 
    106   # CompactFullWriteColdDuration is the duration at which the engine
    107   # will compact all TSM files in a shard if it hasn't received a
    108   # write or delete
    109   # How long a shard can go without writes or deletes before all of its TSM files are fully compacted; default: 4h
    110   # compact-full-write-cold-duration = "4h"
    111 
    112   # The maximum number of concurrent full and level compactions that can run at one time.  A
    113   # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime.  Any number greater
    114   # than 0 limits compactions to that value.  This setting does not apply
    115   # to cache snapshotting.
    116   # max-concurrent-compactions = 0
    117 
    118   # CompactThroughput is the rate limit in bytes per second that we
    119   # will allow TSM compactions to write to disk. Note that short bursts are allowed
    120   # to happen at a possibly larger value, set by CompactThroughputBurst
    121   # compact-throughput = "48m"
    122 
    123   # CompactThroughputBurst is the rate limit in bytes per second that we
    124   # will allow TSM compactions to write to disk.
    125   # compact-throughput-burst = "48m"
    126 
    127   # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
    128   # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
    129   # It might help users who have slow disks in some cases.
    130   # tsm-use-madv-willneed = false
    131 
    132   # Settings for the inmem index
    133 
    134   # The maximum series allowed per database before writes are dropped.  This limit can prevent
    135   # high cardinality issues at the database level.  This limit can be disabled by setting it to
    136   # 0.
    137   # Maximum number of series per database; 0 disables the limit; default: 1000000
    138   # max-series-per-database = 1000000
    139 
    140   # The maximum number of tag values per tag that are allowed before writes are dropped.  This limit
    141   # can prevent high cardinality tag values from being written to a measurement.  This limit can be
    142   # disabled by setting it to 0.
    143   # Maximum number of values per tag; 0 disables the limit; default: 100000
    144   # max-values-per-tag = 100000
    145 
    146   # Settings for the tsi1 index
    147 
    148   # The threshold, in bytes, when an index write-ahead log file will compact
    149   # into an index file. Lower sizes will cause log files to be compacted more
    150   # quickly and result in lower heap usage at the expense of write throughput.
    151   # Higher sizes will be compacted less frequently, store more series in-memory,
    152   # and provide higher write throughput.
    153   # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
    154   # Values without a size suffix are in bytes.
    155   # max-index-log-file-size = "1m"
    156 
    157   # The size of the internal cache used in the TSI index to store previously 
    158   # calculated series results. Cached results will be returned quickly from the cache rather
    159   # than needing to be recalculated when a subsequent query with a matching tag key/value 
    160   # predicate is executed. Setting this value to 0 will disable the cache, which may
    161   # lead to query performance issues.
    162   # This value should only be increased if it is known that the set of regularly used 
    163   # tag key/value predicates across all measurements for a database is larger than 100. An
    164   # increase in cache size may lead to an increase in heap usage.
    165   series-id-set-cache-size = 100
    166 
    167 ###
    168 ### [coordinator]
    169 ###
    170 ### Controls the clustering service configuration.
    171 ###
    172 
    173 [coordinator]
    174   # The default time a write request will wait until a "timeout" error is returned to the caller.
    175   # Write timeout; default: 10s
    176   # write-timeout = "10s"
    177 
    178   # The maximum number of concurrent queries allowed to be executing at one time.  If a query is
    179   # executed and exceeds this limit, an error is returned to the caller.  This limit can be disabled
    180   # by setting it to 0.
    181   # Maximum number of concurrent queries; 0 = unlimited; default: 0
    182   # max-concurrent-queries = 0
    183 
    184   # The maximum time a query will is allowed to execute before being killed by the system.  This limit
    185   # can help prevent run away queries.  Setting the value to 0 disables the limit.
    186   # Query timeout; 0 = no limit; default: 0s
    187   # query-timeout = "0s"
    188 
    189   # The time threshold when a query will be logged as a slow query.  This limit can be set to help
    190   # discover slow or resource intensive queries.  Setting the value to 0 disables the slow query logging.
    191   # Threshold for logging a query as slow; 0 disables slow-query logging; default: 0s
    192   # log-queries-after = "0s"
    193 
    194   # The maximum number of points a SELECT can process.  A value of 0 will make
    195   # the maximum point count unlimited.  This will only be checked every second so queries will not
    196   # be aborted immediately when hitting the limit.
    197   # Maximum number of points a SELECT can process; 0 = unlimited; default: 0
    198   # max-select-point = 0
    199 
    200   # The maximum number of series a SELECT can run.  A value of 0 will make the maximum series
    201   # count unlimited.
    202   # Maximum number of series a SELECT can process; 0 = unlimited; default: 0
    203   # max-select-series = 0
    204 
    205   # The maximum number of group by time bucket a SELECT can create.  A value of zero will max the maximum
    206   # number of buckets unlimited.
    207   # Maximum number of GROUP BY time() buckets a SELECT can create; 0 = unlimited; default: 0
    208   # max-select-buckets = 0
    209 
    210 ###
    211 ### [retention]
    212 ###
    213 ### Controls the enforcement of retention policies for evicting old data.
    214 ###
    215 
    216 [retention]
    217   # Determines whether retention policy enforcement enabled.
    218   # Whether retention policy enforcement is enabled; default: true
    219   # enabled = true
    220 
    221   # The interval of time when retention policy enforcement checks run.
    222   # Check interval; default: "30m"
    223   # check-interval = "30m"
    224 
    225 ###
    226 ### [shard-precreation]
    227 ###
    228 ### Controls the precreation of shards, so they are available before data arrives.
    229 ### Only shards that, after creation, will have both a start- and end-time in the
    230 ### future, will ever be created. Shards are never precreated that would be wholly
    231 ### or partially in the past.
    232 
    233 [shard-precreation]
    234   # Determines whether shard pre-creation service is enabled.
    235   # enabled = true
    236 
    237   # The interval of time when the check to pre-create new shards runs.
    238   # check-interval = "10m"
    239 
    240   # The default period ahead of the endtime of a shard group that its successor
    241   # group is created.
    242   # advance-period = "30m"
    243 
    244 ###
    245 ### Controls the system self-monitoring, statistics and diagnostics.
    246 ###
    247 ### The internal database for monitoring data is created automatically if
    248 ### if it does not already exist. The target retention within this database
    249 ### is called 'monitor' and is also created with a retention period of 7 days
    250 ### and a replication factor of 1, if it does not exist. In all cases the
    251 ### this retention policy is configured as the default for the database.
    252 
    253 [monitor]
    254   # Whether to record statistics internally.
    255   # Whether internal statistics are recorded; default: true
    256   # store-enabled = true
    257 
    258   # The destination database for recorded statistics
    259   # Destination database for recorded statistics; default: "_internal"
    260   # store-database = "_internal"
    261 
    262   # The interval at which to record statistics
    263   # Interval at which statistics are recorded; default: "10s"
    264   # store-interval = "10s"
    265 
    266 ###
    267 ### [http]
    268 ###
    269 ### Controls how the HTTP endpoints are configured. These are the primary
    270 ### mechanism for getting data into and out of InfluxDB.
    271 ###
    272 
    273 [http]
    274   # Determines whether HTTP endpoint is enabled.
    275   # Whether the HTTP endpoint is enabled; default: true
    276    enabled = true
    277 
    278   # Determines whether the Flux query endpoint is enabled.
    279   # flux-enabled = false
    280 
    281   # Determines whether the Flux query logging is enabled.
    282   # flux-log-enabled = false
    283 
    284   # The bind address used by the HTTP service.
    285   # Bind address; default: ":8086"
    286    bind-address = ":8086"
    287 
    288   # Determines whether user authentication is enabled over HTTP/HTTPS.
    289   # Whether authentication is enabled; default: false
    290   # auth-enabled = false
    291 
    292   # The default realm sent back when issuing a basic auth challenge.
    293   # realm = "InfluxDB"
    294 
    295   # Determines whether HTTP request logging is enabled.
    296   # Whether HTTP request logging is enabled; default: true
    297   # log-enabled = true
    298 
    299   # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
    300   # suppress-write-log = false
    301 
    302   # When HTTP request logging is enabled, this option specifies the path where
    303   # log entries should be written. If unspecified, the default is to write to stderr, which
    304   # intermingles HTTP logs with internal InfluxDB logging.
    305   #
    306   # If influxd is unable to access the specified path, it will log an error and fall back to writing
    307   # the request log to stderr.
    308   # access-log-path = ""
    309 
    310   # Filters which requests should be logged. Each filter is of the pattern NNN, NNX, or NXX where N is
    311   # a number and X is a wildcard for any number. To filter all 5xx responses, use the string 5xx.
    312   # If multiple filters are used, then only one has to match. The default is to have no filters which
    313   # will cause every request to be printed.
    314   # access-log-status-filters = []
    315 
    316   # Determines whether detailed write logging is enabled.
    317   # Whether detailed write logging is enabled; if true, every write is logged; default: false
    318   # write-tracing = false
    319 
    320   # Determines whether the pprof endpoint is enabled.  This endpoint is used for
    321   # troubleshooting and monitoring.
    322   # Whether the pprof endpoint is enabled; default: true
    323   # pprof-enabled = true
    324 
    325   # Enables a pprof endpoint that binds to localhost:6060 immediately on startup.
    326   # This is only needed to debug startup issues.
    327   # debug-pprof-enabled = false
    328 
    329   # Determines whether HTTPS is enabled.
    330   # Whether HTTPS is enabled; default: false
    331   # https-enabled = false
    332 
    333   # The SSL certificate to use when HTTPS is enabled.
    334   # Path to the HTTPS certificate; default: "/etc/ssl/influxdb.pem"
    335   # https-certificate = "/etc/ssl/influxdb.pem"
    336 
    337   # Use a separate private key location.
    338   # Separate HTTPS private key location; no default
    339   # https-private-key = ""
    340 
    341   # The JWT auth shared secret to validate requests using JSON web tokens.
    342   # Shared secret used to validate JWT-signed requests; no default
    343   # shared-secret = ""
    344 
    345   # The default chunk size for result sets that should be chunked.
    346   # Maximum number of rows a query may return; 0 = unlimited; default: 0
    347   # max-row-limit = 0
    348 
    349   # The maximum number of HTTP connections that may be open at once.  New connections that
    350   # would exceed this limit are dropped.  Setting this value to 0 disables the limit.
    351   # Maximum number of open HTTP connections; 0 = unlimited; default: 0
    352   # max-connection-limit = 0
    353 
    354   # Enable http service over unix domain socket
    355   # unix-socket-enabled = false
    356 
    357   # The path of the unix domain socket.
    358   # Path of the unix domain socket; default: "/var/run/influxdb.sock"
    359   # bind-socket = "/var/run/influxdb.sock"
    360 
    361   # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
    362   # max-body-size = 25000000
    363 
    364   # The maximum number of writes processed concurrently.
    365   # Setting this to 0 disables the limit.
    366   # max-concurrent-write-limit = 0
    367 
    368   # The maximum number of writes queued for processing.
    369   # Setting this to 0 disables the limit.
    370   # max-enqueued-write-limit = 0
    371 
    372   # The maximum duration for a write to wait in the queue to be processed.
    373   # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
    374   # enqueued-write-timeout = 0
    375 
    376 ###
    377 ### [logging]
    378 ###
    379 ### Controls how the logger emits logs to the output.
    380 ###
    381 
    382 [logging]
    383   # Determines which log encoder to use for logs. Available options
    384   # are auto, logfmt, and json. auto will use a more a more user-friendly
    385   # output format if the output terminal is a TTY, but the format is not as
    386   # easily machine-readable. When the output is a non-TTY, auto will use
    387   # logfmt.
    388   # format = "auto"
    389 
    390   # Determines which level of logs will be emitted. The available levels
    391   # are error, warn, info, and debug. Logs that are equal to or above the
    392   # specified level will be emitted.
    393   # level = "info"
    394 
    395   # Suppresses the logo output that is printed when the program is started.
    396   # The logo is always suppressed if STDOUT is not a TTY.
    397   # suppress-logo = false
    398 
    399 ###
    400 ### [subscriber]
    401 ###
    402 ### Controls the subscriptions, which can be used to fork a copy of all data
    403 ### received by the InfluxDB host.
    404 ###
    405 
    406 [subscriber]
    407   # Determines whether the subscriber service is enabled.
    408   # Whether the subscriber module is enabled; default: true
    409   # enabled = true
    410 
    411   # The default timeout for HTTP writes to subscribers.
    412   # HTTP write timeout; default: "30s"
    413   # http-timeout = "30s"
    414 
    415   # Allows insecure HTTPS connections to subscribers.  This is useful when testing with self-
    416   # signed certificates.
    417   # Whether insecure HTTPS certificates are accepted; useful when testing with self-signed certificates; default: false
    418   # insecure-skip-verify = false
    419 
    420   # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used
    421   # Path to the PEM-encoded CA certs file; no default (system certs are used if empty)
    422   # ca-certs = ""
    423 
    424   # The number of writer goroutines processing the write channel.
    425   # Number of writer goroutines; default: 40
    426   # write-concurrency = 40
    427 
    428   # The number of in-flight writes buffered in the write channel.
    429   # Write-channel buffer size; default: 1000
    430   # write-buffer-size = 1000
    431 
    432 
    433 ###
    434 ### [[graphite]]
    435 ###
    436 ### Controls one or many listeners for Graphite data.
    437 ###
    438 
    439 [[graphite]]
    440   # Determines whether the graphite endpoint is enabled.
    441   # Whether this module is enabled; default: false
    442   # enabled = false
    443   # Database name; default: "graphite"
    444   # database = "graphite"
    445   # Retention policy; no default
    446   # retention-policy = ""
    447   # Bind address; default: ":2003"
    448   # bind-address = ":2003"
    449   # Protocol; default: "tcp"
    450   # protocol = "tcp"
    451   # Consistency level; default: "one"
    452   # consistency-level = "one"
    453 
    454   # These next lines control how batching works. You should have this enabled
    455   # otherwise you could get dropped metrics or poor performance. Batching
    456   # will buffer points in memory if you have many coming in.
    457 
    458   # Flush if this many points get buffered
    459   # 批量size,默认值:5000
    460   # batch-size = 5000
    461 
    462   # number of batches that may be pending in memory
    463   # 配置在内存中等待的batch数,默认值:10
    464   # batch-pending = 10
    465 
    466   # Flush at least this often even if we haven't hit buffer limit
    467   # 超时时间,默认值:"1s"
    468   # batch-timeout = "1s"
    469 
    470   # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    471   # UDP read buffer size; 0 uses the OS default; the listener fails if this exceeds the OS maximum; default: 0
    472   # udp-read-buffer = 0
    473 
    474   ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
    475   # Separator used when joining multiple matching measurement values; default: "."
    476   # separator = "."
    477 
    478   ### Default tags that will be added to all metrics.  These can be overridden at the template level
    479   ### or by tags extracted from metric
    480   # tags = ["region=us-east", "zone=1c"]
    481 
    482   ### Each template line requires a template pattern.  It can have an optional
    483   ### filter before the template and separated by spaces.  It can also have optional extra
    484   ### tags following the template.  Multiple tags should be separated by commas and no spaces
    485   ### similar to the line protocol format.  There can be only one default template.
    486   # templates = [
    487   #   "*.app env.service.resource.measurement",
    488   #   # Default template
    489   #   "server.*",
    490   # ]
    491 
    492 ###
    493 ### [collectd]
    494 ###
    495 ### Controls one or many listeners for collectd data.
    496 ###
    497 
    498 [[collectd]]
    499   # Whether this module is enabled; default: false
    500   # enabled = false
    501   # Bind address; default: ":25826"
    502   # bind-address = ":25826"
    503   # Database name; default: "collectd"
    504   # database = "collectd"
    505   # Retention policy; no default
    506   # retention-policy = ""
    507   #
    508   # The collectd service supports either scanning a directory for multiple types
    509   # db files, or specifying a single db file.
    510   # Path to the types.db file or directory; default: "/usr/share/collectd/types.db"
    511   # typesdb = "/usr/local/share/collectd"
    512   #
    513   # security-level = "none"
    514   # auth-file = "/etc/collectd/auth_file"
    515 
    516   # These next lines control how batching works. You should have this enabled
    517   # otherwise you could get dropped metrics or poor performance. Batching
    518   # will buffer points in memory if you have many coming in.
    519 
    520   # Flush if this many points get buffered
    521   # batch-size = 5000
    522 
    523   # Number of batches that may be pending in memory
    524   # batch-pending = 10
    525 
    526   # Flush at least this often even if we haven't hit buffer limit
    527   # batch-timeout = "10s"
    528 
    529   # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    530   # read-buffer = 0
    531 
    532   # Multi-value plugins can be handled two ways.
    533   # "split" will parse and store the multi-value plugin data into separate measurements
    534   # "join" will parse and store the multi-value plugin as a single multi-value measurement.
    535   # "split" is the default behavior for backward compatibility with previous versions of influxdb.
    536   # parse-multivalue-plugin = "split"
    537 ###
    538 ### [opentsdb]
    539 ###
    540 ### Controls one or many listeners for OpenTSDB data.
    541 ###
    542 
    543 [[opentsdb]]
    544   # Whether this module is enabled; default: false
    545   # enabled = false
    546   # Bind address; default: ":4242"
    547   # bind-address = ":4242"
    548   # Default database: "opentsdb"
    549   # database = "opentsdb"
    550   # Retention policy; no default
    551   # retention-policy = ""
    552   # Consistency level; default: "one"
    553   # consistency-level = "one"
    554   # Whether TLS is enabled; default: false
    555   # tls-enabled = false
    556   # Certificate path; default: "/etc/ssl/influxdb.pem"
    557   # certificate= "/etc/ssl/influxdb.pem"
    558 
    559   # Log an error for every malformed point.
    560   # Whether to log an error for every malformed point; default: true
    561   # log-point-errors = true
    562 
    563   # These next lines control how batching works. You should have this enabled
    564   # otherwise you could get dropped metrics or poor performance. Only points
    565   # metrics received over the telnet protocol undergo batching.
    566 
    567   # Flush if this many points get buffered
    568   # batch-size = 1000
    569 
    570   # Number of batches that may be pending in memory
    571   # batch-pending = 5
    572 
    573   # Flush at least this often even if we haven't hit buffer limit
    574   # batch-timeout = "1s"
    575 
    576 ###
    577 ### [[udp]]
    578 ###
    579 ### Controls the listeners for InfluxDB line protocol data via UDP.
    580 ###
    581 
    582 [[udp]]
    583   # Whether this module is enabled; default: false
    584   # enabled = false
    585   # Bind address; default: ":8089"
    586   # bind-address = ":8089"
    587   # Database name; default: "udp"
    588   # database = "udp"
    589   # Retention policy; no default
    590   # retention-policy = ""
    591 
    592   # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
    593   # Timestamp precision for received points; no default
    594   # precision = ""
    595 
    596   # These next lines control how batching works. You should have this enabled
    597   # otherwise you could get dropped metrics or poor performance. Batching
    598   # will buffer points in memory if you have many coming in.
    599 
    600   # Flush if this many points get buffered
    601   # batch-size = 5000
    602 
    603   # Number of batches that may be pending in memory
    604   # batch-pending = 10
    605 
    606   # Will flush at least this often even if we haven't hit buffer limit
    607   # batch-timeout = "1s"
    608 
    609   # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    610   # UDP read buffer size; 0 uses the OS default; the listener fails if this exceeds the OS maximum; default: 0
    611   # read-buffer = 0
    612 
    613 ###
    614 ### [continuous_queries]
    615 ###
    616 ### Controls how continuous queries are run within InfluxDB.
    617 ###
    618 
    619 [continuous_queries]
    620   # Determines whether the continuous query service is enabled.
    621   # enabled = true
    622 
    623   # Controls whether queries are logged when executed by the CQ service.
    624   # Whether query logging is enabled; default: true
    625   # log-enabled = true
    626 
    627   # Controls whether queries are logged to the self-monitoring data store.
    628   # query-stats-enabled = false
    629 
    630   # interval for how often continuous queries will be checked if they need to run
    631   # Interval at which continuous queries are checked; default: "1s"
    632   # run-interval = "1s"
    633 
    634 ###
    635 ### [tls]
    636 ###
    637 ### Global configuration settings for TLS in InfluxDB.
    638 ###
    639 
    640 [tls]
    641   # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
    642   # for a list of available ciphers, which depends on the version of Go (use the query
    643   # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses
    644   # the default settings from Go's crypto/tls package.
    645   # ciphers = [
    646   #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
    647   #   "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    648   # ]
    649 
    650   # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
    651   # default settings from Go's crypto/tls package.
    652   # min-version = "tls1.2"
    653 
    654   # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
    655   # default settings from Go's crypto/tls package.
    656   # max-version = "tls1.2"
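    A few common ways to work with this file, shown as a sketch; in InfluxDB 1.x, individual options can also be overridden with environment variables of the form INFLUXDB_<SECTION>_<OPTION>:

    # Print the full effective configuration (file values merged with built-in defaults)
    influxd config -config /etc/influxdb/influxdb.conf

    # Start the server with an explicit config file
    influxd -config /etc/influxdb/influxdb.conf

    # Override single options via environment variables, e.g. the HTTP bind address and auth flag
    export INFLUXDB_HTTP_BIND_ADDRESS=":8086"
    export INFLUXDB_HTTP_AUTH_ENABLED="true"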

    3. Connecting with the influx client

    Use the influx command-line client. Its options are:

    Usage of influx:
      -version
           Display the version and exit.
      -host 'host name'
           Host to connect to.
      -port 'port #'
           Port to connect to.
      -socket 'unix domain socket'
           Unix socket to connect to.
      -database 'database name'
           Database to connect to the server.
      -password 'password'
           Password to connect to the server.  Leaving blank will prompt for password (--password '').
      -username 'username'
           Username to connect to the server.
      -ssl
           Use https for requests.
      -unsafeSsl
           Set this when connecting to the cluster using https and not use SSL verification.
      -execute 'command'
           Execute command and quit.
      -type 'influxql|flux'
           Type specifies the query language for executing commands or when invoking the REPL.
      -format 'json|csv|column'
           Format specifies the format of the server responses:  json, csv, or column.
      -precision 'rfc3339|h|m|s|ms|u|ns'
           Precision specifies the format of the timestamp:  rfc3339, h, m, s, ms, u or ns.
      -consistency 'any|one|quorum|all'
           Set write consistency level: any, one, quorum, or all
      -pretty
           Turns on pretty print for the json format.
      -import
           Import a previous database export from file
      -pps
           How many points per second the import will allow.  By default it is zero and will not throttle importing.
      -path
           Path to file to import
      -compressed
           Set to true if the import file is compressed

    Examples:

    # Connect to InfluxDB on the local machine
    influx

    # Connect to InfluxDB on a specific host
    influx -host localhost

    # Connect to a specific host and port
    influx -host localhost -port 8086

    # Connect to a specific database on a specific host and port
    influx -host localhost -port 8086 -database testdb

    # Connect to a specific database as a specific user
    influx -host localhost -port 8086 -database testdb -username root

    # Connect with a username and password
    influx -host localhost -port 8086 -database testdb -username root -password root

    # Execute a command remotely and exit
    influx -execute 'show databases;'

    # Run a query against a specific database and return the result as JSON
    influx -host localhost -port 8086 -database testdb -username root -password root -execute 'select * from win' -format 'json'

    # Same as above, with pretty-printed JSON output
    influx -host localhost -port 8086 -database testdb -username root -password root -execute 'select * from win' -format 'json' -pretty
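    Once connected, a minimal first session might look like the following sketch (testdb and the cpu measurement are placeholder names; INSERT takes a point in InfluxDB line protocol):

    influx -precision rfc3339
    > CREATE DATABASE testdb
    > USE testdb
    > INSERT cpu,host=server01 value=0.64
    > SELECT * FROM cpu
    > SHOW MEASUREMENTS
    > quit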
    

      

    Original post: https://www.cnblogs.com/gongniue/p/11315721.html