  Deploying a two-replica Redis Deployment in Kubernetes and monitoring it with Prometheus

    Preface

    The Redis Deployment described here holds Redis slaves of a master running outside the Kubernetes cluster. Prometheus monitors Redis through redis-exporter, which runs in the same Pod as Redis. The Prometheus Operator CustomResources used in this article are Prometheus and PodMonitor, as shown below:

    $ k get prometheus
    NAME   AGE
    k8s    3d5h
    
    $ k get podmonitors.monitoring.coreos.com --all-namespaces
    NAMESPACE   NAME          AGE
    default     example-app   4h3m
    

    The monitoring works like this: the Prometheus CustomResource is configured to select PodMonitors, and each PodMonitor selects the target Pods through a label selector. Everything else is handled by the Prometheus Operator.
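
    To make the two-level selection concrete, here is a condensed, comment-only sketch of the label chain; the names and labels are the ones from the manifests shown later in this post:

    # Prometheus CR "k8s" (namespace monitoring)
    #   podMonitorNamespaceSelector -> namespaces labeled prometheus.monitor=true (here: default)
    #   podMonitorSelector          -> PodMonitors labeled team=frontend (here: example-app)
    # PodMonitor "example-app" (namespace default)
    #   spec.selector               -> Pods labeled app=redis2s
    #   podMetricsEndpoints         -> scrapes targetPort 9121 (the redis-exporter container)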

    Environment

    You need a working Kubernetes cluster. I use zsh with the kubectl plugin (which provides the `k` alias used below) and kubens installed; both can be found on GitHub. A sketch of that shell setup follows.
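
    A minimal sketch of the shell setup, assuming oh-my-zsh; the plugin list and the kubens usage are assumptions, not taken from the original post:

    # ~/.zshrc: the oh-my-zsh kubectl plugin provides completion and the `k` alias
    plugins=(git kubectl)

    # kubens (from the kubectx project) switches the namespace kubectl defaults to
    kubens default       # work on the Redis manifests
    kubens monitoring    # later, work on the Prometheus CR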

     ~$ k version
    Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-aliyun.1", GitCommit:"51888f5", GitTreeState:"", BuildDate:"2019-10-16T08:29:13Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
    

    Installing Prometheus with kube-prometheus

    See the kube-prometheus README on GitHub for details; I am using the latest version at the time of writing.

    git clone https://github.com/coreos/kube-prometheus.git
    kubectl apply -f manifests/setup
    kubectl create -f manifests/
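
    After the manifests are applied, it is worth checking that the monitoring stack is up before moving on; a quick sketch using the kube-prometheus defaults:

    # the CRDs (prometheuses, podmonitors, servicemonitors, ...) come from manifests/setup
    kubectl get crd | grep monitoring.coreos.com

    # all kube-prometheus components run in the monitoring namespace
    kubectl -n monitoring get pods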
    

    Deploying Redis to Kubernetes

    This time we deploy Redis slaves to Kubernetes. Everything below is in the default namespace:

    # redis master configuration: a Service without a selector plus manual Endpoints
    # expose the external master (192.168.10.11:9911) in-cluster under the name redis2
    $ ls
    master_redis.yaml  redis2s.yaml  podmonitor.yaml  redis_config.yaml
    $ cat master_redis.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis2
    spec:
      ports:
        - port: 9911
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: redis2
    subsets:
      - addresses:
          - ip: 192.168.10.11
        ports:
          - port: 9911
    
    
    $ cat redis2s.yaml                                                                                    
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: redis2s
    spec:
      replicas: 2
      template:
        metadata:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "9121"
          labels:
            app: redis2s
            kind: redis
        spec:
          containers:
          - name: redis
            image: redis:2.6
            command: ["redis-server"]
            args: ["/etc/redis/redis2s.conf"]
            ports:
            - containerPort: 9911
            volumeMounts:
            - name: v3redis-config
              mountPath: /etc/redis/
          - name: redis-exporter
            image: oliver006/redis_exporter:latest
            args: ["--redis.addr","redis://localhost:9911","--redis.password","redis2s",]
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
            ports:
            - containerPort: 9121
          volumes:
            - name: v3redis-config
              configMap:
                name: v3redis-config
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis2s
    spec:
      ports:
      - port: 9911
      selector:
        app: redis2s
        kind: redis
    
    $ cat redis_config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: v3redis-config
      namespace: default
    data:
      redis1.conf: |
        daemonize no
        pidfile /usr/local/redis/redis1.pid
        timeout 0
        dir ./
      redis2s.conf: |
        daemonize no
        pidfile /usr/local/redis/redis2s.pid
        port 9911
        timeout 0
        tcp-keepalive 60
        loglevel notice
        logfile redis2s.log
        databases 16
        stop-writes-on-bgsave-error yes
        rdbcompression yes
        rdbchecksum yes
        dbfilename redis2s.rdb
        dir ./
        slaveof redis2  9911
        slave-serve-stale-data yes
        slave-read-only yes
    
    
    $ cat podmonitor.yaml
    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: example-app
      labels:
        team: frontend
    spec:
      selector:
        matchLabels:
          app: redis2s
      podMetricsEndpoints:
      - targetPort: 9121
    
    

    Then run the following commands (the namespace label is what the Prometheus CR's podMonitorNamespaceSelector will match on later):

    k label namespaces default prometheus.monitor=true
    k apply -f .
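
    Once the manifests are applied, confirm that the namespace label landed, that both replicas are running, and that redis-exporter actually serves metrics; the pod name in the port-forward command below is a placeholder:

    # the namespace label the Prometheus CR will select on
    k get namespace default --show-labels

    # both replicas should be Running with 2/2 containers (redis + redis-exporter)
    k get pods -l app=redis2s

    # spot-check the exporter; substitute a real pod name from the previous command
    k port-forward redis2s-xxxxxxxxxx-xxxxx 9121:9121 &
    curl -s localhost:9121/metrics | grep redis_up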
    

    prometheus-operator configuration

    The following is done in the monitoring namespace.
    The PodMonitor (a prometheus-operator CustomResource) was already created above; next, configure the prometheus-operator CR (CustomResource) Prometheus. Only the following fields need to be changed:

      podMonitorNamespaceSelector:
        matchLabels:
          prometheus.monitor: "true"
      podMonitorSelector:
        matchLabels:
          team: frontend
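
    One way to apply this change is a merge patch against the existing Prometheus object (kubectl edit works just as well); the command below is a sketch, not taken from the original post:

    k -n monitoring patch prometheus k8s --type merge -p \
      '{"spec":{"podMonitorNamespaceSelector":{"matchLabels":{"prometheus.monitor":"true"}},"podMonitorSelector":{"matchLabels":{"team":"frontend"}}}}'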
    

    The complete configuration:

    k get prometheus k8s -o yaml
    apiVersion: monitoring.coreos.com/v1
    kind: Prometheus
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"k8s"},"name":"k8s","namespace":"monitoring"},"spec":{"alerting":{"alertmanagers":[{"name":"alertmanager-main","namespace":"monitoring","port":"web"}]},"baseImage":"quay.io/prometheus/prometheus","nodeSelector":{"kubernetes.io/os":"linux"},"podMonitorNamespaceSelector":{},"podMonitorSelector":{},"replicas":2,"resources":{"requests":{"memory":"400Mi"}},"ruleSelector":{"matchLabels":{"prometheus":"k8s","role":"alert-rules"}},"securityContext":{"fsGroup":2000,"runAsNonRoot":true,"runAsUser":1000},"serviceAccountName":"prometheus-k8s","serviceMonitorNamespaceSelector":{},"serviceMonitorSelector":{},"version":"v2.11.0"}}
        project.cattle.io/namespaces: '["catalog","default","monitoring"]'
      creationTimestamp: "2019-12-17T01:41:43Z"
      generation: 4
      labels:
        prometheus: k8s
      name: k8s
      namespace: monitoring
      resourceVersion: "37485079"
      selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/k8s
      uid: 60f7b6aa-206e-11ea-9e1d-064ec46212f4
    spec:
      additionalScrapeConfigs:
        key: prometheus-additional.yaml
        name: additional-scrape-configs
      alerting:
        alertmanagers:
        - name: alertmanager-main
          namespace: monitoring
          port: web
      baseImage: quay.io/prometheus/prometheus
      nodeSelector:
        kubernetes.io/os: linux
      podMonitorNamespaceSelector:
        matchLabels:
          prometheus.monitor: "true"
      podMonitorSelector:
        matchLabels:
          team: frontend
      replicas: 2
      resources:
        requests:
          memory: 400Mi
      ruleSelector:
        matchLabels:
          prometheus: k8s
          role: alert-rules
      rules:
        alert: {}
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 1000
      serviceAccountName: prometheus-k8s
      serviceMonitorNamespaceSelector: {}
      serviceMonitorSelector: {}
      version: v2.11.0
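
    Once the Prometheus CR is updated, the operator regenerates the scrape configuration and the redis-exporter endpoints should show up as targets; a quick check using the default prometheus-k8s Service from kube-prometheus:

    # expose the Prometheus web UI locally, then open http://localhost:9090/targets
    k -n monitoring port-forward svc/prometheus-k8s 9090 &

    # or query the targets API directly and look for the 9121 exporter endpoints
    curl -s localhost:9090/api/v1/targets | grep 9121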
    

    The configuration above also includes some custom Prometheus scrape configuration through additionalScrapeConfigs; if you want to add your own, refer to the prometheus-operator documentation on additional scrape configs.
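
    For completeness, additionalScrapeConfigs refers to a key inside a Secret in the monitoring namespace; a minimal sketch of creating such a Secret follows, where the example job and its target are hypothetical and not part of the original setup:

    $ cat prometheus-additional.yaml
    - job_name: external-redis-master
      static_configs:
      - targets: ["192.168.10.11:9121"]

    # the Secret name and key match the additionalScrapeConfigs block above
    $ k -n monitoring create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml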

    Grafana configuration

    To visualize the redis-exporter metrics in Grafana, refer to the Grafana dashboard published by the redis-exporter project.
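
    kube-prometheus already ships a Grafana instance in the monitoring namespace, so the dashboard can be imported there; a sketch using the default Service name:

    # expose Grafana locally, then import the redis-exporter dashboard JSON
    # via the Grafana UI (Dashboards -> Import) at http://localhost:3000
    k -n monitoring port-forward svc/grafana 3000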

  Original post: https://www.cnblogs.com/WisWang/p/12073316.html