Five Steps to Quickly Deploy ES with k8s

    Preface
    Today I'd like to walk you through deploying ES on k8s in five quick steps. If you're interested, read on~

    Since we are using local storage, we need to create the PVs first.

    1. Create the StorageClass
    local-elasticsearch.yaml

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: local-elasticsearch
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    
    A StorageClass is a kind of template declaration for PVs: with the kubernetes.io/no-provisioner provisioner, the PVs are created manually, and WaitForFirstConsumer delays volume binding until a pod that uses the claim is actually scheduled.
    kubectl apply -f local-elasticsearch.yaml
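
    To confirm the class was registered (the PROVISIONER column should read kubernetes.io/no-provisioner):

    kubectl get storageclass local-elasticsearch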
    

    2. Create the PVs
    elasticsearch-pv-01.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-es-0                            # rename for each PV: local-es-1, local-es-2, ...
    spec:
      capacity:
        storage: 3Gi
      volumeMode: Filesystem        # Filesystem is the default; only Block mode needs the BlockVolume feature gate
      accessModes:
      - ReadWriteOnce
      storageClassName: local-elasticsearch        # must match the name of the StorageClass created above
      persistentVolumeReclaimPolicy: Retain
      local:
        path: /data/local-es        # local storage path on the node; create this directory in advance
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - k8s-node4            # name of the node hosting this PV's local storage; change per PV (xxx/yyy, ...)
    

    Since the ES cluster runs 3 replicas, create the local storage directory on each of 3 nodes and create 3 PVs, one per node, as sketched below.
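
    A minimal per-node preparation sketch (run on each of the three nodes; the path matches the PV spec above):

    mkdir -p /data/local-es

    Then apply the three PV manifests: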

    kubectl apply -f elasticsearch-pv-01.yaml
    kubectl apply -f elasticsearch-pv-02.yaml
    kubectl apply -f elasticsearch-pv-03.yaml
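
    Until a consuming pod is scheduled, the PVs should list as Available (WaitForFirstConsumer defers binding):

    kubectl get pv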
    

    3. Create a PVC for mounting the backup directory
    Note that this claim is declared in the elasticsearch namespace, so the namespace created in step 4 must exist before you apply it.
    elasticsearch-pvc.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: elasticsearch-pvc
      namespace: elasticsearch
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 30Gi
      storageClassName: nfs-client            # an NFS StorageClass, used to store data on NFS
    
    kubectl apply -f elasticsearch-pvc.yaml
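
    Check that the claim binds (the nfs-client provisioner should provision a volume for it):

    kubectl get pvc -n elasticsearch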
    

    4. Create the ES namespace

    kubectl create namespace elasticsearch
    

    5. Deploy with Helm
    Add the local Helm repo

    helm repo add --username **** --password **** elk http://69.172.74.253:8080/chartrepo/elk
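
    Optionally refresh the repo index and confirm the chart version is available before installing:

    helm repo update
    helm search repo elk/elasticsearch --versions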
    

    For nodePort, just use a port you have already reserved.
    Here pvc.enabled enables the PVC mount, and pvc.name sets the name of the PVC to bind, matching the PVC created above.

    helm upgrade --install elasticsearch elk/elasticsearch \
      --version 7.8.0 \
      --set service.nodePort=xxxx \
      --set pvc.enabled=true \
      --set pvc.name=elasticsearch-pvc \
      --namespace=elasticsearch
    

    At this point the deployment is complete.
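
    To sanity-check it, watch the pods start and query cluster health through the NodePort (substitute a node IP and the nodePort you set; a healthy 3-replica cluster reports status green):

    kubectl get pods -n elasticsearch
    curl "http://<node-ip>:<nodePort>/_cluster/health?pretty"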

    The ES Helm chart explained
    values.yaml

    ---
    clusterName: "elasticsearch"
    nodeGroup: "master"
    
    # The service that non master groups will try to connect to when joining the cluster
    # This should be set to clusterName + "-" + nodeGroup for your master group
    masterService: ""
    
    # Elasticsearch roles that will be applied to this nodeGroup
    # These will be set as environment variables. E.g. node.master=true
    roles:
      master: "true"
      ingest: "true"
      data: "true"
    
    replicas: 3
    minimumMasterNodes: 2
    
    esMajorVersion: ""
    
    # Allows you to add any config files in /usr/share/elasticsearch/config/
    # such as elasticsearch.yml and log4j2.properties
    esConfig:
     elasticsearch.yml: |
    #  path.repo: "/usr/share/elasticsearch/myBackup"
    #  log4j2.properties: |
    #    key = value
    
    # Extra environment variables to append to this nodeGroup
    # This will be appended to the current 'env:' key. You can use any of the kubernetes env
    # syntax here
    extraEnvs: []
    #  - name: MY_ENVIRONMENT_VAR
    #    value: the_value_goes_here
    
    # Allows you to load environment variables from kubernetes secret or config map
    envFrom: []
    # - secretRef:
    #     name: env-secret
    # - configMapRef:
    #     name: config-map
    
    # A list of secrets and their paths to mount inside the pod
    # This is useful for mounting certificates for security and for mounting
    # the X-Pack license
    secretMounts: []
    #  - name: elastic-certificates
    #    secretName: elastic-certificates
    #    path: /usr/share/elasticsearch/config/certs
    #    defaultMode: 0755
    
    image: "69.172.74.253:8080/elk/elasticsearch"
    imageTag: "7.7.1"
    imagePullPolicy: "IfNotPresent"
    
    podAnnotations: {}
      # iam.amazonaws.com/role: es-cluster
    
    # additionals labels
    labels: {}
    
    esJavaOpts: "-Xmx1g -Xms1g"
    
    resources:
      requests:
        cpu: "1000m"
        memory: "2Gi"
      limits:
        cpu: "1000m"
        memory: "2Gi"
    
    initResources: {}
      # limits:
      #   cpu: "25m"
      #   # memory: "128Mi"
      # requests:
      #   cpu: "25m"
      #   memory: "128Mi"
    
    sidecarResources: {}
      # limits:
      #   cpu: "25m"
      #   # memory: "128Mi"
      # requests:
      #   cpu: "25m"
      #   memory: "128Mi"
    
    networkHost: "0.0.0.0"
    
    volumeClaimTemplate:
      accessModes: ["ReadWriteOnce" ]
      volumeMode: Filesystem
      storageClassName: local-elasticsearch
      resources:
        requests:
          storage: 3Gi
    
    rbac:
      create: false
      serviceAccountName: ""
    
    podSecurityPolicy:
      create: false
      name: ""
      spec:
        privileged: true
        fsGroup:
          rule: RunAsAny
        runAsUser:
          rule: RunAsAny
        seLinux:
          rule: RunAsAny
        supplementalGroups:
          rule: RunAsAny
        volumes:
          - secret
          - configMap
          - persistentVolumeClaim
    
    persistence:
      enabled: true
      annotations: {}
      #annotations: {volume.beta.kubernetes.io/storage-class: "nfs-client"}
    
    pvc:
      enabled: false
      name: elasticsearch-pvc
    
    extraVolumes: []
      # - name: extras
      #   emptyDir: {}
    
    extraVolumeMounts: []
      # - name: extras
      #   mountPath: /usr/share/extras
      #   readOnly: true
    
    extraContainers: []
      # - name: do-something
      #   image: busybox
      #   command: ['do', 'something']
    
    extraInitContainers: []
      # - name: do-something
      #   image: busybox
      #   command: ['do', 'something']
    
    # This is the PriorityClass settings as defined in
    # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
    priorityClassName: ""
    
    # By default this will make sure two pods don't end up on the same node
    # Changing this to a region would allow you to spread pods across regions
    antiAffinityTopologyKey: "kubernetes.io/hostname"
    
    # Hard means that by default pods will only be scheduled if there are enough nodes for them
    # and that they will never end up on the same node. Setting this to soft will do this "best effort"
    antiAffinity: "hard"
    
    # This is the node affinity settings as defined in
    # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
    nodeAffinity: {}
    
    # The default is to deploy all pods serially. By setting this to parallel all pods are started at
    # the same time when bootstrapping the cluster
    podManagementPolicy: "Parallel"
    
    # The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
    # there are many services in the current namespace.
    # If you experience slow pod startups you probably want to set this to `false`.
    enableServiceLinks: true
    
    protocol: http
    httpPort: 9200
    transportPort: 9300
    
    service:
      labels: {}
      labelsHeadless: {}
      type: NodePort
      nodePort: 32060
      annotations: {}
      httpPortName: http
      transportPortName: transport
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
    
    updateStrategy: RollingUpdate
    
    # This is the max unavailable setting for the pod disruption budget
    # The default value of 1 will make sure that kubernetes won't allow more than 1
    # of your pods to be unavailable during maintenance
    maxUnavailable: 1
    
    podSecurityContext:
      fsGroup: 1000
      runAsUser: 1000
    
    securityContext:
      capabilities:
        drop:
        - ALL
      # readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
    
    # How long to wait for elasticsearch to stop gracefully
    terminationGracePeriod: 120
    
    sysctlVmMaxMapCount: 262144
    
    readinessProbe:
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 3
      timeoutSeconds: 5
    
    # https://www.elastic.co/guide/en/elasticsearch/reference/7.8/cluster-health.html#request-params wait_for_status
    clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
    
    ## Use an alternate scheduler.
    ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
    ##
    schedulerName: ""
    
    imagePullSecrets:
      - name: registry-secret
    nodeSelector: {}
    tolerations: []
    
    # Enabling this will publicly expose your Elasticsearch instance.
    # Only enable this if you have security enabled on your cluster
    ingress:
      enabled: false
      annotations: {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local
    
    nameOverride: ""
    fullnameOverride: ""
    
    # https://github.com/elastic/helm-charts/issues/63
    masterTerminationFix: false
    
    lifecycle: {}
      # preStop:
      #   exec:
      #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      # postStart:
      #   exec:
      #     command:
      #       - bash
      #       - -c
      #       - |
      #         #!/bin/bash
      #         # Add a template to adjust number of shards/replicas
      #         TEMPLATE_NAME=my_template
      #         INDEX_PATTERN="logstash-*"
      #         SHARD_COUNT=8
      #         REPLICA_COUNT=1
      #         ES_URL=http://localhost:9200
      #         while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
      #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
    
    sysctlInitContainer:
      enabled: true
    
    keystore: []
    
    # Deprecated
    # please use the above podSecurityContext.fsGroup instead
    fsGroup: ""
    

    Those are all of the fields. Below I pull out some commonly used ones to explain; the rest can be left at their defaults.

    replicas: 3                                          # number of pod replicas
    
    minimumMasterNodes: 2                                # minimum number of master-eligible nodes (quorum) for the ES cluster
    
    esConfig:                                            # ES config files, mounted into the pod so they can be customized
     elasticsearch.yml: |
    #  path.repo: "/usr/share/elasticsearch/myBackup"
    #  log4j2.properties: |
    #    key = value
    
    image: "69.172.74.253:8080/elk/elasticsearch"        # ES image address
    imageTag: "7.7.1"                                    # ES image tag
    imagePullPolicy: "IfNotPresent"                      # pull policy: IfNotPresent only pulls when the image is missing locally
    
    volumeClaimTemplate:                                 # persistent storage template
      accessModes: ["ReadWriteOnce" ]                    # access mode
      volumeMode: Filesystem                             # volume mode
      storageClassName: local-elasticsearch              # StorageClass name; the class maps to the actual backing storage
      resources:
        requests:
          storage: 3Gi                                   # requested storage size
    
    pvc:
      enabled: false                                     # whether to enable the PVC mount
      name: elasticsearch-pvc                            # PVC name
    
    
    imagePullSecrets:                                    # secret for pulling from a private image registry
    - name: registry-secret
    nodeSelector: {}                                     # node selector
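
    Instead of a long list of --set flags, these fields can also be overridden with a values file (a sketch; my-values.yaml is a hypothetical file name):

    # my-values.yaml (hypothetical): override only what differs from the chart defaults
    replicas: 3
    minimumMasterNodes: 2
    imageTag: "7.7.1"
    pvc:
      enabled: true
      name: elasticsearch-pvc

    helm upgrade --install elasticsearch elk/elasticsearch \
      --version 7.8.0 -f my-values.yaml --namespace=elasticsearch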
    

    That's all for this time; see you in the next one! Of course, if you know a faster or more convenient approach, feel free to share it with me~

    Original article: https://www.cnblogs.com/eflypro/p/13949124.html