  • Deploying a Nacos configuration center on Kubernetes

    A few words up front

    As an ops engineer, I was not especially familiar with the Nacos configuration center at first. Nacos is both a configuration center and a service registry, so you can think of it as roughly a combination of Eureka and Apollo.

    Apollo's official GitHub has documentation and images for deploying it on k8s. The Nacos official documentation, by contrast, left me thoroughly confused.

    The official docs also deploy MySQL inside the k8s cluster; I recommend against doing that and keeping the database outside.
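    Since MySQL lives outside the cluster, the database has to be prepared by hand first. A minimal sketch, using the database name, user, and password that the ConfigMap in Part 3 expects (adjust for your environment); the nacos-mysql.sql schema file ships inside the nacos-server release tarball:

    # on the external MySQL host (192.168.50.99 in this walkthrough)
    mysql -uroot -p -e "CREATE DATABASE nacos_devtest DEFAULT CHARACTER SET utf8mb4;"
    mysql -uroot -p -e "CREATE USER 'nacos'@'%' IDENTIFIED BY 'aixnacos';"
    mysql -uroot -p -e "GRANT ALL PRIVILEGES ON nacos_devtest.* TO 'nacos'@'%'; FLUSH PRIVILEGES;"
    # the tarball extracts to a nacos/ directory
    mysql -uroot -p nacos_devtest < nacos/conf/nacos-mysql.sql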

    Part 1: Build the Nacos image

    The existing Nacos images are huge, and the official ones make no attempt at optimization, which feels sloppy. So here we rebuild the JDK and Nacos image on top of an Alpine base image.

    Project repository: https://github.com/skymyyang/nacos-docker

    I only modified the Dockerfile under the build directory; everything else is untouched.

    After building, I pushed the image to my own Aliyun registry, for no reason other than fast pulls. Image address: registry.cn-beijing.aliyuncs.com/skymyyang/nacos:1.3.0

    It is a public image repository, so anyone can use it.
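    For reference, a sketch of the build-and-push flow, assuming you cloned the repo above and placed the nacos-server-1.3.0.tar.gz release into the build directory:

    git clone https://github.com/skymyyang/nacos-docker.git
    cd nacos-docker/build
    docker build --build-arg NACOS_VERSION=1.3.0 \
      -t registry.cn-beijing.aliyuncs.com/skymyyang/nacos:1.3.0 .
    docker login registry.cn-beijing.aliyuncs.com
    docker push registry.cn-beijing.aliyuncs.com/skymyyang/nacos:1.3.0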

    The complete Dockerfile:

    FROM alpine:3.11.2
    MAINTAINER skymyyang yang-li@live.cn
    
    ENV LANG=C.UTF-8 \
        TZ=Asia/Shanghai
    # set the timezone, switch apk to the Aliyun mirror, and install base packages
    RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
        && echo $TZ > /etc/timezone \
        && sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
        && apk update \
        && apk add --update procps \
        && apk add --no-cache ca-certificates ttf-dejavu tzdata tini bash
    
    ARG NACOS_VERSION=1.3.0
    
    COPY nacos-server-${NACOS_VERSION}.tar.gz /home
    
    # install the JRE, unpack the Nacos distribution, then drop the tarball and
    # the stock scripts/configs that the ADD instructions below replace
    RUN apk add --no-cache openjdk8-jre \
        && tar -xzf /home/nacos-server-${NACOS_VERSION}.tar.gz -C /home \
        && rm -rf /home/nacos-server-${NACOS_VERSION}.tar.gz /home/nacos/bin/* /home/nacos/conf/*.properties /home/nacos/conf/*.example /home/nacos/conf/nacos-mysql.sql
    # set environment
    ENV MODE="cluster" \
        PREFER_HOST_MODE="ip" \
        BASE_DIR="/home/nacos" \
        CLASSPATH=".:/home/nacos/conf:$CLASSPATH" \
        CLUSTER_CONF="/home/nacos/conf/cluster.conf" \
        FUNCTION_MODE="all" \
        JAVA_HOME="/usr/lib/jvm/java-1.8-openjdk" \
        NACOS_USER="nacos" \
        JAVA="/usr/lib/jvm/java-1.8-openjdk/bin/java" \
        JVM_XMS="2g" \
        JVM_XMX="2g" \
        JVM_XMN="1g" \
        JVM_MS="128m" \
        JVM_MMS="320m" \
        NACOS_DEBUG="n" \
        TOMCAT_ACCESSLOG_ENABLED="false"
    
    WORKDIR /$BASE_DIR
    
    ADD bin/docker-startup.sh bin/docker-startup.sh
    ADD conf/application.properties conf/application.properties
    ADD init.d/custom.properties init.d/custom.properties
    
    # set startup log dir
    RUN mkdir -p logs \
        && cd logs \
        && touch start.out \
        && ln -sf /dev/stdout start.out \
        && ln -sf /dev/stderr start.out
    RUN chmod +x bin/docker-startup.sh \
        && rm -rf /var/cache/apk/*
    
    EXPOSE 8848
    ENTRYPOINT ["bin/docker-startup.sh"]
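    Before moving to k8s, you can smoke-test the image with plain Docker. This is only a sketch: it assumes the bundled docker-startup.sh keeps the official nacos-docker image's environment handling (MODE, SPRING_DATASOURCE_PLATFORM, and the MYSQL_SERVICE_* variables the StatefulSet in Part 3 also passes):

    docker run -d --name nacos-smoke -p 8848:8848 \
      -e MODE=standalone \
      -e SPRING_DATASOURCE_PLATFORM=mysql \
      -e MYSQL_SERVICE_HOST=192.168.50.99 \
      -e MYSQL_SERVICE_PORT=3306 \
      -e MYSQL_SERVICE_DB_NAME=nacos_devtest \
      -e MYSQL_SERVICE_USER=nacos \
      -e MYSQL_SERVICE_PASSWORD=aixnacos \
      registry.cn-beijing.aliyuncs.com/skymyyang/nacos:1.3.0
    # the console should answer on http://localhost:8848/nacos/
    docker logs -f nacos-smoke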

    Part 2: Configure the NFS StorageClass

    According to the official docs, the nacos/nacos-peer-finder-plugin:1.0 image is what lets a Nacos cluster scale dynamically: it watches the headless service for peer pods and rewrites cluster.conf as members come and go. I won't claim to know every internal detail; just install it and move on. If you never need dynamic scaling, you arguably don't need it at all.

    For k8s external storage, the official provisioner repo is: https://github.com/kubernetes-incubator/external-storage

    All it takes is applying the NFS provisioner's deployment, rbac, and class manifests, shown below.
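    Before applying them, the NFS export referenced below (192.168.50.99:/nfsdata) must actually exist. A minimal sketch on the NFS server, assuming a stock nfs-utils install; note that every k8s node also needs the NFS client utilities (nfs-utils / nfs-common) so kubelet can mount the volumes:

    mkdir -p /nfsdata
    echo "/nfsdata *(rw,sync,no_root_squash)" >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -r
    showmount -e localhost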

    deployment.yaml 

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 192.168.50.99
                - name: NFS_PATH
                  value: /nfsdata
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.50.99
                path: /nfsdata

    rbac.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io

    class.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
    parameters:
      archiveOnDelete: "false"
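    Apply the three manifests and verify that the provisioner pod is running and the StorageClass is registered:

    kubectl apply -f rbac.yaml -f deployment.yaml -f class.yaml
    kubectl get pods -l app=nfs-client-provisioner
    kubectl get storageclass managed-nfs-storage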

    Part 3: Deploy Nacos

    Here everything goes into the default namespace; I recommend creating a dedicated namespace instead, for example:
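    kubectl create namespace nacos
    # then deploy with: kubectl apply -n nacos -f nacos-pvc-nfs.yml
    # (remember to also change the namespace in the provisioner RBAC manifests above;
    #  the manifests in this post stay in default)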

    nacos-pvc-nfs.yml

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nacos-headless
      labels:
        app: nacos
      annotations:
        service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    spec:
      ports:
        - port: 8848
          name: server
          targetPort: 8848
      clusterIP: None
      selector:
        app: nacos
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nacos-cm
    data:
      mysql.host: "192.168.50.99"
      mysql.db.name: "nacos_devtest"
      mysql.port: "3306"
      mysql.user: "nacos"
      mysql.password: "aixnacos"
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nacos
    spec:
      serviceName: nacos-headless
      replicas: 3
      template:
        metadata:
          labels:
            app: nacos
          annotations:
            pod.alpha.kubernetes.io/initialized: "true"
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: "app"
                        operator: In
                        values:
                          - nacos
                  topologyKey: "kubernetes.io/hostname"
          serviceAccountName: nfs-client-provisioner
          initContainers:
            - name: peer-finder-plugin-install
              image: nacos/nacos-peer-finder-plugin:1.0
              imagePullPolicy: Always
              volumeMounts:
                - mountPath: "/home/nacos/plugins/peer-finder"
                  name: plugindir
          containers:
            - name: nacos
              imagePullPolicy: IfNotPresent
              image: registry.cn-beijing.aliyuncs.com/skymyyang/nacos:1.3.0
              resources:
                requests:
                  memory: "2Gi"
                  cpu: "500m"
              ports:
                - containerPort: 8848
                  name: client-port
              env:
                - name: NACOS_REPLICAS
                  value: "3"
                - name: SERVICE_NAME
                  value: "nacos-headless"
                - name: DOMAIN_NAME
                  value: "cluster.local"
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                - name: MYSQL_SERVICE_HOST
                  valueFrom:
                    configMapKeyRef:
                      name: nacos-cm
                      key: mysql.host
                - name: MYSQL_SERVICE_DB_NAME
                  valueFrom:
                    configMapKeyRef:
                      name: nacos-cm
                      key: mysql.db.name
                - name: MYSQL_SERVICE_PORT
                  valueFrom:
                    configMapKeyRef:
                      name: nacos-cm
                      key: mysql.port
                - name: MYSQL_SERVICE_USER
                  valueFrom:
                    configMapKeyRef:
                      name: nacos-cm
                      key: mysql.user
                - name: MYSQL_SERVICE_PASSWORD
                  valueFrom:
                    configMapKeyRef:
                      name: nacos-cm
                      key: mysql.password
                - name: NACOS_SERVER_PORT
                  value: "8848"
                - name: NACOS_APPLICATION_PORT
                  value: "8848"
                - name: PREFER_HOST_MODE
                  value: "hostname"
              volumeMounts:
                - name: plugindir
                  mountPath: /home/nacos/plugins/peer-finder
                - name: datadir
                  mountPath: /home/nacos/data
                - name: logdir
                  mountPath: /home/nacos/logs
      volumeClaimTemplates:
        - metadata:
            name: plugindir
            annotations:
              volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
          spec:
            accessModes: [ "ReadWriteMany" ]
            resources:
              requests:
                storage: 5Gi
        - metadata:
            name: datadir
            annotations:
              volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
          spec:
            accessModes: [ "ReadWriteMany" ]
            resources:
              requests:
                storage: 5Gi
        - metadata:
            name: logdir
            annotations:
              volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
          spec:
            accessModes: [ "ReadWriteMany" ]
            resources:
              requests:
                storage: 5Gi
      selector:
        matchLabels:
          app: nacos
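    Apply the manifest, then watch the three pods come up one at a time (a StatefulSet starts pods in order) and check that all members landed in cluster.conf; that check assumes the peer-finder plugin behaves as described in Part 2. With the plugin in place, later scale-out is just a kubectl scale away:

    kubectl apply -f nacos-pvc-nfs.yml
    kubectl rollout status statefulset/nacos
    kubectl get pods -l app=nacos -o wide
    # each member should list all three peers
    kubectl exec nacos-0 -- cat /home/nacos/conf/cluster.conf
    # dynamic scale-out, which is what the peer-finder plugin enables
    kubectl scale statefulset nacos --replicas=5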

    Create an Ingress resource to expose the service externally.

    nacos-ingress.yml

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nacos-ingress
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: nacos-dev.aixbx.com 
        http:
          paths:
          - backend:
              serviceName: nacos-headless
              servicePort: 8848
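    Apply it, point nacos-dev.aixbx.com (or your own hostname) at the ingress controller's address, and the console should come up at the standard /nacos path (default credentials nacos/nacos). A quick check without DNS, where <NODE_IP> is a placeholder for a node running the nginx ingress controller:

    kubectl apply -f nacos-ingress.yml
    kubectl get ingress nacos-ingress
    curl -H "Host: nacos-dev.aixbx.com" -I http://<NODE_IP>/nacos/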
  • Original post: https://www.cnblogs.com/skymyyang/p/13386301.html