  Jenkins-K8s-Helm-Eureka-Harbor-Gitlab-MySQL-NFS: a hands-on microservice release platform

    Building a Jenkins microservice release platform on K8S

    Summary of what we'll implement:

    1. Release pipeline design walkthrough
    2. Prepare the base environment
      1. K8s environment (deploy Ingress Controller, CoreDNS, Calico/Flannel)
      2. Deploy the Gitlab code repository
      3. Configure local Git, push the test code, and create the project in Gitlab
      4. Deploy the Pinpoint full-chain tracing system (modify the Dockerfile in advance, build and push the image)
      5. Deploy the Harbor image registry (with the Helm chart repository enabled)
      6. Deploy the Helm package manager on the master node (configure a local Helm repo, upload Helm charts)
      7. Deploy K8S storage (NFS, Ceph) with automatic PV provisioning
      8. Deploy the MySQL cluster (import the microservice databases)
      9. Deploy EFK log collection (appendix)
      10. Deploy the Prometheus monitoring system (appendix)
    3. Deploy Jenkins in Kubernetes
    4. Jenkins Pipeline and parameterized builds
    5. Jenkins dynamically creating agents in K8S
    6. Build a custom Jenkins-Slave image
    7. Build a Jenkins CI system on Kubernetes
    8. Integrate Helm into the Pipeline to release the microservice project

    Release pipeline design walkthrough

    Machine environment

    This environment mainly automates the release and rollout of microservices; the details are implemented with the software stack below. There are of course many ways to automate releases, so if anything is missing, feel free to leave a comment.

    IP address       Hostname      Services
    192.168.25.223   k8s-master01  Kubernetes master node + Jenkins
    192.168.25.225   k8s-node01    Kubernetes node
    192.168.25.226   k8s-node02    Kubernetes node
    192.168.25.227   gitlab-nfs    Gitlab, NFS, Git
    192.168.25.228   harbor        Harbor, MySQL, Docker, Pinpoint

    Prepare the base environment

    K8s environment (deploy Ingress Controller, CoreDNS, Calico/Flannel)

    Deployment commands
    Single-master version:

    ansible-playbook -i hosts single-master-deploy.yml -uroot -k
    

    Multi-master version:

    ansible-playbook -i hosts multi-master-deploy.yml -uroot -k
    

    Deployment control

    If a particular installation stage fails, you can rerun just that part.

    For example, run only the add-on deployment:

    ansible-playbook -i hosts single-master-deploy.yml -uroot -k --tags addons
    

    Example reference: https://github.com/ansible/ansible-examples
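
    After the playbook finishes, it's worth a quick sanity check before moving on. A minimal sketch, assuming kubectl is already configured on the master (the ingress-nginx namespace depends on how your Ingress Controller was installed):

    # all nodes should be Ready, and CoreDNS/Calico (or Flannel) pods Running
    kubectl get nodes -o wide
    kubectl get pods -n kube-system
    kubectl get pods -n ingress-nginx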


    Deploy the Gitlab code repository

    Deploy Docker

    # Uninstall old versions
    $ sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine

    # Set up the repository
    $ sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2

    $ sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo

    # Install Docker Engine
    $ sudo yum install docker-ce docker-ce-cli containerd.io -y

    $ sudo systemctl start docker && sudo systemctl enable docker

    $ sudo docker run hello-world
    

    Deploy Gitlab

    docker run -d \
      --name gitlab \
      -p 8443:443 \
      -p 9999:80 \
      -p 9998:22 \
      -v $PWD/config:/etc/gitlab \
      -v $PWD/logs:/var/log/gitlab \
      -v $PWD/data:/var/opt/gitlab \
      -v /etc/localtime:/etc/localtime \
      passzhang/gitlab-ce-zh:latest
    

    Access URL: http://IP:9999

    On first access you are asked to set the administrator password; then log in with the default admin username root and the password you just set.

    Configure local Git, push the test code, and create the project in Gitlab

    https://github.com/passzhang/simple-microservice

    Branch descriptions:

    • dev1: delivered code

    • dev2: Dockerfile added to build images

    • dev3: K8S resource manifests

    • dev4: microservice tracing added

    • master: final release

    Pull the master branch and push it to the private repository:

    git clone https://github.com/PassZhang/simple-microservice.git

    # cd into the simple-microservice directory
    # Edit the .git/config file and point the push URL at the local Gitlab
    vim /root/simple-microservice/.git/config
    ...
    [remote "origin"]
            url = http://192.168.25.227:9999/root/simple-microservice.git
            fetch = +refs/heads/*:refs/remotes/origin/*
    ...

    # After cloning, also update the database connection settings
    # (xxx-service/src/main/resources/application-fat.yml); in this test the
    # database address becomes 192.168.25.228:3306.
    # Push the code only after the database address has been updated.


    cd simple-microservice
    git config --global user.email "passzhang@example.com"
    git config --global user.name "passzhang"
    git add .
    git commit -m 'all'
    git push origin master
    
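    If you'd rather not edit each application-fat.yml by hand, a bulk substitution works too. A sketch, assuming the JDBC URLs in those files follow the usual jdbc:mysql://host:port/db form:

    cd /root/simple-microservice
    # rewrite the host:port of every JDBC URL in the fat profiles
    grep -rl 'jdbc:mysql' --include='application-fat.yml' . \
      | xargs sed -i 's#jdbc:mysql://[^/]*/#jdbc:mysql://192.168.25.228:3306/#'
    # review the result before committing
    grep -r 'jdbc:mysql' --include='application-fat.yml' .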

    Deploy the Pinpoint full-chain tracing system (modify the Dockerfile in advance, build and push the image)


    Deploy the Harbor image registry (with the Helm chart repository enabled)

    Install Docker and docker-compose

    # wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    # yum install docker-ce -y
    # systemctl start docker && systemctl enable docker
    
    curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
    

    Extract the offline installer and deploy

    # tar zxvf harbor-offline-installer-v1.9.1.tgz
    # cd harbor
    -----------
    # vi harbor.yml
    hostname: 192.168.25.228
    http:
      port: 8088
    -----------
    # ./prepare
    # ./install.sh --with-chartmuseum --with-clair
    # docker-compose ps

    The --with-chartmuseum flag enables chart storage (ChartMuseum).
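
    To confirm ChartMuseum is actually serving, you can query Harbor's chart API once a project (here ms) exists. A minimal check, assuming the default admin credentials:

    # an empty JSON list means the chart repo works but holds no charts yet
    curl -u admin:Harbor12345 http://192.168.25.228:8088/api/chartrepo/ms/charts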

    Configure Docker to trust the registry

    Since Harbor is not served over HTTPS, Docker also needs to be configured to trust it as an insecure registry.

    # cat /etc/docker/daemon.json
    {
      "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
      "insecure-registries": ["192.168.25.228:8088"]
    }
    # systemctl restart docker
    # After configuring the registry, also make sure the K8S master and every
    # Docker node can reach it; the same daemon.json change is needed on each host.
    
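    A quick way to confirm a node trusts the registry after the restart (a minimal check; it should succeed without any TLS errors):

    docker login 192.168.25.228:8088 -u admin -p Harbor12345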
    

    Deploy the Helm package manager on the master node (configure a local Helm repo, upload Helm charts)

    Install the Helm CLI

    # wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
    # tar zxvf helm-v3.0.0-linux-amd64.tar.gz 
    # mv linux-amd64/helm /usr/bin/
    

    Configure domestic chart repository mirrors

    # helm repo add stable http://mirror.azure.cn/kubernetes/charts
    # helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts 
    # helm repo list
    

    Install the push plugin

    # helm plugin install https://github.com/chartmuseum/helm-push
    

    If the network download fails, you can also extract the package shipped with the course material:

    # tar zxvf helm-push_0.7.1_linux_amd64.tar.gz
    # mkdir -p /root/.local/share/helm/plugins/helm-push
    # chmod +x bin/*
    # mv bin plugin.yaml /root/.local/share/helm/plugins/helm-push
    

    Add the repo

    # helm repo add  --username admin --password Harbor12345 myrepo http://192.168.25.228:8088/chartrepo/ms
    

    Push and install a chart

    # helm push ms-0.1.0.tgz --username=admin --password=Harbor12345 http://192.168.25.228:8088/chartrepo/ms
    # helm install ms --username=admin --password=Harbor12345 --version 0.1.0 myrepo/ms
    
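    After the push, the chart should be visible through the repo. A quick check (helm repo update refreshes the local index first):

    helm repo update
    helm search repo myrepo/ms --versions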

    Deploy K8S storage (NFS, Ceph) with automatic PV provisioning

    First prepare an NFS server to provide storage for K8S.

    # yum install nfs-utils -y
    # vi /etc/exports
    /ifs/kubernetes *(rw,no_root_squash)
    # mkdir -p /ifs/kubernetes
    # systemctl start nfs
    # systemctl enable nfs
    

    Also install the nfs-utils package on every Node; it is needed for mounting NFS volumes.
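
    A quick check from any node that the export is visible (assuming nfs-utils is already installed there):

    yum install nfs-utils -y
    showmount -e 192.168.25.227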

    Since K8S has no built-in dynamic provisioner for NFS, we also need to install the nfs-client-provisioner plugin first:

    The configuration files are as follows:

    [root@k8s-master1 nfs-storage-class]# tree  
    .
    ├── class.yaml
    ├── deployment.yaml
    └── rbac.yaml
    
    0 directories, 3 files
    
    

    rbac.yaml

    [root@k8s-master1 nfs-storage-class]# cat rbac.yaml 
    kind: ServiceAccount
    apiVersion: v1
    metadata:
      name: nfs-client-provisioner
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
    
    

    class.yaml

    [root@k8s-master1 nfs-storage-class]# cat class.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's env PROVISIONER_NAME
    parameters:
      archiveOnDelete: "true"
    
    

    deployment.yaml

    [root@k8s-master1 nfs-storage-class]# cat deployment.yaml 
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
    ---
    kind: Deployment
    apiVersion: apps/v1 
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 192.168.25.227 
                - name: NFS_PATH
                  value: /ifs/kubernetes
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.25.227 
                path: /ifs/kubernetes
                
    # When deploying, don't forget to change the server address to the new NFS address.
    
    # cd nfs-client
    # vi deployment.yaml # change the NFS address and shared directory to your own
    # kubectl apply -f .
    # kubectl get pods
    NAME                                     READY   STATUS    RESTARTS   AGE
    nfs-client-provisioner-df88f57df-bv8h7   1/1     Running   0          49m
    
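    Before moving on, it's worth verifying dynamic provisioning end to end with a throwaway PVC. A minimal sketch (the test-claim name is made up for this check):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: managed-nfs-storage
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 1Mi
    EOF
    kubectl get pvc test-claim     # STATUS should reach Bound
    kubectl delete pvc test-claim  # clean up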

    Deploy the MySQL cluster (import the microservice databases)

    # yum install mariadb-server -y
    # systemctl start mariadb.service
    # mysqladmin -uroot password '123456'
    

    Or create it with Docker:

    docker run -d --name db -p 3306:3306 -v /opt/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7 --character-set-server=utf8
    

    Finally, import the microservice databases.

    [root@cephnode03 db]# pwd 
    /root/simple-microservice/db
    [root@cephnode03 db]# ls 
    order.sql  product.sql  stock.sql
    [root@cephnode03 db]# mysql -uroot -p123456 <order.sql 
    [root@cephnode03 db]# mysql -uroot -p123456 <product.sql 
    [root@cephnode03 db]# mysql -uroot -p123456 <stock.sql 
    
    # After the import, grant remote database access for the cluster subnet
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.25.%' IDENTIFIED BY '123456';
    
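    A quick connectivity check from another host (for example the K8S master) confirms both the network path and the grant, assuming the mysql client is installed there:

    # should list the freshly imported microservice databases
    mysql -h192.168.25.228 -uroot -p123456 -e 'show databases;'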

    Deploy EFK log collection (appendix)


    Deploy the Prometheus monitoring system (appendix)


    Deploy Jenkins in Kubernetes

    Reference: https://github.com/jenkinsci/kubernetes-plugin/tree/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes

    Here we deploy Jenkins directly in Kubernetes. Storage has to be prepared beforehand; we already set up NFS above, but other storage backends such as Ceph work as well. Let's get started.

    Summary of the Jenkins YAML files

    [root@k8s-master1 jenkins]# tree 
    .
    ├── deployment.yml
    ├── ingress.yml
    ├── rbac.yml
    ├── service-account.yml
    └── service.yml
    
    0 directories, 5 files
    
    

    rbac.yml

    [root@k8s-master1 jenkins]# cat rbac.yml 
    ---
    # Create a ServiceAccount named jenkins
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: jenkins
    
    ---
    # Create a Role named jenkins that grants management of Pod resources in the core API group
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: jenkins
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["create","delete","get","list","patch","update","watch"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create","delete","get","list","patch","update","watch"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get","list","watch"]
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get"]
    
    ---
    # Bind the jenkins Role to the jenkins ServiceAccount
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: jenkins
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: jenkins
    subjects:
    - kind: ServiceAccount
      name: jenkins
    
    

    service-account.yml

    [root@k8s-master1 jenkins]# cat service-account.yml 
    # In GKE need to get RBAC permissions first with
    # kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]
    
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: jenkins
    
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: jenkins
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["create","delete","get","list","patch","update","watch"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create","delete","get","list","patch","update","watch"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get","list","watch"]
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get"]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: jenkins
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: jenkins
    subjects:
    - kind: ServiceAccount
      name: jenkins
    
    

    ingress.yml

    [root@k8s-master1 jenkins]# cat ingress.yml 
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: jenkins
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/proxy-body-size: 100m
    spec:
      rules:
      - host: jenkins.test.com
        http:
          paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 80
    
    

    service.yml

    [root@k8s-master1 jenkins]# cat service.yml 
    apiVersion: v1
    kind: Service
    metadata:
      name: jenkins
    spec:
      selector:
        name: jenkins
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 8080
          protocol: TCP
          nodePort: 30006
        - name: agent
          port: 50000
          protocol: TCP
    
    

    deployment.yml

    [root@k8s-master1 jenkins]# cat deployment.yml 
    apiVersion: apps/v1
    kind: Deployment 
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: jenkins 
      template:
        metadata:
          name: jenkins
          labels:
            name: jenkins
        spec:
          terminationGracePeriodSeconds: 10
          serviceAccountName: jenkins
          containers:
            - name: jenkins
              image: jenkins/jenkins:lts 
              imagePullPolicy: Always
              ports:
                - containerPort: 8080
                - containerPort: 50000
              resources:
                limits:
                  cpu: 1
                  memory: 1Gi
                requests:
                  cpu: 0.5
                  memory: 500Mi
              env:
                - name: LIMITS_MEMORY
                  valueFrom:
                    resourceFieldRef:
                      resource: limits.memory
                      divisor: 1Mi
                - name: JAVA_OPTS
                  value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
              volumeMounts:
                - name: jenkins-home
                  mountPath: /var/jenkins_home
              livenessProbe:
                httpGet:
                  path: /login
                  port: 8080
                initialDelaySeconds: 60
                timeoutSeconds: 5
                failureThreshold: 12
              readinessProbe:
                httpGet:
                  path: /login
                  port: 8080
                initialDelaySeconds: 60
                timeoutSeconds: 5
                failureThreshold: 12
          securityContext:
            fsGroup: 1000
          volumes:
            - name: jenkins-home
              persistentVolumeClaim:
                claimName: jenkins-home
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: jenkins-home
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
    
    

    Login URL: simply browse to the domain configured in the Ingress: http://jenkins.test.com
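
    If there is no DNS record for jenkins.test.com, map it in /etc/hosts first; the initial admin password can then be read straight out of the pod. A sketch, assuming the default jenkins/jenkins:lts home layout (replace the IP with a node running the Ingress Controller):

    echo "192.168.25.225 jenkins.test.com" >> /etc/hosts
    kubectl exec -it $(kubectl get pod -l name=jenkins -o jsonpath='{.items[0].metadata.name}') \
      -- cat /var/jenkins_home/secrets/initialAdminPassword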

    Change the plugin update site:

    The default plugin source sits on servers abroad and most networks cannot download from it reliably, so switch to a domestic mirror:

    cd jenkins_home/updates
    sed -i 's#http://updates.jenkins-ci.org/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g' default.json && \
    sed -i 's#http://www.google.com#https://www.baidu.com#g' default.json
    

    Jenkins Pipeline and parameterized builds

    Jenkins parameterized build flow diagram

    Jenkins Pipeline is a suite of plugins that supports implementing continuous integration and delivery pipelines in Jenkins.

    • Pipeline models delivery pipelines, from simple to complex, through a dedicated syntax;
      1. Declarative: follows the same syntax as Groovy. pipeline { }
      2. Scripted: supports most Groovy features; a very expressive and flexible tool. node { }
    • A Jenkins Pipeline definition is written into a text file called a Jenkinsfile.

    Reference: https://jenkins.io/doc/book/pipeline/syntax/

    In this environment we need a pipeline script, so let's first create a simple Jenkins pipeline script and test it.

    Install the Pipeline plugin: Jenkins home page ------> Manage Jenkins ------> Plugin Manager ------> Available ------> filter for pipeline, then install the Pipeline plugin; it is ready to use right away.

    Enter the following script into the pipeline to test it:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    echo 'Building'
                }
            }
            stage('Test') {
                steps {
                    echo 'Testing'
                }
            }
            stage('Deploy') {
                steps {
                    echo 'Deploying'
                }
            }
        }
    }
    

    The test result is as follows:

    The log output:

    Console output
    Started by user admin
    Running in Durability level: MAX_SURVIVABILITY
    [Pipeline] Start of Pipeline
    [Pipeline] node
    Running on Jenkins in /var/jenkins_home/workspace/pipeline-test
    [Pipeline] {
    [Pipeline] stage
    [Pipeline] { (Build)
    [Pipeline] echo
    Building
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] stage
    [Pipeline] { (Test)
    [Pipeline] echo
    Testing
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] stage
    [Pipeline] { (Deploy)
    [Pipeline] echo
    Deploying
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    Finished: SUCCESS
    

    SUCCESS in the output means the test passed.

    Jenkins dynamically creating agents in K8S

    We have now verified a pipeline script, but the Jenkins master has limited resources: running jobs in bulk could bring it down. So we adopt the Jenkins-slave pattern and give the master some helpers: the master only schedules jobs, while the actual builds and deployments run on the slaves.

    Traditional Jenkins Master/Slave architecture

    Jenkins Master/Slave architecture in K8S

    Add the Kubernetes plugin

    Kubernetes plugin: runs dynamic Jenkins agents in a Kubernetes cluster.

    Plugin documentation: https://github.com/jenkinsci/kubernetes-plugin

    Add a Kubernetes cloud

    Next we connect Jenkins with Kubernetes so that Jenkins can reach the cluster and operate in it automatically. Add a Kubernetes cloud as follows:

    Jenkins home page ------> Manage Jenkins ------> Configure System ------> Cloud ------> Add a new cloud ------> Kubernetes

    Configure the Kubernetes cloud. Since our Jenkins runs as a pod inside Kubernetes, it can reach the API server through the cluster's internal service DNS, so entering the Kubernetes DNS address (https://kubernetes.default) is enough. After entering it, don't forget to click the connection test.

    For the Jenkins URL we can likewise use the internal DNS name (http://jenkins.default). With that, the Kubernetes cloud is added.
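
    Since both addresses rely on in-cluster DNS, a quick resolution test from a throwaway pod rules out CoreDNS problems before blaming the plugin. A minimal check:

    # both kubernetes.default and jenkins.default should resolve to ClusterIPs
    kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup jenkins.default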

    Build a custom Jenkins-Slave image and push it to the registry

    Files required:

    [root@k8s-master1 jenkins-slave]# tree
    .
    ├── Dockerfile        # builds the Jenkins-slave image
    ├── helm              # helm binary: lets the Jenkins-slave pod run helm operations and install charts
    ├── jenkins-slave     # startup script required by the Jenkins slave
    ├── kubectl           # kubectl binary: lets the Jenkins-slave pod create pods and query their status
    ├── settings.xml      # Maven settings required by the Jenkins slave
    └── slave.jar         # the Jenkins slave agent jar

    0 directories, 6 files
    
    

    Dockerfile for the Jenkins-slave image

    FROM centos:7
    LABEL maintainer passzhang
    RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
        yum clean all && \
        rm -rf /var/cache/yum/* && \
        mkdir -p /usr/share/jenkins
    COPY slave.jar /usr/share/jenkins/slave.jar
    COPY jenkins-slave /usr/bin/jenkins-slave
    COPY settings.xml /etc/maven/settings.xml
    RUN chmod +x /usr/bin/jenkins-slave
    COPY helm kubectl /usr/bin/
    ENTRYPOINT ["jenkins-slave"]
    

    Reference: https://github.com/jenkinsci/docker-jnlp-slave

    Reference: https://plugins.jenkins.io/kubernetes

    Push the Jenkins-slave image to the Harbor registry

    [root@k8s-master1 jenkins-slave]#
    docker build -t jenkins-slave:jdk-1.8 .

    docker tag jenkins-slave:jdk-1.8 192.168.25.228:8088/library/jenkins-slave:jdk-1.8

    docker login 192.168.25.228:8088    # log in to the private registry
    docker push 192.168.25.228:8088/library/jenkins-slave:jdk-1.8    # push the image to the private registry
    
    

    Once this is configured, run a test pipeline to check whether the Jenkins-slave can be invoked and works properly.

    Test pipeline script:

    pipeline {
        agent {   
        kubernetes {
          label "jenkins-slave"
          yaml """
    apiVersion: v1
    kind: Pod
    metadata:
      name: jenkins-slave
    spec:
      containers:
      - name: jnlp
        image: 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
    """
        }
    }
        stages {
            stage('Build') {
                steps {
                    echo 'Building'
                }
            }
            stage('Test') {
                steps {
                    echo 'Testing'
                }
            }
            stage('Deploy') {
                steps {
                    echo 'Deploying'
                }
            }
        }
    }
    
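    While the test job runs, you can watch the ephemeral agent pod being created and torn down in another terminal. A minimal check (the pod name comes from the podTemplate metadata above):

    kubectl get pods -w | grep jenkins-slave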

    Deployment screenshot:

    Integrate Helm into the Pipeline to release the microservice project

    Deployment steps:

    Pull code ——> Compile ——> Unit test ——> Build image ——> Helm deploy to K8S ——> Test

    Create a new Jenkins job named k8s-deploy-spring-cloud

    Add the pipeline script:

    #!/usr/bin/env groovy
    // Required plugins: Git Parameter / Git / Pipeline / Config File Provider / Kubernetes / Extended Choice Parameter
    // Common
    def registry = "192.168.25.228:8088"
    // Project
    def project = "ms"
    def git_url = "http://192.168.25.227:9999/root/simple-microservice.git"
    def gateway_domain_name = "gateway.test.com"
    def portal_domain_name = "portal.test.com"
    // Credentials
    def image_pull_secret = "registry-pull-secret"
    def harbor_registry_auth = "9d5822e8-b1a1-473d-a372-a59b20f9b721"
    def git_auth = "2abc54af-dd98-4fa7-8ac0-8b5711a54c4a"
    // ConfigFileProvider ID
    def k8s_auth = "f1a38eba-4864-43df-87f7-1e8a523baa35"
    
    pipeline {
      agent {
        kubernetes {
            label "jenkins-slave"
            yaml """
    apiVersion: v1
    kind: Pod
    metadata:
      name: jenkins-slave
    spec:
      containers:
      - name: jnlp
        image: "${registry}/library/jenkins-slave:jdk-1.8"
        imagePullPolicy: Always
        volumeMounts:
          - name: docker-cmd
            mountPath: /usr/bin/docker
          - name: docker-sock
            mountPath: /var/run/docker.sock
          - name: maven-cache
            mountPath: /root/.m2
      volumes:
        - name: docker-cmd
          hostPath:
            path: /usr/bin/docker
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
        - name: maven-cache
          hostPath:
            path: /tmp/m2
    """
            }
          
          }
        parameters {
            gitParameter branch: '', branchFilter: '.*', defaultValue: '', description: 'Select the branch to release', name: 'Branch', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH'
            extendedChoice defaultValue: 'none', description: 'Select the microservices to release',
              multiSelectDelimiter: ',', name: 'Service', type: 'PT_CHECKBOX',
              value: 'gateway-service:9999,portal-service:8080,product-service:8010,order-service:8020,stock-service:8030'
            choice (choices: ['ms', 'demo'], description: 'Deployment template', name: 'Template')
            choice (choices: ['1', '3', '5', '7', '9'], description: 'Replica count', name: 'ReplicaCount')
            choice (choices: ['ms'], description: 'Namespace', name: 'Namespace')
        }
        stages {
            stage('Pull Code'){
                steps {
                    checkout([$class: 'GitSCM', 
                    branches: [[name: "${params.Branch}"]], 
                    doGenerateSubmoduleConfigurations: false, 
                    extensions: [], submoduleCfg: [], 
                    userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]
                    ])
                }
            }
            stage('Compile') {
                // Compile the selected services
                steps {
                    sh """
                      mvn clean package -Dmaven.test.skip=true
                    """
                }
            }
            stage('Build Image') {
              steps {
                  withCredentials([usernamePassword(credentialsId: "${harbor_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                    sh """
                     docker login -u ${username} -p '${password}' ${registry}
                     for service in $(echo ${Service} |sed 's/,/ /g'); do
                        service_name=${service%:*}
                        image_name=${registry}/${project}/${service_name}:${BUILD_NUMBER}
                        cd ${service_name}
                        if ls |grep biz &>/dev/null; then
                            cd ${service_name}-biz
                        fi
                        docker build -t ${image_name} .
                        docker push ${image_name}
                        cd ${WORKSPACE}
                      done
                    """
                    configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]){
                        sh """
                        # Create the image pull secret (ignore the error if it already exists)
                        kubectl create secret docker-registry ${image_pull_secret} --docker-username=${username} --docker-password=${password} --docker-server=${registry} -n ${Namespace} --kubeconfig admin.kubeconfig || true
                        # Add the private chart repo
                        helm repo add --username ${username} --password ${password} myrepo http://${registry}/chartrepo/${project}
                        """
                    }
                  }
              }
            }
            stage('Helm Deploy to K8S') {
              steps {
                  sh """
                  common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"
                  
                  for service in  $(echo ${Service} |sed 's/,/ /g'); do
                    service_name=${service%:*}
                    service_port=${service#*:}
                    image=${registry}/${project}/${service_name}
                    tag=${BUILD_NUMBER}
                    helm_args="${service_name} --set image.repository=${image} --set image.tag=${tag} --set replicaCount=${replicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set service.targetPort=${service_port} myrepo/${Template}"
    
                    # 判断是否为新部署
                    if helm history ${service_name} ${common_args} &>/dev/null;then
                      action=upgrade
                    else
                      action=install
                    fi
    
                    # 针对服务启用ingress
                    if [ ${service_name} == "gateway-service" ]; then
                      helm ${action} ${helm_args} 
                      --set ingress.enabled=true 
                      --set ingress.host=${gateway_domain_name} 
                       ${common_args}
                    elif [ ${service_name} == "portal-service" ]; then
                      helm ${action} ${helm_args} 
                      --set ingress.enabled=true 
                      --set ingress.host=${portal_domain_name} 
                       ${common_args}
                    else
                      helm ${action} ${helm_args} ${common_args}
                    fi
                  done
                  # 查看Pod状态
                  sleep 10
                  kubectl get pods ${common_args}
                  """
              }
            }
        }
    }
    

    The execution result is as follows:

    Click Build; the first few builds may fail. Build once more so that all the parameters are printed, and the job will then run successfully.

    Release gateway-service and check the pod log output:

    + kubectl get pods -n ms --kubeconfig admin.kubeconfig
    NAME                                  READY   STATUS    RESTARTS   AGE
    eureka-0                              1/1     Running   0          3h11m
    eureka-1                              1/1     Running   0          3h10m
    eureka-2                              1/1     Running   0          3h9m
    ms-gateway-service-66d695c486-9x9mc   0/1     Running   0          10s
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] }
    [Pipeline] // podTemplate
    [Pipeline] End of Pipeline
    Finished: SUCCESS
    
    # After a successful run, the pod information is printed
    

    Release the remaining services and check the result:

    + kubectl get pods -n ms --kubeconfig admin.kubeconfig
    NAME                                  READY   STATUS    RESTARTS   AGE
    eureka-0                              1/1     Running   0          3h14m
    eureka-1                              1/1     Running   0          3h13m
    eureka-2                              1/1     Running   0          3h12m
    ms-gateway-service-66d695c486-9x9mc   1/1     Running   0          3m1s
    ms-order-service-7465c47d79-lbxgd     0/1     Running   0          10s
    ms-portal-service-7fd6c57955-jkgkk    0/1     Running   0          11s
    ms-product-service-68dbf5b57-jwpv9    0/1     Running   0          10s
    ms-stock-service-b8b9895d6-cb72b      0/1     Running   0          10s
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] }
    [Pipeline] // podTemplate
    [Pipeline] End of Pipeline
    Finished: SUCCESS
    

    Check the Eureka dashboard:

    All the service modules have registered with Eureka.

    Visit the front-end page:

    Products show up in the query, which means the services are connected to the database and the business works end to end. Mission accomplished!

    Summary: plugins required by this environment

    • Jenkins plugins used
      • Git & gitParameter
      • Kubernetes
      • Pipeline
      • Kubernetes Continuous Deploy
      • Config File Provider
      • Extended Choice Parameter
    • Characteristics of this CI/CD environment
      • Elastic scaling of slaves
      • Image-isolated build environments
      • Pipeline-driven releases, easy to maintain
    • Jenkins parameterized builds can help you handle CI/CD in more complex environments