  • Kubernetes (3): Delivering Dubbo Services to the K8S Cluster

    I. Introduction to Dubbo

    1. What is Dubbo?

    • Dubbo is the core framework of Alibaba's SOA service governance solution, supporting 3 billion+ calls a day for 2,000+ services, and is widely used across Alibaba Group's member sites.
    • Dubbo is a distributed service framework focused on providing a high-performance, transparent RPC remote invocation solution together with SOA service governance.
    • Simply put, Dubbo is a service framework. If you have no distributed requirements you do not need it; a distributed service framework like Dubbo only makes sense in a distributed setting. At its core it is a distributed framework for remote service invocation.

    2. What can Dubbo do?

    • Transparent remote method invocation: call remote methods as if they were local, with only simple configuration and no API intrusion.
    • Soft load balancing and fault tolerance: can replace hardware load balancers such as F5 on the internal network, cutting cost and removing single points of failure.
    • Automatic service registration and discovery: no more hardcoded provider addresses; the registry looks up provider IPs by interface name, and providers can be added or removed smoothly.

    3. How Dubbo works

    • In short, Dubbo is a Java-based RPC framework. It involves four roles: service provider, service consumer, registry, and monitor center.
    • Its work falls into two phases: deployment and runtime.
    • In the original diagram, the deployment phase is drawn in blue (service registration and subscription) and the runtime phase in red (one complete RPC call).
    • During deployment, the provider exposes its service on a designated port at startup and reports its address to the registry.
    • The consumer subscribes at startup to the services it is interested in.
    • At runtime, the registry pushes the address list to the consumer, which picks one address and issues the call to that peer.
    • Throughout this process, the runtime state of consumers and providers is reported to the monitor center.

    II. Hands-on: delivering a Dubbo microservice stack to the Kubernetes cluster

    1. Lab topology

    • In the original diagram, the first tier runs outside k8s, the second tier runs inside k8s, and the third tier runs on 7-200.

    2. Infrastructure

    Hostname               Role                              IP
    kjdow7-11.host.com     k8s proxy node 1, zk1             10.4.7.11
    kjdow7-12.host.com     k8s proxy node 2, zk2             10.4.7.12
    kjdow7-21.host.com     k8s compute node 1, zk3           10.4.7.21
    kjdow7-22.host.com     k8s compute node 2, jenkins       10.4.7.22
    kjdow7-200.host.com    k8s ops node (docker registry)    10.4.7.200

    3. Deploying ZooKeeper

    3.1 Install the JDK

    Deploy on the three hosts kjdow7-11, kjdow7-12, and kjdow7-21:

     ~]# mkdir /usr/java
     ~]# tar xf jdk-8u221-linux-x64.tar.gz -C /usr/java
     ~]# ln -s /usr/java/jdk1.8.0_221 /usr/java/jdk
     ~]# vim /etc/profile
    export JAVA_HOME=/usr/java/jdk
    export PATH=$JAVA_HOME/bin:$PATH
    export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
     ~]# source /etc/profile
     ~]# java -version
    java version "1.8.0_221"
    Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
    Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
    

    3.2 Install ZooKeeper (on the three zk hosts)

    Deploy on the three hosts kjdow7-11, kjdow7-12, and kjdow7-21:

    ZooKeeper download address

    # download, unpack, and configure
     ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
     ~]# tar xf zookeeper-3.4.14.tar.gz -C /opt
     ~]# ln -s /opt/zookeeper-3.4.14 /opt/zookeeper
     ~]# mkdir -p /data/zookeeper/data /data/zookeeper/logs
     ~]# vi /opt/zookeeper/conf/zoo.cfg
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/data/zookeeper/data
    dataLogDir=/data/zookeeper/logs
    clientPort=2181
    server.1=zk1.phc-dow.com:2888:3888
    server.2=zk2.phc-dow.com:2888:3888
    server.3=zk3.phc-dow.com:2888:3888
    

    Note: the zk configuration is identical on all nodes.

    On host kjdow7-11:

    [root@kjdow7-11 ~]# cat /data/zookeeper/data/myid
    1
    

    On host kjdow7-12:

    [root@kjdow7-12 ~]# cat /data/zookeeper/data/myid
    2
    

    On host kjdow7-21:

    [root@kjdow7-21 ~]# cat /data/zookeeper/data/myid
    3
    
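    The myid files read back above must be created before the first start; a minimal sketch (the path follows the dataDir set in zoo.cfg):

    # on kjdow7-11
    echo 1 > /data/zookeeper/data/myid
    # on kjdow7-12
    echo 2 > /data/zookeeper/data/myid
    # on kjdow7-21
    echo 3 > /data/zookeeper/data/myid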

    3.3 Configure DNS records

    On host kjdow7-11:

    [root@kjdow7-11 ~]# cat /var/named/phc-dow.com.zone
    
                                    2020010206   ; serial   # increment the serial
    
    zk1	60 IN      A         10.4.7.11                      # append these three lines
    zk2	60 IN      A         10.4.7.12
    zk3	60 IN      A         10.4.7.21
    [root@kjdow7-11 ~]# systemctl restart named
    [root@kjdow7-11 ~]# dig -t A zk1.phc-dow.com @10.4.7.11 +short
    10.4.7.11
    [root@kjdow7-11 ~]# dig -t A zk2.phc-dow.com @10.4.7.11 +short
    10.4.7.12
    [root@kjdow7-11 ~]# dig -t A zk3.phc-dow.com @10.4.7.11 +short
    10.4.7.21
    

    3.4 Start zk on each node in turn

    [root@kjdow7-11 ~]# /opt/zookeeper/bin/zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    
    [root@kjdow7-11 ~]# netstat -lntup | grep 19333
    tcp6       0      0 10.4.7.11:3888          :::*                    LISTEN      19333/java          
    tcp6       0      0 :::36989                :::*                    LISTEN      19333/java          
    tcp6       0      0 :::2181                 :::*                    LISTEN      19333/java  
    
    [root@kjdow7-21 ~]# netstat -lntup | grep 3675
    tcp6       0      0 10.4.7.21:2888          :::*                    LISTEN      3675/java           
    tcp6       0      0 10.4.7.21:3888          :::*                    LISTEN      3675/java           
    tcp6       0      0 :::2181                 :::*                    LISTEN      3675/java           
    tcp6       0      0 :::39301                :::*                    LISTEN      3675/java 
    
    [root@kjdow7-12 ~]# netstat -lntup | grep 11949
    tcp6       0      0 10.4.7.12:3888          :::*                    LISTEN      11949/java          
    tcp6       0      0 :::46303                :::*                    LISTEN      11949/java          
    tcp6       0      0 :::2181                 :::*                    LISTEN      11949/java  
    
    [root@kjdow7-11 ~]# /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    [root@kjdow7-12 ~]# /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    [root@kjdow7-21 ~]# /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: leader
    
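    Beyond checking roles with zkServer.sh status, a quick smoke test of the ensemble can be run with the bundled CLI (a sketch; any member will do):

    # list the root znode through zk1; a healthy quorum answers immediately
    /opt/zookeeper/bin/zkCli.sh -server zk1.phc-dow.com:2181 ls /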

    4. Preparation for installing and deploying Jenkins

    Official Jenkins image

    4.1 Prepare the image

    [root@kjdow7-200 ~]# docker pull jenkins/jenkins:2.190.3
    [root@kjdow7-200 ~]# docker images | grep jenkins
    jenkins/jenkins                                   2.190.3                    22b8b9a84dbe        2 months ago        568MB
    [root@kjdow7-200 ~]# docker tag 22b8b9a84dbe harbor.phc-dow.com/public/jenkins:v2.190.3
    [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/jenkins:v2.190.3
    

    4.2 Customize the Dockerfile

    The image pulled from the official registry needs some customization before it can run in the k8s cluster.

    Edit the custom Dockerfile on the ops host kjdow7-200.host.com:

    mkdir -p /data/dockerfile/jenkins
    cd /data/dockerfile/jenkins
    vim Dockerfile
    FROM harbor.phc-dow.com/public/jenkins:v2.190.3
    USER root
    RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
        echo 'Asia/Shanghai' >/etc/timezone
    ADD id_rsa /root/.ssh/id_rsa
    ADD config.json /root/.docker/config.json
    ADD get-docker.sh /get-docker.sh
    RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
        /get-docker.sh
    
    

    This Dockerfile does the following:

    • Set the container user to root
    • Set the container's time zone
    • Add the ssh private key (used when pulling code over git; the matching public key should be configured in GitLab)
    • Add the config file for logging in to the self-hosted Harbor registry
    • Relax the ssh client configuration (skip host key checking)
    • Install a Docker client
    • If the build fails for network reasons, change the final line to `/get-docker.sh --mirror Aliyun`
    1) Generate an ssh key pair:
    [root@kjdow7-200 jenkins]# ssh-keygen -t rsa -b 2048 -C "897307140@qq.com" -N "" -f /root/.ssh/id_rsa
    Generating public/private rsa key pair.
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:bIajghsF/BqJouTeNvZXvQWvolAKWvhVSuZ3uVWoVXU 897307140@qq.com
    The key's randomart image is:
    +---[RSA 2048]----+
    |             ...E|
    |.           o   .|
    |..   o .   o .   |
    |..+ + oo  +..    |
    |o=.+ +ooS+..o    |
    |=o* o.++..o. o   |
    |++...o  ..  +    |
    |.o.=  .. . o     |
    |..o.o.... .      |
    +----[SHA256]-----+
    [root@kjdow7-200 jenkins]# cp /root/.ssh/id_rsa .
    
    
    2) Prepare the other files
    [root@kjdow7-200 jenkins]# cp /root/.docker/config.json .
    [root@kjdow7-200 jenkins]# curl -fsSL get.docker.com -o get-docker.sh
    [root@kjdow7-200 jenkins]# chmod +x get-docker.sh 
    [root@kjdow7-200 jenkins]# ll
    total 28
    -rw------- 1 root root   160 Jan 28 23:41 config.json
    -rw-r--r-- 1 root root   355 Jan 28 23:38 Dockerfile
    -rwxr-xr-x 1 root root 13216 Jan 28 23:42 get-docker.sh
    -rw------- 1 root root  1675 Jan 28 23:38 id_rsa
    
    
    3) Log in to the Harbor web UI and create the infra project

    Create a project named infra with access level Private.

    4) Build the image
    [root@kjdow7-200 jenkins]# docker build -t harbor.phc-dow.com/infra/jenkins:v2.190.3 .
    Sending build context to Docker daemon  19.46kB
    Step 1/7 : FROM harbor.phc-dow.com/public/jenkins:v2.190.3
     ---> 22b8b9a84dbe
    Step 2/7 : USER root
     ---> Running in 7604d600a620
    Removing intermediate container 7604d600a620
     ---> c8d326bfe8b7
    Step 3/7 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&    echo 'Asia/Shanghai' >/etc/timezone
     ---> Running in 1b72c3d69eea
    Removing intermediate container 1b72c3d69eea
     ---> f839ab1701d0
    Step 4/7 : ADD id_rsa /root/.ssh/id_rsa
     ---> 840bac71419f
    Step 5/7 : ADD config.json /root/.docker/config.json
     ---> 2dcd61ef1c90
    Step 6/7 : ADD get-docker.sh /get-docker.sh
     ---> 9430aa0cb5ad
    Step 7/7 : RUN echo "    StrictHostKeyChecking no" >> /etc/ssh/sshd_config &&    /get-docker.sh
     ---> Running in ff19d96b70da
    # Executing docker install script, commit: f45d7c11389849ff46a6b4d94e0dd1ffebca32c1
    + sh -c apt-get update -qq >/dev/null
    + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
    debconf: delaying package configuration, since apt-utils is not installed
    + sh -c curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/null
    Warning: apt-key output should not be parsed (stdout is not a terminal)
    + sh -c echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" > /etc/apt/sources.list.d/docker.list
    + sh -c apt-get update -qq >/dev/null
    + [ -n  ]
    + sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
    debconf: delaying package configuration, since apt-utils is not installed
    If you would like to use Docker as a non-root user, you should now consider
    adding your user to the "docker" group with something like:
    
      sudo usermod -aG docker your-user
    
    Remember that you will have to log out and back in for this to take effect!
    
    WARNING: Adding a user to the "docker" group will grant the ability to run
             containers which can be used to obtain root privileges on the
             docker host.
             Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
             for more information.
    Removing intermediate container ff19d96b70da
     ---> 637a6cbc288d
    Successfully built 637a6cbc288d
    Successfully tagged harbor.phc-dow.com/infra/jenkins:v2.190.3
    
    
    5) Push the image to the registry
    [root@kjdow7-200 jenkins]# docker push harbor.phc-dow.com/infra/jenkins:v2.190.3
    
    

    4.3 Prepare shared storage

    Jenkins keeps its configuration and data under /var/jenkins_home, which must be mounted from shared storage so that whichever compute node the pod is scheduled on, and even after a pod is recreated, a newly started pod still sees the previous configuration and nothing is lost.

    1) Run on all hosts
    yum install nfs-utils -y
    
    
    2) Configure the NFS service
    [root@kjdow7-200 ~]# vim /etc/exports
    /data/nfs-volume 10.4.7.0/24(rw,no_root_squash)
    ### start the NFS service
    [root@kjdow7-200 ~]# mkdir -p /data/nfs-volume
    [root@kjdow7-200 ~]# systemctl start nfs
    [root@kjdow7-200 ~]# systemctl enable nfs
    
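    Before relying on the share, it can be verified from any compute node (showmount ships with nfs-utils):

    # list the exports published by the ops node
    showmount -e 10.4.7.200
    # expect: /data/nfs-volume 10.4.7.0/24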
    

    4.4 Prepare the resource manifests

    On the ops host kjdow7-200.host.com:

    [root@kjdow7-200 ~]# mkdir /data/k8s-yaml/jenkins && mkdir -p /data/nfs-volume/jenkins_home && cd /data/k8s-yaml/jenkins
    
    
    [root@kjdow7-200 ~]# vi dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: jenkins
      namespace: infra
      labels: 
        name: jenkins
    spec:
      replicas: 1
      selector:
        matchLabels: 
          name: jenkins
      template:
        metadata:
          labels: 
            app: jenkins 
            name: jenkins
        spec:
          volumes:
          - name: data
            nfs: 
              server: kjdow7-200
              path: /data/nfs-volume/jenkins_home
          - name: docker
            hostPath: 
              path: /run/docker.sock
              type: ''
          containers:
          - name: jenkins
            image: harbor.phc-dow.com/infra/jenkins:v2.190.3
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080
              protocol: TCP
            env:
            - name: JAVA_OPTS
              value: -Xmx512m -Xms512m
            volumeMounts:
            - name: data
              mountPath: /var/jenkins_home
            - name: docker
              mountPath: /run/docker.sock
          imagePullSecrets:
          - name: harbor
          securityContext: 
            runAsUser: 0
      strategy:
        type: RollingUpdate
        rollingUpdate: 
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    
    

    Note: the name under imagePullSecrets is the one given when the secret was created.

    Mounting the host's /run/docker.sock into the pod lets the pod talk directly to the Docker daemon on its host.
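    Once the pod is running, this can be spot-checked (a sketch; the pod name is illustrative):

    # containers listed here come from the host's Docker daemon, through the mounted socket
    kubectl -n infra exec <jenkins-pod> -- docker ps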

    [root@kjdow7-200 ~]# vim service.yaml
    kind: Service
    apiVersion: v1
    metadata: 
      name: jenkins
      namespace: infra
    spec:
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
      selector:
        app: jenkins
    
    

    Note: targetPort is the container port; Jenkins serves its UI on port 8080 by default.

    port is the port exposed on the service's cluster IP, here 80.
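    A quick way to confirm the mapping once the service exists (a sketch):

    # the service should expose port 80 on its cluster IP, forwarding to 8080 in the pod
    kubectl -n infra get svc jenkins
    kubectl -n infra describe svc jenkins | grep -i targetport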

    [root@kjdow7-200 ~]# vim ingress.yaml
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata: 
      name: jenkins
      namespace: infra
    spec:
      rules:
      - host: jenkins.phc-dow.com
        http:
          paths:
          - path: /
            backend: 
              serviceName: jenkins
              servicePort: 80
    
    

    4.5 Create the required resources from a compute node

    [root@kjdow7-21 ~]# kubectl create ns infra
    namespace/infra created
    [root@kjdow7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.phc-dow.com --docker-username=admin --docker-password=Harbor_kjdow1! -n infra
    secret/harbor created
    ### creates a secret named harbor
    
    

    Note: create the infra namespace; all ops pods run in it.

    The secret supplies the username and password for pulling images from the private infra project; the dp.yaml above references it by name.
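    A hedged way to confirm the secret is present and of the expected type:

    # should print kubernetes.io/dockerconfigjson
    kubectl -n infra get secret harbor -o jsonpath='{.type}'; echo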

    4.6 Apply the resource manifests

    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/jenkins/dp.yaml
    deployment.extensions/jenkins created
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/jenkins/service.yaml
    service/jenkins created
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/jenkins/ingress.yaml
    ingress.extensions/jenkins created
    
    

    4.7 Open the web UI

    [root@kjdow7-200 ~]# cat /data/nfs-volume/jenkins_home/secrets/initialAdminPassword
    112f082a79ce4e389be1cf884cc652e8
    
    

    Visit jenkins.phc-dow.com and perform the initial setup; set username admin, password admin123.

    Complete the basic configuration in the UI.

    Install the Blue Ocean plugin for Jenkins.

    4.8 Verify the Jenkins setup

    • Verify the user is root
    • Verify the time is correct
    • Verify docker ps -a shows the same containers as on the host
    • Verify ssh does not prompt for yes/no
    • Verify the Harbor registry login works
    • Verify git connects successfully with the private key (see the sketch after this list)
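    A sketch of the checks above, run from inside the pod (the pod name is illustrative):

    # enter the pod first: kubectl -n infra exec -it <jenkins-pod> -- /bin/bash
    whoami          # expect root
    date            # expect CST (Asia/Shanghai)
    docker ps -a    # expect the same containers as on the host, via the mounted socket
    ssh -i /root/.ssh/id_rsa -T git@github.com               # expect no yes/no host-key prompt
    docker pull harbor.phc-dow.com/infra/jenkins:v2.190.3    # succeeds via the baked-in config.json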

    5. Install and deploy Maven

    Official Maven download address

    ### check the Java version inside the Jenkins pod
    [root@kjdow7-22 ~]# kubectl get pod -n infra -o wide
    NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
    jenkins-67d4b48b54-gd9g7   1/1     Running   0          33m   172.7.22.7   kjdow7-22.host.com   <none>           <none>
    [root@kjdow7-22 ~]# kubectl exec jenkins-67d4b48b54-gd9g7  -it  /bin/bash -n infra
    root@jenkins-67d4b48b54-gd9g7:/# java -version
    openjdk version "1.8.0_232"
    OpenJDK Runtime Environment (build 1.8.0_232-b09)
    OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)
    
    ### download the software
    [root@kjdow7-200 ~]# wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
    [root@kjdow7-200 ~]# tar xf apache-maven-3.6.1-bin.tar.gz -C /data/nfs-volume/jenkins_home/
    [root@kjdow7-200 ~]# cd /data/nfs-volume/jenkins_home/
    [root@kjdow7-200 jenkins_home]# mv apache-maven-3.6.1 maven-3.6.1-8u232
    [root@kjdow7-200 ~]# vi /data/nfs-volume/jenkins_home/maven-3.6.1-8u232/conf/settings.xml
      <mirrors>
        <mirror>
          <id>alimaven</id>
          <name>aliyun maven</name>
          <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
          <mirrorOf>central</mirrorOf>
        </mirror>
        <!-- mirror
         | Specifies a repository mirror site to use instead of a given repository. The repository that
         | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used
         | for inheritance and direct lookup purposes, and must be unique across the set of mirrors.
         |
         -->
      </mirrors>
    ### add this at the matching place in the file; the Jenkins pod picks up the change automatically (the directory sits on the NFS share)
    
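    To confirm the Jenkins pod really sees the new Maven over the NFS share (a sketch; the pod name is illustrative):

    # should print Apache Maven 3.6.1 and the pod's JDK details
    kubectl -n infra exec <jenkins-pod> -- /var/jenkins_home/maven-3.6.1-8u232/bin/mvn -v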
    

    6. Building the Dubbo microservice base image

    6.1 Customize the Dockerfile

    Note: we need a base image that ships a Java runtime.

    [root@kjdow7-200 ~]# docker pull docker.io/stanleyws/jre8:8u112
    [root@kjdow7-200 ~]# docker images | grep jre8
    stanleyws/jre8                                    8u112                      fa3a085d6ef1        2 years ago         363MB
    [root@kjdow7-200 ~]# docker tag fa3a085d6ef1 harbor.phc-dow.com/public/jre8:8u112
    [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/jre8:8u112
    [root@kjdow7-200 ~]# mkdir /data/dockerfile/jre8
    [root@kjdow7-200 ~]# cd /data/dockerfile/jre8
    [root@kjdow7-200 jre8]# vim Dockerfile
    FROM docker.io/stanleyws/jre8:8u112
    RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
        echo 'Asia/Shanghai' >/etc/timezone
    ADD config.yml /opt/prom/config.yml
    ADD jmx_javaagent-0.3.1.jar /opt/prom/
    WORKDIR /opt/project_dir
    ADD entrypoint.sh /entrypoint.sh
    CMD ["/entrypoint.sh"]
    
    

    Note: the third line adds the Prometheus monitoring configuration file,

    and the fourth line adds the jar that Prometheus uses as a javaagent to monitor the JVM.

    ### prepare the other required files
    [root@kjdow7-200 jre8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
    ————————————————————————————————————————————————————————————————————————
    [root@kjdow7-200 jre8]# vi config.yml
    ---
    rules:
      - pattern: '.*'
    ————————————————————————————————————————————————————————————————————————    
    [root@kjdow7-200 jre8]# vi entrypoint.sh
    #!/bin/sh
    M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
    C_OPTS=${C_OPTS}
    JAR_BALL=${JAR_BALL}
    exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL}
    [root@kjdow7-200 jre8]# chmod +x entrypoint.sh 
    
    [root@kjdow7-200 jre8]# ll
    total 372
    -rw-r--r-- 1 root root     29 Jan 29 23:11 config.yml
    -rw-r--r-- 1 root root    297 Jan 29 22:54 Dockerfile
    -rwxr-xr-x 1 root root    234 Jan 29 23:11 entrypoint.sh
    -rw-r--r-- 1 root root 367417 May 10  2018 jmx_javaagent-0.3.1.jar
    
    
    

    Notes on entrypoint.sh:

    C_OPTS=${C_OPTS} passes through whatever value the resource manifest assigns to that variable.

    ${M_PORT:-"12346"} means: if M_PORT is not set, fall back to the default 12346.

    The last line starts with exec because otherwise the container would die as soon as the shell finished; exec hands the shell's PID over to the command that follows, so as long as java lives, the pod stays alive.

    The shell builtin exec does not start a new shell; it replaces the current shell process with the command to be executed, cleans up the old process's environment, and no commands after exec are ever run.
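    A two-line illustration of both mechanisms (plain shell, runnable anywhere):

    # default expansion: prints 12346 when M_PORT is unset
    unset M_PORT; echo ${M_PORT:-"12346"}
    # exec: the replacement process keeps the shell's PID, so both lines print the same number
    sh -c 'echo "pid before exec: $$"; exec sh -c "echo pid after exec: \$\$"'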

    6.2 Create the project in the Harbor UI

    Create a project named base in Harbor to hold all business base images; set its access level to Public.

    6.3 Build the image

    [root@kjdow7-200 jre8]# docker build -t harbor.phc-dow.com/base/jre8:8u112 .
    [root@kjdow7-200 jre8]# docker push harbor.phc-dow.com/base/jre8:8u112
    
    

    7. Continuously building and delivering the Dubbo service provider with Jenkins

    7.1 Create a new project

    Create a pipeline project named dubbo-demo.

    7.2 Discard old builds: keep builds for 3 days, at most 30

    Enable parameterized builds.

    7.3 The ten parameters of the Jenkins pipeline

    • app_name --> project name

    • image_name --> image name

    • git_repo --> git address of the project

    • git_ver --> git revision or branch of the project

    • add_tag --> image tag, a date-time stamp (e.g. 20200130_1421)

    • mvn_dir --> directory in which to build the project

    • target_dir --> directory holding the jar/war artifacts once the build finishes

    • mvn_cmd --> command used to build the project

    • base_image --> the project's Docker base image

    • maven --> Maven version to use

    7.4 Pipeline code

    pipeline {
      agent any
        stages {
    	  stage('pull') {
    	    steps {
    		  sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} &&  git checkout ${params.git_ver}"
    		}
    	  }
    	  stage('build') {
    	    steps {
    		  sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
    		}
    	  }
    	  stage('package') {
    	    steps {
    		  sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
    		}
    	  }
    	  stage('image') {
    	    steps {
    		  writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.phc-dow.com/${params.base_image} 
    		  ADD ${params.target_dir}/project_dir /opt/project_dir"""
    		  sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.phc-dow.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.phc-dow.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
    		}
    	  }
    	}
    }
    
    

    7.5 Pre-build preparation

    Create a private project named app in the Harbor registry.

    7.6 Start the build

    Open the Jenkins page, start a build, and fill in the parameter values:

    Fill in / select in order:
    app_name:       dubbo-demo-service
    image_name:     app/dubbo-demo-service
    git_repo:       https://github.com/zizhufanqing/dubbo-demo-service.git
    git_ver:        master
    add_tag:        202001311655
    mvn_dir:        ./
    target_dir:     ./dubbo-server/target
    mvn_cmd:        mvn clean package -Dmaven.test.skip=true
    base_image:     base/jre8:8u112
    maven:          3.6.0-8u181
    Click Build and wait for the build to finish.
    
    

    Note: the public key has already been added on GitHub.

    • After the build completes, check the automatically pushed image in Harbor's app project.

    7.7 Prepare the resource manifests

    On kjdow7-200:

    [root@kjdow7-200 ~]# mkdir /data/k8s-yaml/dubbo-demo-service
    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-service/dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: dubbo-demo-service
      namespace: app
      labels: 
        name: dubbo-demo-service
    spec:
      replicas: 1
      selector:
        matchLabels: 
          name: dubbo-demo-service
      template:
        metadata:
          labels: 
            app: dubbo-demo-service
            name: dubbo-demo-service
        spec:
          containers:
          - name: dubbo-demo-service
            image: harbor.phc-dow.com/app/dubbo-demo-service:master_202001311655
            ports:
            - containerPort: 20880
              protocol: TCP
            env:
            - name: JAR_BALL
              value: dubbo-server.jar
            imagePullPolicy: IfNotPresent
          imagePullSecrets:
          - name: harbor
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          securityContext: 
            runAsUser: 0
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate: 
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    
    

    Note: JAR_BALL is assigned here, and the pod is created from the image built above. Because Harbor's app project is private, the namespace and secret must be created in k8s first.

    7.8 Preparation before applying the manifest

    [root@kjdow7-21 ~]# kubectl create ns app
    namespace/app created
    [root@kjdow7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.phc-dow.com --docker-username=admin --docker-password=Harbor_kjdow1! -n app
    secret/harbor created
    
    

    Note: the secret name must match the imagePullSecrets name in dp.yaml above; the name itself is arbitrary, but the reference must use the same one.

    7.9 Apply the resource manifest

    • Before applying
    [root@kjdow7-11 zookeeper]# ./bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    [root@kjdow7-11 zookeeper]# bin/zkCli.sh -server localhost:2181
    
    WATCHER::
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0] ls /
    [zookeeper]
    
    

    Note: at this point only the zookeeper znode exists.

    • Apply the resource manifest
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-service/dp.yaml
    deployment.extensions/dubbo-demo-service created
    
    
    • After applying
    [zk: localhost:2181(CONNECTED) 0] ls /
    [dubbo, zookeeper]
    [zk: localhost:2181(CONNECTED) 1] ls /dubbo
    [com.od.dubbotest.api.HelloService]
    
    

    Note: the service has registered itself automatically. The demo code hardcodes the registry address zk1.od.com while our domain is phc-dow.com, so either add an od.com zone to BIND (sketched below) or modify the source.
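    A minimal sketch of the BIND approach (the zone declaration and content mirror the phc-dow.com zone shown in 8.5 below; serial and TTL values are illustrative):

    # on kjdow7-11: declare the od.com zone, create it, then verify
    cat >> /etc/named.rfc1912.zones <<'EOF'
    zone "od.com" IN {
            type master;
            file "od.com.zone";
    };
    EOF
    cat > /var/named/od.com.zone <<'EOF'
    $ORIGIN od.com.
    $TTL 600
    @       IN SOA dns.od.com. dnsadmin.od.com. ( 2020010201 10800 900 604800 86400 )
                    NS   dns.od.com.
    dns             A    10.4.7.11
    zk1     60      A    10.4.7.11
    EOF
    systemctl restart named
    dig -t A zk1.od.com @10.4.7.11 +short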

    8. Delivering dubbo-monitor to the K8S cluster

    8.1 Download the source package

    dubbo-monitor download address

    [root@kjdow7-200 ~]# wget https://github.com/Jeromefromcn/dubbo-monitor/archive/master.zip
    [root@kjdow7-200 ~]# unzip master.zip
    [root@kjdow7-200 ~]# mv dubbo-monitor-master /opt/src/dubbo-monitor
    
    

    8.2 Modify the source

    [root@kjdow7-200 ~]# vim /opt/src/dubbo-monitor/dubbo-monitor-simple/conf/dubbo_origin.properties
    dubbo.application.name=kjdow-monitor
    dubbo.application.owner=kjdow
    dubbo.registry.address=zookeeper://zk1.phc-dow.com:2181?backup=zk2.phc-dow.com:2181,zk3.phc-dow.com:2181
    dubbo.protocol.port=20880
    dubbo.jetty.port=8080
    dubbo.jetty.directory=/dubbo-monitor-simple/monitor
    dubbo.charts.directory=/dubbo-monitor-simple/charts
    
    
    

    8.3 Prepare the files and build the image

    [root@kjdow7-200 ~]# mkdir /data/dockerfile/dubbo-monitor
    [root@kjdow7-200 ~]# cp -r /opt/src/dubbo-monitor/* /data/dockerfile/dubbo-monitor/
    [root@kjdow7-200 ~]# cd /data/dockerfile/dubbo-monitor/
    [root@kjdow7-200 dubbo-monitor]# ls
    Dockerfile  dubbo-monitor-simple  README.md
    [root@kjdow7-200 dubbo-monitor]# vim  ./dubbo-monitor-simple/bin/start.sh
    if [ -n "$BITS" ]; then
        JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -XX:PermSize=16m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
    else
        JAVA_MEM_OPTS=" -server -Xms128m -Xmx128m -XX:PermSize=16m -XX:SurvivorRatio=2 -XX:+UseParallelGC "
    fi
    
    echo -e "Starting the $SERVER_NAME ...\c"
    exec  java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1 
    
    

    Note: lines 59 and 61 of the script tune the JVM.

    Line 64: make the java launch line start with exec and drop the trailing &, so java runs in the foreground and takes over the shell's PID; delete everything below that line.

    [root@kjdow7-200 dubbo-monitor]# docker build -t harbor.phc-dow.com/infra/dubbo-monitor:latest .
    [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/infra/dubbo-monitor:latest
    
    

    8.4 Prepare the k8s resource manifests

    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-monitor/dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: dubbo-monitor
      namespace: infra
      labels: 
        name: dubbo-monitor
    spec:
      replicas: 1
      selector:
        matchLabels: 
          name: dubbo-monitor
      template:
        metadata:
          labels: 
            app: dubbo-monitor
            name: dubbo-monitor
        spec:
          containers:
          - name: dubbo-monitor
            image: harbor.phc-dow.com/infra/dubbo-monitor:latest
            ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 20880
              protocol: TCP
            imagePullPolicy: IfNotPresent
          imagePullSecrets:
          - name: harbor
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          securityContext: 
            runAsUser: 0
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate: 
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-monitor/svc.yaml
    kind: Service
    apiVersion: v1
    metadata: 
      name: dubbo-monitor
      namespace: infra
    spec:
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 8080
      selector: 
        app: dubbo-monitor
      clusterIP: None
      type: ClusterIP
      sessionAffinity: None
    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-monitor/ingress.yaml
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata: 
      name: dubbo-monitor
      namespace: infra
    spec:
      rules:
      - host: dubbo-monitor.phc-dow.com
        http:
          paths:
          - path: /
            backend: 
              serviceName: dubbo-monitor
              servicePort: 8080
    
    

    8.5 Preparation before applying the manifests: DNS resolution

    [root@kjdow7-11 ~]# vim /var/named/phc-dow.com.zone
    $ORIGIN  phc-dow.com.
    $TTL  600   ; 10 minutes
    @        IN SOA dns.phc-dow.com. dnsadmin.phc-dow.com. (
                                    2020010207   ; serial           # increment the serial
                                    10800        ; refresh (3 hours)
                                    900          ; retry  (15 minutes)
                                    604800       ; expire (1 week)
                                    86400        ; minimum (1 day)
                    )
                            NS   dns.phc-dow.com.
    $TTL  60 ; 1 minute
    dns                A         10.4.7.11
    harbor             A         10.4.7.200
    k8s-yaml           A         10.4.7.200
    traefik            A         10.4.7.10
    dashboard          A         10.4.7.10
    zk1     60 IN      A         10.4.7.11
    zk2     60 IN      A         10.4.7.12
    zk3     60 IN      A         10.4.7.21
    dubbo-monitor      A         10.4.7.10                          # add this line
    [root@kjdow7-11 ~]# systemctl restart named
    [root@kjdow7-11 ~]# dig -t A dubbo-monitor.phc-dow.com @10.4.7.11 +short
    10.4.7.10
    
    

    8.6 Apply the k8s resource manifests

    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-monitor/dp.yaml
    deployment.extensions/dubbo-monitor created
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-monitor/svc.yaml
    service/dubbo-monitor created
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-monitor/ingress.yaml
    ingress.extensions/dubbo-monitor created
    
    
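    Once the pod is up, the exec change from 8.3 can be verified: java should own the PID the startup shell had, typically PID 1 in the container (a sketch; the pod name is illustrative):

    # print the command line of PID 1; with exec it is the java process, not a shell
    kubectl -n infra exec <dubbo-monitor-pod> -- cat /proc/1/cmdline | tr '\0' ' '; echo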

    8.7 Open the page

    http://dubbo-monitor.phc-dow.com
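    A quick reachability check from any host that resolves the name (assuming the ingress VIP 10.4.7.10 configured in 8.5):

    # expect an HTTP 200 from dubbo-monitor behind the ingress
    curl -I http://dubbo-monitor.phc-dow.com/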

    9. Delivering the Dubbo consumer to K8S

    9.1 Build the Dubbo consumer image with Jenkins

    Fill in / select in order:
    app_name:       dubbo-demo-consumer
    image_name:     app/dubbo-demo-consumer
    git_repo:       git@github.com:zizhufanqing/dubbo-demo-web.git
    git_ver:        master
    add_tag:        202002011530
    mvn_dir:        ./
    target_dir:     ./dubbo-client/target
    mvn_cmd:        mvn clean package -Dmaven.test.skip=true
    base_image:     base/jre8:8u112
    maven:          3.6.0-8u181
    Click Build and wait for the build to finish.
    
    

    Note: after the build completes, check the automatically pushed image in Harbor's app project.

    9.2 Prepare the resource manifests

    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-consumer/dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: dubbo-demo-consumer
      namespace: app
      labels: 
        name: dubbo-demo-consumer
    spec:
      replicas: 1
      selector:
        matchLabels: 
          name: dubbo-demo-consumer
      template:
        metadata:
          labels: 
            app: dubbo-demo-consumer
            name: dubbo-demo-consumer
        spec:
          containers:
          - name: dubbo-demo-consumer
        image: harbor.phc-dow.com/app/dubbo-demo-consumer:master_202002011530
            ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 20880
              protocol: TCP
            env:
            - name: JAR_BALL
              value: dubbo-client.jar
            imagePullPolicy: IfNotPresent
          imagePullSecrets:
          - name: harbor
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          securityContext: 
            runAsUser: 0
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate: 
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-consumer/svc.yaml
    kind: Service
    apiVersion: v1
    metadata: 
      name: dubbo-demo-consumer
      namespace: app
    spec:
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 8080
      selector: 
        app: dubbo-demo-consumer
      clusterIP: None
      type: ClusterIP
      sessionAffinity: None
    [root@kjdow7-200 ~]# vi /data/k8s-yaml/dubbo-demo-consumer/ingress.yaml
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata: 
      name: dubbo-demo-consumer
      namespace: app
    spec:
      rules:
      - host: demo.phc-dow.com
        http:
          paths:
          - path: /
            backend: 
              serviceName: dubbo-demo-consumer
              servicePort: 8080
    
    

    9.3 Preparation before applying the manifests: DNS resolution

    [root@kjdow7-11 ~]# vim /var/named/phc-dow.com.zone
    $ORIGIN  phc-dow.com.
    $TTL  600   ; 10 minutes
    @        IN SOA dns.phc-dow.com. dnsadmin.phc-dow.com. (
                                    2020010208   ; serial
                                    10800        ; refresh (3 hours)
                                    900          ; retry  (15 minutes)
                                    604800       ; expire (1 week)
                                    86400        ; minimum (1 day)
                    )
                            NS   dns.phc-dow.com.
    $TTL  60 ; 1 minute
    dns                A         10.4.7.11
    harbor             A         10.4.7.200
    k8s-yaml           A         10.4.7.200
    traefik            A         10.4.7.10
    dashboard          A         10.4.7.10
    zk1     60 IN      A         10.4.7.11
    zk2     60 IN      A         10.4.7.12
    zk3     60 IN      A         10.4.7.21
    dubbo-monitor      A         10.4.7.10
    demo               A         10.4.7.10
    [root@kjdow7-11 ~]# systemctl restart named
    [root@kjdow7-11 ~]# dig -t A demo.phc-dow.com @10.4.7.11 +short
    10.4.7.10
    
    

    9.4 Apply the resource manifests

    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-consumer/dp.yaml
    deployment.extensions/dubbo-demo-consumer created
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-consumer/svc.yaml
    service/dubbo-demo-consumer created
    [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dubbo-demo-consumer/ingress.yaml
    ingress.extensions/dubbo-demo-consumer created
    
    

    9.5 Verification

    • Check the dubbo-monitor page

    http://dubbo-monitor.phc-dow.com/applications.html

    The Applications page now shows the three deployed applications.

    • Open the demo page
    http://demo.phc-dow.com/hello?name=wanglei
    
    

    Note: the client calls the hello method here; the consumer invokes the provider's hello method over the RPC protocol and returns the result.
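    The same call works from the command line (the name parameter is arbitrary):

    # the consumer's HTTP endpoint triggers an RPC to the provider and echoes the result
    curl "http://demo.phc-dow.com/hello?name=wanglei"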

    III. Day-to-day maintenance of the Dubbo cluster

    1. Jenkins continuous integration and deployment

    • 1. Jenkins pulls the new code from git and builds it as described above
    • 2. Jenkins automatically produces a new app image
    • 3. Change the image used by the corresponding service in k8s, and k8s performs a rolling update automatically (see the sketch below)
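    Step 3 can also be done directly from the command line instead of editing the manifest (a sketch; the image tag is illustrative):

    # point the deployment at the newly built image; k8s rolls the pods over automatically
    kubectl -n app set image deployment/dubbo-demo-service dubbo-demo-service=harbor.phc-dow.com/app/dubbo-demo-service:master_202002020001
    # watch the rollout
    kubectl -n app rollout status deployment/dubbo-demo-service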

    2. Scaling services up and down

    • 1. Change the number of pod replicas declared in the deployment
    • 2. Apply the new manifest (or use kubectl scale, sketched below)
    • 3. k8s scales up or down automatically
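    The same thing from the command line (a sketch):

    # scale the provider to 3 replicas without editing the manifest
    kubectl -n app scale deployment dubbo-demo-service --replicas=3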