
    K8S Service Discovery

    • Service discovery is the process by which services (applications) locate one another.
    • Service discovery is not unique to the cloud era; the traditional monolithic era used it too. It matters even more in the following scenarios:
      • Services (applications) are highly dynamic
      • Services (applications) are updated and released frequently
      • Services (applications) scale automatically
    • In a K8S cluster, pod IPs change constantly. How do we stay stable in the face of that change?
      • The service resource is abstracted out: a label selector manages a group of pods
      • The cluster network is abstracted out: a relatively fixed "cluster IP" gives the service a stable access point
    • How, then, do we automatically associate a service resource's "name" with its "cluster IP", so that services are discovered by the cluster automatically?
      • Consider the traditional DNS model: hdss7-21.host.com → 10.4.7.21
      • Can we build the same model inside K8S: Nginx-ds → 192.168.0.5? (see the sketch below)
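
    A minimal sketch of that analogy, in the same spirit as the dig checks later in this walkthrough (hedged: the second lookup only works once CoreDNS is deployed below, 192.168.0.2 is the cluster DNS address configured further down, and Nginx-ds → 192.168.0.5 is the hypothetical example from the list above):

    # Traditional DNS: host name -> host IP
    dig -t A hdss7-21.host.com @10.4.7.11 +short
    # 10.4.7.21

    # The model we want inside K8S: service name -> cluster IP
    dig -t A nginx-ds.default.svc.cluster.local. @192.168.0.2 +short
    # e.g. 192.168.0.5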

    How services are discovered in K8S: DNS

    Plugins (software) that implement DNS inside K8S

    • kube-dns: kubernetes v1.2 through kubernetes v1.10
    • CoreDNS: kubernetes v1.11 to the present; it replaced kube-dns as the default DNS plugin of k8s

    Note:

    • DNS in K8S is not all-powerful! It should only be responsible for automatically maintaining the mapping "service name" → cluster IP (illustrated below)
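
    In other words, the cluster DNS answers only with the service's stable cluster IP; spreading traffic across the pods behind that IP is kube-proxy's job, not DNS's. A small hedged illustration (runnable only after CoreDNS is deployed below):

    # The DNS answer is the service's cluster IP ...
    dig -t A kubernetes.default.svc.cluster.local. @192.168.0.2 +short
    # 192.168.0.1
    # ... while the pod/endpoint IPs behind the service are tracked separately:
    kubectl get endpoints kubernetes -n default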

    The K8S service discovery plugin: CoreDNS

    Deploy an HTTP service for the cluster's resource manifests on the internal network

    On the ops host hdss7-200.host.com, configure an Nginx virtual host that provides a unified access point for the k8s resource manifests

    • Configure Nginx
    [root@hdss7-200 ~]# cd /etc/nginx/conf.d/
    [root@hdss7-200 conf.d]# vim /etc/nginx/conf.d/k8s-yaml.od.com.conf
    
    server {
       listen       80;
       server_name  k8s-yaml.od.com;
    
       location / {
           autoindex on;
           default_type text/plain;
           root /data/k8s-yaml;
       }
    }
    

    Create the directory and check the configuration

    [root@hdss7-200 conf.d]# mkdir /data/k8s-yaml
    
    [root@hdss7-200 conf.d]# nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    
    [root@hdss7-200 conf.d]# nginx -s reload
    [root@hdss7-200 conf.d]# cd /data/k8s-yaml/
    

    On hdss7-11

    [root@hdss7-11 ~]# vim /var/named/od.com.zone
    

    Modify as follows (bump the serial and append an A record for k8s-yaml):

                   2020080103; serial
    k8s-yaml         A 10.4.7.200
    

    Restart the named service and verify

    [root@hdss7-11 ~]# systemctl restart named
    [root@hdss7-11 ~]# dig -t A k8s-yaml.od.com @10.4.7.11 +short
    10.4.7.200
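
    With the A record resolving, the new virtual host should answer by name. A quick hedged check (run from a host whose resolver points at 10.4.7.11; the directory is still empty, so the autoindex listing has no entries yet):

    curl -I http://k8s-yaml.od.com/
    # expect an HTTP/1.1 200 OK from nginx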
    

    Back on host hdss7-200, create a coredns directory

    [root@hdss7-200 k8s-yaml]# mkdir coredns
    

    Pull the coredns image, then tag and push it to the private registry

    [root@hdss7-200 conf.d]# docker pull docker.io/coredns/coredns:1.6.1
    [root@hdss7-200 conf.d]# docker images | grep coredns
    coredns/coredns          1.6.1            c0f6e815079e        12 months ago       42.2MB
    
    [root@hdss7-200 conf.d]# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
    
    [root@hdss7-200 conf.d]# docker push !$
    docker push harbor.od.com/public/coredns:v1.6.1
    The push refers to repository [harbor.od.com/public/coredns]
    da1ec456edc8: Pushed 
    225df95e717c: Pushed 
    v1.6.1: digest: sha256:c7bf0ce4123212c87db74050d4cbab77d8f7e0b49c041e894a35ef15827cf938 size: 739
    

    Four resource manifests are needed (a note on where to place them follows this list):

    • rbac.yaml
    • cm.yaml
    • dp.yaml
    • svc.yaml
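
    The manifests below are applied over HTTP later on, so the four files are assumed to be saved under /data/k8s-yaml/coredns/ on hdss7-200. A hedged sketch of that step, plus a quick check that Nginx serves them:

    cd /data/k8s-yaml/coredns
    vim rbac.yaml   # paste the rbac.yaml shown below
    vim cm.yaml     # paste cm.yaml
    vim dp.yaml     # paste dp.yaml
    vim svc.yaml    # paste svc.yaml

    curl http://k8s-yaml.od.com/coredns/
    # the autoindex listing should now show the four yaml files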

    rbac.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: Reconcile
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: EnsureExists
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    

    cm.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            log
            health
            ready
            kubernetes cluster.local 192.168.0.0/16
            forward . 10.4.7.11
            cache 30
            loop
            reload
            loadbalance
           }
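
    For reference, what each plugin in this Corefile does (annotations only; these are standard CoreDNS plugins):

    # errors        log errors to stdout
    # log           log every query (handy while setting things up)
    # health        HTTP health endpoint on :8080 (used by the liveness probe in dp.yaml)
    # ready         HTTP readiness endpoint on :8181
    # kubernetes    answer the cluster.local zone from the API server; 192.168.0.0/16 is
    #               the service network, listed so reverse (PTR) lookups of cluster IPs work
    # forward       send everything else to the self-built DNS at 10.4.7.11
    # cache 30      cache answers for up to 30 seconds
    # loop          detect and stop on forwarding loops
    # reload        re-read the Corefile when the ConfigMap changes
    # loadbalance   randomize the order of A records in answers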
    

    dp.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: coredns
        kubernetes.io/name: "CoreDNS"
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: coredns
      template:
        metadata:
          labels:
            k8s-app: coredns
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: coredns
          containers:
          - name: coredns
            image: harbor.od.com/public/coredns:v1.6.1
            args:
            - -conf
            - /etc/coredns/Corefile
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
          dnsPolicy: Default
          volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
    

    svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: coredns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: coredns
      clusterIP: 192.168.0.2
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
      - name: metrics
        port: 9153
        protocol: TCP
    

    On hdss7-21

    Apply the resource manifests with the declarative resource-management method (kubectl apply), pulling them over HTTP from hdss7-200

    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
    configmap/coredns created
    
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
    deployment.apps/coredns created
    
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
    service/coredns created
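
    Optionally, the RBAC wiring can be sanity-checked by impersonating the coredns service account (an extra step, not part of the original walkthrough):

    kubectl auth can-i list endpoints --as=system:serviceaccount:kube-system:coredns
    # yes
    kubectl auth can-i watch services --as=system:serviceaccount:kube-system:coredns
    # yes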
    

    Check the pod resources

    [root@hdss7-21 ~]# kubectl get all -n kube-system
    NAME                           READY   STATUS    RESTARTS   AGE
    pod/coredns-6b6c4f9648-ttfg8   1/1     Running   0          2m3s
    
    NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
    service/coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   105s
    
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/coredns   1/1     1            1           2m3s
    
    NAME                                 DESIRED   CURRENT   READY   AGE
    replicaset.apps/coredns-6b6c4f9648   1         1         1       2m3s
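
    Before testing resolution, a quick hedged check that CoreDNS came up cleanly: look at its logs, and hit the health endpoint that the Corefile's health plugin exposes on port 8080 (the pod IP lookup below is a sketch):

    kubectl -n kube-system logs deployment/coredns
    # expect the CoreDNS-1.6.1 startup banner and no "loop detected" or zone errors

    POD_IP=$(kubectl -n kube-system get pod -l k8s-app=coredns -o jsonpath='{.items[0].status.podIP}')
    curl http://$POD_IP:8080/health
    # OK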
    

    Check the kubelet startup script

    [root@hdss7-21 ~]# cat /opt/kubernetes/server/bin/kubelet.sh 
    #!/bin/sh
    ./kubelet 
    ..........
      --cluster-dns 192.168.0.2 
    ..........
    

    You can see that the cluster DNS address is already fixed: it is 192.168.0.2, the unified DNS access point for this cluster.

    [root@hdss7-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short
    www.a.shifen.com.
    180.101.49.12
    180.101.49.11
      
    [root@hdss7-21 ~]# dig -t A hdss7-21.host.com @192.168.0.2 +short
    10.4.7.21
    

    Because we specified forward . 10.4.7.11 in the ConfigMap, our self-built DNS at 10.4.7.11 is the upstream DNS of CoreDNS; that is why the host.com and public names above resolve. Next, create a deployment and expose it as a service to test service-name resolution:

    [root@hdss7-21 ~]# kubectl get svc -o wide
    NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
    kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   18d   <none>
    
    [root@hdss7-21 ~]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
    deployment.apps/nginx-dp created
    
    [root@hdss7-21 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public
    service/nginx-dp exposed
    
    [root@hdss7-21 ~]# kubectl get pods -n kube-public
    NAME                        READY   STATUS    RESTARTS   AGE
    nginx-dp-5dfc689474-vjpp8   1/1     Running   0          9s
    
    [root@hdss7-21 ~]# kubectl get svc -o wide
    NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
    kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   19d   <none>
    
    [root@hdss7-21 ~]# kubectl get svc -n kube-public
    NAME       TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
    nginx-dp   ClusterIP   192.168.188.157   <none>        80/TCP    3m52s
    
    [root@hdss7-21 ~]# dig -t A nginx-dp @192.168.0.2 +short
    

    Nothing comes back!

    To resolve a service name through CoreDNS here, you must use the FQDN, appending the full domain suffix:

    [root@hdss7-21 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short
    192.168.188.157
    
    [root@hdss7-21 ~]# kubectl get pod -n kube-public -o wide
    NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
    nginx-dp-5dfc689474-vjpp8   1/1     Running   0          10m   172.7.22.3   hdss7-22.host.com   <none>           <none>
    

    Enter a container

    [root@hdss7-21 ~]# kubectl get pods -o wide
    NAME             READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
    nginx-ds-gwswr   1/1     Running   0          4h55m   172.7.22.2   hdss7-22.host.com   <none>           <none>
    nginx-ds-jh2x5   1/1     Running   0          4h55m   172.7.21.2   hdss7-21.host.com   <none>           <none>
    
    [root@hdss7-21 ~]# kubectl exec -it nginx-ds-jh2x5 /bin/bash
    
    root@nginx-ds-jh2x5:/# curl 192.168.188.157
    <!DOCTYPE html>
    ...........
    

    Access it by its namespaced name

    root@nginx-ds-jh2x5:/# curl nginx-dp.kube-public.svc.cluster.local
    <!DOCTYPE html>
    ...........
    

    The pod we exec'd into is in the default namespace while nginx-dp is in the kube-public namespace; inside the cluster, when curling the service you can shorten the name to nginx-dp.kube-public:

    root@nginx-ds-jh2x5:/# curl nginx-dp.kube-public
    <!DOCTYPE html>
    ...........
    

    Why does that work? Take a look at a configuration file that was already set up for us inside the pod.

    root@nginx-ds-jh2x5:/# cat /etc/resolv.conf 
    nameserver 192.168.0.2
    search default.svc.cluster.local svc.cluster.local cluster.local host.com
    options ndots:5
    

    You can see that default.svc.cluster.local, svc.cluster.local and cluster.local have all been added as search domains, so short service names are expanded to the full svc domain automatically.
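
    With ndots:5, a name such as nginx-dp.kube-public has fewer than five dots, so the resolver walks that search list: it tries nginx-dp.kube-public.default.svc.cluster.local first, then nginx-dp.kube-public.svc.cluster.local, which matches. A hedged way to reproduce the expansion (dig only honors the search list with +search, and dig may not be installed in this nginx image, but any host with the same resolv.conf behaves identically):

    dig +search -t A nginx-dp.kube-public @192.168.0.2 +short
    # 192.168.188.157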

    The most important thing about installing CoreDNS into the k8s cluster is the automatic association between a service's name and its cluster IP. That automatic association is exactly what we mean by service discovery (you only need to remember the service name).
