  • Kubernetes core components

    1.kubelet

    kubelet runs on every node in the cluster, including the master nodes.
    kubelet carries out the tasks the master hands down to its node and manages the pods, and the containers inside those pods, on that node.
    Each kubelet registers its node with the APIServer and periodically reports the node's resource usage (a quick check for this is shown after the file listing below).
    kubelet exists on every node before the cluster itself does.
    As the node's background daemon, kubelet is started at boot by the operating system's init process (systemd).
    Its systemd unit consists of the following two files:

    root@VM-16-6-ubuntu:~# ls /lib/systemd/system/kubelet.service 
    /lib/systemd/system/kubelet.service
    root@VM-16-6-ubuntu:~# ls /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
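
    As mentioned above, every kubelet registers its node with the APIServer and keeps reporting resource usage. A quick way to confirm this from any machine with admin credentials (the node name below is just an example):

    root@VM-16-6-ubuntu:~# kubectl get nodes -o wide
    root@VM-16-6-ubuntu:~# kubectl describe node vm-16-6-ubuntu | grep -A 6 Allocatable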

    The main kubelet parameter configuration:

    root@VM-16-6-ubuntu:~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
    Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
    Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
    Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
    Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

    bootstrap-kubeconfig is used to obtain a client certificate from the APIServer when the kubeconfig file does not yet exist.
    The obtained certificate is stored in the directory given by cert-dir, and the resulting credentials are also written to the file given by kubeconfig.
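
    On a node that has completed bootstrapping, both results can be seen directly (a simple sketch; the exact file names under the cert directory vary by kubelet version):

    root@VM-16-6-ubuntu:~# ls /var/lib/kubelet/pki         # cert-dir from the config above
    root@VM-16-6-ubuntu:~# ls /etc/kubernetes/kubelet.conf # kubeconfig written after bootstrapping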

    pod-manifest-path is the directory holding static pod manifests. kubelet starts these pods and keeps them running.
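
    As a minimal sketch (the file name, pod name, and image below are hypothetical), dropping a manifest into pod-manifest-path is enough for kubelet to start the pod and keep it running, with no APIServer involvement; a read-only mirror pod then shows up through the APIServer:

    root@VM-16-6-ubuntu:~# cat > /etc/kubernetes/manifests/static-web.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
    EOF
    root@VM-16-6-ubuntu:~# kubectl get pods --all-namespaces | grep static-web   # mirror pod name gets the node name appended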

    KUBELET_NETWORK_ARGS configures the CNI network plugin on the node. Through it, kubelet invokes the CNI binaries to set up networking for the containers.

    KUBELET_DNS_ARGS configures the cluster DNS.
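
    A common, hedged way to verify that pods actually pick up the cluster DNS settings above (the test pod name and busybox image are just examples):

    root@VM-16-6-ubuntu:~# kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default
    # the resolver used should be the --cluster-dns address (10.96.0.10) with search domain cluster.local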

    cadvisor-port defaults to 4194; setting it to 0 disables the cAdvisor service on the node.
    cAdvisor is an agent that analyzes container resource usage and performance characteristics; by default the agent runs on every node
    and serves its data on the exposed cadvisor-port.

    In most cases the parameters in the unit files do not need to be changed. After modifying them, run:
      systemctl daemon-reload && systemctl restart kubelet

    2.kube-apiserver

    kube-apiserver is the entry point for every call into the cluster; all of the cluster's objects and state are managed through the API it exposes.
    In a cluster bootstrapped with kubeadm, the APIServer runs as a static pod started by kubelet.
    The APIServer is a large monolithic program. Its pod manifest can be inspected:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        component: kube-apiserver
    spec:
      containers:
      - command:
        - kube-apiserver
        - --insecure-port=0  # the APIServer's insecure (non-TLS) serving port; 0 keeps it disabled
        - --requestheader-username-headers=X-Remote-User
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key  # private key the APIServer uses as a client when talking to kubelets
        - --secure-port=6443
        - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        - --service-cluster-ip-range=10.96.0.0/12  # CIDR range from which service cluster IPs are allocated; must not overlap with the pod address range
        - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt  # client certificate the APIServer uses when talking to kubelets
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        - --allow-privileged=true  # whether privileged containers are allowed
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --requestheader-allowed-names=front-proxy-client
        - --service-account-key-file=/etc/kubernetes/pki/sa.pub
        - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
        - --enable-bootstrap-token-auth=true  # allow bootstrap tokens (stored as secrets) to authenticate during cluster bootstrapping
        - --requestheader-group-headers=X-Remote-Group
        - --advertise-address=148.70.251.10
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt  # the APIServer's serving certificate
        - --authorization-mode=Node,RBAC  # list of authorization modes to apply
        - --etcd-servers=https://127.0.0.1:2379  # etcd endpoint
        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt  # etcd CA certificate
        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
        image: k8s.gcr.io/kube-apiserver-amd64:v1.10.2
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 148.70.251.10
            path: /healthz
            port: 6443
            scheme: HTTPS
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-apiserver
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ca-certs
          readOnly: true
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
    status: {}

    kubelet watches the /etc/kubernetes/manifests directory for changes and automatically restarts the apiserver pod when its manifest changes.
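
    For example, after editing /etc/kubernetes/manifests/kube-apiserver.yaml, a rough way to confirm that kubelet restarted the pod and that the new apiserver is healthy (a sketch, not an exhaustive check):

    root@VM-16-6-ubuntu:~# kubectl get pods -n kube-system | grep kube-apiserver   # AGE resets after the restart
    root@VM-16-6-ubuntu:~# kubectl get --raw /healthz                              # prints "ok" when the apiserver is serving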

    3.etcd

    etcd stores all of the cluster's objects and state.
    A cluster bootstrapped with kubeadm starts only a single etcd node (and a single APIServer) by default; etcd, too, is a static pod started by kubelet.
    To change etcd's startup parameters, edit etcd.yaml directly; once the file is saved, kubelet restarts the etcd static pod.
    Communication between the apiserver and etcd is secured with TLS.
    etcd mounts the master node's local path /var/lib/etcd for its runtime data:

    root@VM-16-6-ubuntu:~# tree /var/lib/etcd
    /var/lib/etcd
    └── member
        ├── snap
        │   ├── 0000000000000002-0000000000033465.snap
        │   ├── 0000000000000002-0000000000035b76.snap
        │   ├── 0000000000000002-0000000000038287.snap
        │   ├── 0000000000000002-000000000003a998.snap
        │   ├── 0000000000000002-000000000003d0a9.snap
        │   └── db
        └── wal
            ├── 0000000000000000-0000000000000000.wal
            ├── 0000000000000001-000000000000deb3.wal
            ├── 0000000000000002-000000000001b87f.wal
            ├── 0000000000000003-0000000000029249.wal
            ├── 0000000000000004-0000000000036c13.wal
            └── 0.tmp

    For data migration or backup, it is enough to operate on this directory.
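
    A minimal backup sketch for this single-node, kubeadm-style layout (the output file names are arbitrary, and etcdctl is assumed to be installed on the master). The file-level copy works because all runtime data lives under /var/lib/etcd; the API-level snapshot reuses the same client certificates the apiserver uses to reach etcd:

    # file-level copy of the data directory (for a consistent copy, stop etcd first or prefer the snapshot below)
    root@VM-16-6-ubuntu:~# tar czf /root/etcd-backup-$(date +%F).tar.gz -C /var/lib etcd

    # API-level snapshot through etcdctl (v3 API)
    root@VM-16-6-ubuntu:~# ETCDCTL_API=3 etcdctl snapshot save /root/etcd-snapshot.db \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
        --key=/etc/kubernetes/pki/apiserver-etcd-client.key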

    4.controller-manager

    Responsible for managing Nodes, Pod replicas, service endpoints, namespaces, Service Accounts, resource quotas, and so on.

    Controllers watch resource state through the APIServer; whenever the state drifts, the controller acts on it to bring things back to the desired state.
    Like the APIServer, controller-manager is a static pod started by kubelet.
    To change its startup parameters, edit the kube-controller-manager.yaml file directly.

    Below are controller-manager's startup parameters:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        component: kube-controller-manager
        tier: control-plane
      name: kube-controller-manager
      namespace: kube-system
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --leader-elect=true  # whether to perform leader election before running the main control loops
        - --controllers=*,bootstrapsigner,tokencleaner
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
        - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt  # CA certificate used to sign other certificates in the cluster
        - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key  # CA key used to issue those certificates
        - --address=127.0.0.1
        - --root-ca-file=/etc/kubernetes/pki/ca.crt  # ca.crt is included in the service account token secrets
        - --use-service-account-credentials=true
        image: k8s.gcr.io/kube-controller-manager-amd64:v1.10.2
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10252
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-controller-manager
        resources:
          requests:
            cpu: 200m
        volumeMounts:
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ca-certs
          readOnly: true
        - mountPath: /etc/kubernetes/controller-manager.conf
          name: kubeconfig
          readOnly: true
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - hostPath:
          path: /etc/kubernetes/controller-manager.conf
          type: FileOrCreate
        name: kubeconfig
    status: {}
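
    Because --leader-elect=true is set, a hedged way to see which instance currently holds the lock (in clusters of this vintage the election is recorded as an annotation on an Endpoints object in kube-system; on a single master the holder is simply this master):

    root@VM-16-6-ubuntu:~# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep -i leader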

    5.kube-scheduler

    Binds each pending pod to a suitable Node in the cluster according to its scheduling algorithm and policies, and writes the binding information back.
    It is also a static pod started by kubelet.
    The scheduler's configuration parameters:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        component: kube-scheduler
        tier: control-plane
      name: kube-scheduler
      namespace: kube-system
    spec:
      containers:
      - command:
        - kube-scheduler
        - --address=127.0.0.1  # bind address; bound to localhost, so not exposed externally
        - --leader-elect=true  # whether to perform leader election before entering the main scheduling loop
        - --kubeconfig=/etc/kubernetes/scheduler.conf  # kubeconfig the scheduler uses to reach the APIServer
        image: k8s.gcr.io/kube-scheduler-amd64:v1.10.2
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10251
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-scheduler
        resources:
          requests:
            cpu: 100m
        volumeMounts:
        - mountPath: /etc/kubernetes/scheduler.conf
          name: kubeconfig
          readOnly: true
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate
        name: kubeconfig
    status: {}
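
    Matching the liveness probe above, the scheduler's health endpoint can be checked locally on the master (a small sketch; the address and port come straight from the manifest):

    root@VM-16-6-ubuntu:~# curl http://127.0.0.1:10251/healthz   # prints "ok" when the scheduler is healthy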

    6.kube-proxy

    When a service is created it is assigned a virtual service IP; traffic to the service is distributed to the pods behind it according to the configured policy.
    A service has no physical presence of its own; what makes it work is the kube-proxy component running on every Kubernetes cluster node.
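
    For example, the kubernetes service itself receives a virtual IP out of the service-cluster-ip-range configured on the APIServer (10.96.0.0/12 above):

    root@VM-16-6-ubuntu:~# kubectl get svc kubernetes   # CLUSTER-IP falls inside 10.96.0.0/12, typically 10.96.0.1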

    Thanks to the proxy, a client calling a service does not need to care about the number of backend pods, load balancing, or failure recovery.
    kube-proxy is started by a DaemonSet controller as a single instance on each node.
    kube-proxy is not a static pod; its configuration lives in the kube-proxy ConfigMap mounted into each proxy pod.
    We can inspect kube-proxy's configuration:

    root@VM-16-6-ubuntu:~# kubectl exec kube-proxy-gnrc7 -n kube-system -- cat /var/lib/kube-proxy/config.conf
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy

    The most important setting is mode. kube-proxy supports three modes: userspace, iptables, and ipvs.
    If mode is left empty in the config file, the best available mode is chosen (iptables), falling back to userspace if the kernel does not support it.
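
    Since mode is empty here, two hedged ways to confirm which proxier actually ended up in use (the ConfigMap name is what kubeadm generates; the iptables check only applies when iptables mode is active):

    # the kube-proxy ConfigMap holds the same config.conf shown above
    root@VM-16-6-ubuntu:~# kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"

    # in iptables mode, kube-proxy programs KUBE-SERVICES/KUBE-SVC-* chains on every node
    root@VM-16-6-ubuntu:~# iptables-save | grep -c KUBE-SVC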
