  • Cilium Routing Modes (Part 1)

    Encapsulation

    Official documentation

    By default, Cilium runs in encapsulation mode, because this mode places the fewest requirements on the underlying network infrastructure (cloud providers in particular): as long as the cluster nodes can reach each other over IP/UDP, encapsulation is enough for Pod-to-Pod communication.

    In this mode, all cluster nodes form a mesh of tunnels using a UDP-based encapsulation protocol, VXLAN or Geneve. All traffic between Cilium nodes is encapsulated.
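
    To confirm which tunnel protocol a running cluster uses, one option is to read the agent ConfigMap (a sketch, assuming the standard Helm chart ConfigMap name cilium-config):

      kubectl -n kube-system get configmap cilium-config -o yaml | grep tunnel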

    Two encapsulation protocols are supported:

    1. VXLAN
    2. Geneve

    If there is a firewall between your nodes, make sure the following ports are open:

    Encapsulation Mode   Port Range / Protocol
    VXLAN (default)      8472/UDP
    Geneve               6081/UDP
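
    If the nodes run firewalld, a minimal sketch to open these ports (translate to your firewall of choice: iptables, cloud security groups, etc.):

      firewall-cmd --permanent --add-port=8472/udp   # VXLAN (default)
      firewall-cmd --permanent --add-port=6081/udp   # only needed if tunnel=geneve
      firewall-cmd --reload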

    Advantages of encapsulation

    • Simplicity

    The network connecting the cluster nodes does not need to know the PodCIDRs. Cluster nodes can span multiple routing or link-layer domains. The topology of the underlying network is irrelevant as long as the cluster nodes can reach each other over IP/UDP.

    • Addressing space

    Because it does not depend on any limitations of the underlying network, the available addressing space is potentially much larger: if the PodCIDR size is configured accordingly, it allows running an arbitrary number of Pods per node.
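
    As a worked example with the pool sizes used in the install command later in this post (a /20 cluster pool split into /26 per-node CIDRs; note Cilium also reserves a router IP and a health IP per node):

      # bash arithmetic: number of node CIDRs, and host addresses per /26
      echo "$(( 2 ** (26 - 20) )) node CIDRs, $(( 2 ** (32 - 26) - 2 )) host addresses each"
      # -> 64 node CIDRs, 62 host addresses each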

    • Auto-configuration

    When run together with an orchestration system such as Kubernetes, the list of all nodes in the cluster, including their allocated prefixes, is automatically made available to each cilium-agent; new nodes joining the cluster are automatically merged into the tunnel mesh.
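
    Since every node publishes itself as a CiliumNode object, this automatically maintained membership can be listed directly:

      kubectl get ciliumnodes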

    • Identity context

    Encapsulation protocols allow metadata to be carried along with the network packet. Cilium uses this ability to transfer metadata such as the source security identity. Carrying the identity is an optimization that avoids an extra identity lookup on the remote node.
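
    With VXLAN, Cilium encodes the source security identity in the VNI field of the tunnel header. The numeric identities in use can be listed from any agent (a sketch):

      kubectl -n kube-system exec ds/cilium -- cilium identity list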

    Disadvantages of encapsulation

    • MTU Overhead

    Because of the added encapsulation header, the effective MTU is lower than with native routing (50 bytes per packet for VXLAN). This results in a lower maximum throughput for a given network connection. It can be largely mitigated by enabling jumbo frames (50 bytes of overhead per 1500-byte frame versus 50 bytes per 9000-byte frame).
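
    To see the overhead in practice, compare the MTUs of the underlay and the Cilium devices on a node (with a 1500-byte underlay and VXLAN, Pod-facing routes typically end up around 1450):

      ip -o link show | grep -E 'eth0|cilium|lxc'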

    Configuration

    1. Install Cilium in VXLAN tunnel mode with Helm:
      helm install cilium cilium/cilium --version 1.9.9 \
          --namespace kube-system \
          --set tunnel=vxlan \
          --set kubeProxyReplacement=strict \
          --set ipam.mode=kubernetes \
          --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
          --set ipam.operator.clusterPoolIPv4MaskSize=26 \
          --set k8sServiceHost=apiserver.qiangyun.com \
          --set k8sServicePort=6443
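
      A quick post-install sanity check (kubectl resolves a pod behind the DaemonSet for exec):

      kubectl -n kube-system rollout status ds/cilium
      kubectl -n kube-system exec ds/cilium -- cilium status --brief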
    2. Inspect the routing table on a node:
      <root@PROD-FE-K8S-WN1 ~># netstat -rn
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
      0.0.0.0         10.1.16.253     0.0.0.0         UG        0 0          0 eth0
      10.1.16.0       0.0.0.0         255.255.255.0   U         0 0          0 eth0
      169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
      172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
      172.21.0.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.1.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.2.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.3.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.4.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.5.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.6.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.7.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.8.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.9.0      172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.9.225    0.0.0.0         255.255.255.255 UH        0 0          0 cilium_host
      172.21.10.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.11.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.12.0     172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
      172.21.12.64    172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
      172.21.12.128   172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
      172.21.12.192   172.21.9.225    255.255.255.192 UG        0 0          0 cilium_host
      172.21.13.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.14.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host
      172.21.15.0     172.21.9.225    255.255.255.0   UG        0 0          0 cilium_host

      # Notes
      Each node has a CiliumInternalIP (this node's is 172.21.9.225)
      Each node has its own IPAM PodCIDR
      Each node has a health-check address
      All of these addresses show up in the CiliumNode object below
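
      The per-CIDR routes above all point at cilium_host; the actual node-to-node tunnel mapping lives in a BPF map that can be dumped from inside the agent (a sketch; the output format varies by Cilium version):

      kubectl -n kube-system exec ds/cilium -- cilium bpf tunnel list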
    3. The CiliumNode custom resource for this node:
      spec:
        addresses:
          - ip: 10.1.16.221
            type: InternalIP
          - ip: 172.21.9.225
            type: CiliumInternalIP
        azure: {}
        encryption: {}
        eni: {}
        health:
          ipv4: 172.21.9.190
        ipam:
          podCIDRs:
            - 172.21.9.0/24
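
      The object above can be fetched for any node (node name taken from this cluster) with:

      kubectl get ciliumnode prod-fe-k8s-wn1 -o yaml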
    4. A brief look at how cross-node communication works:
      <root@PROD-FE-K8S-WN1 ~># ifconfig 
      cilium_host: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
              inet 172.21.9.225  netmask 255.255.255.255  broadcast 0.0.0.0
              ether 22:cb:9d:23:d8:48  txqueuelen 1000  (Ethernet)
              RX packets 4665  bytes 356292 (347.9 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 273  bytes 19841 (19.3 KiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
      
      cilium_net: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
              ether 26:7f:1f:99:b5:db  txqueuelen 1000  (Ethernet)
              RX packets 273  bytes 19841 (19.3 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 4665  bytes 356292 (347.9 KiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
      
      cilium_vxlan: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              ether 02:16:be:c2:2f:2f  txqueuelen 1000  (Ethernet)
              RX packets 10023  bytes 634132 (619.2 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 9979  bytes 629067 (614.3 KiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
      # Notes
      cilium_host acts like a router / gateway device
      cilium_net & cilium_host come as a pair, like a veth pair: one end faces the containers, the other the host
      cilium_vxlan is the virtual layer-2 overlay device that performs the VXLAN encapsulation for cross-node Pod traffic
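
      To see the encapsulation on the wire, capture the underlay traffic on the VXLAN port while a Pod talks to a Pod on another node (8472/UDP is the default port listed earlier):

      tcpdump -ni eth0 udp port 8472 -c 5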
    5. Although Cilium uses encapsulation by default, host routing still runs in BPF mode here, as shown below:
      root@PROD-FE-K8S-WN1:/home/cilium# cilium status --verbose
      KVStore:                Ok   Disabled
      Kubernetes:             Ok   1.18 (v1.18.5) [linux/amd64]
      Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
      KubeProxyReplacement:   Strict   [eth0 (Direct Routing)]
      Cilium:                 Ok   1.9.9 (v1.9.9-5bcf83c)
      NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
      Cilium health daemon:   Ok   
      IPAM:                   IPv4: 2/255 allocated from 172.21.9.0/24, 
      Allocated addresses:
        172.21.9.225 (router)
        172.21.9.26 (health)
      BandwidthManager:       Disabled
      Host Routing:           BPF
      Masquerading:           BPF   [eth0]   172.21.9.0/24
      Clock Source for BPF:   ktime
      Controller Status:      18/18 healthy
        Name                                  Last success   Last error   Count   Message
        cilium-health-ep                      33s ago        never        0       no error   
        dns-garbage-collector-job             41s ago        never        0       no error   
        endpoint-2159-regeneration-recovery   never          never        0       no error   
        endpoint-3199-regeneration-recovery   never          never        0       no error   
        k8s-heartbeat                         11s ago        never        0       no error   
        mark-k8s-node-as-available            48m34s ago     never        0       no error   
        metricsmap-bpf-prom-sync              6s ago         never        0       no error   
        neighbor-table-refresh                3m34s ago      never        0       no error   
        resolve-identity-2159                 3m34s ago      never        0       no error   
        resolve-identity-3199                 3m33s ago      never        0       no error   
        sync-endpoints-and-host-ips           34s ago        never        0       no error   
        sync-lb-maps-with-k8s-services        48m34s ago     never        0       no error   
        sync-policymap-2159                   31s ago        never        0       no error   
        sync-policymap-3199                   31s ago        never        0       no error   
        sync-to-k8s-ciliumendpoint (2159)     4s ago         never        0       no error   
        sync-to-k8s-ciliumendpoint (3199)     13s ago        never        0       no error   
        template-dir-watcher                  never          never        0       no error   
        update-k8s-node-annotations           48m40s ago     never        0       no error   
      Proxy Status:   OK, ip 172.21.9.225, 0 redirects active on ports 10000-20000
      Hubble:         Ok   Current/Max Flows: 4096/4096 (100.00%), Flows/s: 5.39   Metrics: Disabled
      KubeProxyReplacement Details:
        Status:              Strict
        Protocols:           TCP, UDP
        Devices:             eth0 (Direct Routing)
        Mode:                SNAT  (the kube-proxy replacement mode; SNAT is the default)
        Backend Selection:   Random
        Session Affinity:    Enabled
        XDP Acceleration:    Disabled
        Services:
        - ClusterIP:      Enabled
        - NodePort:       Enabled (Range: 30000-32767) 
        - LoadBalancer:   Enabled 
        - externalIPs:    Enabled 
        - HostPort:       Enabled
      BPF Maps:   dynamic sizing: on (ratio: 0.002500)
        Name                          Size
        Non-TCP connection tracking   65536
        TCP connection tracking       131072
        Endpoint policy               65535
        Events                        2
        IP cache                      512000
        IP masquerading agent         16384
        IPv4 fragmentation            8192
        IPv4 service                  65536
        IPv6 service                  65536
        IPv4 service backend          65536
        IPv6 service backend          65536
        IPv4 service reverse NAT      65536
        IPv6 service reverse NAT      65536
        Metrics                       1024
        NAT                           131072
        Neighbor table                131072
        Global policy                 16384
        Per endpoint policy           65536
        Session affinity              65536
        Signal                        2
        Sockmap                       65535
        Sock reverse NAT              65536
        Tunnel                        65536
      Cluster health:                 19/19 reachable   (2021-08-27T17:54:39Z)
        Name                          IP                Node        Endpoints
        prod-fe-k8s-wn1 (localhost)   10.1.16.221       reachable   reachable
        prod-be-k8s-wn1               10.1.17.231       reachable   reachable
        prod-be-k8s-wn2               10.1.17.232       reachable   reachable
        prod-be-k8s-wn6               10.1.17.236       reachable   reachable
        prod-be-k8s-wn7               10.1.17.237       reachable   reachable
        prod-be-k8s-wn8               10.1.17.238       reachable   reachable
        prod-data-k8s-wn1             10.1.18.50        reachable   reachable
        prod-data-k8s-wn2             10.1.18.49        reachable   reachable
        prod-data-k8s-wn3             10.1.18.51        reachable   reachable
        prod-fe-k8s-wn2               10.1.16.222       reachable   reachable
        prod-fe-k8s-wn3               10.1.16.223       reachable   reachable
        prod-k8s-cp1                  10.1.0.5          reachable   reachable
        prod-k8s-cp2                  10.1.0.7          reachable   reachable
        prod-k8s-cp3                  10.1.0.6          reachable   reachable
        prod-sys-k8s-wn1              10.1.0.8          reachable   reachable
        prod-sys-k8s-wn2              10.1.0.9          reachable   reachable
        prod-sys-k8s-wn3              10.1.0.11         reachable   reachable
        prod-sys-k8s-wn4              10.1.0.10         reachable   reachable
        prod-sys-k8s-wn5              10.1.0.12         reachable   reachable
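
      Note that the "Host Routing: BPF" line above depends on the kernel: eBPF-based host routing requires roughly kernel 5.10 or newer (this node runs 5.11.1, as the startup log in the next step confirms). A quick check:

      uname -r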
    6. Full startup log of the cilium-agent:
      <root@PROD-FE-K8S-WN1 ~># dps
      6fdce5c6148b    Up 51 minutes   k8s_pord-ingress_prod-ingress-b76597794-tmrtc_ingress-nginx_a63d92fe-5c99-4948-89ca-fd70d2298f99_3
      43686a967be8    Up 52 minutes   k8s_cilium-agent_cilium-cgrdw_kube-system_14e0fb48-cc56-46d8-b929-64f66b36c6b7_2
      <root@PROD-FE-K8S-WN1 ~># docker logs -f 43686a967be8
      level=info msg="Skipped reading configuration file" reason="Config File "ciliumd" Not Found in "[/root]"" subsys=config
      level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
      level=info msg="Memory available for map entries (0.003% of 3976814592B): 9942036B" subsys=config
      level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
      level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
      level=info msg="  --agent-health-port='9876'" subsys=daemon
      level=info msg="  --agent-labels=''" subsys=daemon
      level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
      level=info msg="  --allow-localhost='auto'" subsys=daemon
      level=info msg="  --annotate-k8s-node='true'" subsys=daemon
      level=info msg="  --api-rate-limit='map[]'" subsys=daemon
      level=info msg="  --arping-refresh-period='5m0s'" subsys=daemon
      level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
      level=info msg="  --auto-direct-node-routes='false'" subsys=daemon  DSR 模式关闭状态(因为只能运行于Native-Routing)
      level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
      level=info msg="  --bpf-compile-debug='false'" subsys=daemon
      level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
      level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
      level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
      level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
      level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
      level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
      level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
      level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
      level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
      level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
      level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
      level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
      level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
      level=info msg="  --bpf-root=''" subsys=daemon
      level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
      level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
      level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
      level=info msg="  --cluster-id=''" subsys=daemon
      level=info msg="  --cluster-name='default'" subsys=daemon
      level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
      level=info msg="  --cmdref=''" subsys=daemon
      level=info msg="  --config=''" subsys=daemon
      level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
      level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
      level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
      level=info msg="  --datapath-mode='veth'" subsys=daemon
      level=info msg="  --debug='false'" subsys=daemon
      level=info msg="  --debug-verbose=''" subsys=daemon
      level=info msg="  --device=''" subsys=daemon
      level=info msg="  --devices=''" subsys=daemon
      level=info msg="  --direct-routing-device=''" subsys=daemon
      level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
      level=info msg="  --disable-conntrack='false'" subsys=daemon
      level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
      level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
      level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
      level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
      level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon 伪装模式的一种(基于iptables-base),还有一种是基于eBPF-base,二者并存优先取eBPF-base
      level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
      level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
      level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
      level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
      level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
      level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
      level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
      level=info msg="  --enable-endpoint-routes='false'" subsys=daemon  # Native-Routing的另一种路由模式,后面会有介绍,与前者不能并存,否则会报错
      level=info msg="  --enable-external-ips='true'" subsys=daemon
      level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
      level=info msg="  --enable-health-checking='true'" subsys=daemon
      level=info msg="  --enable-host-firewall='false'" subsys=daemon
      level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
      level=info msg="  --enable-host-port='true'" subsys=daemon
      level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
      level=info msg="  --enable-hubble='true'" subsys=daemon
      level=info msg="  --enable-identity-mark='true'" subsys=daemon
      level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
      level=info msg="  --enable-ipsec='false'" subsys=daemon
      level=info msg="  --enable-ipv4='true'" subsys=daemon
      level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
      level=info msg="  --enable-ipv6='false'" subsys=daemon
      level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
      level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
      level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
      level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
      level=info msg="  --enable-l7-proxy='true'" subsys=daemon
      level=info msg="  --enable-local-node-route='true'" subsys=daemon
      level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
      level=info msg="  --enable-monitor='true'" subsys=daemon
      level=info msg="  --enable-node-port='false'" subsys=daemon
      level=info msg="  --enable-policy='default'" subsys=daemon
      level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
      level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
      level=info msg="  --enable-session-affinity='true'" subsys=daemon
      level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
      level=info msg="  --enable-tracing='false'" subsys=daemon
      level=info msg="  --enable-well-known-identities='false'" subsys=daemon
      level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
      level=info msg="  --encrypt-interface=''" subsys=daemon
      level=info msg="  --encrypt-node='false'" subsys=daemon
      level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
      level=info msg="  --endpoint-queue-size='25'" subsys=daemon
      level=info msg="  --endpoint-status=''" subsys=daemon
      level=info msg="  --envoy-log=''" subsys=daemon
      level=info msg="  --exclude-local-address=''" subsys=daemon
      level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
      level=info msg="  --flannel-master-device=''" subsys=daemon
      level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
      level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
      level=info msg="  --gops-port='9890'" subsys=daemon
      level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
      level=info msg="  --http-403-msg=''" subsys=daemon
      level=info msg="  --http-idle-timeout='0'" subsys=daemon
      level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
      level=info msg="  --http-normalize-path='true'" subsys=daemon
      level=info msg="  --http-request-timeout='3600'" subsys=daemon
      level=info msg="  --http-retry-count='3'" subsys=daemon
      level=info msg="  --http-retry-timeout='0'" subsys=daemon
      level=info msg="  --hubble-disable-tls='false'" subsys=daemon
      level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
      level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
      level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
      level=info msg="  --hubble-metrics=''" subsys=daemon
      level=info msg="  --hubble-metrics-server=''" subsys=daemon
      level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
      level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
      level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
      level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
      level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
      level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
      level=info msg="  --install-iptables-rules='true'" subsys=daemon
      level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
      level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
      level=info msg="  --ipam='kubernetes'" subsys=daemon CRD的模式,有好几种,官方默认是cluster-pool,kubernetes模式代表从node到拿IP地址,基于controller-manager 启动参数--allocate-node-cidrs
      level=info msg="  --ipsec-key-file=''" subsys=daemon
      level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
      level=info msg="  --iptables-random-fully='false'" subsys=daemon
      level=info msg="  --ipv4-node='auto'" subsys=daemon
      level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
      level=info msg="  --ipv4-range='auto'" subsys=daemon
      level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
      level=info msg="  --ipv4-service-range='auto'" subsys=daemon
      level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
      level=info msg="  --ipv6-mcast-device=''" subsys=daemon
      level=info msg="  --ipv6-node='auto'" subsys=daemon
      level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
      level=info msg="  --ipv6-range='auto'" subsys=daemon
      level=info msg="  --ipv6-service-range='auto'" subsys=daemon
      level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
      level=info msg="  --join-cluster='false'" subsys=daemon
      level=info msg="  --k8s-api-server=''" subsys=daemon
      level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
      level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
      level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
      level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
      level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
      level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
      level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
      level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
      level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
      level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
      level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
      level=info msg="  --keep-config='false'" subsys=daemon
      level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
      level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
      level=info msg="  --kvstore=''" subsys=daemon
      level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
      level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
      level=info msg="  --kvstore-opt='map[]'" subsys=daemon
      level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
      level=info msg="  --label-prefix-file=''" subsys=daemon
      level=info msg="  --labels=''" subsys=daemon
      level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
      level=info msg="  --log-driver=''" subsys=daemon
      level=info msg="  --log-opt='map[]'" subsys=daemon
      level=info msg="  --log-system-load='false'" subsys=daemon
      level=info msg="  --masquerade='true'" subsys=daemon
      level=info msg="  --max-controller-interval='0'" subsys=daemon
      level=info msg="  --metrics=''" subsys=daemon
      level=info msg="  --monitor-aggregation='medium'" subsys=daemon
      level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
      level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
      level=info msg="  --monitor-queue-size='0'" subsys=daemon
      level=info msg="  --mtu='0'" subsys=daemon
      level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
      level=info msg="  --native-routing-cidr=''" subsys=daemon
      level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
      level=info msg="  --node-port-algorithm='random'" subsys=daemon
      level=info msg="  --node-port-bind-protection='true'" subsys=daemon
      level=info msg="  --node-port-mode='snat'" subsys=daemon
      level=info msg="  --node-port-range='30000,32767'" subsys=daemon
      level=info msg="  --policy-audit-mode='false'" subsys=daemon
      level=info msg="  --policy-queue-size='100'" subsys=daemon
      level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
      level=info msg="  --pprof='false'" subsys=daemon
      level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
      level=info msg="  --prefilter-device='undefined'" subsys=daemon
      level=info msg="  --prefilter-mode='native'" subsys=daemon
      level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
      level=info msg="  --prometheus-serve-addr=''" subsys=daemon
      level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
      level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
      level=info msg="  --read-cni-conf=''" subsys=daemon
      level=info msg="  --restore='true'" subsys=daemon
      level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
      level=info msg="  --single-cluster-route='false'" subsys=daemon
      level=info msg="  --skip-crd-creation='false'" subsys=daemon
      level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
      level=info msg="  --sockops-enable='false'" subsys=daemon
      level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
      level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
      level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
      level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
      level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
      level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
      level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
      level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
      level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
      level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
      level=info msg="  --trace-payloadlen='128'" subsys=daemon
      level=info msg="  --tunnel='vxlan'" subsys=daemon
      level=info msg="  --version='false'" subsys=daemon
      level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
      level=info msg="     _ _ _" subsys=daemon
      level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
      level=info msg="|  _| | | | | |     |" subsys=daemon
      level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
      level=info msg="Cilium 1.9.9 5bcf83c 2021-07-19T16:45:00-07:00 go version go1.15.14 linux/amd64" subsys=daemon
      level=info msg="cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL" subsys=daemon
      level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
      level=info msg="linking environment: OK!" subsys=linux-datapath
      level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
      level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
      level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
      level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
      level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
      level=info msg=" - reserved:.*" subsys=labels-filter
      level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
      level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
      level=info msg=" - :app.kubernetes.io" subsys=labels-filter
      level=info msg=" - !:io.kubernetes" subsys=labels-filter
      level=info msg=" - !:kubernetes.io" subsys=labels-filter
      level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
      level=info msg=" - !:k8s.io" subsys=labels-filter
      level=info msg=" - !:pod-template-generation" subsys=labels-filter
      level=info msg=" - !:pod-template-hash" subsys=labels-filter
      level=info msg=" - !:controller-revision-hash" subsys=labels-filter
      level=info msg=" - !:annotation.*" subsys=labels-filter
      level=info msg=" - !:etcd_node" subsys=labels-filter
      level=info msg="Auto-disabling "enable-bpf-clock-probe" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
      level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.221.0.0/16
      level=info msg="Initializing daemon" subsys=daemon
      level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
      level=info msg="Connected to apiserver" subsys=k8s
      level=info msg="Trying to auto-enable "enable-node-port", "enable-external-ips", "enable-host-reachable-services", "enable-host-port", "enable-session-affinity" features" subsys=daemon
      level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.16.221 mtu=1500 subsys=mtu
      level=info msg="Restored services from maps" failed=0 restored=11 subsys=service
      level=info msg="Reading old endpoints..." subsys=daemon
      level=info msg="No old endpoints found." subsys=daemon
      level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
      level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
      level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
      level=info msg="Retrieved node information from kubernetes node" nodeName=prod-fe-k8s-wn1 subsys=k8s
      level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.16.221 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.16.221 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/env:prod kubernetes.io/hostname:prod-fe-k8s-wn1 kubernetes.io/ingress:prod kubernetes.io/os:linux kubernetes.io/resource:prod-fe node-role.kubernetes.io/worker:worker topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h]" nodeName=prod-fe-k8s-wn1 subsys=k8s v4Prefix=172.21.9.0/24 v6Prefix="<nil>"
      level=info msg="Restored router IPs from node information" ipv4=172.21.9.225 ipv6="<nil>" subsys=k8s
      level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
      level=info msg="Using auto-derived devices to attach Loadbalancer, Host Firewall or Bandwidth Manager program" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
      level=info msg="Enabling k8s event listener" subsys=k8s-watcher
      level=info msg="Removing stale endpoint interfaces" subsys=daemon
      level=info msg="Skipping kvstore configuration" subsys=daemon
      level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=172.21.9.225 ipv6="<nil>" subsys=node
      level=info msg="Initializing node addressing" subsys=daemon
      level=info msg="Initializing kubernetes IPAM" subsys=ipam v4Prefix=172.21.9.0/24 v6Prefix="<nil>"
      level=info msg="Restoring endpoints..." subsys=daemon
      level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
      level=info msg="Addressing information:" subsys=daemon
      level=info msg="  Cluster-Name: default" subsys=daemon
      level=info msg="  Cluster-ID: 0" subsys=daemon
      level=info msg="  Local node-name: prod-fe-k8s-wn1" subsys=daemon
      level=info msg="  Node-IPv6: <nil>" subsys=daemon
      level=info msg="  External-Node IPv4: 10.1.16.221" subsys=daemon
      level=info msg="  Internal-Node IPv4: 172.21.9.225" subsys=daemon
      level=info msg="  IPv4 allocation prefix: 172.21.9.0/24" subsys=daemon
      level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
      level=info msg="  Local IPv4 addresses:" subsys=daemon
      level=info msg="  - 10.1.16.221" subsys=daemon
      level=info msg="Creating or updating CiliumNode resource" node=prod-fe-k8s-wn1 subsys=nodediscovery
      level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
      level=info msg="Adding local node to cluster" node="{prod-fe-k8s-wn1 default [{InternalIP 10.1.16.221} {CiliumInternalIP 172.21.9.225}] 172.21.9.0/24 <nil> 172.21.9.26 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/env:prod kubernetes.io/hostname:prod-fe-k8s-wn1 kubernetes.io/ingress:prod kubernetes.io/os:linux kubernetes.io/resource:prod-fe node-role.kubernetes.io/worker:worker topology.diskplugin.csi.alibabacloud.com/zone:cn-hangzhou-h] 6}" subsys=nodediscovery
      level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.9.225 v4Prefix=172.21.9.0/24 v4healthIP.IPv4=172.21.9.26 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
      level=info msg="Initializing identity allocator" subsys=identity-cache
      level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
      level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v3 subsys=datapath-loader
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
      level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
      level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
      level=info msg="Adding new proxy port rules for cilium-dns-egress:37581" proxy port name=cilium-dns-egress subsys=proxy
      level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
      level=info msg="Validating configured node address ranges" subsys=daemon
      level=info msg="Starting connection tracking garbage collector" subsys=daemon
      level=info msg="Datapath signal listener running" subsys=signal
      level=info msg="Starting IP identity watcher" subsys=ipcache
      level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
      level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
      level=info msg="Creating host endpoint" subsys=daemon
      level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2159 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2159 identityLabels="k8s:node-role.kubernetes.io/worker=worker,k8s:topology.diskplugin.csi.alibabacloud.com/zone=cn-hangzhou-h,reserved:host" ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2159 identity=1 identityLabels="k8s:node-role.kubernetes.io/worker=worker,k8s:topology.diskplugin.csi.alibabacloud.com/zone=cn-hangzhou-h,reserved:host" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
      level=info msg="Launching Cilium health daemon" subsys=daemon
      level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
      level=info msg="Launching Cilium health endpoint" subsys=daemon
      level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
      level=info msg="Initializing Cilium API" subsys=daemon
      level=info msg="Daemon initialization completed" bootstrapTime=8.140788349s subsys=daemon
      level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
      level=info msg="Configuring Hubble server" eventQueueSize=2048 maxFlows=4095 subsys=hubble
      level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
      level=info msg="Beginning to read perf buffer" startTime="2021-08-27 17:06:36.376614944 +0000 UTC m=+8.225871854" subsys=monitor-agent
      level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3199 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3199 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3199 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
      level=info msg="Compiled new BPF template" BPFCompilationTime=1.833919045s file-path=/var/run/cilium/state/templates/532a69347dd40c75334a195185011bc79bd07ca7/bpf_host.o subsys=datapath-loader
      level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2159 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Compiled new BPF template" BPFCompilationTime=1.88288814s file-path=/var/run/cilium/state/templates/fb6dc13c1055d6e188939f7cb8ae5c7e8ed3fe25/bpf_lxc.o subsys=datapath-loader
      level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3199 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
      level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
      level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.008514404296875 newInterval=7m30s subsys=map-ct
      level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.013427734375 newInterval=11m15s subsys=map-ct
      level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0150604248046875 newInterval=16m53s subsys=map-ct
      level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0257110595703125 newInterval=25m20s subsys=map-ct