  • [k8s] XX: switching to ipvs mode

    https://blog.51cto.com/13641616/2442005

    Troubleshooting background: During a production deployment, a configuration file pointed at a cluster Service address, but after deployment the service could not be reached. I started a busybox pod to test: inside busybox, CoreDNS resolved the Service name to an IP correctly, but pinging the Service name failed, and pinging the ClusterIP directly also failed.

    (Screenshot: ping to the Service / ClusterIP failing)
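
    For reference, the test above can be reproduced roughly as follows (a sketch; the Service name nginx-service is taken from the test at the end of this post, and the busybox:1.28 image is an assumption, chosen because its nslookup behaves well with cluster DNS):

    # Start a throwaway busybox pod with an interactive shell.
    kubectl run busybox --rm -it --image=busybox:1.28 --restart=Never -- sh

    # Inside the pod: DNS resolution works ...
    nslookup nginx-service

    # ... but ICMP to the Service name / ClusterIP gets no reply in iptables mode.
    ping -c 3 nginx-service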


    Troubleshooting process: I first checked whether kube-proxy was healthy; it was running normally, and restarting it made no difference, the ping still failed. I then looked at the network plugin and restarted flannel, again with no effect. Then I remembered another k8s environment of mine where pinging a Service works, so I compared the configuration of the two clusters. The only difference was in kube-proxy: the environment where ping works runs kube-proxy with --proxy-mode=ipvs, while the broken one uses the default mode (iptables).

    In iptables mode there is no actual device to answer the ping: kube-proxy only installs NAT rules for the Service's protocol and port, so ICMP sent to the ClusterIP gets no response. In ipvs mode the ClusterIPs are additionally bound to a dummy interface (kube-ipvs0) on every node, which is why they reply to ping.

    After several more tests, I added --proxy-mode=ipvs, flushed the firewall (iptables) rules on the nodes, and restarted kube-proxy; after that the Service could be pinged normally.
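
    The node-side cleanup was roughly the following (a sketch; flushing iptables on a node is disruptive, and kube-proxy will repopulate its own rules once it restarts):

    # On each node: flush the stale rules left behind by iptables mode.
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

    # Then restart kube-proxy: "systemctl restart kube-proxy" if it runs as a
    # systemd service (an assumption for this environment), or delete the
    # kube-proxy pods in a kubeadm cluster as shown further below.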


    While learning K8S I had always glossed over how traffic is forwarded at the bottom layer, i.e. the IPVS and iptables details, assuming that whichever mode is used, as long as requests reach the pod it does not matter. This incident showed those details are worth understanding carefully.

    Addendum: switching kube-proxy to ipvs mode on a cluster deployed with kubeadm.

    By default, the kube-proxy deployed this way logs the following, including the line: Flag proxy-mode="" unknown, assuming iptables proxy

    [root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-ppdb6 
    W1013 06:55:35.773739       1 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
    W1013 06:55:35.868822       1 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
    W1013 06:55:35.869786       1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
    W1013 06:55:35.870800       1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
    W1013 06:55:35.876832       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
    I1013 06:55:35.890892       1 server_others.go:143] Using iptables Proxier.
    I1013 06:55:35.892136       1 server.go:534] Version: v1.15.0
    I1013 06:55:35.909025       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
    I1013 06:55:35.909053       1 conntrack.go:52] Setting nf_conntrack_max to 131072
    I1013 06:55:35.919298       1 conntrack.go:83] Setting conntrack hashsize to 32768
    I1013 06:55:35.945969       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
    I1013 06:55:35.946044       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
    I1013 06:55:35.946623       1 config.go:96] Starting endpoints config controller
    I1013 06:55:35.946660       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
    I1013 06:55:35.946695       1 config.go:187] Starting service config controller
    I1013 06:55:35.946713       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
    I1013 06:55:36.047121       1 controller_utils.go:1036] Caches are synced for endpoints config controller
    I1013 06:55:36.047195       1 controller_utils.go:1036] Caches are synced for service config controller
     
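
    Rather than reading the full log of each pod, the active proxier of every kube-proxy replica can be checked with a quick grep (a sketch; the k8s-app=kube-proxy label is what kubeadm puts on these pods, adjust it if yours differ):

    # Print the "Using ... Proxier." line from every kube-proxy pod.
    for p in $(kubectl -n kube-system get pod -l k8s-app=kube-proxy -o name); do
      kubectl -n kube-system logs "$p" | grep -i proxier
    done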

    Here we need to edit the kube-proxy ConfigMap and set mode to ipvs.

    [root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
    ...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    ...
     
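
    For a non-interactive change, the same edit can be scripted; this is a sketch that assumes the ConfigMap still contains the default mode: "" entry:

    # Replace the empty mode with "ipvs" in the kube-proxy ConfigMap and apply it back.
    kubectl -n kube-system get cm kube-proxy -o yaml \
      | sed 's/mode: ""/mode: "ipvs"/' \
      | kubectl apply -f -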

    Note that ipvs mode requires the ip_vs kernel modules to be loaded:

    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
     
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
     
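
    Two caveats: on kernels 4.19 and later, nf_conntrack_ipv4 has been merged into nf_conntrack, and /etc/sysconfig/modules is a RHEL/CentOS-specific mechanism. A more portable sketch using systemd's modules-load.d (assuming a systemd-based distribution and a newer kernel) would be:

    # Load the IPVS modules at every boot via systemd-modules-load.
    cat > /etc/modules-load.d/ipvs.conf <<EOF
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack
    EOF

    systemctl restart systemd-modules-load.service
    lsmod | grep -e ip_vs -e nf_conntrack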

    Restart the kube-proxy pods:

    [root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
    pod "kube-proxy-62gvr" deleted
    pod "kube-proxy-n2rml" deleted
    pod "kube-proxy-ppdb6" deleted
    pod "kube-proxy-rr9cg" deleted
     
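
    With kubectl 1.15 or newer, the same restart can be done in one command against the DaemonSet instead of deleting pods one by one (a sketch; kube-proxy is the DaemonSet name kubeadm creates):

    # Roll the kube-proxy DaemonSet so every pod is recreated with the new ConfigMap.
    kubectl -n kube-system rollout restart daemonset kube-proxy
    kubectl -n kube-system rollout status daemonset kube-proxy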

    After the pods restart, check the logs again; the mode has changed to ipvs.

    [root@k8s-master ~]# kubectl get pod -n kube-system |grep kube-proxy
    kube-proxy-cbm8p                     1/1     Running   0          85s
    kube-proxy-d97pn                     1/1     Running   0          83s
    kube-proxy-gmq6s                     1/1     Running   0          76s
    kube-proxy-x6tcg                     1/1     Running   0          81s
    [root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-cbm8p 
    I1013 07:34:38.685794       1 server_others.go:170] Using ipvs Proxier.
    W1013 07:34:38.686066       1 proxier.go:401] IPVS scheduler not specified, use rr by default
    I1013 07:34:38.687224       1 server.go:534] Version: v1.15.0
    I1013 07:34:38.692777       1 conntrack.go:52] Setting nf_conntrack_max to 131072
    I1013 07:34:38.693378       1 config.go:187] Starting service config controller
    I1013 07:34:38.693391       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
    I1013 07:34:38.693406       1 config.go:96] Starting endpoints config controller
    I1013 07:34:38.693411       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
    I1013 07:34:38.793684       1 controller_utils.go:1036] Caches are synced for endpoints config controller
    I1013 07:34:38.793688       1 controller_utils.go:1036] Caches are synced for service config controller
     
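
    The IPVS rules can also be inspected directly on a node (a sketch; it assumes a CentOS host where the ipvsadm tool can be installed with yum):

    # Install the IPVS admin tool and list the virtual servers kube-proxy created.
    yum install -y ipvsadm
    ipvsadm -Ln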

    Test pinging the Service again:

    [root@k8s-master ~]# kubectl exec -it dns-test sh
    / # ping nginx-service
    PING nginx-service (10.1.58.65): 56 data bytes
    64 bytes from 10.1.58.65: seq=0 ttl=64 time=0.033 ms
    64 bytes from 10.1.58.65: seq=1 ttl=64 time=0.069 ms
    64 bytes from 10.1.58.65: seq=2 ttl=64 time=0.094 ms
    64 bytes from 10.1.58.65: seq=3 ttl=64 time=0.057 ms
    ^C
    --- nginx-service ping statistics ---
    4 packets transmitted, 4 packets received, 0% packet loss
    round-trip min/avg/max = 0.033/0.063/0.094 ms
     
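
    The reason the ping now succeeds can be confirmed on a node: in ipvs mode kube-proxy binds every ClusterIP to the kube-ipvs0 dummy interface, so there is a local address to answer ICMP (a sketch):

    # The Service's ClusterIP (10.1.58.65 in the test above) should appear here.
    ip -4 addr show dev kube-ipvs0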
  • Original post: https://www.cnblogs.com/oscarli/p/13674035.html