I. Deploy the CNI Network
1. Prepare the CNI plugin binaries:
Download URL: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
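For example, the archive can be fetched directly on the node with wget (assuming the node has outbound access to GitHub; any download tool works):
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz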
2. Extract the archive into the default CNI working directory:
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
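As an optional sanity check, list the extracted plugins; the v0.8.6 bundle includes binaries such as bridge, host-local, loopback, portmap and flannel:
ls /opt/cni/bin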
3. Deploy the CNI network (flannel):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
4. The default image registry (quay.io) is unreachable from some networks, so the sed command above switches the manifest to a Docker Hub mirror. Apply the manifest and check the result:
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-2pc95   1/1     Running   0          72s
kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   41m   v1.20.4
With the network plugin deployed, the node is now Ready.
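Optionally, you can also confirm on the node that flannel created its overlay interface; the interface name flannel.1 assumes the default VXLAN backend used by the stock kube-flannel.yml:
ip -d link show flannel.1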
5. Authorize the apiserver to access the kubelet
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
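Without this ClusterRole and ClusterRoleBinding, requests that the apiserver proxies to the kubelet (for example kubectl logs and kubectl exec) are rejected with a Forbidden error. A quick way to confirm the authorization works is to read the logs of the flannel pod listed earlier (substitute your own pod name):
kubectl logs kube-flannel-ds-amd64-2pc95 -n kube-system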
II. Switch kube-proxy to IPVS Mode
Note: a Kubernetes cluster running flannel can be switched to IPVS mode (tested, no issues found); a cluster running Calico requires the kernel to be upgraded to 4.1 or later.
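If you are on Calico, check the running kernel version first before switching:
uname -r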
1. Enable the required kernel parameters
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p   # apply the settings
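To confirm the values are active, query them back; note that the two bridge settings only exist once the br_netfilter module is loaded, so load it first if needed (an extra step not shown above):
modprobe br_netfilter
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables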
2. Enable IPVS support
yum update -y   # optional if the CentOS 7 kernel is already newer than 3.10
yum install -y ipset ipvsadm conntrack conntrack-tools
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
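The script above loads the modules immediately; if you also want them loaded automatically after a reboot, one option on systemd-based systems is a modules-load.d entry (a sketch; on kernels 4.19 and newer, nf_conntrack_ipv4 has been merged into nf_conntrack):
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF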
3. Configure the kube-proxy service
vi /usr/lib/systemd/system/kube-proxy.service
# Edit the unit file: append the following two flags to the end of the ExecStart line
--proxy-mode=ipvs
--masquerade-all=true
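As a rough sketch, the ExecStart line might end up looking like this; the binary and config paths are placeholders from a typical binary deployment, and only the last two flags are the additions:
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/logs \
  --config=/opt/kubernetes/cfg/kube-proxy-config.yml \
  --proxy-mode=ipvs \
  --masquerade-all=true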
4. Modify the kube-proxy configuration file
# Use `systemctl status kube-proxy.service` to find where the config file is stored
vi kube-proxy-config.yml
# Append the following two lines to enable IPVS with round-robin (rr) scheduling
mode: ipvs
scheduler: "rr"
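For reference, a minimal sketch of kube-proxy-config.yml after the change; cluster-specific values (kubeconfig path, hostnameOverride, clusterCIDR) are placeholders, and only the last two lines are the additions:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.244.0.0/16
mode: ipvs
scheduler: "rr"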
5. Restart the service and check the result
# Restart kube-proxy
systemctl daemon-reload
systemctl restart kube-proxy
systemctl status kube-proxy
# Check the IPVS forwarding table
ipvsadm -L -n
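If IPVS mode is active, the table should contain a virtual server for every Service ClusterIP. For example, you can cross-check the entry for the default kubernetes service (the ClusterIP depends on your service CIDR):
ipvsadm -L -n | grep -A 2 $(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')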
6. Make sure the flannel subnet file is in place
[root@k8s-master1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
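This file is normally written by the flannel agent itself. If a node is missing it (the CNI plugin then reports that /run/flannel/subnet.env cannot be found), it can be recreated by hand with that node's own values; FLANNEL_SUBNET must be the per-node /24 allocated out of the 10.244.0.0/16 network (a sketch using the values shown above):
cat > /run/flannel/subnet.env << EOF
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF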