As the official deployment tool kubeadm has matured, deploying Kubernetes has become relatively simple, and production deployments are much easier than they used to be. Below is a walkthrough of bootstrapping a highly available Kubernetes cluster with kubeadm. There are two control-plane topologies to choose from:
Stacked etcd:
- The default deployment mode; each apiserver talks only to its local etcd member.
- Pros: easy to deploy and manage.
- Cons: etcd is coupled to the control-plane nodes.

External etcd:
- The apiservers talk to an etcd cluster running on separate machines (a config sketch follows this list).
- Pros: good high availability, since etcd and the control plane fail independently.
- Cons: a separate etcd cluster to run, which adds management overhead.
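For the external-etcd topology, kubeadm is pointed at the existing etcd cluster through its config file. A minimal sketch of the relevant `ClusterConfiguration` snippet (the endpoints and certificate paths here are placeholders, not values from this walkthrough):

```yaml
# kubeadm ClusterConfiguration snippet for external etcd
# (endpoints and cert paths are example values)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://etcd1.example.com:2379
      - https://etcd2.example.com:2379
      - https://etcd3.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

The rest of this walkthrough uses the default stacked mode.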
---
Initialize the machine environment
| IP | Role | Hostname (DNS recommended) | OS |
| --- | --- | --- | --- |
| 192.168.1.1 | lb | api-lb.k8s.com | CentOS 7 |
| 192.168.1.2 | master | m1.k8s.com | CentOS 7 |
| 192.168.1.3 | master | m2.k8s.com | CentOS 7 |
| 192.168.1.4 | master | m3.k8s.com | CentOS 7 |
On api-lb.k8s.com:

Configure haproxy:
```
frontend kube-apiserver
    bind *:6443
    default_backend kube-apiserver
    mode tcp
    option tcplog

backend kube-apiserver
    balance source
    mode tcp
    server master1 192.168.1.2:6443 check
    server master2 192.168.1.3:6443 check
    server master3 192.168.1.4:6443 check
```
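This block goes into /etc/haproxy/haproxy.cfg. A quick sketch of installing haproxy and verifying that the frontend is listening:

```
yum -y install haproxy            # the config above is appended to /etc/haproxy/haproxy.cfg
systemctl enable --now haproxy
ss -tlnp | grep 6443              # the frontend should be listening on *:6443
```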
On m[1:3].k8s.com:

Make sure iptables can see bridged traffic:
```
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
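These sysctls only exist once the br_netfilter module is loaded, so on a fresh machine you may need to load it first; a small sketch:

```
modprobe br_netfilter                                      # load the module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # and on every boot
```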
Make sure the iptables tooling does not use the nftables backend:
```
update-alternatives --set iptables /usr/sbin/iptables-legacy
```
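Note that stock CentOS 7 ships a legacy-only iptables, so this step mainly matters on newer distributions where nftables is the default. On builds that support both backends, the active one shows up in the version string:

```
iptables --version    # e.g. "iptables v1.8.4 (legacy)" or "(nf_tables)"
```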
Install docker:
```
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable --now docker
```
Configure a registry mirror:
```
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [ "http://dockerhub.azk8s.cn" ]
}
EOF
systemctl daemon-reload
systemctl restart docker
```
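The kubeadm docs additionally recommend running docker with the systemd cgroup driver, so that kubelet and docker manage cgroups consistently. If you adopt that, merge it into the same daemon.json before running kubeadm init; a sketch:

```
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [ "http://dockerhub.azk8s.cn" ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```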
Install kubeadm, kubelet, and kubectl:
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
```
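The repo above installs the latest release, while this walkthrough targets v1.17.4; if you want the versions to match exactly, pinning the packages should work (a hedged sketch, assuming the usual el7 package naming):

```
yum install -y kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4 --disableexcludes=kubernetes
```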
If swap has not been disabled, the kubelet needs to be told to tolerate it:
```
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' >/etc/sysconfig/kubelet
```
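The upstream recommendation is to disable swap outright instead; a minimal sketch (the sed assumes fstab swap entries contain the word "swap"):

```
swapoff -a                             # turn swap off immediately
sed -i '/ swap / s/^/#/' /etc/fstab    # comment it out so it stays off after reboot
```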
---
Deploy with kubeadm

On m[1:3].k8s.com:

It is best to pull the required images first:
```
kubeadm config images list --kubernetes-version=v1.17.4
W0320 15:26:32.612945  123330 validation.go:28] Cannot validate kubelet config - no validator is available
W0320 15:26:32.612995  123330 validation.go:28] Cannot validate kube-proxy config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
```
Use the Azure China mirror to speed up the pulls:
```
kubeadm config images pull --image-repository gcr.azk8s.cn/google_containers --kubernetes-version=v1.17.4
```
If you don't feel like retagging the images, you can just run the pull again:
```
kubeadm config images pull --kubernetes-version=v1.17.4
```
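Alternatively, a short loop can retag the mirrored images to the k8s.gcr.io names kubeadm expects by default (a sketch; the image list matches the kubeadm output above):

```
# retag the mirrored images to the default k8s.gcr.io names
for img in kube-apiserver:v1.17.4 kube-controller-manager:v1.17.4 \
           kube-scheduler:v1.17.4 kube-proxy:v1.17.4 \
           pause:3.1 etcd:3.4.3-0 coredns:1.6.5; do
    docker tag gcr.azk8s.cn/google_containers/$img k8s.gcr.io/$img
done
```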
On m1.k8s.com:

Create the first master node; barring surprises it only takes a few minutes:
```
kubeadm init --kubernetes-version=v1.17.4 \
    --apiserver-advertise-address=192.168.1.2 \
    --control-plane-endpoint=api-lb.k8s.com:6443 \
    --pod-network-cidr=10.64.0.0/16 \
    --service-cidr=10.32.0.0/16 \
    --upload-certs
```
- --pod-network-cidr=10.64.0.0/16 must match the subnet the CNI plugin is configured with later
- --control-plane-endpoint=api-lb.k8s.com:6443 is only needed for high availability; api-lb.k8s.com must resolve to 192.168.1.1
- --upload-certs shares the control-plane certificates between nodes; without it they have to be copied by hand (these flags can also go in a config file, sketched below)
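A minimal sketch of the equivalent kubeadm config file, assuming the file name kubeadm-config.yaml is your choice (API version v1beta2 matches kubeadm 1.17):

```yaml
# kubeadm-config.yaml -- equivalent of the init flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.4
controlPlaneEndpoint: "api-lb.k8s.com:6443"
networking:
  podSubnet: "10.64.0.0/16"
  serviceSubnet: "10.32.0.0/16"
```

It is then passed as `kubeadm init --config kubeadm-config.yaml --upload-certs` (--upload-certs can still be given on the command line).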
Output after the install completes:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f \
    --control-plane --certificate-key a88aadb2fdda79e0ae2b8e94a60f020da56d0f220533defa9d5108035a4b9662

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f
```
Configure kubectl:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
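At this point kubectl should be able to reach the cluster through the load balancer:

```
kubectl cluster-info     # should report the control plane at https://api-lb.k8s.com:6443
kubectl get node         # lists the master node(s) joined so far
```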
On m[2:3].k8s.com:

Join the other two master nodes:
```
kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f \
    --control-plane --certificate-key a88aadb2fdda79e0ae2b8e94a60f020da56d0f220533defa9d5108035a4b9662
```
For worker nodes, use:
```
kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f
```
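The bootstrap token in these commands expires after 24 hours by default, and the uploaded certificates after two hours (as the init output warns). For nodes joined later, fresh credentials can be generated on an existing master:

```
kubeadm token create --print-join-command       # prints a new worker join command
kubeadm init phase upload-certs --upload-certs  # prints a new certificate key for control-plane joins
```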
---
Install the network plugin

On m1.k8s.com:

After the install you will notice that the nodes are NotReady:
```
kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
m1.k8s.com   NotReady   master   9m27s   v1.17.4
m2.k8s.com   NotReady   master   2m12s   v1.17.4
m3.k8s.com   NotReady   master   2m5s    v1.17.4
```
Checking the kubelet shows that no network plugin is installed:
```
systemctl status kubelet.service
Mar 20 16:00:37 m1.k8s.com kubelet[15808]: E0320 16:00:37.274005   15808 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPlu...initialized
Mar 20 16:00:40 m1.k8s.com kubelet[15808]: W0320 16:00:40.733305   15808 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
```
Install the flannel plugin, rewriting its default pod subnet (10.244.0.0/16) to the --pod-network-cidr used above:
```
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yml
sed -i 's/10.244.0.0/10.64.0.0/g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
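Once the flannel pods come up, the nodes should flip to Ready; you can watch for it:

```
kubectl get pods -n kube-system -w   # wait for the kube-flannel-ds pods to reach Running
kubectl get node                     # all nodes should now report Ready
```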
Installation notes for other network plugins can be found here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network