Deploying k8s 1.7.4 with kubeadm
References:
Environment:
etcd1: 192.168.130.32
etcd2: 192.168.130.33
etcd3: 192.168.130.34
master: 192.168.130.42
node1: 192.168.130.43
node2: 192.168.130.44
kubeadm reached GA in 2018
I. Common components
1. etcd cluster
gcr.io/etcd-development/etcd
etcd 3.2.7 cluster:
http://192.168.130.32:2379,http://192.168.130.33:2379,http://192.168.130.34:2379
Installation omitted.
2. Docker registry
For k8s 1.6/1.7:
A private registry can be created with docker-distribution.
192.168.130.254:5000/google_containers/hyperkube:v1.7.4
192.168.130.254:5000/google_containers/k8s-dns-sidecar-amd64:1.14.4
192.168.130.254:5000/google_containers/k8s-dns-kube-dns-amd64:1.14.4
192.168.130.254:5000/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
192.168.130.254:5000/google_containers/pause-amd64:3.0
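If the private registry is already up, the images above can be mirrored into it with a pull/tag/push loop. A minimal sketch that only prints the commands (pipe the output to `sh` on a host that can reach gcr.io; registry address and image list are taken from this section):

```shell
# Print the docker pull/tag/push commands that mirror the gcr.io images
# into the private registry at 192.168.130.254:5000.
REG=192.168.130.254:5000
mirror_cmds() {
  for img in hyperkube:v1.7.4 k8s-dns-sidecar-amd64:1.14.4 \
             k8s-dns-kube-dns-amd64:1.14.4 \
             k8s-dns-dnsmasq-nanny-amd64:1.14.4 pause-amd64:3.0; do
    echo "docker pull gcr.io/google_containers/$img"
    echo "docker tag gcr.io/google_containers/$img $REG/google_containers/$img"
    echo "docker push $REG/google_containers/$img"
  done
}
mirror_cmds
```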
Tip: hyperkube is an all-in-one image that bundles kube-apiserver (/hyperkube apiserver), kube-controller-manager (/hyperkube controller-manager), kube-scheduler (/hyperkube scheduler) and /usr/local/bin/kube-proxy, which greatly simplifies rapid deployment.
Binary download URLs (the links below actually redirect to object storage, e.g. https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/linux/amd64/kubeadm):
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-apiserver
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-controller-manager
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-scheduler
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-proxy
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kubelet
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kubectl
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kubeadm
https://dl.k8s.io/v1.8.4/kubernetes-server-linux-amd64.tar.gz
The URLs above are for k8s 1.8/1.9; adjust the version string for other releases.
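Since the per-binary links follow one URL pattern, they can be generated in a loop. A small sketch that prints the seven URLs (pipe to `xargs -n1 curl -LO` to actually download):

```shell
# Emit the download URL for each control-plane/node binary of a release.
VER=v1.8.4
k8s_urls() {
  for bin in kube-apiserver kube-controller-manager kube-scheduler \
             kube-proxy kubelet kubectl kubeadm; do
    echo "https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin"
  done
}
k8s_urls
```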
II. Base environment (master and node)
1. Docker
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
yum -y install docker-engine
sed -i 's|^ExecStart=/usr/bin/dockerd.*|ExecStart=/usr/bin/dockerd --registry-mirror http://192.168.130.254:5000 --insecure-registry 192.168.130.254:5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock|' /lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
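Because the sed command above rewrites the live unit file in place, it can be worth dry-running the substitution first. A sketch that applies the same expression to sample input instead of /lib/systemd/system/docker.service:

```shell
# Apply the ExecStart rewrite to stdin so the result can be inspected
# before running the in-place (-i) edit on the real unit file.
rewrite_execstart() {
  sed 's|^ExecStart=/usr/bin/dockerd.*|ExecStart=/usr/bin/dockerd --registry-mirror http://192.168.130.254:5000 --insecure-registry 192.168.130.254:5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock|'
}
echo 'ExecStart=/usr/bin/dockerd' | rewrite_execstart
```

Using `|` as the sed delimiter avoids having to escape every `/` in the paths, which is what broke the original one-liner.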
2. kubeadm
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum -y install kubeadm
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=192.168.130.254:5000/google_containers/pause-amd64:3.0"
EOF
systemctl daemon-reload
systemctl enable kubelet
Tip: at this point kubelet cannot start because its config file is missing; it will be started automatically once kubeadm init has generated the config.
III. Master node initialization (kubeadm init)
export KUBE_REPO_PREFIX=192.168.130.254:5000/google_containers
export KUBE_HYPERKUBE_IMAGE=192.168.130.254:5000/google_containers/hyperkube:v1.7.4
Tip: k8s 1.7.x pins etcd to version 3.0.17, i.e. 192.168.130.254:5000/google_containers/etcd-amd64:3.0.17.
To use an external etcd cluster: the --external-etcd-endpoints flag of early kubeadm versions has been removed; instead, supply a config file (kubeadm.yaml) via the --config flag.
The flannel and calico networks require podSubnet to be set explicitly at init time; for other network solutions see the official k8s docs.
cat >kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.130.42   # this master's IP
etcd:
  endpoints:                         # the external cluster from section I
  - http://192.168.130.32:2379
  - http://192.168.130.33:2379
  - http://192.168.130.34:2379
networking:
  podSubnet: 10.244.0.0/16           # flannel's default subnet
kubernetesVersion: v1.7.4
EOF
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
kubeadm init --config kubeadm.yaml
After init finishes, the certificates and the kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml manifests are generated automatically; their parameters can be fine-tuned if needed.
If an existing etcd cluster is not used, etcd can also be run as a container (configured under the etcd: key).
Note: k8s 1.7 only supports the environment variables, while 1.8 and later drop them in favor of the config file keys.
cat >kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.130.42   # this master's IP
etcd:
  endpoints:                         # the external cluster from section I
  - http://192.168.130.32:2379
  - http://192.168.130.33:2379
  - http://192.168.130.34:2379
networking:
  podSubnet: 10.244.0.0/16           # flannel's default subnet
kubernetesVersion: v1.7.4
imageRepository: 192.168.130.254:5000/google_containers
unifiedControlPlaneImage: 192.168.130.254:5000/google_containers/hyperkube:v1.7.4
EOF
As shown above, the config key imageRepository corresponds to the KUBE_REPO_PREFIX variable, and unifiedControlPlaneImage to KUBE_HYPERKUBE_IMAGE.
IV. Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
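As an alternative to copying admin.conf, kubectl can be pointed at it directly. A sketch using the KUBECONFIG variable (effective only in the current shell session):

```shell
# Point kubectl at the admin kubeconfig for this shell session only,
# instead of copying it to $HOME/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
```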

Until the network add-on is ready, nodes stay in the NotReady state. By default the master does not schedule regular workloads; to allow pods to be scheduled on the master, run on it:
kubectl taint nodes --all node-role.kubernetes.io/master-
Error 1:
An intermittent bug. Fix:
kubeadm reset
rm -rf /run/kubernetes
then run kubeadm init again.
Error 2:
Cause: cp -i /etc/kubernetes/admin.conf $HOME/.kube/config did not overwrite the existing config.
Fix:
rm -rf ~/.kube
then repeat the copy.
Error 3:
Unable to connect to the server: x509: certificate is valid for 192.168.130.100, 10.254.0.1, 10.96.0.10, not 192.168.130.11
Cause: the certificate does not match the host.
Fix: regenerate the certificates for the correct host.
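One way to regenerate matching certificates is to reset and re-init with the extra addresses listed in the config file. A hedged sketch, assuming 192.168.130.11 (the address from the error above) must be covered; apiServerCertSANs is the v1alpha1 key for extra certificate SANs:

```yaml
# kubeadm.yaml -- extra names/IPs the apiserver certificate must cover
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerCertSANs:
- 192.168.130.11
```

Then run kubeadm reset followed by kubeadm init --config kubeadm.yaml to reissue the certificates.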
V. Networking
flannel option
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
curl https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml -o kube-flannel.yml
Tip:
flannel runs as a DaemonSet; using the private-registry image 192.168.130.254:5000/coreos/flannel:v0.8.0-amd64 is recommended.

It automatically creates the /etc/cni/net.d/10-flannel.conf config file and a flannel.1 interface; at this point the master node holds the following 5 docker images
calico option
kubectl apply -f https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/rbac.yaml
curl https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/calico.yaml -o calico.yaml
Tip: at minimum, calico.yaml only needs etcd_endpoints and CALICO_IPV4POOL_CIDR modified.
After the calico containers are running, /etc/cni/net.d and the /opt/cni/bin/{calico,calico-ipam} binaries are created automatically on the worker nodes.
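The two calico.yaml edits mentioned above land in the following spots. A sketch with this environment's values (the pool CIDR is an assumption and must match the podSubnet chosen at kubeadm init):

```yaml
# In the calico-config ConfigMap:
etcd_endpoints: "http://192.168.130.32:2379,http://192.168.130.33:2379,http://192.168.130.34:2379"

# In the calico-node container's env list:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```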


Error: The DaemonSet "calico-node" is invalid: spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy
Fix: add --allow-privileged=true to the kube-apiserver and kubelet startup options.
VI. Adding nodes (kubeadm join)
Install kubeadm the same way as on the master (omitted).
The KUBE_REPO_PREFIX variable is not needed here.
kubeadm join --token 5ef782.c2f3b670f11f6d18 192.168.130.42:6443
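The token comes from the kubeadm init output on the master. Bootstrap tokens follow the `[a-z0-9]{6}.[a-z0-9]{16}` format, which can be sanity-checked before joining:

```shell
# Sanity-check a bootstrap token's format before running kubeadm join.
check_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}
check_token 5ef782.c2f3b670f11f6d18 && echo "token format OK"
```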
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

flannel option
calico option
VII. Verify cluster status
kubectl get nodes
kubectl get pods --namespace=kube-system
flannel option

calico option
VIII. Dashboard
cAdvisor: kubelet's built-in basic monitoring
Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194"
systemctl daemon-reload && systemctl restart kubelet
1. Create the dashboard container
curl https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml -o kubernetes-dashboard.yaml
The main changes are switching to the private registry and exposing a nodePort:
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30000   # matches the access URL below
  selector:
    app: kubernetes-dashboard

2. Access URLs
kubectl proxy:
kubectl proxy --address='0.0.0.0' --port=30001 --accept-hosts='^*$'
192.168.130.42:30001/ui
nodePort:
192.168.130.43:30000/ui
API:
https://192.168.130.42:6443/ui
3. heapster monitoring
Only the image fields need to be changed to the private registry:
192.168.130.254:5000/google_containers/heapster-influxdb-amd64:v1.3.3
192.168.130.254:5000/google_containers/heapster-grafana-amd64:v4.4.3
192.168.130.254:5000/google_containers/heapster-amd64:v1.4.0
kubectl apply -f heapster.yaml
kubectl apply -f influxdb.yaml
kubectl apply -f grafana.yaml
kubectl apply -f heapster-rbac.yaml
kubectl get services --namespace=kube-system
monitoring-grafana monitoring-influxdb
Tips:
grafana can likewise be exposed via nodePort. After heapster is deployed, kubernetes-dashboard must be redeployed for the graphs to appear.
Grafana's default username and password are both admin; influxDB's default database is k8s, with username and password both root.
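For the grafana-via-nodePort tip above, a sketch of the monitoring-grafana Service change (the port numbers are assumptions based on the stock heapster grafana manifest; nodePort can be any free port in 30000-32767):

```yaml
# monitoring-grafana Service, switched to NodePort
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000   # grafana's default HTTP port
    nodePort: 30002    # example value, pick a free port
```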
curl -s -G http://10.99.32.74:8086/query -u root:root --data-urlencode "q=SHOW DATABASES" | python -mjson.tool