
Quick Kubernetes deployment with kubeadm

Environment initialization

IP            Role         Installed software
192.168.1.3   k8s-master   kube-apiserver, kube-scheduler, kube-controller-manager, docker, flannel, kubelet
192.168.1.4   k8s-node01   kubelet, kube-proxy, docker, flannel, nginx, dashboard

Minimum server requirements

4 GB of memory

1 processor

2 cores per processor

kubeadm's preflight checks will report errors if these requirements are not met

Run all of the following steps on all three nodes

Disable the firewall and SELinux
    $ systemctl stop firewalld && systemctl disable firewalld
    $ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  && setenforce 0
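
A quick optional check that both are off; getenforce should print Permissive or Disabled:
$ systemctl is-active firewalld
$ getenforce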
    
Disable the swap partition
$ swapoff -a
$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
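Confirm swap is off with free -m; the Swap line should be all zeros:
$ free -m | grep -i swap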
    
Set the hostname on each host (run the matching command on each machine)
$ hostnamectl set-hostname k8s-master
$ hostnamectl set-hostname k8s-node01
$ hostnamectl set-hostname k8s-<node name>
    
Add hosts entries on all hosts
<master IP> k8s-master
<node IP>   k8s-node01
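
A sketch using the example IPs from the table above:
$ cat >> /etc/hosts << EOF
192.168.1.3 k8s-master
192.168.1.4 k8s-node01
EOF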
    
Kernel tuning: pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
    
    sysctl --system
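
Verify the settings took effect (if the keys are missing, load the bridge module first with modprobe br_netfilter):
$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1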
    
Set the system time zone and sync with a time server
    $ yum install -y ntpdate
    $ ntpdate time.windows.com
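
The commands above only sync the clock. To actually set the time zone (Asia/Shanghai here, as an assumption), use timedatectl:
$ timedatectl set-timezone Asia/Shanghai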
    
Install Docker
    $ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    $ yum -y install docker-ce-18.06.1.ce-3.el7
    
Change the Docker cgroup driver

Docker and the kubelet must use the same cgroup driver; a mismatch between cgroupfs and systemd leaves containers unable to start. Here we standardize on systemd.

Check the driver with docker info; after the change below it should show:

$ docker info | grep 'Cgroup Driver'
Cgroup Driver: systemd
    

Create or edit /etc/docker/daemon.json and add the following:

    $ mkdir -p /etc/docker/
    $ vi /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
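
If Docker is already running at this point, restart it so the new daemon.json takes effect (on a fresh install, the start in the next step picks it up):
$ systemctl restart docker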
    

Start Docker:

    $ systemctl enable docker && systemctl start docker
    $ docker --version
    Docker version 18.06.1-ce, build e68fc7a 
    
Add the Kubernetes YUM repository
    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

Install the Kubernetes components

Install kubeadm, kubelet, and kubectl

Install these on all hosts. Because releases change frequently, pin a specific version:

    $ yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
    $ systemctl enable kubelet
    
Upload and load the images
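
If the nodes have Internet access, the control-plane images can instead be pre-pulled from the Aliyun mirror (for an offline install, run docker load -i on each uploaded image tar):
$ kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0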
Deploy the Kubernetes Master

Run this on the Master node only

$ kubeadm init \
--apiserver-advertise-address=<master IP> \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
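
Note: --pod-network-cidr=10.244.0.0/16 matches the default network in kube-flannel.yml; if you change one, change the other to match.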
    
Sample output
    [init] Using Kubernetes version: v1.15.0
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.3 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.3 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.3]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 30.502915 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: p9a3vu.oscq5han2y5z5wfr
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
# The command below joins a worker node to the cluster
kubeadm join 192.168.1.3:6443 --token p9a3vu.oscq5han2y5z5wfr \
    --discovery-token-ca-cert-hash sha256:bc102f35fc9049eabfe12b91bdeac18cb4f10b410b5170a8d5f7fd6cf5e986cf
    
Token notes (optional; skip by default)

A token is valid for 24 hours; once it expires it can no longer be used. If more nodes need to join later, generate a new token:

    $ kubeadm token create
    $ kubeadm token list
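
kubeadm can also print a complete, ready-to-paste join command:
$ kubeadm token create --print-join-command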
    
Create the kubeconfig directory as instructed in the init output
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
Get the SHA-256 hash of the CA certificate
    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
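The printed hash, prefixed with sha256:, is the value for --discovery-token-ca-cert-hash in the join command below.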
    

Join Kubernetes nodes

Run the join command on every node

Join a node to the cluster
kubeadm join <master IP>:6443 --token <your token> \
    --discovery-token-ca-cert-hash sha256:<your hash>
    
Install the network plugin

Run this on the Master node only

Upload the kube-flannel.yml file to the master node

Apply the manifest:

    kubectl apply -f kube-flannel.yml
    

Check the cluster's node status. After the network plugin is installed, the output should look like the following; wait until every node is Ready before continuing:

    $ kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    k8s-master   Ready    master   37m     v1.15.0
    k8s-node01   Ready    <none>   5m22s   v1.15.0
    k8s-node02   Ready    <none>   5m18s   v1.15.0
    $ kubectl get pod -n kube-system
    NAME                                 READY   STATUS    RESTARTS   AGE
    coredns-bccdc95cf-h2ngj              1/1     Running   0          14m
    coredns-bccdc95cf-m78lt              1/1     Running   0          14m
    etcd-k8s-master                      1/1     Running   0          13m
    kube-apiserver-k8s-master            1/1     Running   0          13m
    kube-controller-manager-k8s-master   1/1     Running   0          13m
    kube-flannel-ds-amd64-j774f          1/1     Running   0          9m48s
    kube-flannel-ds-amd64-t8785          1/1     Running   0          9m48s
    kube-flannel-ds-amd64-wgbtz          1/1     Running   0          9m48s
    kube-proxy-ddzdx                     1/1     Running   0          14m
    kube-proxy-nwhzt                     1/1     Running   0          14m
    kube-proxy-p64rw                     1/1     Running   0          13m
    kube-scheduler-k8s-master            1/1     Running   0          13m
    

Proceed only when every pod shows 1/1 READY. If the flannel pods are unhealthy, check the network, then remove the deployment:

    kubectl delete -f kube-flannel.yml
    

Then re-apply the manifest:

    kubectl apply -f kube-flannel.yml
    

Test the Kubernetes cluster

Create a pod in the cluster, expose a port, and verify access:

    $ kubectl create deployment nginx --image=nginx
    deployment.apps/nginx created
    
    $ kubectl expose deployment nginx --port=80 --type=NodePort
    service/nginx exposed
    
    $ kubectl get pods,svc
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-554b9c67f9-wf5lm   1/1     Running   0          24s
    
    NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        39m
    service/nginx        NodePort    10.1.224.251   <none>        80:30962/TCP   9
    

Access URL: http://NodeIP:Port, which in this example is http://192.168.1.3:30962
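
A quick check from any node:
$ curl -I http://192.168.1.3:30962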

Extras

Deploy the Dashboard

Upload the kubernetes-dashboard.yaml file

Run the deployment command:

    $ kubectl create -f kubernetes-dashboard.yaml
    

After creation, check that the related services are running:

    $ kubectl get deployment kubernetes-dashboard -n kube-system
    
    $ kubectl get pods -n kube-system -o wide
    
    $ kubectl get services -n kube-system
    
    $ netstat -ntlp|grep 30001
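
Assuming the uploaded yaml exposes the Dashboard service on NodePort 30001 (which the netstat check above verifies), the access address is https://<NodeIP>:30001.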
    

Open the Dashboard URL in Firefox

The page will not open until the certificate is configured; be sure to use Firefox

Get the authentication token for logging in to the Dashboard:

    $ kubectl create serviceaccount  dashboard-admin -n kube-system
    
    $ kubectl create clusterrolebinding  dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    
    $ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
    

Log in to the Dashboard with the token from the output
