  • Setting up Kubernetes with kubeadm

    I. Installation approaches

    There are several ways to set up Kubernetes; a quick assessment of each:

        Running Kubernetes locally on Docker
        Prerequisites:
        http://www.cnblogs.com/zhangeamon/p/5197655.html
        Reference:
        https://github.com/kubernetes/community/blob/master/contributors/devel/local-cluster/docker.md
        Install kubectl and shell auto-completion.
        Assessment: I never got this approach to work; I kept hitting "cannot connect to 127.0.0.1:8080", which in hindsight was probably because the .kube directory had not been created. I did not try again.
        Using minikube
        minikube suits a single-machine setup: it creates a virtual machine for you, and the Kubernetes project appears to have dropped support for running Kubernetes locally on Docker (see https://github.com/kubernetes/minikube). However, it works best with VirtualBox as the virtualization driver, and my bare-metal host already runs KVM; when I tried, the two conflicted, so I did not use this approach either.
        Using kubeadm
        A fairly convenient tool for installing a Kubernetes cluster; this is the approach that worked for me, and it is documented in detail below.
        Installing step by step
        Installing every component by hand. I have not tried this; https://github.com/opsnull/follow-me-install-kubernetes-cluster walks through it, but it is fairly involved.
        Having tried all of these, I recommend the third approach (kubeadm) as the easiest way to get started.

    II. Setting up Kubernetes with kubeadm

    References:
    OpenStack: https://docs.openstack.org/developer/kolla-kubernetes/deployment-guide.html
    Kubernetes: https://kubernetes.io/docs/getting-started-guides/kubeadm/
    Environment: a CentOS 7 virtual machine running on KVM
    1. Turn off SELinux

    sudo setenforce 0
    sudo sed -i 's/enforcing/permissive/g' /etc/selinux/config
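The `setenforce 0` covers the running session, while the `sed` edit makes the change survive reboots. The substitution can be sanity-checked offline against a scratch copy of the config (the file contents below are just an illustration of the stock CentOS 7 file):

```shell
# Make a scratch copy mimicking /etc/selinux/config and apply the same edit.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

sed -i 's/enforcing/permissive/g' "$cfg"

grep '^SELINUX=' "$cfg"   # SELINUX=permissive
rm -f "$cfg"
```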


    2. Turn off firewalld

    sudo systemctl stop firewalld
    sudo systemctl disable firewalld


    3. Write the Kubernetes repository file

    cat <<EOF > kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    sudo mv kubernetes.repo /etc/yum.repos.d


    4. Install Kubernetes 1.6.1 or later and other dependencies

    sudo yum install -y docker ebtables kubeadm kubectl kubelet kubernetes-cni


    5. To enable the proper cgroup driver, start Docker and disable CRI

    sudo systemctl enable docker
    sudo systemctl start docker


    CGROUP_DRIVER=$(sudo docker info | grep "Cgroup Driver" | awk '{print $3}')
    sudo sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER --enable-cri=false |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo sed -i 's|\$KUBELET_NETWORK_ARGS| |g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
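Note the quoting: `$CGROUP_DRIVER` must expand in the shell (double quotes), while `$KUBELET_NETWORK_ARGS` is a literal systemd variable reference in the drop-in file and must reach `sed` unexpanded. Both edits can be previewed on a scratch copy of the drop-in; the `ExecStart` line below is a trimmed illustration of what the kubeadm-1.6-era package ships, not the full file:

```shell
unit=$(mktemp)
# Trimmed stand-in for /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat <<'EOF' > "$unit"
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_NETWORK_ARGS $KUBELET_EXTRA_ARGS
EOF

CGROUP_DRIVER=systemd   # stand-in for: sudo docker info | grep "Cgroup Driver" | awk '{print $3}'
# Double quotes: $CGROUP_DRIVER must expand here.
sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER --enable-cri=false |g" "$unit"
# Escaped $: KUBELET_NETWORK_ARGS is a literal token in the unit file, not a shell variable.
sed -i 's|\$KUBELET_NETWORK_ARGS| |g' "$unit"

grep ExecStart "$unit"
rm -f "$unit"
```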


    6. Set up the DNS server with the service CIDR:

    sudo sed -i 's/10.96.0.10/10.3.3.10/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf


    7. Reload systemd and restart kubelet

    sudo systemctl daemon-reload
    sudo systemctl stop kubelet
    sudo systemctl enable kubelet
    sudo systemctl start kubelet


    8. Deploy Kubernetes with kubeadm

    sudo kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24


    A problem you may run into: if your traffic goes out through a corporate proxy, be sure to add your VM's address to no_proxy, or kubeadm will hang at the step below. If a run fails, clean up with sudo kubeadm reset:

    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [apiclient] Created API client, waiting for the control plane to become ready
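A sketch of the proxy exclusion, using this guide's example node address 192.168.122.29 and the CIDRs chosen below (adjust the list to your own environment):

```shell
# Exclude local and cluster traffic from the corporate proxy. The address and
# CIDRs below are this guide's examples, not universal values.
export no_proxy="127.0.0.1,localhost,192.168.122.29,10.3.3.0/24,10.1.0.0/16"
export NO_PROXY="$no_proxy"   # some tools only read the uppercase form

# Quick check that the node address is covered:
case ",$no_proxy," in
  *",192.168.122.29,"*) echo "node IP excluded from proxy" ;;
  *) echo "node IP missing from no_proxy" ;;
esac
```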


    Note:
    pod-network-cidr is a network private to Kubernetes that the pods within Kubernetes communicate on. The service-cidr is where IP addresses for Kubernetes services are allocated. The upstream documentation makes no recommendation that the pod network be a /16; however, the Kolla developers have found through experience that each node consumes an entire /24 network, so this configuration permits 255 Kubernetes nodes.
    When it completes, you should see:

    [preflight] Starting the kubelet service
    [certificates] Generated CA certificate and key.
    [certificates] Generated API server certificate and key.
    [certificates] API Server serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.29]
    [certificates] Generated API server kubelet client certificate and key.
    [certificates] Generated service account token signing key and public key.
    [certificates] Generated front-proxy CA certificate and key.
    [certificates] Generated front-proxy client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [apiclient] Created API client, waiting for the control plane to become ready
    [apiclient] All control plane components are healthy after 23.768335 seconds
    [apiclient] Waiting for at least one node to register
    [apiclient] First node has registered after 4.022721 seconds
    [token] Using token: 5e0896.4cced9c43904d4d0
    [apiconfig] Created RBAC rules
    [addons] Created essential addon: kube-proxy
    [addons] Created essential addon: kube-dns

    Your Kubernetes master has initialized successfully!

    To start using your cluster, you need to run (as a regular user):

      sudo cp /etc/kubernetes/admin.conf $HOME/
      sudo chown $(id -u):$(id -g) $HOME/admin.conf
      export KUBECONFIG=$HOME/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      http://kubernetes.io/docs/admin/addons/

    You can now join any number of machines by running the following on each node
    as root:

      kubeadm join --token 5e0896.4cced9c43904d4d0 192.168.122.29:6443


    Remember the final kubeadm join line: worker nodes use that command to join the Kubernetes cluster.
    Then:

      sudo cp /etc/kubernetes/admin.conf $HOME/
      sudo chown $(id -u):$(id -g) $HOME/admin.conf
      export KUBECONFIG=$HOME/admin.conf


    Load the kubeadm credentials into the system:

    mkdir -p $HOME/.kube
    sudo -H cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo -H chown $(id -u):$(id -g) $HOME/.kube/config


    Once that is done, check the status with:

    kubectl get nodes
    kubectl get pods -n kube-system


    9. Deploy the CNI driver
    An overview of CNI networking (in Chinese): https://linux.cn/thread-15315-1-1.html
    Using Flannel:
    Flannel is based on VXLAN; because encapsulation increases the packet length, it is comparatively less efficient, but it is the approach Kubernetes recommends.
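That efficiency cost is concrete: VXLAN wraps each pod frame in outer Ethernet, IP, UDP, and VXLAN headers, roughly 50 bytes in total, so the usable MTU inside the overlay shrinks accordingly. A sketch of the arithmetic, assuming a standard 1500-byte host MTU:

```shell
# VXLAN encapsulation overhead: Ethernet(14) + IP(20) + UDP(8) + VXLAN(8) bytes
host_mtu=1500
overhead=$((14 + 20 + 8 + 8))
pod_mtu=$((host_mtu - overhead))
echo "vxlan overhead: ${overhead} bytes, usable pod MTU: ${pod_mtu}"
```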

    kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


    This approach did not work for me; the flannel pod kept restarting.
    Using Canal:

    wget http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

    sed -i "s@192.168.0.0/16@10.1.0.0/16@" calico.yaml
    sed -i "s@10.96.232.136@10.3.3.100@" calico.yaml
    kubectl apply -f calico.yaml
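These `sed` commands use `@` as the delimiter precisely because the CIDRs contain `/`, which would otherwise have to be escaped. The substitutions can be previewed on a sample of the manifest; the fragment below is an illustrative stand-in, not the real calico.yaml:

```shell
yaml=$(mktemp)
# Illustrative fragment containing the two values the guide rewrites.
cat <<'EOF' > "$yaml"
  ippool.yaml: |
    cidr: 192.168.0.0/16
  etcd_endpoints: "http://10.96.232.136:6666"
EOF

# '@' delimiters let the '/' in the CIDRs appear unescaped.
sed -i "s@192.168.0.0/16@10.1.0.0/16@" "$yaml"
sed -i "s@10.96.232.136@10.3.3.100@" "$yaml"

cat "$yaml"
rm -f "$yaml"
```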


    Finally untaint the node (mark the master node as schedulable) so that PODs can be scheduled to this AIO deployment:

    kubectl taint nodes --all=true  node-role.kubernetes.io/master:NoSchedule-


    10. Restore $KUBELET_NETWORK_ARGS

    sudo sed -i 's|\$KUBELET_EXTRA_ARGS|\$KUBELET_EXTRA_ARGS \$KUBELET_NETWORK_ARGS|g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet

    OLD_DNS_POD=$(kubectl get pods -n kube-system |grep dns | awk '{print $1}')
    kubectl delete pod $OLD_DNS_POD -n kube-system
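The OLD_DNS_POD pipeline just extracts the pod name (first column) from the listing; its text processing can be checked against a canned `kubectl get pods -n kube-system` listing. The pod names below are made-up examples:

```shell
# Canned stand-in for the output of: kubectl get pods -n kube-system
listing='kube-apiserver-k8s-master      1/1  Running  0  5m
kube-dns-3913472980-tn2rz      3/3  Running  0  5m
kube-proxy-54z10               1/1  Running  0  5m'

# Same grep|awk pipeline as above, applied to the canned listing.
OLD_DNS_POD=$(echo "$listing" | grep dns | awk '{print $1}')
echo "$OLD_DNS_POD"   # kube-dns-3913472980-tn2rz
```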
    ---------------------
    Author: Frank范
    Source: CSDN
    Original: https://blog.csdn.net/u011563903/article/details/71037093
    Copyright notice: this is the author's original article; please include a link to it when reposting.

  • Reposted at: https://www.cnblogs.com/wuchangsoft/p/10041469.html