  • Installing a Kubernetes (k8s) cluster with yum

    This example deploys one master and three nodes; adjust the number of nodes to suit your own environment.

    192.168.1.130 master
    192.168.1.131 node1
    192.168.1.132 node2
    192.168.1.133 node3

    Enable NTP on the master and on all nodes:

    [root@k-master ~]# yum -y install ntp
    [root@k-master ~]# systemctl start ntpd
    [root@k-master ~]# systemctl enable ntpd
    [root@k-master ~]# hwclock --systohc
    
    [root@k-node1 ~]# yum -y install ntp
    [root@k-node1 ~]# systemctl start ntpd
    [root@k-node1 ~]# systemctl enable ntpd
    [root@k-node1 ~]# hwclock --systohc
    
    [root@k-node2 ~]# yum -y install ntp
    [root@k-node2 ~]# systemctl start ntpd
    [root@k-node2 ~]# systemctl enable ntpd
    [root@k-node2 ~]# hwclock --systohc
    
    [root@k-node3 ~]# yum -y install ntp
    [root@k-node3 ~]# systemctl start ntpd
    [root@k-node3 ~]# systemctl enable ntpd
    [root@k-node3 ~]# hwclock --systohc
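
    The same four commands are repeated verbatim on every machine, so they can be driven from a single host over SSH. The sketch below is an assumption (it presumes passwordless SSH and the k-* hostnames used later in this guide) and defaults to a dry run that only prints what it would do:

```shell
# Run the NTP setup over SSH on one host; DRY_RUN=1 (the default) only prints.
ntp_setup() {
  cmd="yum -y install ntp && systemctl start ntpd && systemctl enable ntpd && hwclock --systohc"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "ssh $1 \"$cmd\""
  else
    ssh "$1" "$cmd"
  fi
}

for h in k-master k-node1 k-node2 k-node3; do
  ntp_setup "$h"
done
```

    Once the printed commands look right, re-run with DRY_RUN=0 to execute them for real.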

    Add entries in “/etc/hosts” or records in your DNS:

    [root@k-master ~]# grep "k-" /etc/hosts
    192.168.1.130 k-master
    192.168.1.131 k-node1
    192.168.1.132 k-node2
    192.168.1.133 k-node3
    [root@k-node1 ~]# grep "k-" /etc/hosts
    192.168.1.130 k-master
    192.168.1.131 k-node1
    192.168.1.132 k-node2
    192.168.1.133 k-node3
    [root@k-node2 ~]# grep "k-" /etc/hosts
    192.168.1.130 k-master
    192.168.1.131 k-node1
    192.168.1.132 k-node2
    192.168.1.133 k-node3
    [root@k-node3 ~]# grep "k-" /etc/hosts
    192.168.1.130 k-master
    192.168.1.131 k-node1
    192.168.1.132 k-node2
    192.168.1.133 k-node3
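
    Rather than editing each /etc/hosts by hand, the four records can be appended idempotently. A minimal sketch; the target file is a parameter so the change can be rehearsed on a copy first:

```shell
# Append each cluster record to the given hosts file unless it is already there.
add_cluster_hosts() {
  hosts_file="$1"
  while read -r entry; do
    grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
  done <<'EOF'
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
EOF
}
```

    Running it twice leaves the file unchanged, so it is safe to include in any provisioning script.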

    Install the required RPMs:

    • On the master:
    [root@k-master ~]# yum -y install etcd kubernetes
    ...
    ...
    ...
    Installed:
      etcd.x86_64 0:2.1.1-2.el7                       kubernetes.x86_64 0:1.0.3-0.2.gitb9a88a7.el7
    
    Dependency Installed:
      audit-libs-python.x86_64 0:2.4.1-5.el7                   checkpolicy.x86_64 0:2.1.12-6.el7
      docker.x86_64 0:1.8.2-10.el7.centos                      docker-selinux.x86_64 0:1.8.2-10.el7.centos
      kubernetes-client.x86_64 0:1.0.3-0.2.gitb9a88a7.el7      kubernetes-master.x86_64 0:1.0.3-0.2.gitb9a88a7.el7
      kubernetes-node.x86_64 0:1.0.3-0.2.gitb9a88a7.el7        libcgroup.x86_64 0:0.41-8.el7
      libsemanage-python.x86_64 0:2.1.10-18.el7                policycoreutils-python.x86_64 0:2.2.5-20.el7
      python-IPy.noarch 0:0.75-6.el7                           setools-libs.x86_64 0:3.3.7-46.el7
      socat.x86_64 0:1.7.2.2-5.el7
    
    Complete!
    • On the nodes:
    [root@k-node1 ~]# yum -y install flannel kubernetes
    [root@k-node2 ~]# yum -y install flannel kubernetes
    [root@k-node3 ~]# yum -y install flannel kubernetes

    Stop the firewall

    For convenience, we will stop the firewall on every machine during this lab:

    [root@k-master ~]# systemctl stop firewalld && systemctl disable firewalld
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
    [root@k-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
    [root@k-node2 ~]# systemctl stop firewalld && systemctl disable firewalld
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
    [root@k-node3 ~]# systemctl stop firewalld && systemctl disable firewalld
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

    On Kubernetes master

    • Configure the “etcd” distributed key-value store:
    [root@k-master ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
    ETCD_NAME=default
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
    • Kubernetes API server configuration file:
    [root@k-master ~]# egrep -v "^#|^$" /etc/kubernetes/apiserver
    KUBE_API_ADDRESS="--address=0.0.0.0"
    KUBE_API_PORT="--port=8080"
    KUBELET_PORT="--kubelet_port=10250"
    KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    KUBE_API_ARGS=""
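
    On a freshly installed package the API server binds only to 127.0.0.1, so the file above is an edited copy of the packaged default. A sed one-liner can apply the bind-address change non-interactively; the config path is a parameter here so the edit can be rehearsed on a copy (that the stock line reads --address=127.0.0.1 is an assumption about the packaged file):

```shell
# Point KUBE_API_ADDRESS at all interfaces in the given config file.
set_api_address() {
  sed -i 's|^KUBE_API_ADDRESS=.*|KUBE_API_ADDRESS="--address=0.0.0.0"|' "$1"
}
```

    The same pattern extends to the other KUBE_* variables if you script the whole file.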
    
    • Start all Kubernetes services:
    [root@k-master ~]# for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler
     > do
     > systemctl restart $SERVICE
     > systemctl enable $SERVICE
     > done
     Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
     Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
     Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
     Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service

    We now have these listening ports:

    [root@k-master ~]# netstat -ntulp | egrep -v "ntpd|sshd"
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      2913/kube-scheduler
    tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      2887/kube-controlle
    tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      2828/etcd
    tcp        0      0 127.0.0.1:7001          0.0.0.0:*               LISTEN      2828/etcd
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2356/master
    tcp6       0      0 :::2379                 :::*                    LISTEN      2828/etcd
    tcp6       0      0 :::8080                 :::*                    LISTEN      2858/kube-apiserver
    tcp6       0      0 ::1:25                  :::*                    LISTEN      2356/master
    • Create the “etcd” key:
    [root@k-master ~]# etcdctl mk /frederic.wou/network/config '{"Network":"172.17.0.0/16"}'
    {"Network":"172.17.0.0/16"}
    
    [root@k-master ~]# etcdctl ls /frederic.wou --recursive
    /frederic.wou/network
    /frederic.wou/network/config
    [root@k-master ~]# etcdctl get /frederic.wou/network/config
    {"Network":"172.17.0.0/16"}

    On each minion node

    • flannel configuration:
    [root@k-node1 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
    FLANNEL_ETCD="http://192.168.1.130:2379"
    FLANNEL_ETCD_KEY="/frederic.wou/network"
    [root@k-node2 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
    FLANNEL_ETCD="http://192.168.1.130:2379"
    FLANNEL_ETCD_KEY="/frederic.wou/network"
    [root@k-node3 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
    FLANNEL_ETCD="http://192.168.1.130:2379"
    FLANNEL_ETCD_KEY="/frederic.wou/network"
    • Kubernetes:
    [root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/config
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=0"
    KUBE_ALLOW_PRIV="--allow_privileged=false"
    KUBE_MASTER="--master=http://192.168.1.130:8080"
    [root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/config
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=0"
    KUBE_ALLOW_PRIV="--allow_privileged=false"
    KUBE_MASTER="--master=http://192.168.1.130:8080"
    [root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/config
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=0"
    KUBE_ALLOW_PRIV="--allow_privileged=false"
    KUBE_MASTER="--master=http://192.168.1.130:8080"
    • kubelet:
    [root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
    KUBELET_ADDRESS="--address=0.0.0.0"
    KUBELET_PORT="--port=10250"
    KUBELET_HOSTNAME="--hostname_override=k-node1"
    KUBELET_API_SERVER="--api_servers=http://k-master:8080"
    KUBELET_ARGS=""
    [root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
    KUBELET_ADDRESS="--address=0.0.0.0"
    KUBELET_PORT="--port=10250"
    KUBELET_HOSTNAME="--hostname_override=k-node2"
    KUBELET_API_SERVER="--api_servers=http://k-master:8080"
    KUBELET_ARGS=""
    [root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
    KUBELET_ADDRESS="--address=0.0.0.0"
    KUBELET_PORT="--port=10250"
    KUBELET_HOSTNAME="--hostname_override=k-node3"
    KUBELET_API_SERVER="--api_servers=http://k-master:8080"
    KUBELET_ARGS=""
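
    The three kubelet files are identical except for the hostname_override value, so they can be stamped out from one template. A sketch; writing into a caller-supplied directory (rather than straight to /etc/kubernetes on each node) is an assumption made here so the output can be inspected before deploying:

```shell
# Generate a kubelet config for each node into the given directory.
gen_kubelet_cfgs() {
  outdir="$1"
  for node in k-node1 k-node2 k-node3; do
    cat > "$outdir/kubelet.$node" <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=$node"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
EOF
  done
}
```

    Each generated file can then be copied to its node as /etc/kubernetes/kubelet.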
    • Start all services:
    [root@k-node1 ~]# for SERVICE in kube-proxy kubelet docker flanneld
    > do
    > systemctl start $SERVICE
    > systemctl enable $SERVICE
    > done
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
    Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
    [root@k-node2 ~]# for SERVICE in kube-proxy kubelet docker flanneld
    > do
    > systemctl start $SERVICE
    > systemctl enable $SERVICE
    > done
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
    Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
    [root@k-node3 ~]# for SERVICE in kube-proxy kubelet docker flanneld
    > do
    > systemctl start $SERVICE
    > systemctl enable $SERVICE
    > done
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
    Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
    Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.

    Kubernetes is now ready

    [root@k-master ~]# kubectl get nodes
    NAME            LABELS                                 STATUS
    192.168.1.131   kubernetes.io/hostname=192.168.1.131   Ready
    192.168.1.132   kubernetes.io/hostname=192.168.1.132   Ready
    192.168.1.133   kubernetes.io/hostname=192.168.1.133   Ready
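
    To script this final check, count how many nodes report Ready. The helper below reads `kubectl get nodes` output on stdin, so the pipeline is just `kubectl get nodes | count_ready`; in this lab the expected answer is 3:

```shell
# Count lines ending in " Ready" from `kubectl get nodes` output on stdin.
# The leading space keeps "NotReady" nodes from being counted.
count_ready() {
  grep -c ' Ready$'
}
```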

    Troubleshooting

    Unable to start Docker on minion nodes

    [root@k-node1 ~]# systemctl start docker
    Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details

    Check the “ntp” service:

    [root@k-node1 ~]# ntpq -p
         remote           refid           st t when poll reach   delay  offset  jitter
    ==============================================================================
    +173.ip-37-59-12 36.224.68.195    2 u    -   64    7  32.539  -0.030   0.477
    *moz75-1-78-194- 213.251.128.249  2 u    4   64    7  30.108  -0.988   0.967
    -ntp.tuxfamily.n 138.96.64.10     2 u   67   64    7  25.934  -1.495   0.504
    +x1.f2tec.de     10.2.0.1         2 u   62   64    7  32.307  -0.044   0.466

    Is “flanneld” up and running?

    [root@k-node1 ~]# ip addr show dev flannel0
    3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
     link/none
     inet 172.17.85.0/16 scope global flannel0
     valid_lft forever preferred_lft forever

    Is this node able to connect to the “etcd” master?

    [root@k-node1 ~]# curl -s -L http://192.168.1.130:2379/version
    {"etcdserver":"2.1.1","etcdcluster":"2.1.0"}[root@k-node1 ~]

    Is the “kube-proxy” service running?

    [root@k-node1 ~]# systemctl status kube-proxy
    ● kube-proxy.service - Kubernetes Kube-Proxy Server
       Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
       Active: active (running) since Wed 2016-02-03 14:50:25 CET; 1min 0s ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 2072 (kube-proxy)
       CGroup: /system.slice/kube-proxy.service
               └─2072 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.1.130:8080
    
    Feb 03 14:50:25 k-node1 systemd[1]: Started Kubernetes Kube-Proxy Server.
    Feb 03 14:50:25 k-node1 systemd[1]: Starting Kubernetes Kube-Proxy Server...

    Try starting the Docker daemon manually:

    [root@k-node1 ~]# cat /run/flannel/docker
    DOCKER_OPT_BIP="--bip=172.17.85.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=true"
    DOCKER_OPT_MTU="--mtu=1472"
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=true --mtu=1472 "
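
    The --bip subnet that flanneld hands to Docker must fall inside the flannel network stored in etcd (172.17.0.0/16 in this lab); a mismatch is a common reason Docker refuses to start. A minimal membership check, hard-coded to this lab's /16:

```shell
# Succeed when the IPv4 address (or CIDR) in $1 lies inside 172.17.0.0/16.
in_flannel_net() {
  case "$1" in
    172.17.*) return 0 ;;
    *)        return 1 ;;
  esac
}
```

    For example: `. /run/flannel/docker && in_flannel_net "${DOCKER_OPT_BIP#--bip=}" && echo "bip OK"`.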
    [root@k-node1 ~]# /usr/bin/docker daemon -D --selinux-enabled --bip=172.17.85.1/24 --ip-masq=true --mtu=1472
    ...
    ...
    ...
    INFO[0001] Docker daemon                                 commit=a01dc02/1.8.2 execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-el7.centos

    # Reposted from http://frederic-wou.net/kubernetes-first-step-on-centos-7-2/

  • Original post: https://www.cnblogs.com/rutor/p/10524722.html