  • Building a simple Kubernetes (k8s) cluster

    Note: before using this document you should already have some understanding of the Kubernetes components; I won't describe the role each component plays in the cluster here. If you need that, please look it up elsewhere first.


    1 Environment preparation

    1.1 Basic environment

    • Operating system
    CentOS Linux release 7.4.1708 (Core)
    
    • Software versions
    Kubernetes v1.9.1  (tarball download shown below)
    etcd Version: 3.2.18 (installed directly via yum) (source releases: https://github.com/coreos/etcd/releases)
    flanneld Version: 0.7.1 (installed directly via yum) (source releases: https://github.com/coreos/flannel/releases)
    docker Version: docker://1.13.1 (installed directly via yum)
    
    • IP layout
    master:192.168.1.192 (kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flannel (optional))
    node1:192.168.1.193  (kubelet, kube-proxy, etcd, flannel)
    node2:192.168.1.194  (kubelet, kube-proxy, etcd, flannel)
    

    1.2 Initialization steps

    • /etc/hosts configuration
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.192 master
    192.168.1.193 node1
    192.168.1.194 node2
    # Copy this file to all nodes (master, node1, node2)
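    To distribute it, a small loop like the following works (a sketch, assuming root SSH access from the master to the nodes):
    for h in node1 node2; do scp /etc/hosts root@$h:/etc/hosts; done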
    
    • Firewall + SELinux
    systemctl stop firewalld  # after running this, confirm with firewall-cmd --state; "not running" means the firewall is off
    vim /etc/sysconfig/selinux
    SELINUX=disabled
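    Editing /etc/sysconfig/selinux only takes effect after a reboot. Optionally, to keep the firewall off across reboots and switch SELinux to permissive for the current session right away, you can also run:
    systemctl disable firewalld
    setenforce 0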
    
    • Repo file
    cd /etc/yum.repos.d/
    cat kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    # Clear the yum cache
    yum clean all
    # Rebuild the cache
    yum makecache
    
    • Reboot the system
      reboot

    2 Procedure

    2.1 Install the etcd cluster (run on master, node1, and node2)

    • Install etcd and docker via yum
    yum -y install etcd
    yum -y install docker
    
    • /etc/etcd/etcd.conf (this is the master's copy; the node variants are sketched below)
    # [member]
    ETCD_NAME=infra1
    ETCD_DATA_DIR="/var/lib/etcd"
    ETCD_LISTEN_PEER_URLS="http://192.168.1.192:2380"
    ETCD_LISTEN_CLIENT_URLS="http://192.168.1.192:2379" # the address etcd itself binds to, i.e. which local interface and port to listen on
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.192:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.192:2379"  # the URL that clients (etcdctl, curl, etc.) use to reach this etcd member; under the hood etcdctl makes the same HTTP requests curl would
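    On node1 and node2 only the member name and the IPs change; node1's /etc/etcd/etcd.conf, for example, would look roughly like this:
    # [member]
    ETCD_NAME=infra2
    ETCD_DATA_DIR="/var/lib/etcd"
    ETCD_LISTEN_PEER_URLS="http://192.168.1.193:2380"
    ETCD_LISTEN_CLIENT_URLS="http://192.168.1.193:2379"
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.193:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.193:2379"
    # node2 uses ETCD_NAME=infra3 and 192.168.1.194 in the same places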
    
    • etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    Documentation=https://github.com/coreos
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd/
    EnvironmentFile=-/etc/etcd/etcd.conf
    # Note: with --initial-cluster-state new, every member of the etcd cluster must be listed in --initial-cluster
    ExecStart=/usr/bin/etcd \
      --name ${ETCD_NAME} \
      --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
      --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
      --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
      --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
      --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
      --initial-cluster infra1=http://192.168.1.192:2380,infra2=http://192.168.1.193:2380,infra3=http://192.168.1.194:2380 \
      --initial-cluster-state new \
      --data-dir=${ETCD_DATA_DIR}
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    • Start etcd
    systemctl daemon-reload
    systemctl enable etcd
    systemctl start etcd
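    Once etcd has been started on all three nodes, it is worth confirming the cluster is healthy (using the v2 etcdctl that ships with the yum package):
    etcdctl --endpoints=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379 cluster-health
    etcdctl --endpoints=http://192.168.1.192:2379 member list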
    

    2.2 Configure the master node (run on master)

    • Download the Kubernetes v1.9.1 package
      The wget command below is the download location; the tarball contains the components needed on both the master and the nodes.
    mkdir -p /root/local/bin && mkdir -p /etc/kubernetes   # run on master and all nodes; these paths are used throughout below
    # Append /root/local/bin to the system PATH
    cat /root/.bash_profile
    # .bash_profile
    # Get the aliases and functions
    if [ -f ~/.bashrc ]; then
     . ~/.bashrc
    fi
    # User specific environment and startup programs
    PATH=$PATH:$HOME/bin:/root/local/bin  # this is the line that matters
    export PATH
    

    source /root/.bash_profile

    
    cd /usr/src
    wget https://storage.googleapis.com/kubernetes-release/release/v1.9.1/kubernetes-server-linux-amd64.tar.gz
    tar -xf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes/server/bin
    cp -r kube-apiserver kube-controller-manager kubectl kube-scheduler /root/local/bin
    

    Configure kube-apiserver

    • Configure /etc/kubernetes/apiserver
    cat /etc/kubernetes/apiserver
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    # The port on the local server to listen on.
    # KUBE_API_PORT="--port=8080"
     
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
     
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379"
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    # default admission control policies
    #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    # Add your own!
    KUBE_API_ARGS=""
    
    • Configure /etc/kubernetes/config
    cat /etc/kubernetes/config
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    # kube-apiserver.service   # note: this config file is needed on every node, so distribute it to the same path on all of them
    # kube-controller-manager.service
    # kube-scheduler.service
    # kubelet.service
    # kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
     
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
     
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
     
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://192.168.1.192:8080"
    
    • Configure kube-apiserver.service (/usr/lib/systemd/system/kube-apiserver.service)
    cd /usr/lib/systemd/system/
    cat kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service
     
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/apiserver
    ExecStart=/root/local/bin/kube-apiserver \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBE_ETCD_SERVERS \
                $KUBE_API_ADDRESS \
                $KUBE_API_PORT \
                $KUBELET_PORT \
                $KUBE_ALLOW_PRIV \
                $KUBE_SERVICE_ADDRESSES \
                $KUBE_ADMISSION_CONTROL \
                $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target
    

    Configure kube-controller-manager

    • Configure /etc/kubernetes/controller-manager
    ###
    # The following values are used to configure the kubernetes controller-manager
    # defaults from config and apiserver should be adequate
    # Add your own!
    KUBE_CONTROLLER_MANAGER_ARGS=""
    
    • Configure /usr/lib/systemd/system/kube-controller-manager.service
    cat /usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
     
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/controller-manager
    ExecStart=/root/local/bin/kube-controller-manager \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBE_MASTER \
                $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target
    

    Configure kube-scheduler

    • Configure /etc/kubernetes/scheduler
    ###
    # kubernetes scheduler config
    # default config should be adequate
    # Add your own!
    #KUBE_SCHEDULER_ARGS="--loglevel=0"
    KUBE_SCHEDULER_ARGS="--address=127.0.0.1"
    
    • Configure /usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler Plugin
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/scheduler
    ExecStart=/root/local/bin/kube-scheduler \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBE_MASTER \
                $KUBE_SCHEDULER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Start the services

    systemctl daemon-reload
    for i in kube-apiserver kube-controller-manager kube-scheduler
    do
        systemctl enable $i
        systemctl start $i
    done
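    Once the three components are running, you can check that they can all reach etcd and the apiserver (the -s flag points kubectl at the insecure apiserver port):
    kubectl -s http://192.168.1.192:8080 get componentstatuses
    # scheduler, controller-manager and the three etcd members should all report Healthy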
    

    2.3 Set up a private Docker registry (run on master here, although any node the others can reach will do)

    Since TLS is not configured, we need to modify the docker.service file.

    • Modify docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=http://docs.docker.com
    After=network.target rhel-push-plugin.socket registries.service
    Wants=docker-storage-setup.service
    Requires=docker-cleanup.timer
    
    [Service]
    Type=notify
    NotifyAccess=all
    EnvironmentFile=-/run/containers/registries.conf
    EnvironmentFile=-/etc/sysconfig/docker
    EnvironmentFile=-/etc/sysconfig/docker-storage
    EnvironmentFile=-/etc/sysconfig/docker-network
    Environment=GOTRACEBACK=crash
    Environment=DOCKER_HTTP_HOST_COMPAT=1
    Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
    # --insecure-registry below lets docker talk to that IP:PORT without HTTPS; the docker daemons on node1 and node2 need the same setting (their docker configuration is not repeated later)
    ExecStart=/usr/bin/dockerd-current \
              --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
              --default-runtime=docker-runc \
              --exec-opt native.cgroupdriver=systemd \
              --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
              --init-path=/usr/libexec/docker/docker-init-current \
              --seccomp-profile=/etc/docker/seccomp.json \
              --insecure-registry=192.168.1.192:5000 \
              $OPTIONS \
              $DOCKER_STORAGE_OPTIONS \
              $DOCKER_NETWORK_OPTIONS \
              $ADD_REGISTRY \
              $BLOCK_REGISTRY \
              $INSECURE_REGISTRY \
              $REGISTRIES
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=1048576
    LimitNPROC=1048576
    LimitCORE=infinity
    TimeoutStartSec=0
    Restart=on-abnormal
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    
    • Start docker
    systemctl enable docker
    systemctl start docker
    
    • Build the private registry
      docker search registry  # look for available registry images
    [root@master ~]# docker search registry  # output truncated; there are many more results
    INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
    docker.io docker.io/registry The Docker Registry 2.0 implementation for... 2093 [OK]       
    docker.io docker.io/konradkleine/docker-registry-frontend Browse and modify your Docker registry in ... 194 [OK]
    docker.io docker.io/hyper/docker-registry-web Web UI, authentication service and event r... 140 [OK]
    docker.io docker.io/atcol/docker-registry-ui A web UI for easy private/local Docker Reg... 106 [OK]
    
    # We use the first one
    docker pull docker.io/registry
    # List our images
    docker images
    [root@master ~]# docker images
    REPOSITORY TAG IMAGE ID CREATED SIZE
    docker.io/registry latest b2b03e9146e1 12 days ago 33.3 MB
    # Start a registry container
    mkdir -p /home/belle/docker_registry/
    docker run -d -p 5000:5000 -v /home/belle/docker_registry/:/var/lib/registry registry
    # Check the running containers
    [root@master ~]# docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    e43cd73f7f76 registry "/entrypoint.sh /e..." 5 days ago Restarting (1) 46 seconds ago suspicious_sinoussi
    # Next, pull the images we need and push them into the registry we just started
    # Pull an image that will be used later in the experiment: hello-world-nginx
    docker pull docker.io/kitematic/hello-world-nginx
    # Tag the docker.io/kitematic/hello-world-nginx image
    docker tag docker.io/kitematic/hello-world-nginx 192.168.1.192:5000/hello-world-nginx
    # Push it
    docker push 192.168.1.192:5000/hello-world-nginx
    The push refers to a repository [192.168.1.192:5000/hello-world-nginx]
    5f70bf18a086: Preparing
    5f70bf18a086: Preparing
    5f70bf18a086: Preparing
    b51acdd3ef48: Preparing
    3f47ff454588: Preparing
    f19fb69b288a: Preparing
    b11278aeb507: Preparing
    fb85701f3991: Preparing
    15235e629864: Preparing
    86882fc1175f: Preparing
    fb85701f3991: Waiting
    15235e629864: Waiting
    86882fc1175f: Waiting
    9e8c93c7ea7e: Preparing
    e66f0ebc2eef: Preparing
    6a15a6c08ef6: Preparing
    461f75075df2: Preparing
    9e8c93c7ea7e: Waiting
    e66f0ebc2eef: Waiting
    6a15a6c08ef6: Waiting
    461f75075df2: Waiting
    f19fb69b288a: Layer already exists
    b11278aeb507: Layer already exists
    fb85701f3991: Layer already exists
    15235e629864: Layer already exists
    86882fc1175f: Layer already exists
    b51acdd3ef48: Layer already exists
    3f47ff454588: Layer already exists
    5f70bf18a086: Layer already exists
    6a15a6c08ef6: Layer already exists
    e66f0ebc2eef: Layer already exists
    9e8c93c7ea7e: Layer already exists
    461f75075df2: Layer already exists
    latest: digest: sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3 size: 3226
    # The "Layer already exists" messages are because I had already pushed this image before
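    You can confirm the push landed in the private registry through the registry's v2 HTTP API. Also note that the kubelet configuration in section 2.4 references 192.168.1.192:5000/pause-amd64, so a pause-amd64 image has to be tagged and pushed into this registry the same way (pull it from whichever source you can reach, tag it as 192.168.1.192:5000/pause-amd64, then push).
    curl http://192.168.1.192:5000/v2/_catalog
    # should list hello-world-nginx (and, once pushed, pause-amd64)
    curl http://192.168.1.192:5000/v2/hello-world-nginx/tags/list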
    

    2.4 Configure the nodes

    node1 and node2 are configured almost the same way; only a few IP references need to change. node1 is used as the example here.

    The kubeconfig file
    cat /etc/kubernetes/kubeconfig
    apiVersion: v1
    clusters:
    - cluster:
        server: http://192.168.1.192:8080
      name: myk8s
    contexts:
    - context:
        cluster: myk8s
        user: ""
      name: myk8s-context
    current-context: myk8s-context
    kind: Config
    preferences: {}
    users: []
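    Writing the file by hand works, but the same kubeconfig can also be generated with kubectl config commands (a sketch; it assumes the kubectl binary has been copied to the node and produces an equivalent cluster/context pointing at the insecure apiserver port):
    kubectl config set-cluster myk8s --server=http://192.168.1.192:8080 --kubeconfig=/etc/kubernetes/kubeconfig
    kubectl config set-context myk8s-context --cluster=myk8s --kubeconfig=/etc/kubernetes/kubeconfig
    kubectl config use-context myk8s-context --kubeconfig=/etc/kubernetes/kubeconfig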
    
    Prepare the required binaries
    • Copy kubelet and kube-proxy from the master to all nodes
    cd /usr/src/kubernetes/server/bin/
    scp kubelet kube-proxy root@192.168.1.193:/root/local/bin
    scp kubelet kube-proxy root@192.168.1.194:/root/local/bin
    
    Configure kubelet
    • Configure /etc/kubernetes/kubelet
    cat /etc/kubernetes/kubelet
    ###
    # kubernetes kubelet (minion) config
     
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=192.168.1.193"
     
    # The port for the info server to serve on
    KUBELET_PORT="--port=10250"
     
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=node1"
     
    # location of the api-server
    ##KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
     
    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.1.192:5000/pause-amd64"
     
    # Add your own!
    KUBELET_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --fail-swap-on=false --cgroup-driver=systemd --kubeconfig=/etc/kubernetes/kubeconfig"
    

    Note: KUBELET_ARGS must include --cluster-dns and --cluster-domain, otherwise pods created by a deployment will stay in ContainerCreating forever and the events will show errors like: Warning MissingClusterDNS 13s (x9 over 12m) kubelet, node2 pod: "nginx-7ff779f954-j4t55_default(7dff9ffa-8afa-11e8-b3ac-000c290991f6)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
    If --cgroup-driver=systemd is used, you must also add --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice; the driver has to match what docker itself uses, which you can check as shown below.
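    A quick way to check which cgroup driver docker is running with:
    docker info 2>/dev/null | grep -i 'cgroup driver'
    # expected output on this setup: Cgroup Driver: systemd (matching --cgroup-driver=systemd above)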

    • Configure kubelet.service
    cd /usr/lib/systemd/system
    cat kubelet.service
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
     
    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    # Note $KUBELET_POD_INFRA_CONTAINER below: originally it was not listed here, and pod creation kept failing with the node's kubelet reporting timeouts while pulling Google's pause-amd64 image, even though the config file pointed at my private registry, so I added it to the startup arguments explicitly.
    # Important: after restarting kubelet, run `ps aux | grep kubelet` and confirm this parameter is really present.
    ExecStart=/root/local/bin/kubelet \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBELET_API_SERVER \
                $KUBELET_ADDRESS \
                $KUBELET_PORT \
                $KUBELET_HOSTNAME \
                $KUBE_ALLOW_PRIV \
                $KUBELET_ARGS \
                $KUBELET_POD_INFRA_CONTAINER
    Restart=on-failure
    KillMode=process
     
    [Install]
    WantedBy=multi-user.target
    
    Configure kube-proxy
    • Configure kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
     
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/root/local/bin/kube-proxy \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBE_MASTER \
                $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target
    
    • Configure /etc/kubernetes/proxy
    cat /etc/kubernetes/proxy
    ###
    # kubernetes proxy config
    # default config should be adequate
    # Add your own!
    KUBE_PROXY_ARGS=""
    
    Configure flannel

    flannel is installed on master, node1, and node2 in this setup.

    • Install
    yum -y install flannel
    
    • Edit /etc/sysconfig/flanneld (the file installed by the yum package and referenced by flanneld.service below)
    # Flanneld configuration options  
    
    # etcd url location. Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379"
    # etcd config key. This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/kube-centos/network"
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    

    mkdir -p /kube-centos/network   # not strictly required: FLANNEL_ETCD_PREFIX is an etcd key path, not a filesystem directory

    • Configure flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network.target
    After=network-online.target
    Wants=network-online.target
    After=etcd.service    # note: flanneld must start after etcd
    Before=docker.service # note: and before docker, so docker can pick up the flannel subnet
    
    [Service]
    Type=notify
    EnvironmentFile=/etc/sysconfig/flanneld
    EnvironmentFile=-/etc/sysconfig/docker-network
    ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
    ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    WantedBy=docker.service
    
    • Create the network configuration in etcd
    # Run on master (any node with etcd installed will do)
    [root@master ~]# mkdir -p /kube-centos/network   # again, not actually required for the etcd key below
    [root@master ~]# etcdctl --endpoints=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379 set /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
    
    • Start flanneld
    systemctl enable flanneld
    systemctl start flanneld
    

    After flanneld is started via systemctl, it writes the subnet assigned to this node into an environment file, and the ExecStartPost hook (mk-docker-opts.sh) generates the docker options file from it:

    • /run/flannel/subnet.env
    • /run/flannel/docker (the path passed to mk-docker-opts.sh via -d above)
      Docker will read these environment-variable files as container startup parameters; a sample of subnet.env is shown below.
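    The exact values differ per node; on the master in this setup subnet.env would look roughly like this (illustrative values, based on the 172.30.0.0/16 network configured above and the vxlan backend):
    cat /run/flannel/subnet.env
    FLANNEL_NETWORK=172.30.0.0/16
    FLANNEL_SUBNET=172.30.6.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false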
    Configure docker

    Add the following lines to /usr/lib/systemd/system/docker.service:

    EnvironmentFile=-/run/flannel/docker
    EnvironmentFile=-/run/flannel/subnet.env
    
    • The resulting docker.service file:
    [Unit]
    Description=Docker Application Container Engine
    Documentation=http://docs.docker.com
    After=network.target rhel-push-plugin.socket registries.service
    Wants=docker-storage-setup.service
    Requires=docker-cleanup.timer
    
    [Service]
    Type=notify
    NotifyAccess=all
    EnvironmentFile=-/run/containers/registries.conf
    EnvironmentFile=-/run/flannel/docker      # added
    EnvironmentFile=-/run/flannel/subnet.env  # added
    EnvironmentFile=-/etc/sysconfig/docker
    EnvironmentFile=-/etc/sysconfig/docker-storage
    EnvironmentFile=-/etc/sysconfig/docker-network
    Environment=GOTRACEBACK=crash
    Environment=DOCKER_HTTP_HOST_COMPAT=1
    Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
    ExecStart=/usr/bin/dockerd-current \
              --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
              --default-runtime=docker-runc \
              --exec-opt native.cgroupdriver=systemd \
              --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
              --init-path=/usr/libexec/docker/docker-init-current \
              --seccomp-profile=/etc/docker/seccomp.json \
              --insecure-registry=192.168.1.192:5000 \
              $OPTIONS \
              $DOCKER_STORAGE_OPTIONS \
              $DOCKER_NETWORK_OPTIONS \
              $ADD_REGISTRY \
              $BLOCK_REGISTRY \
              $INSECURE_REGISTRY \
              $REGISTRIES
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=1048576
    LimitNPROC=1048576
    LimitCORE=infinity
    TimeoutStartSec=0
    Restart=on-abnormal
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    
    Start the services
    systemctl daemon-reload
    for i in etcd flanneld docker kubelet kube-proxy
    do
        systemctl enable $i
        systemctl start $i
    done
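    After the services are up, a quick sanity check that docker really picked up the flannel subnet (interface names assume the vxlan backend configured above):
    ip -4 addr show flannel.1   # flannel's vxlan interface, e.g. 172.30.6.0/32 on the master
    ip -4 addr show docker0     # the docker bridge should sit inside the same /24, e.g. 172.30.6.1/24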
    
    Now query the contents stored in etcd:
    [root@master ~]# export ENDPOINTS=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379
    [root@master ~]# etcdctl --endpoints=${ENDPOINTS} ls /kube-centos/network/subnets
    /kube-centos/network/subnets/172.30.6.0-24
    /kube-centos/network/subnets/172.30.61.0-24
    /kube-centos/network/subnets/172.30.67.0-24
    [root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/config
    {"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}
    [root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/subnets/172.30.6.0-24
    {"PublicIP":"192.168.1.192","BackendType":"vxlan","BackendData":{"VtepMAC":"22:e4:55:e3:27:f6"}}
    [root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/subnets/172.30.61.0-24
    {"PublicIP":"192.168.1.193","BackendType":"vxlan","BackendData":{"VtepMAC":"c6:d3:c4:4b:d1:66"}}
    [root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/subnets/172.30.67.0-24
    {"PublicIP":"192.168.1.194","BackendType":"vxlan","BackendData":{"VtepMAC":"12:0c:91:23:08:83"}}
    # If you can see the information above, flannel is configured correctly
    

    2.5 Verification

    Run the following on the master.

    • Check node status
    [root@master ~]# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    node1 Ready <none> 22h v1.9.1  # Ready is what we want
    node2 Ready <none> 22h v1.9.1
    
    • Test the cluster
      Create an nginx service to check whether the cluster is usable.
    [root@master ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=192.168.1.192:5000/hello-world-nginx --port=80
    deployment "nginx" created
    [root@master ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
    service "example-service" exposed
    # Check the pod creation result
    [root@master ~]# kubectl get pods 
    NAME READY STATUS RESTARTS AGE
    nginx-7ff779f954-9sccd 1/1 Running 1 3h  # running. Note the READY column shows 1/1: a pod can contain multiple containers, and this pod has exactly one; multi-container pods will be covered in a later write-up
    nginx-7ff779f954-pbps4 1/1 Running 0 17m
    # Look at the deployment and the events from pod creation
    [root@master ~]# kubectl describe deployment nginx
    Name: nginx
    Namespace: default
    CreationTimestamp: Thu, 19 Jul 2018 14:28:16 +0800
    Labels: run=load-balancer-example
    Annotations: deployment.kubernetes.io/revision=1
    Selector: run=load-balancer-example
    Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable  # watch these numbers: if desired and available do not match, something is wrong
    StrategyType: RollingUpdate
    MinReadySeconds: 0
    RollingUpdateStrategy: 1 max unavailable, 1 max surge
    Pod Template:
      Labels: run=load-balancer-example
      Containers:
       nginx:
        Image: 192.168.1.192:5000/hello-world-nginx  # pulled from our own private registry
        Port: 80/TCP
        Environment: <none>
        Mounts: <none>
      Volumes: <none>
    Conditions:
      Type Status Reason
      ---- ------ ------
      Available True MinimumReplicasAvailable
    OldReplicaSets: <none>
    NewReplicaSet: nginx-7ff779f954 (2/2 replicas created)
    Events: <none>
    [root@master ~]# kubectl describe pod nginx-7ff779f954-pbps4
    Name: nginx-7ff779f954-pbps4
    Namespace: default
    Node: node1/192.168.1.193  # the first node
    Start Time: Thu, 19 Jul 2018 17:44:22 +0800
    Labels: pod-template-hash=3993359510
                    run=load-balancer-example
    Annotations: <none>
    Status: Running
    IP: 172.30.6.3
    Controlled By: ReplicaSet/nginx-7ff779f954
    Containers:
      nginx:
        Container ID: docker://df7230d3e002c53341fe559851a4d913b77aa9bff0f2d21c656ad1ed6f0bb86d
        Image: 192.168.1.192:5000/hello-world-nginx
        Image ID: docker-pullable://192.168.1.192:5000/hello-world-nginx@sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3
        Port: 80/TCP
        State: Running
          Started: Thu, 19 Jul 2018 17:45:55 +0800
        Ready: True
        Restart Count: 0
        Environment: <none>
        Mounts: <none>
    Conditions:
      Type Status
      Initialized True 
      Ready True 
      PodScheduled True 
    Volumes: <none>
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: <none>
    Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Normal Scheduled 21m default-scheduler Successfully assigned nginx-7ff779f954-pbps4 to node1
      Normal Pulling 19m (x4 over 21m) kubelet, node1 pulling image "192.168.1.192:5000/hello-world-nginx"  # pulling from our private registry
      Normal Pulled 19m kubelet, node1 Successfully pulled image "192.168.1.192:5000/hello-world-nginx"
      Normal Created 19m kubelet, node1 Created container  # container created
      Normal Started 19m kubelet, node1 Started container
    # Check the service and its ports
    [root@master ~]# kubectl describe svc example-service
    Name: example-service
    Namespace: default
    Labels: run=load-balancer-example
    Annotations: <none>
    Selector: run=load-balancer-example
    Type: NodePort
    IP: 10.254.72.252
    Port: <unset> 80/TCP
    TargetPort: 80/TCP
    NodePort: <unset> 30782/TCP  # we will access the service through this port below
    Endpoints: 172.30.6.2:80,172.30.6.3:80
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>
    # The following is run on any of the nodes
    [root@node1 ~]# netstat -tlunp | grep 30782
    tcp6 1 0 :::30782 :::* LISTEN 38917/kube-proxy   # the node is now listening on the port specified by the NodePort
    [root@node1 ~]# docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    df7230d3e002 192.168.1.192:5000/hello-world-nginx@sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3 "sh /start.sh" 17 hours ago Up 17 hours k8s_nginx_nginx-7ff779f954-pbps4_default_50953118-8b38-11e8-b3ac-000c290991f6_0
    b5e225cd6abf 192.168.1.192:5000/pause-amd64 "/pause" 17 hours ago Up 17 hours k8s_POD_nginx-7ff779f954-pbps4_default_50953118-8b38-11e8-b3ac-000c290991f6_0
    # Two containers are running: one is the pod infrastructure (pause) container, the other is the hello-world-nginx container that actually serves traffic
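    To confirm the service really answers, hit the NodePort (30782 in the describe output above) on any node's IP; the cluster IP also works from within the cluster:
    curl http://192.168.1.193:30782     # via node1's NodePort
    curl http://192.168.1.194:30782     # via node2's NodePort
    curl http://10.254.72.252:80        # via the service cluster IP, from any cluster node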
    

    As shown above, the content served inside the container is reachable through a node's IP. What we really want is that when a container dies, a replacement comes up right away and keeps serving traffic, so let's simply delete one pod.

    • Failure drill
    # From the output above we have two pods: nginx-7ff779f954-9sccd and nginx-7ff779f954-pbps4
    # Delete the nginx-7ff779f954-9sccd pod
    [root@master ~]# kubectl delete pod nginx-7ff779f954-9sccd
    pod "nginx-7ff779f954-9sccd" deleted
    [root@master ~]# kubectl get pods  # a new pod, nginx-7ff779f954-clxf9, was created automatically, and both pod IPs remain reachable
    NAME READY STATUS RESTARTS AGE
    nginx-7ff779f954-clxf9 1/1 Running 0 4m
    nginx-7ff779f954-pbps4 1/1 Running 0 34m
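    To watch the replacement being scheduled in real time, or to confirm the deployment is back at its desired replica count:
    kubectl get pods -w            # -w streams changes as the new pod comes up
    kubectl get deployment nginx   # DESIRED and AVAILABLE should both read 2 again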
    

    That's it for this experiment. The configuration of each component here is fairly basic and no certificates are used, so treat it as something to learn from rather than to run in production. As my understanding deepens I will add some hardening and tuning later.

    2.6 Some additional notes

    • Try stopping the hello-world-nginx container on one of the nodes (node1 here)
    [root@node1 ~]# docker stop df7230
    df7230
    [root@node1 ~]# docker ps # check again: a new container b06f2d805a8e has already been created
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
    b06f2d805a8e 192.168.1.192:5000/hello-world-nginx@sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3 "sh /start.sh" About an hour ago Up About an hour k8s_nginx_nginx-7ff779f954-pbps4_default_50953118-8b38-11e8-b3ac-000c290991f6_1
    b5e225cd6abf 192.168.1.192:5000/pause-amd64 "/pause" 20 hours ago Up 20 hours 
    [root@node1 ~]# journalctl -f -t kubelet # look at the kubelet log (excerpt)
    Jul 20 11:50:02 node1 kubelet[58163]: I0720 11:50:02.811077 58163 kuberuntime_manager.go:514] Container {Name:nginx Image:192.168.1.192:5000/hello-world-nginx Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.  # this is the restart event triggered by the stop (one issue: although it recovered, it took several minutes to do so...)
    Jul 20 11:50:02 node1 kubelet[58163]: I0720 11:50:02.811330 58163 kuberuntime_manager.go:758] checking backoff for container "nginx" in pod "nginx-7ff779f954-pbps4_default(50953118-8b38-11e8-b3ac-000c290991f6)"
    

    This experiment was exhausting. When I learned that tools like kubeadm and minikube can install a cluster for you, I nearly broke down. Still, building everything from scratch once gives a much better understanding of the whole Kubernetes workflow and how the modules relate to each other, and it makes adding plugins later easier. Looking forward to the next post on kube-dns.

  • Original post: https://www.cnblogs.com/zunwen/p/9497690.html