  Manually Installing Kubernetes on Ubuntu

    Background

      Two Ubuntu 16.04 servers, with IPs 192.168.56.160 and 192.168.56.161.
      Kubernetes version: 1.5.5
      Docker version: 1.12.6
      etcd version: 2.2.1
      flannel version: 0.5.6
      The 160 server serves as both the Kubernetes master and a node; the 161 server is a node only.
      The master runs: kube-apiserver, kube-controller-manager, kube-scheduler, and etcd.
      Each node runs: kubelet, kube-proxy, docker, and flannel.

    Downloads

    Kubernetes download

      Client binaries: https://dl.k8s.io/v1.5.5/kubernetes-client-linux-amd64.tar.gz
      Server binaries: https://dl.k8s.io/v1.5.5/kubernetes-server-linux-amd64.tar.gz
      My servers are linux/amd64; for other platforms, grab the matching build from the Kubernetes release page.
      From the extracted kubernetes directory, copy the executables from the server and client bundles (kube-apiserver, kube-controller-manager, kubectl, kubelet, kube-proxy, kube-scheduler) into /usr/bin/.
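
      A minimal sketch of the unpack-and-copy step (assuming both tarballs are in the current directory; paths follow the v1.5 tarball layout):

    $ tar xzf kubernetes-server-linux-amd64.tar.gz
    $ tar xzf kubernetes-client-linux-amd64.tar.gz
    $ sudo cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy} /usr/bin/
    $ sudo cp kubernetes/client/bin/kubectl /usr/bin/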

    etcd download

      etcd's GitHub release downloads are hosted on AWS S3, which my network cannot reach (or reaches only very slowly), so I used a domestic download mirror instead.
      Alternatively, you can build etcd from source to obtain the binaries.
      Copy the etcd and etcdctl executables to /usr/bin/.
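
      If you fetched a release tarball, the copy step looks like this (assuming the standard etcd v2.2.1 linux-amd64 archive name):

    $ tar xzf etcd-v2.2.1-linux-amd64.tar.gz
    $ sudo cp etcd-v2.2.1-linux-amd64/etcd etcd-v2.2.1-linux-amd64/etcdctl /usr/bin/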

    flannel download

      flannel, like etcd, is a CoreOS product, so its GitHub release downloads also live on AWS S3. Fortunately flannel is easy to build: clone it from GitHub and run the build script. The flanneld executable ends up in flannel's bin or dist directory (the location varies by version).

    $ git clone -b v0.5.6 https://github.com/coreos/flannel.git
    $ cd flannel
    $ ./build
    

      The exact build steps may differ between versions; consult the README.md in the flannel directory.
      Copy the flanneld executable to /usr/bin/.
      Create a /usr/bin/flannel directory and copy the mk-docker-opts.sh file from the dist directory into /usr/bin/flannel/, as sketched below.
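
      Taken together, from inside the flannel checkout (directory names as in the v0.5.x tree; adjust if your build placed flanneld elsewhere):

    $ sudo cp bin/flanneld /usr/bin/
    $ sudo mkdir -p /usr/bin/flannel
    $ sudo cp dist/mk-docker-opts.sh /usr/bin/flannel/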

    Kubernetes master configuration

    etcd configuration

    Create the data directory

    $ sudo mkdir -p /var/lib/etcd/
    

    Create the configuration directory and file

    $ sudo mkdir -p /etc/etcd/
    $ sudo vim /etc/etcd/etcd.conf
    
    ETCD_NAME=default
    ETCD_DATA_DIR="/var/lib/etcd/"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.160:2379"
    

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/etcd.service
    
    [Unit]
    Description=Etcd Server
    Documentation=https://github.com/coreos/etcd
    After=network.target
    
    
    [Service]
    User=root
    Type=notify
    EnvironmentFile=-/etc/etcd/etcd.conf
    ExecStart=/usr/bin/etcd
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=40000
    
    [Install]
    WantedBy=multi-user.target
    

    Start the service

    $ sudo systemctl daemon-reload 
    $ sudo systemctl enable etcd
    $ sudo systemctl start etcd
    

    Check the service and port

    $ sudo systemctl status etcd
    
    ● etcd.service - Etcd Server
       Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-03-27 11:19:35 CST; 7s ago
    ...
    

      Then check whether the port is open as expected.

    $ netstat -apn | grep 2379
    tcp6       0      0 :::2379                 :::*                    LISTEN      7211/etcd 
    
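      The etcd 2.x CLI can also confirm the member is healthy; the command below should report both the member and the cluster as healthy:

    $ etcdctl cluster-health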

    Create the network configuration in etcd

    $ etcdctl set /coreos.com/network/config '{ "Network": "192.168.4.0/24" }'
    
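      To double-check the key (etcd 2.x syntax; the stored value is echoed back):

    $ etcdctl get /coreos.com/network/config
    { "Network": "192.168.4.0/24" }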

      If you are deploying an etcd cluster, repeat the steps above on every etcd server. I am running a standalone instance here, so etcd is done.

    Kubernetes common configuration

    Create the Kubernetes configuration directory

    $ sudo mkdir /etc/kubernetes
    

    Kubernetes common configuration file

      The /etc/kubernetes/config file stores settings shared by all Kubernetes components.

    $ sudo vim /etc/kubernetes/config
    
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=0"
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    KUBE_MASTER="--master=http://192.168.56.160:8080"
    

    Configure the kube-apiserver service

      Performed on the Kubernetes master host.

    Create the kube-apiserver configuration file

      kube-apiserver's dedicated configuration file is /etc/kubernetes/apiserver. Note that this walkthrough reuses 192.168.4.0/24 as the --service-cluster-ip-range below, the same range given to flannel earlier; in a real deployment, pick a service range that does not overlap the pod network.

    $ sudo vim /etc/kubernetes/apiserver
    
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    #KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--port=8080"
    
    # Port minions listen on
    KUBELET_PORT="--kubelet-port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.160:2379"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.4.0/24"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
    
    # Add your own!
    KUBE_API_ARGS=""
    

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/kube-apiserver.service
    
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service
    Wants=etcd.service
    
    [Service]
    User=root
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/apiserver
    ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Configure the kube-controller-manager service

    Create the kube-controller-manager configuration file

      kube-controller-manager's dedicated configuration file is /etc/kubernetes/controller-manager.

    $ sudo vim /etc/kubernetes/controller-manager
    
    KUBE_CONTROLLER_MANAGER_ARGS=""
    

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/kube-controller-manager.service
    
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=etcd.service
    After=kube-apiserver.service
    Requires=etcd.service
    Requires=kube-apiserver.service
    
    [Service]
    User=root
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/controller-manager
    ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Configure the kube-scheduler service

    Create the kube-scheduler configuration file

      kube-scheduler's dedicated configuration file is /etc/kubernetes/scheduler.

    $ sudo vim /etc/kubernetes/scheduler
    
    KUBE_SCHEDULER_ARGS=""
    

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/kube-scheduler.service
    
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    User=root
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/scheduler
    ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_MASTER
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Start the Kubernetes master services

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
    $ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
    
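      With all three services up, kubectl can confirm the control plane components are healthy; the output should look roughly like this:

    $ kubectl get componentstatuses
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}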

    Kubernetes node configuration

      Kubernetes nodes also need the /etc/kubernetes/config file, with the same content as on the master. A sketch for copying the binaries and this file over follows.
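
      A minimal sketch for pushing what node 161 needs from the master (assuming SSH access to 161 as a sudo-capable user; adjust paths and user to taste):

    $ scp /usr/bin/kubelet /usr/bin/kube-proxy /usr/bin/flanneld 192.168.56.161:/tmp/
    $ scp /etc/kubernetes/config 192.168.56.161:/tmp/
    # then, on 161:
    $ sudo mv /tmp/kubelet /tmp/kube-proxy /tmp/flanneld /usr/bin/
    $ sudo mkdir -p /etc/kubernetes && sudo mv /tmp/config /etc/kubernetes/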

    flannel configuration

    Create the configuration file

    $ sudo vim /etc/default/flanneld.conf
    
    # Flanneld configuration options  
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://192.168.56.160:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/coreos.com/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    

      The FLANNEL_ETCD_PREFIX option points at the etcd key for the network we configured earlier.

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/flanneld.service
    
    [Unit]
    Description=Flanneld
    Documentation=https://github.com/coreos/flannel
    After=network.target
    After=etcd.service
    Before=docker.service
    
    [Service]
    User=root
    EnvironmentFile=/etc/default/flanneld.conf
    ExecStart=/usr/bin/flanneld \
            -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
            -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
            $FLANNEL_OPTIONS
    ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    RequiredBy=docker.service
    

    Start the service

    $ sudo systemctl daemon-reload 
    $ sudo systemctl enable flanneld
    $ sudo systemctl start flanneld
    

    Check that the service started

    $ sudo systemctl status flanneld
    ● flanneld.service - Flanneld
       Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-03-27 11:59:00 CST; 6min ago
    ...
    
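      flanneld records its subnet lease in /run/flannel/subnet.env, and the ExecStartPost step above converts it into docker options in /run/flannel/docker; both files are worth a quick look:

    $ cat /run/flannel/subnet.env
    $ cat /run/flannel/docker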

    docker configuration

    Install docker

      Install docker via apt.

    $ sudo apt -y install docker.io
    

    Apply the flannel network to docker

      Add a systemd drop-in for the docker service. Setting just the environment file should be enough here, since the docker.io package's unit already expands $DOCKER_OPTS in its ExecStart line.

    $ sudo mkdir /lib/systemd/system/docker.service.d
    $ sudo vim /lib/systemd/system/docker.service.d/flannel.conf
    
    [Service]
    EnvironmentFile=-/run/flannel/docker
    

      Restart the docker service.

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker
    

      Check whether docker picked up the flannel network options.

    $ sudo ps -ef | grep docker
    
    root     11285     1  1 15:14 ?        00:00:01 /usr/bin/dockerd -H fd:// --bip=192.168.4.129/25 --ip-masq=true --mtu=1472
    ...
    
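      The docker0 bridge itself should now hold an address from the flannel-assigned subnet (192.168.4.129/25 in the output above):

    $ ip addr show docker0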

    Configure the kubelet service

    Create the kubelet data directory

    $ sudo mkdir /var/lib/kubelet
    

    Create the kubelet configuration file

      kubelet's dedicated configuration file is /etc/kubernetes/kubelet. The values below are for the 161 node; on 160, set --hostname-override to 192.168.56.160.

    $ sudo vim /etc/kubernetes/kubelet
    
    KUBELET_ADDRESS="--address=127.0.0.1"
    KUBELET_HOSTNAME="--hostname-override=192.168.56.161"
    KUBELET_API_SERVER="--api-servers=http://192.168.56.160:8080"
    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"
    

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/kubelet.service
    
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    

    Start the kubelet service

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable kubelet
    $ sudo systemctl start kubelet
    

    Configure the kube-proxy service

    Create the kube-proxy configuration file

      kube-proxy's dedicated configuration file is /etc/kubernetes/proxy.

    $ sudo vim /etc/kubernetes/proxy
    
    # kubernetes proxy config
    # default config should be adequate
    # Add your own!
    KUBE_PROXY_ARGS=""
    

    Create the systemd unit file

    $ sudo vim /lib/systemd/system/kube-proxy.service
    
    [Unit]
    Description=Kubernetes Proxy
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Start the kube-proxy service

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable kube-proxy
    $ sudo systemctl start kube-proxy
    

    Check node status

      Run kubectl get node to check node status. When every node shows Ready, the nodes have successfully joined the master; if not, log in to the node in question and investigate, for example with journalctl -u kubelet.service to read the kubelet service logs.

    $ kubectl get node
    NAME             STATUS     AGE
    192.168.56.160   Ready      2d
    192.168.56.161   Ready      2d
    

    Testing Kubernetes

      Verify that Kubernetes was installed successfully.

    Write the YAML file

      On the Kubernetes master, create a YAML file (rc_nginx.yaml below) that defines an nginx ReplicationController.

    $ vim rc_nginx.yaml
    
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
      labels:
        name: nginx
    spec:
      replicas: 2
      selector:
        name: nginx
      template:
        metadata:
          labels:
            name: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
    

    Create the pods

      Run kubectl create to create the ReplicationController. It requests two replicas and the environment has two Kubernetes nodes, so one Pod should be scheduled on each node.
      Note: this step may take quite a while, because it pulls the nginx image as well as the essential pod-infrastructure image over the network.

    $ kubectl create -f rc_nginx.yaml
    

    Check the status

      Run kubectl get rc and kubectl get pod to check the ReplicationController and pod status. Pods may initially sit in ContainerCreating; once the required images finish downloading, the containers are created and the pods should reach the Running state.

    $ kubectl get rc
    NAME      DESIRED   CURRENT   READY     AGE
    nginx     2         2         2         5m
    
    $ kubectl get pod -o wide
    NAME          READY     STATUS    RESTARTS   AGE       IP              NODE
    nginx-1j5x4   1/1       Running   0          5m        192.168.4.130   192.168.56.160
    nginx-6bd28   1/1       Running   0          5m        192.168.4.130   192.168.56.161
    
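      As a final smoke test, fetch the nginx welcome page through a pod IP from either node; flannel makes the pod network routable across hosts:

    $ curl http://192.168.4.130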

      All done!!!
