    Kubernetes 1.9.0 Cluster: Complete Installation and Deployment

    I. Environment Preparation

    1. Prepare three virtual machines with the details below. Configure the root account and install Docker on each; for the Docker installation procedure see https://www.cnblogs.com/liangyuntao-ts/p/10657009.html

    OS         IP address          Role      CPU    Memory    Hostname
    centos7    192.168.100.101     worker    1      2G        work01
    centos7    192.168.100.102     master    1      2G        master
    centos7    192.168.100.103     worker    1      2G        work02
    

    2. Start Docker on all three servers

    [root@server02 ~]# systemctl start docker
    [root@server02 ~]# systemctl enable docker
    [root@server02 ~]# docker version
    Client:
     Version:           18.09.6
     API version:       1.39
     Go version:        go1.10.8
     Git commit:        481bc77156
     Built:             Sat May  4 02:34:58 2019
     OS/Arch:           linux/amd64
     Experimental:      false
    
    Server: Docker Engine - Community
     Engine:
      Version:          18.09.6
      API version:      1.39 (minimum version 1.12)
      Go version:       go1.10.8
      Git commit:       481bc77
      Built:            Sat May  4 02:02:43 2019
      OS/Arch:          linux/amd64
      Experimental:     false
    

    3. System settings: disable the firewall and SELinux, enable IP forwarding, and let iptables see bridged traffic

    [root@server02 ~]# systemctl stop firewalld
    [root@server02 ~]# systemctl disable firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    
    [root@server02 ~]# systemctl status firewalld
    ● firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead) since 六 2019-05-18 12:35:54 CST; 55s ago
         Docs: man:firewalld(1)
     Main PID: 525 (code=exited, status=0/SUCCESS)
    
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
    5月 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
    5月 18 12:35:51 server02 systemd[1]: Stopping firewalld - dynamic firewall daemon...
    5月 18 12:35:54 server02 systemd[1]: Stopped firewalld - dynamic firewall daemon.
    Hint: Some lines were ellipsized, use -l to show in full.
    
    # Write the sysctl configuration file
    [root@server02 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
     
    # Apply the configuration file
    [root@server02 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
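
    If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A minimal fix, assuming a stock CentOS 7 kernel:

    # load the bridge netfilter module, then re-apply the settings
    [root@server02 ~]# modprobe br_netfilter
    [root@server02 ~]# sysctl -p /etc/sysctl.d/k8s.conf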
    

    4. Configure the hosts file

    # Configure /etc/hosts so that every node can resolve the others by name
    [root@server02 ~]# vi /etc/hosts
    # Add the following entries (replace the IP addresses and server names with your own)
    192.168.100.101 server01
    192.168.100.102 master
    192.168.100.103 server02
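
    A quick sanity check that the names now resolve (the hostnames here follow the /etc/hosts entries above):

    # each node should be reachable by name
    [root@server02 ~]# ping -c 1 master
    [root@server02 ~]# ping -c 1 server01
    [root@server02 ~]# ping -c 1 server02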
    

    5. Prepare the binaries

    Download link:

    Link: https://pan.baidu.com/s/13izNNZ3Bkem61Zemhkj8gQ
    Extraction code: 0ykv

    After the download completes, upload the file to the /home directory on every server.


    6. Prepare the configuration files

    # Clone the project into the /home directory
    [root@server02 ~]# cd /home
    [root@server02 ~]# git clone https://github.com/liuyi01/kubernetes-starter.git
    # Take a look at the repository contents
    [root@server02 home]# cd /home/kubernetes-starter && ls
    

    7. Edit the config file to generate configuration suited to your environment; this must be done on all three servers

    [root@server02 home]# vim kubernetes-starter/config.properties
    # Directory holding the kubernetes binaries, e.g. /home/michael/bin
    BIN_PATH=/home/bin
    
    # IP of the current node, e.g. 192.168.1.102
    NODE_IP=192.168.100.102
    
    # etcd endpoint list, e.g. http://192.168.1.102:2379
    # If an etcd cluster already exists, use its endpoints; otherwise use http://${MASTER_IP}:2379 (replace MASTER_IP with your own master node IP)
    ## If certificates are used, it must be https://${MASTER_IP}:2379 (replace MASTER_IP with your own master node IP)
    ETCD_ENDPOINTS=http://192.168.100.102:2379
    
    # IP address of the kubernetes master node, e.g. 192.168.1.102
    MASTER_IP=192.168.100.102
    
    #### Adjust these values to match your own environment
    
    [root@server02 home]# (cd kubernetes-starter && ./gen-config.sh simple)
    
    [root@server02 home]# mv kubernetes-bins/ bin    # rename the extracted folder to /home/bin, then add this path to the PATH environment variable
    
    [root@server02 home]# vi ~/.bash_profile
    
    PATH=$PATH:/home/bin
    
    [root@server02 home]# export PATH=$PATH:/home/bin    # set the environment variable for the current shell
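
    To pick up the new PATH and confirm the binaries are found (a minimal check; it assumes kubectl and etcd are among the binaries unpacked into /home/bin):

    [root@server02 home]# source ~/.bash_profile
    [root@server02 home]# which kubectl etcd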
    

    II. Basic Service Deployment

    1. Deploy etcd. The binary is already in place; now register it as a systemd service and start it (on the master node)

    # Copy the service unit file into the systemd directory
    [root@server02 ~]# cp /home/kubernetes-starter/target/master-node/etcd.service /lib/systemd/system/
    # Enable the service
    [root@server02 ~]# systemctl enable etcd.service
    # Create the working directory (where etcd stores its data)
    [root@server02 ~]# mkdir -p /var/lib/etcd
    # Start the service
    [root@server02 ~]# systemctl start etcd
    # Check the service log for errors to make sure the service is healthy
    [root@server02 ~]# journalctl -f -u etcd.service
    5月 18 12:17:31 server02 etcd[2179]: dialing to target with scheme: ""
    5月 18 12:17:31 server02 etcd[2179]: could not get resolver for scheme: ""
    5月 18 12:17:31 server02 etcd[2179]: serving insecure client requests on 192.168.100.102:2379, this is strongly discouraged!
    5月 18 12:17:31 server02 etcd[2179]: ready to serve client requests
    5月 18 12:17:31 server02 etcd[2179]: dialing to target with scheme: ""
    5月 18 12:17:31 server02 etcd[2179]: could not get resolver for scheme: ""
    5月 18 12:17:31 server02 etcd[2179]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    5月 18 12:17:31 server02 etcd[2179]: set the initial cluster version to 3.2
    5月 18 12:17:31 server02 etcd[2179]: enabled capabilities for version 3.2
    5月 18 12:17:31 server02 systemd[1]: Started Etcd Server.
    
    #### etcd started successfully
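
    Besides the journal, etcd health can be checked directly with etcdctl (assumed to be among the binaries in /home/bin; the endpoint matches the ETCD_ENDPOINTS value configured earlier):

    [root@server02 ~]# etcdctl --endpoints=http://192.168.100.102:2379 cluster-health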
    

    2. Deploy the API Server (master node)

    Overview:

    kube-apiserver is one of the most important core components of Kubernetes. It mainly provides:

    • The REST API for cluster management, including authentication and authorization (not used at this stage), data validation, and cluster state changes
    • The hub for data exchange and communication between the other components (other components query or modify data through the API Server; only the API Server talks to etcd directly)

    [root@server02 ~]# cd /home/
    [root@server02 home]# cp kubernetes-starter/target/master-node/kube-apiserver.service /lib/systemd/system/
    [root@server02 home]# systemctl enable kube-apiserver.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    [root@server02 home]# systemctl start kube-apiserver
    [root@server02 home]# journalctl -f -u kube-apiserver
    -- Logs begin at 六 2019-05-18 11:47:54 CST. --
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.688480    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.certificates.k8s.io/status: (46.900994ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.691365    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.policy/status: (40.847972ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.692039    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.storage.k8s.io/status: (41.81334ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.703752    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.rbac.authorization.k8s.io/status: (11.64213ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.704980    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.networking.k8s.io/status: (13.967816ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.710226    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.rbac.authorization.k8s.io/status: (5.19179ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.710252    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.storage.k8s.io/status: (5.695826ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.559583    2333 wrap.go:42] GET /api/v1/namespaces/default: (4.524421ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.563896    2333 wrap.go:42] GET /api/v1/namespaces/default/services/kubernetes: (2.544183ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    5月 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.566296    2333 wrap.go:42] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.280719ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
    

    #### All log lines are informational; no errors

    Check that the ports are listening:
    [root@server02 home]# netstat -ntlp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 192.168.100.102:2379    0.0.0.0:*               LISTEN      2179/etcd           
    tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2179/etcd           
    tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      2179/etcd           
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      836/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1069/master         
    tcp6       0      0 :::6443                 :::*                    LISTEN      2333/kube-apiserver 
    tcp6       0      0 :::8080                 :::*                    LISTEN      2333/kube-apiserver 
    tcp6       0      0 :::22                   :::*                    LISTEN      836/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1069/master
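
    With port 8080 listening, the API server's health endpoint can be queried directly over the insecure port (no authentication is configured at this stage):

    # should return: ok
    [root@server02 home]# curl http://127.0.0.1:8080/healthz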
    

    3. Deploy the Controller Manager (master node)

    The Controller Manager consists of kube-controller-manager and cloud-controller-manager and is the brain of Kubernetes: it watches the state of the whole cluster through the apiserver and keeps the cluster in its desired state. kube-controller-manager is composed of a series of controllers, such as the Replication Controller for replica management, the Node Controller for node management, the Deployment Controller for deployments, and so on. cloud-controller-manager is only needed when a Cloud Provider is enabled in Kubernetes, to integrate with the cloud vendor's control plane.

    [root@server02 home]# cp kubernetes-starter/target/master-node/kube-controller-manager.service /lib/systemd/system/
    [root@server02 home]# systemctl enable kube-controller-manager.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@server02 home]# systemctl start kube-controller-manager.service
    [root@server02 home]# journalctl -f -u kube-controller-manager
    -- Logs begin at 六 2019-05-18 11:47:54 CST. --
    5月 18 13:26:11 server02 systemd[1]: Started Kubernetes Controller Manager.
    5月 18 13:26:11 server02 systemd[1]: Starting Kubernetes Controller Manager...
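
    kube-controller-manager also exposes a health endpoint, by default on port 10252 in this version (assuming the service file does not change it):

    [root@server02 home]# curl http://127.0.0.1:10252/healthz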
    

    4. Deploy the Scheduler (master node)

    kube-scheduler assigns Pods to nodes in the cluster. It watches kube-apiserver for Pods that have not yet been bound to a node and assigns nodes to them according to the scheduling policy. The various Kubernetes scheduling policies mentioned earlier are implemented here.

    [root@server02 home]# cp kubernetes-starter/target/master-node/kube-scheduler.service /lib/systemd/system/
    [root@server02 home]# systemctl enable kube-schedule.service
    Failed to execute operation: No such file or directory
    [root@server02 home]# systemctl enable kube-scheduler.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    [root@server02 home]# systemctl start kube-scheduler.service
    [root@server02 home]# journalctl -f -u kube-scheduler
    -- Logs begin at 六 2019-05-18 11:47:54 CST. --
    5月 18 13:29:09 server02 systemd[1]: Starting Kubernetes Scheduler...
    5月 18 13:29:10 server02 kube-scheduler[2430]: W0518 13:29:10.675474    2430 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
    5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.728026    2430 server.go:551] Version: v1.9.0
    5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.729972    2430 factory.go:837] Creating scheduler from algorithm provider 'DefaultProvider'
    5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.730027    2430 factory.go:898] Creating scheduler with fit predicates 'map[MaxAzureDiskVolumeCount:{} NoDiskConflict:{} CheckNodeMemoryPressure:{} NoVolumeZoneConflict:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} MatchInterPodAffinity:{} GeneralPredicates:{} CheckNodeDiskPressure:{} CheckNodeCondition:{} PodToleratesNodeTaints:{} CheckVolumeBinding:{}]' and priority functions 'map[SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'
    5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.730923    2430 server.go:570] starting healthz server on 127.0.0.1:10251
    5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.642607    2430 controller_utils.go:1019] Waiting for caches to sync for scheduler controller
    5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.743117    2430 controller_utils.go:1026] Caches are synced for scheduler controller
    5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.766782    2430 leaderelection.go:174] attempting to acquire leader lease...
    5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.786299    2430 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
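
    The log above shows the healthz server starting on 127.0.0.1:10251, so the scheduler can be checked the same way:

    [root@server02 home]# curl http://127.0.0.1:10251/healthz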
    

    5. Deploy the Calico node (all nodes)

    Calico implements the CNI interface and is one of the networking options for Kubernetes. It is a pure layer-3 data center networking solution (no overlay required) and integrates well with IaaS and container platforms such as OpenStack, Kubernetes, AWS, and GCE. On every compute node Calico uses the Linux kernel to implement an efficient vRouter for packet forwarding, and each vRouter propagates the routes of the workloads running on it across the Calico network via BGP: small deployments can peer directly, while large deployments can use designated BGP route reflectors. The result is that all traffic between workloads is forwarded by plain IP routing.

    Calico is run as a systemd service that launches a Docker container

    [root@server02 home]# cp kubernetes-starter/target/all-node/kube-calico.service /lib/systemd/system/
    [root@server02 home]# systemctl enable kube-calico.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-calico.service to /usr/lib/systemd/system/kube-calico.service.
    [root@server02 home]# systemctl start kube-calico
    [root@server02 home]# journalctl -f -u kube-calico
    -- Logs begin at 六 2019-05-18 11:47:54 CST. --
    5月 18 13:31:56 server02 systemd[1]: Started calico node.
    5月 18 13:31:56 server02 systemd[1]: Starting calico node...
    5月 18 13:31:59 server02 docker[2462]: Unable to find image 'registry.cn-hangzhou.aliyuncs.com/imooc/calico-node:v2.6.2' locally
    5月 18 13:32:12 server02 docker[2462]: v2.6.2: Pulling from imooc/calico-node
    5月 18 13:32:12 server02 docker[2462]: 6d987f6f4279: Pulling fs layer
    5月 18 13:32:12 server02 docker[2462]: 451e44d240b0: Pulling fs layer
    5月 18 13:32:12 server02 docker[2462]: 564d30bd7dc2: Pulling fs layer
    5月 18 13:32:12 server02 docker[2462]: 39b8f29b8ec9: Pulling fs layer
    5月 18 13:32:12 server02 docker[2462]: cd8e6a6bdbfe: Pulling fs layer
    5月 18 13:32:12 server02 docker[2462]: 39b8f29b8ec9: Waiting
    5月 18 13:32:12 server02 docker[2462]: cd8e6a6bdbfe: Waiting
    5月 18 13:32:13 server02 docker[2462]: 564d30bd7dc2: Verifying Checksum
    5月 18 13:32:13 server02 docker[2462]: 564d30bd7dc2: Download complete
    5月 18 13:32:15 server02 docker[2462]: 6d987f6f4279: Verifying Checksum
    5月 18 13:32:15 server02 docker[2462]: 6d987f6f4279: Download complete
    5月 18 13:32:16 server02 docker[2462]: 451e44d240b0: Verifying Checksum
    5月 18 13:32:16 server02 docker[2462]: 451e44d240b0: Download complete
    5月 18 13:32:19 server02 docker[2462]: 6d987f6f4279: Pull complete
    5月 18 13:32:20 server02 docker[2462]: 39b8f29b8ec9: Verifying Checksum
    5月 18 13:32:20 server02 docker[2462]: 39b8f29b8ec9: Download complete
    5月 18 13:32:20 server02 docker[2462]: 451e44d240b0: Pull complete
    5月 18 13:32:21 server02 docker[2462]: 564d30bd7dc2: Pull complete
    5月 18 13:32:21 server02 docker[2462]: 39b8f29b8ec9: Pull complete
    5月 18 13:32:26 server02 docker[2462]: cd8e6a6bdbfe: Verifying Checksum
    5月 18 13:32:26 server02 docker[2462]: cd8e6a6bdbfe: Download complete
    

    The first start has to pull the image from the remote registry, which takes a while. Once the download finishes, verify that Calico is running:

    [root@server02 home]# docker ps
    CONTAINER ID        IMAGE                                                        COMMAND             CREATED             STATUS              PORTS               NAMES
    f0722de926f6        registry.cn-hangzhou.aliyuncs.com/imooc/calico-node:v2.6.2   "start_runit"       49 seconds ago      Up 42 seconds 
    

    Check the node status; every peer node should have established a connection:

    [root@server02 kubernetes-starter]# calicoctl node status
    Calico process is running.
    
    IPv4 BGP status
    +-----------------+-------------------+-------+----------+--------+
    |  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |  INFO  |
    +-----------------+-------------------+-------+----------+--------+
    | 192.168.100.103 | node-to-node mesh | start | 05:57:38 | Active |
    | 192.168.100.101 | node-to-node mesh | start | 05:57:38 | Active |
    +-----------------+-------------------+-------+----------+--------+
    
    IPv6 BGP status
    No IPv6 peers found.

    A note on an error I hit: the Calico network on a worker node would not start because its IP address was already taken. Set the hostnames properly before deploying, otherwise conflicts are easy to run into later: etcd records the IP associated with each hostname, and although that mapping can normally be corrected, in this environment the etcd record could not be changed.

    6. Configure the kubectl command (any node; here it is also done on the master)

    kubectl is the Kubernetes command-line tool and an essential management tool for Kubernetes users and administrators. It provides a large number of subcommands for managing all aspects of a Kubernetes cluster.

    Set the api-server address and the context

    # Set the apiserver address (replace the IP with your own api-server address)
    [root@server02 home]# kubectl config set-cluster kubernetes  --server=http://192.168.100.102:8080
    Cluster "kubernetes" set.
    
    # Create a context that points at the cluster
    [root@server02 home]# kubectl config set-context kubernetes --cluster=kubernetes
    Context "kubernetes" created.
    # Select it as the default context
    [root@server02 home]# kubectl config use-context kubernetes
    Switched to context "kubernetes".
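
    With the context in place, a quick check that kubectl can reach the API server and that the master components report healthy:

    [root@server02 home]# kubectl get componentstatuses
    [root@server02 home]# kubectl cluster-info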
    

    7. Configure the kubelet (worker nodes)

    Every worker node runs a kubelet process, listening on port 10250 by default. It receives and executes instructions from the master and manages the Pods and containers on the node. Each kubelet registers its node with the API Server, periodically reports the node's resource usage to the master, and monitors node and container resources via cAdvisor.

    # Make sure the required directories exist
    [root@server01 home]# mkdir -p /var/lib/kubelet
    [root@server01 home]# mkdir -p /etc/kubernetes
    [root@server01 home]# mkdir -p /etc/cni/net.d
    
    # Copy the kubelet service unit file
    [root@server01 home]# cp kubernetes-starter/target/worker-node/kubelet.service /lib/systemd/system
    # Copy the kubeconfig file the kubelet depends on
    [root@server01 home]# cp kubernetes-starter/target/worker-node/kubelet.kubeconfig /etc/kubernetes/
    # Copy the CNI plugin configuration used by the kubelet
    [root@server01 home]# cp kubernetes-starter/target/worker-node/10-calico.conf /etc/cni/net.d/
    
    [root@server01 home]# systemctl enable kubelet.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@server01 home]# systemctl start kubelet
    [root@server01 kubernetes-starter]# journalctl -f -u kubelet
    -- Logs begin at 六 2019-05-18 11:47:58 CST. --
    5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.886468    8077 manager.go:1178] Started watching for new ooms in manager
    5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.939249    8077 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
    5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976545    8077 kubelet_node_status.go:431] Recording NodeHasSufficientDisk event message for node 192.168.100.101
    5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976684    8077 kubelet_node_status.go:431] Recording NodeHasSufficientMemory event message for node 192.168.100.101
    5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976713    8077 kubelet_node_status.go:431] Recording NodeHasNoDiskPressure event message for node 192.168.100.101
    5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976736    8077 kubelet_node_status.go:82] Attempting to register node 192.168.100.101
    5月 18 14:18:00 server01 kubelet[8077]: I0518 14:18:00.385257    8077 manager.go:329] Starting recovery of all containers
    5月 18 14:18:00 server01 kubelet[8077]: I0518 14:18:00.773129    8077 kubelet_node_status.go:85] Successfully registered node 192.168.100.101
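
    The log shows the node registering itself; from the master (or any node with kubectl configured) the new worker should now appear in the node list:

    [root@server02 home]# kubectl get nodes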
    

    8. Add Service support to the cluster with kube-proxy (worker nodes)

    # Make sure the working directory exists
    [root@server03 kubernetes-starter]# mkdir -p /var/lib/kube-proxy
    # Copy the kube-proxy service unit file
    [root@server03 kubernetes-starter]# cp target/worker-node/kube-proxy.service /lib/systemd/system/
    # Copy the kubeconfig file kube-proxy depends on
    [root@server03 kubernetes-starter]# cp target/worker-node/kube-proxy.kubeconfig /etc/kubernetes/
    
    [root@server03 kubernetes-starter]# systemctl enable kube-proxy.service
    [root@server03 kubernetes-starter]# systemctl start kube-proxy
    [root@server03 kubernetes-starter]# journalctl -f -u kube-proxy
    -- Logs begin at 四 2019-05-16 13:20:42 CST. --
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.425552   28949 conntrack.go:83] Setting conntrack hashsize to 32768
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.426442   28949 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.426524   28949 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428522   28949 config.go:202] Starting service config controller
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428558   28949 controller_utils.go:1019] Waiting for caches to sync for service config controller
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428613   28949 config.go:102] Starting endpoints config controller
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428622   28949 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.529703   28949 controller_utils.go:1026] Caches are synced for endpoints config controller
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.529834   28949 controller_utils.go:1026] Caches are synced for service config controller
    5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.530025   28949 proxier.go:329] Adding new service port "default/kubernetes:https" at 10.68.0.1:443/TCP
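
    Assuming the default iptables proxy mode, kube-proxy programs KUBE-* chains for each Service; a quick way to confirm that rules are being written on the worker:

    [root@server03 kubernetes-starter]# iptables-save | grep KUBE-SERVICES | head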
    

    9. Deploy kube-dns (master node)

    Kube-dns provides name resolution for the Kubernetes cluster: it resolves cluster Service names and Pod hostnames so that Pods can reach in-cluster services by name. It works by adding DNS A records: a normal Service resolves to its service IP, while a headless Service resolves to the list of its Pod IPs.

    # Run from the kubernetes-starter directory
    $ kubectl create -f target/services/kube-dns.yaml
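
    A short smoke test can confirm that scheduling, networking, and DNS work end to end (the test pods and deployment are deleted again in the next section; the image and names here are only examples):

    # create a test deployment and expose it as a service
    $ kubectl run nginx --image=nginx:alpine --replicas=2 --port=80
    $ kubectl expose deployment nginx --port=80
    $ kubectl get pods -o wide
    # resolve the service name from inside a throwaway pod
    $ kubectl run busybox --rm -it --image=busybox --restart=Never -- nslookup nginx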
    

    III. Cluster Authentication and Authorization

    1. First delete the test pods and deployments and stop the services (all nodes)

    # Stop the services on the worker nodes
    [root@server02 ~]# service kubelet stop && rm -fr /var/lib/kubelet/*
    [root@server02 ~]# service kube-proxy stop && rm -fr /var/lib/kube-proxy/*
    [root@server02 ~]# service kube-calico stop
    
    # Stop the services on the master node
    [root@server02 ~]# systemctl stop kube-calico
    [root@server02 ~]# systemctl stop kube-scheduler
    [root@server02 ~]# systemctl stop kube-controller-manager
    [root@server02 ~]# systemctl stop kube-apiserver
    [root@server02 ~]# systemctl stop etcd && rm -fr /var/lib/etcd/*

    2. Regenerate the configuration (all nodes)

    $ cd ~/kubernetes-starter
    # Edit the configuration following the hints in the file
    $ vi config.properties
    # Generate the configuration
    $ ./gen-config.sh with-ca
    

    3. Install cfssl (all nodes)

    [root@server01 ~]# cd /usr/local/bin/
    [root@server01 bin]# wget --no-check-certificate https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    [root@server01 bin]# wget --no-check-certificate https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    [root@server01 bin]# chmod +x cfssl*
    # Verify
    [root@server01 bin]# cfssl version

    4. Generate the root certificate (master node)

    # Everything certificate-related lives here
    [root@server02 ~]# mkdir -p /etc/kubernetes/ca
    # Prepare the configuration files for certificate generation
    [root@server02 ~]# cp /home/kubernetes-starter/target/ca/ca-config.json /etc/kubernetes/ca
    [root@server02 ~]# cp /home/kubernetes-starter/target/ca/ca-csr.json /etc/kubernetes/ca
    # Generate the certificate and private key
    [root@server02 ~]# cd /etc/kubernetes/ca
    [root@server02 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    # The following files are produced (the ones we ultimately need are ca-key.pem and ca.pem: a private key and a certificate)
    [root@server02 ~]# ls
    ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
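
    The generated certificate can be inspected to confirm the CN and expiry picked up from ca-csr.json (cfssl's certinfo subcommand, or openssl x509, both work):

    [root@server02 ca]# cfssl certinfo -cert ca.pem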
    

      

    This is an original article by the author; please credit the source when reposting.