  • Setting up a basic Kubernetes cluster

    1. First, prepare three machines running CentOS 7

    My machines are:

    10.0.0.11   k8s-master

    10.0.0.12   k8s-node-1

    10.0.0.13   k8s-node-2

    2. Turn off the firewall and SELinux enforcement on all three machines

    systemctl stop firewalld

    systemctl disable firewalld.service

    setenforce 0
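
    Note that setenforce 0 only switches SELinux off until the next reboot. To make the change survive reboots you can also edit /etc/selinux/config; this step is an addition to the original walkthrough:

    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    getenforce    # should now report Permissive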

    3. Edit the hosts file on all three machines

    [root@k8s-master ~]# vim /etc/hosts
    
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.0.0.11 k8s-master
    10.0.0.11 etcd
    10.0.0.11 registry
    10.0.0.12 k8s-node-1
    10.0.0.13 k8s-node-2
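
    To keep the three machines consistent, you can copy the same hosts file to both nodes and confirm the etcd alias resolves (the scp step assumes root SSH access between the machines and is an addition to the original steps):

    [root@k8s-master ~]# scp /etc/hosts root@10.0.0.12:/etc/hosts
    [root@k8s-master ~]# scp /etc/hosts root@10.0.0.13:/etc/hosts
    [root@k8s-master ~]# ping -c 1 etcd    # should answer from 10.0.0.11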
    

    Etcd is a highly available key-value store used mainly for shared configuration and service discovery. It replicates its log with the Raft consensus algorithm to guarantee strong consistency, so you can think of it as a highly available, strongly consistent store for service discovery.

    In a Kubernetes cluster, etcd is used mainly for configuration sharing and service discovery.

    What etcd chiefly addresses is data consistency in distributed systems. Data in a distributed system falls into control data and application data; etcd is designed for control data, though it can also handle small amounts of application data.
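
    As a small, hedged illustration of service discovery on etcd's v2 API (the /services/web key layout below is invented for the example): an instance registers itself under a directory with a TTL and refreshes the key while it is alive, and consumers simply list the directory.

    # register an instance; the key expires if it is not refreshed within 30s
    etcdctl set /services/web/10.0.0.12 '{"host":"10.0.0.12","port":80}' --ttl 30
    # a consumer discovers every live instance
    etcdctl ls /services/web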

    4. Install etcd on the master node

    [root@localhost ~]# yum install etcd -y
    

    4.1 Edit the etcd configuration file, /etc/etcd/etcd.conf

    #[Member]
    #ETCD_CORS=""
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001" 
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    ETCD_NAME="master"
    #ETCD_SNAPSHOT_COUNT="100000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #ETCD_QUOTA_BACKEND_BYTES="0"
    #ETCD_MAX_REQUEST_BYTES="1572864"
    #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
    #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
    #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
    #
    #[Clustering]
    #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
    ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
    #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    #ETCD_INITIAL_CLUSTER_STATE="new"
    #ETCD_STRICT_RECONFIG_CHECK="true"
    #ETCD_ENABLE_V2="true"
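
    A quick way to double-check which settings are actually in effect after editing (added here as a convenience; the output follows from the file above):

    [root@k8s-master ~]# grep -Ev '^#|^$' /etc/etcd/etcd.conf
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
    ETCD_NAME="master"
    ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"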
    

    4.2 Enable and start the etcd service, then test it

    [root@localhost ~]# systemctl enable etcd
    [root@localhost ~]# systemctl start etcd
    [root@localhost ~]# etcdctl set testdir/testkey0 0
    0
    [root@localhost ~]# etcdctl get testdir/testkey0 
    0
    [root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
    cluster is healthy
    [root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
    cluster is healthy
    

    5. Install Docker on the master and node machines, then enable and start the Docker service

    yum -y install docker
    systemctl enable docker
    systemctl restart docker
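
    The hosts file from step 3 reserves a registry alias on the master, but the original steps never start one. If you want a local image registry there, a minimal sketch (the registry:2 image and port 5000 are conventional choices, not something this walkthrough depends on):

    [root@k8s-master ~]# docker run -d --name registry --restart=always -p 5000:5000 registry:2
    [root@k8s-master ~]# curl http://registry:5000/v2/_catalog
    {"repositories":[]}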
    

    6. Install Kubernetes on the master and node machines

    yum -y install kubernetes
    

    6.1 Edit the apiserver configuration file on the master node

    [root@k8s-master ~]# vim /etc/kubernetes/apiserver 
    
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--port=8080"
    
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    
    # Add your own!
    KUBE_API_ARGS=""
    

    6.2 Edit the Kubernetes config file on the master node

    [root@k8s-master ~]# vim /etc/kubernetes/config 
    
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://10.0.0.11:8080"
    

    6.3 Enable and start the services on the master node

    [root@localhost ~]# systemctl enable kube-apiserver.service
    [root@localhost ~]# systemctl restart kube-apiserver.service
    [root@localhost ~]# systemctl enable kube-controller-manager.service
    [root@localhost ~]# systemctl restart kube-controller-manager.service
    [root@localhost ~]# systemctl enable kube-scheduler.service
    [root@localhost ~]# systemctl restart kube-scheduler.service
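
    Once all three services are up, you can sanity-check the control plane before configuring the nodes (these are standard checks; the output shown is what a healthy setup typically prints):

    [root@k8s-master ~]# curl http://10.0.0.11:8080/healthz
    ok
    [root@k8s-master ~]# kubectl -s http://10.0.0.11:8080 get componentstatuses
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}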
    

    6.4 On the node machines, edit the configuration files and start the services. These steps apply to every node.

    [root@localhost ~]# vim /etc/kubernetes/config 
    
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://10.0.0.11:8080"
    
    [root@localhost ~]# vim /etc/kubernetes/kubelet 
    
    ###
    # kubernetes kubelet (minion) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=10.0.0.13"  # set this to the IP of the node being configured
    
    # location of the api-server
    KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
    
    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    
    # Add your own!
    KUBELET_ARGS=""
    
    systemctl enable kubelet.service
    systemctl restart kubelet.service
    systemctl enable kube-proxy.service
    systemctl restart kube-proxy.service
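
    If a node fails to show up in step 7 below, the kubelet journal is the first place to look (a quick check, not part of the original steps):

    systemctl status kubelet kube-proxy
    journalctl -u kubelet --no-pager | tail -n 20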
    

    7. From the master node, check that the nodes have registered and are alive

    [root@localhost ~]# kubectl -s http://10.0.0.11:8080 get node
    NAME        STATUS    AGE
    10.0.0.12   Ready     55s
    10.0.0.13   Ready     1m
    [root@localhost ~]# kubectl get nodes
    NAME        STATUS    AGE
    10.0.0.12   Ready     1m
    10.0.0.13   Ready     2m

    At this point a basic Kubernetes cluster is up and running, but it still lacks a network component; follow the steps below to add one.

    8. Install flannel on the master and node machines

    yum -y install flannel
    

    8.1 Edit the flannel configuration file on the master node

    [root@k8s-master ~]# vim /etc/sysconfig/flanneld 
    
    # Flanneld configuration options  
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    

    8.2 Write the flannel network configuration into etcd, then start the service; after flannel starts, Docker and the other master components must be restarted. Choose a Pod network range that does not overlap the host network (the hosts here sit in 10.0.0.0/24):

    etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
    { "Network": "172.16.0.0/16" }
    
    systemctl enable flanneld.service
    systemctl restart flanneld.service 
    systemctl restart docker
    systemctl restart kube-apiserver.service
    systemctl restart kube-controller-manager.service
    systemctl restart kube-scheduler.service
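
    After the restarts you can confirm that flannel handed Docker a subnet from the range written into etcd and that docker0 moved into it (the subnet value below is illustrative; yours will differ):

    [root@k8s-master ~]# cat /run/flannel/subnet.env
    FLANNEL_NETWORK=172.16.0.0/16
    FLANNEL_SUBNET=172.16.60.1/24
    FLANNEL_MTU=1472
    FLANNEL_IPMASQ=false
    [root@k8s-master ~]# ip -4 addr show docker0 | grep inet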
    

    8.3 Edit the flannel configuration file on the node machines

    [root@localhost ~]# vim /etc/sysconfig/flanneld 
    
    # Flanneld configuration options  
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    

    8.4 Start the services on the node machines; after flannel starts, Docker, kubelet, and kube-proxy need to be restarted as well

    systemctl enable flanneld.service
    systemctl restart flanneld.service 
    systemctl restart docker
    systemctl restart kubelet.service
    systemctl restart kube-proxy.service
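
    Finally, a hedged end-to-end smoke test from the master (on this Kubernetes version kubectl run creates a Deployment; the nginx image is pulled from Docker Hub, so the nodes need internet access):

    kubectl run nginx --image=nginx --replicas=2 --port=80
    kubectl get pods -o wide          # pods should land on 10.0.0.12 and 10.0.0.13 with flannel IPs
    kubectl expose deployment nginx --port=80
    kubectl get svc nginx             # note the 10.254.x.x cluster IP
    curl http://<cluster-ip>          # from a node; should return the nginx welcome page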