k8s Setup

    I. Single-node quick start (Master and Node on the same machine)

    1. Install the etcd and kubernetes packages

    yum install -y etcd kubernetes

    2. Start the services

    systemctl start etcd
    systemctl start docker
    systemctl start kube-apiserver
    systemctl start kube-controller-manager
    systemctl start kube-scheduler
    systemctl start kubelet
    systemctl start kube-proxy
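
    With all seven services up, a quick sanity check (a minimal sketch; on this all-in-one box kubectl talks to the local apiserver on port 8080 by default):

    # Control-plane components should all report Healthy
    kubectl get componentstatuses
    # etcd should report that the cluster is healthy
    etcdctl cluster-health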

    3. Example deployment (Tomcat)

    3.1 mytomcat.rc.yaml

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: mytomcat
    spec:
      replicas: 2
      selector:
        app: mytomcat
      template:
        metadata:
          labels:
            app: mytomcat
        spec:
          containers:
          - name: mytomcat
            image: tomcat:7-jre7
            ports:
            - containerPort: 8080

    Create the RC:

    kubectl create -f mytomcat.rc.yaml
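
    Before creating the Service, it helps to confirm the RC actually spawned two pods (a sketch; the label and RC name are the ones defined above):

    # List the pods matched by the RC's label selector
    kubectl get pods -l app=mytomcat -o wide
    # If pods are not Running, the Events section shows why (e.g. image pull errors)
    kubectl describe rc mytomcat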

    3.2 mytomcat.svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: mytomcat
    spec:
      type: NodePort
      ports:
      - port: 8080
        nodePort: 30001
      selector:
        app: mytomcat

    Create the Service:

    kubectl create -f mytomcat.svc.yaml
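
    To confirm the NodePort mapping took effect (a sketch, reusing the test address from step 4 below):

    # The PORT(S) column should show 8080:30001
    kubectl get svc mytomcat
    # Fetch the Tomcat landing-page headers through the NodePort
    curl -I http://172.17.213.105:30001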

    Common errors and fixes:

    ● kubectl describe shows that docker pull failed

    See the common cluster-installation errors in Part II, step 5

    ● The service cannot be reached from outside the host

    vim /etc/sysctl.conf and add:

    net.ipv4.ip_forward=1
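
    Then reload sysctl so the setting takes effect without a reboot:

    sysctl -p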

    If it is still unreachable, run:

    iptables -P FORWARD ACCEPT

    ● kubectl get pods reports "No resources found"

    1) vim /etc/kubernetes/apiserver

    Find the KUBE_ADMISSION_CONTROL line and delete ",ServiceAccount"

    2) Restart the apiserver:

    systemctl restart kube-apiserver

    4. Test by browsing to 172.17.213.105:30001

    II. Installing a k8s cluster from binaries

    0. Environment preparation:

    Master: 192.168.25.130, Node1: 192.168.25.131, Node2: 192.168.25.132. The firewall is disabled on all three VMs.

    Download the k8s binary package; extracting it creates a kubernetes directory (make absolutely sure you download the right version; a lesson learned the hard way):

    https://dl.k8s.io/v1.9.10/kubernetes-server-linux-amd64.tar.gz
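
    The download-and-unpack step as commands (a sketch, assuming wget is available):

    wget https://dl.k8s.io/v1.9.10/kubernetes-server-linux-amd64.tar.gz
    # Creates the ./kubernetes directory mentioned above
    tar -zxvf kubernetes-server-linux-amd64.tar.gz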

    1. Master installation

    1.1 Install docker

    1.2 Install etcd

    1) Download and extract (again, get the version right): https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz

    2) Copy the etcd and etcdctl binaries to the /usr/bin directory, as sketched below
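
    A sketch of steps 1) and 2), assuming the tarball is downloaded into the current directory:

    wget https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
    # The tarball unpacks into etcd-v3.3.9-linux-amd64/
    tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz
    cp etcd-v3.3.9-linux-amd64/etcd etcd-v3.3.9-linux-amd64/etcdctl /usr/bin/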

    3) vi /usr/lib/systemd/system/etcd.service

    [Unit]
    Description=Etcd Server
    After=network.target
    [Service]
    Type=simple
    EnvironmentFile=-/etc/etcd/etcd.conf
    WorkingDirectory=/var/lib/etcd/
    ExecStart=/usr/bin/etcd
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target

    4) Start and test etcd:

    systemctl daemon-reload
    systemctl enable etcd.service
    mkdir -p /var/lib/etcd/
    systemctl start etcd.service
    etcdctl cluster-health
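
    Beyond cluster-health, you can write and read back a key to be sure etcd works (etcdctl in etcd 3.3 speaks the v2 API by default, so set/get/rm apply):

    etcdctl set /k8s-test hello    # write a test key
    etcdctl get /k8s-test          # should print: hello
    etcdctl rm /k8s-test           # clean up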

    1.3 Install kube-apiserver

    1) cd kubernetes/server/bin

    cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/

    2) vi /usr/lib/systemd/system/kube-apiserver.service

    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=etcd.service
    Wants=etcd.service
    [Service]
    EnvironmentFile=/etc/kubernetes/apiserver
    ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    [Install]
    WantedBy=multi-user.target

    3) mkdir /etc/kubernetes -> vi /etc/kubernetes/apiserver. In production, --insecure-bind-address should be set to a specific IP instead of 0.0.0.0:

    KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

    1.4 Install kube-controller-manager, which depends on the kube-apiserver service

    1) vi /usr/lib/systemd/system/kube-controller-manager.service

    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=kube-apiserver.service
    Requires=kube-apiserver.service
    [Service]
    EnvironmentFile=-/etc/kubernetes/controller-manager
    ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target

    2) vi /etc/kubernetes/controller-manager

    KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.25.130:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

    1.5 Install kube-scheduler, which also depends on the kube-apiserver service

    1) vi /usr/lib/systemd/system/kube-scheduler.service

    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=kube-apiserver.service
    Requires=kube-apiserver.service
    [Service]
    EnvironmentFile=-/etc/kubernetes/scheduler
    ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target

    2) vi /etc/kubernetes/scheduler

    KUBE_SCHEDULER_ARGS="--master=http://192.168.25.130:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

    1.6 Start the services in order:

    systemctl daemon-reload
    systemctl enable kube-apiserver.service
    systemctl start kube-apiserver.service
    systemctl enable kube-controller-manager.service
    systemctl start kube-controller-manager.service
    systemctl enable kube-scheduler.service
    systemctl start kube-scheduler.service

    Check each service's health status:

    systemctl status kube-apiserver.service
    systemctl status kube-controller-manager.service
    systemctl status kube-scheduler.service
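
    The apiserver can also be probed directly over its insecure port (a sketch using the Master IP from step 0):

    # Should print: ok
    curl http://192.168.25.130:8080/healthz
    # scheduler, controller-manager and etcd-0 should all show Healthy
    kubectl -s http://192.168.25.130:8080 get componentstatuses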

    2. Node1 installation

    2.1 Install docker

    2.2 Enter the kubernetes/server/bin directory and copy the binaries:

    cp kubelet kube-proxy /usr/bin/

    2.3 Install kubelet

    1) vi /usr/lib/systemd/system/kubelet.service

    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/bin/kubelet $KUBELET_ARGS
    Restart=on-failure
    KillMode=process
    [Install]
    WantedBy=multi-user.target

    2) Create the required directories:

    mkdir -p /var/lib/kubelet
    mkdir /var/log/kubernetes

    3) vi /etc/kubernetes/kubelet. If kubelet fails and you need to inspect its logs, change --logtostderr to true and start it again:

    KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.25.131 --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false --cgroup-driver=systemd"

    If kubelet fails to start with: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

    Fix 1: change --cgroup-driver=systemd above to cgroupfs

    Fix 2: vi /usr/lib/systemd/system/docker.service -> change --exec-opt native.cgroupdriver=cgroupfs to systemd -> systemctl daemon-reload -> systemctl restart docker
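
    Before choosing a fix, check which driver docker is actually using (the grep is just a convenience):

    # Prints e.g. "Cgroup Driver: cgroupfs"; kubelet's --cgroup-driver must match it
    docker info 2>/dev/null | grep -i 'cgroup driver'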

    4) vi /etc/kubernetes/kubeconfig

    apiVersion: v1
    kind: Config
    clusters:
      - cluster:
          server: http://192.168.25.130:8080
        name: local
    contexts:
      - context:
          cluster: local
        name: mycontext
    current-context: mycontext

    2.4 Install kube-proxy

    1) vi /usr/lib/systemd/system/kube-proxy.service

    [Unit]
    Description=Kubernetes Kube-proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.service
    Requires=network.service
    [Service]
    EnvironmentFile=/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    KillMode=process
    [Install]
    WantedBy=multi-user.target

    2) vi /etc/kubernetes/proxy

    KUBE_PROXY_ARGS="--master=http://192.168.25.130:8080 --hostname-override=192.168.25.131 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

    2.5 Start the services and check their status:

    systemctl daemon-reload
    systemctl enable kubelet
    systemctl start kubelet
    systemctl status kubelet
    systemctl enable kube-proxy
    systemctl start kube-proxy
    systemctl status kube-proxy
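
    From the Master you can now verify that this node registered itself (the -s flag points kubectl at the apiserver's insecure port):

    # 192.168.25.131 should appear with STATUS Ready
    kubectl -s http://192.168.25.130:8080 get nodes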

    3. Node2 installation

    Same as Node1, except --hostname-override in the kubelet and kube-proxy configs should be 192.168.25.132; see my earlier notes post on cloning the VM.

    4. Example test

    4.1 Check the cluster nodes and component status:

    kubectl get nodes
    kubectl get cs

    4.2 nginx example

    1) vi nginx-rc.yaml

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      replicas: 3
      selector:
        app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80

    2) vi nginx-svc.yaml (nodePort 33333 falls outside the default 30000-32767 range, but it is allowed here because the apiserver was started with --service-node-port-range=1-65535)

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: NodePort
      ports:
      - port: 80
        nodePort: 33333
      selector:
        app: nginx

    3) Create the nginx example:

    kubectl create -f nginx-rc.yaml
    kubectl create -f nginx-svc.yaml

    4) Run kubectl get pods; once the pods show Running, the test passes.
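
    To exercise the Service from outside the cluster as well, curl the NodePort on any node (a sketch using Node1's IP):

    # Should return HTTP/1.1 200 OK from nginx
    curl -I http://192.168.25.131:33333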

    5. Common errors and fixes

    ● docker pull failures on the kubelet side leave pods stuck in ContainerCreating

    1) On the Master node, run:

    docker pull registry
    docker run -di --name=registry -p 5000:5000 registry

    2) vi /etc/docker/daemon.json and add the private registry:

    {
      "registry-mirrors": ["https://registry.docker-cn.com", "http://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
      "insecure-registries": ["192.168.25.130:5000"]
    }

    3) systemctl restart docker

    4) Tag and push the pause image to the private registry:

    docker pull kubernetes/pause
    docker tag docker.io/kubernetes/pause:latest 192.168.25.130:5000/google_containers/pause-amd64.3.0
    docker push 192.168.25.130:5000/google_containers/pause-amd64.3.0
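
    To confirm the push landed, query the registry's standard v2 catalog endpoint:

    # Should list google_containers/pause-amd64.3.0
    curl http://192.168.25.130:5000/v2/_catalog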

    5) On each Node, vi /etc/docker/daemon.json and add the private registry:

    {
      "registry-mirrors": ["https://registry.docker-cn.com", "http://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
      "insecure-registries": ["192.168.25.130:5000"]
    }

    6) vi /etc/kubernetes/kubelet and append the following flag to the existing KUBELET_ARGS (the image name must match the one pushed above):

    KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.25.131 --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false --cgroup-driver=systemd --pod_infra_container_image=192.168.25.130:5000/google_containers/pause-amd64.3.0"

    7) systemctl restart kubelet

    ● kubelet reports unknown container "/system.slice/kubelet.service"

    1) vi /etc/kubernetes/kubelet and append to KUBELET_ARGS:

     --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice

    2) systemctl restart kubelet

    ● kubelet reports invalid token, and the Master never sees the Node

    Replace all tab characters in your YAML files (here, the kubeconfig at /etc/kubernetes/kubeconfig) with spaces; YAML does not allow tabs for indentation (another lesson learned the hard way)
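
    A quick way to spot leftover tabs, assuming GNU grep (-P enables the \t escape):

    # Prints any line of the kubeconfig that still contains a tab character
    grep -nP '\t' /etc/kubernetes/kubeconfig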
