  • K8s + Dashboard installation and deployment

    System setup
    Install two CentOS systems in virtual machines and add the following two lines to /etc/hosts on each:
    192.168.140.128 kuber-master
    192.168.140.129 kuber-node1

    Disable the firewall
    systemctl stop firewalld && systemctl disable firewalld

    Disable SELinux
    sed -i 's/enforcing/disabled/' /etc/selinux/config
    setenforce 0

    Disable swap
    swapoff -a       # disable temporarily
    vim /etc/fstab   # comment out the swap line to disable it permanently
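
    A minimal sketch of the permanent change, assuming the swap entry in /etc/fstab contains the word "swap" (equivalent to editing the file by hand):

    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every fstab line that mentions swap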

    Configure the Aliyun CentOS yum mirror, the Docker repository, and the Kubernetes yum repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum makecache

    Install Docker
    yum install -y --setopt=obsoletes=0 docker-ce
    Run docker --version to confirm the installation:
    [root@kuber-master ~]# docker --version
    Docker version 19.03.1, build 74b1e89
    Start the Docker service and enable it at boot: systemctl start docker && systemctl enable docker

    Install the Kubernetes components
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    Enable kubelet at boot and start it:
    systemctl enable kubelet && systemctl start kubelet
    Run the following command to list the Docker images required to initialize Kubernetes:
    kubeadm config images list

    [root@kuber-master ~]# kubeadm config images list
    W0811 01:47:06.624865 116757 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    W0811 01:47:06.625033 116757 version.go:99] falling back to the local client version: v1.15.2
    k8s.gcr.io/kube-apiserver:v1.15.2
    k8s.gcr.io/kube-controller-manager:v1.15.2
    k8s.gcr.io/kube-scheduler:v1.15.2
    k8s.gcr.io/kube-proxy:v1.15.2
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    All of these images must be pulled into Docker.

    Download the images required by Kubernetes
    Create the file /etc/sysctl.d/k8s.conf with the following content:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    Run the following commands to apply the changes:
    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf
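
    As a combined sketch, the same step can be scripted with a heredoc (identical content to the lines above):

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf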

    Configure a Docker registry mirror (image accelerator)

    vim /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": ["https://dnh4r4lu.mirror.aliyuncs.com"]
    }

    [1] Aliyun Docker registry: https://dev.aliyun.com/search.html
    [2] Register an account, then open your management console.
    [3] Click "Accelerator" in the console; your mirror address is shown in the right-hand panel, together with detailed setup steps.

    Then restart Docker for the changes to take effect:
    systemctl daemon-reload
    systemctl restart docker.service

    Download the image files online

    MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings

    # Core components
    docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.15.2
    docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.15.2
    docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.15.2
    docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.15.2
    docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10
    docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
    docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1

    # Retag to the k8s.gcr.io names
    docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2
    docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
    docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.15.2 k8s.gcr.io/kube-controller-manager:v1.15.2
    docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
    docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
    docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

    ## Remove the mirror-registry images
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.15.2
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.15.2
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.15.2
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.15.2
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
    docker rmi ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1
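
    The same pull / retag / cleanup sequence can also be written as a small loop; this is only a sketch, equivalent to the commands above:

    MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
    for img in kube-apiserver:v1.15.2 kube-controller-manager:v1.15.2 kube-scheduler:v1.15.2 \
               kube-proxy:v1.15.2 etcd:3.3.10 pause:3.1 coredns:1.3.1; do
      name=${img%%:*}   # image name, e.g. kube-apiserver
      tag=${img##*:}    # image tag, e.g. v1.15.2
      docker pull ${MY_REGISTRY}/k8s-gcr-io-${name}:${tag}
      docker tag  ${MY_REGISTRY}/k8s-gcr-io-${name}:${tag} k8s.gcr.io/${name}:${tag}
      docker rmi  ${MY_REGISTRY}/k8s-gcr-io-${name}:${tag}
    done
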
    ======== All of the steps above must be performed on both nodes ========

    Initialize Kubernetes on the master node
    Run the installation command:
    kubeadm init --kubernetes-version=v1.15.2 --apiserver-advertise-address=192.168.140.128 --pod-network-cidr=10.244.0.0/16

    Prepare the kubeconfig file for kubectl

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown root.root /root/.kube/config
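
    The commands above set up kubectl for root. If kubectl is later used by a regular user, a common variant (an assumption here, not part of the original steps) is to copy the file into that user's home directory and transfer ownership:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config   # make the kubeconfig readable by the current user
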
    Install and configure the flannel network
    For flannel to work correctly, the option --pod-network-cidr=10.244.0.0/16 must be passed to kubeadm init (as was done above).

    Run sysctl net.bridge.bridge-nf-call-iptables=1 to set /proc/sys/net/bridge/bridge-nf-call-iptables to 1, so that bridged IPv4 traffic is passed to the iptables chains. This is a requirement for some CNI plugins; see their documentation for details.

    Note that flannel runs on amd64, arm, arm64 and ppc64le machines.

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

    Alternatively, upload a local copy of kube-flannel.yml to the master and run on the master:
    kubectl apply -f kube-flannel.yml

    Join the cluster from the node
    kubeadm join 192.168.140.128:6443 --token 5vwsgc.v1trlo6i3nbexktf \
        --discovery-token-ca-cert-hash sha256:ecc6c9911b42ebe9c04a9a5a4555f1d737ef5e3a9039fb6eab3f56bb45545c4a
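
    If the token printed by kubeadm init has expired (the default TTL is 24 hours), a fresh join command can be generated on the master; a quick sketch:

    kubeadm token create --print-join-command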

    Check the node status

    [root@kuber-master ~]# kubectl get node
    NAME STATUS ROLES AGE VERSION
    kuber-master Ready master 45m v1.15.2
    kuber-node1 Ready <none> 43m v1.15.2
    [root@kuber-master ~]# kubectl get pods -n kube-system -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    coredns-5c98db65d4-z4p5b 1/1 Running 0 44m 10.244.0.2 kuber-master <none> <none>
    coredns-5c98db65d4-zlzkd 1/1 Running 0 44m 10.244.0.3 kuber-master <none> <none>
    etcd-kuber-master 1/1 Running 0 43m 192.168.140.128 kuber-master <none> <none>
    kube-apiserver-kuber-master 1/1 Running 0 43m 192.168.140.128 kuber-master <none> <none>
    kube-controller-manager-kuber-master 1/1 Running 1 43m 192.168.140.128 kuber-master <none> <none>
    kube-flannel-ds-amd64-7qfxf 1/1 Running 0 43m 192.168.140.128 kuber-master <none> <none>
    kube-flannel-ds-amd64-xnl69 1/1 Running 0 43m 192.168.140.129 kuber-node1 <none> <none>
    kube-proxy-p6gvx 1/1 Running 0 43m 192.168.140.129 kuber-node1 <none> <none>
    kube-proxy-xgdxj 1/1 Running 0 44m 192.168.140.128 kuber-master <none> <none>
    kube-scheduler-kuber-master 1/1 Running 1 43m 192.168.140.128 kuber-master <none> <none>
    kubernetes-dashboard-7d75c474bb-9xt67 1/1 Running 0 41m 10.244.1.3 kuber-node1 <none> <none>

    Install the Kubernetes Dashboard
    Download the Dashboard yaml file
    wget http://pencil-file.oss-cn-hangzhou.aliyuncs.com/blog/kubernetes-dashboard.yaml

    Open the downloaded file and add type: NodePort to expose the Dashboard port for external access.

    ......
    # ------------------- Dashboard Service ------------------- #

    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort # added
      ports:
        - port: 443
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
    ......

    The yaml file kubernetes-dashboard.yaml pulls the image k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1. Without a Docker proxy this pull may fail; instead, use a pre-downloaded kubernetes-dashboard-amd64-1.10.1.tar file, upload it to both the master and the node, and run docker load -i kubernetes-dashboard-amd64-1.10.1.tar
    Download link: https://pan.baidu.com/s/1MyL1fAus1WRV_lA6N0mT_w
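
    One way to produce such a tar file yourself, assuming access to a machine that can reach k8s.gcr.io, is to save the image there and load it on the cluster nodes (a sketch):

    # on a machine that can pull from k8s.gcr.io
    docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    docker save -o kubernetes-dashboard-amd64-1.10.1.tar k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    # on the master and node, after copying the tar over
    docker load -i kubernetes-dashboard-amd64-1.10.1.tar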

    You also need to change the image pull policy in the file as follows:

    ......
    spec:
      containers:
        - name: kubernetes-dashboard
          image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
          imagePullPolicy: IfNotPresent
    ......

    Alternatively, you can use the file provided below.

    Deploy
    kubectl create -f kubernetes-dashboard.yaml

    kubectl get pods --all-namespaces -o wide | grep dashboard
    kube-system kubernetes-dashboard-5f7b999d65-h96kl 1/1 Running 1 23h 10.244.0.7 k8s-master <none> <none>

    Create a simple user
    Create the service account and cluster role binding configuration
    Create a file named dashboard-adminuser.yaml with the following content:

    vim dashboard-adminuser.yaml
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-admin
      namespace: kube-system

    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard-admin
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard-admin
        namespace: kube-system

    Create the user and role binding
    kubectl apply -f dashboard-adminuser.yaml

    View the token

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')
    Name: kubernetes-dashboard-admin-token-2mvzg
    Namespace: kube-system
    Labels: <none>
    Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin
    kubernetes.io/service-account.uid: c8f781f7-3a0b-44b2-ae27-aa4810781242

    Type: kubernetes.io/service-account-token

    Data
    ====
    ca.crt: 1025 bytes
    namespace: 11 bytes
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi0ybXZ6ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImM4Zjc4MWY3LTNhMGItNDRiMi1hZTI3LWFhNDgxMDc4MTI0MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Cr0Ry6B3nOvGI4YqrWR7L7Fl8znGH2XfNiGKAX7v2jSGti4fWLLzM1HWXuRvOp1h6sU01u1Jy8YQCdNsmqii0buLHuuNOWKnPk15lZ03K58uJevuaYNG35NbLreoi9EF0Ec7PkYgHAMdlygtPZhmT4MRE1pe8RtrfOBjvKQfy3Db81y6DN3BEet8LLCpXCXuL7ZQoJQYhsQc0ypdXRRFaZ8yzt9YsZwFrA7vUlTJO55hou4825HbTYnkrjTJme_BnvJk-G3eZ1cUryZb1ADwvPr5ij-_6RqxcOIwITF3dyCiS4s078Nn97TMvjBwwS_17Yo4ZOInUBHgbfv4P3DlAg

    Log in to the Dashboard
    Check the Dashboard port number:

    kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 73m
    kubernetes-dashboard NodePort 10.96.243.241 <none> 443:30391/TCP 70m

    Access the Dashboard:
    https://192.168.140.128:30391/#!/login

    Select "Token" and enter the token saved above to log in.

    Notes
    The Dashboard requires working network communication between the master and the node, and the coredns and kube-flannel-ds pods must stay in the Running state. If the Dashboard cannot be reached, first check whether these pods are healthy (a troubleshooting sketch follows the listing below).

    kubectl get pods -n kube-system -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    coredns-5c98db65d4-z4p5b 1/1 Running 0 44m 10.244.0.2 kuber-master <none> <none>
    coredns-5c98db65d4-zlzkd 1/1 Running 0 44m 10.244.0.3 kuber-master <none> <none>
    etcd-kuber-master 1/1 Running 0 43m 192.168.140.128 kuber-master <none> <none>
    kube-apiserver-kuber-master 1/1 Running 0 43m 192.168.140.128 kuber-master <none> <none>
    kube-controller-manager-kuber-master 1/1 Running 1 43m 192.168.140.128 kuber-master <none> <none>
    kube-flannel-ds-amd64-7qfxf 1/1 Running 0 43m 192.168.140.128 kuber-master <none> <none>
    kube-flannel-ds-amd64-xnl69 1/1 Running 0 43m 192.168.140.129 kuber-node1 <none> <none>
    kube-proxy-p6gvx 1/1 Running 0 43m 192.168.140.129 kuber-node1 <none> <none>
    kube-proxy-xgdxj 1/1 Running 0 44m 192.168.140.128 kuber-master <none> <none>
    kube-scheduler-kuber-master 1/1 Running 1 43m 192.168.140.128 kuber-master <none> <none>
    kubernetes-dashboard-7d75c474bb-9xt67 1/1 Running 0 41m 10.244.1.3 kuber-node1 <none> <none>
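
    If one of these pods is not Running, a quick way to investigate (a generic kubectl sketch; <pod-name> is a placeholder for a pod from the listing above):

    kubectl -n kube-system describe pod <pod-name>   # shows events such as scheduling or image-pull errors
    kubectl -n kube-system logs <pod-name>           # shows the container logs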

  • Original article: https://www.cnblogs.com/ExMan/p/11655771.html