  • Setting up K8S

    ######## K8S setup begins ########
    1. Prepare three machines running CentOS 7 (at least 2 CPU cores each, or kubeadm's preflight check fails).
    Bind the following host entries:
    172.24.16.153 k8s-master
    172.24.16.154 k8s-node1
    172.24.16.155 k8s-node2
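The three bindings above can be appended in one step. A sketch, writing to a demo path (on the real machines the target is /etc/hosts, run as root):

```shell
# HOSTS_FILE is a stand-in; on the real machines this would be /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
cat >> "$HOSTS_FILE" <<'EOF'
172.24.16.153 k8s-master
172.24.16.154 k8s-node1
172.24.16.155 k8s-node2
EOF
# show what was added
grep k8s- "$HOSTS_FILE"
```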
    2. Rename each machine to its corresponding name:
    hostnamectl --static set-hostname XXXX

    3. Install Docker
    Docker requires kernel version 3.10 or later; check the current kernel with uname -r.

    Run the following steps on every machine:
    1. yum install -y yum-utils device-mapper-persistent-data lvm2   # dependencies Docker needs
    2. yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo   # add the Docker CE yum repository
    3. yum -y install docker-ce   # install Docker
    4. systemctl start docker   # start Docker
    5. systemctl enable docker.service   # enable at boot; CentOS 7 uses systemctl instead of chkconfig (list enabled units with systemctl list-unit-files)
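The kubeadm init log further down warns that Docker's "cgroupfs" cgroup driver is not the recommended "systemd" one. A common fix (a sketch, not part of the original steps) is to drop a daemon.json like the one below into /etc/docker/daemon.json and restart Docker; here it is written to a demo path so it can be run anywhere:

```shell
# Hypothetical demo path; on a real node this would be /etc/docker/daemon.json,
# written as root and followed by `systemctl restart docker`.
DAEMON_JSON="${DAEMON_JSON:-./daemon.demo.json}"
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$DAEMON_JSON"
```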


    4. Adjust machine configuration
    1. Disable SELinux
    sed -i 's/enforcing/disabled/g' /etc/selinux/config
    setenforce 0

    2. Stop and disable the firewall
    systemctl stop firewalld
    systemctl disable firewalld

    3. Turn off the swap partition
    swapoff -a
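swapoff -a only disables swap until the next reboot; commenting out the swap line in /etc/fstab makes it permanent. A sketch of that edit, run against a demo copy of fstab (the device names are invented):

```shell
# Demo fstab with an invented swap entry; on a real node the target is /etc/fstab.
FSTAB="${FSTAB:-./fstab.demo}"
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
# comment out every swap entry so swap stays off after reboot
sed -ri '/\sswap\s/s/^/#/' "$FSTAB"
cat "$FSTAB"
```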

    4. Configure forwarding parameters (the br_netfilter module must be loaded, e.g. modprobe br_netfilter, for these keys to exist)
    cat <<EOF >> /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables=1
    net.bridge.bridge-nf-call-iptables=1
    EOF

    sysctl --system   # make the added parameters take effect; bridged traffic must pass through iptables

    5. Install the kubelet, kubeadm, and kubectl components
    Configure the Kubernetes yum source:

    cat <<EOF>> /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF


    Install the kubeadm, kubectl, and kubelet components with yum:
    yum install -y kubeadm kubectl kubelet
    systemctl start kubelet && systemctl enable kubelet
    ###################################################### Everything above must be done on both the master and the nodes ######################################################


    Initialize the cluster on the master machine (initialization takes a while; many images are downloaded):
    kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

    ################################################ Full initialization log #############################################################################
    Initialization log:
    [root@k8s-master ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=172.24.0.0/16
    W0906 05:55:13.280608 1455 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    W0906 05:55:13.280772 1455 version.go:99] falling back to the local client version: v1.15.3
    [init] Using Kubernetes version: v1.15.3
    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.24.16.153 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.24.16.153 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.24.0.1 172.24.16.153]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 39.007624 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [kubelet-check] Initial timeout of 40s passed.
    [bootstrap-token] Using token: ffrvkv.jy4i66d7j1hwp9q3
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 172.24.16.153:6443 --token ffrvkv.jy4i66d7j1hwp9q3 \
        --discovery-token-ca-cert-hash sha256:09d663f66a2906cb9da84a16c06886d2c528c83f07d8e3dbd9a047843f0bd7c8


    ###############################################################################################################################################

    1. Run the commands from the output above:
    mkdir -p ~/.kube   # create the hidden directory

    cp -rp /etc/kubernetes/admin.conf ~/.kube/config   # copy the config file

    chown $(id -u):$(id -g) $HOME/.kube/config   # change the file's owner; not needed when running as root


    ############################################### End of the initialization process ################################################################################################

    Install the network add-on (flannel, the multi-host overlay network component for K8S):
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    kubectl get pod -n kube-system -o wide
    kubectl get node
    Run these to check progress; once all the master's system pods are up and the master shows Ready, start installing the K8S dashboard UI component.


    #################################################################################################################################


    Put the following content into kubernetes-dashboard.yaml:

    ##################################### kubernetes-dashboard.yaml contents ###########################################################################################
    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    # ------------------- Dashboard Secret ------------------- #

    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kube-system
    type: Opaque

    ---
    # ------------------- Dashboard Service Account ------------------- #

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system

    ---
    # ------------------- Dashboard Role & Role Binding ------------------- #

    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kubernetes-dashboard-minimal
      namespace: kube-system
    rules:
    # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["create"]
    # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["create"]
    # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
    - apiGroups: [""]
      resources: ["secrets"]
      resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
      verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["kubernetes-dashboard-settings"]
      verbs: ["get", "update"]
    # Allow Dashboard to get metrics from heapster.
    - apiGroups: [""]
      resources: ["services"]
      resourceNames: ["heapster"]
      verbs: ["proxy"]
    - apiGroups: [""]
      resources: ["services/proxy"]
      resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
      verbs: ["get"]

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubernetes-dashboard-minimal
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard-minimal
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kube-system

    ---
    # ------------------- Dashboard Deployment ------------------- #

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
          - name: kubernetes-dashboard
            image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
            ports:
            - containerPort: 8443
              protocol: TCP
            args:
              - --auto-generate-certificates
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
              # - --apiserver-host=http://my-address:port
            volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
            livenessProbe:
              httpGet:
                scheme: HTTPS
                path: /
                port: 8443
              initialDelaySeconds: 30
              timeoutSeconds: 30
          volumes:
          - name: kubernetes-dashboard-certs
            secret:
              secretName: kubernetes-dashboard-certs
          - name: tmp-volume
            emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule

    ---
    # ------------------- Dashboard Service ------------------- #

    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 31620
      selector:
        k8s-app: kubernetes-dashboard


    Then run the following command:
    kubectl create -f kubernetes-dashboard.yaml

    ####################################################################################################################

    kubectl get service -n kube-system
    Check the port information of the installed UI component:
    [root@k8s-master ~]# kubectl get service -n kube-system
    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
    kube-dns               ClusterIP   172.24.0.10     <none>        53/UDP,53/TCP,9153/TCP   14m
    kubernetes-dashboard   NodePort    172.24.105.20   <none>        443:31620/TCP            2m53s


    ################################################# Create a user token for dashboard login ###############################################################################
    kubectl create -f dashboard-adminuser.yaml   # dashboard-adminuser.yaml content follows


    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kube-system

    Look up the admin-user token:
    [root@k8s-master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    Name: admin-user-token-77wpr
    Namespace: kube-system
    Labels: <none>
    Annotations: kubernetes.io/service-account.name: admin-user
    kubernetes.io/service-account.uid: 3212a17c-665e-409f-b7db-98ce335a200f

    Type: kubernetes.io/service-account-token

    Data
    ====
    ca.crt: 1025 bytes
    namespace: 11 bytes
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTc3d3ByIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzMjEyYTE3Yy02NjVlLTQwOWYtYjdkYi05OGNlMzM1YTIwMGYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.PH-789Lxi6LXWJLCy9zvTyXb14kuvwmaU_y7kit9adhMguVAFOn0hKmjuKSJ5LYpK1xV9LwFA3KldMKGny8fNdHxytPIyJfnWFYpbleo02w3LX_3T9fQ-kRyHmx7X2KsKce_HV4ZG9cxQgS6p-6nXFwns31TitOHOC1DA6EPDWECBsCXb5GVsFc8eyeHfo37RXnrSxdByWxU2_stfke_Q1l6OD_lOi_H7dNDygqX96TbNZndEiiqF-9HBx4gZa8PaCisJe5eVJn5kHV5GavdWXyxEWD9QMKfIAyqR33fje7Enk3V7fegof97uqOdvPIInAgEi7lFhhOZmlrbvnRUCg
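The token shown above is a JWT: three base64url-encoded parts separated by dots. A sketch of how the payload can be inspected; the claims below are illustrative stand-ins, not the real token from this cluster:

```shell
# Build a stand-in token with made-up claims (a real one comes from the secret above).
# Real JWTs use unpadded base64url; plain base64 is close enough for this demo.
claims='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:kube-system:admin-user"}'
payload=$(printf '%s' "$claims" | base64 | tr -d '\n')
token="eyJhbGciOiJSUzI1NiJ9.${payload}.signature"
# Decode the middle part to see which service account the token belongs to.
printf '%s' "$token" | cut -d. -f2 | base64 -d | tee ./claims.demo.json
```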


    #################################################################################################################################################


    At this point you can log in to the UI management console at https://<node-IP>:31620 (the NodePort defined above).


    Join the cluster: run kubeadm join on node1 (and likewise on node2):
    kubeadm join 172.24.16.153:6443 --token ffrvkv.jy4i66d7j1hwp9q3 \
        --discovery-token-ca-cert-hash sha256:09d663f66a2906cb9da84a16c06886d2c528c83f07d8e3dbd9a047843f0bd7c8
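The sha256 value passed to --discovery-token-ca-cert-hash is the SHA-256 digest of the cluster CA's public key in DER form; on the master the input would be /etc/kubernetes/pki/ca.crt. A sketch of recomputing it, using a throwaway CA so the pipeline can be run anywhere:

```shell
# Generate a throwaway CA certificate standing in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ./demo-ca.key \
  -subj "/CN=kubernetes" -days 1 -out ./demo-ca.crt 2>/dev/null
# Extract the public key, convert it to DER, and hash it.
hash=$(openssl x509 -pubkey -in ./demo-ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:${hash}" | tee ./demo-ca.hash
```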


    kubectl create deployment nginx --image=nginx:alpine   # create a test deployment
    kubectl scale deployment nginx --replicas=2            # scale it to two replicas


    kubectl get node

    kubectl get pod -n kube-system -o wide


    kubectl describe pod monitor-grafana-7c84895d84-9hh9v --namespace kube-system   # inspect a pod's events and status


    ########################################################## Private Docker registry setup ##############################################
    docker run --entrypoint htpasswd registry:2.5 -Bbn dingwenze yicai127 >> /data/k8s-node2/auth/htpasswd   # generate the bcrypt htpasswd entry

    vi /data/k8s-node2/config/config.yml   # create the config file

    version: 0.1
    log:
      fields:
        service: registry
    storage:
      delete:
        enabled: true
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
      headers:
        X-Content-Type-Options: [nosniff]
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3

    # docker run startup parameters
    docker run -d -p 5000:5000 --restart=always --name=registry \
      -v /data/k8s-node2/config/:/etc/docker/registry/ \
      -v /data/k8s-node2/auth/:/auth/ \
      -e "REGISTRY_AUTH=htpasswd" \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      -v /data/k8s-node2/:/var/lib/registry/ \
      registry:2.5

    ### Create the secret for pulling from the private Docker registry
    kubectl create secret docker-registry registry-key --docker-server=k8s-node2:5000 --docker-username=dingwenze --docker-password=yicai127
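A sketch of what that registry-key secret ends up containing: kubectl packs the credentials into a .dockerconfigjson whose "auth" field is base64 of "username:password". Written to a demo file here for illustration:

```shell
# The auth field a docker-registry secret stores is base64("user:password").
auth=$(printf '%s:%s' dingwenze yicai127 | base64)
cat > ./dockerconfig.demo.json <<EOF
{"auths":{"k8s-node2:5000":{"username":"dingwenze","password":"yicai127","auth":"${auth}"}}}
EOF
cat ./dockerconfig.demo.json
```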

    #################################################### Create the kuboard file ###############################################################################################
    kubectl apply -f kuboard.yaml   # kuboard.yaml content follows

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kuboard
      namespace: kube-system
      annotations:
        k8s.eip.work/displayName: kuboard
        k8s.eip.work/ingress: "true"
        k8s.eip.work/service: NodePort
        k8s.eip.work/workload: kuboard
      labels:
        k8s.eip.work/layer: monitor
        k8s.eip.work/name: kuboard
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s.eip.work/layer: monitor
          k8s.eip.work/name: kuboard
      template:
        metadata:
          labels:
            k8s.eip.work/layer: monitor
            k8s.eip.work/name: kuboard
        spec:
          containers:
          - name: kuboard
            image: eipwork/kuboard:latest
            imagePullPolicy: Always

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kuboard
      namespace: kube-system
    spec:
      type: NodePort
      ports:
      - name: http
        port: 80
        targetPort: 80
        nodePort: 32567
      selector:
        k8s.eip.work/layer: monitor
        k8s.eip.work/name: kuboard

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kuboard-user
      namespace: kube-system

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kuboard-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kuboard-user
      namespace: kube-system

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kuboard-viewer
      namespace: kube-system

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kuboard-viewer
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
    subjects:
    - kind: ServiceAccount
      name: kuboard-viewer
      namespace: kube-system

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kuboard-viewer-node
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node
    subjects:
    - kind: ServiceAccount
      name: kuboard-viewer
      namespace: kube-system

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kuboard-viewer-pvp
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:persistent-volume-provisioner
    subjects:
    - kind: ServiceAccount
      name: kuboard-viewer
      namespace: kube-system

    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: kuboard
      namespace: kube-system
      annotations:
        nginx.org/websocket-services: "kuboard"
        nginx.com/sticky-cookie-services: "serviceName=kuboard srv_id expires=1h path=/"
    spec:
      rules:
      - host: kuboard.cn
        http:
          paths:
          - path: /
            backend:
              serviceName: kuboard
              servicePort: http

    Look up the login user's token:
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}')

    ############################################################################################################################################

  • Original source: https://www.cnblogs.com/iantest/p/14040621.html