  • How to run k3s on ARM? A peek into the k3s startup process, plus multi-platform container image builds

    Before We Start

    I have recently been verifying whether containers can run on Huawei Cloud Kunpeng servers (arm64 ARM servers), and picked up quite a bit of ARM and container knowledge along the way. When it comes to running containers on ARM, k3s is the first thing that comes to mind, so below is a walkthrough that uses k3s to quickly stand up a Kubernetes cluster and run an ARM image.

    Environment Preparation

    The ARM environment most people think of first is probably the Raspberry Pi, but it is rarely at hand and relatively expensive; if you want to experiment with ARM, Huawei Cloud's Kunpeng servers are a good option.

    Preparation steps:

    1. Buy a 2-vCPU / 4 GB Kunpeng server on Huawei Cloud.
    2. Install openEuler (a CentOS-like distribution).
    3. Download the k3s-arm64 binary (installing via k3s-install.sh is not recommended, mainly because of network access problems from inside China); a download sketch follows this list.
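
    A minimal download sketch is shown below. The repository path, release tag, and asset name are assumptions based on the GitHub releases page, so adjust them to the version you actually want (note that the '+' in the tag has to be URL-encoded as %2B):

    # Fetch the arm64 k3s binary from the GitHub releases page (hypothetical version tag)
    curl -Lo k3s-arm64 "https://github.com/rancher/k3s/releases/download/v1.18.2%2Bk3s1/k3s-arm64"
    chmod +x k3s-arm64
    ./k3s-arm64 --version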

    Node Preparation

    Because k3s relies on iptables-related functionality, the system needs some preparation first. The configuration is shown below as an Ansible playbook (convert it to manual commands yourself if you are not using Ansible; a shell sketch follows the playbook):

    
    ---
    - name: Set SELinux to disabled state
      selinux:
        state: disabled
      when: ansible_distribution in ['CentOS', 'Red Hat Enterprise Linux']
    
    - name: Enable IPv4 forwarding
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: yes
    
    - name: Enable IPv6 forwarding
      sysctl:
        name: net.ipv6.conf.all.forwarding
        value: "1"
        state: present
        reload: yes
    
    - name: Add br_netfilter to /etc/modules-load.d/
      copy:
        content: "br_netfilter"
        dest: /etc/modules-load.d/br_netfilter.conf
      when: ansible_distribution in ['CentOS', 'Red Hat Enterprise Linux']
    
    - name: Load br_netfilter
      modprobe:
        name: br_netfilter
        state: present
      when: ansible_distribution in ['CentOS', 'Red Hat Enterprise Linux']
    
    - name: Set bridge-nf-call-iptables (just to be sure)
      sysctl:
        name: "{{ item }}"
        value: "1"
        state: present
        reload: yes
      when: ansible_distribution in ['CentOS', 'Red Hat Enterprise Linux']
      loop:
        - net.bridge.bridge-nf-call-iptables
        - net.bridge.bridge-nf-call-ip6tables
    
    - name: Add /usr/local/bin to sudo secure_path
      lineinfile:
        line: 'Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin'
        regexp: "Defaults(\s)*secure_path(\s)*="
        state: present
        insertafter: EOF
        path: /etc/sudoers
        validate: 'visudo -cf %s'
      when: ansible_distribution in ['CentOS', 'Red Hat Enterprise Linux']
    
    

    Source: k3s-ansible/roles/prereq/tasks/main.yml
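
    If you are not using Ansible, a rough shell equivalent of the playbook above looks like the following. This is only a sketch, assuming a CentOS-like distribution such as openEuler and a root shell; persist the sysctl values under /etc/sysctl.d/ if they need to survive a reboot:

    # Disable SELinux (takes full effect after a reboot)
    setenforce 0 || true
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

    # Load br_netfilter now and on every boot
    modprobe br_netfilter
    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

    # Enable IP forwarding and the bridge netfilter hooks used by iptables
    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv6.conf.all.forwarding=1
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    sysctl -w net.bridge.bridge-nf-call-ip6tables=1

    # The last playbook task only matters if k3s is run through sudo:
    # it ensures /usr/local/bin is on sudo's secure_path in /etc/sudoers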

    Start k3s and Observe the Startup Process

    This step is simple; just run the command:

    ./k3s-arm64 server --disable traefik
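
    The foreground process stops as soon as the SSH session ends. A quick way to keep it alive while experimenting (just a sketch, not the systemd unit that the official install script would create) is:

    # Run the server in the background and keep the log around for inspection
    nohup ./k3s-arm64 server --disable traefik > k3s-server.log 2>&1 &
    tail -f k3s-server.log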
    

    Now let's look at the startup logs.

    1. Start the api-server
    INFO[2020-05-26T19:06:51.465258246+08:00] Starting k3s v1.18.2+k3s1 (698e444a)
    INFO[2020-05-26T19:06:51.482543936+08:00] Kine listening on unix://kine.sock
    INFO[2020-05-26T19:06:51.686990438+08:00] Active TLS secret  (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattl
    e.io/cn-172.27.16.180:172.27.16.180 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.
    svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:b3257e2631debdf2681f1c0f10956c55d518909c88e3581e6a5d97562ab0
    fabf]
    INFO[2020-05-26T19:06:51.691943011+08:00] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=No
    de,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/ranch
    er/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/ser
    ver/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserv
    er.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-a
    llowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requesthea
    der-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/serve
    r/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var
    /lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
    Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
    I0526 19:06:51.692400    1698 server.go:682] external host was not specified, using 172.27.16.180
    I0526 19:06:51.692649    1698 server.go:166] Version: v1.18.2+k3s1
    
    2. Start kube-scheduler and kube-controller-manager
    INFO[2020-05-26T19:06:57.102712037+08:00] Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --p
    ort=10251 --secure-port=0
    INFO[2020-05-26T19:06:57.104170746+08:00] Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file
    =/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kube
    config --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/
    tls/service.key --use-service-account-credentials=true
    I0526 19:06:57.107918    1698 controllermanager.go:161] Version: v1.18.2+k3s1
    I0526 19:06:57.108605    1698 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
    
    INFO[2020-05-26T19:06:58.129139918+08:00] Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --clu
    ster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0
    Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
    I0526 19:06:58.134700    1698 controllermanager.go:120] Version: v1.18.2+k3s1
    W
    
    
    3. Start kube-proxy and the kubelet
    INFO[2020-05-26T19:06:57.794366512+08:00] Starting k3s.cattle.io/v1, Kind=Addon controller
    INFO[2020-05-26T19:06:57.794634665+08:00] Node token is available at /var/lib/rancher/k3s/server/token
    INFO[2020-05-26T19:06:57.794673746+08:00] To join node to cluster: k3s agent -s https://172.27.16.180:6443 -t ${NODE_TOKEN}
    INFO[2020-05-26T19:06:57.795436726+08:00] Waiting for master node  startup: resource name may not be empty
    I0526 19:06:57.863811    1698 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
    INFO[2020-05-26T19:06:57.903238233+08:00] Starting helm.cattle.io/v1, Kind=HelmChart controller
    INFO[2020-05-26T19:06:57.903286923+08:00] Starting batch/v1, Kind=Job controller
    INFO[2020-05-26T19:06:57.903313164+08:00] Starting /v1, Kind=Node controller
    INFO[2020-05-26T19:06:57.903343024+08:00] Starting /v1, Kind=Service controller
    INFO[2020-05-26T19:06:57.903385605+08:00] Starting /v1, Kind=Pod controller
    INFO[2020-05-26T19:06:57.903411445+08:00] Starting /v1, Kind=Endpoints controller
    I0526 19:06:57.921880    1698 controller.go:606] quota admission added evaluator for: deployments.apps
    
    INFO[2020-05-26T19:06:59.097796332+08:00] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=c
    groupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3
    s/data/bd19bf78c5988a4ae051b7995298195fce498a99fd094991a90fd12a201fe63d/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/
    containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=image
    fs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ecs-b87a --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig -
    -kubelet-cgroups=/systemd/system.slice --node-labels= --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-c
    ert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key
    Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation t
    imeline before being removed.
    INFO[2020-05-26T19:06:59.113169378+08:00] Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=ecs-b87a --kubeconfig=/var/lib/rancher/
    k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables
    W0526 19:06:59.113328    1698 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
    INFO[2020-05-26T19:06:59.128654056+08:00] waiting for node ecs-b87a: nodes "ecs-b87a" not found
    I0526 19:06:59.146328    1698 server.go:413] Version: v1.18.2+k3s1
    I
    
    4. Start the network component flannel
    I0526 19:07:12.146803    1698 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
    I0526 19:07:12.193748    1698 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
    I0526 19:07:12.193768    1698 flannel.go:82] Running backend.
    I0526 19:07:12.193774    1698 vxlan_network.go:60] watching for new subnet leases
    I0526 19:07:12.201828    1698 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
    I0526 19:07:12.201875    1698 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
    I0526 19:07:12.202960    1698 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
    I0526 19:07:12.202999    1698 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
    I0526 19:07:12.204866    1698 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
    I0526 19:07:12.205931    1698 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
    I0526 19:07:12.207820    1698 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
    I0526 19:07:12.209118    1698 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
    I0526 19:07:12.210083    1698 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
    I0526 19:07:12.211653    1698 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
    I0526 19:07:12.214128    1698 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
    I0526 19:07:12.217520    1698 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
    I0526 19:07:12.225410    1698 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
    I0526 19:07:12.228914    1698 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
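
    The log above also prints the command for joining additional nodes. On a second ARM machine the join would look roughly like this (a sketch; as the log says, the server writes the token to /var/lib/rancher/k3s/server/token):

    # On the server node: read the join token
    cat /var/lib/rancher/k3s/server/token
    # On the new node: join the cluster (replace the server IP and NODE_TOKEN with your own)
    ./k3s-arm64 agent -s https://172.27.16.180:6443 -t ${NODE_TOKEN}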
    
    

    Deploy a Demo Application

    alias k='$(pwd)/k3s-arm64 kubectl'
    k run hello-arm  --image=registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1
    k get pod 
    k logs -f hello-arm
    

    Output like the following confirms everything is OK:

    NAME        READY   STATUS      RESTARTS   AGE
    hello-arm   0/1     Completed   0          2s
    
    Hello, my architecture is Linux buildkitsandbox 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 aarch64 Linux
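
    Because the container just prints one line and exits, the pod shows Completed; kubectl run creates a pod with restartPolicy Always by default, so the container will keep being restarted. For a true one-shot run you can add --restart=Never (a hedged variant of the command above):

    k run hello-arm --image=registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1 --restart=Never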
    

    How to Build Multi-Platform Container Images?

    Enter buildx

    What is buildx? Look it up yourself; documentation: docker/buildx

    Installation

    1. Install buildx
    # Download the buildx binary for your platform from https://github.com/docker/buildx/releases, e.g. buildx-v0.4.1.darwin-amd64
    mkdir -p $HOME/.docker/cli-plugins/
    mv buildx-v0.4.1.darwin-amd64 $HOME/.docker/cli-plugins/docker-buildx
    export DOCKER_CLI_EXPERIMENTAL=enabled
    docker buildx version 
    

    Output like the following means it is OK:

    github.com/docker/buildx v0.4.1 bda4882a65349ca359216b135896bddc1d92461c
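
    The example above uses the macOS (darwin-amd64) asset; on a Linux amd64 workstation the steps are identical except for the asset name. A hedged sketch, assuming the v0.4.1 release asset naming:

    # Hypothetical example for a Linux amd64 workstation; pick the asset matching your OS/arch
    mkdir -p $HOME/.docker/cli-plugins/
    curl -Lo $HOME/.docker/cli-plugins/docker-buildx \
      https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64
    chmod +x $HOME/.docker/cli-plugins/docker-buildx
    docker buildx version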
    
    2. Prepare the image needed when creating the builder (buildx pulls moby/buildkit:buildx-stable-1 when bootstrapping the builder; pre-pulling it from a domestic mirror and re-tagging it avoids a slow or blocked pull)
    docker pull registry.cn-hangzhou.aliyuncs.com/k8ops/moby-buildkit:buildx-stable-1
    docker tag registry.cn-hangzhou.aliyuncs.com/k8ops/moby-buildkit:buildx-stable-1 moby/buildkit:buildx-stable-1
    docker images |grep moby
    

    The output should look like this:

    moby/buildkit                                                                        buildx-stable-1     f2a88cb62c92        5 weeks ago         82.8MB
    registry.cn-hangzhou.aliyuncs.com/k8ops/moby-buildkit                                buildx-stable-1     f2a88cb62c92        5 weeks ago         82.8MB
    
    3. Create the builder
    docker buildx create --name mybuilder
    docker buildx use mybuilder
    docker buildx inspect --bootstrap
    
    [+] Building 12.0s (1/1) FINISHED
     => [internal] booting buildkit                                                                                                                                                11.9s
     => => pulling image moby/buildkit:buildx-stable-1                                                                                                                             10.9s
     => => creating container buildx_buildkit_mybuilder0                                                                                                                            1.1s
    Name:   mybuilder
    Driver: docker-container
    
    Nodes:
    Name:      mybuilder0
    Endpoint:  unix:///var/run/docker.sock
    Status:    running
    Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
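
    One caveat: cross-building for foreign architectures relies on QEMU handlers being registered with binfmt_misc. Docker Desktop ships them out of the box, but on a plain Linux host you may need to register them yourself first; one common approach (an assumption, not something the original post describes) is:

    # Register QEMU emulators for all supported architectures (needs to be rerun after a reboot)
    docker run --privileged --rm tonistiigi/binfmt --install all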
    

    Build the hello Image

    file: hello.c

    /*
     * hello.c
     */
    #include <stdio.h>
    #include <stdlib.h>
     
    #ifndef ARCH
    #define ARCH "Undefined"
    #endif  
    
    int main() {
      printf("Hello, my architecture is %s
    ", ARCH);
      exit(0);
    }
    

    file: Dockerfile

    #
    # Dockerfile
    #
    FROM alpine AS builder 
    RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories \
      && apk update
    RUN apk add build-base 
    WORKDIR /home
    COPY hello.c .
    RUN gcc "-DARCH="`uname -a`"" hello.c -o hello
     
    FROM alpine 
    WORKDIR /home
    COPY --from=builder /home/hello .
    ENTRYPOINT ["./hello"] 
    
    mkdir huawei-arm-k3s
    touch hello.c Dockerfile
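    # Write the hello.c and Dockerfile contents shown above into these files first,
    # and make sure you are logged in (docker login) to the target registry before pushing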
    docker buildx build --platform linux/arm,linux/arm64,linux/amd64 -t registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1 . --push
    docker buildx imagetools inspect registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1
    
    Name:      registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1
    MediaType: application/vnd.docker.distribution.manifest.list.v2+json
    Digest:    sha256:6c7714c17223d788c0bf26f9f45e1af9c5edd3478c990c4051a72d8e9cd5aa5c
    
    Manifests:
      Name:      registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1@sha256:9eb62f00457b9eff252fc6143dfd33d80d31a5013cf08a8ab0132394262198e7
      MediaType: application/vnd.docker.distribution.manifest.v2+json
      Platform:  linux/arm64
    
      Name:      registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1@sha256:d7244652b1ce9b0bd1060be8a861a5aff2d5a893cd0ba3afb0db68a225034893
      MediaType: application/vnd.docker.distribution.manifest.v2+json
      Platform:  linux/amd64
    

    Local Verification

    docker run --rm -it registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1@sha256:d7244652b1ce9b0bd1060be8a861a5aff2d5a893cd0ba3afb0db68a225034893
    
    Unable to find image 'registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1@sha256:d7244652b1ce9b0bd1060be8a861a5aff2d5a893cd0ba3afb0db68a225034893' locally
    sha256:d7244652b1ce9b0bd1060be8a861a5aff2d5a893cd0ba3afb0db68a225034893: Pulling from k8ops/hello
    cbdbe7a5bc2a: Already exists
    e26bd31f84ac: Already exists
    b72c012d327f: Pull complete
    Digest: sha256:d7244652b1ce9b0bd1060be8a861a5aff2d5a893cd0ba3afb0db68a225034893
    Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/k8ops/hello@sha256:d7244652b1ce9b0bd1060be8a861a5aff2d5a893cd0ba3afb0db68a225034893
    Hello, my architecture is Linux buildkitsandbox 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 Linux
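
    The digest used above pins the amd64 manifest, which is why it runs natively on an x86 workstation. With QEMU/binfmt handlers registered (see the builder section), the arm64 variant can be exercised locally too; a hedged example, assuming your Docker version supports the --platform flag on docker run:

    docker run --rm --platform linux/arm64 registry.cn-hangzhou.aliyuncs.com/k8ops/hello:0.1
    # should print "... aarch64 Linux" even on an x86_64 host (emulated through QEMU)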
    
  • Original article: https://www.cnblogs.com/k8ops/p/12969127.html