calico for kubernetes

    (This post contains many errors; do not use it as a reference!)
    The reference URL:
    https://github.com/projectcalico/calico-docker/blob/master/docs/kubernetes/KubernetesIntegration.md
     
    I have 3 hosts: 10.11.151.97, 10.11.151.100, and 10.11.151.101. Unfortunately, none of the three hosts has internet access. Following the guide, I built the Kubernetes cluster in 'bash command' mode rather than the 'service mode' described in the reference.
    10.11.151.97 is the Kubernetes master; the other two are its nodes.
     

    1, Run Etcd Cluster

    etcd_token=kb3-etcd-cluster
    local_name=kbetcd0
    local_ip=10.11.151.97
    local_peer_port=4010
    local_client_port1=4011
    local_client_port2=4012
    node1_name=kbetcd1
    node1_ip=10.11.151.100
    node1_port=4010
    node2_name=kbetcd2
    node2_ip=10.11.151.101
    node2_port=4010
     
     
    ./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &
    

      

    Run etcd with this command on each host (adjusting the name and IP variables per host), since etcd needs to run in cluster mode. If it succeeds, you should see 'published {Name: *} to cluster *' in the output.
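
    To confirm the cluster actually formed, etcdctl can query it. A quick check, assuming the etcdctl binary that ships with etcd sits in the same directory, and using the client port from -listen-client-urls above:

    ./etcdctl --endpoints http://127.0.0.1:4011 cluster-health
    ./etcdctl --endpoints http://127.0.0.1:4011 member list

    A healthy cluster prints 'cluster is healthy' plus one 'member ... is healthy' line per host.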
     

    2, Setup Master

    2.1 Start Kubernetes 

    Run kube-apiserver:
    ./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 > apiserver.out 2>&1 &
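
    Before starting the other components, it is worth confirming the API server answers on its insecure port:

    curl http://127.0.0.1:8080/healthz

    This should print 'ok'.
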
    Run kube-controller-manager:
    ./kube-controller-manager --logtostderr=true --v=0 --master=http://tc-151-97:8080 --cloud-provider="" > controller.out 2>&1 &
    

    Run kube-scheduler:

    ./kube-scheduler --logtostderr=true --v=0 --master=http://tc-151-97:8080 > scheduler.out 2>&1 &
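
    With all three components up, the API server can report their health. A quick check, assuming kubectl sits in the same directory:

    ./kubectl -s http://tc-151-97:8080 get componentstatuses

    The scheduler, the controller-manager, and the etcd members should all report Healthy.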
    

    2.2 Install Calico on the Master

    sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
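
    This starts the calico-node container. To confirm it is running:

    docker ps | grep calico-node
    sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status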
    

      

    3, Setup Nodes

    3.1 Install Calico

    Since the nodes have no internet access, I downloaded the calico plugin manually from:
    https://github.com/projectcalico/calico-kubernetes/releases/tag/v0.6.0
    

    Move the plugin to the kubernetes plugin directory:

    sudo mv calico_kubernetes /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico
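
    The kubelet executes this file directly, so it must be executable; an extra step I would add here:

    sudo chmod +x /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico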
    

    Start Calico:

    sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
    

    3.2 Start kubelet with the calico network plugin

    Start kube-proxy, then start the kubelet with the --network-plugin parameter:
    ./kube-proxy --logtostderr=true --v=0 --master=http://tc-151-97:8080 --proxy-mode=iptables &
    ./kubelet --logtostderr=true --v=0 --api_servers=http://tc-151-97:8080 --address=0.0.0.0 --network-plugin=calico --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &
    

    Here is the kubelet command output:

    I1124 15:11:52.226324 28368 server.go:808] Watching apiserver
    I1124 15:11:52.393448 28368 plugins.go:56] Registering credential provider: .dockercfg
    I1124 15:11:52.398087 28368 server.go:770] Started kubelet
    E1124 15:11:52.398190 28368 kubelet.go:756] Image garbage collection failed: unable to find data for container /
    I1124 15:11:52.398165 28368 server.go:72] Starting to listen on 0.0.0.0:10250
    W1124 15:11:52.401695 28368 kubelet.go:775] Failed to move Kubelet to container "/kubelet": write /sys/fs/cgroup/memory/kubelet/memory.swappiness: invalid argument
    I1124 15:11:52.401748 28368 kubelet.go:777] Running in container "/kubelet"
    I1124 15:11:52.497377 28368 factory.go:194] System is using systemd
    I1124 15:11:52.610946 28368 kubelet.go:885] Node tc-151-100 was previously registered
    I1124 15:11:52.734788 28368 factory.go:236] Registering Docker factory
    I1124 15:11:52.735851 28368 factory.go:93] Registering Raw factory
    I1124 15:11:52.969060 28368 manager.go:1006] Started watching for new ooms in manager
    I1124 15:11:52.969114 28368 oomparser.go:199] OOM parser using kernel log file: "/var/log/messages"
    I1124 15:11:52.970296 28368 manager.go:250] Starting recovery of all containers
    I1124 15:11:53.148967 28368 manager.go:255] Recovery completed
    I1124 15:11:53.240408 28368 manager.go:104] Starting to sync pod status with apiserver
    I1124 15:11:53.240439 28368 kubelet.go:1953] Starting kubelet main sync loop.
    

      

    I do not know whether the kubelet is running correctly. Can someone tell me how to verify it?
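
    One way to check it, as a sketch (this assumes the kubelet's default local healthz port, 10248): list the nodes from the master, and hit the kubelet's health endpoint on the node itself:

    ./kubectl -s http://tc-151-97:8080 get nodes
    curl http://127.0.0.1:10248/healthz

    The node should be listed with status Ready, and the curl should return ok.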
     
    I repeated the same process on the other node.
     

    4, Create some pods and test.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: test-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: test-1
        spec:
          containers:
          - name: iperf
            image: 10.11.150.76:5000/openxxs/iperf:1.2
          nodeSelector:
            kubernetes.io/hostname: tc-151-100
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: test-2
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: test-2
        spec:
          containers:
          - name: iperf
            image: 10.11.150.76:5000/openxxs/iperf:1.2
          nodeSelector:
            kubernetes.io/hostname: tc-151-100
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: test-3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: test-3
        spec:
          containers:
          - name: iperf
            image: 10.11.150.76:5000/openxxs/iperf:1.2
          nodeSelector:
            kubernetes.io/hostname: tc-151-101
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: test-4
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: test-4
        spec:
          containers:
          - name: iperf
            image: 10.11.150.76:5000/openxxs/iperf:1.2
          nodeSelector:
            kubernetes.io/hostname: tc-151-101
    
    ./kubectl create -f test.yaml
    

    This command creates 4 pods: 2 on 10.11.151.100 and 2 on 10.11.151.101.

    [@tc_151_97 /home/domeos/openxxs/bin]# ./kubectl get pods
    NAME           READY     STATUS    RESTARTS   AGE
    test-1-1ztr2   1/1       Running   0          5m
    test-2-8p2sr   1/1       Running   0          5m
    test-3-1hkwa   1/1       Running   0          5m
    test-4-jbdbq   1/1       Running   0          5m
    

      

    [@tc-151-100 /home/domeos/openxxs/bin]# docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    6dfc83ec1d12 10.11.150.76:5000/openxxs/iperf:1.2 "/block" 6 minutes ago Up 6 minutes k8s_iperf.a4ede594_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_ca4496d0
    78087a93da00 10.11.150.76:5000/openxxs/iperf:1.2 "/block" 6 minutes ago Up 6 minutes k8s_iperf.a4ede594_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_330d815c
    f80a1474f4c4 10.11.150.76:5000/kubernetes/pause:latest "/pause" 6 minutes ago Up 6 minutes k8s_POD.34f4dfd2_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_af7199c0
    eb14879757e6 10.11.150.76:5000/kubernetes/pause:latest "/pause" 6 minutes ago Up 6 minutes k8s_POD.34f4dfd2_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_af2cc1c3
    8accff535ff9 calico/node:latest "/sbin/start_runit" 27 minutes ago Up 27 minutes calico-node
    On node 10.11.151.100, the calico status is:
    [@tc-151-100 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
    calico-node container is running. Status: Up 24 minutes
    Running felix version 1.2.0
     
    IPv4 BGP status
    +---------------+-------------------+-------+----------+------------------------------------+
    | Peer address  |     Peer type     | State |  Since   |                Info                |
    +---------------+-------------------+-------+----------+------------------------------------+
    | 10.11.151.101 | node-to-node mesh | start | 07:18:44 | Connect Socket: Connection refused |
    | 10.11.151.97  | node-to-node mesh | start | 07:07:40 | Active Socket: Connection refused  |
    +---------------+-------------------+-------+----------+------------------------------------+
     
    IPv6 BGP status
    +--------------+-----------+-------+-------+------+
    | Peer address | Peer type | State | Since | Info |
    +--------------+-----------+-------+-------+------+
    +--------------+-----------+-------+-------+------+ 
    However, on the other node, 10.11.151.101:
    [@tc-151-101 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
    calico-node container is running. Status: Up 2 minutes
    Running felix version 1.2.0
     
    IPv4 BGP status
    Unable to connect to server control socket (/etc/service/bird/bird.ctl): Connection refused
     
     
    IPv6 BGP status
    +--------------+-----------+-------+-------+------+
    | Peer address | Peer type | State | Since | Info |
    +--------------+-----------+-------+-------+------+
    +--------------+-----------+-------+-------+------+
    

    What has happened?
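
    My reading: 'Connection refused' from the mesh peers, together with the missing BIRD control socket on 10.11.151.101, suggests BIRD never started inside that node's calico-node container, so no BGP session on TCP port 179 can be established. Some checks I would try (a sketch, not from the original guide):

    docker logs calico-node
    sudo netstat -tlnp | grep ':179'
    nc -zv 10.11.151.101 179   # run this one from 10.11.151.100

    docker logs shows errors from the calico-node services, netstat shows whether BIRD is listening locally, and nc tests whether the peer's BGP port is reachable at all.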

     
    Moreover, there are no Calico IP routes on either node:
    [@tc-151-100 ~/baoquanwang/calico-docker-utils]$ ip route
    default via 10.11.151.254 dev em1 proto static metric 1024
    10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.100
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
    
    [@tc-151-101 ~/baoquanwang/calico-docker-utils]$ ip route
    default via 10.11.151.254 dev em1 proto static metric 1024
    10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.101
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
    There is no log output in /var/log/calico/kubernetes/calico.log.
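
    For comparison, when the mesh is healthy I would expect each node to carry routes to the other nodes' pod IP blocks learned from BIRD, plus a link-scoped route per local workload, roughly like this (illustrative addresses, assuming Calico's default 192.168.0.0/16 pool):

    192.168.0.64/26 via 10.11.151.101 dev em1 proto bird
    192.168.0.2 dev cali1a2b3c4d5e6 scope link

    Their absence is consistent with BGP never coming up.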