
    Hands-On | TiDB Operator in Practice

    北京it爷们儿 · JD Cloud Developer Community · 4 days ago

    K8s and TiDB are both active open-source products in the community, and TiDB Operator is a project for orchestrating and managing TiDB clusters on K8s. This article records in detail the process of deploying K8s and installing TiDB Operator, in the hope of helping readers who have just gotten started.

    1. Environment

    Ubuntu 16.04
    K8s 1.14.1

    2. Installing K8s with Kubespray

    Configure passwordless SSH login

    yum -y install expect
    
    • vi /tmp/autocopy.exp
    #!/usr/bin/expect

    # wait indefinitely for prompts
    set timeout -1
    set user_hostname [lindex $argv 0]
    set password [lindex $argv 1]
    spawn ssh-copy-id $user_hostname
    expect {
        "(yes/no)?"
        {
            send "yes\r"
            expect "*assword:" { send "$password\r" }
        }
        "*assword:"
        {
            send "$password\r"
        }
    }
    expect eof
    
    ssh-keyscan addedip >> ~/.ssh/known_hosts

    ssh-keygen -t rsa -P ''

    for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh-keyscan $i >> ~/.ssh/known_hosts ; done

    /tmp/autocopy.exp root@addedip
    ssh-copy-id addedip

    # usage: /tmp/autocopy.exp user@host password
    # replace yourpassword with the actual root password
    /tmp/autocopy.exp root@10.0.0.31 yourpassword
    /tmp/autocopy.exp root@10.0.0.32 yourpassword
    /tmp/autocopy.exp root@10.0.0.33 yourpassword
    /tmp/autocopy.exp root@10.0.0.40 yourpassword
    /tmp/autocopy.exp root@10.0.0.10 yourpassword
    /tmp/autocopy.exp root@10.0.0.20 yourpassword
    /tmp/autocopy.exp root@10.0.0.50 yourpassword
    

    Configure Kubespray
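    The steps below run from the root of the Kubespray source tree; a minimal sketch of that prerequisite (the release tag is an assumption — pick one whose default kube_version is v1.14.1):

    git clone https://github.com/kubernetes-sigs/kubespray.git
    cd kubespray
    # assumption: the v2.10.x line targets K8s v1.14
    git checkout v2.10.0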

    pip install -r requirements.txt
    cp -rfp inventory/sample inventory/mycluster
    
    • inventory/mycluster/inventory.ini


    # ## Configure 'ip' variable to bind kubernetes services on a
    # ## different ip than the default iface
    # ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
    [all]
    # node1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
    # node2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
    # node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
    # node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
    # node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
    # node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6
    etcd1 ansible_host=10.0.0.31 etcd_member_name=etcd1
    etcd2 ansible_host=10.0.0.32 etcd_member_name=etcd2
    etcd3 ansible_host=10.0.0.33 etcd_member_name=etcd3
    master1 ansible_host=10.0.0.40
    node1 ansible_host=10.0.0.10
    node2 ansible_host=10.0.0.20
    node3 ansible_host=10.0.0.50

    # ## configure a bastion host if your nodes are not directly reachable
    # bastion ansible_host=x.x.x.x ansible_user=some_user

    [kube-master]
    # node1
    # node2
    master1

    [etcd]
    # node1
    # node2
    # node3
    etcd1
    etcd2
    etcd3

    [kube-node]
    # node2
    # node3
    # node4
    # node5
    # node6
    node1
    node2
    node3

    [k8s-cluster:children]
    kube-master
    kube-node
    

    Files and images required by the nodes

    Some images cannot be pulled from within China, so they must first be downloaded through a proxy and then pushed to a local registry or Docker Hub, with the configuration files changed to match. A few components are hosted at https://storage.googleapis.com, so an Nginx server must be set up to distribute those files.

    Set up the Nginx server

    • Install Docker and Docker Compose

    • Create ~/distribution/docker-compose.yml

    • Create the file directory and the Nginx configuration directory

    • ~/distribution/conf.d/open_distribute.conf

    • Start the server

    • Download and upload the required files; for the exact version numbers, see the kubeadm_version, kube_version and image_arch parameters in roles/download/defaults/main.yml

    apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"

    apt-get update

    apt-get install docker-ce docker-ce-cli containerd.io

    # download docker-compose first, then make it executable
    sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
    
    • Create the Nginx docker-compose.yml

    mkdir ~/distribution
    vi ~/distribution/docker-compose.yml
    
    #  distribute
    version: '2'
    services:
        distribute:
            image: nginx:1.15.12
            volumes:
                - ./conf.d:/etc/nginx/conf.d
                - ./distributedfiles:/usr/share/nginx/html
            network_mode: "host"
            container_name: nginx_distribute
    
    mkdir ~/distribution/distributedfiles
    mkdir ~/distribution/conf.d
    vi ~/distribution/conf.d/open_distribute.conf
    
    #open_distribute.conf

    server {
        #server_name distribute.search.leju.com;
        listen 8888;

        root /usr/share/nginx/html;

        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Headers X-Requested-With;
        add_header Access-Control-Allow-Methods GET,POST,OPTIONS;

        location / {
        #    index index.html;
            autoindex on;
        }
        expires off;
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|eot|ttf|woff|woff2|svg)$ {
            expires -1;
        }

        location ~ .*\.(js|css)?$ {
            expires -1;
        }
    } # end of public static files domain : [ distribute.search.leju.com ]
    
    docker-compose up -d
    
    wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm

    scp /tmp/kubeadm 10.0.0.60:/root/distribution/distributedfiles

    wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/hyperkube

    scp /tmp/hyperkube 10.0.0.60:/root/distribution/distributedfiles
    
    • Images that need to be downloaded and pushed to a private registry
    docker pull k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0
    docker tag k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0 jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0
    docker push jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0

    docker pull k8s.gcr.io/k8s-dns-node-cache:1.15.1
    docker tag k8s.gcr.io/k8s-dns-node-cache:1.15.1 jiashiwen/k8s-dns-node-cache:1.15.1
    docker push jiashiwen/k8s-dns-node-cache:1.15.1

    docker pull gcr.io/google_containers/pause-amd64:3.1
    docker tag gcr.io/google_containers/pause-amd64:3.1 jiashiwen/pause-amd64:3.1
    docker push jiashiwen/pause-amd64:3.1

    docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
    docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 jiashiwen/kubernetes-dashboard-amd64:v1.10.1
    docker push jiashiwen/kubernetes-dashboard-amd64:v1.10.1

    docker pull gcr.io/google_containers/kube-apiserver:v1.14.1
    docker tag gcr.io/google_containers/kube-apiserver:v1.14.1 jiashiwen/kube-apiserver:v1.14.1
    docker push jiashiwen/kube-apiserver:v1.14.1

    docker pull gcr.io/google_containers/kube-controller-manager:v1.14.1
    docker tag gcr.io/google_containers/kube-controller-manager:v1.14.1 jiashiwen/kube-controller-manager:v1.14.1
    docker push jiashiwen/kube-controller-manager:v1.14.1

    docker pull gcr.io/google_containers/kube-scheduler:v1.14.1
    docker tag gcr.io/google_containers/kube-scheduler:v1.14.1 jiashiwen/kube-scheduler:v1.14.1
    docker push jiashiwen/kube-scheduler:v1.14.1

    docker pull gcr.io/google_containers/kube-proxy:v1.14.1
    docker tag gcr.io/google_containers/kube-proxy:v1.14.1 jiashiwen/kube-proxy:v1.14.1
    docker push jiashiwen/kube-proxy:v1.14.1

    docker pull gcr.io/google_containers/pause:3.1
    docker tag gcr.io/google_containers/pause:3.1 jiashiwen/pause:3.1
    docker push jiashiwen/pause:3.1

    docker pull gcr.io/google_containers/coredns:1.3.1
    docker tag gcr.io/google_containers/coredns:1.3.1 jiashiwen/coredns:1.3.1
    docker push jiashiwen/coredns:1.3.1
    
    • A script to pull and push the images
    #!/bin/bash

    privaterepo=jiashiwen

    k8sgcrimages=(
    cluster-proportional-autoscaler-amd64:1.4.0
    k8s-dns-node-cache:1.15.1
    )

    gcrimages=(
    pause-amd64:3.1
    kubernetes-dashboard-amd64:v1.10.1
    kube-apiserver:v1.14.1
    kube-controller-manager:v1.14.1
    kube-scheduler:v1.14.1
    kube-proxy:v1.14.1
    pause:3.1
    coredns:1.3.1
    )

    for k8sgcrimageName in ${k8sgcrimages[@]} ; do
        echo $k8sgcrimageName
        docker pull k8s.gcr.io/$k8sgcrimageName
        docker tag k8s.gcr.io/$k8sgcrimageName $privaterepo/$k8sgcrimageName
        docker push $privaterepo/$k8sgcrimageName
    done

    for gcrimageName in ${gcrimages[@]} ; do
        echo $gcrimageName
        docker pull gcr.io/google_containers/$gcrimageName
        docker tag gcr.io/google_containers/$gcrimageName $privaterepo/$gcrimageName
        docker push $privaterepo/$gcrimageName
    done
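    Saved under a name of your choosing (sync_images.sh here is hypothetical), the script mirrors everything in one shot from a host that can reach gcr.io:

    chmod +x sync_images.sh
    ./sync_images.sh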
    
    • Modify inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml to change the K8s image repository
    # kube_image_repo: "gcr.io/google-containers"
    kube_image_repo: "jiashiwen"
    
    • Modify roles/download/defaults/main.yml
    #dnsautoscaler_image_repo: "k8s.gcr.io/cluster-proportional-autoscaler-{{ image_arch }}"
    dnsautoscaler_image_repo: "jiashiwen/cluster-proportional-autoscaler-{{ image_arch }}"

    #kube_image_repo: "gcr.io/google-containers"
    kube_image_repo: "jiashiwen"

    #pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"
    pod_infra_image_repo: "jiashiwen/pause-{{ image_arch }}"

    #dashboard_image_repo: "gcr.io/google_containers/kubernetes-dashboard-{{ image_arch }}"
    dashboard_image_repo: "jiashiwen/kubernetes-dashboard-{{ image_arch }}"

    #nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
    nodelocaldns_image_repo: "jiashiwen/k8s-dns-node-cache"

    #kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
    kubeadm_download_url: "http://10.0.0.60:8888/kubeadm"

    #hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/hyperkube"
    hyperkube_download_url: "http://10.0.0.60:8888/hyperkube"
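    The kubeadm_version, kube_version and image_arch values referenced earlier live in this same file; a quick way to confirm the values the playbook will use (run from the Kubespray root):

    grep -E 'kubeadm_version|kube_version|image_arch' roles/download/defaults/main.yml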
    

    3. Run the installation

    • Install command
    ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml
    
    • Reset command
    ansible-playbook -i inventory/mycluster/inventory.ini reset.yml
    

    4. Verify the K8s cluster

    Install kubectl

    • Open https://storage.googleapis.com/kubernetes-release/release/stable.txt in a local browser to get the latest version, v1.14.1

    • Substitute the version v1.14.1 obtained in the previous step for $(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt) in the download URL, giving the actual download address https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
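    Combined, the two steps look like this (a sketch; run from a host that can reach storage.googleapis.com):

    # fetch the latest stable version string (v1.14.1 at the time of writing)
    VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
    # download the matching kubectl binary
    wget "https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl" -O /tmp/kubectl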

    • Upload the downloaded kubectl

    scp /tmp/kubectl root@xxx:/root
    
    • Make it executable and move it into the PATH

    chmod +x ./kubectl
    mv ./kubectl /usr/local/bin/kubectl
    
    • Ubuntu
    
    sudo snap install kubectl --classic
    
    
    • CentOS

    Copy the ~/.kube/config file from the master node to each client that needs to access the cluster:

    scp 10.0.0.40:/root/.kube/config ~/.kube/config
    

    Run commands to verify the cluster

    kubectl get nodes
    kubectl cluster-info
    

    5. Deploying TiDB Operator

    Install Helm

    Reference: https://blog.csdn.net/bbwangj/article/details/81087911

    • Install Helm
    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
    chmod 700 get_helm.sh
    ./get_helm.sh
    
    • Check the Helm version
    helm version
    
    • Initialize (using an Alibaba Cloud mirror for the Tiller image and the stable chart repository)
    helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
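    If initialization succeeded, the Tiller pod should come up in kube-system; a quick check (label selector per the standard Tiller deployment):

    kubectl get pods -n kube-system -l app=helm,name=tiller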
    

    Provide local volumes for K8s

    • Reference: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md
      When tidb-operator starts, it binds PVs for pd and tikv, so multiple directories need to be created under the discovery directory.

    • Format and mount the disk

    mkfs.ext4 /dev/vdb
    DISK_UUID=$(blkid -s UUID -o value /dev/vdb)
    mkdir /mnt/$DISK_UUID
    mount -t ext4 /dev/vdb /mnt/$DISK_UUID
    
    • Persist the mount via /etc/fstab
    echo UUID=`sudo blkid -s UUID -o value /dev/vdb` /mnt/$DISK_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
    
    • Create multiple directories and bind-mount them into the discovery directory
    for i in $(seq 1 10); do
        sudo mkdir -p /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
        sudo mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
    done
    
    • Persist the bind mounts via /etc/fstab
    for i in $(seq 1 10); do
        echo /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i} none bind 0 0 | sudo tee -a /etc/fstab
    done
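    Before deploying the provisioner, it is worth confirming that all ten bind mounts are in place (reusing the DISK_UUID variable from above):

    mount | grep "/mnt/disks/${DISK_UUID}_vol"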
    
    • Create the local-volume-provisioner for tidb-operator
    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
    kubectl get po -n kube-system -l app=local-volume-provisioner
    kubectl get pv --all-namespaces | grep local-storage
    

    6. Install TiDB Operator

    • The project uses gcr.io/google-containers/hyperkube, which is not reachable from within China. The simple workaround is to re-push the image to Docker Hub and then modify charts/tidb-operator/values.yaml, as sketched below.
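    A sketch of the re-push, assuming a Docker Hub repository named yourrepo (the placeholder used in values.yaml below); the tag is an assumption chosen to match the cluster's K8s version:

    # assumption: tag v1.14.1 to match the deployed K8s version
    docker pull gcr.io/google-containers/hyperkube:v1.14.1
    docker tag gcr.io/google-containers/hyperkube:v1.14.1 yourrepo/hyperkube:v1.14.1
    docker push yourrepo/hyperkube:v1.14.1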
    scheduler:
      # With rbac.create=false, the user is responsible for creating this account
      # With rbac.create=true, this service account will be created
      # Also see rbac.create and clusterScoped
      serviceAccount: tidb-scheduler
      logLevel: 2
      replicas: 1
      schedulerName: tidb-scheduler
      resources:
        limits:
          cpu: 250m
          memory: 150Mi
        requests:
          cpu: 80m
          memory: 50Mi
      # kubeSchedulerImageName: gcr.io/google-containers/hyperkube
      kubeSchedulerImageName: yourrepo/hyperkube
      # This will default to matching your kubernetes version
      # kubeSchedulerImageTag: latest
    
    • TiDB Operator uses CRDs to extend Kubernetes, so the first step in using TiDB Operator is to create the TidbCluster custom resource type.
    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
    kubectl get crd tidbclusters.pingcap.com
    
    • Install TiDB Operator
    git clone https://github.com/pingcap/tidb-operator.git
    cd tidb-operator
    helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
    kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
    

    7. Deploy TiDB

    helm install charts/tidb-cluster --name=demo --namespace=tidb
    watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
    

    8. Verification

    Install the MySQL client

    • Reference: https://dev.mysql.com/doc/refman/8.0/en/linux-installation.html

    • CentOS install

    wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
    yum localinstall mysql80-community-release-el7-3.noarch.rpm -y
    yum repolist all | grep mysql
    yum-config-manager --disable mysql80-community
    yum-config-manager --enable mysql57-community
    yum install mysql-community-client
    
    • Ubuntu install
    wget https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb
    dpkg -i mysql-apt-config_0.8.13-1_all.deb
    apt update

    # select the MySQL version
    dpkg-reconfigure mysql-apt-config
    apt install mysql-client -y
    

    9. Map the TiDB port

    • Check the TiDB service
    kubectl get svc --all-namespaces
    
    • Forward the TiDB port
    # local access only
    kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb

    # access from other hosts
    kubectl port-forward --address 0.0.0.0 svc/demo-tidb 4000:4000 --namespace=tidb
    
    • Log in to MySQL for the first time
    mysql -h 127.0.0.1 -P 4000 -u root -D test
    
    • Change the TiDB root password
    SET PASSWORD FOR 'root'@'%' = 'wD3cLpyO5M'; FLUSH PRIVILEGES;
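    Reconnecting with the new password confirms the change took effect:

    mysql -h 127.0.0.1 -P 4000 -u root -p
    # enter the new password (wD3cLpyO5M above) at the prompt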
    

    Pitfalls encountered

    1. Installing K8s in China

    Most K8s images live on gcr.io, which is not reachable from within China. The basic approach is to import the images into Docker Hub or a private registry; the K8s deployment section above covers this in detail, so it is not repeated here.

    2. TiDB Operator local storage configuration

    When the Operator starts a cluster, pd and TiKV must bind local storage. If there are not enough mount points, pods cannot find a PV to bind during startup and sit in Pending or ContainerCreating indefinitely. For details, see the section "Sharing a disk filesystem by multiple filesystem PVs" in https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md: bind multiple mount directories to the same disk so the Operator has enough PVs to bind.
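    When this happens, the stuck pods and their scheduling events are easy to spot (the pod name demo-pd-0 below follows the naming of the demo cluster deployed earlier):

    # list pods stuck waiting for storage
    kubectl get pods -n tidb | grep -E 'Pending|ContainerCreating'
    # inspect the events of one stuck pod
    kubectl describe pod demo-pd-0 -n tidb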

    3. MySQL client version issue

    TiDB currently supports only the MySQL 5.7 client; an 8.0 client fails with ERROR 1105 (HY000): Unknown charset id 255.
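    Checking the installed client version before connecting avoids the surprise (a 5.7 client reports Distrib 5.7.x):

    mysql --version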


    Reposted from the WeChat public account 北京IT爷们儿.
