Deployment Plan
IP address     Hostname  OS          Disks
172.21.16.8    ceph1     CentOS 7.6  3 x 10 GB
172.21.16.10   ceph2     CentOS 7.6  3 x 10 GB
172.21.16.11   ceph3     CentOS 7.6  3 x 10 GB
Make sure the firewall and SELinux are disabled on all three machines.
Time Synchronization
[cephadmin@ceph1 ~]$ sudo yum -y install ntpdate ntp
[cephadmin@ceph1 ~]$ sudo ntpdate cn.ntp.org.cn
[cephadmin@ceph1 ~]$ sudo systemctl restart ntpd && sudo systemctl enable ntpd
Run on all three machines.
Add a Deployment User
It is best to create a dedicated user for deploying Ceph; do not use the ceph or root user (the ceph username is reserved for the Ceph daemons themselves). Run on all three machines.
[root@ceph1 ~]# useradd cephadmin
[root@ceph1 ~]# echo "cephadmin" | passwd --stdin cephadmin
[root@ceph1 ~]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/cephadmin
[root@ceph1 ~]# chmod 0440 /etc/sudoers.d/cephadmin
Configure sudo to Not Require a TTY
sed -i 's/^Defaults.*requiretty/#&/' /etc/sudoers
Set the Hostnames
Run on all three machines.
On the first machine:
[cephadmin@ceph1 ~]$ sudo hostnamectl set-hostname ceph1
On the second machine:
[cephadmin@ceph2 ~]$ sudo hostnamectl set-hostname ceph2
On the third machine:
[cephadmin@ceph3 ~]$ sudo hostnamectl set-hostname ceph3
Hostname Resolution
Run on all three machines.
[cephadmin@ceph1 ~]$ cat /etc/hosts
172.21.16.8 ceph1
172.21.16.10 ceph2
172.21.16.11 ceph3
Passwordless SSH Login
Run only on the deploy node (ceph1).
[cephadmin@ceph1 ~]$ ssh-keygen
[cephadmin@ceph1 ~]$ ssh-copy-id cephadmin@ceph1
[cephadmin@ceph1 ~]$ ssh-copy-id cephadmin@ceph2
[cephadmin@ceph1 ~]$ ssh-copy-id cephadmin@ceph3
Install Packages
Run on all three machines.
[root@ceph1 ~]# wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
[cephadmin@ceph1 ~]$ sudo yum install -y ceph ceph-radosgw
Run only on the deploy node (ceph1).
[cephadmin@ceph1 ~]$ sudo yum install -y ceph-deploy python-pip
Create the Ceph Cluster
Run on the deploy node (ceph1).
[cephadmin@ceph1 ~]$ mkdir my-cluster
[cephadmin@ceph1 ~]$ cd my-cluster
[cephadmin@ceph1 my-cluster]$ ceph-deploy new ceph1 ceph2 ceph3
[cephadmin@ceph1 my-cluster]$ vim ceph.conf
[global]
.....
public network = 172.21.16.0/20
cluster network = 172.21.16.0/20
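The public/cluster network must actually contain the node IPs from the deployment plan. As a sanity check, CIDR membership can be verified with plain shell arithmetic; a small sketch (the helper names are made up for illustration):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  oldifs=$IFS; IFS=.
  set -- $1
  IFS=$oldifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed if the address in $1 lies inside the CIDR network in $2.
in_cidr() {
  mask=$(( 0xFFFFFFFF << (32 - ${2#*/}) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "${2%/*}") & mask )) ]
}

in_cidr 172.21.16.8 172.21.16.0/20 && echo "ceph1 is inside the public network"
```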
Initialize the Cluster
[cephadmin@ceph1 my-cluster]$ ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3
[cephadmin@ceph1 my-cluster]$ ceph-deploy mon create-initial          # deploy the initial monitors and gather all keys
[cephadmin@ceph1 my-cluster]$ ceph-deploy admin ceph1 ceph2 ceph3     # push the config and admin key to each node
[cephadmin@ceph1 ~]$ sudo chown -R cephadmin:cephadmin /etc/ceph      # run on all three machines
[cephadmin@ceph1 ~]$ ceph -s
  cluster:
    id:     d64f9fbf-b948-4e50-84f0-b30073161ef6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3
    mgr: ceph1(active), standbys: ceph2, ceph3
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   9.04GiB used, 80.9GiB / 90.0GiB avail
    pgs:
(This ceph -s output was captured after the OSDs and mgr daemons below were deployed, which is why it already shows 9 OSDs and 3 mgrs.)
Configure the OSDs
for dev in /dev/vdb /dev/vdc /dev/vdd; do
  for node in ceph1 ceph2 ceph3; do
    ceph-deploy disk zap $node $dev
    ceph-deploy osd create $node --data $dev
  done
done
Deploy the mgr Daemons
Only needed for the Luminous (L) release and later.
ceph-deploy mgr create ceph1 ceph2 ceph3
Enable the dashboard Module
[cephadmin@ceph1 ~]$ ceph mgr module enable dashboard
Verification
Client Configuration
The following steps run on a separate client machine (VM-0-12-centos). They assume a cephx user named client.rbd exists and that its keyring (/etc/ceph/ceph.client.rbd.keyring) and ceph.conf have been copied to the client.
Create a Pool
[root@ceph1 ~]# ceph osd pool create rbd 64
pool 'rbd' created
[root@ceph1 ~]# ceph osd lspools
1 rbd,2 mypool,
Choosing a pg_num value is mandatory, because it cannot be calculated automatically. Some commonly used values (total PGs):
• With fewer than 5 OSDs, set pg_num to 128
• With 5 to 10 OSDs, set pg_num to 512
• With 10 to 50 OSDs, set pg_num to 4096
• With more than 50 OSDs, you need to understand the tradeoffs and calculate pg_num yourself
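These values can also be approximated from a common rule of thumb (roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two). This is a heuristic, not an official formula, and the helper below is just a sketch of it:

```shell
#!/bin/sh
# Rule-of-thumb pg_num: (OSDs * 100 / replicas), rounded up to a power of two.
pg_num_hint() {
  target=$(( $1 * 100 / $2 ))
  pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

pg_num_hint 9 3    # the 9-OSD / 3-replica cluster above -> 512
```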
Create a Block Device
[root@VM-0-12-centos ceph]# rbd create pv1 --size 2048 --name client.rbd
View Block Device Information
[root@VM-0-12-centos ceph]# rbd ls --name client.rbd
pv1
rbd1

rbd ls --name client.rbd           # list all images
rbd ls -p rbd --name client.rbd    # -p specifies the pool name
rbd list --name client.rbd
[root@VM-0-12-centos ceph]# rbd --image pv1 info --name client.rbd
rbd image 'pv1':
    size 2GiB in 512 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.10c36b8b4567
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Sep 7 17:45:26 2020
Map the block device (this step may fail with an error)
[root@VM-0-12-centos ceph]# rbd map --image pv1 --name client.rbd
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable pv1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
The cause is that the rbd module in CentOS 7's 3.10 kernel does not support some of the newer image features (object-map, fast-diff, deep-flatten). The workarounds are:
1. Disable the features dynamically
[root@VM-0-12-centos ceph]# rbd feature disable pv1 exclusive-lock object-map deep-flatten fast-diff --name client.rbd
Feature reference:
layering: layering support
exclusive-lock: exclusive locking support
object-map: object map support (requires exclusive-lock)
deep-flatten: snapshot flatten support
fast-diff: fast diff calculation (requires object-map)
2. Enable only the layering feature when creating the RBD image
[root@VM-0-12-centos ceph]# rbd create pv1 --size 2048 --image-feature layering --name client.rbd
3. Add the following line to the configuration file
[root@VM-0-12-centos ceph]# cat /etc/ceph/ceph.conf
[global]
....
rbd_default_features = 1
....
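The value 1 comes from librbd's feature bitmask: each feature is a bit, and rbd_default_features is their sum. The bit values below are taken from the Ceph source (worth verifying against your release):

```shell
#!/bin/sh
# librbd feature bits: layering=1, striping=2, exclusive-lock=4,
# object-map=8, fast-diff=16, deep-flatten=32, journaling=64.
layering=1; exclusive_lock=4; object_map=8; fast_diff=16; deep_flatten=32

# The upstream default (61) is the sum of these five feature bits:
echo $(( layering + exclusive_lock + object_map + fast_diff + deep_flatten ))   # 61
```

Setting rbd_default_features = 1 therefore gives new images the layering feature only, which the CentOS 7 kernel client supports.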
4. Dynamically disabling the block device's features again
[root@VM-0-12-centos ceph]# rbd feature disable pv1 exclusive-lock object-map deep-flatten fast-diff -n client.rbd
rbd: failed to update image features: (22) Invalid argument
2020-09-07 17:59:13.888311 7f807aca1d40 -1 librbd::Operations: one or more requested features are already disabled
This error simply means the features were already disabled in step 1.
Map the Block Device
[root@VM-0-12-centos ceph]# rbd map --image pv1 --name client.rbd
/dev/rbd0
View the Mapping
[root@VM-0-12-centos ceph]# rbd showmapped --name client.rbd
id pool image snap device
0  rbd  pv1   -    /dev/rbd0
Create a Filesystem and Mount It
[root@VM-0-12-centos ceph]# fdisk -l
......
Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
[root@VM-0-12-centos ceph]# fdisk /dev/rbd0
[root@VM-0-12-centos ceph]# mkfs.xfs /dev/rbd0p1
[root@VM-0-12-centos ceph]# mount /dev/rbd0p1 /data/
[root@VM-0-12-centos ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G   24K  1.9G   1% /dev/shm
tmpfs           1.9G  516K  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1        50G  2.4G   45G   6% /
tmpfs           379M     0  379M   0% /run/user/0
/dev/rbd0p1     2.0G   33M  2.0G   2% /data
Verify It
[root@VM-0-12-centos data]# dd if=/dev/zero of=./test.db count=50 bs=10M
50+0 records in
50+0 records out
524288000 bytes (524 MB) copied, 0.311465 s, 1.7 GB/s
[root@VM-0-12-centos data]# ls
test.db
[root@VM-0-12-centos data]# du -sh test.db
500M test.db
For comparison, I ran the same test on a standalone machine:
[root@VM-0-15-centos ~]# dd if=/dev/zero of=./test.db count=50 bs=10M
50+0 records in
50+0 records out
524288000 bytes (524 MB) copied, 0.602673 s, 870 MB/s
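The throughput dd reports is just bytes divided by elapsed time; the RBD figure above can be reproduced with integer arithmetic (0.311465 s = 311465 µs, and bytes per microsecond equals MB/s):

```shell
#!/bin/sh
# 524288000 bytes in 311465 microseconds -> MB/s (bytes per microsecond).
echo $(( 524288000 / 311465 ))   # ~1683 MB/s, i.e. the 1.7 GB/s dd printed
```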
Set Up Automatic Mounting
[root@VM-0-12-centos data]# wget -O /usr/local/bin/rbd-mount https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount
[root@VM-0-12-centos data]# vim /usr/local/bin/rbd-mount
......
rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
[root@VM-0-12-centos data]# chmod +x /usr/local/bin/rbd-mount
[root@VM-0-12-centos data]# wget -O /etc/systemd/system/rbd-mount.service https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount.service
[root@VM-0-12-centos data]# systemctl daemon-reload
[root@VM-0-12-centos data]# systemctl enable rbd-mount.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rbd-mount.service to /etc/systemd/system/rbd-mount.service.
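The downloaded rbd-mount.service is not reproduced above. For reference, here is a minimal sketch of what such a oneshot unit typically looks like; the ExecStart/ExecStop arguments are assumptions, and the actual file from the repo may differ:

```shell
#!/bin/sh
# Write an illustrative unit file (to a temp path here; the real one lives
# in /etc/systemd/system/rbd-mount.service).
cat > /tmp/rbd-mount.service <<'EOF'
[Unit]
Description=Map an RBD image and mount its filesystem
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount mount
ExecStop=/usr/local/bin/rbd-mount unmount

[Install]
WantedBy=multi-user.target
EOF
```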
Reboot the machine to verify!