Every Ceph deployment starts with a Ceph Storage Cluster. A Ceph cluster can contain thousands of storage nodes, while a minimal system needs at least one monitor and two OSDs to replicate data. The Ceph Filesystem, Ceph Object Storage, and Ceph Block Device all read data from and write data to the Ceph Storage Cluster.
(Figure: Ceph architecture diagram)
1. Cluster layout
| Node | IP | Roles |
|---|---|---|
| ceph01 | 0 | deploy,mon,osd*2,mds |
| ceph02 | 1 | mon,osd*2 |
| ceph03 | 2 | mon,osd*2 |
2. OS version
```
# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
```
3. Attach two extra 20 GB disks to each of the three nodes
```
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0      2:0    1    4K  0 disk
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  476M  0 part /boot
└─sda2   8:2    0 19.5G  0 part /
sdb      8:16   0   20G  0 disk
sdc      8:32   0   23G  0 disk
sr0     11:0    1 1024M  0 rom
```
4. Disable SELinux and the firewall
```
# vim /etc/selinux/config
SELINUX=disabled
# setenforce 0
# systemctl stop firewalld
# systemctl disable firewalld
```
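As an alternative to editing the file interactively, the same SELinux change can be scripted; a minimal sketch, assuming the stock CentOS 7 `/etc/selinux/config` layout:

```
# Apply the permanent SELinux change without opening an editor (fully effective after reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Confirm the runtime mode and the configured mode
getenforce
grep '^SELINUX=' /etc/selinux/config
```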
5. Add hosts entries
```
# vim /etc/hosts
0 ceph01
1 ceph02
2 ceph03
```
6. Set up passwordless SSH login
```
# ssh-keygen
# ssh-copy-id ceph01
# ssh-copy-id ceph02
# ssh-copy-id ceph03
```
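Before running ceph-deploy it is worth confirming that key-based login really works from the deploy node (ceph01) to every host; a quick check could look like this, assuming the same root user on all nodes as in the rest of this guide:

```
# Each command should print the remote hostname without asking for a password
for host in ceph01 ceph02 ceph03; do
    ssh -o BatchMode=yes "$host" hostname
done
```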
7. Synchronize time
```
# yum install -y ntp ntpdate
# ntpdate pool.ntp.org
```
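ntpdate only sets the clock once; since the MONs are sensitive to clock drift (see the mon_clock_drift_allowed change later in this guide), it helps to keep ntpd running on every node, for example:

```
# Keep the clock in sync continuously, not just once at install time
systemctl enable ntpd
systemctl start ntpd
# The peer marked with '*' is the server ntpd is currently synced to
ntpq -p
```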
8. Add the yum repository
```
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
```
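Note that the `priority=2` lines in this repo file only take effect when the yum priorities plugin is installed; a short sketch to install it and rebuild the metadata cache (package name is the standard CentOS 7 one):

```
# The priority= setting in ceph.repo is honoured only with this plugin
yum install -y yum-plugin-priorities
# Rebuild the metadata cache and confirm the new repos are visible
yum clean all
yum makecache
yum repolist | grep -i ceph
```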
9. Import the release key
```
# rpm --import 'https://download.ceph.com/keys/release.asc'
```
10. Install the Ceph client packages
```
# yum install -y ceph ceph-radosgw rdate
```
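A quick sanity check after installation is to confirm that every node got the Jewel packages from the repo added above rather than something older:

```
# Jewel packages report a 10.2.x version string
ceph --version
```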
11. Install ceph-deploy
```
# yum update -y
# yum install -y ceph-deploy
```
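ceph-deploy is only needed on the admin node (ceph01 in the table above); checking that the tool runs before creating the cluster avoids surprises later:

```
# A version string confirms ceph-deploy is installed and usable
ceph-deploy --version
```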
12. Create the cluster
```
# mkdir -pv /opt/cluster
# cd /opt/cluster
# ceph-deploy new ceph01 ceph02 ceph03
# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
```
13. Edit the configuration file: add public_network and slightly raise the allowed clock drift between MONs (default 0.05 s, changed to 2 s here)
```
# vim ceph.conf
[global]
fsid = 79aa9d8d-65e4-4a9d-84c4-50dbd3db337e
mon_initial_members = ceph01, ceph02
mon_host = 192.168.135.163,192.168.135.164
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 0.0.0.0/24
mon_clock_drift_allowed = 2
```
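If ceph.conf needs further changes after the MONs have been created, the updated file can be pushed out from the same /opt/cluster directory with ceph-deploy rather than editing each node by hand; a sketch:

```
# Push the edited ceph.conf to every node, overwriting their local copies
ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03
```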
14. Deploy the MONs
```
# ceph-deploy mon create-initial
# ceph -s
    cluster 6fb69a7a-647a-4cb6-89ad-583729eb0406
     health HEALTH_ERR
            no osds
     monmap e1: 3 mons at {ceph01=0:6789/0,ceph02=1:6789/0,ceph03=2:6789/0}
            election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
```
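Before moving on to the OSDs, the monitor quorum can also be checked directly with a couple of read-only commands, for example:

```
# Summary of the monitor map and which MONs are in quorum
ceph mon stat
# Detailed election/quorum state as JSON
ceph quorum_status --format json-pretty
```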
15. Deploy the OSDs
```
# ceph-deploy disk zap ceph01:sdb ceph01:sdc
# ceph-deploy disk zap ceph02:sdb ceph02:sdc
# ceph-deploy disk zap ceph03:sdb ceph03:sdc
# ceph-deploy osd prepare ceph01:sdb:sdc
# ceph-deploy osd prepare ceph02:sdb:sdc
# ceph-deploy osd prepare ceph03:sdb:sdc
# ceph-deploy osd activate ceph01:sdb1:sdc1
# ceph-deploy osd activate ceph02:sdb1:sdc1
# ceph-deploy osd activate ceph03:sdb1:sdc1
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0      2:0    1    4K  0 disk
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  476M  0 part /boot
└─sda2   8:2    0 19.5G  0 part /
sdb      8:16   0   20G  0 disk
└─sdb1   8:17   0   20G  0 part /var/lib/ceph/osd/ceph-0
sdc      8:32   0   23G  0 disk
└─sdc1   8:33   0    5G  0 part
sr0     11:0    1 1024M  0 rom
```
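Besides lsblk on each host, the cluster's own CRUSH view confirms that the OSDs came up and were placed under the right hosts:

```
# OSDs grouped by host, with their up/in state and CRUSH weights
ceph osd tree
# Per-OSD capacity and utilisation
ceph osd df
```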
16. Check the cluster status again; if all PGs are active+clean, the cluster is healthy.
```
# ceph -s
    cluster 6fb69a7a-647a-4cb6-89ad-583729eb0406
     health HEALTH_OK
     monmap e1: 3 mons at {ceph01=0:6789/0,ceph02=1:6789/0,ceph03=2:6789/0}
            election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v32: 64 pgs, 1 pools, 0 bytes data, 0 objects
            101 MB used, 61305 MB / 61406 MB avail
                  64 active+clean
```
At this point the cluster installation is complete.
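As a final smoke test, a small object can be written into and read back from the default rbd pool (the single pool visible in the ceph -s output above) with the rados CLI; the object and file names here are only examples:

```
# Write a small test object into the default 'rbd' pool and list it
rados -p rbd put test-object /etc/hosts
rados -p rbd ls
# Read it back and clean up
rados -p rbd get test-object /tmp/test-object.out
rados -p rbd rm test-object
# Cluster-wide and per-pool usage after the test
ceph df
```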

