This lab demonstrates how to quickly deploy a Ceph distributed storage cluster on CentOS. The goal is to build a two-node cluster on ceph-node1 and ceph-node2.
Environment
Hardware environment
Hostname | IP | Role | Notes |
ceph-node1 | 192.168.1.120 | deploy, mon*1, osd*3 | |
ceph-node2 | 192.168.1.121 | mon*1, osd*3 | |
ceph-node3 | 192.168.1.122 | scale-out node | |
cloud | 192.168.1.102 | OpenStack Ocata | |
test | 192.168.1.123 | OpenStack test environment, Rally, Shaker | |
Software environment
Operating system: CentOS 7.3
OpenStack: Ocata
Ceph: Jewel
Installing Ceph
Prepare the repositories
Set up the following repositories on all Ceph nodes.
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
Create /etc/yum.repos.d/ceph.repo (vi /etc/yum.repos.d/ceph.repo) with the following content:
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
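The repo file above can also be generated non-interactively. A minimal sketch that writes the same content with a heredoc; it writes to the current directory here, so copy the result to /etc/yum.repos.d/ on each node:

```shell
# Write the ceph.repo shown above without opening an editor.
# Writes to the current directory; move it to /etc/yum.repos.d/ on the nodes.
cat > ceph.repo <<'EOF'
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF
```

The quoted 'EOF' delimiter prevents the shell from expanding anything inside the heredoc, so the file is written verbatim.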
yum update -y
Operating system configuration
1: Passwordless SSH access
Run on the deploy node (ceph-node1). This sets up passwordless SSH from the deploy node to the other Ceph nodes.
sudo su -
ssh-keygen
ssh-copy-id root@ceph-node2
ssh-copy-id root@cloud
2: Open the Ceph monitor and OSD ports; run on all Ceph nodes
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-all
3: Disable SELinux; run on all Ceph nodes
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # setenforce only lasts until reboot; persist the change
4: Install NTP; run on all Ceph nodes
yum install ntp ntpdate -y
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service ntpdate.service
Deploying the Ceph cluster
1: Install ceph-deploy
yum install ceph-deploy -y
2: Create the Ceph cluster with ceph-deploy
mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new ceph-node1
This defines a new Ceph cluster, generating the cluster configuration file and the monitor keyring.
3: Install the Ceph binary packages
ceph-deploy install --no-adjust-repos ceph-node1 ceph-node2
4: Edit the Ceph configuration file (/etc/ceph/ceph.conf)
[global]
fsid = 7bac6963-0e1d-4cea-9e2e-f02bbae96ba7
mon_initial_members = ceph-node1
mon_host = 192.168.1.120
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
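Note that this is a two-node cluster while Ceph's default pool replica count is 3; with the default host-level failure domain the third replica has nowhere to land and the cluster will not reach HEALTH_OK. A common adjustment for a two-node lab (an addition, not part of the original configuration) is:

```ini
# Add to the [global] section before deploying the monitors.
# Two OSD hosts can hold at most two replicas per object.
osd pool default size = 2
osd pool default min size = 1
```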
5: Create the first Ceph monitor on ceph-node1
ceph-deploy mon create-initial
6: Create OSDs on ceph-node1
ceph-deploy disk list ceph-node1    # list the disks
ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
7: Create a monitor on ceph-node2 with ceph-deploy
ceph-deploy mon create ceph-node2
ceph -s
ceph mon stat
8: Create OSDs on ceph-node2
ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
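ceph-node3 appears in the hardware table as a scale-out node but is not used above. Scaling out would repeat the same per-node steps; a hedged sketch, assuming ceph-node3 has the same sdb/sdc/sdd disk layout as the other nodes, with a dry-run guard so the commands are only printed until you run it for real on the deploy node:

```shell
# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0 on
# the deploy node to actually execute them. Assumes disks sdb/sdc/sdd exist.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run ceph-deploy install --no-adjust-repos ceph-node3
run ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
run ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
```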
9: Adjust pg_num and pgp_num of the rbd pool
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
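Why 256? A common rule of thumb (not stated in the original) is pg_num ≈ (OSD count × 100) / replica count, rounded up to the next power of two; with 6 OSDs and the default 3 replicas that gives 200, which rounds up to 256:

```shell
# Rule-of-thumb PG sizing: (osds * 100) / replicas, rounded up to a power of 2.
osds=6        # 3 OSDs on each of the two nodes
replicas=3    # default pool size
raw=$(( osds * 100 / replicas ))      # 200
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                            # prints 256
```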
Summary
With the steps above, a two-node Ceph cluster has been deployed successfully. Verify its state with:
ceph -s