Cluster information
192.168.236.131 k8s-master
192.168.236.132 k8s-node01
192.168.236.133 k8s-node02
I. Preparation
1: Hostname and hosts setup
1) Hostname
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
bash    # reload the shell so the new hostname takes effect, then display it
hostname
2) hosts file
vim /etc/hosts
Add the following entries:
192.168.236.131 k8s-master
192.168.236.132 k8s-node01
192.168.236.133 k8s-node02
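A quick sanity check that name resolution works on each host (a hypothetical check, not part of the original steps):

for h in k8s-master k8s-node01 k8s-node02; do
    ping -c 1 $h    # each name should resolve to its 192.168.236.x address
done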
2: Passwordless SSH login
1) Generate the key pair
[root@k8s-master yum.repos.d]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/1Ai8KkSiGORNcOBcoAVI6KjBOJhdTEzznkn/8OI6FY root@k8s-master
The key's randomart image is:
+---[RSA 2048]----+
|B=BB.*.          |
|Xo=o* =          |
|+* +.o .         |
|oo..  .o+.       |
|+..   . S.. .    |
|..   ..E.o+o     |
|  ..o.  .o+      |
|   .o   o.       |
|    ..   .       |
+----[SHA256]-----+
2) Copy the public key to the other machines
ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node01
ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node02
ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-master
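A quick way to confirm passwordless login works from the master (a hypothetical check; hostname is just a harmless remote command):

for h in k8s-node01 k8s-node02; do
    ssh $h hostname    # should print the node's hostname without asking for a password
done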
3: Security settings
1) Disable SELinux
[root@k8s-node01 ~]# getenforce
Enforcing
[root@k8s-node01 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@k8s-node01 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@k8s-node01 ~]# setenforce 0
[root@k8s-node01 ~]# getenforce
Permissive
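Since passwordless SSH is already in place, the same SELinux change can be pushed to the other nodes from the master in one go (a sketch, assuming the nodes have the stock /etc/selinux/config layout):

for h in k8s-node01 k8s-node02; do
    ssh $h "setenforce 0; sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
done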
2) Firewall
[root@k8s-node01 ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens33
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@k8s-node01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node01 ~]# systemctl stop firewalld
[root@k8s-node01 ~]# firewall-cmd --list-all
FirewallD is not running
4: NTP time synchronization
[root@k8s-master yum.repos.d]# ntpq -pn
bash: ntpq: command not found...
[root@k8s-master yum.repos.d]# yum install ntp -y
master
[root@k8s-master ~]# systemctl restart ntpd
[root@k8s-master ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@k8s-master ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 193.182.111.12  .INIT.          16 u    -   64    0    0.000    0.000   0.000
 193.182.111.14  .INIT.          16 u    -   64    0    0.000    0.000   0.000
 162.159.200.123 .INIT.          16 u    -   64    0    0.000    0.000   0.000
 162.159.200.1   .INIT.          16 u    -   64    0    0.000    0.000   0.000
[root@k8s-master ~]# ntpstat
node01
vim /etc/ntp.conf
Comment out the existing server lines
Add: server 192.168.236.131
node02
vim /etc/ntp.conf
Comment out the existing server lines
Add: server 192.168.236.131
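A minimal sketch of what /etc/ntp.conf looks like on the nodes after this edit (the commented pool entries are the CentOS defaults; iburst is optional):

# default CentOS servers, commented out
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# sync against the master instead
server 192.168.236.131

Then restart and verify on each node:

systemctl restart ntpd && systemctl enable ntpd
ntpq -pn    # 192.168.236.131 should appear as the only remote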
5: Yum repository setup
https://developer.aliyun.com/mirror/
1) CentOS repo
[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
--2020-08-13 16:00:27--  https://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 183.2.160.240, 183.2.199.237, 113.96.108.117, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|183.2.160.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’
100%[==========================================>] 2,523       --.-K/s   in 0s
2020-08-13 16:00:30 (348 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]

[root@k8s-master ~]# yum makecache
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                  | 3.6 kB  00:00:00
docker-ce-stable                      | 3.5 kB  00:00:00
extras                                | 2.9 kB  00:00:00
kubernetes/signature                  |  454 B  00:00:00
kubernetes/signature                  | 1.4 kB  00:00:00 !!!
updates                               | 2.9 kB  00:00:00
(1/3): extras/7/x86_64/filelists_db   | 217 kB  00:00:00
(2/3): extras/7/x86_64/other_db       | 124 kB  00:00:00
(3/3): updates/7/x86_64/filelists_db  | 2.1 MB  00:00:01
Metadata Cache Created
2) EPEL repo
[root@k8s-master yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
--2020-08-13 16:07:01--  http://mirrors.aliyun.com/repo/epel-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 183.2.160.243, 183.2.199.241, 113.96.108.120, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|183.2.160.243|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 664 [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/epel.repo’
100%[==========================================>] 664         --.-K/s   in 0s
2020-08-13 16:07:01 (165 MB/s) - ‘/etc/yum.repos.d/epel.repo’ saved [664/664]
3) Ceph repo
vim /etc/yum.repos.d/ceph.repo
Paste the following content in and save it as /etc/yum.repos.d/ceph.repo:
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
II. Cluster setup
1: Install ceph-deploy (installing it on the master is enough)
Online installation
yum -y install python-setuptools
yum update && yum -y install ceph-deploy
yum -y install yum-plugin-priorities
Offline installation
rpm -ivh python-setuptools-0.9.8-7.el7.noarch.rpm ceph-deploy-2.0.1-0.noarch.rpm
2: Deploy the monitor
1) First create a directory; all subsequent work happens inside it
mkdir ceph
cd ceph
If Ceph has been installed before, the old configuration can be purged with the commands below:
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
Initialize Ceph:
#ceph-deploy new {node-name}
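For this cluster the monitor runs on the master (the ceph -s output later shows quorum on k8s-master), so the concrete command would be:

ceph-deploy new k8s-master

This writes ceph.conf, ceph.mon.keyring, and a log file into the current working directory.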
Change the default number of replicas in the Ceph configuration file from 3 to 2, so the cluster can reach an active + clean state with only two OSDs. Add osd pool default size = 2 to the [global] section:
sed -i '$aosd pool default size = 2' ceph.conf
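After the sed, ceph.conf should look roughly like this (the fsid and auth lines are generated by ceph-deploy new; the fsid here matches the cluster id shown by ceph -s later):

[global]
fsid = 84fab04a-4282-4d7f-b44b-4f2dfd6450a2
mon_initial_members = k8s-master
mon_host = 192.168.236.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2

Note that sed '$a...' appends at the end of the file, which lands inside [global] only because [global] is the last (and only) section in a fresh ceph.conf.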
2) Install the following packages on all nodes
Online installation
yum -y install ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds
Offline installation
rpm -ivh ceph-common-14.2.11-0.el7.x86_64.rpm ceph-base-14.2.11-0.el7.x86_64.rpm \
    ceph-selinux-14.2.11-0.el7.x86_64.rpm ceph-14.2.11-0.el7.x86_64.rpm \
    ceph-mon-14.2.11-0.el7.x86_64.rpm ceph-mgr-14.2.11-0.el7.x86_64.rpm \
    ceph-radosgw-14.2.11-0.el7.x86_64.rpm ceph-mds-14.2.11-0.el7.x86_64.rpm \
    ceph-osd-14.2.11-0.el7.x86_64.rpm ceph-release-1-1.el7.noarch.rpm
Note: we avoid ceph-deploy install because it rewrites the yum repos to point at overseas servers, which can pull the wrong version or make the installation fail.
3) Initialize the mon
ceph-deploy mon create-initial
4) Copy ceph.client.admin.keyring to every node
ceph-deploy admin k8s-master k8s-node01 k8s-node02
Check whether the configuration succeeded:
[root@k8s-master ceph]# ceph -s
  cluster:
    id:     84fab04a-4282-4d7f-b44b-4f2dfd6450a2
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum k8s-master (age 4m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
3: Deploy the MGR
[root@k8s-master ceph]# ceph-deploy mgr create k8s-master
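Once the MGR is up, ceph -s should report it instead of "no daemons active" (a quick check, not from the original output):

ceph -s | grep mgr    # expect something like: mgr: k8s-master(active)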
4: Deploy the OSDs
ceph-deploy osd create --data /dev/sdb k8s-node01
ceph-deploy osd create --data /dev/sdb k8s-node02
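With both OSDs created, the cluster state can be verified (hypothetical check commands; the reported sizes depend on /dev/sdb):

ceph -s          # the osd line should now read: 2 osds: 2 up, 2 in
ceph osd tree    # shows one OSD under k8s-node01 and one under k8s-node02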
III. Using the file system server
1) Deploy the metadata server
[root@k8s-master ceph]# ceph-deploy mds create k8s-master
2) Create the file system
[root@k8s-master ceph]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[root@k8s-master ceph]# ceph osd pool create cephfs_meta 32
pool 'cephfs_meta' created
[root@k8s-master ceph]# ceph fs new mycephfs cephfs_meta cephfs_data
new fs with metadata pool 2 and data pool 1
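A hedged example of mounting the new file system with the kernel client (the mount point /mnt/cephfs is an assumption; the monitor address is the master's):

mkdir -p /mnt/cephfs
# the admin secret can be read back from the keyring distributed earlier
mount -t ceph 192.168.236.131:6789:/ /mnt/cephfs \
    -o name=admin,secret=$(ceph auth get-key client.admin)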
Pool operations
1) List pools
ceph osd lspools
2) Create a pool
ceph osd pool create testpool 100    # 100 here is the number of placement groups (PGs)
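A common rule of thumb for the PG count is (number of OSDs × 100) / replica size, rounded to a power of two; for this cluster that is 2 × 100 / 2 = 100, so 128 would also be a reasonable choice.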
3) Delete a pool
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it    # the pool name must be repeated twice
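On Nautilus the monitors refuse pool deletion by default; if the command above is rejected, deletion has to be enabled first (a sketch using the standard injectargs mechanism):

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it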
4) Show detailed usage information
rados df
References
https://docs.ceph.com/docs/master/install/ceph-deploy/quick-ceph-deploy/
https://developer.aliyun.com/article/761298?spm=a2c6h.14164896.0.0.6e604392QEBI8o