  • Installing Ceph Jewel 10.2.9 on CentOS 7.3

    https://www.lijiawang.org/posts/intsall-ceph.html

    Configuration overview:

    The setup uses four CentOS 7.3 virtual machines: one ceph-master serving as the deployment node and NTP server, and three Ceph nodes acting as both OSD and monitor nodes. Each OSD node has six disks: a 300 GB system disk, three 2 TB disks as OSDs for the SATA pool, an 800 GB disk as an OSD for the SSD pool, and a 240 GB SSD as the journal disk.
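
    For reference, a summary of the layout described above (host names and IPs match the hosts file in step 6 below):

    Host         IP           Roles
    ceph-master  172.16.0.10  deployment node, NTP server
    ceph-node-1  172.16.0.11  OSD + MON
    ceph-node-2  172.16.0.12  OSD + MON
    ceph-node-3  172.16.0.13  OSD + MON

    On each OSD node: sdb/sdc/sdd = 2 TB (SATA pool), sde = 800 GB (SSD pool), sdf = 240 GB SSD (journal).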

    Environment preparation

    I won't go into installing the CentOS 7.3 operating system itself; the following covers the preparation work.

    1. Check the OS release

    # cat /etc/redhat-release
    CentOS Linux release 7.3.1611 (Core)

    2. Check the kernel version

    # uname -r
    3.10.0-514.26.2.el7.x86_64

    3. Disable the firewall and SELinux

    sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    setenforce 0
    systemctl stop firewalld
    systemctl disable firewalld
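
    To double-check that both are off (setenforce 0 leaves SELinux in Permissive mode until the next reboot, after which the config change disables it entirely):

    # getenforce
    Permissive
    # systemctl is-active firewalld
    inactive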

    4. Check the block devices (on all OSD nodes)

    # lsblk
    NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda             8:0    0  300G  0 disk
    ├─sda1          8:1    0    1G  0 part /boot
    └─sda2          8:2    0  299G  0 part
      ├─cl-root   253:0    0   50G  0 lvm  /
      ├─cl-swap   253:1    0    2G  0 lvm  [SWAP]
      └─cl-home   253:2    0  247G  0 lvm  /home
    sdb             8:16   0    2T  0 disk
    sdc             8:32   0    2T  0 disk
    sdd             8:48   0    2T  0 disk
    sde             8:64   0  800G  0 disk
    sdf             8:80   0  240G  0 disk
    sr0            11:0    1 1024M  0 rom

    5. Check the NIC configuration (on all OSD nodes)

    # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:06:c8:a4 brd ff:ff:ff:ff:ff:ff
        inet 172.16.0.11/24 brd 172.16.0.255 scope global eth0
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:06:c8:ae brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.11/24 brd 192.168.0.255 scope global eth1
           valid_lft forever preferred_lft forever

    6. Configure hosts resolution on all nodes by appending the following to /etc/hosts

    172.16.0.10 ceph-master
    172.16.0.11 ceph-node-1
    172.16.0.12 ceph-node-2
    172.16.0.13 ceph-node-3
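
    A quick sanity check (not in the original post) that the names resolve and the nodes are reachable:

    for h in ceph-master ceph-node-1 ceph-node-2 ceph-node-3; do
        ping -c 1 -W 1 $h > /dev/null && echo "$h is reachable"
    done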

    7. Install basic packages (all nodes)

    yum install tree nmap sysstat lrzsz dos2unix wget git net-tools -y

    8. Set up SSH trust (run on the ceph-master node)
    (1) Generate a key pair

    ssh-keygen -t rsa

    (2) Copy the public key to every node

    ssh-copy-id root@ceph-master
    ssh-copy-id root@ceph-node-1
    ssh-copy-id root@ceph-node-2
    ssh-copy-id root@ceph-node-3
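
    Optionally, verify that passwordless login works everywhere; none of these should prompt for a password:

    for h in ceph-master ceph-node-1 ceph-node-2 ceph-node-3; do
        ssh root@$h hostname
    done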

    Environment preparation is now complete.

    Configure the NTP service

    First, install NTP on all nodes:

    # yum install -y ntp

    Configuration on the ceph-master node

    1. Edit the NTP configuration file /etc/ntp.conf. Comment out the public pool servers, allow clients from the 172.16.0.0/24 subnet, and use the local clock (the 127.127.1.0 pseudo-address) as the reference, so the master can serve time even without Internet access:

    # vim /etc/ntp.conf
    #server 0.centos.pool.ntp.org iburst
    #server 1.centos.pool.ntp.org iburst
    #server 2.centos.pool.ntp.org iburst
    #server 3.centos.pool.ntp.org iburst
    restrict 172.16.0.0 mask 255.255.255.0 nomodify notrap
    server 127.127.1.0 minpoll 4
    fudge 127.127.1.0 stratum 0

    2. Edit the configuration file /etc/ntp/step-tickers

    # vim /etc/ntp/step-tickers
    #0.centos.pool.ntp.org
    127.127.1.0

    3. Start the NTP service and enable it at boot

    systemctl enable ntpd ; systemctl start ntpd

    Configuration on all OSD nodes

    1. Edit the NTP configuration file /etc/ntp.conf, commenting out the public pool servers and pointing at the master:

    # vim /etc/ntp.conf
    #server 0.centos.pool.ntp.org iburst
    #server 1.centos.pool.ntp.org iburst
    #server 2.centos.pool.ntp.org iburst
    #server 3.centos.pool.ntp.org iburst
    server 172.16.0.10

    2. Start the NTP service and enable it at boot

    systemctl enable ntpd ; systemctl start ntpd

    Verify NTP

    Run on all nodes:

    # ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *ceph-master     .LOCL.          1 u   16   64  377     0.269    0.032   0.269

    A leading * in front of the peer name means it is synchronized.

    Install Ceph

    Update the yum repositories (run on all nodes)

    rm -rf /etc/yum.repos.d/*.repo
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
    sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
    sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
    yum clean all
    yum makecache fast
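
    A quick check that the new repositories are in effect:

    # yum repolist enabled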

    Install ceph-deploy

    Run the following on the ceph-master node:

    # yum install http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm

    Check the ceph-deploy version

    # ceph-deploy --version
    1.5.38

    Create the Ceph cluster

    ceph-deploy new ceph-node-1 ceph-node-2 ceph-node-3
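
    ceph-deploy new generates the initial cluster files in the current working directory: ceph.conf (edited in the next step), the monitor keyring ceph.mon.keyring, and a deployment log (the log file name varies slightly between ceph-deploy versions):

    # ls
    ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring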

    Edit the Ceph configuration file

    Add the following under [global]. Note that osd_journal_size is in MB (20480 MB = 20 GB, matching the journal partitions created later), and public_network must cover the monitor addresses:

    # vim ceph.conf
    [global]
    mon_clock_drift_allowed = 5
    osd_journal_size = 20480
    public_network = 172.16.0.0/24

    Install the Ceph packages

    Point the installer directly at a mirror, so there is no need to worry about the official ceph.com repository being unreachable:

    ceph-deploy install --release jewel --repo-url http://mirrors.163.com/ceph/rpm-jewel/el7 --gpg-url http://mirrors.163.com/ceph/keys/release.asc ceph-master ceph-node-1 ceph-node-2 ceph-node-3

    Check the Ceph version

    ceph -v
    ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)

    Initialize the MON nodes

    ceph-deploy mon create-initial
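
    Once the monitors form a quorum, create-initial also gathers the cluster keyrings into the working directory; the bootstrap keyrings are what later OSD/MDS/RGW deployments use to authenticate:

    # ls *.keyring
    ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring
    ceph.bootstrap-osd.keyring  ceph.client.admin.keyring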

    Check the cluster status (on an OSD node)

    # ceph -s
        cluster 60597e53-ad29-44bd-8dcd-db6aeae6f580
         health HEALTH_ERR
                no osds
         monmap e2: 3 mons at {ceph-node-1=172.16.0.11:6789/0,ceph-node-2=172.16.0.12:6789/0,ceph-node-3=172.16.0.13:6789/0}
                election epoch 6, quorum 0,1,2 ceph-node-1,ceph-node-2,ceph-node-3
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

    Configure the admin node ceph-master

    Why could ceph -s so far only be run on the OSD nodes, and not on the master node? Because the master does not yet have the cluster configuration or admin key. Use ceph-deploy to copy the configuration file and admin keyring to the master node:

    # ceph-deploy admin ceph-master

    Make sure you have read permission on ceph.client.admin.keyring:

    # chmod +r /etc/ceph/ceph.client.admin.keyring

    Check the cluster health (now possible from the master as well)

    # ceph -s
        cluster 60597e53-ad29-44bd-8dcd-db6aeae6f580
         health HEALTH_ERR
                no osds
         monmap e2: 3 mons at {ceph-node-1=172.16.0.11:6789/0,ceph-node-2=172.16.0.12:6789/0,ceph-node-3=172.16.0.13:6789/0}
                election epoch 6, quorum 0,1,2 ceph-node-1,ceph-node-2,ceph-node-3
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

    OSD node configuration

    Disk partitioning

    Carve four 20 GB partitions out of the 240 GB SSD to use as journals

    # fdisk /dev/sdf
    Welcome to fdisk (util-linux 2.23.2).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    Device does not contain a recognized partition table
    Building a new DOS disklabel with disk identifier 0x9ec0a047.
    Command (m for help): g
    Building a new GPT disklabel (GUID: 31D328DD-E9A0-4306-9C99-8D42F7BA8008)
    Command (m for help): n
    Partition number (1-128, default 1):
    First sector (2048-503316446, default 2048):
    Last sector, +sectors or +size{K,M,G,T,P} (2048-503316446, default 503316446): +20G
    Created partition 1
    Command (m for help): n
    Partition number (2-128, default 2):
    First sector (41945088-503316446, default 41945088):
    Last sector, +sectors or +size{K,M,G,T,P} (41945088-503316446, default 503316446): +20G
    Created partition 2
    Command (m for help): n
    Partition number (3-128, default 3):
    First sector (83888128-503316446, default 83888128):
    Last sector, +sectors or +size{K,M,G,T,P} (83888128-503316446, default 503316446): +20G
    Created partition 3
    Command (m for help): n
    Partition number (4-128, default 4):
    First sector (125831168-503316446, default 125831168):
    Last sector, +sectors or +size{K,M,G,T,P} (125831168-503316446, default 503316446): +20G
    Created partition 4
    Command (m for help): p
    Disk /dev/sdf: 257.7 GB, 257698037760 bytes, 503316480 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: gpt

    #         Start          End    Size  Type              Name
     1         2048     41945087     20G  Linux filesystem
     2     41945088     83888127     20G  Linux filesystem
     3     83888128    125831167     20G  Linux filesystem
     4    125831168    167774207     20G  Linux filesystem
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    Syncing disks.
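
    As a scriptable alternative to the interactive fdisk session above, the same four 20 GB GPT partitions can be created non-interactively with sgdisk; a sketch, assuming the gdisk package is available on the node:

    yum install -y gdisk
    for i in 1 2 3 4; do
        sgdisk --new=${i}:0:+20G /dev/sdf
    done
    partprobe /dev/sdf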

    Verify the partitions

    # lsblk
    NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda             8:0    0  300G  0 disk
    ├─sda1          8:1    0    1G  0 part /boot
    └─sda2          8:2    0  299G  0 part
      ├─cl-root   253:0    0   50G  0 lvm  /
      ├─cl-swap   253:1    0    2G  0 lvm  [SWAP]
      └─cl-home   253:2    0  247G  0 lvm  /home
    sdb             8:16   0    2T  0 disk
    sdc             8:32   0    2T  0 disk
    sdd             8:48   0    2T  0 disk
    sde             8:64   0  800G  0 disk
    sdf             8:80   0  240G  0 disk
    ├─sdf1          8:81   0   20G  0 part
    ├─sdf2          8:82   0   20G  0 part
    ├─sdf3          8:83   0   20G  0 part
    └─sdf4          8:84   0   20G  0 part
    sr0            11:0    1 1024M  0 rom

    Change ownership of the journal partitions

    chown ceph:ceph /dev/sdf[1-4]
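
    Note that this chown does not persist across reboots: the partitions were created with plain fdisk, so they lack the Ceph journal partition type GUID that Ceph's udev rules key on. A sketch of one workaround via a custom udev rule (the rule file name is arbitrary, and it assumes the journal device keeps the name sdf):

    cat > /etc/udev/rules.d/90-ceph-journal.rules <<'EOF'
    # Hand the hand-made journal partitions on sdf to the ceph user at boot
    KERNEL=="sdf[1-4]", OWNER="ceph", GROUP="ceph", MODE="0660"
    EOF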

    Add OSDs

    Run on ceph-master. Each argument to ceph-deploy osd prepare has the form host:data-disk:journal-partition.
    Deploy the SATA disks first:

    ceph-deploy osd prepare ceph-node-1:/dev/sdb:/dev/sdf1 ceph-node-1:/dev/sdc:/dev/sdf2 ceph-node-1:/dev/sdd:/dev/sdf3 ceph-node-2:/dev/sdb:/dev/sdf1 ceph-node-2:/dev/sdc:/dev/sdf2 ceph-node-2:/dev/sdd:/dev/sdf3 ceph-node-3:/dev/sdb:/dev/sdf1 ceph-node-3:/dev/sdc:/dev/sdf2 ceph-node-3:/dev/sdd:/dev/sdf3

    Then deploy the SSDs:

    ceph-deploy osd prepare ceph-node-1:/dev/sde:/dev/sdf4 ceph-node-2:/dev/sde:/dev/sdf4 ceph-node-3:/dev/sde:/dev/sdf4

    Deploying the SATA disks first and the SSDs second keeps the OSD IDs contiguous for each class…

    Check the cluster status

    # ceph -s
        cluster f9dba93b-c40f-4b31-8e6d-6ea830cad944
         health HEALTH_OK
         monmap e1: 3 mons at {ceph-node-1=172.16.0.11:6789/0,ceph-node-2=172.16.0.12:6789/0,ceph-node-3=172.16.0.13:6789/0}
                election epoch 6, quorum 0,1,2 ceph-node-1,ceph-node-2,ceph-node-3
         osdmap e56: 12 osds: 12 up, 12 in
                flags sortbitwise,require_jewel_osds
          pgmap v142: 128 pgs, 1 pools, 0 bytes data, 0 objects
                413 MB used, 20821 GB / 20821 GB avail
                     128 active+clean

    Check the OSD distribution. CRUSH weights correspond to device capacity in TiB, so the three OSDs with weight 0.78090 (osd.9, osd.10, osd.11) are the 800 GB SSDs:

    # ceph osd tree
    ID WEIGHT   TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 20.33363 root default
    -2  6.77788     host ceph-node-1
     0  1.99899         osd.0              up  1.00000          1.00000
     1  1.99899         osd.1              up  1.00000          1.00000
     4  1.99899         osd.4              up  1.00000          1.00000
     9  0.78090         osd.9              up  1.00000          1.00000
    -3  6.77788     host ceph-node-2
     2  1.99899         osd.2              up  1.00000          1.00000
     3  1.99899         osd.3              up  1.00000          1.00000
     5  1.99899         osd.5              up  1.00000          1.00000
    10  0.78090         osd.10             up  1.00000          1.00000
    -4  6.77788     host ceph-node-3
     6  1.99899         osd.6              up  1.00000          1.00000
     7  1.99899         osd.7              up  1.00000          1.00000
     8  1.99899         osd.8              up  1.00000          1.00000
    11  0.78090         osd.11             up  1.00000          1.00000

    Installation complete

    Ceph is now installed.

  • Original article: https://www.cnblogs.com/wangmo/p/11302293.html