  • Ceph deployment

    I. Deployment preparation:

    Prepare 5 virtual machines (running CentOS 7.6):

    1 deployment node (one disk; runs ceph-deploy)

    3 Ceph nodes (each with two disks: the first is the system disk and runs a mon, the second is the OSD data disk)

    1 client (consumes the Ceph file system, block storage, and object storage)

    (1) Configure static name resolution on all Ceph cluster nodes (including the client):

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.24.11 dlp
    192.168.24.8 controller
    192.168.24.9 compute
    192.168.24.10 storage

    (2) On all cluster nodes (including the client), create the cent user, set its password, and then run:

    useradd cent && echo "123" | passwd --stdin cent
    echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
    chmod 440 /etc/sudoers.d/ceph

    (3) On the deployment node, switch to the cent user and set up passwordless SSH login to every node, including the client:

    su - cent
    ssh-keygen
    ssh-copy-id controller
    ssh-copy-id compute
    ssh-copy-id storage
    ssh-copy-id dlp

    (4) On the deployment node, as the cent user, create ~/.ssh/config in the cent home directory, defining all nodes and users:

    su - cent
    cd .ssh
    vim config

    Host dlp
        Hostname dlp
        User cent
    Host controller
        Hostname controller
        User cent
    Host compute
        Hostname compute
        User cent
    Host storage
        Hostname storage
        User cent

    chmod 600 ~/.ssh/config

    II. Configure the China-mirror Ceph repo on all nodes:

    (1) On every node (including the client), create ceph-yunwei.repo in /etc/yum.repos.d/:

    cd /etc/yum.repos.d
    vim ceph-yunwei.repo

    [ceph-yunwei]
    name=ceph-yunwei-install
    baseurl=https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
    enabled=1
    gpgcheck=0

    (2) From the China mirror https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/, download the rpm packages listed below. Note: the ceph-deploy rpm only needs to be installed on the deployment node; to get it, look under https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/ for the latest matching ceph-deploy-xxxxx.noarch.rpm and download it from there.

    ceph-10.2.11-0.el7.x86_64.rpm
    ceph-base-10.2.11-0.el7.x86_64.rpm
    ceph-common-10.2.11-0.el7.x86_64.rpm
    ceph-deploy-1.5.39-0.noarch.rpm
    ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
    cephfs-java-10.2.11-0.el7.x86_64.rpm
    ceph-fuse-10.2.11-0.el7.x86_64.rpm
    ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
    ceph-mds-10.2.11-0.el7.x86_64.rpm
    ceph-mon-10.2.11-0.el7.x86_64.rpm
    ceph-osd-10.2.11-0.el7.x86_64.rpm
    ceph-radosgw-10.2.11-0.el7.x86_64.rpm
    ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
    ceph-selinux-10.2.11-0.el7.x86_64.rpm
    ceph-test-10.2.11-0.el7.x86_64.rpm
    libcephfs1-10.2.11-0.el7.x86_64.rpm
    libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
    libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
    libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
    librados2-10.2.11-0.el7.x86_64.rpm
    librados2-devel-10.2.11-0.el7.x86_64.rpm
    libradosstriper1-10.2.11-0.el7.x86_64.rpm
    libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
    librbd1-10.2.11-0.el7.x86_64.rpm
    librbd1-devel-10.2.11-0.el7.x86_64.rpm
    librgw2-10.2.11-0.el7.x86_64.rpm
    librgw2-devel-10.2.11-0.el7.x86_64.rpm
    python-ceph-compat-10.2.11-0.el7.x86_64.rpm
    python-cephfs-10.2.11-0.el7.x86_64.rpm
    python-rados-10.2.11-0.el7.x86_64.rpm
    python-rbd-10.2.11-0.el7.x86_64.rpm
    rbd-fuse-10.2.11-0.el7.x86_64.rpm
    rbd-mirror-10.2.11-0.el7.x86_64.rpm
    rbd-nbd-10.2.11-0.el7.x86_64.rpm

    (3) Copy the downloaded rpms to all nodes and install them. Note: ceph-deploy-xxxxx.noarch.rpm is only needed on the deployment node; the other nodes do not need it, but the deployment node also needs all of the remaining rpms.
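
    A minimal way to distribute them from the deployment node is a small scp loop (a sketch; it assumes the rpms were downloaded to /root/cephjrpm, the directory name shown in the listing under step (6), that root SSH access to the other hosts works, and that the client hostname is added to the list as needed):

    for h in controller compute storage; do
        scp -r /root/cephjrpm root@${h}:/root/
    done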

    (4) On the deployment node, install ceph-deploy: as root, change into the directory containing the downloaded rpms and run:

    yum localinstall -y ./*

    Note: if this step fails with an error (commonly a dependency problem around ceph-deploy), the workaround is:

    # install the dependency package python-distribute
    # move rdo-release-yunwei.repo out of /etc/yum.repos.d
    # then reinstall ceph-deploy-1.5.39-0.noarch.rpm:
    yum localinstall ceph-deploy-1.5.39-0.noarch.rpm -y
    # check the version:
    ceph -v

    (5) On the deployment node (as the cent user): create the new cluster

    ceph-deploy new controller compute storage

    vim ./ceph.conf

    # add:
    osd_pool_default_size = 2

    Optional parameters:
    public_network = 192.168.254.0/24
    cluster_network = 172.16.254.0/24
    osd_pool_default_size = 3
    osd_pool_default_min_size = 1
    osd_pool_default_pg_num = 8
    osd_pool_default_pgp_num = 8
    osd_crush_chooseleaf_type = 1
      
    [mon]
    mon_clock_drift_allowed = 0.5
      
    [osd]
    osd_mkfs_type = xfs
    osd_mkfs_options_xfs = -f
    filestore_max_sync_interval = 5
    filestore_min_sync_interval = 0.1
    filestore_fd_cache_size = 655350
    filestore_omap_header_cache_size = 655350
    filestore_fd_cache_random = true
    osd op threads = 8
    osd disk threads = 4
    filestore op threads = 8
    max_open_files = 655350
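
    For orientation, all of these settings go into the ceph.conf that ceph-deploy new generated in the working directory. A rough sketch of what the generated file typically looks like after the edit (the exact contents come from ceph-deploy new; the fsid is filled in automatically, and the mon addresses shown here are the ones from the /etc/hosts entries in section I):

    [global]
    fsid = <generated by ceph-deploy new>
    mon_initial_members = controller, compute, storage
    mon_host = 192.168.24.8,192.168.24.9,192.168.24.10
    osd_pool_default_size = 2
    # the optional public_network / pg settings listed above would also go under [global]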

    (6) Install the Ceph software on all nodes.

    Every node should have the following packages:

    root@rab116:~/cephjrpm# ls
    ceph-10.2.11-0.el7.x86_64.rpm               ceph-resource-agents-10.2.11-0.el7.x86_64.rpm    librbd1-10.2.11-0.el7.x86_64.rpm
    ceph-base-10.2.11-0.el7.x86_64.rpm          ceph-selinux-10.2.11-0.el7.x86_64.rpm            librbd1-devel-10.2.11-0.el7.x86_64.rpm
    ceph-common-10.2.11-0.el7.x86_64.rpm        ceph-test-10.2.11-0.el7.x86_64.rpm               librgw2-10.2.11-0.el7.x86_64.rpm
    ceph-devel-compat-10.2.11-0.el7.x86_64.rpm  libcephfs1-10.2.11-0.el7.x86_64.rpm              librgw2-devel-10.2.11-0.el7.x86_64.rpm
    cephfs-java-10.2.11-0.el7.x86_64.rpm        libcephfs1-devel-10.2.11-0.el7.x86_64.rpm        python-ceph-compat-10.2.11-0.el7.x86_64.rpm
    ceph-fuse-10.2.11-0.el7.x86_64.rpm          libcephfs_jni1-10.2.11-0.el7.x86_64.rpm          python-cephfs-10.2.11-0.el7.x86_64.rpm
    ceph-libs-compat-10.2.11-0.el7.x86_64.rpm   libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm    python-rados-10.2.11-0.el7.x86_64.rpm
    ceph-mds-10.2.11-0.el7.x86_64.rpm           librados2-10.2.11-0.el7.x86_64.rpm               python-rbd-10.2.11-0.el7.x86_64.rpm
    ceph-mon-10.2.11-0.el7.x86_64.rpm           librados2-devel-10.2.11-0.el7.x86_64.rpm         rbd-fuse-10.2.11-0.el7.x86_64.rpm
    ceph-osd-10.2.11-0.el7.x86_64.rpm           libradosstriper1-10.2.11-0.el7.x86_64.rpm        rbd-mirror-10.2.11-0.el7.x86_64.rpm
    ceph-radosgw-10.2.11-0.el7.x86_64.rpm       libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm  rbd-nbd-10.2.11-0.el7.x86_64.rpm

    Install the above packages on all nodes (including the client):

    yum localinstall ./* -y

    (7) Run on the deployment node to install Ceph on all nodes:

    ceph-deploy install dlp controller compute storage

    (8) Initialize the cluster from the deployment node (as the cent user):

    ceph-deploy mon create-initial

    (9) Partition the second disk on each node (note: on the storage node the disk is sdc):

    fdisk /dev/sdb
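
    fdisk is interactive; if you prefer to script this step, one option is parted (a sketch, assuming parted is installed; on the storage node substitute /dev/sdc):

    sudo parted -s /dev/sdb mklabel gpt
    sudo parted -s /dev/sdb mkpart primary xfs 0% 100%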

    List a node's disks:

    ceph-deploy disk list controller

    Zap (wipe) a node's disk:

    ceph-deploy disk zap controller:/dev/sdb1

    (10) Prepare the Object Storage Daemons:

    ceph-deploy osd prepare controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdb1

    (11) Activate the Object Storage Daemons:

    ceph-deploy osd activate controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdb1

    (12) On the deployment node, transfer the config files to all nodes:

    ceph-deploy admin dlp controller compute storage

    (on each node) sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
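
    Since passwordless SSH and sudo are already set up for the cent user, the chmod can also be run from the deployment node in one loop (a sketch):

    for h in dlp controller compute storage; do
        ssh ${h} "sudo chmod 644 /etc/ceph/ceph.client.admin.keyring"
    done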

    (13) Check the cluster from any Ceph node:

    ceph -s
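
    If ceph -s does not report HEALTH_OK, two other read-only commands help narrow the problem down:

    ceph health detail   # explains each warning or error
    ceph osd tree        # shows whether every OSD is up and in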

    III. Client setup:

    (1) The client also needs the cent user:

    useradd cent && echo "123" | passwd --stdin cent
    echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
    chmod 440 /etc/sudoers.d/ceph

    On the deployment node, install the Ceph client and push the admin config:

    ceph-deploy install controller
    
    ceph-deploy admin controller

    (2) On the client, run:

    sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

    (3) On the client, configure an RBD block device:

    Create an rbd image:    rbd create disk01 --size 10G --image-feature layering    # disk01 is the image name; --size sets its size
    Delete an image:        rbd rm disk01
    List images:            rbd ls -l

    At this point only a 10G image has been created; before it can be used, it has to be mapped to a block device.

    Map the image:          sudo rbd map disk01
    Unmap it:               sudo rbd unmap disk01
    Show current mappings:  rbd showmapped

    Once mapped, the device shows up in lsblk, but it still needs to be formatted and mounted before use.

    Format it with xfs:     sudo mkfs.xfs /dev/rbd0
    Mount it:               sudo mount /dev/rbd0 /mnt
    Verify the mount:       df -hT
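
    Note that the mapping does not survive a reboot. If persistence matters, ceph-common ships an rbdmap service that re-maps images listed in /etc/ceph/rbdmap at boot; a sketch, assuming the image lives in the default rbd pool and is mapped with the admin keyring:

    echo "rbd/disk01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
    sudo systemctl enable rbdmap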

    If you no longer need this disk, tear it down in reverse order:

    sudo umount /mnt
    sudo rbd unmap disk01
    lsblk
    rbd rm disk01

    (4) File System (CephFS) configuration:

    On the deployment node, pick one node on which to create the MDS:

    ceph-deploy mds create node1

    The following operations are performed on node1:

    sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

    On the MDS node node1, create the cephfs_data and cephfs_metadata pools:

    ceph osd pool create cephfs_data 128
    
    ceph osd pool create cephfs_metadata 128

    Create the filesystem on these pools:

    ceph fs new cephfs cephfs_metadata cephfs_data

    Show the filesystem and MDS status:
    ceph fs ls
    ceph mds stat

    The following operations are performed on the client; install ceph-fuse:

    yum -y install ceph-fuse

    Fetch the admin key:

    ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
    chmod 600 admin.key

    Mount the CephFS:

    mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
    df -h
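
    Alternatively, since ceph-fuse is already installed, the same filesystem can be mounted through FUSE instead of the kernel client (a sketch; it assumes ceph-deploy admin has placed ceph.conf and the admin keyring under /etc/ceph on the client, and that the kernel mount above is released first):

    sudo umount /mnt                    # release the kernel-client mount
    sudo ceph-fuse -m node1:6789 /mnt   # mount CephFS via FUSE
    df -hT /mnt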

    Stopping the ceph-mds service and removing the filesystem:

    systemctl stop ceph-mds@node1
    ceph mds fail 0
    ceph fs rm cephfs --yes-i-really-mean-it
    ceph osd lspools
    Output:
    0 rbd,1 cephfs_data,2 cephfs_metadata,
    ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

    IV. Removing the environment:

    ceph-deploy purge dlp node1 node2 node3 controller
    ceph-deploy purgedata dlp node1 node2 node3 controller
    ceph-deploy forgetkeys
    rm -rf ceph*