  • Deploying ceph-nautilus

    Purpose:

      Pick a specific ceph release and try out its new features: a single-node deployment used to experience rbd/rgw/cephfs. cephfs requires the mds service; rbd/rgw do not.

    Environment:

    • Ubuntu 18.04.3 LTS
    • ceph-nautilus

    Note: deploying ceph-octopus ran into many errors and was unstable, so I fell back to the previous release, ceph-nautilus.

    Steps:

    Hostname / firewall / disk

    root@ubuntu:~# hostname
    ubuntu
    root@ubuntu:~# ping ubuntu
    PING ubuntu (192.168.3.103) 56(84) bytes of data.
    64 bytes from ubuntu (192.168.3.103): icmp_seq=1 ttl=64 time=0.015 ms

    root@ubuntu:~# ufw status
    Status: inactive   ### firewall is not enabled

    root@ubuntu:~# lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 20G 0 disk
    └─sda1 8:1 0 20G 0 part /
    sdb 8:16 0 20G 0 disk    ### a spare, unused disk

    Add a domestic (China) mirror

    root@ubuntu:~# wget -q -O- 'https://mirrors.cloud.tencent.com/ceph/keys/release.asc' | sudo apt-key add -       ### import the repository signing key
    OK
    root@ubuntu:~# echo deb https://mirrors.cloud.tencent.com/ceph/debian-nautilus/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list

    deb https://mirrors.cloud.tencent.com/ceph/debian-nautilus/ bionic main
    root@ubuntu:~#
    root@ubuntu:~# apt-get update
    Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
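    To double-check that apt will actually pull ceph from the new mirror, the package policy can be inspected (a quick sanity check, not part of the original run):

    root@ubuntu:~# apt-cache policy ceph    ### the candidate version should come from mirrors.cloud.tencent.com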

    Install ceph-deploy and bootstrap the cluster mon

    root@ubuntu:~# apt-get install -y ceph-deploy

    root@ubuntu:~# mkdir -p /etc/cluster-ceph  ### directory that will hold the cluster config files

    root@ubuntu:~# cd /etc/cluster-ceph
    root@ubuntu:/etc/cluster-ceph#
    root@ubuntu:/etc/cluster-ceph# pwd
    /etc/cluster-ceph
    root@ubuntu:/etc/cluster-ceph# ceph-deploy new `hostname`

    vi ceph.conf    ### add two options so that pools on a single OSD can reach active+clean (default replica count of 1)

    osd pool default size = 1
    osd pool default min size = 1
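    For reference, after `ceph-deploy new` the generated ceph.conf holds a single [global] section; with the two options added it looks roughly like this (fsid and mon host taken from the cluster shown later in this post, yours will differ):

    [global]
    fsid = 081a571c-cb0b-452f-b583-ab4f82f8344a
    mon_initial_members = ubuntu
    mon_host = 192.168.3.103
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    osd pool default size = 1
    osd pool default min size = 1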

    Install the ceph packages on the node

    root@ubuntu:/etc/cluster-ceph# export CEPH_DEPLOY_REPO_URL=https://mirrors.cloud.tencent.com/ceph/debian-nautilus/
    root@ubuntu:/etc/cluster-ceph# export CEPH_DEPLOY_GPG_URL=https://mirrors.cloud.tencent.com/ceph/keys/release.asc
    root@ubuntu:/etc/cluster-ceph# ceph-deploy install --release nautilus `hostname`

    ###ERROR

    [ubuntu][DEBUG ] Unpacking radosgw (15.1.0-1bionic) ...
    [ubuntu][DEBUG ] Errors were encountered while processing:
    [ubuntu][DEBUG ]  /tmp/apt-dpkg-install-sKeUKm/35-ceph-base_15.1.0-1bionic_amd64.deb
    [ubuntu][WARNIN] E: Sub-process /usr/bin/dpkg returned an error code (1)
    [ubuntu][ERROR ] RuntimeError: command returned non-zero exit status: 100
    [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
    
    [ubuntu][WARNIN] E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).   ### try the suggested repair
    [ubuntu][ERROR ] RuntimeError: command returned non-zero exit status: 100
    [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
    root@ubuntu:/etc/cluster-ceph# apt --fix-broken install
    Reading package lists... Done
    Building dependency tree       
    
    
    (Reading database ... 120818 files and directories currently installed.)
    Preparing to unpack .../ceph-base_15.1.0-1bionic_amd64.deb ...
    Unpacking ceph-base (15.1.0-1bionic) ...
    dpkg: error processing archive /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb (--unpack):
     trying to overwrite '/usr/share/man/man8/ceph-deploy.8.gz', which is also in package ceph-deploy 1.5.38-0ubuntu1
    Errors were encountered while processing:
     /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Fix:

    ceph-base fails to install because it tries to overwrite /usr/share/man/man8/ceph-deploy.8.gz, which is owned by the ceph-deploy package; every package that depends on ceph-base then fails in turn.

    root@ubuntu:/etc/cluster-ceph# ll /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb
    -rw-r--r-- 1 root root 5167392 Jan 29 16:43 /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb

    root@ubuntu:/etc/cluster-ceph# useradd ceph               ### create the ceph user account
    root@ubuntu:/etc/cluster-ceph# apt --fix-broken install   ### let apt automatically repair the broken deb state
    root@ubuntu:/etc/cluster-ceph# dpkg -i --force-overwrite /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb
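    An equivalent way around the man-page conflict (not used in this run) is to let apt pass the overwrite flag straight to dpkg:

    root@ubuntu:/etc/cluster-ceph# apt-get -o Dpkg::Options::="--force-overwrite" --fix-broken install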

    root@ubuntu:/etc/cluster-ceph# ceph-deploy install  --release octopus  `hostname`

    root@ubuntu:/etc/cluster-ceph# ceph -v
    ceph version 15.1.0 (49b0421165765bbcfb07e5aa7a818a47cc023df7) octopus (rc)

    Initialize mon/mgr

    ceph-deploy mon create-initial

    ceph-deploy admin `hostname`

    ceph-deploy  mgr create `hostname`

    root@c1:~# systemctl list-units 'ceph*' --type=service    ### list the ceph services that are running
    UNIT LOAD ACTIVE SUB DESCRIPTION
    ceph-crash.service loaded active running Ceph crash dump collector
    ceph-mgr@c1.service loaded active running Ceph cluster manager daemon
    ceph-mon@c1.service loaded active running Ceph cluster monitor daemon
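    If you want more detail than the unit list, the mon quorum and the active mgr can also be queried directly (optional checks, not part of the original transcript):

    root@ubuntu:/etc/cluster-ceph# ceph mon stat    ### shows the monitor map and which mons are in quorum
    root@ubuntu:/etc/cluster-ceph# ceph mgr stat    ### shows the active mgr and any standbys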

    Initialize the OSD (bluestore) and mds

    root@ubuntu:/etc/cluster-ceph# ceph-deploy osd create -h    ### check the command's help first; the arguments differ between ceph-deploy releases, which is really annoying
    usage: ceph-deploy osd create [-h] [--data DATA] [--journal JOURNAL]
                                  [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt]
                                  [--dmcrypt-key-dir KEYDIR] [--filestore]
                                  [--bluestore] [--block-db BLOCK_DB]
                                  [--block-wal BLOCK_WAL] [--debug]
                                  [HOST]
    
    positional arguments:
      HOST                  Remote host to connect
    
    optional arguments:
      -h, --help            show this help message and exit
      --data DATA           The OSD data logical volume (vg/lv) or absolute path   ###vg/lv/device
                            to device
      --journal JOURNAL     Logical Volume (vg/lv) or path to GPT partition
      --zap-disk            DEPRECATED - cannot zap when creating an OSD
      --fs-type FS_TYPE     filesystem to use to format DEVICE (xfs, btrfs)
      --dmcrypt             use dm-crypt on DEVICE
      --dmcrypt-key-dir KEYDIR
                            directory where dm-crypt keys are stored
      --filestore           filestore objectstore
      --bluestore           bluestore objectstore
      --block-db BLOCK_DB   bluestore block.db path
      --block-wal BLOCK_WAL
                            bluestore block.wal path
      --debug               Enable debug mode on remote ceph-volume calls
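    If /dev/sdb ever held partitions or an old OSD, it may need to be wiped before the create step below; a sketch (ceph-deploy 2.x syntax, or fall back to plain wipefs):

    root@ubuntu:/etc/cluster-ceph# ceph-deploy disk zap `hostname` /dev/sdb    ### destroys any existing data on the disk
    root@ubuntu:/etc/cluster-ceph# wipefs -a /dev/sdb                          ### lower-level alternative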

    root@ubuntu:/etc/cluster-ceph# ceph-deploy osd create --bluestore `hostname` --data /dev/sdb

    root@ubuntu:/etc/cluster-ceph# ceph -s
      cluster:
        id:     081a571c-cb0b-452f-b583-ab4f82f8344a
        health: HEALTH_OK

      services:
        mon: 1 daemons, quorum ubuntu (age 5m)
        mgr: ubuntu(active, since 4m)
        osd: 1 osds: 1 up (since 3m), 1 in (since 3m)

      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   1.0 GiB used, 18 GiB / 19 GiB avail
        pgs:
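    Beyond `ceph -s`, the OSD layout and raw capacity can be inspected as well (optional checks, not part of the original run):

    root@ubuntu:/etc/cluster-ceph# ceph osd tree    ### CRUSH tree; osd.0 should show as "up"
    root@ubuntu:/etc/cluster-ceph# ceph df          ### raw and per-pool usage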

    root@ubuntu:/etc/cluster-ceph# ceph-deploy mds create `hostname`    ### cephfs storage requires the mds (metadata) service

    root@ubuntu:/etc/cluster-ceph# systemctl list-units 'ceph*' --type=service    ### list the running ceph services; they are enabled to start at boot by default
    UNIT LOAD ACTIVE SUB DESCRIPTION
    ceph-crash.service loaded active running Ceph crash dump collector
    ceph-mgr@ubuntu.service loaded active running Ceph cluster manager daemon
    ceph-mon@ubuntu.service loaded active running Ceph cluster monitor daemon
    ceph-osd@0.service loaded active running Ceph object storage daemon osd.0

    ceph-mds@ubuntu.service loaded active running Ceph metadata server daemon

    Mount cephfs

    root@ubuntu:/etc/cluster-ceph# ceph osd pool create -h    ### read the help output to understand the basic command syntax

    osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}    create pool
        {<erasure_code_profile>} {<rule>} {<int>} {<int>} {<int[0-]>}
        {<int[0-]>} {<float[0.0-1.0]>}

    root@ubuntu:/etc/cluster-ceph# ceph osd pool create cephfs_data 128 128
    pool 'cephfs_data' created
    root@ubuntu:/etc/cluster-ceph# ceph osd pool create cephfs_metadata 128 128
    pool 'cephfs_metadata' created
    root@ubuntu:/etc/cluster-ceph#
    root@ubuntu:/etc/cluster-ceph# ceph osd pool ls
    cephfs_data
    cephfs_metadata
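    Why 128 PGs? A common rule of thumb (an approximation, not a hard requirement) is:

    total PGs per pool ≈ (number of OSDs × 100) / pool size, rounded up to a power of two
    here: (1 × 100) / 1 = 100  →  next power of two is 128, hence "128 128" (pg_num and pgp_num)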

    root@ubuntu:/etc/cluster-ceph# ceph fs new -h

    fs new <fs_name> <metadata> <data> {--force} {--allow-dangerous-metadata-overlay}    make new filesystem using named pools <metadata> and <data>
    root@ubuntu:/etc/cluster-ceph# ceph fs new cephfs cephfs_metadata cephfs_data   ### create the cephfs filesystem
    new fs with metadata pool 2 and data pool 1
    root@ubuntu:/etc/cluster-ceph#
    root@ubuntu:/etc/cluster-ceph# ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
    root@ubuntu:/etc/cluster-ceph# mkdir /ceph   ### create the mount point

    root@ubuntu:/etc/cluster-ceph# cat ceph.client.admin.keyring   ### the credentials needed to mount cephfs
    [client.admin]
    key = AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
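    Instead of copying the key out of the keyring by hand, it can also be printed directly (same value as above):

    root@ubuntu:/etc/cluster-ceph# ceph auth get-key client.admin
    AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA==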

    root@ubuntu:/etc/cluster-ceph# mount -o name=admin,secret=AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA== -t ceph 192.168.3.103:6789:/ /ceph/
    root@ubuntu:/etc/cluster-ceph#
    root@ubuntu:/etc/cluster-ceph# df -hT
    Filesystem Type Size Used Avail Use% Mounted on
    udev devtmpfs 1.9G 0 1.9G 0% /dev
    tmpfs tmpfs 393M 1020K 392M 1% /run
    /dev/sda1 ext4 20G 5.5G 14G 30% /
    tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
    tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
    tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
    tmpfs tmpfs 393M 0 393M 0% /run/user/0
    tmpfs tmpfs 2.0G 52K 2.0G 1% /var/lib/ceph/osd/ceph-0
    192.168.3.103:6789:/ ceph 18G 0 18G 0% /ceph
    root@ubuntu:/etc/cluster-ceph# ll /ceph/
    total 0
    root@ubuntu:/etc/cluster-ceph#
    root@ubuntu:/etc/cluster-ceph#
    root@ubuntu:/etc/cluster-ceph# touch /ceph/sb
    root@ubuntu:/etc/cluster-ceph# ll /ceph/
    total 0
    -rw-r--r-- 1 root root 0 Feb 7 22:43 sb
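
    To make the mount survive a reboot, the key can be stored in a secret file and referenced from /etc/fstab (a sketch; the path /etc/ceph/admin.secret is just an example):

    root@ubuntu:/etc/cluster-ceph# ceph auth get-key client.admin > /etc/ceph/admin.secret
    root@ubuntu:/etc/cluster-ceph# chmod 600 /etc/ceph/admin.secret
    root@ubuntu:/etc/cluster-ceph# echo '192.168.3.103:6789:/  /ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0' >> /etc/fstab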

    ###ERROR

    tail -f /var/log/kern.log     ### watch the kernel log while mounting

    Feb  7 22:42:08 ubuntu kernel: [ 2436.477124] libceph: mon0 192.168.3.103:6789 session established
    Feb  7 22:42:08 ubuntu kernel: [ 2436.477983] libceph: client4207 fsid 081a571c-cb0b-452f-b583-ab4f82f8344a
    Feb  7 22:42:08 ubuntu kernel: [ 2436.478050] ceph: probably no mds server is up
    Feb  7 22:42:45 ubuntu kernel: [ 2473.042195] libceph: mon0 192.168.3.103:6789 session established
    Feb  7 22:42:45 ubuntu kernel: [ 2473.042338] libceph: client4215 fsid 081a571c-cb0b-452f-b583-ab4f82f8344a

    According to the log, the earlier mount failed because the mds service had not been created yet.

    root@ubuntu:/etc/cluster-ceph# ceph-deploy mds create `hostname`
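
    Once the mds daemon is running, its state can be confirmed before retrying the mount (a quick check, not part of the original transcript):

    root@ubuntu:/etc/cluster-ceph# ceph mds stat    ### should report the ubuntu mds as up:active for fs "cephfs"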

    Summary: some commands differ between ceph releases, which is quite frustrating, so always ask questions (and read documentation) against the specific ceph-xxx release you are running.
