  • ceph-pve English notes

    adapted accordingly
    adjusted to match; changed correspondingly

    silos
    n. storage towers; granaries; storage bins (plural of silo)

    saturate
    vt. to soak, to drench; to saturate, to fill to capacity
    While one HDD might not saturate a 1 Gb link

    likelihood
    n. probability; possibility

    aforementioned
    adj. mentioned above; previously mentioned

    fail-safe
    n. an automatic protection mechanism against failure

    colocated
    located at the same site; placed together

    budgeted
    adj. provided for in a budget

    devoted
    adj. dedicated; loyal

    In general, SSDs will provide more IOPS than spinning disks. This fact and the higher cost may make a
    class-based separation of pools (Section 4.2.9) appealing. Another possibility to speed up OSDs is to use a
    faster disk as a journal or DB/WAL device, see creating Ceph OSDs (Section 4.2.7). If a faster disk is used
    for multiple OSDs, a proper balance between OSD and WAL/DB (or journal) disk must be selected, otherwise
    the faster disk becomes the bottleneck for all linked OSDs.
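    A rough sketch of such a shared fast-disk setup (device paths, the 60 GiB DB size and the -db_dev/-db_size option names are assumptions here, check pveceph osd create --help on your release):

    # create two OSDs whose RocksDB/WAL live on one shared NVMe device
    pve# pveceph osd create /dev/sdb -db_dev /dev/nvme0n1 -db_size 60
    pve# pveceph osd create /dev/sdc -db_dev /dev/nvme0n1 -db_size 60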
    Aside from the disk type, Ceph performs best with an evenly sized and evenly distributed number of disks per
    node. For example, 4 x 500 GB disks in each node is better than a mixed setup with a single 1 TB disk and
    three 250 GB disks.
    One also needs to balance OSD count and single-OSD capacity. More capacity allows you to increase storage
    density, but it also means that a single OSD failure forces Ceph to recover more data at once.

    OSDs can also be backed by a combination of devices, like an HDD for most data and an SSD (or a partition of an SSD) for some metadata.

    BlueStore allows its internal journal (write-ahead log) to be written to a separate, high-speed device (like an SSD, NVMe, or NVDIMM) to increase performance.

    However, the most common practice is to partition the journal drive (often an SSD),
    and mount it such that Ceph uses the entire partition for the journal.

    block.db should be sized as large as possible; otherwise, an undersized block.db incurs performance penalties.

    When using a mixed spinning and solid-state drive setup, it is important to make a large enough block.db logical volume for BlueStore.
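    A minimal sketch of preparing such a logical volume by hand (the volume group name vg-ssd, the LV name osd0-db and the 60G size are placeholders, not sizing advice):

    # carve a generously sized DB logical volume on the SSD, then hand it to ceph-volume
    pve# lvcreate -L 60G -n osd0-db vg-ssd
    pve# ceph-volume lvm create --bluestore --data /dev/sdd --block.db vg-ssd/osd0-db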

    The Ceph objecter handles where to place the objects and the tiering agent determines when to flush objects from the cache to the backing storage tier.
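    A hedged sketch of wiring a cache tier to a backing pool with the standard Ceph commands (the pool names cold-pool and hot-pool are placeholders):

    # attach hot-pool as a writeback cache in front of cold-pool
    pve# ceph osd tier add cold-pool hot-pool
    pve# ceph osd tier cache-mode hot-pool writeback
    pve# ceph osd tier set-overlay cold-pool hot-pool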

    VMIDs < 100 are reserved for internal purposes, and VMIDs need to be unique cluster-wide.

    block is the primary device
    block.db or block.wal
    A DB device (identified as block.db)
    A WAL device (identified as block.wal)


    Two common usage scenarios (both are sketched below)
    1. BLOCK (DATA) ONLY
    it makes sense to just deploy with block only and not try to separate block.db or block.wal.

    2. BLOCK AND BLOCK.DB
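    A sketch of both scenarios with ceph-volume (device paths are placeholders):

    # 1. block (data) only, e.g. on an all-flash node
    pve# ceph-volume lvm create --bluestore --data /dev/sdb
    # 2. block on an HDD plus block.db on a faster SSD partition
    pve# ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1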


    -----------------------------------------------
    Bibliography

    n. list of references; catalogue of literature

    The Proxmox VE management tool (pvesh) allows you to directly invoke API functions without using the REST/HTTPS server.
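    For example, the cluster can be queried directly from a node shell (these are standard API paths; output omitted):

    # single calls against the local API
    pve# pvesh get /version
    pve# pvesh get /nodes
    pve# pvesh get /cluster/resources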

    # single time output
    pve# ceph -s
    # continuously output status changes (press CTRL+C to stop)
    pve# ceph -w

    -------------------------------------
    A volume is identified by the <STORAGE_ID>, followed by a storage-type-dependent volume name, separated by a colon. A valid <VOLUME_ID> looks like:

    local:230/example-image.raw

    pvesm path <VOLUME_ID>

    root@cu-pve04:/var/lib/vz# pvesm path kycfs:iso/CentOS-7-x86_64-Minimal-1810.iso
    /mnt/pve/kycfs/template/iso/CentOS-7-x86_64-Minimal-1810.iso

    root@cu-pve04:/var/lib/vz# pvesm path kycrbd:vm-102-disk-0
    rbd:kycrbd/vm-102-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/kycrbd.keyring
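    To discover valid <VOLUME_ID>s in the first place, the available storages and their volumes can be listed (the storage names are the ones used in the examples above):

    root@cu-pve04:/var/lib/vz# pvesm status
    root@cu-pve04:/var/lib/vz# pvesm list kycrbd
    root@cu-pve04:/var/lib/vz# pvesm list kycfs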
