  • ceph rgw multisite basic usage

    Realm: a globally unique namespace that contains one or more Zonegroups and holds the multisite configuration.
    Zonegroup: can be thought of as a data center, made up of one or more Zones. Each Realm has exactly one Master Zonegroup, which handles system configuration changes; the remaining (secondary) Zonegroups keep their metadata consistent with the Master Zonegroup.

    Zone: a logical concept containing one or more RGW instances. Each Zonegroup has exactly one Master Zone, which handles metadata changes such as bucket and user operations.

    Period: holds the current configuration of the realm; an epoch is used to track configuration versions.

    Metadata Sync: secondary zones pull metadata changes (users, buckets, and so on) from the master zone of the master zonegroup, keeping metadata consistent across the realm.

    Restart the RGW service after changing its configuration:

    systemctl restart ceph-radosgw@rgw.hostname

    Create a realm:

    A realm contains the notion of periods. Each period represents the state of the zonegroup and zone configuration at a point in time. Each time you make a change to a zonegroup or zone, update the period and commit it.

    All metadata names within a realm are globally unique; users (uid), buckets, and containers with duplicate names cannot be created.
    radosgw-admin realm create --rgw-realm=Giant --default
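
    As noted above, configuration changes only take effect once the period is updated and committed; a minimal sketch using the standard radosgw-admin command:

    radosgw-admin period update --commit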
    

    List existing realms:

    radosgw-admin realm list
    

    Create the Master zonegroup:

    Delete the default zonegroup and create the Master zonegroup.
    A default zonegroup exists for backward compatibility and must be deleted first.
    
    radosgw-admin zonegroup delete --rgw-zonegroup=default
    radosgw-admin zonegroup create --rgw-zonegroup=beijing --endpoints=beijing.com --master --default
    

    View zonegroup information:

    radosgw-admin zonegroup list
    radosgw-admin zonegroup get --rgw-zonegroup=<zonegroup name>
    

    Create the Master zone:

    Delete the default zone and create the Master zone.
    A default zone exists for backward compatibility and must be deleted first.
    radosgw-admin zone delete --rgw-zone=default
    # Create the Master zone and assign it to the zonegroup
    radosgw-admin zone create --rgw-zonegroup=beijing --rgw-zone=beijing --endpoints=beijing.com  --access-key=admin --secret=admin --default --master
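
    In a typical multisite deployment, a system user is also created whose keys match those passed to zone create above, so that zones can authenticate to each other. A minimal sketch (the uid and display name are hypothetical), followed by a period commit:

    radosgw-admin user create --uid=sync-user --display-name="Sync User" --access-key=admin --secret=admin --system
    radosgw-admin period update --commit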
    

    Secondary Zones:

    You must execute metadata operations, such as user creation, on a host within the master zone. The master zone and the secondary zone can receive bucket operations, but the secondary zone redirects bucket operations to the master zone. If the master zone is down, bucket operations will fail.
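
    The paragraph above describes behaviour only; below is a hedged sketch of creating a secondary zone on the second site (the zone name shanghai and endpoint shanghai.com are hypothetical, and the keys are assumed to match the master zone's system keys):

    # Pull the realm and current period from the master zone's endpoint
    radosgw-admin realm pull --url=http://beijing.com:80 --access-key=admin --secret=admin
    radosgw-admin period pull --url=http://beijing.com:80 --access-key=admin --secret=admin
    # Create the secondary zone in the same zonegroup (no --master flag)
    radosgw-admin zone create --rgw-zonegroup=beijing --rgw-zone=shanghai --endpoints=shanghai.com --access-key=admin --secret=admin --default
    radosgw-admin period update --commit
    systemctl restart ceph-radosgw@rgw.hostname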
    

    Bucket Sharding

    Mainly addresses the performance problems of the .rgw.buckets.index pool, which stores the bucket index data.

    rgw_override_bucket_index_max_shards

    default: 0  # sharding disabled
    recommended value: {number of objects expected in a bucket / 100,000}
    max value: 7877
    

    The default value for rgw_max_objs_per_shard is 100k objects per shard.
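
    A ceph.conf sketch of setting this option (the section name is illustrative; the value follows the recommendation above for a bucket expected to hold about 1,000,000 objects, and only affects buckets created afterwards):

    [client.rgw.hostname]
    # 1,000,000 expected objects / 100,000 objects per shard = 10 shards
    rgw_override_bucket_index_max_shards = 10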

    Dynamic resharding:
    rgw_dynamic_resharding = true

    rgw_reshard_num_logs: The number of shards for the resharding log. The default value is 16.

    rgw_reshard_bucket_lock_duration: The duration of the lock on a bucket during resharding. The default value is 120 seconds.

    rgw_dynamic_resharding: Enables or disables dynamic resharding. The default value is true.

    rgw_max_objs_per_shard: The maximum number of objects per shard. The default value is 100000 objects per shard.

    rgw_reshard_thread_interval: The maximum time between rounds of reshard thread processing. The default value is 600 seconds.
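
    A ceph.conf sketch with the defaults listed above made explicit (the section name is illustrative):

    [client.rgw.hostname]
    rgw_dynamic_resharding = true
    rgw_max_objs_per_shard = 100000
    rgw_reshard_thread_interval = 600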

    Add a bucket to the resharding queue:

    radosgw-admin reshard add --bucket <bucket_name> --num-shards <new number of shards>
    

    List the resharding queue:

    radosgw-admin reshard list
    

    Process the resharding queue manually:

    radosgw-admin reshard process
    

    Cancel a pending reshard for a bucket:

    radosgw-admin reshard cancel --bucket <bucket_name>
    

    Compression:

    Compression plugins:
    zlib: supported
    snappy, zstd: technology preview

    radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib
    

    After enabling or disabling compression, restart the Ceph Object Gateway instance so the change will take effect.
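
    To confirm the change, the zone configuration can be inspected; the compression type should appear in the placement target definition (a sketch, assuming the default zone as above):

    radosgw-admin zone get --rgw-zone=default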

    $ radosgw-admin bucket stats --bucket=<bucket_name>
    {
        ...
        "usage": {
            "rgw.main": {
                "size": 1075028,
                "size_actual": 1331200,
                "size_utilized": 592035,
                "size_kb": 1050,
                "size_kb_actual": 1300,
                "size_kb_utilized": 579,
                "num_objects": 104
            }
        },
        ...
    }

    The size_utilized and size_kb_utilized fields represent the total size of compressed data in bytes and kilobytes respectively.
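
    In the example above, the effective compression ratio is size_utilized / size = 592035 / 1075028 ≈ 0.55, i.e. roughly a 45% reduction in stored data for this bucket.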

    Quota management:

    Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes.

    IMPORTANT: Buckets with a large number of objects can cause serious performance issues.
    The recommended maximum number of objects in one bucket is 100,000. To increase this number,
    configure bucket index sharding.
    

    Set User Quotas:

    radosgw-admin quota set --quota-scope=user --uid=<uid> [--max-objects=<num objects>] [--max-size=<max size>]
    e.g.:
    	radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024
    A negative value for num objects and / or max size means that the specific quota attribute check is disabled.
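
    For example, to keep the size limit but disable the object-count check (values are illustrative):

    radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=-1 --max-size=1024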
    

    Enable and Disable User Quotas:

    #radosgw-admin quota enable --quota-scope=user --uid=<uid>
    #radosgw-admin quota disable --quota-scope=user --uid=<uid>
    

    Set Bucket Quotas:

    #radosgw-admin quota set --uid=<uid> --quota-scope=bucket [--max-objects=<num objects>] [--max-size=<max size>]
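
    For example, to cap each of a user's buckets at 10,000 objects and then enable the bucket quota (values are illustrative):

    radosgw-admin quota set --uid=johndoe --quota-scope=bucket --max-objects=10000
    radosgw-admin quota enable --quota-scope=bucket --uid=johndoe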
    

    Enable and Disable Bucket Quotas

    #radosgw-admin quota enable --quota-scope=bucket --uid=<uid>
    #radosgw-admin quota disable --quota-scope=bucket --uid=<uid>
    

    Get Quota Settings

    #radosgw-admin user info --uid=<uid>
    

    Update Quota Stats

    #radosgw-admin user stats --uid=<uid> --sync-stats
    

    Get User Quota Usage Stats

    #radosgw-admin user stats --uid=<uid>
    

    Quota Cache:

    The quota cache is controlled by the options rgw bucket quota ttl, rgw user quota bucket sync interval, and rgw user quota sync interval.
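
    A ceph.conf sketch (section name and values are illustrative, not necessarily the defaults; shorter intervals give fresher quota stats at the cost of more overhead):

    [client.rgw.hostname]
    rgw_bucket_quota_ttl = 600
    rgw_user_quota_bucket_sync_interval = 180
    rgw_user_quota_sync_interval = 3600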
    

    Show usage statistics for a user:

    #radosgw-admin usage show --uid=johndoe --start-date=2012-03-01 --end-date=2012-04-01
    #radosgw-admin usage show --show-log-entries=false
    

    Cleaning up orphan objects:

    1. Create a new log pool:
    	rados mkpool .log
    2. Search for orphan objects:
    	radosgw-admin orphans find --pool=<data_pool> --job-id=<job_name> [--num-shards=<num_shards>] [--orphan-stale-secs=<seconds>]
    3. Example:
    	radosgw-admin orphans find --pool=.rgw.buckets --job-id=abc123
    4. Clean up the search data:
    	radosgw-admin orphans finish --job-id=abc123
    

    Zones:

    Ceph Object Gateway supports the notion of zones. A zone defines a logical group consisting of one or more Ceph Object Gateway instances.
