  • Ceph S3 client operations -- s3cmd

    Accessing Ceph RGW with an S3 client

    Installation:

    yum install s3cmd

    Verify the installation succeeded:

    $s3cmd --version
    s3cmd version 1.5.2   # installation succeeded

    Create an S3 user on the Ceph admin host:

    sudo radosgw-admin user create --uid="test" --display-name="zhangsan"

    View the user:

    sudo radosgw-admin user info --uid="test" 

    This prints the user info as JSON:

    {
        "user_id": "test",
        "display_name": "zhangsan",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [
            {
                "id": "test:swift",
                "permissions": "full-control"
            }
        ],
        "keys": [
            {
                "user": "test",
                "access_key": "K770VAKJYC9PB0O9A113",
                "secret_key": "Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f"
            }
        ],
        "swift_keys": [
            {
                "user": "test:swift",
                "secret_key": "uoMo0ZYb9xlYanLAeGrzTQlT0ZBn8K6FaODTinKh"
            }
        ],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }
    

    In S3, the access_key and secret_key play the roles of a user ID and a user password, respectively:
    access_key -> user ID
    secret_key -> user password
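
    To make the role of the key pair concrete, here is a minimal sketch of the Signature V2 scheme s3cmd can use (`signature_v2 = True` in `.s3cfg`): the secret_key signs a canonical string and the access_key tells the server whose secret to verify against. The date and resource path below are made-up examples.

```python
import base64, hmac, hashlib

def sign_v2(secret_key: str, string_to_sign: str) -> str:
    """Base64-encoded HMAC-SHA1 over the canonical string --
    the core of AWS Signature Version 2."""
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Canonical string for a GET of /first_bucket/file.txt (illustrative date):
string_to_sign = "GET\n\n\nThu, 11 Jan 2018 03:00:00 GMT\n/first_bucket/file.txt"
signature = sign_v2("Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f", string_to_sign)

# The request then carries: Authorization: AWS <access_key>:<signature>
print(signature)
```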

    Configure the S3 client:

    $ s3cmd --configure
    
    Enter new values or accept defaults in brackets with Enter.
    Refer to user manual for detailed description of all options.
    
    Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
    Access Key []: K770VAKJYC9PB0O9A113
    Secret Key []: Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f
    Default Region [US]: 
    
    Encryption password is used to protect your files from reading
    by unauthorized persons while in transfer to S3
    Encryption password: 
    Path to GPG program [/usr/bin/gpg]: 
    
    When using secure HTTPS protocol all communication with Amazon S3
    servers is protected from 3rd party eavesdropping. This method is
    slower than plain HTTP, and can only be proxied with Python 2.7 or newer
    Use HTTPS protocol [No]: 
    
    On some networks all internet access must go through a HTTP proxy.
    Try setting it here if you can't connect to S3 directly
    HTTP Proxy server name: 
    
    New settings:
      Access Key: K770VAKJYC9PB0O9A113
      Secret Key: Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f
      Default Region: US
      Encryption password: 
      Path to GPG program: /usr/bin/gpg
      Use HTTPS protocol: False
      HTTP Proxy server name: 
      HTTP Proxy server port: 0
    
    Test access with supplied credentials? [Y/n] n
    
    Save settings? [y/N] y
    Configuration saved to '/root/.s3cfg'
    

    The interactive session above only sets the access_key and secret_key. To use a self-hosted cluster, three more options need to be configured:

      1. cloudfront_host
      2. host_base
      3. host_bucket 
    The corresponding settings in the saved file:
    $cat .s3cfg
    [default]
    access_key = K770VAKJYC9PB0O9A113
    access_token = 
    add_encoding_exts = 
    add_headers = 
    bucket_location = US
    ca_certs_file = 
    cache_file = 
    check_ssl_certificate = True
    check_ssl_hostname = True
    cloudfront_host = cephcloud.com
    default_mime_type = binary/octet-stream
    delay_updates = False
    delete_after = False
    delete_after_fetch = False
    delete_removed = False
    dry_run = False
    enable_multipart = True
    encoding = UTF-8
    encrypt = False
    expiry_date = 
    expiry_days = 
    expiry_prefix = 
    follow_symlinks = False
    force = False
    get_continue = False
    gpg_command = /usr/bin/gpg
    gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
    gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
    gpg_passphrase = 
    guess_mime_type = True
    host_base = cephcloud.com
    host_bucket = %(bucket)s.cephcloud.com
    human_readable_sizes = False
    invalidate_default_index_on_cf = False
    invalidate_default_index_root_on_cf = True
    invalidate_on_cf = False
    kms_key = 
    limitrate = 0
    list_md5 = False
    log_target_prefix = 
    long_listing = False
    max_delete = -1
    mime_type = 
    multipart_chunk_size_mb = 15
    multipart_max_chunks = 10000
    preserve_attrs = True
    progress_meter = True
    proxy_host = 
    proxy_port = 0
    put_continue = False
    recursive = False
    recv_chunk = 4096
    reduced_redundancy = False
    requester_pays = False
    restore_days = 1
    secret_key = Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f
    send_chunk = 4096
    server_side_encryption = False
    signature_v2 = False
    simpledb_host = sdb.amazonaws.com
    skip_existing = False
    socket_timeout = 300
    stats = False
    stop_on_error = False
    storage_class = 
    urlencoding_mode = normal
    use_https = False
    use_mime_magic = True
    verbosity = WARNING
    website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
    website_error = 
    website_index = index.html
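
    The three Ceph-specific values can be checked programmatically: `.s3cfg` is a plain INI file, so Python's standard configparser reads it. A sketch using a trimmed-down copy of the file above (the `%(bucket)s` placeholder is expanded by s3cmd itself, so it is read raw here):

```python
import configparser

# A trimmed-down .s3cfg with the three Ceph-specific settings.
cfg_text = """\
[default]
access_key = K770VAKJYC9PB0O9A113
secret_key = Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f
cloudfront_host = cephcloud.com
host_base = cephcloud.com
host_bucket = %(bucket)s.cephcloud.com
use_https = False
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

# %(bucket)s is an s3cmd placeholder, not configparser interpolation,
# so fetch it raw and expand it ourselves to see the virtual-host endpoint.
host_bucket = cfg.get("default", "host_bucket", raw=True)
endpoint = host_bucket % {"bucket": "first_bucket"}
print(endpoint)  # first_bucket.cephcloud.com
```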
    

    Test the S3 client:

    1. Create a bucket:

    $ s3cmd mb s3://first_bucket
    Bucket 's3://first_bucket/' created

    2. List buckets:

    $ s3cmd -v  ls
    2015-09-14 06:31  s3://first_bucket

    3. Delete a bucket:

    $ s3cmd rb s3://first_bucket
    Bucket 's3://first_bucket/' removed

    4. Upload a file object:

    $ s3cmd put file.txt s3://first_bucket
    file.txt -> s3://first_bucket/file.txt  [1 of 1]
     12 of 12   100% in    0s   180.54 B/s  done

    On the Ceph admin host, view the bucket information:

    sudo radosgw-admin bucket list --uid=test
    [
    "first_bucket"
    ]
    
    sudo radosgw-admin bucket stats --uid=test
    [
        {
            "bucket": "first_bucket",
            "pool": "default.rgw.buckets.data",
            "index_pool": "default.rgw.buckets.index",
            "id": "89f1b54d-1a56-4cc2-a642-dab15e837719.14226.2",
            "marker": "89f1b54d-1a56-4cc2-a642-dab15e837719.14226.2",
            "owner": "test",
            "ver": "0#3",
            "master_ver": "0#0",
            "mtime": "2018-01-10 10:57:24.860956",
            "max_marker": "0#",
            "usage": {
                "rgw.main": {
                    "size_kb": 3,
                    "size_kb_actual": 4,
                    "num_objects": 1
                }
            },
            "bucket_quota": {
                "enabled": false,
                "max_size_kb": -1,
                "max_objects": -1
            }
        }
    ]
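
    Note the gap between size_kb (3) and size_kb_actual (4) above: RGW reports both the logical size and the size rounded up to the allocation unit. Assuming a 4 KB minimum allocation, the rounding works like this:

```python
import math

def size_kb_actual(size_bytes: int, alloc_kb: int = 4) -> int:
    """Round a logical object size up to the allocation unit, the way
    bucket stats reports size_kb_actual (4 KB allocation assumed)."""
    return math.ceil(size_bytes / (alloc_kb * 1024)) * alloc_kb

# A 3 KB object occupies one 4 KB allocation unit:
print(size_kb_actual(3 * 1024))  # 4
```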
    

      

    Basic operations

    Commands:
    Make bucket
    s3cmd mb s3://BUCKET
    Remove bucket
    s3cmd rb s3://BUCKET
    List objects or buckets
    s3cmd ls [s3://BUCKET[/PREFIX]]
    List all objects in all buckets
    s3cmd la 
    Put file into bucket
    s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
    Get file from bucket
    s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
    Delete file from bucket
    s3cmd del s3://BUCKET/OBJECT
    Delete file from bucket (alias for del)
    s3cmd rm s3://BUCKET/OBJECT
    Restore file from Glacier storage
    s3cmd restore s3://BUCKET/OBJECT
    Synchronize a directory tree to S3 (checks files freshness using size and md5 checksum, unless overridden by options, see below)
    s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
    Disk usage by buckets
    s3cmd du [s3://BUCKET[/PREFIX]]
    Get various information about Buckets or Files
    s3cmd info s3://BUCKET[/OBJECT]
    Copy object
    s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
    Modify object metadata
    s3cmd modify s3://BUCKET1/OBJECT
    Move object
    s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
    Modify Access control list for Bucket or Files
    s3cmd setacl s3://BUCKET[/OBJECT]
    Modify Bucket Policy
    s3cmd setpolicy FILE s3://BUCKET
    Delete Bucket Policy
    s3cmd delpolicy s3://BUCKET
    Modify Bucket CORS
    s3cmd setcors FILE s3://BUCKET
    Delete Bucket CORS
    s3cmd delcors s3://BUCKET
    Modify Bucket Requester Pays policy
    s3cmd payer s3://BUCKET
    Show multipart uploads
    s3cmd multipart s3://BUCKET [Id]
    Abort a multipart upload
    s3cmd abortmp s3://BUCKET/OBJECT Id
    List parts of a multipart upload
    s3cmd listmp s3://BUCKET/OBJECT Id
    Enable/disable bucket access logging
    s3cmd accesslog s3://BUCKET
    Sign arbitrary string using the secret key
    s3cmd sign STRING-TO-SIGN
    Sign an S3 URL to provide limited public access with expiry
    s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
    Fix invalid file names in a bucket
    s3cmd fixbucket s3://BUCKET[/PREFIX]
    Create Website from bucket
    s3cmd ws-create s3://BUCKET
    Delete Website
    s3cmd ws-delete s3://BUCKET
    Info about Website
    s3cmd ws-info s3://BUCKET
    Set or delete expiration rule for the bucket
    s3cmd expire s3://BUCKET
    Upload a lifecycle policy for the bucket
    s3cmd setlifecycle FILE s3://BUCKET
    Remove a lifecycle policy for the bucket
    s3cmd dellifecycle s3://BUCKET
    List CloudFront distribution points
    s3cmd cflist 
    Display CloudFront distribution point parameters
    s3cmd cfinfo [cf://DIST_ID]
    Create CloudFront distribution point
    s3cmd cfcreate s3://BUCKET
    Delete CloudFront distribution point
    s3cmd cfdelete cf://DIST_ID
    Change CloudFront distribution point parameters
    s3cmd cfmodify cf://DIST_ID
    Display CloudFront invalidation request(s) status
    s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
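
    Of these, `s3cmd signurl` is worth a closer look: it builds a query-string-authenticated URL granting temporary public access to an object. A rough sketch of the idea (Signature V2; the hostname and expiry timestamp are made-up examples):

```python
import base64, hmac, hashlib, urllib.parse

def signurl(access_key: str, secret_key: str, bucket: str, key: str,
            expires_epoch: int, host: str = "cephcloud.com") -> str:
    """Sketch of a query-string-authenticated URL like `s3cmd signurl`
    produces: valid for GET until expires_epoch, then rejected."""
    string_to_sign = f"GET\n\n\n{expires_epoch}\n/{bucket}/{key}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()).decode()
    query = urllib.parse.urlencode({
        "AWSAccessKeyId": access_key,
        "Expires": expires_epoch,
        "Signature": sig,
    })
    return f"http://{bucket}.{host}/{key}?{query}"

url = signurl("K770VAKJYC9PB0O9A113",
              "Y1P8tZWsrul1ZOTMPqCiZqNMh13a1IGRxtgYC14f",
              "first_bucket", "file.txt", 1600000000)
print(url)
```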

     
     
  • Original article: https://www.cnblogs.com/kuku0223/p/8257553.html