  • Mounting Ceph object storage as a local disk on Windows with Rclone

    On Windows, Rclone can mount Ceph object storage as a local disk, which makes it very convenient to use.

    1. First, download rclone for Windows and the related dependency WinFsp:

    https://rclone.org/downloads/

    http://www.secfs.net/winfsp/rel/

     

    2. Install the software

    Download rclone and extract it to a directory.

    Download and install WinFsp; the installation is straightforward, just follow the prompts and click Next through the installer.

    3. Add rclone to the PATH environment variable

    Adding rclone to the PATH environment variable makes it easier to use; otherwise you would have to type the absolute path to rclone every time you run it from a cmd window.

    First open System Properties -> Environment Variables.

    Under System variables, find PATH and click Edit.

    Click New and enter the path of the folder where rclone is stored.

    Open a cmd window and run the command: rclone version. If it prints the rclone version information, the environment variable is configured correctly.
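
    To double-check which executable Windows picks up from PATH, the built-in where command prints its full path (a quick sketch; C:\rclone is an assumed extraction directory, so your path will differ):

    C:\>where rclone
    C:\rclone\rclone.exe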

    4. Configure rclone

    C:\Windows\system32>rclone config
    2021/01/28 19:18:40 NOTICE: Config file "C:\Users\Administrator\.config\rclone\rclone.conf" not found - using defaults
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n                                   ## choose "n" to create a new remote
    name> test_gw                       ## enter a name; it must not contain special characters, Chinese characters, or spaces
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / 1Fichier
        "fichier"
     2 / Alias for an existing remote
        "alias"
     3 / Amazon Drive
        "amazon cloud drive"
     4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
        "s3"
     5 / Backblaze B2
        "b2"
     6 / Box
        "box"
     7 / Cache a remote
        "cache"
     8 / Citrix Sharefile
        "sharefile"
     9 / Dropbox
        "dropbox"
    10 / Encrypt/Decrypt a remote
        "crypt"
    11 / FTP Connection
        "ftp"
    12 / Google Cloud Storage (this is not Google Drive)
        "google cloud storage"
    13 / Google Drive
        "drive"
    14 / Google Photos
        "google photos"
    15 / Hubic
        "hubic"
    16 / In memory object storage system.
        "memory"
    17 / Jottacloud
        "jottacloud"
    18 / Koofr
        "koofr"
    19 / Local Disk
        "local"
    20 / Mail.ru Cloud
        "mailru"
    21 / Mega
        "mega"
    22 / Microsoft Azure Blob Storage
        "azureblob"
    23 / Microsoft OneDrive
        "onedrive"
    24 / OpenDrive
        "opendrive"
    25 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
        "swift"
    26 / Pcloud
        "pcloud"
    27 / Put.io
        "putio"
    28 / QingCloud Object Storage
        "qingstor"
    29 / SSH/SFTP Connection
        "sftp"
    30 / Sugarsync
        "sugarsync"
    31 / Tardigrade Decentralized Cloud Storage
        "tardigrade"
    32 / Transparently chunk/split large files
        "chunker"
    33 / Union merges the contents of several upstream fs
        "union"
    34 / Webdav
        "webdav"
    35 / Yandex Disk
        "yandex"
    36 / http Connection
        "http"
    37 / premiumize.me
        "premiumizeme"
    38 / seafile
        "seafile"
    Storage> 4                       ## choose "4", since we are using S3
    ** See help for s3 backend at: https://rclone.org/s3/ **
    
    Choose your S3 provider.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Amazon Web Services (AWS) S3
        "AWS"
     2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
        "Alibaba"
     3 / Ceph Object Storage
        "Ceph"
     4 / Digital Ocean Spaces
        "DigitalOcean"
     5 / Dreamhost DreamObjects
        "Dreamhost"
     6 / IBM COS S3
        "IBMCOS"
     7 / Minio Object Storage
        "Minio"
     8 / Netease Object Storage (NOS)
        "Netease"
     9 / Scaleway Object Storage
        "Scaleway"
    10 / StackPath Object Storage
        "StackPath"
    11 / Tencent Cloud Object Storage (COS)
        "TencentCOS"
    12 / Wasabi Object Storage
        "Wasabi"
    13 / Any other S3 compatible provider
        "Other"
    provider> xiang       ## enter a provider name, or enter "3" for Ceph
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    Only applies if access_key_id and secret_access_key is blank.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    Choose a number from below, or type in your own value
     1 / Enter AWS credentials in the next step
        "false"
     2 / Get AWS credentials from the environment (env vars or IAM)
        "true"
    env_auth>                 ## just press Enter
    AWS Access Key ID.
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    access_key_id> 0GA1LO5QXYOAFO4FY1DG                 ## enter the object user's access key
    AWS Secret Access Key (password)
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    secret_access_key> h3VcSH0K1vYtIBbc3vz2gvpVX3fAjFAZWwgBzkbT        ## enter the object user's secret key
    Region to connect to.
    Leave blank if you are using an S3 clone and you don't have a region.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Use this if unsure. Will use v4 signatures and an empty region.
        ""
     2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
        "other-v2-signature"
    region>                   ## just press Enter to use v4 signatures
    Endpoint for S3 API.
    Required when using an S3 clone.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    endpoint> http://192.168.3.11:7480           ## enter the RGW gateway address
    Location constraint - must be set to match the Region.
    Leave blank if not sure. Used when creating buckets only.
    Enter a string value. Press Enter for the default ("").
    location_constraint>                ## just press Enter
    Canned ACL used when creating buckets and storing or copying objects.
    
    This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
    
    For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    
    Note that this ACL is applied when server side copying objects as S3
    doesn't copy the ACL from the source but rather writes a fresh one.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Owner gets FULL_CONTROL. No one else has access rights (default).
        "private"
     2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
        "public-read"
       / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
     3 | Granting this on a bucket is generally not recommended.
        "public-read-write"
     4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
        "authenticated-read"
       / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
     5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
        "bucket-owner-read"
       / Both the object owner and the bucket owner get FULL_CONTROL over the object.
     6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
        "bucket-owner-full-control"
    acl>                 ## just press Enter
    Edit advanced config? (y/n)
    y) Yes
    n) No (default)
    y/n>               ## just press Enter
    
    Remote config
    --------------------
    
    [test_gw]
    type = s3
    provider = xiang
    access_key_id = 0GA1LO5QXYOAFO4FY1DG
    secret_access_key = h3VcSH0K1vYtIBbc3vz2gvpVX3fAjFAZWwgBzkbT
    
    endpoint = http://192.168.3.11:7480
    --------------------
    
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d>               ## after confirming the configuration above is correct, just press Enter
    Current remotes:
    
    Name                 Type
    ====                 ====
    test_gw              s3
    
    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q           ## enter "q" to finish the configuration

    After the configuration is complete, a file named rclone.conf can be found under the C:\Users\<your username>\.config\rclone folder. This is rclone's configuration file; if you need to change the configuration later, you can open this file with Notepad and edit it.
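
    If you are not sure where the configuration file lives, rclone can print the path itself; the output should look roughly like this (the path shown is the default from the transcript above):

    C:\>rclone config file
    Configuration file is stored at:
    C:\Users\Administrator\.config\rclone\rclone.conf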

     

    5. Mount the Ceph object storage as a local disk

    First, on a Ceph node, check the list of buckets:

    radosgw-admin bucket list
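
    You can also list the buckets and their contents directly from the Windows client through the remote configured above (a sketch; test_gw is the remote name from step 4, and bucket1 is an existing bucket):

    rclone lsd test_gw:
    rclone ls test_gw:bucket1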

    On Windows, open a cmd window and enter the following command:

    rclone mount -vv test2:/bucket1  Q: --cache-dir c:\temp  --allow-other --attr-timeout 5m --vfs-cache-mode full --vfs-cache-max-age 2h --vfs-cache-max-size 10G --vfs-read-chunk-size-limit 100M --buffer-size 100M --fast-list --checkers 64 --transfers 64  &
    • rclone mount: the rclone mount command

    • -vv: debug mode; all runtime status is printed to the terminal so you can follow what the command is doing

    • test2:/bucket1: test2 is the remote name set in the first step of rclone config (test_gw in the example above), and bucket1 is the bucket name

    • Q: : the drive letter to mount to; it must not be a drive letter that is already in use

    • --cache-dir: files are first cached in this directory before being written to the bucket

    • --allow-other: allow users other than the current rclone user to access the mount

    • --attr-timeout 5m: how long file attributes (size, modification time, etc.) are cached. On a low-spec VPS, consider raising this value to reduce kernel interaction and lower resource usage.

    • --vfs-cache-mode full: enable the VFS file cache, which reduces the number of API calls rclone makes and improves file read/write performance

    • --vfs-cache-max-age 2h: how long files stay in the VFS cache (the default is 1 hour). Note that this timer starts when a file has been successfully written to the cache, not when it has been successfully uploaded to the remote; the command above uses 2 hours, and if your files rarely change, a longer value is recommended

    • --vfs-cache-max-size 10G: upper limit on the VFS file cache size; keeping it below 50% of the currently free disk space is recommended. Note that actual usage can exceed this limit: the remaining cache space is only checked when a file upload starts, and if the file being uploaded is larger than the remaining cache space, rclone will not delete that file's cache entry while it is uploading; only after the upload to the bucket succeeds does rclone evict the oldest cached files to bring the total back under the limit

    • --vfs-read-chunk-size-limit 100M: chunked read size, set here to 100M, which improves read performance; a 1 GB file, for example, is read in roughly 10 chunks, although this also increases the number of API requests. When reading a file from the bucket, only this amount plus the --buffer-size amount is downloaded locally; this parameter's data is kept on disk, while --buffer-size is kept in memory

    • --buffer-size 100M: in-memory buffer; lower this value if you have little RAM, or raise it if you have plenty

    • --fast-list: add this if you have a large number of files or folders, at the cost of extra memory usage

    • --checkers 64: number of files checked in parallel (default 8)

    • --transfers 64: number of parallel file transfers (default 4)

    • --daemon: run in the background; supported on Linux but not on Windows

    Note that debug mode is enabled here, so quite a lot of information is printed. Normally, once the command prints "The service rclone has been started.", the mount has succeeded; if you do not need debug mode, just remove the -vv flag.

    After a successful mount, a Q: drive appears in This PC. This drive is the bucket named bucket1 on Ceph, and reading and writing to it works just like a local disk.

    With debug mode enabled, you can watch file upload progress in real time in the cmd window.
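
    A quick way to verify the mount from a second cmd window is to drop a file onto the new drive and then list the bucket through the remote (a sketch; test.txt is just an example file name, and test_gw is the remote configured above):

    echo hello > Q:\test.txt
    rclone ls test_gw:bucket1

    With --vfs-cache-mode full the object may take a moment to show up in the bucket, because it is first written to the local cache and then uploaded.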

    By default rclone mounts the bucket as a local disk. Adding the --fuse-flag --VolumePrefix=\server\share parameter mounts it as a network drive instead (according to the official documentation, mounting as a network drive performs slightly better on Windows, though I have not verified this myself). If you want to mount multiple buckets, change "share" in the command to a different name to avoid conflicts:

    rclone mount test2:/bucket1 q: --fuse-flag --VolumePrefix=\server\share --cache-dir D:\media --vfs-cache-mode writes &

    6. Mount automatically at startup

    After you run the rclone mount command in a cmd window, that window has to stay open; once it is closed, the mounted drive disappears, which is inconvenient. Instead, you can write a VBScript and place the script (or a shortcut to it) in the Startup folder, so the drive is mounted automatically every time the system boots.

    Create a new text document, copy the following commands into it, and then change the file extension to .vbs:

    dim objShell 
    set objShell=wscript.createObject("WScript.Shell") 
    iReturnCode=objShell.Run("rclone mount  test2:/bucket1  Q: --cache-dir c:\temp  --allow-other --attr-timeout 5m --vfs-cache-mode full --vfs-cache-max-age 2h --vfs-cache-max-size 10G --vfs-read-chunk-size-limit 100M --buffer-size 100M --fast-list --checkers 64 --transfers 64",0,TRUE)
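
    You can test the script once from a cmd window before relying on it at startup (a sketch; the name mount.vbs and its location are assumptions):

    cscript //nologo "C:\Users\Administrator\Desktop\mount.vbs"

    Because the script passes 0 and TRUE to Run, rclone runs with its window hidden and the script waits for it to exit; to unmount during a test, end the rclone process (for example from Task Manager).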

    To open the Startup folder, type the following command into the Start menu search box or into the Run dialog and press Enter: shell:Common Startup

    Copy the .vbs script into the C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp folder.

    Once this is done, restart the system and the drive is mounted automatically. Very convenient.

  • Original article: https://www.cnblogs.com/xzy186/p/14430650.html