  • Interfacing a C project with the Ceph distributed storage system

    I saw someone online asking how to call the API that Ceph exposes from a C project in order to use its distributed storage.

    I found the relevant information on the web, but since I am not a member of that site I could not post an answer there, so I am sharing it here instead.

    Offered to whoever may find it useful :)

    ————————————————————————————————————

    The Ceph Storage Cluster provides the basic storage service that allows Ceph to uniquely deliver object, block, and file storage in one unified system. However, you are not limited to using the RESTful, block, or POSIX interfaces. Based upon RADOS, the librados API enables you to create your own interface to the Ceph Storage Cluster.

    The librados API enables you to interact with the two types of daemons in the Ceph Storage Cluster:

    • The Ceph Monitor, which maintains a master copy of the cluster map.
    • The Ceph OSD Daemon (OSD), which stores data as objects on a storage node.

    This guide provides a high-level introduction to using librados. Refer to Architecture for additional details of the Ceph Storage Cluster. To use the API, you need a running Ceph Storage Cluster. See Installation (Quick) for details.

    GETTING LIBRADOS

    Your client application must link against librados to connect to the Ceph Storage Cluster. Before writing an application that uses librados, install librados and its dependencies. The librados API itself is implemented in C++, and additional bindings are available for C, Python, Java, and PHP.

    GETTING LIBRADOS FOR C/C++

    To install librados development support files for C/C++ on Debian/Ubuntu distributions, execute the following:

      sudo apt-get install librados-dev
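
    On RPM-based distributions (e.g., RHEL/CentOS), the development package is typically named librados2-devel, so the equivalent command is likely:

      sudo yum install librados2-devel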

    CONFIGURING A CLUSTER HANDLE

    Ceph Client, via librados, interacts directly with OSDs to store and retrieve data. To interact with OSDs, the client app must invoke librados and connect to a Ceph Monitor. Once connected, librados retrieves the Cluster Map from the Ceph Monitor. When the client app wants to read or write data, it creates an I/O context and binds to a pool. The pool has an associated ruleset that defines how it will place data in the storage cluster. Via the I/O context, the client provides the object name to librados, which takes the object name and the cluster map (i.e., the topology of the cluster) and computes the placement group and OSD for locating the data. Then the client application can read or write data. The client app doesn’t need to learn about the topology of the cluster directly.

    The Ceph Storage Cluster handle encapsulates the client configuration, including:

    • The user ID for rados_create() or user name for rados_create2() (preferred).
    • The cephx authentication key
    • The monitor ID and IP address
    • Logging levels
    • Debugging levels

    Thus, the first steps in using the cluster from your app are to 1) create a cluster handle that your app will use to connect to the storage cluster, and then 2) use that handle to connect. To connect to the cluster, the app must supply a monitor address, a username and an authentication key (cephx is enabled by default).

    Tip

    Talking to different Ceph Storage Clusters – or to the same cluster with different users – requires different cluster handles.
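
    As a small illustration of this tip (the second user name, client.guest, is just a placeholder and is not defined anywhere in this article), each identity gets its own handle that is configured and connected independently:

    rados_t admin_cluster, guest_cluster;
    rados_create2(&admin_cluster, "ceph", "client.admin", 0);   /* handle for one user */
    rados_create2(&guest_cluster, "ceph", "client.guest", 0);   /* separate handle for another user */
    /* Each handle is then configured (rados_conf_read_file) and connected (rados_connect) on its own. */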

    RADOS provides a number of ways for you to set the required values. For the monitor and encryption key settings, an easy way to handle them is to ensure that your Ceph configuration file contains a keyring path to a keyring file and at least one monitor address (e.g., mon host). For example:

    [global]
    mon host = 192.168.1.1
    keyring = /etc/ceph/ceph.client.admin.keyring

    Once you create the handle, you can read a Ceph configuration file to configure the handle. You can also pass arguments to your app and parse them with the function for parsing command line arguments (e.g., rados_conf_parse_argv()), or parse Ceph environment variables (e.g., rados_conf_parse_env()). Some wrappers may not implement convenience methods, so you may need to implement these capabilities. The upstream Ceph documentation provides a high-level flow diagram for the initial connection.
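
    As a rough sketch of those alternatives (assuming `cluster` is the rados_t handle created as in the example below, and `argc`/`argv` come from main()), the same settings can also be injected programmatically:

    /* Equivalent to the configuration file entries shown above. */
    rados_conf_set(cluster, "mon_host", "192.168.1.1");
    rados_conf_set(cluster, "keyring", "/etc/ceph/ceph.client.admin.keyring");

    /* Or pick up settings from the CEPH_ARGS environment variable and the command line. */
    rados_conf_parse_env(cluster, NULL);
    rados_conf_parse_argv(cluster, argc, argv);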

    Once connected, your app can invoke functions that affect the whole cluster with only the cluster handle. For example, once you have a cluster handle, you can:

    • Get cluster statistics
    • Use Pool Operation (exists, create, list, delete)
    • Get and set the configuration
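
    For example, a minimal sketch of such cluster-wide calls, assuming `cluster` is a connected rados_t handle as in the example further below (error handling mostly omitted):

    #include <stdio.h>
    #include <string.h>
    #include <rados/librados.h>

    /* Print basic cluster statistics and the names of all pools.
     * Pools can likewise be created, checked and deleted with
     * rados_pool_create(), rados_pool_lookup() and rados_pool_delete(). */
    static void show_cluster_info(rados_t cluster)
    {
            struct rados_cluster_stat_t st;
            if (rados_cluster_stat(cluster, &st) == 0)
                    printf("cluster: %llu KB used of %llu KB, %llu objects\n",
                           (unsigned long long)st.kb_used,
                           (unsigned long long)st.kb,
                           (unsigned long long)st.num_objects);

            /* rados_pool_list() fills the buffer with NUL-separated pool names,
             * terminated by an empty name, and returns the length required. */
            char pools[1024];
            int r = rados_pool_list(cluster, pools, sizeof(pools));
            if (r >= 0 && (size_t)r <= sizeof(pools)) {
                    const char *p = pools;
                    while (*p != '\0') {
                            printf("pool: %s\n", p);
                            p += strlen(p) + 1;
                    }
            }
    }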

    One of the powerful features of Ceph is the ability to bind to different pools. Each pool may have a different number of placement groups, object replicas and replication strategies. For example, a pool could be set up as a “hot” pool that uses SSDs for frequently used objects or a “cold” pool that uses erasure coding.

    The main difference in the various librados bindings is between C and the object-oriented bindings for C++, Java and Python. The object-oriented bindings use objects to represent cluster handles, IO Contexts, iterators, exceptions, etc.

    EXAMPLE (link against librados.so)
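
    To build the example below, compile it and link against librados with -lrados; assuming the source is saved as ceph_client.c (the file name is arbitrary):

      gcc ceph_client.c -lrados -o ceph_client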

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rados/librados.h>
    
    int main (int argc, const char **argv)
    {
    
            /* Declare the cluster handle and required arguments. */
            rados_t cluster;
            char cluster_name[] = "ceph";
            char user_name[] = "client.admin";
            uint64_t flags = 0;
    
            /* Initialize the cluster handle with the "ceph" cluster name and the "client.admin" user */
            int err;
            err = rados_create2(&cluster, cluster_name, user_name, flags);
    
            if (err < 0) {
                    fprintf(stderr, "%s: Couldn't create the cluster handle! %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nCreated a cluster handle.\n");
            }
    
    
            /* Read a Ceph configuration file to configure the cluster handle. */
            err = rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
            if (err < 0) {
                    fprintf(stderr, "%s: cannot read config file: %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRead the config file.\n");
            }
    
            /* Read command line arguments */
            err = rados_conf_parse_argv(cluster, argc, argv);
            if (err < 0) {
                    fprintf(stderr, "%s: cannot parse command line arguments: %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nRead the command line arguments.\n");
            }
    
            /* Connect to the cluster */
            err = rados_connect(cluster);
            if (err < 0) {
                    fprintf(stderr, "%s: cannot connect to cluster: %s\n", argv[0], strerror(-err));
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nConnected to the cluster.\n");
            }

    CREATING AN I/O CONTEXT

    Once your app has a cluster handle and a connection to a Ceph Storage Cluster, you may create an I/O Context and begin reading and writing data. An I/O Context binds the connection to a specific pool. The user must have appropriate CAPS permissions to access the specified pool. For example, a user with read access but not write access will only be able to read data. I/O Context functionality includes:

    • Write/read data and extended attributes
    • List and iterate over objects and extended attributes
    • Snapshot pools, list snapshots, etc.
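
    As an illustration of the second item above, here is a minimal sketch of iterating over the objects in a pool, assuming `io` is an open I/O context like the one created in the example below (rados_nobjects_list_* is the listing API in current librados versions):

    #include <stdio.h>
    #include <rados/librados.h>

    /* Print the name of every object in the pool bound to this I/O context. */
    static void list_objects(rados_ioctx_t io)
    {
            rados_list_ctx_t ctx;
            if (rados_nobjects_list_open(io, &ctx) < 0)
                    return;

            const char *entry, *key, *nspace;
            /* rados_nobjects_list_next() returns -ENOENT once the listing is exhausted. */
            while (rados_nobjects_list_next(ctx, &entry, &key, &nspace) == 0)
                    printf("object: %s\n", entry);

            rados_nobjects_list_close(ctx);
    }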

    RADOS enables you to interact both synchronously and asynchronously. Once your app has an I/O Context, read/write operations only require you to know the object/xattr name. The CRUSH algorithm encapsulated in librados uses the cluster map to identify the appropriate OSD. OSD daemons handle the replication, as described in Smart Daemons Enable Hyperscale. The librados library also maps objects to placement groups, as described in Calculating PG IDs.

    The following examples use the default data pool. However, you may also use the API to list pools, ensure they exist, or create and delete pools. For the write operations, the examples illustrate how to use synchronous mode. For the read operations, the examples illustrate how to use asynchronous mode.

    Important

    Use caution when deleting pools with this API. If you delete a pool, the pool and ALL DATA in the pool will be lost.

    EXAMPLE

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rados/librados.h>
    
    int main (int argc, const char **argv)
    {
            /*
             * Continued from previous C example, where cluster handle and
             * connection are established. First declare an I/O Context.
             */
    
            rados_ioctx_t io;
            char *poolname = "data";
    
            err = rados_ioctx_create(cluster, poolname, &io);
            if (err < 0) {
                    fprintf(stderr, "%s: cannot open rados pool %s: %s\n", argv[0], poolname, strerror(-err));
                    rados_shutdown(cluster);
                    exit(EXIT_FAILURE);
            } else {
                    printf("\nCreated I/O context.\n");
            }
    
            /* Write data to the cluster synchronously. */
            err = rados_write(io, "hw", "Hello World!", 12, 0);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot write object \"hw\" to pool %s: %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            } else {
                    printf("\nWrote \"Hello World\" to object \"hw\".\n");
            }
    
            char xattr[] = "en_US";
            err = rados_setxattr(io, "hw", "lang", xattr, 5);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot write xattr to pool %s: %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            } else {
                    printf("\nWrote \"en_US\" to xattr \"lang\" for object \"hw\".\n");
            }
    
            /*
             * Read data from the cluster asynchronously.
             * First, set up asynchronous I/O completion.
             */
            rados_completion_t comp;
            err = rados_aio_create_completion(NULL, NULL, NULL, &comp);
            if (err < 0) {
                    fprintf(stderr, "%s: Could not create aio completion: %s\n", argv[0], strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            } else {
                    printf("\nCreated AIO completion.\n");
            }
    
            /* Next, schedule the read using rados_aio_read. */
            char read_res[100];
            memset(read_res, 0, sizeof(read_res)); /* keep the buffer NUL-terminated for printing */
            err = rados_aio_read(io, "hw", comp, read_res, 12, 0);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot read object. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            }

            /* Wait for the operation to complete; read_res is only valid afterwards. */
            rados_aio_wait_for_complete(comp);
            printf("\nRead object \"hw\". The contents are:\n %s \n", read_res);

            /* Release the asynchronous I/O completion handle to avoid memory leaks. */
            rados_aio_release(comp);
    
    
            char xattr_res[100];
            memset(xattr_res, 0, sizeof(xattr_res)); /* keep the buffer NUL-terminated for printing */
            err = rados_getxattr(io, "hw", "lang", xattr_res, 5);
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot read xattr. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            } else {
                    printf("\nRead xattr \"lang\" for object \"hw\". The contents are:\n %s \n", xattr_res);
            }
    
            err = rados_rmxattr(io, "hw", "lang");
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot remove xattr. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            } else {
                    printf("\nRemoved xattr \"lang\" for object \"hw\".\n");
            }
    
            err = rados_remove(io, "hw");
            if (err < 0) {
                    fprintf(stderr, "%s: Cannot remove object. %s %s\n", argv[0], poolname, strerror(-err));
                    rados_ioctx_destroy(io);
                    rados_shutdown(cluster);
                    exit(1);
            } else {
                    printf("\nRemoved object \"hw\".\n");
            }
    }

    CLOSING SESSIONS

    EXAMPLE

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);

    Finally:

    Most of the articles you can find online come from researchers at big companies; they cover plenty of theory but say almost nothing about how to actually use it in day-to-day development.

    This post is meant as a practical supplement. I hope it helps those who need it!
