    https://avro.apache.org/docs/current/

    Introduction

    Apache Avro™ is a data serialization system.

    Avro provides:

    • Rich data structures.
    • A compact, fast, binary data format.
    • A container file, to store persistent data.
    • Remote procedure call (RPC).
    • Simple integration with dynamic languages. Code generation is not required to read or write data files nor to use or implement RPC protocols. Code generation is an optional optimization, only worth implementing for statically typed languages.

    Schemas

    Avro relies on schemas. When Avro data is read, the schema used when writing it is always present. This permits each datum to be written with no per-value overheads, making serialization both fast and small. This also facilitates use with dynamic, scripting languages, since data, together with its schema, is fully self-describing.

    When Avro data is stored in a file, its schema is stored with it, so that files may be processed later by any program. If the program reading the data expects a different schema this can be easily resolved, since both schemas are present.
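
    A minimal sketch of this, assuming the official avro Python package (the schema-parsing function is avro.schema.parse in recent releases, Parse in some older ones) and a hypothetical user.avsc file holding a User record schema (an example of such a schema appears below, after the JSON discussion): the writer embeds the schema in the container file, and the reader supplies no schema at all.

        import avro.schema
        from avro.datafile import DataFileReader, DataFileWriter
        from avro.io import DatumReader, DatumWriter

        # Writer side: the schema is written once into the container file's header.
        schema = avro.schema.parse(open("user.avsc", "rb").read())
        writer = DataFileWriter(open("users.avro", "wb"), DatumWriter(), schema)
        writer.append({"name": "Alyssa", "favorite_number": 256})
        writer.append({"name": "Ben", "favorite_number": 7})
        writer.close()

        # Reader side: no schema is supplied by the caller; the writer's schema
        # is read back out of the file itself.
        reader = DataFileReader(open("users.avro", "rb"), DatumReader())
        for user in reader:
            print(user)
        reader.close()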

    When Avro is used in RPC, the client and server exchange schemas in the connection handshake. (This can be optimized so that, for most calls, no schemas are actually transmitted.) Since client and server both have the other's full schema, correspondence between same-named fields, missing fields, extra fields, etc. can all be easily resolved.

    Avro schemas are defined with JSON. This facilitates implementation in languages that already have JSON libraries.
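
    For instance, the hypothetical user.avsc used in the sketch above is an ordinary JSON document describing a record. Assuming the same avro package, it can just as well be defined inline and parsed:

        import avro.schema

        # A possible user.avsc: an Avro record schema is plain JSON.
        USER_SCHEMA = """
        {
          "namespace": "example.avro",
          "type": "record",
          "name": "User",
          "fields": [
            {"name": "name",            "type": "string"},
            {"name": "favorite_number", "type": ["int", "null"]}
          ]
        }
        """

        schema = avro.schema.parse(USER_SCHEMA)
        print(schema)  # prints the schema back as canonical JSON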

    Comparison with other systems

    Avro provides functionality similar to systems such as Thrift, Protocol Buffers, etc. Avro differs from these systems in the following fundamental aspects.

    • Dynamic typing: Avro does not require that code be generated. Data is always accompanied by a schema that permits full processing of that data without code generation, static datatypes, etc. This facilitates construction of generic data-processing systems and languages.
    • Untagged data: Since the schema is present when data is read, considerably less type information need be encoded with data, resulting in smaller serialization size.
    • No manually-assigned field IDs: When a schema changes, both the old and new schema are always present when processing data, so differences may be resolved symbolically, using field names (a short sketch follows this list).
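
    A sketch of such symbolic resolution, continuing the container-file example above and again assuming the official avro Python package (the keyword naming the reader's schema has varied across releases, e.g. readers_schema vs reader_schema, so treat the exact spelling as an assumption): a reader whose schema adds a defaulted field can still consume data written with the older schema.

        import avro.schema
        from avro.datafile import DataFileReader
        from avro.io import DatumReader

        # Reader's schema: the same User record, plus a new field with a default.
        READER_SCHEMA = avro.schema.parse("""
        {
          "namespace": "example.avro",
          "type": "record",
          "name": "User",
          "fields": [
            {"name": "name",            "type": "string"},
            {"name": "favorite_number", "type": ["int", "null"]},
            {"name": "favorite_color",  "type": "string", "default": "unknown"}
          ]
        }
        """)

        # The writer's schema comes from the file; fields are matched by name,
        # and the missing favorite_color is filled in from the reader's default.
        reader = DataFileReader(open("users.avro", "rb"),
                                DatumReader(readers_schema=READER_SCHEMA))
        for user in reader:
            print(user)
        reader.close()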

Original post: https://www.cnblogs.com/rsapaper/p/7764487.html