  • Pixel VS Point, FrameBuffer VS RenderBuffer

    // How iOS app MVC works

    View, Window

    AppDelegate

    ViewController, RootViewController
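
    These pieces connect at launch roughly like this. A minimal sketch (MyViewController is a hypothetical view controller; everything else is standard UIKit):

    // AppDelegate.m -- a minimal sketch of the launch wiring.
    #import "AppDelegate.h"
    #import "MyViewController.h"   // hypothetical view controller

    @implementation AppDelegate

    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        // The window is the top-level container for every view on screen.
        self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
        // The root view controller owns the view hierarchy the window displays.
        self.window.rootViewController = [[MyViewController alloc] init];
        [self.window makeKeyAndVisible];
        return YES;
    }

    @end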

    // On Pixel VS Point

    The 'point' (pt), on the other hand, is a unit of length, commonly used to measure the height of a font, but technically capable of measuring any length. In applications, 1 pt is equal to exactly 1/72nd of an inch; in traditional print, 72 pt is technically 0.996264 inches, although I think you'll be forgiven for rounding it up!

    How many pixels equal 1 pt depends on the resolution of your image. If your image is 72 ppi (pixels per inch), then one point will equal exactly one pixel.
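
    As a formula, that conversion is just a ratio against 72 ppi. A purely illustrative helper, not a system API:

    // Illustrative helper -- convert points to pixels for an image
    // at a given resolution.
    static double PointsToPixels(double points, double ppi) {
        return points * (ppi / 72.0);  // 72 ppi: 1 pt == 1 px; 144 ppi: 1 pt == 2 px
    }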

    Framebuffer, Renderbuffer

    iOS works in points, not pixels. This makes it easier to handle sizing and positioning across displays with different scale factors.

    E.g.: an iPhone 3GS at 1x scale has a width of 320 points (which happens to coincide with the 320 pixels the display physically has). Then the iPhone 4 came along with the Retina display (at 2x scale), where the width is still 320 points but works out to 640 physical pixels. The screen renders the UI at twice the size of the 3GS but fits it into the same physical space. Because of the increased pixel density, this improves the quality of the display.
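
    In UIKit, this point-to-pixel ratio is exposed as the screen's scale factor, so you can check it at runtime:

    CGFloat scale = [UIScreen mainScreen].scale;                      // 1.0 on a 3GS, 2.0 on an iPhone 4
    CGFloat widthInPoints = [UIScreen mainScreen].bounds.size.width;  // 320 on both devices
    CGFloat widthInPixels = widthInPoints * scale;                    // 320 vs 640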

    The Frame Buffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which in turn are the actual buffers. You can think of the Frame Buffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Frame Buffer object has a very low memory footprint.
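
    Purely as an illustration of that analogy (not the actual driver data structure):

    // Purely illustrative -- not the actual driver data structure.
    typedef struct {
        void *color_attachment;    /* a Render Buffer or a texture */
        void *depth_attachment;
        void *stencil_attachment;
    } Framebuffer;                 /* cheap until attachments are allocated */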

    Now each buffer attached to a Frame Buffer can be a Render Buffer or a texture.

    The Render Buffer is an actual buffer (an array of bytes, integers, or pixels). The Render Buffer stores pixel values in native format, so it's optimized for offscreen rendering. In other words, drawing to a Render Buffer can be much faster than drawing to a texture. The drawback is that the pixels use a native, implementation-dependent format, so reading from a Render Buffer is much harder than reading from a texture. Nevertheless, once a Render Buffer has been painted, its contents can be copied directly to the screen (or to another Render Buffer) very quickly using pixel transfer operations. This means that a Render Buffer can be used to efficiently implement the double-buffer pattern.
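
    A typical OpenGL ES 2.0 setup looks like this. A sketch only: the 640x960 size and the GL_RGBA8_OES format (an extension available on iOS) are assumptions:

    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    GLuint framebuffer, colorRenderbuffer;

    // Create the Frame Buffer object -- the aggregator, no storage yet.
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // Create a Render Buffer and allocate pixel storage in a native format.
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, 640, 960);  // size/format assumed

    // Attach the Render Buffer as the framebuffer's color buffer.
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle incomplete framebuffer
    }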

    Render Buffers are a relatively new concept. Before them, a Frame Buffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw one scene on a surface within another scene!
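
    Render-to-texture with the same Frame Buffer just swaps the attachment. Again a sketch; the size is assumed, and CLAMP_TO_EDGE wrapping is required for non-power-of-two textures in ES 2.0:

    // Attach a texture instead of a Render Buffer, then sample it in a later pass.
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 960, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // no initial data
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);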

    The OpenGL wiki's Framebuffer Object page has more details and links.

    Apple / Xcode / Objective-C:

    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];

    https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
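
    In context, that call replaces glRenderbufferStorage: Core Animation allocates the storage so the Render Buffer's contents can be presented on screen (the _OES suffix above is the ES 1.1 spelling; under ES 2.0 it's plain GL_RENDERBUFFER). A sketch, assuming a framebuffer is already bound and `view.layer` is a CAEAGLLayer:

    #import <QuartzCore/QuartzCore.h>

    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:context];

    GLuint colorRenderbuffer;
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);

    // Let Core Animation allocate the storage instead of glRenderbufferStorage,
    // so the Render Buffer can be shown on screen.
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)view.layer;   // 'view' assumed to exist
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    // ... draw the frame ...
    [context presentRenderbuffer:GL_RENDERBUFFER];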

  • Original post: https://www.cnblogs.com/antai/p/4899712.html