  • Pixel VS Point, FrameBuffer VS RenderBuffer

    // How iOS app MVC works

    View, Window

    AppDelegate

    ViewController, RootViewController
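    A minimal sketch (not the only way to structure it) of how these pieces connect: the AppDelegate owns the Window, and the Window displays its root view controller's View.

    // AppDelegate.m -- minimal sketch of how Window, ViewController and
    // RootViewController relate.
    #import <UIKit/UIKit.h>

    @interface AppDelegate : UIResponder <UIApplicationDelegate>
    @property (strong, nonatomic) UIWindow *window;
    @end

    @implementation AppDelegate
    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        // The window is the top-level container; the root view controller's
        // view becomes the window's content.
        self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
        self.window.rootViewController = [[UIViewController alloc] init];
        [self.window makeKeyAndVisible];
        return YES;
    }
    @end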

    // On Pixel VS Point

    The 'point' (pt), on the other hand, is a unit of length, commonly used to measure the height of a font, but technically capable of measuring any length. In applications, 1pt is equal to exactly 1/72nd of an inch; in traditional print, 72pt is technically 0.996264 inches, although I think you'll be forgiven for rounding it up!

    How many pixels = 1pt depends on the resolution of your image. If your image is 72ppi (pixels per inch), then one point will equal exactly one pixel.
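    As a quick sketch of that arithmetic (plain C, no iOS APIs involved): pixels = points × ppi / 72.

    #import <Foundation/Foundation.h>

    // pixels = points * (pixels per inch) / (72 points per inch)
    static double pointsToPixels(double points, double ppi) {
        return points * ppi / 72.0;
    }

    int main(void) {
        NSLog(@"%.1f", pointsToPixels(1.0, 72.0));   // 1.0 -> at 72 ppi, 1 pt is 1 px
        NSLog(@"%.1f", pointsToPixels(1.0, 144.0));  // 2.0 -> at 144 ppi, 1 pt is 2 px
        return 0;
    }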

    Framebuffer, Renderbuffer

    iOS works in points - not pixels. This makes it easier to handle sizing and positioning across displays with different scale factors.

    E.g., an iPhone 3GS at 1x scale has a width of 320 points (which happens to coincide with the 320 pixels the display physically has); then the iPhone 4 came along with the retina display (at 2x scale), where the width is still 320 points but works out to 640 physical pixels. The screen renders the UI at twice the size of the 3GS, but fits it into the same physical space. Because of the increased pixel density, this improves the quality of the display.
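    A minimal sketch of how that mapping looks at run time, using UIScreen's scale factor (the comments assume the two devices described above):

    #import <UIKit/UIKit.h>

    // Logs the main screen's width in points and in physical pixels.
    // UIKit geometry is expressed in points; pixels = points * scale.
    static void LogScreenWidth(void) {
        CGFloat scale = [UIScreen mainScreen].scale;                           // 1.0 on a 3GS, 2.0 on an iPhone 4
        CGFloat widthInPoints = CGRectGetWidth([UIScreen mainScreen].bounds);  // 320 pt on both devices
        CGFloat widthInPixels = widthInPoints * scale;                         // 320 px at 1x, 640 px at 2x
        NSLog(@"%.0f pt -> %.0f px at %.0fx scale", widthInPoints, widthInPixels, scale);
    }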

    The Frame Buffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which in turn are the actual buffers. You can think of the Frame Buffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Frame Buffer object has a very low memory footprint.

    Now each buffer attached to a Frame Buffer can be a Render Buffer or a texture.

    The Render Buffer is an actual buffer (an array of bytes, or integers, or pixels). The Render Buffer stores pixel values in native format, so it's optimized for offscreen rendering. In other words, drawing to a Render Buffer can be much faster than drawing to a texture. The drawback is that pixels use a native, implementation-dependent format, so reading from a Render Buffer is much harder than reading from a texture. Nevertheless, once a Render Buffer has been painted, one can copy its contents directly to the screen (or to another Render Buffer) very quickly using pixel transfer operations. This means that a Render Buffer can be used to efficiently implement the double-buffering pattern.
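    A minimal OpenGL ES 2.0 sketch of that relationship, assuming an EAGLContext is already current: the Frame Buffer is only the container, and the Render Buffer attached to it is the actual pixel storage (the 320x480 size here is arbitrary).

    #import <Foundation/Foundation.h>
    #import <OpenGLES/ES2/gl.h>

    // Create a framebuffer object (the "aggregator") and attach one renderbuffer
    // (the actual pixel storage) as its color attachment.
    static GLuint CreateOffscreenFramebuffer(void) {
        GLuint framebuffer = 0, colorRenderbuffer = 0;

        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        glGenRenderbuffers(1, &colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, 320, 480);  // allocate the pixel storage

        // The framebuffer itself holds no pixels; it just points at its attachments.
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRenderbuffer);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            NSLog(@"Framebuffer is incomplete");
        }
        return framebuffer;
    }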

    Render Buffers are a relatively new concept. Before them, a Frame Buffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw a scene onto a surface of another scene!
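    For comparison, a sketch of the render-to-texture path (again assuming a current EAGLContext): a texture, rather than a Render Buffer, is attached to the framebuffer, so whatever gets drawn can later be sampled in another pass.

    #import <OpenGLES/ES2/gl.h>

    // Attach an empty texture as the framebuffer's color attachment so that
    // subsequent drawing ends up in the texture, ready to be sampled later.
    static GLuint CreateRenderToTextureTarget(GLsizei width, GLsizei height) {
        GLuint framebuffer = 0, texture = 0;

        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // NULL data: allocate storage only; the GPU fills it when we draw.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, texture, 0);
        return framebuffer;
    }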

    The OpenGL wiki's page on Framebuffer Objects has more details and links.

    Apple / Xcode / Objective-C:

    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];

    https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
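    Roughly, that call takes the place of glRenderbufferStorage: the CAEAGLLayer supplies the color renderbuffer's storage, so presenting the renderbuffer puts the pixels on screen. A hedged ES 2.0 sketch, assuming context is the current EAGLContext and eaglLayer is a view's CAEAGLLayer (the _OES suffix in the line above is the ES 1.x name; ES 2.0 drops it):

    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/EAGL.h>
    #import <QuartzCore/QuartzCore.h>

    // Back the color renderbuffer with the CAEAGLLayer instead of calling
    // glRenderbufferStorage, then present it to put the pixels on screen.
    static void SetUpOnscreenRenderbuffer(EAGLContext *context, CAEAGLLayer *eaglLayer) {
        GLuint framebuffer = 0, colorRenderbuffer = 0;

        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        glGenRenderbuffers(1, &colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];

        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRenderbuffer);

        // ... draw the frame here ...

        // Hand the finished renderbuffer to Core Animation for display.
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER];
    }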

  • Original article: https://www.cnblogs.com/antai/p/4899712.html