  • Pixel VS Point, FrameBuffer VS RenderBuffer

    // How iOS app MVC works

    View, Window

    AppDelegate

    ViewController, RootViewController
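    A minimal sketch of how these pieces connect at launch (MyRootViewController is a hypothetical controller; the window property is assumed to be declared in AppDelegate.h, as in the standard template):

    // AppDelegate.m: UIApplication calls this delegate after launch; the delegate
    // creates the Window and installs a root ViewController, whose View fills the window.
    #import "AppDelegate.h"
    #import "MyRootViewController.h"   // hypothetical view controller

    @implementation AppDelegate

    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
        self.window.rootViewController = [[MyRootViewController alloc] init];
        [self.window makeKeyAndVisible];
        return YES;
    }

    @end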

    // On Pixel VS Point

    A pixel is the smallest addressable dot in a raster image or on a display, and it has no fixed physical size. The 'point' (pt), on the other hand, is a unit of length, commonly used to measure the height of a font, but technically capable of measuring any length. In digital applications, 1 pt is exactly 1/72 of an inch; in traditional print, 72 pt is technically 0.996264 inches, although I think you'll be forgiven for rounding it up!

    How many pixels make up 1 pt depends on the resolution of your image. If your image is 72 ppi (pixels per inch), then one point equals exactly one pixel.
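    As a worked example, the conversion is simply pixels = points * (ppi / 72). A small sketch (pixelsFromPoints is a hypothetical helper, not a system API):

    // Hypothetical helper: 1 pt = 1/72 inch, so pixels = points * (ppi / 72).
    static double pixelsFromPoints(double points, double ppi) {
        return points * (ppi / 72.0);
    }

    // pixelsFromPoints(12, 72)  -> 12 px  (at 72 ppi, 1 pt == 1 px)
    // pixelsFromPoints(12, 300) -> 50 px  (the same length needs more pixels at print resolution)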

    Framebuffer, Renderbuffer

    iOS works in points, not pixels. This makes it easier to handle sizing and positioning across displays with different scale factors.

    E.g. an iPhone 3GS at 1x scale has a width of 320 points (which happens to coincide with the 320 pixels the display physically has). Then the iPhone 4 came along with the Retina display (at 2x scale): its width is still 320 points, but that works out to 640 physical pixels. The screen renders the UI at twice the pixel dimensions of the 3GS, yet fits it into the same physical space. Because of the increased pixel density, this increases the quality of the display.
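    You can see this at runtime. A small fragment (assumes UIKit is imported; the commented numbers assume a 3GS-era versus iPhone 4-era screen):

    // Bounds are reported in points; scale maps points to physical pixels.
    UIScreen *screen = [UIScreen mainScreen];
    CGRect bounds = screen.bounds;                    // 320 x 480 points on both the 3GS and the iPhone 4
    CGFloat scale = screen.scale;                     // 1.0 on the 3GS, 2.0 on the iPhone 4 (Retina)
    CGFloat pixelWidth = bounds.size.width * scale;   // 320 px at 1x, 640 px at 2x
    NSLog(@"%.0f pt wide -> %.0f px wide (scale %.1f)", bounds.size.width, pixelWidth, scale);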

    The Frame Buffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which, in turn, are the actual buffers. You can think of the Frame Buffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Frame Buffer object has a very low memory footprint.

    Now each buffer attached to a Frame Buffer can be a Render Buffer or a texture.

    The Render Buffer is an actual buffer (an array of bytes, integers, or pixels). The Render Buffer stores pixel values in native format, so it's optimized for offscreen rendering. In other words, drawing to a Render Buffer can be much faster than drawing to a texture. The drawback is that pixels use a native, implementation-dependent format, so reading from a Render Buffer is much harder than reading from a texture. Nevertheless, once a Render Buffer has been painted, its contents can be copied directly to the screen (or to another Render Buffer) very quickly using pixel transfer operations. This means that a Render Buffer can be used to efficiently implement the double-buffering pattern.
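    Putting the two ideas together, a minimal sketch using OpenGL ES 2.0 function names (the 640 x 960 size is just a placeholder; GL_RGBA8_OES comes from an extension available on iOS):

    GLuint framebuffer, colorRenderbuffer;

    // The framebuffer object is only a container; it has no storage of its own.
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // The renderbuffer is the actual pixel storage, allocated in a native format.
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, 640, 960);

    // Attach the renderbuffer as the framebuffer's color attachment.
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Framebuffer is incomplete");
    }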

    Render Buffers are a relatively new concept. Before them, a Frame Buffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw a scene onto a surface of another scene!
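    For comparison, rendering to a texture just means attaching a texture instead of a renderbuffer, so later passes can sample what was drawn. A sketch (OpenGL ES 2.0 names; 512 x 512 is a placeholder size):

    GLuint fbo, sceneTexture;

    // Allocate an empty texture (NULL pixels) to receive the rendered image.
    glGenTextures(1, &sceneTexture);
    glBindTexture(GL_TEXTURE_2D, sceneTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Attach the texture as the framebuffer's color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sceneTexture, 0);

    // Draw the first pass here, then bind sceneTexture in a later pass to reuse the result.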

    The OpenGL wiki has a page on Framebuffer Objects with more details and links.

    Apple/Xcode/Objective-C:

    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];

    https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
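    For context, a sketch of where that call fits, following Apple's documented pattern (this uses the ES 2.0 constant GL_RENDERBUFFER; GL_RENDERBUFFER_OES above is the ES 1.1 spelling). The CAEAGLLayer backs the renderbuffer's storage, so whatever is drawn into it can be presented on screen:

    // Assumes a UIView subclass whose +layerClass returns [CAEAGLLayer class].
    CAEAGLLayer *layer = (CAEAGLLayer *)self.layer;
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:context];

    GLuint colorRenderbuffer;
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);

    // Allocate the renderbuffer's storage from the layer instead of calling glRenderbufferStorage.
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];

    // ... attach colorRenderbuffer to a framebuffer and draw the frame ...

    // Hand the finished renderbuffer to Core Animation for display.
    [context presentRenderbuffer:GL_RENDERBUFFER];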

  • Original article: https://www.cnblogs.com/antai/p/4899712.html