Indirect Illumination in mental ray

    Direct Illumination

    To understand indirect illumination, first we must clarify direct illumination. A shader that uses direct illumination calculates the light coming directly from lights. We can picture this as a straight line from the light to the surface we are modeling in the shader.

    Indirect illumination is the light that comes from everywhere else, mostly other objects in the scene, but also from environments modeled by environment shaders. We can picture this as a straight line from the other object to the surface we are modeling in the shader.

    At any given point, a shader may use both the direct and indirect illumination to calculate the color we see at that point. We show this with a line from each source of light above the surface we are modeling in the shader. We are looking at two points, each causing a surface material shader to be run at the top of each sphere.

    Think of the above part of the model as the input, or the incoming illumination calculations. So now, how do we derive the output, the color we see at each point? We must determine how the incoming illumination interacts with the surface to produce that color. With surface appearance modeling, we usually separate the surface behavior, the way it interacts with light, into three categories of reflection or transmission -- diffuse (D), glossy (G), and specular (S).

    Using this approach, a surface can be modeled as a combination of these categories. Typically, we split the incoming illumination into a portion of each type of interaction. We control this contribution with the input parameters of our shaders in one monolithic shader, or a shader network designed to combine or layer these effects.
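    The split described above can be sketched in code. This is a minimal illustration, not a mental ray API; the function name and the weight parameters (kd, kg, ks) are hypothetical shader inputs of the kind a monolithic shader or shader network would expose.

```python
def combine_dgs(diffuse, glossy, specular, kd=0.6, kg=0.3, ks=0.1):
    """Blend the three reflection categories into one output color.

    diffuse, glossy, specular: RGB tuples for each category's contribution.
    kd, kg, ks: shader parameters controlling each category's share.
    """
    return tuple(kd * d + kg * g + ks * s
                 for d, g, s in zip(diffuse, glossy, specular))
```

    For example, a purely diffuse white response with these default weights yields a gray, since only the kd portion contributes.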

    However, note that terminology varies from application to application and shader to shader. And we do not typically see the glossy interaction separated from the specular (mirror) interaction in the direct part of the model. For example, in the typical Phong or Blinn shader, the specular controls match our glossy category, as the width of the highlight controls the same behavior as the width of the glossy cone. Also, we often see the term blurry used to describe indirect glossy interactions.

    Indirect Illumination

    When talking about indirect illumination in computer graphics, we usually do not include the specular and glossy part of the surface model. We mean the indirect diffuse illumination, the light from other objects that affects the diffuse component of our surface model. This is because the indirect specular contribution is handled by tracing specular (mirror) reflection or refraction rays. And the glossy contribution is handled by tracing multiple reflection or refraction rays in a cone around the mirror reflection, or refracted, direction.

    I find it much easier to present various techniques within the framework of these different categories of modeling the surface characteristics. The light hits a surface from a direct or an indirect source. And the surface responds with reflection or refraction (transmission) of that light. And this interaction can be further separated into a combination of diffuse, glossy, and specular behavior.

    Once we break it down into these categories, we put various shader implementation techniques into them. The traditional Lambert shader calculates the diffuse contribution of direct light coming into an object and multiplies it by the color of the object to derive what we see as a result.

    The traditional Phong or Blinn shader will do the same calculation as the Lambert, but will add a direct specular/glossy contribution of light. In these models, it is typical that the object color considered for specular/glossy reflection is different from the diffuse color, so a specular color is multiplied by that specular/glossy contribution before it is added to the diffuse contribution, which was multiplied by the diffuse object color.

    Now, for indirect illumination contribution for specular and glossy categories of the surface model, a shader typically traces either reflected or refracted rays. The model may then multiply this by either the specular color, or by a separate reflection color depending on the shader implementation. (Note that I have not seen any CG literature which categorizes specular and glossy ray tracing reflection into a separate category of indirect illumination, because most literature really means indirect diffuse illumination when using the term. Because of the confusion I have seen from teaching about shaders, shader networks, and indirect illumination surface modeling, I like to make this distinction to help clarify a complete picture of our understanding.) If ray tracing is not available, this can also be modeled with a reflection map.

    Finally, we wish to calculate the indirect illumination for the diffuse category of the surface model. For this we wish to gather all the light coming from the various objects or environment in the scene. For any given point of interest, we need to check all the light coming in from the objects in the hemisphere above that point.

    For this we have several implementation techniques including final gathering, global illumination, caustics, and simple ambient light estimation. The latter is historically the first well used technique, but for reasons of conceptual clarity, I will discuss it after the others.

    Maybe the most obvious technique to envision is final gathering. In final gathering, we explicitly shoot rays above a point of interest. We call that point of interest the final gather point. When any of these rays hits an object, we run the object's material shader. When any of these rays misses a scene object, we run the environment shader. We average all of these rays to calculate the average radiance. (Actually the rays are distributed in a cosine-weighted manner above the point to give us a correct Lambertian/diffuse distribution. This means more rays are sent in the direction of the normal.)

    With final gathering, we don't calculate the average radiance at each point hit by the eye rays that are used to construct a scene image. In other words, we don't shoot the hemisphere of rays every time we run the material shader. Instead, we first calculate final gather points across the scene, and then use the nearest ones to interpolate a value when we run the material shader. The fg points are already there when we sample the viewing plane. (Note that we sample the viewing plane in what we call the rendering phase of the render.) One way of thinking about this is that our precalculation is caching indirect diffuse illumination results. Finally, this shader multiplies the average radiance by the diffuse color to add to the rest of our categories -- direct d, g and s, and indirect g and s.

    The next technique to do the same thing is global illumination. With global illumination, we first spread light energy around the scene, storing it on diffuse surfaces. We start by shooting photons from the lights in the scene. Each photon carries a unit of energy, so for example, a 100 watt light bulb shooting 100 photons would put 1 watt into each photon. Now the first diffuse surface that the photon hits will not store the photon, because that would double-count the direct diffuse illumination already covered by most shaders. So the photon either reflects or refracts around the scene, and stores energy on any diffuse surface after that first diffuse hit. Because of this light bouncing simulation, light has reached diffuse surfaces from all directions in a natural physical distribution pattern.

    So now at render time our material shader will look for nearby photons to calculate the average radiance due to indirect diffuse illumination. Then, that will be multiplied by the diffuse color.

    If we use final gathering together with global illumination (photon tracing), then at render time, we do not use the photons, but instead use the final gather calculation for average radiance. However, at final gather point creation time, when a final gather ray hits an object and runs a material shader, that material shader will use its nearby photons to calculate average radiance.

    With caustics, like global illumination, we also shoot photons from lights. However, a caustic photon is one that interacts with a specular surface before it hits a diffuse surface. So in the photon tracing phase, we identify which photons that hit a diffuse surface came from a specular surface rather than a glossy or diffuse one. This allows us to optimize calculations for the caustic effects.

    Ambient Occlusion

    Now, what about ambient occlusion?

    First, in our traditional CG surface models, such as Phong or Blinn, we allowed for an extra ambient term in our shaders. This was just a color value added in to our light calculation. In many of the traditionally designed shaders, there were two inputs for this, one for the color, and the other for the amount of ambient light. These two are multiplied before adding into the final result. Often the ambient color was specified to be the same as the diffuse color, and then that ambient amount, called ambience in the mib_illum_* shaders, represented a rough estimate of the indirect illumination.

    So the ambient inputs to the traditional shaders were the way to add extra color into a surface model, which accounted for indirect illumination. However, since this represented a constant pedestal of indirect light, it did not account well for the self-shadowing effect geometric detail had on ambient light, or the darkening caused by other possibly occluding objects. In other words, in areas of low exposure, one would like a way to cut down on this basic pedestal of illuminated color. Enter ambient occlusion. We can accomplish this by estimating the exposure of the surface, and cutting down this pedestal, or multiplying the pedestal times an occlusion factor. Simple ambient occlusion returns a factor between 0 and 1 representing how much a given point on the surface is exposed. A value of 1 represents a point fully exposed to the ambient light. A value of 0 represents a point fully occluded from the ambient light.

    So, with ambient occlusion, we give a modern value to our old model for incorporating indirect illumination which adds an ambient term into a shader. Typically, we would not use ambient occlusion techniques at the same time as final gathering or global illumination. But in the ambient occlusion tips, we explain a combination technique that may be useful. That technique is incorporated into the structure of the architectural shaders.

    Lighting in the indirect age

    In traditional CG, another one of the techniques to substitute for indirect lighting was to use more lights: add fill lights that brighten the shadowy areas, thereby faking indirect illumination with direct lights.

    Also, white bounce cards in studio lighting can now be modeled much closer to the actual situation with final gathering. In fact, the light hitting the card does not even need to be modeled; instead, the card itself can simply return a color representing the light bouncing off of it. We could call that a virtual light.

Original source: https://www.cnblogs.com/len3d/p/730297.html