In games it is currently very common to model human hair using hair cards or shells, where multiple hair fibers are clumped together as 2D surface strips. These cards are generated through a combination of tooling and manual authoring by artists. This work can be very time consuming and difficult to get right.
If done well these can look good, but they do not work equally well for all hairstyles and therefore limit the possible visual variety. And since they are a very coarse approximation of hair, there are also limits on how realistic any physics simulation or rendering can be.
It is also common to use very simplified shading models which do not capture the full complexity of lighting in human hair; this is especially noticeable when shading lightly colored hair.
Strand-based rendering, where hair fibers are modelled as individual strands, or curves, is the current state of the art when rendering hair offline. ...
It also requires less authoring time than hair cards since you no longer need to create them. And since the process of creating hair cards usually also involves creating hair strands to project onto the cards, you essentially save time on that whole step. ...
With hair strands it is also possible to do more granular culling for both physics and simulation, and it is easier to generate automatic LODs via decimation.
The goal of this project is to get as close as possible to movie-quality hair, using hair strands, while still achieving real-time frame rates.
...The final problem relates to the rendering and how to do it in a way that is performant and does not introduce a lot of aliasing, given the thin and numerous nature of hair strands.
For single scattering we use a BSDF (bidirectional scattering distribution function) based on the original hair model created by Marschner et al. in 2003. The model is a far-field model, which means that it is meant to capture the visual properties of hair seen from a distance, not single-fiber close-ups.
This model was later improved for path tracing by Disney and Weta Digital, and approximated for real-time use by Karis; we incorporate parts of that work as well.
It contains parameters such as surface roughness, absorption and cuticle tilt angle.
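As a rough illustration, these inputs could be grouped as follows; the struct and field names are assumptions for the sketch, not Frostbite's actual interface:

```cpp
// Illustrative sketch of the per-material inputs of the hair BSDF
// described above. All names are assumed for illustration.
struct HairBsdfParams {
    float longitudinalRoughness; // width of the longitudinal (M) lobes
    float azimuthalRoughness;    // width of the azimuthal (D) lobes
    float absorption[3];         // per-RGB absorption, drives the hair color
    float cuticleTiltDegrees;    // tilt of the cuticle scales, shifts the lobes
};
```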
Different types of light paths are evaluated separately and added together.
These are R, the reflective path; TT, transmission through the fiber; and TRT, transmission with a single internal reflection. These paths are also enumerated as p0, p1 and p2.
For the longitudinal scattering M, each path type is modelled using a single Gaussian lobe, with parameters depending on the longitudinal roughness and the cuticle tilt angle.
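In the notation commonly used for Marschner-style models, this can be sketched as follows (the symbols are the usual convention, not taken from the slides):

```latex
M_p(\theta_i, \theta_r) = g\!\left(\beta_p;\; \theta_h - \alpha_p\right),
\qquad \theta_h = \frac{\theta_i + \theta_r}{2}
```

where g is a normalized Gaussian, the width \beta_p is derived from the longitudinal roughness, and the shift \alpha_p from the cuticle tilt angle.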
The azimuthal scattering is split up into an attenuation factor A, which accounts for Fresnel reflection and absorption in the fiber, and a distribution D, which is the lobe modelling how the light scatters when it is reflected off or exits the fiber.
To properly simulate surface roughness for transmitted paths, this product is then integrated over the width of the hair strand to get the total contribution in a specific outgoing direction.
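As a sketch, using the formulation common to these far-field models (the 1/2 factor and symbols follow the usual convention and are not from the slides):

```latex
N_p(\phi) = \frac{1}{2} \int_{-1}^{1} A_p(h)\, D_p\!\left(\phi - \Phi_p(h)\right)\, dh
```

where h is the normalized offset across the fiber width, A_p the attenuation, D_p the lobe, and \Phi_p(h) the exit azimuth of path p.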
Here is a plot showing this approximation for three different absorption values with the reference integral drawn as crosses and the approximation drawn as a solid line.
And here is a plot showing how the approximation Karis used stacks up; one can see that it has some problems at grazing angles, especially with more translucent, brighter hair.
For the distribution we use a LUT. The distribution depends on the roughness, the azimuthal outgoing angle and the longitudinal outgoing angle, which means that the LUT becomes three-dimensional.
So the parameters a and b of the Gaussian are fitted to the integral, and we then store them in a two-channel 2D texture.
*This slide summarizes the preceding approximations of the distribution and attenuation: the distribution uses a LUT, and the attenuation assumes h equals 0.
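A minimal sketch of how such a fitted lobe could be evaluated at shading time; the Gaussian parameterization and the texture-fetch helper are assumptions for illustration:

```cpp
#include <cmath>

struct Float2 { float a, b; };

// Stand-in for a bilinear fetch from the baked two-channel 2D texture,
// indexed by roughness and the longitudinal outgoing angle (assumed).
Float2 SampleFittedLut(float roughness, float thetaO)
{
    (void)roughness; (void)thetaO;
    return {1.0f, 10.0f}; // placeholder values only
}

// Evaluate the fitted azimuthal Gaussian lobe at angle phi.
float AzimuthalDistribution(float phi, float roughness, float thetaO)
{
    Float2 g = SampleFittedLut(roughness, thetaO);
    return g.a * std::exp(-g.b * phi * phi);
}
```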
And now for the final TRT path. For the distribution we improved upon Karis' approximation by adding a scale factor s_r, which you can see highlighted in the equation here. This scale factor was manually tuned to approximate the effect of surface roughness, like this.
This approximation is, however, still quite coarse and may need some more work to improve the visual quality in some cases.
The attenuation term we approximate in the same way as we did for the transmissive path, but here we instead use an h value of √3/2, which is the same constant used in Karis' approximation.
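Written out in the usual single-scattering notation, this amounts to evaluating the standard TRT attenuation at a fixed offset; a sketch, with f the Fresnel reflectance at the fiber surface and T the transmittance of one internal pass:

```latex
A_{TRT} \approx (1 - f)^2\, f\, T^2 \,\Big|_{\,h = \sqrt{3}/2}
```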
In contrast with single scattering, which aims to capture how light behaves in a single fiber, multiple scattering tries to model the effect of light travelling through many fibers.
This means that we need to evaluate the multiple paths that light travels between a light source and the camera. This is of course not feasible in real-time rendering, so we need to approximate this effect as well.
In our implementation we use an approximation called Dual Scattering. The point of dual scattering is to approximate multiple scattering as a combination of two components.
Local scattering accounts for scattering in the neighborhood of the shading point and contributes a lot of the visible hair coloring.
Global scattering is meant to capture the effect of outside light travelling through the hair volume.
The reason the dual scattering approximation works well for hair is that most light is only scattered in a forward direction; basically, we get more contribution from TT than from TRT.
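For reference, the original dual scattering paper (Zinke et al. 2008) combines the two components roughly as follows, with d_b a density factor; this is the paper's formulation, not necessarily the exact one used here:

```latex
\Psi(x, \omega_d, \omega_i) \approx
\Psi^{G}(x, \omega_d, \omega_i)\,\left(1 + d_b\, \Psi^{L}(x, \omega_d, \omega_i)\right)
```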
Global scattering is estimated by only considering scattering along a shadow path, or light direction. Therefore we need some way of estimating the amount of hair between two points in the hair volume along the light direction.
We do this the same way the authors did in the dual scattering paper: we use Deep Opacity Maps. Deep opacity maps are similar to opacity shadow maps, a technique where shadow maps for a volumetric object are generated as a number of slices through the object.
As a lower-quality fallback one can also estimate the attenuation using a constant hair density and the Beer-Lambert law, but this will of course not adapt to actual changes in the hair volume.
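A minimal sketch of that fallback, assuming a constant density and extinction coefficient (parameter names are assumed):

```cpp
#include <cmath>

// Lower-quality fallback: attenuate the light by a constant hair density
// over the distance the shadow ray travels through the hair volume,
// using the Beer-Lambert law.
float HairVolumeTransmittance(float distanceThroughHair,
                              float density,    // assumed constant fiber density
                              float extinction) // per-unit extinction coefficient
{
    // Beer-Lambert: T = exp(-sigma_t * d), with sigma_t = density * extinction.
    return std::exp(-density * extinction * distanceThroughHair);
}
```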
The hair strands are tessellated and rendered as triangle strips, so we must take special care to properly handle aliasing. Since the strands are very thin, they will usually be narrower than a screen pixel.
We therefore need to take the pixel size into account when tessellating, and increase the width appropriately, or we risk getting missing or broken-up hair strands.
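In its simplest form this width adjustment is just a clamp against the projected pixel size; a sketch, where pixelWorldSize is an assumed input derived from the projection at the strand's depth:

```cpp
#include <algorithm>

// Widen a strand that would be narrower than a pixel so that
// rasterization does not drop it or break it up.
float AdjustStrandWidth(float strandWorldWidth, float pixelWorldSize)
{
    return std::max(strandWorldWidth, pixelWorldSize);
}
```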
Unfortunately, this has another less pleasant side effect: it can make the hair look too thick, more like spaghetti or straw. Another problem is the amount of overdraw, which will be massive and hurt performance a lot.
Just enabling MSAA does unfortunately not solve all problems. While it does improve the aliasing, by taking more samples per pixel, and therefore allows us to keep the thin hair appearance, it suffers an even bigger performance hit because there is even more overdraw.
With the visibility buffer we can do a relatively quick rasterization pass, with MSAA, for all hair strands. We can then use that information to do a screen-space shading pass to get the final antialiased render.
To reduce this over-shading we also run a sample deduplication pass on the visibility buffer, so that we only shade multiple samples within a pixel when they are considered different.
This greatly reduces the number of pixel-shader invocations, and it gave us roughly a 2x performance increase compared to just using the visibility buffer.
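A rough sketch of the deduplication idea; here two samples count as duplicates when they reference the same strand segment, which is an assumed criterion (depth or normal tests could equally be used):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One MSAA sample of the visibility buffer for a single pixel.
struct VisSample { uint32_t strandSegmentId; };

// Keep only samples that are considered different, so each remaining
// sample gets exactly one shading invocation.
std::vector<VisSample> DeduplicateSamples(std::vector<VisSample> samples)
{
    auto bySegment = [](const VisSample& a, const VisSample& b) {
        return a.strandSegmentId < b.strandSegmentId;
    };
    auto sameSegment = [](const VisSample& a, const VisSample& b) {
        return a.strandSegmentId == b.strandSegmentId;
    };
    std::sort(samples.begin(), samples.end(), bySegment);
    samples.erase(std::unique(samples.begin(), samples.end(), sameSegment),
                  samples.end());
    return samples;
}
```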
*The complete pipeline is simply the combination of the parts described above.
5 Performance Overview
At Frostbite we usually work very closely with the game teams to ease the introduction of new tech. And once they have it, they are usually very good at finding ways to get more with less.
In any case, here are some numbers showing what the performance is currently like on a regular PS4 at 900p resolution, without MSAA, with the hair covering a big chunk of the screen.
The main reason for the long render times is currently that our GPU utilization is very low, something we are investigating different ways to improve.
In comparison, some of the alternative hair simulation systems only simulate about 1% of all hair strands. Early experiments show that we can get a 5x performance boost by simulating only a tenth of all strands and interpolating the results.
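A rough sketch of the interpolation idea: simulate only a subset of "guide" strands and rebuild each remaining strand as a weighted blend of nearby guides. The three-guide scheme and the weights are assumptions for illustration:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Blend the corresponding vertices of three simulated guide strands to
// reconstruct one non-simulated strand (weights should sum to one).
std::vector<Vec3> InterpolateStrand(const std::vector<Vec3>& g0,
                                    const std::vector<Vec3>& g1,
                                    const std::vector<Vec3>& g2,
                                    float w0, float w1, float w2)
{
    std::vector<Vec3> out(g0.size());
    for (std::size_t i = 0; i < g0.size(); ++i) {
        out[i] = { w0 * g0[i].x + w1 * g1[i].x + w2 * g2[i].x,
                   w0 * g0[i].y + w1 * g1[i].y + w2 * g2[i].y,
                   w0 * g0[i].z + w1 * g1[i].z + w2 * g2[i].z };
    }
    return out;
}
```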