Changelog 9 - In Which A Tradition Is Upheld
TL;DR
Another changelog is out, showing off transparency support in the renderer, fixed particles, and in-world 3D UI.
This week…
- Transparency support in renderer
- Fixed Particles
- In-World 3D UI
Under The Hood
As I said in the video, deferred renderers fundamentally cannot support transparent objects. Why is this? Traditional (forward) renderers work by rendering all the models in the scene one by one and applying lighting to each of them individually, which means the cost of rendering the scene is:
Cost = Number_Of_Lights * Number_Of_Models
This is a problem! You really want to have lots of lights and lots of models in your scene, but this cost builds up far too fast to allow that. Deferred renderers are a clever technique that reduces the cost to considerably less:
Cost = Number_Of_Lights + Number_Of_Models
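To put numbers on that, here's a toy Python sketch - the light and model counts are made up, purely to show the scale of the difference:

```python
# Toy cost comparison, not a real renderer. The counts are made up,
# purely to show how quickly the multiplicative cost blows up.
LIGHTS, MODELS = 100, 1000

# Forward: every model is shaded once per light.
forward_ops = LIGHTS * MODELS

# Deferred: one geometry pass over the models, then one lighting
# pass over the G-Buffer per light.
deferred_ops = MODELS + LIGHTS

print(forward_ops)   # 100000
print(deferred_ops)  # 1100
```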
How do they do this? Basically, instead of rendering out every model with lighting and the full works, everything is rendered out as data. When you render a model you don’t output the colour of the pixel; instead you output information about the model at that pixel, things like:
- Surface Colour
- Material Specularity (shininess)
- Normal Vector
This means that you build up a screen-sized buffer of information about what data should be rendered for each pixel. After you’ve rendered everything you take these data buffers (called G-Buffers) and calculate lighting, which of course is only calculated once for the data in each pixel.
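As a rough illustration, here's what one pixel's worth of G-Buffer data and the once-per-pixel lighting step might look like, sketched in Python (the field layout and the stand-in shading maths are assumptions, not the real pipeline):

```python
from dataclasses import dataclass

# A hypothetical per-pixel G-Buffer record. Real pipelines pack this
# into a few screen-sized render targets, but the idea is the same.
@dataclass
class GBufferSample:
    colour: tuple       # surface colour (r, g, b)
    specularity: float  # material shininess
    normal: tuple       # surface normal vector (x, y, z)

def shade(sample, lights):
    # Lighting runs ONCE per pixel, on whatever data landed there.
    r, g, b = sample.colour
    # Trivial stand-in lighting: scale the colour by summed intensity.
    intensity = sum(lights)
    return (r * intensity, g * intensity, b * intensity)

# Geometry pass fills the buffer; the lighting pass reads it back.
gbuffer = [GBufferSample((1.0, 0.5, 0.2), 0.8, (0.0, 0.0, 1.0))]
image = [shade(sample, lights=[0.6, 0.3]) for sample in gbuffer]
print(image)  # roughly [(0.9, 0.45, 0.18)]
```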
The Transparency Problem
The implicit assumption here is that every pixel has exactly one piece of data to use for lighting calculations. That’s definitely not the case when you have a transparent thing in view - now you have 2 bits of data (the opaque thing and the transparent thing) on the same pixel!
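For contrast, ordinary blended transparency needs both bits of data in the same pixel at once (where Alpha is the opacity of the transparent surface):

Final_Colour = Alpha * Transparent_Colour + (1 - Alpha) * Opaque_Colour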
Hack 1 - Stochastic Transparency
One possibility is to render every pixel with either the opaque data or the transparent data, chosen on a probabilistic basis. If the transparent thing is 50% transparent then there’s a 50% chance the pixel will contain that data. Afterwards you render lighting normally and then blur everything together a bit to mix the contributions from opaque and transparent things.
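In sketch form the core idea looks like this (a toy Python stand-in, not real shader code):

```python
import random

def stochastic_pick(opaque_data, transparent_data, opacity):
    # Keep exactly ONE surface's data in this pixel's G-Buffer slot:
    # a 50% transparent surface (opacity 0.5) wins the pixel 50% of
    # the time. A later blur mixes neighbouring pixels back together.
    return transparent_data if random.random() < opacity else opaque_data

random.seed(1)
pixels = [stochastic_pick("opaque", "glass", opacity=0.5) for _ in range(10000)]
print(pixels.count("glass") / len(pixels))  # roughly 0.5
```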
Stochastic transparency has the advantage of fitting easily into the deferred rendering pipeline, but it ends up breaking just about everything else. Any effect which looks at a pixel and its surrounding neighbours will now be sampling randomly from 2 different items, and this will break most post-processing effects. For example, SSAO looks at pixel depths to work out how smooth your surface is and makes it darker around corners; when you’ve got a transparent item on top of another item it’s as if you’ve got a surface which keeps flickering between 2 depths and EVERYTHING is a corner!
Hack 2 - Deep G-Buffer
Another approach is to store data for multiple layers in a very big G-Buffer, then you can have a more complicated lighting system which evaluates multiple depths of pixels at once to work out what colour the pixel should be.
The problem is that the biggest drawback of deferred rendering is already the memory consumption of the G-Buffer. With deep G-Buffers you’ve now effectively got several G-Buffers, and you’ve probably just tripled (or worse) your memory usage!
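Some back-of-the-envelope numbers (the target count and formats here are assumptions; real layouts vary by engine):

```python
# Assumed layout: 4 RGBA16F render targets (8 bytes per pixel each)
# at 1920x1080. Real G-Buffer layouts vary, but the scaling is the point.
WIDTH, HEIGHT = 1920, 1080
TARGETS, BYTES_PER_PIXEL = 4, 8

one_layer = WIDTH * HEIGHT * TARGETS * BYTES_PER_PIXEL
print(one_layer / 2**20)      # ~63 MiB for a single-layer G-Buffer

# Keeping 3 depth layers triples that before you've lit a single pixel.
print(3 * one_layer / 2**20)  # ~190 MiB
```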
Hack 3 - Forward Rendering
The classic approach is to render the opaque bits of the scene using a deferred renderer and then to follow it up by rendering all the transparent bits on top using a non-deferred (forward) renderer. Again, you can get “perfect” results from this system.
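In sketch form it looks something like this (all the pass functions are toy stand-ins, not real API calls) - note the two separate code paths:

```python
def deferred_pass(opaque_models):
    # Stand-in for the geometry pass + lighting over the G-Buffer.
    return ["deferred:" + m["name"] for m in opaque_models]

def forward_draw(framebuffer, model):
    # Stand-in for forward-rendering one blended model on top.
    framebuffer.append("forward:" + model["name"])

def render_frame(scene, camera_depth):
    opaque = [m for m in scene if not m["transparent"]]
    transparent = [m for m in scene if m["transparent"]]

    framebuffer = deferred_pass(opaque)

    # Transparent models go back-to-front so blending composites correctly.
    transparent.sort(key=lambda m: -abs(m["depth"] - camera_depth))
    for model in transparent:
        forward_draw(framebuffer, model)
    return framebuffer

scene = [{"name": "wall",  "transparent": False, "depth": 10},
         {"name": "glass", "transparent": True,  "depth": 5},
         {"name": "smoke", "transparent": True,  "depth": 8}]
print(render_frame(scene, camera_depth=0))
# ['deferred:wall', 'forward:smoke', 'forward:glass']
```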
The really obvious problem here is that you’ve got 2 COMPLETELY SEPARATE RENDERERS! Maintaining two renderers at once sounds like a complete nightmare.
Hack 4 - Re-entrant Deferred Pipeline
I suggest you read the article (linked in the title) for a more in-depth look at how this technique works.
This technique runs the deferred pipeline using the previous pass as an extra input. You render your scene several times (once for each layer of transparency), using the output from rendering the previous layer as your starting point. In this way you do deferred shading on the transparent things and combine it with the deferred shading of the things in the deeper layers. Everything is deferred shading, yay!
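In outline it’s a loop like this toy sketch (the names and the blend step are hypothetical stand-ins for the real passes):

```python
def deferred_shade(layer, background):
    # Stand-in for a full deferred pass over one transparency layer,
    # blending its result over whatever the previous pass produced.
    shaded = "lit(" + layer["name"] + ")"
    if background is None:
        return shaded
    a = layer["alpha"]
    return f"{a} * {shaded} + {1 - a} * [{background}]"

layers = [  # deepest layer first
    {"name": "wall",  "alpha": 1.0},
    {"name": "glass", "alpha": 0.5},
]

result = None
for layer in layers:
    result = deferred_shade(layer, result)  # previous pass is an input
print(result)  # 0.5 * lit(glass) + 0.5 * [lit(wall)]
```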
The re-entrant pipeline is actually a pretty neat approach: everything uses deferred rendering and requires no special handling, you just throw models into the scene (semi-transparent or not) and they get rendered. There are, however, quite a few technical drawbacks:
- Shadow maps must be either redrawn or saved between your passes. This increases either render time or memory usage.
- Post-processing may still be slightly complicated, because you only have access to the material information from the last pass (e.g. a motion blur based on velocity information in the buffer would not correctly blur things which were occluded).
- Post-processing techniques cannot use the stencil buffer (because it’s already in use by this rendering system).
- A fully HDR pipeline is tricky/impossible.
Heist
I’m not sure exactly what I’ll use in the end. Right now I’m going with the forward rendering technique because it was the simplest to get working: the “forward renderer” I use to render transparencies totally ignores lights, so it was pretty easy to build (and cheap to render too), but it isn’t a great final solution!