In this update, I'll look at the lighting system in Blue Comb.

A 3D scene is made up of a bunch of triangles with pictures on them. If you don't apply any lighting, you get something like this:

There's a silhouette there, but you can't make out any of the details. The simplest approach to lighting a 3D scene is to imagine a light source, and then darken each part of the model according to how much it faces away from that light. This can be done very quickly, and it looks like this:

Straight away we've improved on the previous image in terms of communicating the shape of the object we're looking at. On the other hand, it looks horrible - like a photograph taken using on-camera flash. It's rather flat, interior parts of the model are much lighter than they should be, and it's just floating in front of the work surface.
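For the curious, that "darken according to how much it faces away" rule is standard Lambertian diffuse shading - a rough Python sketch (the function and vector names are just illustrative, and I'm assuming unit-length vectors):

```python
def lambert(normal, light_dir):
    """Diffuse intensity: how directly a surface faces the light.

    Both vectors are assumed to be unit length. Faces pointing away
    from the light (negative dot product) are clamped to zero rather
    than going negative.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)
```

A face looking straight at the light gets full intensity, one edge-on gets nothing, and everything in between falls off with the cosine of the angle.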

What's missing is a shadow:

Awesome! It now looks like an actual thing. It's very dramatic. In fact, if we were to criticise, it would be that it's perhaps too dramatic. The parts that are in shadow are pitch black. If we want people to be able to see and play with shadowed bits, we need to put a bit of extra light in there:

Well, OK. Now it's lost its deep shadows, but it's also lost its pop.

Until recently, that was how games generally did their lighting. There were a small number of light sources, an ambient light (the extra bit we added to make it less contrasty), and some system for casting shadows. The ambient light is there because we've only been considering local lighting effects - the interaction between a particular light and a particular point on an object. In reality, a light doesn't just stop the moment it hits an object - some of the light bounces off, and goes on to light another object. Eventually some light will bounce its way into every nook and cranny, so the intensity and colour of light at any location depends on the entire scene.
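That classic local model boils down to a sum of two terms per light channel. A minimal sketch (the parameter names are mine, not from any particular engine):

```python
def shade(ambient, light_colour, n_dot_l, albedo):
    """Classic local lighting: a flat ambient fill plus one
    directional light.

    n_dot_l is the clamped cosine between the surface normal and the
    light direction; albedo is the surface colour. All values are
    per-channel in the 0..1 range.
    """
    diffuse = max(0.0, n_dot_l)
    return tuple(a * (amb + lc * diffuse)
                 for a, amb, lc in zip(albedo, ambient, light_colour))
```

Crank the ambient term up and you lose the pop; turn it off and shadowed areas go pitch black - exactly the trade-off in the images above.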

Calculating this global illumination is currently an interesting area in realtime graphics. It's still computationally intensive enough that there isn't a single approach that will work in all situations on all systems, but it is now possible to pick a set of compromises, or a subset of the problem, and come up with a workable solution.

The first step towards global illumination was a technique called ambient occlusion. This is still essentially a local illumination scheme, but now the ambient light is affected by the amount of self-shadowing at each point on the model. Calculating the self-shadowing is still an expensive operation, but for a fixed model it only needs to be done once when the model is created, and the results stored in a texture for use in the game. It allows for very soft and subtle lighting, and it was this look I had in mind when I started Blue Comb.
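Conceptually, the occlusion at a point is just the fraction of the hemisphere above it that can see the sky. A toy Monte Carlo version (the `is_occluded` callback stands in for a real ray cast against the model):

```python
import random

def ambient_occlusion(is_occluded, samples=1000, seed=0):
    """Estimate ambient occlusion as the fraction of random
    hemisphere directions from which a point can 'see' the sky.

    is_occluded(direction) is a stand-in for casting a ray against
    the model; a fixed seed keeps the estimate reproducible.
    """
    rng = random.Random(seed)
    visible = 0
    for _ in range(samples):
        # Crude hemisphere sample: random direction with positive z.
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1))
        if not is_occluded(d):
            visible += 1
    return visible / samples
```

A point on an open plane scores 1.0 (fully lit by ambient); a point at the bottom of a crease scores much lower, which is what gives the soft creased look.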

If we remove the shadow and the ambient light, and switch to ambient occlusion, the scene looks like this:

Now, the fact that I'm simulating Origami means that I'm dealing with a different set of issues from most games. One point of difference is that the model is constantly changing shape, and its shape is entirely (and unpredictably) dependent on the player's actions. This means that I can't use any pre-processing techniques to speed things up at runtime. On the other hand, it's fundamentally a low polygon model, and there's nothing in the scene except for the model and the (flat) work surface. So techniques that would be impractically slow in a modern game turn out to be solvable by brute force.

What I do is take a simplified version of the model (flatten the faces, ignore the curves around the folds, and reduce any curls to a few sharp bends). Then I go through each face and render the whole scene in black from its point of view, multiple times, each time skewing its view direction to cover the whole visible hemisphere. All of these light views get accumulated together, and by the end of the process I have a standard ambient occlusion map. This works great, and on the PC I can accumulate a few hundred passes each frame in realtime, which makes a lovely soft image.
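Stripped of all the GPU detail, the accumulation loop looks something like this sketch - `lit_from(texel, direction)` plays the role of rendering the scene in black from one skewed view and checking whether a texel was visible:

```python
def accumulate_ao(texels, directions, lit_from):
    """Accumulate one visibility 'pass' per light direction into an
    occlusion map, then normalise by the number of directions.

    A toy stand-in for the real render passes: lit_from(texel,
    direction) reports whether that texel was unshadowed when the
    scene was rendered from that direction.
    """
    accum = [0] * len(texels)
    for d in directions:
        for i, t in enumerate(texels):
            if lit_from(t, d):
                accum[i] += 1
    return [a / len(directions) for a in accum]
```

Each pass contributes one bit per texel; dividing by the pass count at the end turns the tally into the 0..1 occlusion value the shader wants.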

But in doing this, I realised I was only a short step away from quite a nice global illumination system. Instead of rendering the scene in black to record how occluded a point on the model is, I could render the scene fully textured and lit, and record how much light is bouncing to that point on the model:

There are some complications to doing this. I still need the ambient occlusion term, because the simplified mesh I use for the lighting doesn't capture all the surface detail I want. So occlusion gets accumulated into the alpha channel of the light map, while the bounce light is accumulated into the colour channels.

Previously I only needed a single bit for each light accumulated, so an 8-bit texture was all that was needed for 255 light directions. Recording bounced light, I need more precision. On the PC I use a higher-precision texture format; on the iPad I apply some dithering to try to preserve the colour detail.
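The dithering idea is simple: add a little noise before quantising, so the rounding error averages out across neighbouring pixels instead of showing up as banding. The game presumably uses an ordered pattern on the GPU; random noise shows the same principle:

```python
import random

def quantise_dithered(value, rng):
    """Quantise a 0..1 light value to 8 bits, adding noise before
    rounding so the quantisation error averages out over many pixels
    rather than banding.
    """
    noisy = value * 255.0 + rng.uniform(-0.5, 0.5)
    return max(0, min(255, round(noisy)))
```

Any single pixel is still only 8 bits, but the average over a patch of pixels recovers the in-between values the eye would otherwise see as steps.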

For speed I'm accumulating every light direction directly into the light map (rather than repeatedly rendering the light for a particular direction, accumulating it into the texture, rendering another direction, accumulating that, etc). The tricky part is that I have to ensure each pixel is only rendered to once per light direction (otherwise it will contribute too much to the accumulated light map). I use the z buffer to ensure a single write per direction, but I can't use it to resolve which is the correct face to write, and instead have to z sort the scene for each face.

On slower systems (including iPad), the number of directions I can accumulate in a single frame is down in the tens, which produces multiple shadows rather than a smooth gradation from light to shade. I allow this during folding operations, but as soon as the movement slows down I start accumulating the results from previous frames to bring the quality back up.
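Folding the results of previous frames back in is a running average: each frame's coarse pass gets equal weight, so the image sharpens the longer the model stays still. A sketch of that blend (names are illustrative):

```python
def refine(history, new_pass, frames_accumulated):
    """Blend this frame's coarse lighting pass into a running average.

    history is the mean of frames_accumulated previous passes; the
    returned map weights every pass equally, so quality improves
    steadily while the model is at rest.
    """
    n = frames_accumulated + 1
    blended = [(h * frames_accumulated + p) / n
               for h, p in zip(history, new_pass)]
    return blended, n
```

As soon as the player starts folding again, the history gets thrown away and the count resets, trading quality back for responsiveness.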

This image shows the effect of accumulating just the occlusion term over increasing numbers of light directions. From top left to bottom right, it's accumulating 1, 4, 16, then about 8000.

This image shows the bounce light for the same number of light directions. The scenes with fewer directions are less acceptable than when it's purely occlusion - the coloured dots appear as solid projections on the surface when they should be just casting a subtle hue.

Note how much lighter the inside of the model is when all the light is allowed to bounce around in there.

Finally, the light bounced onto any part of the model actually comes from the preceding frame. If the scene changes abruptly, then this component of the lighting will be wrong. I rely on the fact that manipulating the Origami model is somewhat damped for the sake of a nicer user experience, so the indirect light from a few frames ago is never far from the correct position.

It would be possible to split up the process of accumulating light directions so that it updated the light map a few times per frame, but in practice the lag isn't noticeable. This image shows how long it takes to propagate the light through the model. The majority of the effect is there by the second bounce.
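That frame-by-frame feedback is effectively an iterative solve: each frame adds one more bounce of indirect light. A toy version with two facing patches shows why the second bounce already carries most of the effect (the form factor and albedo numbers here are made up for illustration):

```python
def propagate(direct, form_factor, albedo, bounces):
    """Two facing patches exchanging light, one bounce per 'frame'.

    Each step, a patch receives its direct light plus a fraction of
    what the other patch held on the previous step - mirroring how
    the previous frame's light map feeds the next frame's bounce.
    """
    a, b = direct
    la, lb = a, b
    for _ in range(bounces):
        # Tuple assignment: both patches read last step's values.
        la, lb = a + albedo * form_factor * lb, b + albedo * form_factor * la
    return la, lb
```

Because each bounce multiplies the light by the reflectance (well under 1), the series converges geometrically - the corrections after the second bounce are too small to see.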

The big win from this approach is that it makes the paper nice and translucent without the various hacks I was previously forced to use. Because it's accumulating light from every direction, surfaces which aren't actually touching cast a very soft, blurred light on each other. However, when they are touching, you see quite a clear image. This video of some aimless folding with a particularly translucent paper stock shows the effect. As the curl flattens out, the paper underneath becomes visible. (Try it with real paper!)

Apart from looking cool, I use this feature in the game part of the game for aligning patterns.