Decorating your models is an important part of Folding, so I need to be able
to add overlay textures in a way that doesn't destroy the lovely papery look of
the thing. The trouble is, you can get very close to your model, particularly
if it's intricately folded, and there is a limit to how high a resolution the
textures can be. If you zoom in and are faced with jagged edges and bilinear
filtering artefacts, the game stops saying 'I'm a piece of paper' and starts
saying 'I'm just an app'.
There's a trick for rendering decals in games, where the blurry filtered
texture is thresholded to give a sharp outline (a version of this technique is
here). I use a variation of this which handles coloured decals: I first
saturate the colours in the source texture, and store the intensity in the
alpha channel so that I can reconstruct the original colour. When I come to
render it on the paper, I use the paper's detail heightmap to threshold the
decal's alpha value. This means that from a distance the decal averages out to
its original colour, but close up you see solid ink which has stuck to paper
fibres and is missing from some of the hollows. And instead of blurry jagged
edges, you get papery fibrous ink blotches.
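A minimal sketch of that encode/threshold round trip, in illustrative Python (the function names and the exact threshold rule are my assumptions, not the game's actual shader):

```python
# Sketch of the decal trick described above: colours are pre-saturated with
# the intensity moved into the alpha channel, then at "render" time the
# alpha is thresholded against the paper's detail heightmap.

def encode_decal(r, g, b):
    """Saturate the colour and store its intensity in the alpha channel."""
    intensity = max(r, g, b)
    if intensity == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    # Fully saturated colour; alpha remembers how strong the ink was.
    return (r / intensity, g / intensity, b / intensity, intensity)

def shade_decal(decal_rgba, paper_height):
    """Threshold the decal's alpha against the paper's detail heightmap.

    Where the ink is strong enough to reach down over the paper fibres we
    show solid saturated ink; where a tall fibre pokes through, the
    underlying paper shows instead of a blurry filtered edge.
    """
    r, g, b, a = decal_rgba
    if a > paper_height:
        return (r, g, b)   # solid ink stuck to the fibres
    return None            # hollow: underlying paper shows through

pixel = encode_decal(0.8, 0.4, 0.0)   # an orange ink sample
print(shade_decal(pixel, 0.5))        # low fibre: solid ink
print(shade_decal(pixel, 0.9))        # tall fibre: paper shows through
```

Averaged over many pixels the thresholded result still converges to the original colour, which is why the decal looks right from a distance.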
I've exchanged one low-resolution artefact for a different low-resolution
artefact - but it is now, at least, one which is consistent with the world it's
trying to portray.
I'm hard at work on the system for decorating your models with drawings and
decals at the moment. It's a particularly satisfying exercise - you paint zebra
stripes on your origami horse, then print it out as a square of splintered
zebra bits. When you fold the printout, it magically reassembles itself into a
zebra.
I'll make a video, but it's not yet ready for primetime. Instead, here's a
picture of a pajarita:
The pajarita is a classic Origami model. When you build it out of paper, you
make a few creases in a sheet of paper, and it sort of magically pops into
existence. I wasn't sure it would be possible to build using the operations
available in Blue Comb, but after a bit of fiddling around I found a short
sequence of squashes which did the job.
I thought I should make a video of folding a complete crane in one take. It
shows the current state of the code, glitches and all. Still lots left on the
todo list, particularly the settling & intersection physics.
I only take it as far as the flat model - the UI for posing it in 3D is
still debug only.
The turning over effect is new from last week. In my original plan, turning
the model completely over was done as an ordinary fold pulled all the way. This
had the advantage of simplicity and didn't require any hated gestures, UI or
tutorials. Unfortunately it turned out to be annoyingly cumbersome -
particularly in well folded models with lots of flaps to gather up. For the new
system, I've changed the navigation system. Previously, dragging the
worksurface moved the camera. Now it turns the whole model over, and moving the
camera is achieved with an alternate drag (two fingers on a touch device, right
click on a mouse).
Moving the camera is something which I'd like the player to not have to do
at all (and I'm still finessing that side of things), so pushing it down in
priority makes sense. The next thing I need to do is some user testing to see
how discoverable the turn drag is. Turning by pulling a fold still works (as
well as it did before), so it's an easy win.
I'm certainly enjoying the new system. Playing around with folds is much
more fun now - I realise that before, as the need to turn the model over
approached, I was getting a sinking feeling that was spoiling the experience.
Now the joy is back.
In this update, I'll look at the lighting system in Blue Comb.
A 3D scene is made up of a bunch of triangles with pictures on them. If you
don't apply any lighting, you get something like this:
There's a silhouette there, but you can't make out any of the details. The
simplest approach to lighting a 3D scene is to imagine a light source, and then
darken each part of the model according to how much it faces away from that
light. This can be done very quickly, and it looks like this:
Straight away we've improved on the previous image in terms of communicating
the shape of the object we're looking at. On the other hand, it looks horrible
- like a photograph taken using on-camera flash. It's rather flat, interior
parts of the model are much lighter than they should be, and it's just floating
in front of the work surface.
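That "darken according to how much it faces away" rule is just a clamped dot product (Lambert's cosine law). A minimal sketch, with illustrative names:

```python
# Simplest local lighting: brightness depends only on the angle between
# the surface normal and the light direction (both unit vectors).

def lambert(normal, light_dir):
    """Brightness in [0, 1] from how directly the surface faces the light."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    dot = nx * lx + ny * ly + nz * lz
    return max(0.0, dot)   # facing away -> 0, facing the light -> 1

light = (0.0, 0.0, 1.0)                 # light shining along +z
print(lambert((0.0, 0.0, 1.0), light))  # facing the light -> 1.0
print(lambert((1.0, 0.0, 0.0), light))  # side-on -> 0.0
```

This is cheap enough to evaluate per pixel, which is why it was the workhorse of realtime lighting for so long.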
What's missing is a shadow:
Awesome! It now looks like an actual thing. It's very dramatic. In fact, if
we were to criticise, it would be that it's perhaps too dramatic. The
parts that are in shadow are pitch black. If we want people to be able to see
and play with shadowed bits, we need to put a bit of extra light in:
Well, OK. Now it's lost its deep shadows, but it's also lost its pop.
Until recently, that was how games generally did their lighting. There were
a small number of light sources, an ambient light (the bit extra we added to
make it less contrasty), and some system for casting shadows. The ambient light
is there because we've only been considering local lighting effects - the
interaction between a particular light and a particular point on an object. In
reality, a light doesn't just stop the moment it hits an object - some of the
light bounces off, and goes on to light another object. Eventually some light
will bounce its way into every nook and cranny, but the intensity and colour of
light at any location is dependent on the entire scene.
Calculating this global illumination is currently an interesting area in
realtime graphics. It's still computationally intensive enough that there isn't
a single approach that will work in all situations on all systems, but it is
now possible to pick a set of compromises, or a subset of the problem, and come
up with a workable solution.
The first step towards global illumination was a technique called ambient
occlusion. This is still essentially a local illumination scheme, but now the
ambient light is affected by the amount of self-shadowing at each point on the
model. Calculating the self shadowing is still an expensive operation, but for
a fixed model it only needs to be done once when the model is created, and the
results stored in a texture for use in the game. It allows for very soft and
subtle lighting, and it was this look I had in mind when I started Blue Comb.
If we remove the shadow and the ambient light, and switch to ambient
occlusion, the scene looks like this:
Now, the fact that I'm simulating Origami means that I'm dealing with a
different set of issues from most games. One point of difference is that the
model is constantly changing shape, and its shape is entirely (and
unpredictably) dependent on the player's actions. This means that I can't use
any pre-processing techniques to speed things up at runtime. On the other hand,
it's fundamentally a low polygon model, and there's nothing in the scene except
for the model and the (flat) work surface. So techniques that would be
impractically slow in a modern game turn out to be solvable by brute force.
What I do is take a simplified version of the model (flatten the faces,
ignore the curves around the folds, and reduce any curls to a few sharp bends).
Then I go through each face and render the whole scene in black from its point
of view, multiple times, each time skewing its view direction to cover the
whole visible hemisphere. All of these light views get accumulated together,
and by the end of the process I have a standard ambient occlusion map. This
works great, and on the PC I can accumulate a few hundred passes each frame in
realtime, which makes a lovely soft image.
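A toy version of that accumulation idea (illustrative Python, not the renderer: the "scene" here is reduced to a single flat strip hovering over the point being shaded, and all names are mine):

```python
# Visibility from a point is sampled over many directions covering the
# hemisphere; each direction contributes one bit to a running occlusion
# average, just as each rendering pass above contributes one light view.
import math
import random

def occlusion(point_height, overhang_height, overhang_extent, passes=1000):
    """Fraction of hemisphere directions blocked by a strip overhead.

    The occluder is an infinite strip at overhang_height, spanning
    |x| < overhang_extent over the shaded point.
    """
    random.seed(1)  # deterministic for the example
    blocked = 0
    for _ in range(passes):
        # Pick a direction on the upper hemisphere.
        azimuth = random.uniform(0.0, 2.0 * math.pi)
        z = random.uniform(0.0, 1.0)          # upward component
        r = math.sqrt(max(0.0, 1.0 - z * z))  # horizontal component
        x = r * math.cos(azimuth)
        if z > 0.0:
            # Walk the ray up to the strip's height; does it hit the strip?
            t = (overhang_height - point_height) / z
            if abs(x * t) < overhang_extent:
                blocked += 1
    return blocked / passes

print(occlusion(0.0, 1.0, 0.5))   # narrow strip: mostly open sky
print(occlusion(0.0, 1.0, 50.0))  # huge strip: almost fully occluded
```

More passes means a smoother average, which is the same trade-off as accumulating more light views per frame.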
But in doing this, I realised I was only a short step away from quite a nice
global illumination system. Instead of rendering the scene in black to record
how occluded a point on the model is, I could render the scene fully textured
and lit, and record how much light is bouncing to that point on the model.
There are some complications to doing this. I still need the ambient
occlusion term, because the simplified mesh I use for the lighting doesn't
capture all the surface detail I want. So occlusion gets accumulated into the
alpha channel of the light map, while the bounce light is accumulated into the
colour channels.
Previously I only needed a single bit for each light accumulated, so an 8-bit
texture was all that was needed for 255 light directions. Recording bounced
light, I need more precision. On the PC I use a higher-precision texture
format; on the iPad I apply some dithering to try to preserve the colour
fidelity.
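An ordered (Bayer) dither is one standard way to stretch limited precision like this; the following sketch is my guess at the approach, not the actual code:

```python
# Before quantising the accumulated light to a small number of levels, add
# a per-pixel offset from a repeating Bayer pattern, so banding breaks up
# into a fine pattern rather than hard visible steps.

BAYER_2X2 = [[0.0, 0.5],
             [0.75, 0.25]]   # normalised 2x2 Bayer thresholds

def quantise_dithered(value, x, y, levels=256):
    """Quantise value in [0, 1] to `levels` steps with an ordered dither."""
    offset = BAYER_2X2[y % 2][x % 2] / levels
    q = int(min(value + offset, 1.0) * (levels - 1))
    return q / (levels - 1)

# A constant tone quantised to very few levels: neighbouring pixels land
# on different steps, averaging out to something near the true value.
row = [quantise_dithered(0.3, x, 0, levels=4) for x in range(4)]
print(row)
```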
For speed I'm accumulating every light direction directly into the light map
(rather than repeatedly rendering the light for a particular direction,
accumulating it into the texture, rendering another direction, accumulating
that, etc). The tricky part is that I have to ensure each pixel is only
rendered to once per light direction (otherwise it will contribute too much to
the accumulated light map). I use the z buffer to ensure a single write per
direction, but I can't use it to resolve which is the correct face to write,
and instead have to z sort the scene for each face.
On slower systems (including iPad), the number of directions I can
accumulate in a single frame is down in the tens, which produces multiple
shadows rather than a smooth gradation from light to shade. I allow this during
folding operations, but as soon as the movement slows down I start accumulating
the results from previous frames to bring the quality back up.
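The accumulate-across-frames step can be sketched as a simple running average (illustrative only; the real thing accumulates into a texture):

```python
# While the model moves, only a handful of light directions fit in a frame;
# once it settles, each new frame's batch is folded into a running average,
# bringing the quality back up over successive frames.

class LightAccumulator:
    def __init__(self):
        self.total = 0.0
        self.directions = 0

    def reset(self):
        """Model moved: previous frames' results are stale."""
        self.total = 0.0
        self.directions = 0

    def add_frame(self, samples):
        """Fold one frame's worth of per-direction light samples in."""
        self.total += sum(samples)
        self.directions += len(samples)

    def value(self):
        return self.total / self.directions if self.directions else 0.0

acc = LightAccumulator()
acc.add_frame([1.0, 0.0, 0.0, 0.0])   # 4 directions: blotchy estimate
print(acc.value())                     # 0.25
acc.add_frame([0.5] * 12)              # model settled: keep refining
print(acc.value())                     # converging toward the true average
```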
This image shows the effect of accumulating just the occlusion term over
increasing numbers of light directions. From top left to bottom right, it's
accumulating 1, 4, 16 then about 8000.
This image shows the bounce light for the same number of light directions.
The scenes with fewer directions are less acceptable than when it's purely
occlusion - the coloured dots appear as solid projections on the surface when
they should be just casting a subtle hue.
Note how much lighter the inside of the model is when all the light is
allowed to bounce around in there.
Finally, the light bounced onto any part of the model actually comes from
the preceding frame. If the scene changes abruptly, then this component of the
lighting will be wrong. I rely on the fact that manipulating the Origami model
is somewhat damped for the sake of a nicer user experience, and the indirect
light from a few frames ago is never far offset from the correct position.
It would be possible to split up the process of accumulating light
directions so that it updated the light map a few times per frame, but in
practice the lag isn't noticeable. This image shows how long it takes to
propagate the light through the model. The majority of the effect is there by
the second bounce.
The big win from this approach is that it makes the paper nice and
translucent without the various hacks I was previously forced to use. Because
it's accumulating light from every direction, surfaces which aren't actually
touching cast a very soft, blurred light on each other. However, when they are
touching, you see quite a clear image. This video of some aimless folding with
a particularly translucent paper stock shows the effect. As the curl flattens
out, the paper underneath becomes visible. (Try it with real paper!)
Apart from looking cool, I use this feature in the game part of the
game for aligning patterns.
In this update, I'm going to look at how I make the paper for Blue Comb.
A big part of the appeal of Origami (the real thing, and my simulation of
it) is the aesthetic of paper. I wanted the materials in the game to capture as
much of that look as I could achieve.
With no material applied, the simulation looks like this:
It has some slightly papery crumples on it, but otherwise it looks like it's
made of plastic. What I want is more like this:
Or even this:
So how's that done?
There are 3 elements we need to achieve that look:
Diffuse texture - this is the colouring of the paper. With your standard
bleached machine-made paper, this is basically just white. However, there are
lots of interesting craft papers whose fibres display a lot of variation, and I
wanted to be able to simulate them.
Surface normal - this captures the bumpiness of the surface. The tiny
variations in height caused by the fibrous structure of the paper produce
variations in light & shadow that become increasingly visible with
shallower angles of light.
Opacity map - This is possibly the most important part of the papery
illusion. Paper is very thin, so light shining through from behind contributes
a lot to the overall lighting. You can clearly see this effect in the paper
faces sticking up in the images above.
The first thing I did was to buy a bunch of paper from the art shop, and try
capturing these details directly. This image shows a very fibrous sheet of
paper under varying lighting conditions. It's the same portion of the same
piece of paper in each grid square, but you can see how different the lighting
can make it look.
The top row has the light coming mostly from behind, showing the effect of
varying opacity. The bigger, thicker fibres block more light, and are darker.
The middle row has very glancing light coming in from the left, and is
dominated by surface shadows. (This paper was artfully crumpled, so the
contribution of the fibres' shadows is hard to make out.)
The bottom row has the light coming increasingly in line with the viewer,
and shows the surface colour.
For a while I got quite excited about the idea of using these images
directly. The material shader would take the direction of the incoming light on
the paper, and use that to look up a paper image taken under the same lighting.
It seemed like a promising approach, but ultimately it always looked like a
computer model with a photograph textured onto it.
Next, I tried capturing the various elements from real samples of paper. The
diffuse and opacity maps are straightforward enough, but capturing the normal
map proved very difficult. There are various tools out there which can
reconstruct the surface from images taken under controlled lighting conditions,
but because paper's details are so small, I struggled to get useable results
which aligned convincingly with the other textures.
What I failed to try until just a couple of minutes ago while writing this
blog entry was treating the opacity map as a height map and deriving the normal
map from that. In other words, take a backlit image of the paper, and, on the
assumption that the darker areas are where the paper is thicker, and the
thicker areas are where the surface is higher, run it through a standard filter
which computes surface normals from height maps. Early tests look promising.
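That "standard filter" is typically a finite-difference slope calculation; here is a minimal sketch (central differences with clamped edges; the names are mine):

```python
# Derive a normal map from a height map: the slope between neighbouring
# texels gives the surface normal. Treating the backlit opacity image as
# height, darker (thicker) areas become raised.
import math

def normal_from_height(height, x, y, strength=1.0):
    """Per-texel unit normal from a 2D height field (list of rows)."""
    h, w = len(height), len(height[0])
    # Central differences, clamping lookups at the edges.
    dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
    dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
    nx, ny, nz = -dx * strength, -dy * strength, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

flat = [[0.5, 0.5], [0.5, 0.5]]
print(normal_from_height(flat, 0, 0))   # flat paper -> straight-up normal

ramp = [[0.0, 1.0], [0.0, 1.0]]
print(normal_from_height(ramp, 0, 0))   # slope -> normal leans off the slope
```

The `strength` factor exaggerates or flattens the apparent bumpiness without touching the source image.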
Anyway! I didn't do that; I decided to generate the paper procedurally,
giving me full control over every aspect of the appearance, the ability to pull
out any properties I need for rendering, and seamless tiling.
Paper is made by mixing a whole lot of fibres into water, and then draining
the water from them. The fibres end up randomly layered together. To simulate
this, I first need to be able to create something that looks like a fibre. What
I do for each fibre is create a spline
from some random control points, and then add variations of that spline by
randomly perturbing the control points. This produces a satisfyingly fibrous
clump of wiggles.
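A toy version of that perturb-the-control-points idea (parameters and names are illustrative, and the splines are reduced to bare control points):

```python
# Pick random control points for one base spline, then make a clump by
# re-emitting the same spline with every control point jittered a little,
# so the fibres stay together but no two are identical.
import random

def make_fibre_clump(num_points=5, variants=4, jitter=0.02, seed=7):
    random.seed(seed)
    # Base spline: a wandering run of control points in the unit square.
    base = [(random.random(), random.random()) for _ in range(num_points)]
    clump = []
    for _ in range(variants):
        clump.append([(x + random.uniform(-jitter, jitter),
                       y + random.uniform(-jitter, jitter))
                      for x, y in base])
    return clump

clump = make_fibre_clump()
print(len(clump), len(clump[0]))   # 4 fibres, 5 control points each
```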
The length of the splines, the number of them that get clumped together, the
amount of variation between each offset, the colour, width and so on is all
controllable, allowing me to create a huge range of different types of paper.
These fibres are then layered together, alternately flattening and building
up each pass. The texture map is taken from the colour of the topmost fibre,
with a height map and an opacity map taken from different flattening
approaches. Again, the number of layers and the way they're accumulated is
controllable to create different varieties of paper.
I thought it was about time I started blogging about project Blue Comb, also
known as The Origami Game.
Blue Comb has been in development for a rather long time now, and for most
of that time I haven't been entirely sure just what it is. It grew out of an
idea I was working on for a story-based game. I was thinking about the sort of
feel and style I wanted for it, and what was achievable for a tiny team on a
tiny budget. Origami seemed like the perfect theme, with low-polygon models
(easy for the machine to handle; easy for me to create) and the opportunity for
a beautiful papery visual style. I started coming up with a design where you'd
explore a world made up of collages, and along the way you'd discover new
Origami models that you could transform into - a fish would allow you to swim,
a bird would let you fly and so on.
So I sat down and started coding. Clearly the first thing I needed was some
sort of editor to create the models… and that's where I got trapped. It turns
out that creating an editor for Origami is really tricky and
really interesting. Eventually it became apparent that this game was
going to be mostly about the editor, and the exploring portion was off at a bit
of a weird angle to it.
For a while I decided that I would do away with any game aspect - it would
be purely an app for learning, creating and sharing Origami designs. Eventually
I settled on the minimum viable game for it - a puzzle where you unlock Origami
designs by matching patterns as you fold, but with a sandbox mode that contains
All The Work.
In this Blog post, I'd like to talk a bit about some of the folds that exist
in Origami, and how to achieve them in a simulation.
The first type of fold is the valley or mountain fold. This is where you
simply bend the paper along a line. If you bend it towards you, the crease is
like a valley. If you bend it away, it's a tiny mountain. Simulating this is
easy - if you drag any point on the paper, we just need to create a crease line
half way between where the drag started and where it is now. A valley fold puts
the action where you can see it, so we do that by default; to create a mountain
fold, you'll need to turn the paper over and create a valley from the other
side.
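That "crease half way between" rule is just the perpendicular bisector of the drag; a minimal sketch:

```python
# Dragging a point folds it onto the cursor, so the crease line sits at the
# midpoint of the drag segment, at right angles to the drag direction.

def crease_line(start, end):
    """Return (point_on_line, direction) for the fold crease."""
    sx, sy = start
    ex, ey = end
    mid = ((sx + ex) / 2.0, (sy + ey) / 2.0)
    # Perpendicular to the drag direction.
    direction = (sy - ey, ex - sx)
    return mid, direction

point, direction = crease_line((0.0, 0.0), (2.0, 0.0))
print(point)       # (1.0, 0.0): halfway along the drag
print(direction)   # (0.0, 2.0): at right angles to it
```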
This video shows some aimless valley fold creation, and also shows how it
tries snapping the fold to likely points and edges.
The first few folds are straightforward enough, but we soon hit what turns
out to be quite a tricky problem with simulated Origami. You can put a crease
across wherever you like and you'll have a valid fold, but there are situations
where moving a crease a small amount will result in a large change in the
folded result. The following video shows how I handle the situation where
dragging a crease will suddenly involve a previously unfolded part of the
paper. The active fold uncurls a little way while the newly involved fold curls
up to join it. With a mouse this is easy enough, but on the iPad it means that
your finger starts getting ahead of the drag point. I experimented with various
ways of resolving the offset. The best solution I found was simply zeroing the
physics when the catchup curl is complete, so that it slowly accelerates to
rejoin your finger (allowing you time to adjust where you're pointing).
During the catchup curl, you have the opportunity to decide whether you
really mean to involve the new flap of paper - it takes a certain distance to
commit to it (which is proportional to the size of the flap), so there's time
to stop and go back if your intention is to snap to the edge of the flap.
Mountain and valley folds are all very well, but most of the interesting
models involve some slightly more complex moves. (There is an Origami
discipline called Pureland which aims to use
only those folds, and it's fun to see how much expression you can achieve
within such constraints). I was going to be disappointed if I couldn't create
the classic crane model within my simulator. In fact, my goal was to be able to
create it using intuitive prods at the onscreen model, with no buttons or
menus.
This sort of model requires a few techniques. First, there is the squash
fold, which looks like this:
To perform a squash fold, you first need to have a model that has creases
defining two flaps which hinge over, and a flap carried between them. Where
you'd start a valley fold by picking a vertex or a silhouette edge to start
dragging, for a squash fold you put your finger in the middle of a crease and
smudge it sideways. The simulation searches for a valid pivot point and
collects all the affected faces, and drags them around. If there are multiple
valid squash folds, you smudge back past the starting point and in again, and
it will cycle to the next possibility.
Then there's the petal fold:
If you look carefully, you'll see that the petal fold is just two squash
folds stuck together in the middle. The tricky part is that we can't choose a
starting point for the drag which says unambiguously that this is a petal fold
rather than a valley fold. What happens is the simulation assumes that if a
petal fold is possible from your start point then it's probably what you want.
Again, multiple valid petal folds can exist at one location, so the same trick
of dragging back and coming in again cycles through them, and this is where you
can cycle on to the valley fold if wanted.
Cycling through possible folds is shown in this video:
And a related technique, dragging back to gather multiple flaps into a fold,
is shown here:
The last major fold required for the crane (and most of the traditional
models) is the inside reverse fold. This is often used for heads, beaks, legs
and so on, and involves opening up a flap enough to flip it inside out, like
this:
It's achieved by pushing on the edge of the fold. The simulation just
assumes that picking an edge that is reversible should invoke an inside
reverse. If you actually want to perform a valley fold, you pick the pointy end
instead.
There is another reverse fold, the outside reverse, which is very similar
but ends with the flap folding around outside of itself rather than flipping
inside. I currently handle this with some restrictions that I'm hoping to
overcome. Models such as the traditional swan can be folded, but require an
additional step to set up.
That's probably enough for this update, but I'll finish with a couple more
video clips. This one is me folding (most of) the bird base:
And this one shows how you can scrub through a folding sequence if you're
learning how to fold a model for reals. (The simulation automatically combines
multiple symmetrical folds into a single step where that would improve
clarity.)
If it seems I've been quiet recently, that's because I've had my head down
pummelling away at Project Blue Comb. There's a blog post coming on that later,
but as always there's just one more thing I simply must add before it's ready
to be seen.
However, I couldn't see a new platform being launched without porting
Ancient Frog to it. Windows 8 is a pretty radical reboot of the franchise
(consult the internet to hear what a brilliant / terrible idea that is). I was
keen to jump in and see what it's like to develop for.
It's already live on the Windows store
here so if you're giving the Windows 8 preview a go, grab it now!
I got Ancient Frog working pretty quickly (it's very malleable by now, and
all the early work on resolution / aspect ratio independence, and the later
work on up-rezzing, have paid off well). There were a few things I did to make
it sit better with the new Windows style (previously called "Metro").
I finally updated the level choosing interface, which was a weak point from
the beginning. Reworking that to the Windows tile grid worked out very well -
it's not just that it makes it consistent with the rest of the platform; the
tile system provides a very simple set of rules for laying out UI elements.
I changed some of the in game UI to use charms & swiping. Hints,
undo/redo, volume and so on are now summoned up in the Windows manner. I was a
little more wary of doing this - a game is, by its nature, its own UI. I'm not
sure whether consistency with the platform is more or less potentially
confusing than consistency with the game's own world. However, the end result
does seem to be improved by doing things Windows' way (it's not as if the
original mishmash of buttons placed wherever they would fit was some precious
design worth preserving).
The feature I really like is snapping. In Windows 8, you can drag an app to
the side of the screen, to run side-by-side with another app. It's a nice
compromise between the simplicity / restrictiveness of full screen apps, and
the power / mess of layered windows. On a tablet, where managing windows with a
finger is pretty awkward, it's particularly welcome. The really clever part is
that the snapped application is 320 pixels wide - the (logical) width of a
smartphone app. So if you're building a Windows 8 app, you're encouraged to
implement a smartphone-usable view of it right alongside your tablet / desktop
view. I'm assuming Microsoft is hoping that this will stop them falling into
the hole that Android tablets have ended up in, where developers are encouraged
to write one app for all form factors. Without carefully tailoring the
experience to the device size, you end up with some pretty odd layouts.
I have (of course) already built separate phone and tablet versions of
Ancient Frog. For the Windows 8 port, these had to be combined so that it could
switch between them based on window size. This was a little more complicated
than adding the phone assets to the build. For a start, 320 pixels is just the
width - the height is whatever height the user's monitor is, so the aspect
ratio can easily go beyond the limits of the phone version's background
textures. The desktop / tablet version is much more flexible in how it can be
laid out, but assumes more space around the board for the frog to breathe. I
had to change the camera to zoom in as close as the current puzzle allows, and
rearrange UI elements to get out of the way vertically. The end result is that
the big screen version can now reformat itself right down to the small screen
layout, and beyond into crazy stretched territory.
Given that Ancient Frog was developed primarily on a Windows PC, and for
most of its development was intended for release only on Windows, it's a little
odd that this port ended up involving the most work. This is because Windows 8
has introduced some important changes. The new style of applications need to be
built on WinRT to be eligible for the Windows store. (That's "WinRT" - Windows
RunTime - not to be confused with "Windows RT", which is the ARM edition of
Windows.)
Porting Ancient Frog to a new platform has always been pretty simple to kick
off - create a new project in that platform's development environment (ideally
using a '3D app' template or simple sample), add all the Frog source files to
it, and call Frog's Draw() from the sample's render loop. This is how I started
the Windows 8 port, and moving from the creaky old Win32 initialisation &
game loop to the nice modern XAML setup was pretty painless. There's just one
gotcha - there's no OpenGL in WinRT.
This is the first port I've done which didn't use OpenGL. It's been a while
since I've even used DirectX, so I wasn't sure, when I started working on it,
how much work would be involved and whether it would be worth the effort. My
assumption was that, since both APIs exist to communicate with the same 3D
hardware with largely the same constraints, there would probably be a pretty
simple mapping from one to the other so how hard could it be?
OpenGL calls all go through my own layer which handles all the messy
platform specific setup stuff (setting up contexts, getting function pointers
for extensions and so on). It also handles the bits that were removed from
OpenGL ES 2.0 (all the matrix operations). For the D3D version I continued this
approach, removing all the rest of the actual OpenGL calls from the back end.
The game code and data remain unchanged, and the draw stuff is translated into
D3D on the fly. I drew the line at shaders - automatically converting GLSL to
HLSL was more work than my handful of shaders warranted, so I just rewrote them
by hand.
I was up and running very quickly, but various problems became apparent as I
worked through the rest of the port. Problems with some early 3D drivers made
me wary about relying on modifying D3D's rasterizer state, so I reorder
vertices as I pass them through, rather than changing the cull mode. The
biggest problem was performance - it worked fine on a normal desktop PC (it's
never been a game for pushing the boundaries of the hardware), but when I tried
it on ARM it ground to a halt. I wasn't too worried at first - the iPad port
took a bit of massaging to get up to speed, and I thought I could simply reuse
the work I'd done there (reducing some of the scene complexity and baking in
the lighting to reduce fillrate). Performance improved, but was still not
acceptable (and still nowhere near what the hardware was capable of).
There is a simple two step process for speeding up rendering:
1 - draw less stuff
2 - batch everything that is drawn
I'd covered step 1, so I had a closer look at step 2. The obvious candidates
were already well batched (text, for instance, all goes off in a single call).
However, the frog itself isn't. Rendering the articulated character was pretty
much the first piece of code I wrote for the first prototype of what turned
into Ancient Frog, and it's implemented very naively. Each movable part (torso,
eyes, leg segments, toes) is a separate quad, and I'm setting up a matrix for
each one to scale, rotate and translate it into position. Now, that's only 32
quads (and another 32 for the shadow), and it's never given me trouble on the
low powered devices I've used before, but I'd run out of stuff to draw less of.
I still wanted to do it all on the back end, so I reordered my D3D code to hold
off issuing any draw calls until forced to by a state change, and moved the
transform from the vertex shader onto the CPU at the point where I'm copying
the vertices over to D3D.
This fixed the problem completely - ARM performance went straight up to
where I'd expected it to be from the start.
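The reorganisation can be sketched like this (illustrative Python standing in for the D3D back end; the flush-on-state-change rule is the point, and the counter stands in for actual draw calls):

```python
# Instead of one draw call per quad, each quad is transformed on the CPU
# while its vertices are copied into a shared buffer, which is flushed
# only when render state (here, the texture) actually changes.
import math

class QuadBatcher:
    def __init__(self):
        self.vertices = []
        self.texture = None
        self.draw_calls = 0

    def draw_quad(self, texture, x, y, w, h, angle):
        if texture != self.texture:
            self.flush()             # state change forces a real draw call
            self.texture = texture
        c, s = math.cos(angle), math.sin(angle)
        for px, py in [(0, 0), (w, 0), (w, h), (0, h)]:
            # Rotate + translate on the CPU at vertex-copy time.
            self.vertices.append((x + px * c - py * s, y + px * s + py * c))

    def flush(self):
        if self.vertices:
            self.draw_calls += 1     # stands in for the actual D3D call
            self.vertices = []

batch = QuadBatcher()
for part in range(32):               # 32 frog parts sharing one texture
    batch.draw_quad("frog", part, 0.0, 1.0, 1.0, 0.0)
batch.flush()
print(batch.draw_calls)              # one call instead of 32
```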
Buy Ancient Frog. Buy it.
Anyway, here it
is. Available in 32-bit & 64-bit Intel versions, and a spiffy new ARM
version all ready for when Windows RT devices actually exist.
The missus got a weather station for her birthday. I mounted it on a
convenient sturdy post at the end of the garden, and it started gathering data.
(That fence is on a bit of a slope, and the post is wonky, but thanks to
some carefully engineered wedges the weather station is level.)
Now, there's not much point gathering data if you don't upload it somewhere.
It's a pretty heavy duty unit (a Davis Vantage Pro2), so I expected it to be
easy to interface to a PC. Surprisingly, it doesn't come with that ability out
of the box - you have to buy a little dongle and some software for 290 NZD, or,
if you want it to upload its results automatically without a PC, 500 NZD. That
seemed on the outrageous side to me, so I went and googled for cheap
alternatives.
It didn't take long to find this
blog post from someone who'd worked out how to interface to the unit
without the dongle. It uses a 3.3v serial connection, which you can get pretty
cheaply - but that's still not cheap enough for me, and still requires a PC
always on to upload the data to the web.
Luckily, I had an old Linksys WRT54G sitting around. It's a classic
hacker-friendly device, very easy to install Linux on, and, most usefully, has
a connector inside that handles serial at 3.3v. It's been sitting in my study
acting as a dumb switch since it was superseded by the AirPort. Time to dust it
off and give it some gainful employment.
I installed OpenWRT on it, drilled a hole
in its case, soldered some wires in, and hooked it up to the Davis console.
Fired up minicom, typed some commands, and - hey presto! - it worked. I spent
some time happily turning the console's backlight on and off. This was too easy, surely?
Jony Ive eat your heart out.
Well yes. As it turned out, it was too easy. Getting from this stage to
having something which would automatically update a website with the results
turned out to be quite a slog. Still, I did it so that you don't have to!
My original notion was to install the wview daemon on the router. This
handles all sorts of weather stations, serves up its own web pages, and can
forward the results to a variety of other weather sites. But to build wview, I
first had to build the OpenWRT toolchain, and stuff the firmware full of the
relevant dependencies. After a couple of brickings (including one that I could
only recover from by jamming
a fork in the router), I realised that I simply wasn't going to get
something the size of wview crammed into the 4MB of flash available to me.
Next I looked to the various Perl scripts that people have written for
communicating with their weather stations. But it turned out that I couldn't
even fit Perl into the firmware. It was clear that I had to look at the problem
from the other side - see what was available on the router, then try to use
that. As it turned out, Lua was already included, so that's what I had to use.
The next problem turned out to be communicating over the serial connection.
It had been straightforward using minicom, but accessing /dev/ttyS1 from within
Lua gave very inconsistent results. It turns out that by default, Unix still
assumes that a serial port exists so that dumb terminals can connect to it.
When it sees data coming in, it tries to respond to it on that assumption. So
the Davis console would start sending its data, and the router would
immediately start responding "OH HAI HOW ARE YOU TODAY", which would cause the
console to cancel in confusion.
I built a new version of the firmware with stty enabled, and that let me
apply these intuitive settings:
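The stty invocation itself hasn't survived the trip to this page, but the settings it applied are the standard ones for talking to a device rather than a terminal: raw mode (no echo, no line processing) at the console's speed. Here's an equivalent sketch using Python's termios module, which flips the same flags stty does - the 19200 baud rate is my assumption, based on the Davis console's documented default:

```python
import termios
import tty

def configure_port(fd):
    """Put a serial port into raw mode at 19200 baud.

    Raw mode turns off echo and canonical line handling, so the router
    stops 'replying' to the console's data. 19200 baud is an assumption
    based on the Davis console's default speed.
    """
    tty.setraw(fd)                         # no echo, no canonical input, no signals
    attrs = termios.tcgetattr(fd)
    attrs[4] = termios.B19200              # input speed
    attrs[5] = termios.B19200              # output speed
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
```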
The final hurdle was that Lua doesn't seem to offer a non-blocking read or a
way to spawn new threads, and I was worried that the script could too easily
get hung up waiting for data from the console. My solution was a little clunky,
but should be robust; I have two separate scripts - one periodically sending a
request for more data, and the other constantly reading the serial port looking
for anything it recognises as a valid packet.
The request script is incredibly simple - it sends a carriage return to wake
up the console, waits a second, then requests a single LOOP packet:
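The script itself is missing here; this is a sketch of what it does, written in Python rather than the original shell (the wake-up carriage return and one-second pause come from the description above; the exact bytes of the request are my assumption about the Davis protocol):

```python
import time

def request_loop_packet(port, pause=1.0):
    """Wake the Davis console and ask for a single LOOP packet.

    `port` is any writable binary file object - normally the open serial
    device, but anything file-like works, which makes it easy to test.
    """
    port.write(b"\r")        # a bare carriage return wakes the console
    port.flush()
    time.sleep(pause)        # give it a moment to come round
    port.write(b"LOOP 1\n")  # request exactly one LOOP packet
    port.flush()
```

On the router, the original shell script did the same thing against /dev/ttyS1.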
and an entry in /etc/crontab to run it every 5 minutes:
*/5 * * * * /root/davissnd.sh > /dev/null
The receiving script has a bit more work to do. It looks for the start of a
loop packet ("LOO..."), pulls in enough data for a whole packet, then checks
that it has a valid CRC. If so, it picks out the relevant data and sends it on
to wunderground. It should err on the side of quietly dropping data if there's
a problem, rather than sending bad data to the web.
davisrcv.lua (If you use it, you'll have
to edit the script to add your own station id and password)
All that's left is to kick it off on startup from the router's init scripts.
Palm held a competition to promote their PDK. Ancient Frog was eligible,
but (possibly because it had passed its peak before the competition started)
didn't fare as well as I'd hoped.
On the other hand, there was a category for free apps, and for a while the
number of entrants was smaller than the number of prizes on offer. This looked
like a good opportunity to bang out some free apps - it meant I might win
something, I could experiment with what genres of app generate the most
interest, and I could have a bit of fun doing some rapid prototyping. All
models of Pre and Pixi have OpenGL ES2.0 hardware, so I could go to town with shaders.
I gave myself no more than two days per app. The speed of development and
the limited scale of what I expected to achieve meant that it was a proper
hackfest. All the code went into one ugly C file. Magic numbers, hard-coded
data, copy-and-paste coding - everything that shaves a few minutes off writing
a bit of code at the expense of making it unmaintainable, bug-prone and
difficult to build upon. I felt simultaneously dirty and liberated.
The first thing I made was Earth Now. It shows a 3D view of the Earth (rendered using
NASA satellite imagery, including normal mapping from the height data and
specular mapping for water / ice), and downloads the latest global cloud cover
imagery to display on top. I thought it would be a quick app to ease myself in
to things, but it ended up taking two days - most of which was locating a
decent source of cloud imagery, and writing my own http code to get at it
through its content network.
The next one was a sandpit app called
Liquid Planet. It combined a bunch of tricks that I found useful in
museum interactives - GPU-based interactive water and earth, with a bathymetric
model of the Earth that you can flood and drain. It's quite fun drowning
continents, or seeing just how much land there was during the last ice age (New
Zealand was huge!). This one took less than a day, but was unfortunately a bit
too demanding of the phone's GPU. (I'm used to doing this sort of thing on
high-end PC gaming cards, and while it's impressive how much you can do in a
shader on a Pre, it can't really handle the number of textures I wanted).
Then, a music app called Tonal Pool. I've traditionally been very bad at handling
sound in my games, generally seeing it as the area responsible for the majority
of the game's data footprint, the majority of the annoying configy crashy bugs,
and the least important part of the experience. (Throughout my time in the
games industry, I'd hear people repeatedly argue that it's the most important
part of the experience, but really - try playing a game with the speakers
turned off. Oh, you already do! Now try playing with the screen turned off.)
Anyway, it's time I made my peace with that particular sense, so I decided
to make an app that was all about the sound - a Tenori-On-style
sequencer. Of course, I still didn't trust it to survive on sound alone, so
it's framed in a nice little GPU-based interactive water pool, with ripples
going off as each peg sounds. I was pleased with the result - one day of work,
and I had several nice emails about it.
Banner came next. It was a bit of an
experiment - I wanted to see if I could create a software version of those
light sticks that you wave to spell out messages using persistence of vision.
The screen can only handle 60fps, which is definitely at the low end of what
you need to pull off the trick, and there's an accelerometer to help sort out the timing.
You draw your design on the screen with a rudimentary paint application,
then start waving the phone. It automatically senses this, and starts flashing
a line of dots to recreate your image in the air. It works, but only in a
really dark room, and it takes some practice to get good results. After a day
on it, I decided that it was never going to be good enough to meet the Free App
Downloader's expectations, and rather than get slaughtered in the comments, I
decided not to publish it. (Search the iPhone app store for similar apps if you
want to read how people respond to basic physical limitations...)
Finally, my favourite: Ancient Pond. (Yes - I liked it so much I gave it the
Ancient seal of quality). It's your basic fish pond app, using (ahem)
that GPU water effect again, this time with a nice flocking algorithm running
under it to make fish dart and swim about. It took a couple of days, and then
another day or so updating it to add different themes and do some tweaking of
the experience. I managed to keep the framerate up to an acceptable level
(particularly on the Pixi, which, surprisingly, handles it better than the Pre)
by strictly limiting the effects. So there's a theme which has shadows and
refraction on the water, and a theme which instead has specular highlights.
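The "nice flocking algorithm" making the fish dart and swim is presumably something in the classic boids family - that's an assumption on my part; the post doesn't say which algorithm it actually uses. A minimal 2D step looks like this (positions and velocities are complex numbers, which keeps the vector arithmetic to one character):

```python
def flock_step(fish, dt=0.1, sight=2.0, personal_space=0.5):
    """One boids-style update: each fish steers toward the local centre
    (cohesion), matches the local heading (alignment), and backs away
    from fish that get too close (separation).

    `fish` is a list of (position, velocity) pairs of complex numbers.
    The weights are arbitrary tuning values, not anything from the game.
    """
    updated = []
    for i, (p, v) in enumerate(fish):
        cohesion = alignment = separation = 0j
        neighbours = 0
        for j, (q, w) in enumerate(fish):
            if i == j:
                continue
            offset = q - p
            if abs(offset) < sight:
                cohesion += offset            # drift toward the local group
                alignment += w - v            # match the group's heading
                if abs(offset) < personal_space:
                    separation -= offset      # but don't crowd
                neighbours += 1
        if neighbours:
            v = v + (0.05 * cohesion + 0.05 * alignment) / neighbours + 0.3 * separation
        updated.append((p + v * dt, v))
    return updated
```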
Ancient Pond took off really well - there's clearly an interest in
this sort of app, and I really enjoyed myself writing it. It's very tempting to
turn it into a real app, cross-platform, with downloadable creatures and
ornaments and so on. I'm still mulling over whether to put Blue Comb
on hold and do this first.
In the meantime, it would be nice to add a few more features, but it's right
up against the hack-it-now-and-never-look-back approach I used to create it. To
do anything more to it at all would really mean tearing it down and starting again.
So, how did I fare in the competition?
Well, the results
don't seem to be up yet, but I've had an email which confirms that my placement
in the leaderboard was accurate. Ancient Frog snagged one of the $1000
HP store credit prizes, as did Ancient Pond and Tonal Pool.
Both of them hovered tantalisingly in the $10,000 actual money prize category,
but were finally beaten out. Earth Now and the free version of
Ancient Frog were also in the running until the last few days of the
competition. Liquid Planet fizzled, and I ended up pulling it from the App Catalog.
I had a lot of fun, and I have a nice little purse which I have to
spend on fun things, so that's great. And since the HP store sells Canon
lenses, and I live for Canon lenses, I'm a happy man.
Ancient Frog won another award - "Most Elegant Application" in the Intel
Atom Developer Challenge.
I may not be smiling in that photo, but that's because I'm not very good
around cameras. I'm smiling over here though.
The Intel Atom edition of Ancient Frog contains all the stuff that I
couldn't get running on iPad because of its limited fillrate. I submitted it
for the "Best of Games" part of the contest, but through my own incompetence I
missed that deadline. Winning a Grand Prize instead was a delightful surprise.
Ancient Frog currently runs on all of the devices in this picture, and a few
more that happened not to be there when I was playing with my camera.
The iPad port is out and failing to set the world alight. It did pretty well
in its first week, getting featured by Apple. But being featured on the iPad in
launch week turns out to fall a long way short of being featured on the iPhone, and
when the week was up, it didn't have any momentum.
The Palm Prē version is out too. I really enjoyed doing that one - I discovered at
GDC that Palm was opening up native development, got a Prē straight away, and
had something running in a couple of days. It's essentially the Windows build
in its iPhone-sized debugging configuration, with the OpenGL ES code and
textures from the iPhone version. Wonderfully painless, and with the crisper
screen, and the river-polished-stone feel of the Prē, it's beautiful. My
favourite version so far. It takes a while for the sales figures to come
through from Palm, so I don't really know how it's doing yet - but it doesn't
need to do anything spectacular to justify the effort.
The Android port is still rumbling along. I have something that runs well on
the Nexus One, although it lacks sound, there are some situations where it
fails to recreate textures correctly, and the levels all need redoing (again).
(This time they need to be higher res and aspect-ratio agnostic versions of the
original iPhone levels - the iPad versions only make sense on a physically
larger screen). The textures also can't be compressed, because there's no
standard GPU on Android devices, and there's a silly restriction on how much
data you can fit in internal storage. So lots of bitty work to get it to run
reliably on a decent cross section of the various devices out there, and still
no way to charge money for it from New Zealand. I keep picking away at it
because at some point it'll probably be worth the pain to nail all of the issues.
The desktop version, OS X and Windows. I keep flip-flopping on this. I'm
just not sure what sort of market there is for a game like Ancient Frog on the
desktop. The App Store has lowered everyone's price expectations, and I don't
think I'd be able to sell nearly enough copies at $5-ish to be worth the
support hassles of getting it running on decade-old malware-ridden boxes.
The netbook version. This is basically the desktop version, but with Intel
handling some of the config testing woes, and the possibility of prizes in
their competition, I'm tipped in favour of it.
With all of this stuff, Ancient Frog continues to take up a couple of days a
week of my time. The rest of the time I'm working on two new game prototypes,
hoping to get one of them to a critical point where I can announce it and go
all in on it.
I decided to take the plunge and release the iPad version of Ancient Frog on
iPad launch day. It's a bit of a risk - I'm really not comfortable releasing
something that I haven't seen running on the actual target hardware - but I
thought it would be more likely to get noticed if it was in at the beginning.
It also did me good to have a tight and immovable deadline to work towards.
It's very hard to buckle down and finish a game, even one that's supposedly
already finished, and the launch deadline forced me to finally decide what's in
and what's out, and to polish off all the loose ends.
I did leave some stuff out. There's supposed to be interactive water, but I
was worried about the performance, so that's waiting for my iPad to turn up.
(It was a bit of a saga in itself, getting one shipped out to New Zealand, but
it's finally FedExing its way across the Pacific.) As soon as I've plugged in
all the missing stuff and buffed up the performance, I'll be releasing a lovely free update.
I called the iPad version Ancient Frog HD. This may have been a mistake - the HD suffix has been
applied to a lot of iPhone titles that have been lazily up-rezzed with no
regard to the different form factor and demands of the iPad. I like to think
that Ancient Frog HD is a significant upgrade to the original title (it
certainly took a significant amount of time & effort), and I'm not sure
that's the message that people are getting.
It had a little feature in 'what's hot' for the first week, which produced
reasonable sales. However, the iPad app store is already crowded, and the user
base is still much smaller than the iPhone's, so I'm hoping for a nice steady trickle.
If you're lucky enough to have an iPad, you can check it out on iTunes.
You may have missed it - the news slipped out pretty quietly - but Apple has
announced a new product line.
The iPad is essentially a scaled up iPod touch, so naturally I'm interested
in what it means for Ancient Frog.
According to the announcement (and my experiments in the simulator), iPhone
apps work on the iPad straight out of the box, with no changes needed. However,
since the screen is higher resolution (1024x768 instead of 320x480), and a
different aspect ratio (4x3 instead of 2x3), such apps won't look their best.
Below are a couple of mockups of what you'd see if you had an iPad right now
and ran Ancient Frog on it. (All of the pictures are shrunk down to fit on your
screen - click on them to see them full res.)
The first mode just runs the iPhone app at its original resolution, giving
you a little window in the middle of the screen:
It also gives you the option to run it "2x" - doubling the size of each pixel
(in each direction), so the app uses more of the iPad's screen (640x960 of it).
This makes it bigger, but either blurry or blocky depending on whether or not
the device is going to filter the image when it scales up. I suspect they'll go
for blocky, which is what I've simulated here.
Now, neither of these options is particularly elegant or attractive. With a
bit of fiddling around though, I can make the current iPhone application
recognise the iPad and behave more like a native application on that platform.
What I've done here is run it at 768x1024, while allowing it to letterbox
slightly to retain the original aspect ratio (luckily the ragged border gives
me a neat way to bring the edges in a bit, as well as a bit of room to lose
some pixels top and bottom). This already looks way better than the previous
shot - lots of elements are still blurry, but things that appear at varying
scales in the game are already at a higher resolution. This means the text, the
daisy and the particle effects are all crisp, which makes the whole thing seem sharper.
So that's good - I can put a bit of work into the current app, and release a
free update that makes it play nicely on the iPad. But it's still blurry - what
would be involved in making it look great on the new hardware?
That's where it gets a bit tricky. The work involved suddenly goes up by an
order of magnitude. Every level has to be reworked, and I have to handle the
original aspect ratio, as well as two new ones for the iPad (for portrait and
landscape). The download size also goes up - currently I fit just under the
10MB limit that allows people to download over their phone. With the background
textures doubled up, I'm over 20MB and start to lose impulse buys (as well as
bloating out the app for everyone, regardless of whether they have an iPad).
So my plan is to leave it at that - an incremental upgrade to make the
experience worthwhile on the iPad - and offer a separate, HD, version for iPad
only. This will use the reworked levels I've been beavering away at for the
last few months, and make use of its groovy auto-aspect-ratio handling (the
desktop version can be stretched to all sorts of whacky resolutions without
breaking the effect).
And here's the result:
It's work in progress (the lighting has yet to be graded correctly, and the
particles are missing), but you get the idea. The full screen is used, the frog
is back down to a more appropriate size for human fingers to prod at, and
there's some nice little environmental stuff going on for you to play with.
It would be nice if there were a way to offer a cheap upgrade path for
people who already have the iPhone version, and I'm looking at whether there's
a clever way to do that.
I can't wait to get my hands on the physical hardware. I think there's going
to be some very interesting times for gaming in the next few years.
Ancient Frog was honourably mentioned twice in the IGF Mobile
awards - Best Mobile Game Design, and Achievement In Art. I only just missed
the deadline for submitting last year, which is why it's appearing here so
late. (Interestingly, Ancient Frog was out on the App Store shortly before Zen
Bound, one of last year's winners. They clearly had a working build earlier
than I did. I remember I was in a tearing hurry to get something out the door
before Dylan was born, and pretty much released as soon as the levels were done.
I've also been nominated in the IMGA awards -
under Excellence In Design.
It's nice getting this recognition - particularly the categories I'm falling
into. I'm an old (old, old) school game programmer, and was always one for
getting my teeth into every part of the creation of a game. Towards the end of
my time in the mainstream games industry I'd been pushed further and further
into a narrow niche, until I was just a middle-manager with a team of
programmers to look after. Getting nominated for 'art' and 'design' awards
helps take some of the bad taste of those times away.
My thrashing around trying to pin down the details of project Blue Comb
continues. I have decided on an art style, which is helping to frame the rest
of it. Ancient Frog started with a very simple and clear premise, but this new
project is just a collection of obsessions which I haven't yet managed to glue
together. I suspect something will need to get cut before I'm able to finish it.
I just received an email from Jef Armstrong, whose game Mondrian
was recently released. Jef emailed me last year, shortly after Ancient Frog was
released, saying words to the effect that he liked my puzzle game, he was
writing his own puzzle game, so let's talk about puzzle games. Flicking back
through the emails, there's some blogworthy stuff about how I created the
levels in Ancient Frog:
Creating the puzzles was one of the big challenges with Ancient Frog. I've
been thinking about writing a blog article about it, because it's something I
personally find fascinating. A large proportion of all the code written for
Frog (possibly the majority - I should check) went into the tools for creating
the puzzles. However, I'm a bit wary about talking too much about it in public
because I suspect that some people would be disappointed to learn that it's not
all lovingly hand crafted by some sort of puzzle making master craftsman.
Seems a bit silly now. Let's hear the details!
The most important part of the toolset is the solver. This has two levels -
a quick pass to determine whether a particular layout has any solution at all,
and a slow pass to exhaustively determine the optimum solution. (In this
context, 'quick' means anything up to 15 seconds, and 'slow' can be up to 6
hours). There's nothing clever about the way it works. It uses brute force and
a fast computer to walk down the tree of possible moves. There's some
interesting optimisation, and stuff like exploiting symmetry, early-out, loop
detection and so on, but essentially it uses the approach that computer
scientists say is impossible. But that's because it's their job to come up with
general solutions which work for the worst cases in the biggest data sets. I'm
able to limit the problem space to something which is achievable by brute
force, and just skip puzzles which are taking too long to check.
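Stripped of the symmetry exploitation and early-out, the skeleton of such a solver is just a search over the move tree with loop detection. A sketch - the state representation is whatever encodes the frog's leg positions, abstracted here behind two callbacks:

```python
from collections import deque

def optimum_solution(start, legal_moves, is_solved):
    """Brute-force breadth-first walk of the tree of possible moves.

    `legal_moves(state)` yields (move, next_state) pairs; `is_solved`
    tests a state. Breadth-first order plus the visited set (the loop
    detection) means the first solution found is also the shortest.
    """
    visited = {start}
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if is_solved(state):
            return path
        for move, nxt in legal_moves(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [move]))
    return None  # no solution exists - the quick pass's answer
```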
So the process for generating a puzzle goes like this:
* I choose a start point and an end point, and a set of pegs from which it
can choose its route (this lets me put a particular move into a puzzle, but
often I'll just leave it to choose from all pegs)
* I press the magic button
* It goes through a loop where it removes a random peg and tests whether a
solution exists. If there is no solution, it replaces the peg and flags it as
required. This process is repeated until no peg can be removed without creating
an impossible puzzle.
* It checks the result against all the previously generated puzzles to
ensure it's not a duplicate (taking into account simple translation, rotation
and reflection)
* If I like the look of the result, I press magic button #2 and generate the
optimum solution for it
* I quickly step through the solution to see if I like the look of it. I
tend to reject puzzles that require too much walking backwards, or that push
the frog's head off the side of the screen or have the legs passing over the body.
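The peg-removal loop in the middle of that list is simple enough to sketch. Here `has_solution` stands in for the quick solver pass, and a peg is whatever identifies a board position:

```python
import random

def minimise_pegs(pegs, has_solution):
    """Remove random pegs until every remaining peg is required.

    A peg whose removal makes the puzzle unsolvable goes back on the
    board and is flagged as required; the loop ends when no peg can be
    removed without creating an impossible puzzle.
    """
    pegs = set(pegs)
    required = set()
    while pegs - required:                        # pegs not yet proven necessary
        peg = random.choice(sorted(pegs - required))
        pegs.remove(peg)
        if has_solution(pegs):
            continue                              # still solvable: peg stays out
        pegs.add(peg)                             # unsolvable: put it back...
        required.add(peg)                         # ...and never try it again
    return pegs
```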
I like to generate a whole bunch in one sitting, then try to put them in
order based on how difficult I think they look, and how difficult the solver
reckoned they were (that's a bit of code that needs some work by the way - some
of my rankings are quite a bit off). Then I leave them for a day or so while I
get on with other stuff, so I can play them through without knowing how they're
solved. In my playthrough, I log how long it takes me to solve, how many dead
ends I go down, how many undos etc, all of which information also goes into the
difficulty ranking. Each time I play through, I'll generally shuffle levels
around a bit to improve the progression, but I also like having runs of very
similar layouts with increasingly difficult solutions.
The tools also include stuff for applying backgrounds, frog species,
fine-tuning of board position, rotation & translation of the whole puzzle
and so on.
I'm currently experiencing a problem that's so common among game developers,
and so rarely talked about. My new game has the wrong name.
To be clear, this is a very early prototype I'm talking about, and it's just
the name of the project - its directory on the disk and a handful of its files
- that's wrong. I don't even know what title it's going to have when it's
released. But once you've named your project, it's a lot of work to completely
rename it. For one thing, the locations of all the files will change, which
will cause you all sorts of problems if you're using version control software
(and if you're not, there's all sorts of other problems waiting for you right
around the corner).
If you've chosen a name that's a perfect fit for your game, and your game
changes, then every time you see it you find it a little bit jarring. It's not
a big problem, but it's a constant niggle.
The brilliant solution I've come up with is to start using completely random
names for my projects. I've taken a leaf out of the 1950s British military's
book, and decided to use Rainbow Codes. A project is now named using a colour and a
noun. And to make my life easier, I've created a web page to
choose the colour and noun at random.
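A generator along those lines is a few lines of Python. The word lists below are placeholders of my own invention - I've no idea what vocabulary the real page draws from, beyond the colour-plus-noun rule:

```python
import random

# Placeholder word lists - the real page's vocabulary is unknown to me.
COLOURS = ["Blue", "Violet", "Green", "Crimson", "Amber", "Scarlet"]
NOUNS = ["Comb", "Boy", "Pencil", "Anchor", "Danube", "Lantern"]

def rainbow_code(rng=random):
    """Pick a project name in the 1950s Rainbow Code style: colour + noun."""
    return f"{rng.choice(COLOURS)} {rng.choice(NOUNS)}"
```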
The problem I have now, of course, is that it's too convenient. I find
myself repeatedly hitting refresh, looking for the 'random' combination that
best suits what I have in mind. Does this game feel like Blue Boy, or
is it more of a Violet Pencil? I'll just keep going until I find the
perfect fit. And when I have a mid-project change of direction, well, it's not
going to be such a good fit any more.