
Promotional music video that formed the introduction to an hour-long speed run clip showcase exhibited at the Penny Arcade Expo (PAX) in August 2007.



Contact: sdapromo at googlemail dot com

This video (in a slightly modified guise) formed the introduction to the Speed Demos Archive speed run showcase exhibited at the Penny Arcade Expo (PAX) in 2007. It was built as a promotional tool, but it is also a standalone piece of art with which I'm rather pleased. It went from conception to completion in roughly six months, and was created entirely on cheap, widely available consumer PC hardware, using free tools and roughly 6,000 lines of code. There was one piece of specialised equipment that was key to its completion — a blue ballpoint pen.

Music, programming, physics, modelling, rendering and 'final word' direction were the responsibility of Alastair 'DJ Grenola' Campbell, whose words you are now reading, but many creative aspects, as well as some more pragmatic ones, are attributable to others. Nathan Jahnke conceived the original idea for the video, and so it simply would never have existed without him; he was also responsible for providing over one thousand four-minute source clips totalling hundreds of gigabytes for me to use as my source material and for providing helpful input throughout. Another key figure was Sam 'SABERinBLUE' Bennighof, whose level-headed creative advice was continually invaluable. I need to thank my friend of many years, Peter Dodd, whose ingenious mathematical treatment of the physical behaviour of falling domino chains enabled me to code and use an idea I really liked for the final section of the video. Finally, my sincerest gratitude must be extended to all the speed runners whose content was used as the raw material for this production.

<nate> one of the images i keep seeing in my head for a possible new sda promo video is something like a rippling thing
<nate> and on it is a grid of videos
<nate> all moving

<nate> so yeah let me know if you want to run a test
<DJGrenola> i may be tempted at some point :P


Development began when Nate approached me with an idea for creating a promotional music video, with the intent of tiling a large number of simultaneously playing speed run videos onto a waving flag. The concept had its roots in old home computer "demoscene" productions, in which the rippling flag motif was a widely known and much-used graphical technique. Never being one to resist an intriguing programming problem with a potentially pretty outcome, I agreed that it was worthy of some experiments, and I set to work prototyping Nate's idea using my favourite free 3D rendering tool (and now one of my most-loved pieces of software), the Persistence of Vision Ray Tracer (POV-Ray), v3.5. I found it appropriate that the software I was going to use for these experiments had originally been born around the same time as the demos from which our inspiration was drawn; indeed the demoscene theme was one that recurred throughout the production process. Tests began on the very last day of 2006.

Readers unfamiliar with POV-Ray should realise that it was never originally intended to be used to produce moving pictures. Versions of this software from the 386 era would often take many days to render even quite simple scenes, and the very idea of producing even short pieces of video in this fashion was completely prohibited by lack of CPU power. As Moore's Law continued to work its magic, this situation improved and some rudimentary animation support was added to the software, but making moving pictures work well still requires a fair degree of programmer voodoo. Additionally, POV-Ray has no simple user interface and no internal modeller, so everything had to be modelled painstakingly as constructive solid geometry and entered as source code. I got through a lot of graph paper putting this project together, and many of the objects in the video are created using procedural generation techniques because of the lack of point-and-click modelling. In all honesty, though, I love working this way, and it was very much in keeping with the demoscene paradigm.
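
To give a flavour of what "modelling as source code" means in practice, here is a minimal CSG scene in POV-Ray 3.5's scene description language; the shapes and values are invented for illustration, not taken from the zwei source:

    #version 3.5;

    // A slab with a cylindrical groove: one solid subtracted from
    // another, described entirely as text rather than clicked together.
    difference {
      box { <-2, 0, -1>, <2, 0.5, 1> }                 // the base slab
      cylinder { <0, 0.5, -1.1>, <0, 0.5, 1.1>, 0.3 }  // groove cut along z
      pigment { color rgb <0.7, 0.7, 0.8> }
    }

    camera { location <3, 3, -5> look_at <0, 0, 0> }
    light_source { <10, 10, -10> color rgb 1 }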

The rippling flag was prototyped and proved to work within a few hours, firstly with a static photograph and then with a video clip. The wave effect was produced by Fourier modes that I just made up on the spot. These early results were pretty compelling, so Nate sent over another video made up of a 4x4 grid of Sonic the Hedgehog 3 clips, and I did some more successful tests with that. Somewhere around this point, the project, which was starting to look distinctly plausible, acquired its codename of 'zwei', German for 'two', in reference to the fact that this was going to be the second SDA promotional music video. ('Eins', its predecessor, was made for EGM by Nate in 2004).
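
For the curious, the ripple boiled down to a little function of position and frame number built from a handful of sine waves travelling across the flag; the modes and amplitudes below are invented stand-ins rather than the originals:

    // A made-up sum of Fourier modes; each point of the flag is
    // displaced in z by sine waves that drift as frame_number advances.
    #declare FlagRipple = function (X, Y, T) {
        0.20 * sin(2*X + 0.15*T)
      + 0.10 * sin(5*X + Y + 0.23*T)
      + 0.05 * sin(9*X - 2*Y + 0.31*T)
    }

    // e.g. the z displacement of a point at (0.5, 0.25) on this frame:
    #declare DisplacementZ = FlagRipple(0.5, 0.25, frame_number);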

It was now starting to become obvious that I was woefully underequipped to put together anything of any length. POV-Ray allows the user to place a camera at a particular point and tell it where to look, but there was no way to make the camera move smoothly about this brave new virtual world or change what it was pointing at in a realistic way. Additionally, a means would need to be found to preserve animation parameters from one frame to the next, as the fact that POV-Ray was designed to render single scenes meant that it forgot everything with each new frame. The test code was cleaned up and moved from a single file into an organised source tree. I wrote libraries that would accomplish inter-frame persistence of variables, allow me to translate, roll, and retarget the camera in smooth, inertial, Newtonian ways, and also allow me to synchronise events in the video to corresponding occurrences in the music. Oh, wait ... we didn't have any music yet.
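
The persistence library boiled down to POV-Ray's #fopen/#read/#write directives; a minimal sketch of the idea (file layout and variable names invented for illustration) looks something like this:

    // Each frame reads the state the previous frame saved, advances it,
    // and writes its own state file, so a render can be resumed anywhere.
    #declare StateIn = concat("state/cam_", str(frame_number - 1, -6, 0), ".txt");

    #if (file_exists(StateIn))
      #fopen FIn StateIn read
      #read (FIn, CamPos, CamVel)     // vectors written by the previous frame
      #fclose FIn
    #else
      #declare CamPos = <0, 2, -10>;  // initial conditions on frame one
      #declare CamVel = <0, 0, 0>;
    #end

    // ... accelerate CamVel and integrate CamPos for this frame here ...

    #fopen FOut concat("state/cam_", str(frame_number, -6, 0), ".txt") write
    #write (FOut, CamPos, ",", CamVel)
    #fclose FOut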

Nate and I tossed around various suggestions for the music. He was very keen on using an old tracked piece of music called Scirreal; indeed this was the music he'd been listening to when he'd conceived his original flag idea. Even though I liked the idea of using tracked stuff, I had reservations about this choice; it was a little too long and the ending was somewhat anticlimactic. Most importantly, we couldn't secure permission to use it. We listened to various other tracked pieces; I spent a day at modarchive.com (now modarchive.org) downloading S3M and IT files, and my inbox tells me that I issued a request to some friends of mine who are into music production on the fourteenth of January, two weeks after the project was born. Responses were unsatisfactory — nobody was producing the sound I was looking for. I was going to have to do it myself.

<nate> never underestimate dj grenola

We were after something with some real intensity, and in hindsight, I think I overdid that — the tempo I chose was 145 beats per minute, and now I think 140 would have been more effective. I wanted a somewhat psychedelic feel, so I picked sounds that have inspired two generations of lysergic acid enthusiasts — the electric guitar and Roland's TB-303 synthesiser. The piece was arranged in ModPlug Tracker, and I used Propellerhead's ReBirth software to emulate the 303 sound, importing the resulting loops into the tracker as samples. Other samples were garnered from various places; the 'beep' sound that appears somewhere around the 1:20 mark is just a windowed sine wave.

The harmonic minor scales from which the guitar parts are constructed were inspired loosely by Bach's Toccata and Fugue in D minor. They were played on a Yamaha Pacifica 512 electric through a Zoom 1010 pedal set to provide distortion and EQ, and fed directly into the input. The chopped, rhythmical guitar effect at around 2:30 was accomplished simply by toggling the volume on and off in the tracker. Since every other note is a tonic (played as open B and E strings — thanks again, Bach), I was able to use a stereo delay effect to allow the guitar to harmonise with itself, a really rich sound that I've always loved. The bass was originally composed in the tracker, but it sounded so reedy that I ended up buying a cheap electric bass guitar (a Washburn T-14); a new bass line was then overdubbed live, which sounded about five times better. Tracks were exported groupwise as WAV files, effects and EQ were added, and then the whole lot was mixed down using Audacity.

<DJGrenola> so the actual tempo of this thing is 145.092465726 bpm
<DJGrenola> how retarded


I wrote the music in something of a cavalier manner without any real consideration of what corresponding events might occur in the video — the intent was just to fit the video around the music. This worked out okay — in a way it made things easier because it effectively made some of my creative decisions for me — but if I had my time again I might prefer to plan the video out loosely first and then approach the music production afterwards.

With the camera implementation and music 'complete' (in reality, both continued to be tweaked throughout the life of the project), I had all the tools I needed to start work, but I was still missing the source material — speed run clips. Several significant decisions were taken somewhere around this point. Bombshell number one was Nate's pushing for something far more ambitious than the 4x4 video grid I'd been testing with — I think I actually laughed out loud when he suggested a 128x128 grid (that's over sixteen thousand video clips playing simultaneously); he experimented with exporting various large tiled flag video configurations from AviSynth and VirtualDub as source material for me to use, and ran out of RAM a lot. I suggested that rather than bothering to arrange the videos in a tiled formation beforehand, the videos should just be exported individually and I could tile them myself in the raytracer, making the flag up from hundreds of individually textured objects rather than just one huge one. I didn't know how well this would work, but I tested it and it seemed to offer no real performance hit when compared to the prearranged case. It also afforded me flexibility, which would become significant later on. In theory this would allow me to create video grids of any size, but Nate's uncompromising drive for something that looked as impossible as it actually was and my uncompromising drive for actually finishing the project sometime this decade met at a midpoint of 1,024 simultaneously playing videos on a 32x32 grid, which became the final configuration. Even at this size, I told him that he was off his rocker, but after some more testing I conceded grudgingly that it could probably be done.
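
In the raytracer, the tiling itself was nothing more exotic than two nested loops placing thin boxes, each fetching its own per-frame PNG; here is a sketch of the idea, with the directory layout and dimensions invented for illustration:

    // A 32x32 grid of individually textured tiles; each box fetches its
    // own extracted frame via a filename built from row, column and frame.
    #declare Row = 0;
    #while (Row < 32)
      #declare Col = 0;
      #while (Col < 32)
        box {
          <0, 0, 0>, <1, 1, 0.01>
          pigment {
            image_map {
              png concat("tex/r", str(Row, -2, 0), "_c", str(Col, -2, 0),
                         "/", str(frame_number, -6, 0), ".png")
              once interpolate 2
            }
          }
          finish { ambient 1 }               // emissive look, no shading
          translate <Col - 16, Row - 16, 0>  // centre the grid on the origin
        }
        #declare Col = Col + 1;
      #end
      #declare Row = Row + 1;
    #end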

<nate> haha remember when i wanted to do 128x128

Another thing we debated at this point was how to get the videos into the raytracer. POV-Ray can load image files and use them as textures, so I'd need to extract 1,024 image files for each frame of output video which would then be loaded by the POV-Ray code I'd written. A four-minute video would therefore need to be built from something like fifteen million source image files! Assuming I used a lossless picture format such as PNG, this corresponded to at the very least a terabyte of storage, which I didn't plan on acquiring. I wanted to use JPEG instead, which ought to have made it possible to fit the whole lot on one large hard drive, but Nate was very keen on keeping all the videos lossless. I hadn't wanted to complicate things further, but after some more tests I was seduced and elected to use PNG; the video would now be rendered in discrete chunks rather than all in one go, and the texture PNG files would be extracted from the source videos in manageable pieces that corresponded to phrases in the music. With 20 chunks in the full video, each chunk's extracted textures could now fit into 100 or so gigabytes.

Another paradigm shift that transpired was the idea of texturing videos onto other objects, made possible by the abandonment of pre-tiled videos. Various ideas were flying around at this point; some were rejected on practical grounds, some on creative grounds, and some simply for being crap.

The final significant event was that we were contacted by the Penny Arcade Expo coordinators about the speed run showcase that they wanted to show at PAX 2007. In the resulting topic on the SDA forums there were suggestions that we should start the thing with a music video that showed as many runs from the SDA back catalogue as possible. Cue smug grin. The PAX video was going to be shown on 4:3 aspect ratio projection equipment, which was one of the major reasons I chose not to render it in 16:9 widescreen (some of the early tests had been done at 16:9).

<SABERinBLUE> "widescreen is cooler" is not a reason

Nate used AviSynth to produce over one thousand four-minute clips from SDA's terabyte-sized speed run back catalogue and split them across one 250 gigabyte hard drive and seventeen recordable DVDs. Since at the time SDA had runs on something like 320 games, and some of these runs were unsuitable for inclusion (usually because they were individual-level runs that weren't long enough), it was necessary to take multiple clips from single runs in some cases. There is, however, no repetition of footage anywhere in this video. The total source material is about 70 hours in length. Each clip was resized to 320x240, its frame rate was converted to 59.94 fps, and it was encoded in one pass as lossless H.264. In combination with the use of PNG files as the carrier format from clip to raytracer, this means that we succeeded in achieving completely lossless video transfer.

We decided after considerable deliberation to render the final product as NTSC progressive (59.94 fps and 640x480) at 4:3 aspect ratio. We discussed making an HD version, but I had to put my foot down at this point — I was already not much liking some of my projected render times.

While the hard drive and DVDs were winging their way from sunny North America to rainy England in the cargo hold of an aircraft, I compiled what was to become my main reference source for the project — a text file containing timings and frame numbers for the different chunks. A plan for the first two thirds of the video was developed, so I knew what was going to happen in each chunk and could get a feel for the pacing. Nate proposed the idea of texturing videos onto the walls of a maze, a gaming-themed brainwave which I really liked, so I planned to do this after the flag had been explored for a while. There was still no consensus on what we should use for the final third of the video, but I decided not to worry about that yet. There was plenty to be done in the meantime.

The initial flag section was basically completed before the textures arrived; prototypes were done using placeholder "checkerboard" textures and rendered at 320x240 or 160x120 resolution (a method I continued to use for the rest of the video). The text in the opening credits has a refractive index, so if you look carefully at the high resolution versions you can see the textures behind the text bending through the glass as they move. The Metroid Prime video in the opening seconds is actually the only video in the flag which isn't an SDA run — because I needed the highest possible quality for the close zoom on the timer in the opening shot, I ran that section of the game myself and recorded using a DVD recorder at the highest available quality, then deinterlaced it using mencoder's mcdeint filter. This is one of only two videos in the flag that were imported at 640x480 rather than 320x240. As well as preserving the camera's position and velocity (both rotational and translational) and targets between frames, I had to write code to save the positions and velocities of the text in the credits as it spun into position and fell away. I typically had to repeat this tedious procedure for all moving objects in the video, which led ultimately to a directory containing a rather improbable 170 megabytes of text files; at least one was created for each frame of video, although keeping data for all frames (rather than overwriting the data for each new frame) allowed me to resume the render from any previous point, which was essential given the slow nature of production. The "Speed Demos Archive" text was created by routines that would allow me to plot ASCII art into an array of strings in the source code, and have the raytracer produce the goods by parsing the array; this effectively allowed me just to draw that particular piece of text exactly as I wanted it.
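
The plotting idea can be sketched like this (a toy glyph stands in for the real artwork):

    // Wherever the string array contains a '#', drop a unit cube, so a
    // piece of text can be drawn by hand directly in the source code.
    #declare Art = array[3] {
      "#   #",
      "#####",
      "#   #"
    }

    #declare R = 0;
    #while (R < 3)
      #declare C = 0;
      #while (C < strlen(Art[R]))
        #if (strcmp(substr(Art[R], C + 1, 1), "#") = 0)
          box {
            <0, 0, 0>, <0.95, 0.95, 1>      // slight gap between cubes
            translate <C, -R, 0>            // rows run downwards
            pigment { color rgb <0.9, 0.9, 0.2> }
          }
        #end
        #declare C = C + 1;
      #end
      #declare R = R + 1;
    #end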

My original Fourier hack to make the flag ripple was replaced by one of POV-Ray's internal 3D noise functions, which was faster and produced what I felt was a much more pleasing effect — it became known both inside and outside the source code as "wibble". Despite the slight speed improvement offered by the new wave generator, this effect was still infamous for slowing down the render time by an order of magnitude, so I had to use it sparingly. Making the floor reflective also slowed things down somewhat, but I liked the polished stone effect so much that I just had to grin and bear it. The flag scene was lit by a single invisible spotlight.
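
In spirit, "wibble" was the old ripple function with the sine modes swapped for a call to POV-Ray's internal noise; a sketch with invented constants:

    #include "functions.inc"   // provides f_noise3d()

    // Displace each tile in z by 3D noise, feeding frame_number in as
    // the third coordinate so the ripple evolves smoothly over time.
    #declare WibbleAmp   = 1.5;
    #declare WibbleScale = 0.12;
    #declare WibbleSpeed = 0.02;

    #declare Wibble = function (X, Y, T) {
      WibbleAmp * (f_noise3d(X * WibbleScale, Y * WibbleScale, T * WibbleSpeed) - 0.5)
    }

    // Per-tile use, e.g. inside the 32x32 loops shown earlier:
    //   translate <Col - 16, Row - 16, Wibble(Col, Row, frame_number)>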

<nate> why do we need a floor again
<DJGrenola> because it looks awesome


Nate wanted me to use an idea of his later on in the video, namely manipulating the clips so that they played a game of Tetris. I resisted this fiercely for reasons both creative and technical, but Saber suggested that I could make the pieces of the flag fly into the air in Tetris-shaped clumps, rather than individually (as we'd originally planned). Saber's idea did get used — watch the flag deconstruction carefully! I considered doing this algorithmically, i.e. by writing code that would chop up the flag into Tetris pieces automatically; this is not as simple as it appears at first (owing to the way pieces get stuck underneath other ones), and in the end I divided the flag up into Tetris blocks on graph paper and coded the configuration manually. The pieces accelerate upwards under a constant force, which required more tedious storage of inter-frame data in text files.
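
The launch itself is plain constant-acceleration kinematics; a sketch with invented numbers (the real code additionally saved each clump's state to the text files mentioned above):

    // One clump's lift as a function of frames since its trigger;
    // s = a*t^2/2 from rest under a constant upward force.
    #declare Accel       = 0.004;   // units per frame^2 (invented)
    #declare LaunchFrame = 120;     // frame at which this clump goes (invented)

    #declare T = frame_number - LaunchFrame;
    #if (T > 0)
      #declare PieceLift = 0.5 * Accel * T * T;
    #else
      #declare PieceLift = 0;
    #end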

<DJGrenola> textures turned up this morning

The textures arrived from the 'States on February 19th. This was a very exciting moment, because it meant the three of us would be able to see the flesh and blood of our vision for the first time — until now, nobody really had any clue how this was actually going to look. Although Nate had created more clips than I actually needed, some were unsuitable; they had broken aspect ratios, or were too short. I deleted what I couldn't use and took the opportunity to bolster the clip selection by downloading some of SDA's newer content that had missed the deadline for clip creation. Kari Johnson's Dr. Mario 64 runs and Adam Sweeney's Trauma Center: Second Opinion video, among others, were added to the melting pot at this point. The Trauma Center content was the other thing that I eventually chose to feed to the raytracer in full resolution (640x480). Broken clips were re-downloaded, re-selected and re-encoded, probably using mencoder. I also created a few clips specifically for the "close-up" section of the flag, timing them carefully so that something interesting would be going on when the camera got to them. I also picked eight good-looking clips to surround my Metroid Prime video for the initial zoomout, although these were changed several times during "all-up" testing over the next couple of weeks.

Nate had originally suggested that the videos should be placed randomly on the flag. I wasn't satisfied with that at all. Some clips were clearly better than others, and I skimmed through all 1,024 videos making terse notes about their content and how effective I felt they would be in the final product. An OpenOffice spreadsheet was created and the videos were arranged within it. I designated an area in the centre of this virtual flag the "core", a 300-clip region in which I wanted no game to appear twice, and in which all videos had to be average quality or better. Poor quality or overly dark clips (i.e. anything starting with "SilentHill") were pushed to the speed run graveyard at the very edges of the flag. This process was tedious and horrible compared to using a random number generator and "letting god sort them out", but I felt it was work worth doing. Outside the core, I also tried to avoid bloopers such as putting two videos from the same game next to one another. The positions of the videos in the spreadsheet had to be tweaked intensively before I was satisfied with the opening scenes; my IRC logs indicate that I was still mucking about with them much later.

The hard drive Nate had sent me was installed in my fileserver, an Athlon 1300 running Debian GNU/Linux 3.1. All seventeen DVDs, plus the new clips I'd made, were copied onto one of the drives already in there, a 120 gigabyte SATA device. For each of the video's twenty chunks a shell script was written that invoked the ever-wonderful ffmpeg to extract something like 700,000 PNG files from the source clips. My OpenOffice spreadsheet was exported as a text file and used to create automatically a cunning web of UNIX symbolic links that would map the clips into their correct positions in the flag. The scripts were tested carefully to ensure that the extraction was correct across chunk boundaries — I wasn't going to leave to chance the possibility of frames being duplicated or missing when one chunk became the next, which would have led to an ugly glitch in the final product.

Two cheap 250 gigabyte SATA hard drives were purchased, which became known as "shuttle drives". One of them, glutted with two chunks' worth of texture PNGs (between 150 and 200 gigabytes), would sit in my main desktop machine — a single-core Athlon 64 3200+ running 64-bit Ubuntu GNU/Linux. While POV-Ray was grinding its way through rendering a pair of chunks on this machine, the other disk would be in my fileserver, which would crunch through the extraction of another million and a half PNG files from the H.264 masters, a process which took roughly eleven hours for each chunk (or twenty-two for a pair). Its CPU fan failed at one point, which was a slightly alarming moment (I have terrible luck with fans). POV-Ray's output was produced as PNG files and saved to my system drive on the 64-bit machine. In this way, zwei's vitals were spread precariously across four hard drives; I spent a lot of time expecting one of them to fail, but for once in my life, my luck held. Once both rendering and extraction had completed, both desktop and fileserver machines were shut down and the "shuttle drives" were swapped over for the next chunk pair.

The performance of the raytracer wasn't half bad. In sections where the rippling flag was not used, the main bottleneck was getting all the texture PNGs off disk and into RAM. I forget the exact amount of RAM that needed to be allocated for each frame, but it was of the order of a few hundred megabytes. This was acceptable on a system with 1 gigabyte of memory, allowing me to continue using it even during a 'trace. Because the thing was put together in the bit-part fashion outlined above, and because many chunks were aborted half way through, tweaked, and then resumed, it is difficult to know exactly how long the final render took. All told, it was probably about two weeks' worth of solid CPU time, far better than some of my original estimates. The chunks in which the flag was allowed to ripple were CPU-bound, and took several days of rendering each to complete. Mistakes made in these chunks were therefore very costly; I remember having to redo at least one of them. Thanks to PAX, I was now fighting against a deadline.

<DJGrenola> who can spot what's wrong with this code
<DJGrenola> #declare TITLE_DISPS [I]=VEL;
<DJGrenola> #declare TITLE_VELS [I]=DISP;
* SABERinBLUE raises hand


On the twenty-third of April we spent some time in awe over the first "all-up" output, but more tweaks to video positions were required. I spent the next week experimenting with swapping video clips into different places and doing full renders of the early scenes; while I was waiting for these to run I tested out various maze ideas with dummy textures. As with the title text earlier, I wrote code that would allow me to plot the maze's layout directly into the source code. Parsed by my routines, this ASCII art would take three-dimensional form automatically. A lot of thought and experimentation went into how to dimension the maze and lay it out, but I think the second maze I designed was the one used in the final product.

The maze is composed entirely from videos taken from the "core" of the spreadsheet I mentioned before. My beloved reflective floor came back to bite me when I realised that it was going to make dropping the pieces into position from above very difficult; it would be hard to conceal the fact that they were just popping into existence above the landscape. Coupled with the fact that I wanted the drops to be synchronised loosely to the "beeps" in the music at this point, this was an awkward and fiddly scene to set up. I should mention here that I had no direct control over the path of the camera. The libraries I'd written merely allowed me to tell the camera to get to a certain place at a certain time; the actual path taken was the responsibility of Sir Isaac Newton, so this was the first of many instances when I had to experiment painstakingly with different endpoints and timings to get the effects I wanted. If I had my time again, I'd almost certainly have added ways to make the camera smoothly approach and land on paths defined as functions of x, y and z, or as quadratic or cubic splines. Indeed, on one occasion later on in the video, I was forced to hack together a case where the camera's target follows a circular path. This was a special case rather than a general one, though.
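
This is emphatically not the real camera library, but its flavour can be sketched as a damped point mass pulled towards a goal each frame, with the constants invented for illustration:

    // CamPos and CamVel would really be restored from the previous
    // frame's state file; defaults stand in here so the sketch parses.
    #ifndef (CamPos) #declare CamPos = <0, 2, -10>; #end
    #ifndef (CamVel) #declare CamVel = <0, 0, 0>; #end

    #declare Goal      = <12, 4, 30>;  // where the camera has been told to go
    #declare Stiffness = 0.01;         // strength of the pull (invented)
    #declare Damping   = 0.90;         // velocity kept per frame (invented)

    #declare CamVel = (CamVel + (Goal - CamPos) * Stiffness) * Damping;
    #declare CamPos = CamPos + CamVel;

    camera {
      location CamPos
      look_at <0, 0, 0>   // the target was smoothed the same way
    }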

During assembly the maze is lit by four spotlights, one above each of its four corners. I discussed with Saber a number of different ways in which the pieces could drop into position, and I needed to come up with a scheme whereby the total number of pieces dropped equalled the total number in the maze. I quickly decided a tapered approach would be the most visually effective, with single pieces at the beginning, single pieces at the end, and a hailstorm somewhere in between. This wasn't hugely difficult once all the bugs had been worked out.
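
The taper can be sketched as a shaped count of drops per beat (numbers invented; the real scheme also had to make the counts sum exactly to the number of pieces in the maze):

    // One piece per beat at the edges of the section, a hailstorm in
    // the middle: scale the drop count with a sine hump over the beats.
    #declare NumBeats  = 32;   // beats in the assembly section (invented)
    #declare PeakDrops = 18;   // drops per beat at the peak (invented)
    #declare Beat      = 7;    // current beat index within the section

    #declare DropsThisBeat = max(1, int(PeakDrops * sin(pi * Beat / NumBeats)));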

I wanted the maze navigation to look smooth and organic. We debated various means by which the camera could move. A "bobbing" motion reminiscent of first person shooter games was discussed, and having the camera bank round corners as if it were an aeroplane also saw the light of day, but it just seemed like far too much work to make these modes look convincing. Because I couldn't directly prescribe the path of the camera (only the endpoints of its movement), the maze became a frustrating exercise in trial and error, with the viewer being flung through walls left and right. The "false start", where the camera takes a path leading to a dead end, was inserted to show off more of the videos; it was also intended to waste time, forcing the camera to reach the centre of the maze at the correct point in the music. As the camera glides into the maze, the four external spotlights are dimmed and switched off and the scene is lit by a single spotlight strapped to the camera.

Saber came up with the inspired wheeze of having ghosts in the maze, and I decided that I'd add in the yellow glutton himself to add a touch of comedy to proceedings. The ghost was modelled first. While this is of course in direct violation of canon, I named him Boris. Again, he was built using POV-Ray's constructive solid geometry; the most difficult part here was getting the "wavy" effect at his base right. I accomplished this by scribbling extensively on graph paper with my blue ballpoint pen, using a lot of union and difference blocks and a big pile of cylinders; for some reason the square root of two featured a lot as well. I don't know how visible it is on the final encodes, but there is actually a very tiny glitch in his "skirt"; I never figured out what caused this — possibly rounding errors, but it was so insignificant it really didn't matter. The gag of having the ghost first advancing and then retreating was my idea, although in its original form the viewer would have seen a red ghost doing the chasing on the way into the maze and a blue ghost being chased on the way out. Having the eyebrows reverse as the ghost changes colour was another simple but effective touch. Making him simultaneously reflective and translucent gave him a slightly ethereal look, and I made his main body rotate so that the "skirt" was in motion; this was another really simple tweak which made him a more interesting, animated character. The flash in the sky was realised trivially by changing POV-Ray's scene background colour. One trick I missed with young Boris was that I could have given him a refractive index, causing the videos behind him to be warped as viewed through him, which would probably have looked really good. Oh well. You can't think of everything.

<DJGrenola> 1001 uses for the sin() function
<DJGrenola> #378
<DJGrenola> animating pacman's mouth


The Pac-Man model was much easier to do with just some CSG cylinders, bounding boxes and a prism used to cut out the hole for the mouth. The prism angle was varied as the sine of the frame number, which made his mouth open and close in a smooth, oscillatory fashion. (I abused the aesthetic properties of the sine and cosine functions several times in this video.) I thought that a cylindrical model for the little yellow guy was cooler than the spherical one seen in games like Pac-Mania. For a short time I mooted making the ghost and his antagonist accelerate and decelerate according to Newtonian physics, and I also pondered making them rotate smoothly rather than changing direction jerkily, but I decided I liked their old-school-styled manoeuvres, and of course this more basic motion was far simpler to implement. More annoying code to load and save the positions of both actors in this virtual maze also needed to be added, of course.
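
A sketch of the mouth cut, with the proportions and rates invented:

    #declare HalfAngle = radians(25 + 15 * sin(frame_number * 0.3));

    difference {
      cylinder { <0, -0.3, 0>, <0, 0.3, 0>, 1 }   // body, axis along y
      prism {                                     // mouth wedge, apex at centre
        linear_spline
        -0.35, 0.35, 4,
        <0, 0>,
        <2,  2 * tan(HalfAngle)>,
        <2, -2 * tan(HalfAngle)>,
        <0, 0>                                    // close the triangle
      }
      pigment { color rgb <1, 0.85, 0> }
    }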

<SABERinBLUE> you don't want to hear my center of maze ideas
<SABERinBLUE> because most of them involve giant robots


I was quite keen on having a short section in the video that could be used to explain briefly what speed running and Speed Demos Archive were all about; it seemed like a criminal waste of an opportunity not to, especially because we knew we'd be showing this at PAX. I introduced the idea of opening out a room in the centre of the maze with a wall that would display some SDA propaganda. Other ideas for passing information to the viewer were discussed, and I tested having text scroll across the flag while the videos played (another demoscene-esque inspiration), but settled on the maze post idea. I thought about showing something akin to the old cards used in silent movies to convey dialogue, and spent some time looking up clips from silent films on archive.org and YouTube. This idea was eventually abandoned on the grounds that it just wouldn't work compositionally, and I opted instead for a pseudo-computer terminal look. The "bootup sequence" and the banding effect were accomplished fairly quickly by using multiple pigment layers. The "snow" displayed at the start of the bootup sequence was created using POV-Ray's built-in bozo texture coupled with a color_map, and scaled appropriately; because bozo is infinite in all directions, making it move was simply a matter of offsetting the texture sideways by a suitably large multiple of the current frame number. Easy!
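
A sketch of the snow pigment, with the scales invented:

    // bozo through a hard black/white colour map, scaled down to
    // speckles, and shunted sideways each frame so that every frame
    // samples a fresh region of the infinite noise field.
    #declare Snow =
    pigment {
      bozo
      color_map {
        [0.5 color rgb 0]   // black below the threshold
        [0.5 color rgb 1]   // white above it
      }
      scale 0.004
      translate x * frame_number * 50
    }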

We played with a few fonts but I decided just to recycle the font I'd used in 3D for the initial titles; not only did this provide stylistic continuity, but it required no work. We discussed what information we wanted to convey, and the key points were finalised fairly quickly (although we squabbled for a while over the minutiae). This was one instance where the music caused problems; I'd have liked to have displayed the text (particularly the final panel) for longer, but this was sadly not possible. At any rate, I hoped that because so much was going on at once people would watch the video multiple times, so perhaps they'd pick up on it on repeat viewings. An object texture was used to get the text onto the post. I made the camera move slightly while the text was being displayed to make everything a little less static.

It became clear during planning that I was going to have to end the maze quickly so I could be in position for the final section of the video when the guitar came back in, and my options were basically to leave the maze unsolved, either by withdrawing it through the floor or by lifting the camera out of it, or just to traverse the rest of the maze at high speed. I preferred the latter for various reasons, and once more it was Saber's brain that produced the idea of jumping over a wall, a cheeky nod to the ubiquitous speed running practice of taking shortcuts unintended by game designers. This time, the camera was pretty simple because I no longer needed the elegant glides of the earlier maze section, although the jump required a little experimentation to get right. Most of this testing was done using dummy textures at 320x240 or 160x120 as before. The maze was coming together; now all we needed was a finale.

<DJGrenola> or lol how about a domino run

This flash of inspiration was typed into my IRC client on the twenty-second of March. At first it was little more than a joke, because I felt the chances of making it happen were slim indeed; this would take a far more comprehensive grasp of mechanics than I possessed. Nevertheless, I couldn't get the idea out of my head once I'd conceived it, and progress had been steady thus far; we still had months to go until PAX. There was time at least to figure out how plausible it might be. I called for reinforcements.

One of my longest-standing friends happened to be in town on the twenty-fifth of March — Peter Dodd, a mathematician with a Ph.D. in some quantum-physics-related discipline that I don't even pretend to understand. I put the problem to him when he came to see me, taking the opportunity to tease him with a few of the test renders. We discussed the physics involved and debated possible assumptions that could be made to simplify it. He was immediately enthusiastic and promised to go away and think seriously about modelling it. I was still dubious — not of his ability to produce a working mathematical treatment, because I knew he would — but rather of my ability to decipher and implement his results.

The first analysis arrived by fax on the thirty-first of March. Predictably, I found it pretty hard to understand, but a flow of emails back and forth helped with that. Meanwhile, as tests on the maze continued, I was at last able to satisfy myself that the clip layout for both the flag and the maze was satisfactory, and that I could sign off my spreadsheet. On the ninth of April, I started extracting and rendering the first two-thirds of the video for real. As chunks were completed, I tarred and bzipped the PNG files up — four gigabytes of them by the time the project was finished — and uploaded them to a server for Nate to grab, along with lossless copies of the music. He'd be doing the final master of the DVD for PAX.

Peter's working contained a mistake or two (although I'm fairly sure that this was just because he threw it together over his lunch break); even though these cost me time, I didn't mind too much because finding them afforded me a decidedly smug glow. I was now having to remember mathematics that my subconscious had long since repressed. By the nineteenth of April I had working test renders of single, straight domino chains. These looked really effective, but filling the final third of the video was going to take a little more. I asked whether a similar treatment could be produced to allow chains to flow round corners, and whether they could be made to split into two.
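
Peter's treatment is not reproduced here; what follows is only a crude kinematic stand-in showing how a falling domino can be realised as geometry, a box hinged about its bottom front edge with the fall angle clamped at its resting lean (all numbers invented):

    #declare DomW = 0.2;           // domino thickness
    #declare DomH = 2.0;           // domino height
    #declare Gap  = 0.8;           // spacing along the chain
    #declare TriggerFrame = 100;   // frame at which this one is struck

    // Resting lean against the next domino, crudely ignoring thickness.
    #declare RestAngle = degrees(asin(Gap / DomH));

    // Fake dynamics: the angle grows after the trigger and then clamps.
    #declare T = max(0, frame_number - TriggerFrame);
    #declare FallAngle = min(RestAngle, 0.2 * T * T);

    box {
      <-DomW/2, 0, -0.5>, <DomW/2, DomH, 0.5>
      translate <-DomW/2, 0, 0>   // put the hinge edge at the origin
      rotate -z * FallAngle       // tip over towards +x about that edge
      translate <DomW/2, 0, 0>    // and put it back
      pigment { color rgb 0.9 }
    }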

<DJGrenola> have to wait for him to reply to the six emails i've sent him overnight

While the mathematician considered this extension, I decided to prototype some fireworks, which was an idea I wanted to use in the finale. Fireworks are an inherently difficult thing to make in a raytracer, but I thought I'd just do some experiments and see how they looked. First I tried launching a few hundred spheres radially from an origin with random velocities within a range. To create the illusion of fire I positioned an invisible light source slightly below the origin, and specified no_shadow for the particles. This looked okay, but not hugely convincing, so I went to YouTube and looked up some footage of firework displays. The most significant point I noted from this piece of ghetto research was that fireworks all have "comet tails". I went back to my particles and changed their shape so that they now resembled ice cream cones, and added code to make their tails point in the opposite direction to their velocity vector. I made them fizzle out one by one in a staggered fashion and made their light sources decay after an initial 'burn' period (all the time adding code to the save-and-reload-data-between-frames routines). While the results don't actually look much like real fireworks, I nevertheless liked their stylised, geometrical appearance; for some reason they reminded me of Metroid Prime, and they were kept.
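
One particle of the effect can be sketched like this (values invented): a narrow cone whose tail is swung opposite the velocity vector with the standard Reorient_Trans macro from transforms.inc, plus a decaying light at the burst point:

    #include "transforms.inc"   // provides Reorient_Trans()

    #declare Age   = 25;                         // frames since the burst (invented)
    #declare PVel  = <0.6, 1.1, -0.3>;           // this particle's velocity
    #declare Burst = <0, 3, 0>;                  // burst origin
    #declare PPos  = Burst + PVel * Age * 0.05;  // crude linear drift (no gravity)

    cone {
      <0, 0, 0>, 0.05,   // head
      <0, -0.6, 0>, 0    // comet tail tapering to a point
      Reorient_Trans(-y, -vnormalize(PVel))    // tail against the velocity
      translate PPos
      pigment { color rgb <1, 0.6, 0.2> }
      finish { ambient 1 }
      no_shadow
    }

    light_source {
      Burst
      color rgb <1, 0.6, 0.2> * max(0, 1 - Age / 60)  // decays after the burst
    }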

Four days later, on the twenty-sixth of April, the rendering of the first two-thirds of the video was complete. I sent a copy of my finished work over to the PAX coordinator with whom we were liaising, and she really liked it, which was a definite plus. By this time Peter's second research paper had found its way to me via fax. Adding support for going round corners, this new model was a lot more complicated both to understand and to implement. As before, we thrashed out the problems with it via email, including one particularly hard-to-track-down mistake in the geometry which I eventually nailed by independently duplicating his working. According to my inbox, solving these problems and writing a generalised implementation that I could use to string multiple chains together with different offset angles took the best part of another month. Some shortcuts were taken in the process; I think the conservation of momentum is broken in certain instances, but it worked plenty well enough. At last, I had the final tool I needed to finish the project.

<SABERinBLUE> did you consider having these dominoes be made of two videos each?

I hadn't, but it seemed like a good idea; the only issue was that it would reduce the number of dominoes I had available from one thousand to five hundred. I was determined not to recycle any textures. There isn't a huge amount to be said about the domino section; it consisted mostly of experimenting with different layouts and seeing what worked. My library was based on the concept of a "chain", which was a single set of dominoes with a fixed curve angle. Chains could then be linked together into larger chains using a macro call. The library would position the child chain after its parent and handle transparently the flow of dominoes across the chain boundary. Lighting continued to be provided by a spotlight source strapped to the camera. By far the most awkward problem was manipulating the camera so that it tracked the cascade without getting too far behind or too far ahead of it, and making sure it was always looking in the right place. I would have liked to have had something physical that triggered the falling of the domino "packs", but I couldn't think of anything that would be particularly easy, and after months and months of work I was starting to get a little jaded. Before the long, straight section where the fireworks begin, the camera-bound light is dimmed to make the coloured flashes more striking.
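
A sketch of the chain idea (all names and numbers invented; the real library also drove each domino's fall angle from the physics): a macro lays out one chain from a start point and heading, so chains can be strung together by calling it repeatedly with different curve angles:

    #declare Gap = 0.8;   // face-to-face spacing (invented)

    #macro Domino (Pos, Heading)
      box {
        <-0.1, 0, -0.5>, <0.1, 2, 0.5>
        rotate y * Heading
        translate Pos
        pigment { color rgb 0.9 }
      }
    #end

    // Lay out NumDominoes from StartPos, turning by TurnPerDomino
    // degrees at each step.
    #macro Chain (StartPos, StartHeading, TurnPerDomino, NumDominoes)
      #local P = StartPos;
      #local H = StartHeading;
      #local I = 0;
      #while (I < NumDominoes)
        Domino(P, H)
        #local H = H + TurnPerDomino;
        #local P = P + vrotate(x * Gap, y * H);
        #local I = I + 1;
      #end
    #end

    // A straight chain followed by a curving one picking up where it ends:
    Chain(<0, 0, 0>, 0, 0, 20)
    Chain(<Gap * 20, 0, 0>, 0, 1.5, 40)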

We needed an ending, so I decided that the video should end the way it started — with the flag. I made the camera zoom up close to the last few dominoes, allowing me to pop the flag back into existence behind them. A difficulty here was that a small number of Nate's clips didn't run for the full four minutes I'd requested. I was still keen on avoiding repetition of footage, so I replaced them with non-core videos that had remained unused during the maze section. The final obstacle was rounding things off nicely with the site name, URL, a punchy slogan, and a definite endpoint by switching all the videos off — I felt that just cutting them off would have looked amateurish.

An alternative ending appeared on the version that was used on the PAX DVD, featuring a zoom to one flag cell that was displaying "snow". This allowed us to cut nicely into the main feature. I therefore had to develop two alternative routines for the final chunk and render it twice. The final chunk was uploaded to the distribution server for Nate to collect on the sixteenth of June 2007, and this marked the end of the project.

I have my own feelings about what went well and what went badly, what worked and what didn't. I'll probably keep them to myself and let people make up their own minds. I hope you like the video, though — the work was backbreaking.
