A speech by CTO Rob O’Leary at the HTML5 Developers Conference, San Francisco, October 2014

RobDev Blog

Hi all!

I’m Rob O’Leary, CTO and co-founder of Happy Landlord Games. We’re a small, independent game studio based in Stockholm, Sweden, and we’re developing a game called Adventure Box.

Now, I’m no salesman – I’m an engineer – so I’m not going to write too much about what a wonderful, engaging... *tasteful* product we’re building. Instead I’m going to talk about the technology behind the game – and specifically, about the blood, sweat and tears involved in developing a web-native voxel engine.

It sounds like a Tolstoy novel, eh. It’s not – voxel engines are awesome! : )

Let me first give you a brief overview of the nature of the game and the kinds of experiences our technology needs to deliver.

Adventure Box is a voxel-based role-playing game/city builder. It’s built on the Goo engine, using HTML5 technologies, and it takes place in procedurally generated voxel worlds, in the browser. These worlds change and develop over time based on the choices that players make. The setting is Lord of the Rings meets Mad Max. So it’s a doomed fantasy environment – with magic, steampunk machinery nobody understands any more, and – at least in the beginning – a sense of civilization being on the brink of collapse. When the game opens, you are one of the last survivors of an isolated village. Your village was once a great city, and it’s your task to fight back and rebuild.

So, we’re bringing together old-school RPG gameplay – with character development, exploring, loot, and so on – with the kinds of responsive, interactive environments made possible by voxel engines. And, we’re doing it in the browser – in JavaScript, WebGL, HTML5!

As the CTO, I designed the Adventure Box engine and I wrote almost all of its components. I’ve been an engineer since... well, since there’s been a web – and I love to talk about technology. So, I’m going to write a little bit about how I got into voxel engines – where this system came from – about what these kinds of systems mean for me – why we’re building it in HTML5 – and finally about what I see as the enormous potential for web-native voxel engines – or more generally – for web-native systems designed for the rendering and manipulation of massive datasets in volumetric spaces.

 

Last year I was experimenting with rendering datasets in volumes. For example, this system right here:

This system was motion controlled and – in this particular case – was loading data from the SEC – the Securities and Exchange Commission here in the US – for the purpose of generating interactive graphs of the ownership and directorship of publicly traded US companies. This wasn’t a typical voxel engine – at least in the rendering stage – but the layout systems used a kind of cellular automaton to generate emergent, readable structures – and the data loading and manipulation systems actually formed the core of the Adventure Box data management systems.
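To give a flavour of the cellular-automaton idea, here’s a toy majority-rule automaton on a wrapped grid: occupied cells that are surrounded tend to consolidate, isolated cells die off, and clusters emerge over repeated steps. This is purely an illustrative sketch – the actual SEC-graph layout code is not shown in this post, and the rule and grid here are my own stand-ins.

```javascript
// One step of a toy majority-rule cellular automaton on a toroidal grid.
// grid is a 2D array of 0/1 cells; returns the next generation.
function step(grid) {
  const h = grid.length, w = grid[0].length;
  const next = grid.map(row => row.slice());
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      // Count the 8 neighbours, wrapping around the grid edges.
      let n = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx || dy) n += grid[(y + dy + h) % h][(x + dx + w) % w];
        }
      }
      // Majority rule: crowded cells fill in, isolated cells empty out,
      // everything in between keeps its current state.
      next[y][x] = n >= 4 ? 1 : (n <= 1 ? 0 : grid[y][x]);
    }
  }
  return next;
}
```

Iterating `step` a few times over a noisy grid produces the kind of emergent clustering that makes a dense graph readable.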

Now, this is another data visualization system I wrote – this time running entirely on the GPU. Here we’re seeing some test data – coherent simplex noise – and that’s about 1 gigabyte of data. It’s not a massive dataset, but it is interactive, in real time, no problem. That data is relatively high-dimensional, but in the first part of the video we’re just seeing 3 dimensions.

In the second half of the video we have the same data visualized as changing in the volume over time – and that’s done by just interpreting one dimension as time and then incrementing the rendered time slice at a constant speed. So it’s a form of 4-dimensional raycasting in which rays collide with data only when the time dimension matches the current time slice.
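The time-slice idea above can be sketched in a few lines: fix one dimension of the dataset as “time”, and march rays through the remaining 3D volume, registering a hit only when the sampled cell is occupied in the current slice. The array layout, the unit-step ray march, and the toy dataset below are my own simplifications, not the production renderer (which runs on the GPU).

```javascript
// volume[t][z][y][x] -> 1 if a voxel is present in time slice t.
// March a ray from `origin` along `dir` in unit steps; a hit occurs only
// when the sampled cell is set in the *current* time slice.
function castRay(volume, time, origin, dir, maxSteps) {
  let [x, y, z] = origin;
  const slice = volume[time];
  for (let i = 0; i < maxSteps; i++) {
    const xi = Math.floor(x), yi = Math.floor(y), zi = Math.floor(z);
    if (slice && slice[zi] && slice[zi][yi] && slice[zi][yi][xi]) {
      return { hit: true, cell: [xi, yi, zi], steps: i };
    }
    x += dir[0]; y += dir[1]; z += dir[2];
  }
  return { hit: false };
}

// Toy dataset: two time slices of a 4x4x4 volume, empty except for one
// cell that exists only in slice 1.
const SIZE = 4;
const volume = Array.from({ length: 2 }, () =>
  Array.from({ length: SIZE }, () =>
    Array.from({ length: SIZE }, () => new Array(SIZE).fill(0))));
volume[1][2][2][2] = 1;
```

Casting the same ray at `time = 0` and `time = 1` then gives a miss and a hit respectively – animation falls out of just incrementing the slice index each frame.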

Data in volumes like this can be easily scaled and viewed at any resolution, with the dimensions of the data being interpreted in whatever way makes it most readable, and most useful for the purposes of perceiving patterns in the data – and that’s what volumetric models are really about – leveraging our own intuitive, visual understanding of spaces in order to understand, in real time, quantities of data that are unmanageable for us in other forms – in Excel documents, tables, lists, etc.

At the same time as experimenting with pure data visualization systems I also built a CPU-based voxel engine and a variety of procedural content generation systems – for producing landscapes, vegetation, architecture, and so on – and in this case I was experimenting with the translation of motion into volumetric spaces containing coherent data, for the purposes of control.

The over-arching theme, for me, is interaction with massive datasets via the mechanism of translating the dimensionality of data into intuitively understandable 3D spaces.

Now, the kinds of datasets we deal with in Adventure Box are intentionally coherent – they’re landscapes, vegetation, and so on. They’re also procedurally generated and consequently theoretically infinitely large. They’re high-dimensional – each voxel has a location in space and a type, but also a temperature, humidity, compression, and assorted other gameplay values. In fact, these datasets are composed of overlapping waves of 2- and 3-dimensional Perlin and simplex noise, interpreted in such a way as to produce recognizable worlds.

These kinds of datasets are exciting for me, because they’re implicitly infinite, they’re very complex, and exploring them is about streaming data in and out of the volume in real-time. This kind of application of volumetric rendering is also very performance-focused – and therefore a real challenge. You’re streaming huge quantities of data from source to browser, and browser to GPU. Often that data is being generated in real time. You’re interpreting input, processing post effects, updating world states, and keeping everything moving at 60 frames per second. You can push that down to 30 FPS on lower-end devices – but any less than that, and it’s game over, son.
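The streaming loop described above can be sketched very simply: each frame (or on player movement), load the chunks within some radius of the player and evict everything else. The chunk size, radius, and Map-based cache here are assumptions for illustration, not the engine’s actual design – in practice generation would be asynchronous so it never blocks the frame.

```javascript
const CHUNK = 16;  // voxels per chunk edge (illustrative)
const RADIUS = 2;  // chunks kept resident in each direction

class ChunkStreamer {
  constructor(generate) {
    this.generate = generate; // (cx, cz) -> chunk data, e.g. from noise
    this.loaded = new Map();  // "cx,cz" -> chunk
  }
  // Keep the chunks around the player loaded; drop the rest.
  update(playerX, playerZ) {
    const pcx = Math.floor(playerX / CHUNK);
    const pcz = Math.floor(playerZ / CHUNK);
    const wanted = new Set();
    for (let cx = pcx - RADIUS; cx <= pcx + RADIUS; cx++) {
      for (let cz = pcz - RADIUS; cz <= pcz + RADIUS; cz++) {
        const key = cx + ',' + cz;
        wanted.add(key);
        if (!this.loaded.has(key)) {
          this.loaded.set(key, this.generate(cx, cz)); // stream in
        }
      }
    }
    for (const key of this.loaded.keys()) {
      if (!wanted.has(key)) this.loaded.delete(key);   // stream out
    }
  }
}
```

With a radius of 2 the resident set is a constant 5×5 = 25 chunks, however far the player travels – the memory footprint stays bounded while the world stays infinite.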

So, the idea with Adventure Box was to create a complex voxel engine for the purposes of generating rich, high-dimensional procedural worlds. We combine that with artist-produced models and animations and story-telling to create really deep, fun worlds to explore.

From the beginning we wanted to try to build the game in the browser. HTML5 is the technology to be working with these days – and we knew that if we could create a game that was just one click away from anywhere on the web, we’d have something really accessible and fun. From inside a world I can send out a Tweet asking for help, or post an invite on Facebook, and my friends can click on that link and be straight into the game – no account creation, no downloads, no installs – no barriers at all.

Truthfully, it wasn’t entirely clear to me at the beginning that building such a system was actually possible. I’m happy to say, it is. JavaScript and WebGL have come a long way in the past few years – and the emergence of engines like the Goo Engine has made it possible for studios like us to actually deliver really challenging 3D systems on the HTML5 platform.

So, let me wrap up with a summary of what I see as the huge potential for web-native volumetric systems.

With HTML5 we can now handle the vast quantities of data, and the processing required to stream and render that data in the browser.

We’re delivering fully modelled, genuinely interactive worlds in the browser, without any plugins or installs. We love games, we love the joy they give to people – but we also know that we can use this technology to stream any quantity of data into the visible space – and that’s complex, high-dimensional data. We can leverage next-generation input devices to map motion to volume – and we can put this approach to use in any area in which intuitive visualization of and interaction with massive datasets can be useful.

Finally – I’m not going to wrap up with a sales pitch – but if you’d like to see more of what we’re up to with Adventure Box – or even get involved – you can head over to adventurebox.com and sign up to be an Alpha Tester. We have a couple of thousand testers helping us out at the moment – and we’re currently moving from the R&D, technology development stage to the gameplay development stage – so we need all the help we can get!