From the late 90s to the present day, many commercial games have focused on some sort of realism – graphical photorealism, real-world physics, and so on. Certainly, when you look at the man-hours spent on most game software, you’ll see far more invested in features that further the simulation aspects: e.g. the creation of life-like environments and characters, and life-like movement of those characters through that environment. The game is almost a token gesture on top, usually just a set of simple goals with a basic scoring system to provide some obvious incentive to chase those goals.

Gamification is a common buzzword these days, typically referring to how businesses can benefit from making interactions with their products more game-like, and so they have looked to the games industry for pointers. The interesting thing is the assumption that this would mean studying how games are made and extracting the game-like properties from them; in fact, it might be more accurate to say that what we call games developers have really just been ‘gamifying’ themselves all along. Rather than making games with simulation aspects, they are often in the business of writing simulations with a healthy dose of gamification.

We know that a ‘game’ does not have to involve simulation at all – there are plenty of definitions on Wikipedia, each differing, but none implying that a game intrinsically has to model the real or a fictitious world in any way. So why has the games industry adopted this position of writing playable simulations? I think there is a mixture of reasons for this, some good, some bad. I’ll return to the good reasons in a future blog entry, but for now I’m more interested in the bad reasons, which I feel have taken precedence and given us an entire industry of games that aren’t really games any more.

Firstly, many players – and, it would seem, game publishers and some developers – feel that a game can’t be enjoyed unless it reaches some sort of contemporary presentational standard. To many people, good graphics make a good game. To be fair, a lot of players do have trouble enjoying older games and some modern independent games because they find the graphics primitive and distracting. But I would argue this is mostly cultural: we’ve been sold the mantra of “better-looking games are better games” because it helps sell new hardware and software, and I think we’ve bought that line in part because there is a truth to it – newer games do play better, for most people. But this is arguably because improvements in game design have run in parallel with improvements in game technology, and we mistake correlation for causation. The design improvements we’ve seen over the years can apply equally, or even more so, to games that don’t look as realistic as many modern ones, and thankfully some indie developers are showing us just that.

There’s also an argument regarding the interface: that worse graphics make a game’s visuals harder to understand. To a degree that holds true, but it’s hard to argue that even games fifteen years old looked so bad that you couldn’t adequately work out what was going on. Children can enjoy blocky graphics and unrealistic, iconic representations, so I don’t find the interface argument compelling.

Secondly, I think many developers actively want to make things more real. Partly this is because game development is often driven by programmers who are interested primarily in technology and who like to push that technology in new ways. Perhaps the majority of coders I’ve known fall at one extreme or the other of an interesting dichotomy – they either want to write interesting features themselves from the bottom up, or they want to play with third-party software and libraries to implement those features quickly. Either way, they are playing their own ‘development game’, which is more about the technology’s intrinsic properties than about what the technology is to be used for. Programmers get bogged down in optimising code that already runs fast enough, or in switching to a new and shinier 3D engine, because those challenges are often more interesting to them than shipping a finished game. Perhaps that’s why they’re programmers and not managers!

However, I think the other side of the coin is that most programmers – and, I would sadly argue, designers too – don’t really know how to improve the abstract thing that is ‘gameplay’. There are many who’d love to create better stories, emotions, AI, and so on, but who don’t have the knowledge or skills to do so, which means they resort to making improvements in areas they do understand. You can throw more polygons at a 3D mesh to make it look better, but you can’t just throw more materials at a rock/paper/scissors conflict model and expect to magically have a better game. You can make a game run more smoothly, or make it more colourful-looking, or write one of those amazing everything-is-brown-or-grey-so-it-must-be-a-gritty-game shaders, because these are techniques we know about (and saw at SIGGRAPH 15 years ago); but there isn’t much resembling a science for designing the abstract game features, or at least not one that is well known and accepted. Even some of the better-known designers, such as Daniel Cook and Raph Koster, seem to consider their work to be more about casting an enlightened eye over trial and error, relying on play-testers to tell them what is fun. Nobody would seriously argue that you don’t need some sort of play-testing – just as graphics programming requires the programmer to actually look at what is being rendered – but it seems a bit defeatist to assume that it’s not theoretically possible for a knowledgeable enough designer to create a compelling game experience without needing to have others try it first. In particular, I can’t agree with the suggestion that emotions, experiences, and personality in games “cannot be systematically engineered no matter how many design articles anyone reads”. I can’t imagine making such a claim about film, or books. It seems even more invalid for games, where the player is a participant: if we’re not there yet, we just have more work to do, more knowledge to acquire.
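To make the rock/paper/scissors point concrete, here’s a toy sketch – hypothetical Python, not taken from any real game – of a generalised cyclic conflict model. Throwing more ‘materials’ at it just grows the counter table: every symbol still beats exactly half of the others, the game stays perfectly symmetric, and the only sound strategy remains picking uniformly at random. More content, no more depth.

```python
def beats(a: int, b: int, n: int) -> bool:
    """In an n-symbol cyclic RPS (n odd), symbol a beats the (n - 1) // 2
    symbols cyclically 'behind' it. n=3 is ordinary rock/paper/scissors;
    n=5 is the familiar five-way variant."""
    return (a - b) % n in range(1, (n - 1) // 2 + 1)

# However large n grows, every symbol still beats exactly (n - 1) // 2
# of the others, so adding 'materials' changes nothing strategically.
n = 5
print([sum(beats(a, b, n) for b in range(n)) for a in range(n)])  # [2, 2, 2, 2, 2]
```

Making such a model genuinely deeper means introducing asymmetries – costs, positioning, information – and that is exactly the kind of abstract design knowledge that seems to be in short supply.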

I think we can get back on the right path by returning to those older and purer games, the ones from decades ago that delivered interesting gameplay long before they could attempt to deliver a world that looked like our own, when all the graphics and sounds were necessarily iconic and symbolic. Rather than trying to look and act like real life, they attempted to capture the essence of what games had previously been – sets of abstract rules, represented somewhat arbitrarily, but in such a way that they could be played with. Chess, poker, soccer, and Scrabble all involve real humans in the real world, but humans acting on artificial tokens according to artificial rules. The contests may be played out physically or mentally, numerically, linguistically, or spatially, but essentially they’re all abstract.

This should immediately show us that moving away from simulation is not just about picking a different aesthetic for your game’s visuals as a replacement for photorealism, but about realising that a game does not have to attempt to directly model or simulate any aspect of real or imagined life to be an enjoyable activity. It should instead be sufficient to create some representation of it that lends itself readily to interesting play. Minecraft’s world of blocks is not just a graphical simplification, nor even just an aesthetic choice, but is an abstract representation of the world, simplified to make it easier to reason about and to build with.
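To illustrate that point – with a hypothetical Python sketch, not Minecraft’s actual data structures – a block world can be nothing more than a sparse map from integer coordinates to block types, so questions that are awkward in a continuous simulation become a handful of lookups:

```python
# A block world as a sparse map from integer (x, y, z) coordinates to a
# block type. Designer and player alike can reason about it in whole blocks.
world: dict[tuple[int, int, int], str] = {}

def place(x: int, y: int, z: int, block: str) -> None:
    world[(x, y, z)] = block

def can_stand_at(x: int, y: int, z: int) -> bool:
    # Solid ground below, two air blocks for the body: three lookups,
    # with no continuous collision geometry required.
    return ((x, y - 1, z) in world
            and (x, y, z) not in world
            and (x, y + 1, z) not in world)

place(0, 0, 0, "stone")
print(can_stand_at(0, 1, 0))  # True
```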

We can go further, and say that this creative use of abstraction isn’t limited to symbolising the physics of the world (where physics in this context includes the visible and audible aspects), but can symbolise the interactions within it – the narratives, the emotions, the events. For example, look at combat in a game like Oblivion, where the game simulates a continuous 3D space in which fighting and exploring are seamlessly interwoven, just as in real life. Unfortunately, the limitations of the artificial intelligence mean that most fights can be won simply by jumping onto a ledge and shooting your hapless assailant from above. Compare this with the approach taken in the Final Fantasy series (or, of course, any number of traditional CRPGs and JRPGs), which switches to an explicit combat mode, mostly isolated from the main world, with completely different actions and constraints, to create a more compelling tactical experience. The part of me that loves exploring an open and consistent world much prefers the former, but there’s no denying that the gameplay is simply better in the latter. Oblivion prioritised the simulation over the game, meaning the simulation’s flaws become game flaws, revealing the ‘uncanny valley’ in interactive form. Final Fantasy tells you that combat in this world is resolved in a separate space, and once you accept that, you can enjoy the rest of the game undistracted. In this way, the more abstract form can paradoxically be the more immersive one, because it immediately invites you to suspend disbelief. You enter the experience already accepting the unrealistic elements – a realism ‘sunk cost’ of sorts – and thus they don’t detract from the game.
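The structural difference might be sketched like this – hypothetical Python, not either game’s real architecture: rather than resolving fights inside the continuous world simulation, the game swaps in a separate rule system with its own verbs and constraints, and swaps it back out when the fight ends.

```python
import random

class ExplorationMode:
    """Free movement through the world; its verbs know nothing of combat."""
    actions = ("move", "talk", "search")

    def update(self, game, command):
        if command == "move" and random.random() < 0.1:
            # A random encounter replaces the world's rules wholesale.
            game.mode = CombatMode(enemies=["slime", "slime"])

class CombatMode:
    """An isolated, turn-based rule system with a different verb set."""
    actions = ("attack", "magic", "item", "flee")

    def __init__(self, enemies):
        self.enemies = list(enemies)

    def update(self, game, command):
        if command == "attack" and self.enemies:
            self.enemies.pop()  # placeholder turn resolution
        if command == "flee" or not self.enemies:
            game.mode = ExplorationMode()  # the world resumes, untouched

class Game:
    def __init__(self):
        self.mode = ExplorationMode()
```

Because the combat rules never have to coexist with the world simulation, flaws in one cannot leak into the other – precisely the failure mode that Oblivion’s ledge exploit exposes.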

When a phenomenon similar to the Oblivion problem occurs in Minecraft – e.g. a monster getting stuck below you while you shoot it – the experience seems less jarring. Minecraft doesn’t try as hard to pretend that it’s a real world, and so your immersion isn’t as readily broken by such a problem. Embracing abstraction buys you that extra suspension of disbelief. No-one minds that chess queens are more powerful than history would suggest, and nobody complains that Monopoly is unrealistic because real cities aren’t built in a square ring.

Combat encounters are just one example. While Oblivion’s fighting attempts to be simulatory, its conversation mini-game is purely abstract, and could have worked well had more effort been put into it. Research in RTS and 4X games is often handled with a very abstract interface, out of necessity, but there is surely scope for interesting choices to be made at that level. And some games are almost entirely based on abstract models: turn-based strategy games such as Civilization, or management games like Transport Tycoon and Football Manager. On the surface they are still simulating something, but in simplified and discrete terms that can easily be reasoned about, both by the designer and by the player.

Other art forms, possibly because they have intrinsic limitations that can’t be solved with better hardware, long ago stopped worrying about making their media more realistic. Painters and sculptors happily create works that are symbolic representations, or even caricatures, of what they depict, rather than merely trying to be scale models. Writers commit entire stories to ink printed upon thin slices of wood, without worrying that the reader could only enjoy the story if it were presented visually and audibly, because we know readers can see beyond the fact that they’re staring at text and allow their imaginations to create the world for them. Are game players not capable of that? Or perhaps we as game developers just don’t have as much respect for our players as other artists have for their audience?

For too long, computer games have tried to be interactive films, acting as if we have to simulate some sort of realistic space in order for the game to be fun. I’d argue it’s time to get back in touch with the origins of games and embrace the make-believe and abstract aspects that embody what is unique to games: the ability to play with a set of rules and explore the interactions between them. By weaning players off the ‘playable Hollywood’ model and back onto a purer sense of ‘computerised games’, we can both broaden the appeal of games and garner more respect for the medium.
