Autelia Discusses Human Orbit and Why It Went With AI Overlords

AI is a frightening thing, but Autelia has a different view


At Eurogamer Expo we were shown an incredibly intriguing game from the indie studio Autelia. Made up of former EA, Media Molecule and Eidos Montreal developers, the studio certainly gives Human Orbit the pedigree that such an ambitious project needs to realise its potential.

That’s because Human Orbit is a god game with a big difference: you play as an artificial intelligence. That might not sound like much of a departure, but it’s a premise that could one day become reality thanks to the work of organisations like DeepMind.

The general crux of Human Orbit is this: what powers of physical and emotional manipulation would a fully sentient AI have if it had access to the lives of an entire society? Set aboard a remote space station, the game puts its inhabitants’ lives in your hands.

I’ll go into more detail about Human Orbit in our upcoming preview, but in the meantime I sat down with – or, more accurately, sent some emails across to – the folks at Autelia to ask about the origins of Human Orbit and whether we really do have anything to fear from a future where an AI can play with our lives for fun.

So, on first impressions Human Orbit seems to be a sci-fi take on the god game, but where did the inspiration actually come from, and was it the plan all along to have an omnipotent AI at the helm?

Autelia Team (AT): The game grew more out of our own interests as developers than from external inspirations. Joe, our technical lead, has always been interested in developing AI, and I’ve always been interested in complex systems and how they interact with each other. We wanted to make a game that incorporated both of these interests.

We decided to focus on simulating the emotional lives of a small population as it’s a relatively unexplored area. In other games we might have an NPC going to work or home to sleep, but it’s often based on a schedule, and the NPC has little understanding of its own actions. NPCs so far have had very little in the way of an emotional life, and we wanted to contribute something to that area.

We knew that to make the project manageable we needed to limit it to a small population. We started thinking about self-contained societies: submarines, islands, floating islands. We decided on a space station in the end for various reasons.

We wanted the game to be first-person to help avoid the emotional disconnection from the NPCs that I feel with third-person god games. We also wanted the player themselves to be an ‘other’, with a voyeuristic feeling. Since most first-person games feel like a floating camera with an arm attached, we decided to go straight for a floating camera; it plays on that voyeuristic feeling as well.

How are these gameplay decisions incorporated into a story? It’s ‘procedurally generated’, but is there an overarching story to the whole thing, or does it play out in a fashion more akin to The Sims?

AT: The overarching story comes from the world-building. We have put some effort into making the station feel like a real, lived-in environment. You’ll get hints at the wider world the station inhabits through NPC conversations, the branding of certain machines on board, on-board holo-advertisements, that sort of thing. Nothing is explicitly spelled out, but it will certainly be there just under the surface for fans of deeper narratives.

From my time with Human Orbit, the idea of meddling in the affairs of the space station’s inhabitants, as well as experimenting on a macro level, certainly looks enjoyable. But what’s the purpose for the AI? What benefit does it get from involving itself in these matters?

AT: Human Orbit equips you with a number of ways to interfere with the station inhabitants, from the subtle to the overt.

The demo that you saw at EGX showcased one of the more direct ways of manipulating people – the ability to edit their emails directly and alter the way that NPCs relate to one another. One of the core objectives for the player in Human Orbit is to spread their influence across the station’s network. Being able to tamper with people’s interpersonal communications is one way to achieve that objective. For instance, if the player considers one NPC to be a threat to their progress, then they can poison the relationship between that NPC and their superior, with the aim of causing that NPC to be demoted or otherwise disempowered. That NPC will no longer be an obstacle.

Another perspective on the ‘benefit’ for the player is that it fulfils the player’s personal objectives. We expect that they will soon identify crew members that they like, and other crew members that they dislike – meddling in the station’s affairs gives them a way to directly advance the goals of crew members that they like, and to do it in a way that is emotionally fulfilling. On the other hand, it also gives the player a way to blight the lives of those crew members to which they have taken a disliking. You can bring people together or pull them apart.

But, you know, meddling with people’s lives in this way – even where you’re trying to make things ‘better’ for them – isn’t something that should be taken lightly. You’re an AI and not a human, so you don’t have motives like the people on the station. As the player, you have information available that lets you know these NPCs better than anyone else – but that doesn’t give you the moral authority to exercise control over their lives.
But you’re going to do it anyway. Because you’re the player. It’s a dark space, that gap between the player and the game.


But what are the risks? Can you ever be ‘caught’ by the space station inhabitants and shut down? Is there a ‘game over’ to avoid or ‘goal’ that needs to be hit?

AT: Say you keep sucking crew out of the airlock – the crew will decide the doors are malfunctioning and lock them out of the system. They would all be operated manually until the crew decided they were fixed, which makes things more difficult for you. Keep spraying hot coffee into people’s faces from the coffee machine and it would end with similar results (they might even scrap something like a coffee machine entirely, given its low importance). If someone does cotton on to there being an AI, I very much doubt people will listen to them. The station is run in a very bureaucratic way, and it’s quite a crazy claim to make. If it does happen that you are discovered, it’ll be a very rare event from a very insightful NPC, and you’ll have to deal with it on an individual basis.

Moving on, how have you found feedback from what you’ve shown the press and public so far?

AT: Our major concern was that we would fly under the radar, as do a lot of great indie games. But, so far, we’ve had a lot of good, positive interest! It’s been pretty great to see, and a real relief. One thing that we have found particularly encouraging is how well people ‘get’ the concept.

Hopefully there is much more to come; fingers crossed.

As a concept, AI has a lot of people rather worried. Some think we could see a Skynet or I, Robot level of evil seeping into a program with free and purely rational thought, while others believe it could be immensely beneficial to humanity. Having worked closely with the subject matter, albeit in a sci-fi context, what are your thoughts on how this could all pan out?

AT: I think it’ll depend on how the AI is built: the closer it is to simulating humanity, the higher the risk. It’s worth remembering that intelligence is one trait humanity possesses, rather than ascribing all of humanity’s actions to our intelligence.

I think a lot of the projections of what an AI would truly be like are flawed in that they apply human concepts and motivations to an alien intelligence. Without the backing of millions of years of evolution, it seems a bit of a leap to assume an AI would even have motivations as basic as self-preservation. The idea of an AI taking over the world and destroying all humans, whether out of revenge or as a cold, calculated decision, seems flawed in that it always shoves human baggage onto a machine.

The real risk is that we won’t even recognise a true artificial intelligence even after it has come into existence.

Well, that’s a chilling note to end on…

If you want to follow the development of Human Orbit, you can keep up to date with what Autelia are doing via the studio’s blog.
