“Kill All Humans” – Robots and Ethics

By Jessica Osborne

We all remember our first. Mine was called Robby; I met him on Altair IV, and he was my very first robot. I say he was mine, but I’ve never owned a Robby toy (he’s a vintage collector’s item, and I have expensive tastes but no money), and I never saw him outside any kind of screen. And to be honest, I wasn’t fascinated by him either. I grew up with my dad making us watch Forbidden Planet every couple of weeks, hailing it as the best SF movie of all time. Robby was basically family.

And not once did we, as a family, question his place in the film, or what his place would be in wider society. For those who don’t know me, I recently got very into robots. I wrote a short script about robots being used in long-distance relationships and began doing a lot of research. A fellow student set up a ‘Robots Discussion Group’ for a few nerdy students to meet and talk robots in Fountains every other Wednesday. And of course, we ended up getting into the moral and ethical complications of robots during our first meeting.

Two ethical conundrums came up that I really want to talk about. They’re probably the two most common arguments against robots and AI of any kind, but I like them.

  • If a Google car is driving along and has to hit either a young child or an elderly woman, how can we programme it to choose who to hit?
  • And if we create a realistic SexBot with personality, should it be able to withhold consent?

So first of all, the Google car: how exactly do we as human drivers decide who to swerve to kill? Ignoring the fact that this Google car really should have brakes, does it matter which choice the car or programmer makes if both are wrong? Most people say the car should kill the old lady and let the child live, but then the same old problems come up: what if the kid grows up to destroy humanity/cure cancer? What if the old woman is the Queen/a former Nazi? Either way, there are too many issues and too much unknowable context that could change how we feel about the outcome of the accident. Should robots cause accidents? Can they eradicate accidents if the people programming them can’t?

I know I’m just throwing out a bunch of questions and not really giving any answers, but how cool is this to think about? We need to create a cold, calculating AI that has no problem killing people, but it also has to decide to kill the right people and do so ethically. This is wild.
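To make the dilemma concrete, here’s a deliberately crude Python sketch. It’s entirely hypothetical (real self-driving software looks nothing like this, and the Pedestrian class and age rule are made up for illustration), but it shows the core problem: whatever the car ‘decides’, a programmer decided it first, as a fixed rule.

```python
# Toy sketch only: a made-up "collision choice" policy, to show that
# any such rule is just someone's values hardcoded in advance.
from dataclasses import dataclass


@dataclass
class Pedestrian:
    age: int


def choose_victim(a: Pedestrian, b: Pedestrian) -> Pedestrian:
    # "Spare the younger person" is a completely arbitrary rule;
    # the point is that a human had to pick it before the crash.
    return a if a.age >= b.age else b


child = Pedestrian(age=7)
elderly_woman = Pedestrian(age=82)
print(choose_victim(child, elderly_woman))  # Pedestrian(age=82)
```

Flip one comparison operator and the car kills the child instead. And every ‘what if’ from the discussion (future Nazi, future cancer-curer) would have to become another arbitrary line of code, written by someone who can’t know any of that in advance.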

But onto the next problem: consent. To me, this isn’t really a problem. It came up in the discussion that consent for a robot is a falsehood, as the robot will have been programmed to give or withhold consent. But that raises the question of why we would allow what is essentially an object to withhold consent in the first place. We don’t give sex toys the option of consent, so why give it to robots? The purpose of a sex robot is really that you can’t be turned down. But then, of course, what does encouraging this kind of behaviour do to humans? If we teach people that you don’t ask consent of robots, does that bleed over into not asking a real human for consent? Is this just further objectifying sexual partners rather than a healthy outlet for sexual frustrations?
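Here’s what that ‘programmed consent is a falsehood’ point looks like in code: another purely hypothetical Python sketch (no real product in mind), where the manufacturer decides at build time whether refusal is even possible.

```python
# Toy sketch: "consent" written by a programmer. If the answer is
# hardcoded (or a coin flip), was it ever really a choice?
import random


class SexBot:
    def __init__(self, can_refuse: bool):
        # The manufacturer picks this flag, not the robot.
        self.can_refuse = can_refuse

    def gives_consent(self) -> bool:
        if not self.can_refuse:
            return True  # can never say no: consent is meaningless
        return random.random() > 0.5  # "refusal" is just randomness


compliant = SexBot(can_refuse=False)
print(compliant.gives_consent())  # always True
```

Either way, the ‘decision’ was made by whoever wrote the class, which is exactly why the group called robot consent a falsehood.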

I think the only way to really create any tangible answers to these questions is to just do it; we can’t understand something that hasn’t really happened, right? At this point it’s all just guesswork, and it is usually guesswork and fear-mongering that hold back progress. I think that’s the real issue here.

The next Robot vs Humans Discussion Group meeting will be on Wednesday 22nd February at 3pm in HG013. Check out the Facebook Group for more information.