UC researchers ask: is robot abuse immoral?
In a new study, University of Canterbury (UC) researchers have found that participants considered abusive behaviour towards a robot just as immoral as abusive behaviour towards a human.
Corridor Digital provided the computer-generated imagery (CGI) video clips used in Associate Professor Christoph Bartneck and PhD student Merel Keijers' research into robot abuse.
“It’s not uncommon for humans to exhibit abusive behaviour towards robots in real life,” Associate Professor Bartneck says. “Our research looks at how abusive behaviour towards a human is perceived in comparison with identical behaviour towards a robot.”
Participants were shown 16 video clips depicting different levels of violence and abuse towards a human and a Boston Dynamics Atlas robot. The robot in the videos was CGI, its motions performed by a human actor. As a result, each video existed in two versions with identical abusive behaviours – one where the victim was a human and one where it was a robot.
“We found that participants saw bullying of humans and robots as equally unacceptable, which is interesting because a robot doesn’t have feelings and can’t experience pain – it doesn’t even understand the concept of abuse,” says Associate Professor Bartneck.
“It doesn’t make sense from a logical point of view. It’s very interesting in the sense that if we treat robots as if they are humans, we consider it immoral to mistreat them.”