MURDEROUS robot soldiers will become so advanced they could develop their own moral code to violently defy orders, an AI expert claims.
Ex-cybernetics engineer Dr Ian Pearson predicts we are veering towards a future of conscious machines.
But if robots are thrust into action by military powers, the futurologist warns they will be capable of conjuring up their own "moral viewpoint".
And if they do, the ex-rocket scientist claims they may turn against the very people sending them out to battle.
Dr Pearson, who blogs for Futurizon, told Daily Star Online: "As AI continues to develop and as we head down the road towards consciousness – and it isn't going to be an overnight thing, but we're gradually making computers more and more sophisticated – at some point you're giving them access to moral education so they can learn morals themselves.
"You can give them reasoning capabilities and they might come up with a different moral code, which puts them on a higher pedestal than the humans they are supposed to be serving.
"They might decide themselves that, although they have been told to respect this particular moral viewpoint, actually theirs is more important and they might go off on their own direction which we might not approve of."
Asked if this could prove fatal, he responded: "Yes, of course.
"If they are in control of weapons and they decide that they are a superior moral being than the humans they are supposed to be guarding, they might make decisions that certain people ought to be killed in order to protect the larger population.
"Who knows what decisions they might take?
"If you have a guy on a battlefield, telling soldiers to shoot this bunch of people, for whatever reason, but the computer thinks otherwise, the computer is not convinced by it, it might conclude that soldier giving the orders is the worst offender rather than the people he's trying to kill, so it might turn around and kill him instead.
"It's entirely possible, it depends on how the systems are written."
Dr Pearson's warning comes amid growing concern over the use of fully autonomous robots in warfare.