AI and weapons

NB Team


This week some of the brightest minds of our time, including Stephen Hawking, Elon Musk and Steve Wozniak, signed a letter stating their opposition to Artificial Intelligence-based weapons.

It’s next to impossible to argue that computers haven’t improved our lives. Robotic surgeons could help to save millions more lives, for example, and few would seriously argue that the Internet has been a bad thing.

But, where do we draw the line?

It comes down to a question of ethics, and not just for weapons, but for every single industry. Even something as supposedly innocent as search engine results could be misused. Weapons are a far more obvious point of controversy because of their potential impact.

The point of a weapon is, at best, to do damage. There’s no positive outcome for the person it’s being used against. That’s why each and every use should have a conscience behind it. A computer program, no matter how complex, is simply working through a series of commands at lightning speed. There is no thought, no instinct, no emotion.

Arguably, there are already AI-assisted weapons out there, and this is simply a natural extension. A current example is drones. For reconnaissance, some are fully automated, flying set patterns to take photos. For combat, however, there is a ‘pilot’ sitting at a computer screen, making the decision about when to fire. Even this is controversial enough, as it creates a sense of disconnection from reality – the setup is alarmingly like a videogame, almost like Ender’s Game, where it’s possible to remove yourself from the impact of your decision.

The danger is that we would take crucial decision making out of our own hands. We would become desensitised, and come to think of life-ending choices as trivial – simply switch off the screen and it no longer exists. It’s not your decision, and therefore not your responsibility. And that’s before we even get to the difficulty of coding: how could you create a completely foolproof program that encapsulates the complexity of this kind of decision making?

We’ll undoubtedly continue to see incredible advances in technology over the coming years. In a decade we’ll look back and wonder how society managed with such rudimentary technology. But we can only hope we’ll never see a day when there are computers making military-related life and death decisions.

3 laws safe!