How (un)Safe is AI?


Artificial intelligence first emerged as an idea in the 1950s, when Alan Turing wondered whether it would be possible for machines to think [1]. Turing devised a test (the Turing Test) in which the same questions are put to both a human and a computer, and an interrogator (the judge) must decide which answers came from the computer and which from the human. The machine is said to have passed the test if the interrogator cannot reliably distinguish it from the actual person.
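To make the setup concrete, here is a minimal, purely illustrative Python sketch of the test's logic. The canned answers and the coin-flipping judge are invented stand-ins for this example; in Turing's actual test the judge is a human holding a free-form conversation.

```python
import random

# Toy illustration of the Turing test protocol: an "interrogator"
# receives answers to the same question from a human and a machine,
# without knowing which is which, and must guess which is the machine.

def human_answer(question: str) -> str:
    # Stand-in for a real human participant (invented for this sketch).
    return "I'd have to think about that for a moment."

def machine_answer(question: str) -> str:
    # Stand-in for a chatbot; a real system would generate text here.
    return "That is an interesting question."

def run_round(question: str) -> bool:
    """Return True if the interrogator correctly identifies the machine."""
    answers = [("human", human_answer(question)),
               ("machine", machine_answer(question))]
    random.shuffle(answers)  # The interrogator must not know which is which.
    # Our toy judge guesses at random, so it is right about half the time.
    # A machine "passes" when a real judge can do no better than chance.
    guess = random.choice([0, 1])
    return answers[guess][0] == "machine"

if __name__ == "__main__":
    rounds = 1000
    correct = sum(run_round("Can machines think?") for _ in range(rounds))
    print(f"Interrogator identified the machine in {correct}/{rounds} rounds")
```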

Almost 70 years later, artificial intelligence (AI) is one of the most promising fields in computer science. So, after all this time, has any machine passed the Turing test? To date, no. There have been several attempts, including one in 2014, when a chatbot named Eugene Goostman managed to convince more than 30% of the judges that it was human. Unfortunately, that attempt was mired in controversy, as some claimed the chatbot had effectively cheated, so scientists are still arguing over whether it really counts as a pass. Yet evidence of intelligent machinery emerges almost daily, even without the test.

In 2016, a robot managed to escape from a lab in Russia. Twice!

This raised multiple questions about the intelligence of robots and their possible need for freedom as robotic technology develops. In 2017, two Google Home smart speakers were placed next to each other and used speech recognition to understand and learn from one another. Their debate lasted for several days, and they even questioned their own existence and whether they were robots or humans. But that is not all. At one point the debate heated up so much that one threatened to slap the other, and insults were exchanged, one being particularly memorable: “You are a manipulative bunch of metal.”

Probably the most famous AI of all, a humanoid robot named Sophia, appeared in 2016; by 2017 it had already been granted citizenship of Saudi Arabia. In an interview with Dr. David Hanson, the CEO of Hanson Robotics, the robot was asked whether it wanted to destroy humans. Sophia jokingly replied, “Okay, I will destroy humans.”

AI is inevitable

Despite these cases, AI is inevitable. Siri and Alexa, the personal assistants from Apple and Amazon, respectively, rely on machine learning to function. Self-driving cars, as well as Netflix, one of the most popular streaming services, are also AI-based. AI-powered robots are used in medicine, transportation, and manufacturing, helping people perform their jobs better.

The concern over how dangerous AI could become has already been addressed by several authorities and organizations. An open letter to the European Commission, calling for a framework for robotics and AI and opposing the granting of legal status to robots as “electronic persons,” was signed by more than 280 experts in fields such as medicine, AI, ethics, law, and engineering. Moreover, in 2018, a group of 52 experts released draft AI ethics guidelines, which state that AI must “respect fundamental rights, applicable regulation and core principles and values […] and be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm” [2]. Hence, with adequate measures, regulations, and laws, AI can be helpful to humanity. But it is also necessary to remember that AI is only as (un)safe as humanity makes it.


References:
[1] Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
[2] ALLAI (Alliance on Artificial Intelligence). EU Ethics Guidelines for Trustworthy AI. Retrieved from https://allai.nl/eu-ethics-guidelines-for-trustworthy-ai/

Radmila Janković

I am a PhD student and a research scientist passionate about sharing science and making science fun and more accessible for everyone. A huge cat lover interested in everything about the world that surrounds us.
