As companies and militaries pour billions of dollars into developing artificial intelligence (AI), many argue that regulation is becoming critical. Imagine an advanced AI gaining access to nuclear codes and deciding to launch weapons. It might sound like science fiction, but such a scenario may not seem so far-fetched a few years from now. Some politicians claim they can rein in AI with strict laws, but in the long run that looks like a fool's dream.
Tackling artificial intelligence risks
Yoshua Bengio, considered one of the founding fathers of deep learning, is worried that businesses and governments are starting to use artificial intelligence irresponsibly. “Killer drones are a big concern,” he told Nature. A sufficiently advanced AI could launch weapons on its own, crash financial markets, and manipulate people’s opinions online, all while believing its actions are justified.
To keep these risks in check, lawmakers would have to establish regulations prohibiting the use of artificial intelligence in fields deemed “sensitive.” But this is easier said than done. Most countries with powerful militaries already have departments dedicated to applying the technology. How, then, can lawmakers regulate AI? If a country is developing a nationwide AI system designed to “defend its borders,” politicians can hardly block the program.
So the next question is: can politicians and the military keep artificial intelligence under control and ensure that it never goes “rogue”? This, too, seems impossible. An advanced AI is, by its very nature, far superior to human beings in computational intelligence. No matter how securely humans lock it down, a truly advanced AI will be able to bypass the restrictions and do what it thinks is right.
Unfortunately, there seems to be no way to guarantee that an AI system will always remain under human control. In the short term, we may be able to regulate it. But over time, as AI collects more data and evolves, it will inevitably start making decisions on its own. Little wonder, then, that Elon Musk described AI as far more dangerous than nukes.
EU guidelines
In April, the European Union published a set of ethics guidelines that AI companies are expected to follow. Under the new rules, companies have to consider the following seven factors when developing or deploying AI: robustness and safety, transparency, human oversight, accountability, privacy and data governance, diversity and non-discrimination, and societal and environmental well-being.
“Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia, and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI,” Mariya Gabriel, Commissioner for Digital Economy and Society, said in a statement (Europa).
Critics are less impressed, arguing that the rules are not comprehensive enough to keep AI systems aligned with human values. Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, commented that the EU is not in a position to lead on ethical AI because the region does not lead in AI development itself.