International Journal of Public Health and Safety

ISSN: 2736-6189

Open Access

Editorial - (2021) Volume 6, Issue 4

Is Artificial Intelligence Dangerous to Public Health

Masamba Roghu*
*Correspondence: Masamba Roghu, Department of Public Health Administration, University of Dhaka, Dhaka, Bangladesh, Tel: 7564925789, Email:
Department of Public Health Administration, University of Dhaka, Dhaka, Bangladesh

Received: 01-Apr-2021; Published: 22-Apr-2021; DOI: 10.37421/2736-6189.2021.6.224
Citation: Masamba Roghu. “Is Artificial Intelligence Dangerous to Public Health.” Int J Pub Health Safety 6 (2021): 223.
Copyright: © 2021 Roghu M. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Editorial

What will happen when, not if, governments start using the power of deep learning to control the masses? Repressive AI is already a reality. If in doubt, just look to China. No longer content with controlling its own population, China is busy exporting its repressive technology around the world. Elon Musk, AI pioneer Alan Turing, and renowned researcher Nick Bostrom have all warned that artificial intelligence will transform the world, and perhaps even destroy it. So, yes, we should be worried. After all, humans have taught computers to multiply numbers, play chess, identify criminals in crowds, replicate human voices, and translate complex documents. What is to stop someone from teaching computers to annihilate humans?

Each of these examples, including the annihilation of humans, involves “narrow AI”: computer systems trained to perform at a human or superhuman level in one specific task. “General AI,” in which computer systems perform at a human or superhuman level across many different tasks, is still some way off, but the day is coming. And when that day comes, AI systems will exhibit unforeseen levels of complexity and competency. These powerful systems will behave in unpredictable ways, and, as we know, unpredictable power in the hands of unpredictable people often spells disaster.

Again, look to the People’s Republic of China (PRC), where the proliferation of AI technology complements an authoritarian, wholly illiberal regime. Suppression and intimidation are key components of China’s geopolitical strategy, and AI is helping the country achieve its mission. In Hong Kong, Chinese officials have unleashed the most brutal crackdown on internal dissent since the Tiananmen Square massacre of 1989. On August 25th, protesters, clearly infuriated by the actions of those in Beijing, took to the streets and destroyed facial recognition towers. The people are afraid, and given China’s heavy investment in facial recognition, that fear is warranted.

As Zak Doffman of Forbes writes: “In the world of surveillance, no country invests more in its AI-fuelled startups and growth-stage businesses than China. And no technology epitomises this investment more than facial recognition, a technology that courts more controversy than almost any other. But a thriving domestic tech base has done nothing to quell the concerns of citizens. China is held up as a Big Brother example of what should be avoided by campaigners in the West, but that doesn’t help people living in China.”

But it’s not just the people of Hong Kong and mainland China who should be concerned. As the ne plus ultra of authoritarianism, China now exports its AI technology to countries around the world. As a recent report published by the Council on Foreign Relations states: “For China, the expansion to new markets takes the development of AI to a whole new level. In its ambitious plan to become a world leader in AI, Beijing has begun to use developing countries as laboratories to improve its surveillance technologies.”
