Tech Industry Leaders and Scientists Issue Global Warning on AI Risks

In a joint statement posted online, prominent figures in the scientific and tech communities, including executives from Microsoft and Google, have raised concerns about the potential dangers of artificial intelligence (AI) and the need to prioritize mitigating them. The statement, published on the Center for AI Safety's website, asserts that safeguarding against AI-related risks should be treated as a global priority alongside other societal-scale threats.


Among the signatories are Sam Altman, CEO of OpenAI, the organization behind ChatGPT, and Geoffrey Hinton, a renowned computer scientist often referred to as the godfather of AI. By condensing the warning into a single concise sentence, the statement aims to represent a broad coalition of scientists, acknowledging that they may hold varying opinions on the specific risks and the most effective ways to address them. According to Dan Hendrycks, executive director of the San Francisco-based Center for AI Safety, the intention was to encourage experts to express their concerns openly rather than discussing them only among themselves.


The rise of highly capable AI chatbots like ChatGPT has amplified apprehension that AI systems could surpass human capabilities and potentially run amok. Earlier this year, more than 1,000 researchers and technologists, including Elon Musk, signed a more extensive letter advocating a six-month pause on AI development, citing "profound risks to society and humanity."


Countries worldwide are urgently seeking regulations for AI development. The European Union is leading the way with its anticipated approval of the AI Act later this year, setting a precedent for AI governance. Nimrod Partush, VP of AI & Data Science at CYE, weighed in on the Center for AI Safety's statement:

"This initiative by the heads of the tech companies leading AI research is blessed, and shows forethought and responsibility. This is refreshing in the tech business landscape as it is usually driven by bottom line profits.


These leaders are right to call for regulation. The danger is there. Right now, AI technology is sparking the imagination of almost anyone who comes in contact with it. Love it or hate it, one cannot ignore the incredible results that large language models (LLMs) like ChatGPT and its peers produce. But if left unchecked, this technology could advance towards putting some or all aspects of human well-being and freedom at risk.


To put it simply: LLMs already surpass average human ability in a myriad of tasks, from coding to medical diagnosis to legal advice. These models were trained on the entire collection of human knowledge: the Internet. Their memory is almost unlimited, and they need no sleep or rest. The first and simplest risk LLMs pose is job displacement. If unchecked, corporations could start replacing humans with AI in various positions. Proper regulation, taxation, and perhaps even radical solutions like universal basic income are required to avoid putting humans at risk of poverty.


But the threat doesn't end there. If LLMs continue to advance, they will surpass human ability tenfold. And by allowing them to connect to the internet to perform tasks, and eventually even putting them in physical forms like cars or robots, there is potential for a real existential risk to mankind. Personally, I lean towards seeing AI advance as a benevolent force for humanity, but I would still recommend extreme precautions before allowing unrestricted connectivity and physical manifestation.


To conclude, we are lucky to live in an age where responsible scientists are developing these technologies. But what about corporations and countries where human rights and well-being are not a top priority? As with the atomic bomb, there is no real way to prevent them from acquiring these capabilities. We can only hope they will recognize that the threat posed to their people and governments outweighs the gains, and apply proper regulation as well."


###