Training Of AI Systems More Powerful Than OpenAI’s GPT-4 Brought To A Halt

In an open letter, Elon Musk and a group of AI experts have called for a six-month moratorium on developing systems more powerful than OpenAI's recently released GPT-4, citing potential risks to society and humanity.

The letter, published by the non-profit Future of Life Institute, counts more than 1,000 signatories, including Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI pioneers Yoshua Bengio and Stuart Russell.

The signatories called for the pause until safety protocols for such systems are formulated, put into place and audited by independent experts. "Powerful AI systems should be developed only once we are certain that their effects will be positive and their risks will be controllable," the letter read.

The letter also highlighted the dangers AI systems could pose to society and humanity at large, such as economic and political upheavals, and urged developers to work with policymakers and regulatory agencies.

On Monday, EU police agency Europol raised ethical and legal concerns over cutting-edge artificial intelligence (AI) like ChatGPT, warning of the potential abuse of such systems in phishing attempts, disinformation campaigns and cybercrime.

Microsoft-backed OpenAI's ChatGPT has sparked a revolution in the AI sector, prompting competitors to develop similar large language models and integrate generative AI into their products.

OpenAI chief executive Sam Altman did not sign the letter, and the company has not commented on it. Gary Marcus, an emeritus professor at New York University and one of the signatories, also stressed the need to slow the development of these powerful AI systems until their consequences are thoroughly understood.

He went on to say that these technologies could inflict substantial harm, particularly because the major players are growing more secretive about their work, making it difficult for society to guard against potential harms.

About The Author

Kumkum Pattnaik
Kumkum's unparalleled love for gadgets is what drives her to research, scrutinize and pen down tech-related content from every corner of the world. Whether it is getting her hands on the latest electronic devices or reading voraciously to find what tech mammoths are up to, she makes sure that her inventory is up-to-date.