Tech leaders and experts call for a six-month pause on AI experiments

Tech leaders and experts are calling for a six-month pause on AI experiments. They are concerned about the turbulent pace of developments in the field and the lack of safety measures surrounding them.

The open letter, signed by technology leaders and other high-profile AI researchers, calls on AI labs and companies to “immediately pause” their work. The signatories, who include Steve Wozniak and Elon Musk, want a break of at least six months from training systems more powerful than GPT-4, time that could be used to make the most of existing AI systems, let people adapt to them, and ensure they benefit everyone. The letter adds that thought and foresight are essential if AI systems are to be safe.


The reference to GPT-4, an OpenAI model that can respond with text to written or visual prompts, comes as companies are already racing to build sophisticated chat systems on top of the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine is powered by the GPT-4 model, while Google recently launched Bard, its own generative AI system built on LaMDA. Concerns about AI safety are not new, but the race tech companies now find themselves in makes them all the more pressing.

“Unfortunately, this level of planning and management does not exist. Recent months have seen AI labs locked in a race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter reads.

Signatories are concerned about the rapid pace of development in the field and the lack of safeguards around it

The letter was published by the Future of Life Institute (FLI), an organization dedicated to reducing the risks and misuse of new technologies; Elon Musk previously donated $10 million to the FLI to study AI safety. Beyond Wozniak and Musk, signatories include big names in AI such as Center for AI and Digital Policy president Marc Rotenberg, MIT physicist and FLI president Max Tegmark, and writer Yuval Noah Harari. Harari also co-wrote a New York Times op-ed last week on the risks posed by AI, together with Center for Humane Technology founders and fellow signatories Tristan Harris and Aza Raskin.

The call reads as a logical follow-up to a 2022 survey of more than 700 machine learning researchers, in which nearly half of participants said there was at least a 10 percent chance that the consequences of AI would be “extremely bad,” up to and including human extinction. When asked about AI safety research, 68 percent of respondents said that more, or much more, should be done.
