In an open letter, Elon Musk and a group of artificial intelligence researchers and industry executives are calling for a six-month pause on developing systems more powerful than OpenAI’s recently launched GPT-4, citing potential risks to society and humanity.
The letter, signed by more than 1,000 people including Musk and issued by the non-profit Future of Life Institute, called for a pause on advanced AI development until shared safety protocols for such systems are developed, implemented and audited by independent experts.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The letter detailed the potential risks to society and civilization posed by human-competitive AI systems, including economic and political disruption, and called on developers to work with policymakers on governance and regulatory authorities.
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell.
The Future of Life Institute is primarily sponsored by the Musk Foundation, as well as the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union’s transparency register.
The letter comes as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI such as ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Meanwhile, the UK government has proposed an “adaptable” legislative framework for AI.
The government’s approach, outlined in a policy paper published on Wednesday, would split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
Musk, whose carmaker Tesla (TSLA.O) uses AI in its Autopilot system, has been vocal in his concerns about AI.
Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products.
Sam Altman, chief executive at OpenAI, had not signed the letter, a spokesperson at Future of Life told Reuters. OpenAI did not immediately respond to requests for comment.
“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” said Gary Marcus, a professor at New York University who signed the letter. “They can cause serious harm… the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”
Reuters