On Thursday, tech executives were summoned to the White House and told they must safeguard the public from the hazards of artificial intelligence (AI).
Google CEO Sundar Pichai, Microsoft CEO Satya Nadella, and OpenAI CEO Sam Altman were informed they had a “moral” obligation to protect society.
The White House has stated that it may further regulate the sector.
Recently released AI products, such as ChatGPT and Bard, have piqued the public’s interest.
They allow ordinary users to interact with “generative AI,” which can summarise information from multiple sources in seconds, debug computer code, write presentations, and even compose poetry that sounds plausible enough to be human-generated.
Their implementation has spurred new debate about the role of AI in society by providing a practical representation of the new technology’s potential risks and rewards.
On Thursday, technology CEOs gathered at the White House were told it was up to them to “ensure the safety and security of their products” and were warned that the government was open to new rules and legislation addressing artificial intelligence.
According to Sam Altman, CEO of OpenAI, the company behind ChatGPT, executives are “surprisingly on the same page on what needs to happen” in terms of regulation.
Following the meeting, US Vice President Kamala Harris said in a statement that the new technology could jeopardise safety, privacy, and civil rights, but that it also had the potential to improve people’s lives.
She stated that the commercial sector has “an ethical, moral, and legal responsibility to ensure the safety and security of their products.”
The White House announced a $140m (£111m) investment from the National Science Foundation to launch seven new AI research institutes.
Calls for the dramatic rise in emerging AI to be better regulated have been coming thick and fast, from both politicians and tech leaders.
Earlier this week, the “godfather” of AI, Geoffrey Hinton, quit his job at Google – saying he now regretted his work.
He told the BBC that some of the dangers of AI chatbots were “quite scary”.
In March, a letter signed by Elon Musk and Apple co-founder Steve Wozniak called for a pause to the rollout of the technology.
And on Wednesday, the head of the Federal Trade Commission (FTC), Lina Khan, outlined her views on how and why AI needed to be regulated.
There are concerns that AI could rapidly replace people’s jobs, as well as worries that chatbots like ChatGPT and Bard can be inaccurate and lead to the dissemination of misinformation.
There are also concerns that generative AI could flout copyright law, that voice-cloning AI could exacerbate fraud, and that AI-generated videos could spread fake news.
However, advocates like Bill Gates have hit back against calls for an AI “pause” saying such a move would not “solve the challenges” ahead.
Mr Gates argues it would be better to focus on how best to use the developments in AI.
And others believe there is a danger of over-regulation, which would hand a strategic advantage to tech companies in China.