Tesla CEO Elon Musk called for a U.S. “referee” for artificial intelligence on Wednesday, after he, Meta Platforms CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and other tech CEOs met with lawmakers on Capitol Hill to discuss AI legislation.
Legislators are looking for ways to limit the risks of the nascent technology, which has seen a surge in investment and public interest since the debut of OpenAI’s ChatGPT chatbot.
Musk stated that a regulator was required to ensure the safe usage of AI.
“It’s important for us to have a referee,” Musk told reporters, drawing parallels to sports. The billionaire, who also owns the social media site X, went on to say that a regulator would “ensure that companies take actions that are safe and in the best interests of the general public.”
Musk described the meeting as a “service to humanity” that “may go down in history as being very important to the future of civilization.” He said he had called AI “a double-edged sword” during the event.
Zuckerberg said Congress should “engage with AI to support innovation and safeguards.” “This is a new technology, and there are crucial equities to balance here, for which the government is ultimately responsible,” he said, adding that it was “better if the standard is set by American companies that can work with our government to shape these models on important issues.”
More than 60 senators participated, and lawmakers said there was broad agreement on the need for government regulation of artificial intelligence.
“We are beginning to really deal with one of the most significant issues facing the next generation and we got a great start on it today,” Democratic Senate Majority Leader Chuck Schumer, who organized the forum, told reporters after the meetings. “We have a long way to go.”
Republican Senator Todd Young, a co-host of the forum, said he believes the Senate is “getting to the point where I think committees of jurisdiction will be ready to begin their process of considering legislation.”
But Republican Senator Mike Rounds cautioned it would take time for Congress to act. “Are we ready to go out and write legislation? Absolutely not,” Rounds said. “We’re not there.”
Lawmakers want safeguards against potentially dangerous uses of AI, such as deepfake videos, election interference and attacks on critical infrastructure.
Other attendees included Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates and AFL-CIO labor federation President Liz Shuler.
Schumer emphasized the need for regulation ahead of the 2024 U.S. general election, particularly around deep fakes.
“A lot of things that have to be done, but that one has a quicker timetable than some of the others,” he said.
In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society.
Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can produce text and images whose artificial origins are virtually undetectable.
On Tuesday, Adobe, IBM, Nvidia and five other companies said they had signed President Joe Biden’s voluntary AI commitments, which require steps such as watermarking AI-generated content.
The commitments, announced in July, are aimed at ensuring AI’s power is not used for destructive purposes. Google, OpenAI and Microsoft signed on in July. The White House has also been working on an AI executive order.