US technology leaders have been told they have a “moral” duty to ensure that artificial intelligence does not harm society during a meeting at the White House.
The CEOs of Google, Microsoft, OpenAI and Anthropic attended the two-hour meeting on artificial intelligence on Thursday at the invitation of US Vice President Kamala Harris.
US President Joe Biden, who attended the meeting briefly, told the CEOs that the work they are doing has “huge potential and great stakes”.
“I know you understand,” Biden said, according to a video later released by the White House.
“And I hope you can educate us on what you think is most needed to protect society as well as progress.”
Harris said in a statement after the meeting that technology companies “must comply with existing laws to protect the American people” and “ensure the safety and security of their products.”
The White House said the meeting included a “frank and constructive discussion” about the need for technology companies to be more transparent with the government about their AI technology, as well as the need to ensure the integrity of these products and protect them from malicious attacks.
“It’s amazing that we’re on the same page about what needs to happen,” OpenAI CEO Sam Altman told reporters after the meeting.
The meeting came as the Biden administration announced a $140 million investment in seven new AI research institutes and the creation of an independent commission to conduct public assessments of existing AI systems and set guidelines on the federal government’s use of AI.
The dizzying pace of advances in artificial intelligence has fueled excitement in the tech world as well as concerns about social harm and the possibility of technology eventually slipping out of developers’ control.
Although still in its infancy, AI has already been embroiled in plenty of controversy, from fake news and non-consensual pornography to the case of a Belgian man who took his own life after being encouraged by an AI-powered chatbot.
In a Stanford University survey last year of 327 natural language processing experts, more than a third of researchers said they believed AI could lead to a “nuclear-level catastrophe”.
In March, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among more than 1,300 signatories of an open letter calling for a six-month pause in training powerful AI systems, arguing that they should be developed only once developers are confident “that their effects will be positive and their risks will be manageable”.