His message includes policies such as CAIP’s call for legislation that would hold AI developers liable for “serious harm,” require permits to develop “high-risk” systems, and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain unaddressed and are in danger of being overtaken by the rapid pace of AI development,” Nathan Calvin, senior policy advisor at the CAIS Action Fund, said in an email.

Detractors see the entire enterprise as a diversion. By focusing on doomsday scenarios, critics say, these well-funded groups are raising barriers to entry for smaller AI companies and diverting attention from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus attention in Washington on the existential threat of AI tended to operate under the radar. Instead of lobbying directly, groups like Open Philanthropy funded AI staff in Congress and poured money into key think tanks. RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year.

The only similar group that previously had an official lobbying operation was the Future of Life Institute, which since 2020 has spent roughly $500,000 lobbying Washington on AI. FLI is supported by Skype co-founder Jaan Tallinn, along with tech tycoon Elon Musk and cryptocurrency billionaire Vitalik Buterin.

Increased lobbying work (and the policies pushed by CAIP and CAIS) could directly benefit major AI companies, said Suresh Venkatasubramanian, a Brown University professor and co-author of a 2022 White House paper that focused more on the short-term risks of AI, including its potential to undermine privacy or increase discrimination through biased screening tools.

“We should discuss what science tells us about AI,” Venkatasubramanian said. “But if they want to lobby, then that’s a different path. It’s about who has the most money and who wants to fund their agenda through cash injections from a rich apocalyptic cult.”

The Center for AI Policy signed up its first lobbyist in December, spending $98,000 in just one month to lobby Congress and the State Department on “issues related to supporting catastrophic risk mitigation from advanced general-purpose artificial intelligence,” according to its disclosure. Leading this campaign is Kate Forscey, a former technology policy advisor to Silicon Valley Rep. Anna Eshoo (D-Calif.), who joined the new bipartisan House Artificial Intelligence Task Force on Tuesday.

On Capitol Hill, CAIP is considering a bill it calls the Responsible AI Act, according to a LinkedIn post by CAIP CEO Jason Green-Lowe. In an email, Green-Lowe said CAIP is also pressing Washington on near-term AI risks, including policies on copyright, AI-generated content and federal procurement standards for AI systems.

Green-Lowe disputed Venkatasubramanian’s claim that CAIP advocates for policies that would benefit major AI companies. He called his organization “totally independent of large AI labs” and said CAIP’s proposed safety policies “won’t give those labs unfair advantages because the cost of compliance is small compared to the price of hardware and talent.”

Both CAIS and CAIP have strong ties to “effective altruism,” a philanthropic movement increasingly focused on AI’s supposed threat to humanity, which some researchers fear is being co-opted by major AI companies in a bid to set their political priorities.

According to Green-Lowe, approximately one-third of CAIP’s total grants were provided by Lightspeed Grants. Lightspeed lists Skype’s Tallinn, a longtime effective altruist, as its primary funder. Tallinn has invested a significant amount of money in leading AI companies: He was an early investor in DeepMind, now owned by Google, and led a 2021 investment in Anthropic that included funding from Dustin Moskovitz and former Google CEO Eric Schmidt.

The Center for AI Safety has also mobilized lobbyists through its new CAIS Action Fund, which spent $80,000 between October and December on lobbying Congress and agencies. CAIS tapped Varun Krovi, a longtime lobbyist and former chief of staff to retired Michigan Democratic Rep. Brenda Lawrence, to lead the effort.

In May, CAIS issued a one-sentence statement calling AI an extinction risk that should be addressed as “a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement included signatures from industry luminaries such as OpenAI CEO Sam Altman and Demis Hassabis, head of Google DeepMind.

In addition to its new lobbying arm, CAIS has also directly advised governments. It recently partnered with the UK on AI risks and became one of the first members of the US Department of Commerce’s AI Safety Institute Consortium, announced earlier this month.

CAIS funding also shows connections to the AI industry and effective altruism. Open Philanthropy has donated more than $10.5 million to CAIS since November 2022. Moskovitz and other Open Philanthropy leaders have financial and personal ties to leading AI companies, including OpenAI and Anthropic. CAIS also received $6.5 million from cryptocurrency company FTX before it collapsed in late 2022. Its disgraced founder, Sam Bankman-Fried, is another well-known effective altruist. (The bankrupt FTX has since demanded information from CAIS about that donation, and Calvin declined to say whether CAIS would return the money.)

Calvin took pains to distinguish the funding behind the CAIS Action Fund, saying that Open Philanthropy “has not made any contributions” and that the new lobbying push is funded by a “combination of donors.” Beyond lobbying Congress on AI safety, Calvin said his group is advocating for greater investments in R&D and more AI funding for agencies like the National Science Foundation and the National Institute of Standards and Technology.

Divyansh Kaushik, an AI policy researcher at the Federation of American Scientists (a group that is also part of the AI safety consortium launched this month by the Commerce Department), said direct lobbying efforts represent a “second stage” for existential risk organizations in Washington. He framed the new push as an attempt to professionalize after those groups failed to provide lawmakers with bills or other practical approaches to addressing the abstract risks of AI.

“I think these organizations are now starting to realize that maybe the rhetoric that was being put forward all along has been somewhat counterproductive to their causes, and they’re trying to make it more relatable to those members,” Kaushik said. “They are trying to develop more mature efforts to reach more members and present arguments that at least sound reasonable.”
