Li raised the issue during Biden's State of the Union address on Thursday, which she attended as a guest of Rep. Anna G. Eshoo (D-Calif.) to promote legislation to fund a national AI repository.
Li is on the front lines of a growing chorus of academics, policymakers and former employees who say the steep cost of working with AI models is driving researchers out of the field and undermining independent research into the burgeoning technology.
As companies like Meta, Google and Microsoft pour billions into AI, a significant resource gap is emerging, even at the nation's wealthiest universities. Meta aims to procure 350,000 of the specialized computer chips, called GPUs, needed to perform the massive calculations behind AI models. In contrast, Stanford University's natural language processing group has 68 GPUs for all of its work.
After attending the State of the Union address #SOTU tonight, I had a brief exchange with President Biden @POTUS.
Me: "Mr. President, you made history by mentioning AI for the first time ever in a SOTU speech."
@POTUS (smiling): "Yes! And keep it safe." 1/ pic.twitter.com/cJ7vs440fx — Feifei Li (@drfeifei) March 8, 2024
To obtain the expensive computing power and data needed to study AI systems, academics often partner with tech company employees. Meanwhile, eye-popping salaries at tech companies are draining academia of star talent.
Big technology companies now dominate breakthroughs in the field. In 2022, the tech industry created 32 significant machine learning models, while academics produced three, a significant reversal from 2014, when the majority of AI breakthroughs originated in universities, according to a Stanford University report.
Researchers say this lopsided power relationship shapes the field in subtle ways, prompting AI researchers to adjust their research toward commercial use. Meta CEO Mark Zuckerberg announced last month that the company's independent AI lab would move closer to its product teams, ensuring “some level of coordination” between the groups.
"The public sector is currently far behind industry in resources and talent," said Li, a former Google employee and co-director of the Stanford Institute for Human-Centered AI. "This will have significant consequences, because the public sector's AI goals are focused on creating public goods, whereas industry is focused on developing technology in pursuit of profit."
Some are looking for new sources of funding. Li has been touring Washington, meeting with White House Office of Science and Technology Policy Director Arati Prabhakar, dining with political reporters at high-end seafood and steakhouses, and visiting the U.S. Capitol for meetings with lawmakers, including Sens. Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.) and Todd Young (R-Ind.).
Big technology companies have donated computing resources to the National AI Research Resource, the proposed national repository, including a $20 million donation of computing credits from Microsoft.
“We have long recognized the importance of sharing knowledge and computing resources with our colleagues in academia,” Eric Horvitz, Microsoft's chief scientific officer, said in a statement.
Policymakers have taken several steps to address the funding gap. Last year, the National Science Foundation announced a $140 million investment to launch seven university-led National AI Research Institutes, which will investigate how AI can mitigate the effects of climate change, improve education and more.
Eshoo said she hopes to pass the Create AI Act, which has bipartisan support in the House and Senate, by the end of the year, when she plans to retire. The bill "essentially democratizes AI," Eshoo said.
But academics say the infusion of resources may not come quickly enough.
As Silicon Valley races to develop chatbots and image generators, it is luring would-be computer science professors with high salaries and the chance to work on interesting AI problems. According to a 2023 report, nearly 70% of people with PhDs in artificial intelligence end up working in the private sector, compared with 21% of graduates 20 years ago.
Big Tech's AI boom has driven salaries for top researchers to new heights. According to the salary-tracking website Levels.fyi, the median compensation package for Meta's AI research scientists rose from $256,000 in 2020 to $335,250 in 2023. True stars can attract even more cash. Ali Ghodsi, who regularly competes for AI talent as CEO of the AI startup Databricks, said AI engineers with a PhD and several years of experience building AI models can command as much as $20 million over four years.
“The compensation is off the charts. It's ridiculous,” he said. “In the grand scheme of things, this is not an unusual number.”
University academics often have little choice but to collaborate with industry researchers, since companies can pay for the computing power and provide the data. A 2023 report found that nearly 40% of papers presented at major AI conferences in 2020 had at least one author who was a tech employee. Industry grants also often fund the research of doctoral students, said Mohamed Abdallah, a scientist at the Canada-based Trillium Health Partners' Institute for Better Health, who studies the influence of industry on academics' AI research.
“It was kind of a joke, like everyone was employed by them,” Abdallah said. “And the people who remained were funded by them, so in a sense they were employed by them.”
Google spokeswoman Jane Park said the company believes private companies and universities should work together to develop the science behind AI. Google still regularly publishes research results to benefit the broader AI community, Park said.
David Harris, a former research manager on Meta's AI team, said corporate labs don't necessarily censor research results, but they can influence which projects get pursued.
"When we see a mix of authors employed by companies and authors working at universities, we need to thoroughly examine the motivations of the companies contributing to the work," said Harris, who is a chancellor's public scholar at the University of California at Berkeley. "We used to view people employed in academia as neutral academics, motivated only by the pursuit of truth and the good of society."
Tech giants can marshal vast amounts of computing power through their data centers and have access to GPUs, the specialized computer chips needed to perform the massive calculations behind AI. These resources are expensive. A recent report by Stanford University researchers estimated that Google DeepMind's large language model Chinchilla cost $2.1 million to develop. On Tuesday, more than 100 top artificial intelligence researchers urged generative AI companies to offer a legal and technical safe harbor so researchers can scrutinize their products without fear of being suspended by the internet platforms or threatened with legal action.
The need for advanced computing power is likely to grow only stronger as AI scientists crunch more data to improve the performance of their models, said Neil Thompson, director of the FutureTech research project at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory, which studies advances in computing.
"To continue to get better, what we expect to need is more and more money, more and more computers, and more and more data," Thompson said. "What this means is that people who don't have a lot of computing power … [and] who don't have as many resources will not be able to participate."
Tech companies like Meta and Google have historically run their AI research labs in the mold of universities, where scientists decide which projects to pursue in order to advance the state of research, according to people who spoke on the condition of anonymity to discuss private company matters.
Those employees were largely walled off from teams focused on product development and revenue generation, the people said. They were judged on publishing impactful papers and notching notable breakthroughs, benchmarks similar to those of their academic peers. Meta's top AI scientists Yann LeCun and Joelle Pineau hold joint appointments at New York University and McGill University, blurring the lines between industry and academia.
An increasingly competitive market for generative AI products could reduce research freedom within companies. Last April, Google announced that it would combine its two AI research groups, DeepMind, the AI research company it acquired in 2014, and Google Research's Brain team, into a single division called Google DeepMind. Last year, Google began taking more advantage of its own AI discoveries, sharing research papers only after the work had been turned into products, The Washington Post reported.
Meta has also reorganized its research teams. The company brought FAIR, its fundamental AI research lab, under the direction of its VR arm, Reality Labs, in 2022, and last year redeployed some of the group's researchers to a new generative AI product team. Last month, Zuckerberg told investors that FAIR would "work closely" with the generative AI product team, adding that while the two groups would continue to conduct research on different time horizons, it was beneficial for the company to have "some level of coordination" between them.
"Many tech companies that hire research scientists working on AI may now be setting different expectations about how much freedom those scientists have to set their own schedules and research agendas," Harris said. "Things are changing, especially for the companies that are scrambling right now to get these products shipped."