US senators have sent what amounts to a threat to the open-source AI community: a letter to Meta CEO Mark Zuckerberg questioning the March 2023 leak of Meta's popular open-source large language model, LLaMA. The letter cites LLaMA's potential for misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms. The senators pointed to LLaMA's release in February 2023, noting that Meta made LLaMA available for download by approved researchers "rather than centralizing and restricting access to the underlying data, software, and model."
Sections:
1. Why the senators’ letter is significant
2. Concerns of Machinations Behind the Scenes
3. Release of LLaMA Was ‘Not an Unacceptable Risk’
4. Misguided Attempt to Limit Access
5. The Politics of Intimidation
Section 1: Why the senators’ letter is significant
Experts are expressing concern amid a government push to regulate artificial intelligence (AI). In June 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) sent a letter to Meta CEO Mark Zuckerberg questioning the March 2023 leak of its popular open-source large language model, LLaMA. The letter threatens the open-source AI community, and it arrives at a key moment: Congress has made regulating AI a priority just as open-source AI is seeing a wave of new LLMs.
Section 2: Concerns of Machinations Behind the Scenes
Several experts have expressed concern about machinations behind the scenes. It is easy for both government officials and proprietary competitors to throw open source under the bus: proprietary software providers see open source as competition, which makes it an easy target. When he was CEO of Microsoft, Steve Ballmer famously called open-source software a "cancer" on the intellectual property system. Steven Weber, a professor at the University of California, Berkeley's School of Information, thinks Microsoft, operating through OpenAI, is running scared in the same way it ran scared of Linux in the late 1990s.
Section 3: Release of LLaMA Was ‘Not an Unacceptable Risk’
There is currently no legislation, and there are no strong community norms, about acceptable practices in AI. Christopher Manning, director of the Stanford AI Lab, said that while he strongly encourages the government and the AI community to develop regulations and norms applicable to all companies, communities, and individuals developing or using large AI models, he supports the open-source release of models like LLaMA. He fully acknowledges that such models can be used for bad purposes, such as disinformation or spam, but notes that they are smaller and less capable than the largest models built by OpenAI, Anthropic, and Google.
Section 4: Misguided Attempt to Limit Access
The senators' letter to Meta is a misguided attempt to limit access to a new technology. It is full of typical straw-man concerns; for example, it does not make economic sense to use a large language model to generate spam when spam can already be produced cheaply without one. More broadly, the discourse around AI safety is a panicked response with little to no supporting evidence of societal harm, one that could end up squelching innovation in America and handing the keys to the most important technology of our generation to a few companies that have proactively shaped the debate.
Section 5: The Politics of Intimidation
The Blumenthal/Hawley letter is a threat delivered to open source through what Adam Thierer calls a "nastygram." At some point, lawmakers will start putting pressure on providers and platforms that do business with, or host, open-source applications or models, and that is how you get to regulating open source without formally regulating open source.
Conclusion:
The threat of AI regulation looms over the open-source AI community. Senators Richard Blumenthal and Josh Hawley sent a letter to Meta CEO Mark Zuckerberg questioning the leak of Meta's popular open-source large language model, LLaMA, and several experts have expressed concern about machinations behind the scenes: proprietary software providers see open source as a form of competition, which makes it an easy target. This pressure could end up squelching innovation in America and handing the keys to the most important technology of our generation to a few companies that have proactively shaped the debate.
FAQ:
1. What is the open-source AI community?
The open-source AI community is the loose group of developers, researchers, and companies that share their code, models, and expertise freely.
2. What is an LLM?
A large language model (LLM) is a type of machine learning model, typically a large neural network trained on vast amounts of text, that can understand and generate human language. (A short code sketch after this FAQ shows what running an openly released LLM looks like in practice.)
3. Why are Blumenthal and Hawley concerned about LLaMA’s release?
The senators say they are concerned about LLaMA's potential for misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.
4. What is the concern about machinations behind the scenes?
Several experts are concerned that government officials and proprietary competitors are throwing open source under the bus; proprietary software providers see open source as a form of competition, which makes it an easy target.
5. What do experts propose instead of restricting access to AI models?
Christopher Manning, director of the Stanford AI Lab, strongly encouraged the government and AI community to work to develop regulations and norms applicable to all companies, communities, and individuals developing or using large AI models.
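To make the FAQ's definition of an LLM concrete, here is a minimal sketch of what "open" access to a model's weights means in practice. It assumes Python with the Hugging Face transformers library installed and uses GPT-2, a small, freely downloadable model, as a stand-in for larger open models like LLaMA; it illustrates the general technique and is not Meta's release process.

# Minimal sketch: generating text with an openly released language model.
# Assumes `pip install transformers torch`; GPT-2 is used here as a small,
# freely downloadable stand-in for larger open models such as LLaMA.
from transformers import pipeline

# Download the model weights and build a text-generation pipeline; once
# weights are public, this step works for anyone with an internet connection.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
result = generator(
    "Open-source AI matters because",
    max_new_tokens=40,       # cap the length of the generated continuation
    num_return_sequences=1,  # request a single sample
)
print(result[0]["generated_text"])

The point of the sketch is simple: once model weights are downloadable, anyone can run, study, or fine-tune them locally, which is exactly the kind of openness the senators' letter calls into question.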