OpenAI CEO Sam Altman on AI Regulation in Europe
OpenAI CEO Sam Altman, whose company has become one of the most lucrative ventures in the rollout of artificial intelligence, has also worked to become one of the new figureheads for AI regulation. Altman managed to make a number of U.S. congresspeople smile and nod along, but he hasn’t found the same success in Europe. He has now been forced to clarify his company’s plans for operating outside the U.S.
Altman Voices Concerns about EU AI Regulations
During a stop in London, UK on Wednesday, Altman told a crowd that if the EU keeps on its current tack with its planned AI regulations, it will cause his company some serious headaches. He said, “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”
Altman Rolls Back Statement on Returning Home from World Tour
Altman rolled back that statement to some degree on Friday after returning home from his week-long world tour. He said that “we are excited to continue to operate here and of course have no plans to leave.”
AI Regulations in the US and EU
While the White House has issued some guidance on combating the risks of AI, the U.S. is still miles behind on any real AI legislation. There is some movement within Congress like the year-old Algorithmic Accountability Act, and more recently with a proposed “AI Task Force,” but in reality, there’s nothing on the books that can deal with the rapidly expanding world of AI implementation.
The EU, on the other hand, modified a proposed AI Act to take into account modern generative AI like ChatGPT. Specifically, that bill could have huge implications for how large language models like OpenAI’s GPT-4 are trained on terabytes upon terabytes of scraped user data from the internet. The ruling European body’s proposed law could label AI systems as “high risk” if they could be used to influence elections.
Big Tech Companies’ Efforts for AI Regulation
OpenAI isn’t the only big tech company wanting to at least seem like it’s trying to get in front of the AI ethics debate. On Thursday, Microsoft execs did a media blitz to explain their own hopes for regulation. Microsoft President Brad Smith said during a LinkedIn livestream that the U.S. could use a new agency to handle AI. It’s a line that echoes Altman’s own proposal to Congress, though he also called for laws that would increase transparency and create “safety breaks” for AI used in critical infrastructure.
Even with a five-point blueprint for dealing with AI, Smith’s speech was heavy on hopes but feather light on details. Microsoft has been the readiest of the major players to proliferate AI, all in an effort to get ahead of rivals like Google and Apple. Not to mention, Microsoft is in an ongoing multi-billion dollar partnership with OpenAI.
OpenAI’s Grant Program for AI Rules
On Thursday, OpenAI revealed it was creating a grant program to fund groups that could decide rules around AI. The fund would give out ten $100,000 grants to groups willing to do the legwork and create “proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.” The company said the deadline for this program is just a month away, on June 24.
OpenAI offered some examples of what questions grant seekers should look to answer. One example was whether AI should offer “emotional support” to people. Another question was if vision-language AI models should be allowed to identify people’s gender, race, or identity based on their images. That last question could easily be applied to any number of AI-based facial recognition systems, in which case the only acceptable answer is “no, never.”
Ethical Questions in AI Regulation
There are quite a few ethical questions that a company like OpenAI is incentivized to leave out of the conversation, particularly in how it decides to release the training data for its AI models. That goes back to the everlasting problem of letting companies dictate how their own industry can be regulated. Even if OpenAI’s intentions are, for the most part, driven by a conscious desire to reduce the harm of AI, tech companies are financially incentivized to help themselves before they help anybody else.
Conclusion
AI regulation is a complex issue that challenges the global tech industry. While AI ethics debates are ongoing, big tech companies like Microsoft and OpenAI have initiated efforts to play their part. OpenAI’s grant program for AI rules is a positive step toward finding a democratic approach to answering questions about the rules AI systems should follow. However, ethical questions around AI regulation still need to be addressed, particularly in how companies release the training data for their AI models. Even if these companies genuinely intend to reduce the harm of AI, meaningful rules and regulations would cut against an industry financially incentivized to keep developing ever more AI models.
FAQs
1. What is OpenAI?
OpenAI is an artificial intelligence research institute consisting of a for-profit company and a non-profit research group.
2. What is the proposed EU AI Act?
The EU modified a proposed AI Act to take into account modern generative AI like ChatGPT. Specifically, that bill could have implications for how large language models are trained on scraped data from the internet. The ruling European body’s proposed law could label AI systems as “high risk” if they could be used to influence elections.
3. What is OpenAI’s grant program for AI rules?
OpenAI is creating a grant program to fund groups that can decide rules around AI. It will give out ten $100,000 grants to groups willing to create “proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.”
4. What are some ethical questions around AI regulation?
There are quite a few ethical questions that need to be addressed in AI regulation, particularly in how companies release training data for their AI models. Tech companies are financially incentivized to help themselves before they help anybody else.
5. What is Microsoft’s role in AI regulation?
Microsoft has initiated efforts for AI regulation that echo Altman’s own proposal to Congress. Microsoft President Brad Smith said during a LinkedIn livestream that the U.S. could use a new agency to handle AI. Not to mention, Microsoft is in an ongoing multi-billion dollar partnership with OpenAI.