
Regulators prepare to tackle generative AI such as ChatGPT


Regulators face a challenging task in trying to control and monitor rapidly advancing artificial intelligence (AI). Generative AI, the technology behind tools such as ChatGPT, has raised significant privacy and safety concerns. Many worry that AI could disrupt the way businesses and societies operate, yet some regulators are still relying on old laws to govern the technology. The European Union is currently drawing up new AI rules that could serve as a benchmark for addressing the concerns raised by ChatGPT, the powerful AI software developed by OpenAI. In April, Europe’s national privacy watchdogs set up a task force to look into issues with ChatGPT. As regulators race to keep pace with the mass roll-out of AI, difficult questions lie ahead, including privacy and copyright concerns.

**New AI Rules in Development**

The European Union is at the forefront of developing new AI rules that regulators worldwide could use to control and monitor AI technology for privacy and safety. The generative AI technology behind powerful software such as ChatGPT has raised many concerns, including privacy, safety, and copyright issues. However, it could take several years for the legislation to become enforceable, leaving regulators to rely on existing laws in the meantime.

**Adaptation of Existing Rules**

In the absence of dedicated AI regulations, the main recourse for governments is to apply existing rules. For example, data protection laws, which were written to safeguard personal data, can be applied to AI software that processes such data. Similarly, regulations that were not designed specifically for AI still apply when safety concerns arise.

**Hallucinations and Serious Errors**

Generative AI models, such as those powering ChatGPT, are prone to errors, or “hallucinations,” presenting misinformation with apparent certainty. These errors could have serious consequences if banks or government departments use AI to speed up decision-making. Big tech companies, including Alphabet’s Google and Microsoft Corp., have previously shelved AI products deemed ethically dicey, such as those used in financial products.

**Issues with Content Produced and Data Fed**

Regulators aim to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce. Agencies in the US and Europe are being encouraged to interpret and reinterpret their mandates. One example is the US Federal Trade Commission’s (FTC) investigation of algorithms for discriminatory practices. In the EU, proposals for the bloc’s AI Act would require companies such as OpenAI to disclose any copyrighted material used to train their models.

**Legal Challenges**

Proving copyright infringement will not be easy, according to Sergey Lagodinsky, one of several politicians involved in drafting the EU proposals. Training a model, he said, is like reading hundreds of novels before writing one’s own: if a company copies someone else’s material and publishes it, that is one thing, but if it is not directly plagiarizing someone else’s material, what it trained on does not matter.

**Existing Laws and the AI Discrimination Issue**

In France, discrimination claims are usually handled by the Défenseur des Droits, but its lack of expertise in AI bias has prompted CNIL, the French data protection regulator, to take the lead on the issue. CNIL is considering using a GDPR provision that protects individuals from automated decision-making. Meanwhile, Britain’s Financial Conduct Authority is one of several state regulators that have been tasked with developing new AI guidelines.

**Dialogue Between Companies and Regulators**

As regulators struggle to keep up with the fast pace of technological advances, some in the industry have called for greater dialogue between regulatory bodies and corporate leaders. Such dialogue is essential to striking a balance between consumer protection and business growth.

**FAQs**

**What is AI technology?**

Artificial intelligence (AI) is the branch of computer science aimed at creating machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

**What are the concerns around AI technology?**

The concerns around AI technology mainly relate to privacy, safety, and copyright. Generative AI, for example, has raised worries about how the technology could disrupt the way businesses and societies operate.

**What are the existing laws governing AI technology?**

Existing laws governing AI technology include data protection laws, regulations governing safety concerns, and copyright laws.

**What are the new AI rules in the development stage?**

The European Union is currently at the forefront of developing new AI rules that could be used by regulators globally to control and monitor AI technology for privacy and safety concerns.

**Why is dialogue between companies and regulators necessary?**

Dialogue between companies and regulators is crucial to achieving balance between consumer protection and business growth.

