
Judge demands declaration and vetting of AI-generated content in court



The Use of AI in Legal Work Draws Concerns

Few lawyers would let an AI argue their case in a courtroom. However, one attorney recently let an AI language model supplement his legal research in a federal filing, prompting Judge Brantley Starr to add a requirement that any attorney appearing in his court attest either that no portion of a filing was drafted by generative artificial intelligence or, if it was, that the AI-drafted language was checked for accuracy by a human being.

Judge Starr Takes Steps to Prevent Future Debacles

Judge Brantley Starr recently added a mandatory certification requirement regarding generative artificial intelligence: lawyers must file on the docket a certificate attesting either that no portion of a filing was drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being. The new requirement came in response to an attorney who allowed ChatGPT, an AI language model, to supplement his legal research in a recent federal filing; the tool supplied him with six cases and supposedly relevant precedents, all of which were completely fabricated.

The Necessity of the Certification Requirement

In addition to the certification form for lawyers to sign, the memorandum accompanying Judge Starr's requirement offered a well-reasoned explanation of why the certification is necessary. It noted that while AI language models may have many uses in the law, legal briefing is not one of them: these platforms are prone to hallucinations and bias, making up quotes and citations and presenting them as fact. And while attorneys swear an oath to faithfully uphold the law and represent their clients, generative artificial intelligence takes no such oath. It holds no allegiance to any client, to the rule of law, or to the laws and Constitution of the United States, which makes its output unreliable.

Justify Your Use of AI

While this is just one judge in one court, it would not be surprising if others adopted the rule as their own. AI language models have the potential to be helpful technology, but their use in filings must be clearly declared and their output checked for accuracy.

FAQs

What is generative artificial intelligence?

Generative artificial intelligence refers to AI systems, such as large language models, that produce new text or other content in response to prompts rather than simply retrieving existing material.

Why is it necessary to certify that no portion of a filing was drafted by generative artificial intelligence?

Because these platforms are prone to hallucinations and bias: they make up quotes and citations and present them as fact. Unlike attorneys, who are sworn to faithfully uphold the law and represent their clients, generative artificial intelligence holds no allegiance to any client, to the rule of law, or to the laws and Constitution of the United States, which makes its output unreliable.

What happens if an attorney fails to certify that no portion of a filing was drafted by generative artificial intelligence?

It is not yet clear what the consequences would be if an attorney failed to file the certification. At a minimum, the attorney may have to justify any use of the technology to the court.

What forms of AI are included in this certification requirement?

ChatGPT, Harvey.AI, and Google Bard are examples of generative artificial intelligence mentioned in the certification requirement.

Is AI language model technology helpful in legal work?

AI language models may have many uses in the law, but legal briefing is not one of them. Because they are prone to fabrication and bias, they are unsuitable for drafting legal briefs without human review.

Conclusion

Judge Brantley Starr's mandatory certification requirement regarding generative artificial intelligence is a response to an attorney who let an AI language model supplement his legal research in a federal filing and received fabricated information in return. The requirement is necessary because generative artificial intelligence is prone to bias and hallucinations. While AI may have many uses in the law, its use in filings must be declared and its output checked for accuracy to ensure it does not undermine the rule of law.

