Introduction
The proliferation of AI-generated images and videos has led to the kind of digital information disaster AI researchers have warned against for years. These fabricated images and videos, called deepfakes, can be used maliciously to spread false information and cause chaos. Startups and established AI firms alike are racing to develop new deepfake detection tools in response.
The Deepfake Pentagon Fiasco
When an AI-generated image of an explosion outside the Pentagon proliferated on social media earlier this week, it provided a brief preview of the digital information disaster AI researchers have warned against for years. The image was clearly fabricated, but that didn’t stop several prominent accounts, including Russian state-controlled RT and the Bloomberg News impersonator @BloombergFeed, from running with it. Local police reportedly received frantic communications from people who believed another 9/11-style attack was underway. The ensuing chaos sent a brief shockwave through the stock market.
The Godfather of AI’s Warning
Earlier this year, computer scientist Geoffrey Hinton, referred to by some as the “Godfather of AI,” said he was concerned the increasingly convincing quality of AI-generated images could leave the average person unable “to know what is true anymore.”
Companies Racing to Find Detection Solutions
Startups and established AI firms are racing to develop new deepfake detection tools to stem the spread of false information. Some are focused on sussing out AI involvement in audio and video, while others concentrate more squarely on text generated by AI chatbots. In some cases, these detection systems seem to perform well, but tech safety experts fear the tools are still playing catch-up.
Companies Leading the Race to Detect Deepfakes
Here are some of the companies that are leading the race to detect deepfakes:
- Optic: Detecting AI involvement in audio and video
- Intel’s FakeCatcher: Detecting AI involvement in audio and video
- Fictitious.AI: Focusing more squarely on text generated by AI chatbots
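Text-focused detectors like the one Fictitious.AI is building generally look for statistical signatures that distinguish machine-generated prose from human writing. As a purely hypothetical illustration (not any vendor’s actual method), one commonly discussed signal is “burstiness”: human writing tends to mix short and long sentences, while chatbot output is often more uniform. A toy heuristic might score that variance:

```python
import statistics

def burstiness_score(text: str) -> float:
    """Toy proxy for 'burstiness': variance of sentence lengths in words.

    Human prose tends to vary sentence length more than chatbot output,
    so very uniform text yields a low score. This is an illustrative
    sketch only, far weaker than any production detector.
    """
    # Normalize terminal punctuation, then split into rough sentences.
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.pvariance(lengths)

uniform = "The cat sat here. The dog ran fast. The bird flew away."
varied = "Stop. The quick brown fox jumped over the lazy dog near the river. Why?"
assert burstiness_score(uniform) < burstiness_score(varied)
```

Real detectors combine many such signals (often including model-based perplexity estimates) and still struggle with false positives, which is part of why experts describe the field as playing catch-up.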
Conclusion
As AI-generated images and videos become increasingly convincing, it’s essential that we prioritize the development of AI deepfake detection tools. By doing so, we can prevent the spread of false information and preserve the integrity of information on the internet.
FAQs
What are deepfakes?
Deepfakes are AI-generated images and videos that can be used maliciously to spread false information and cause chaos.
Why are companies racing to develop deepfake detection tools?
Companies are racing to develop deepfake detection tools to prevent the spread of false information and preserve the integrity of information on the internet.
What are some companies leading the race to detect deepfakes?
Some companies leading the race to detect deepfakes include Optic, Intel (with FakeCatcher), and Fictitious.AI.
What is the potential impact of deepfakes?
If left undetected, deepfakes have the potential to spread false information and cause chaos, potentially leading to real-world harm or damage.