The Rise of AI-Generated Hate: What Leaders Need to Know Before Diving In

The Impact of Generative AI on Organizations and Workplaces

When you hear the phrase “artificial intelligence,” it may be tempting to imagine the intelligent machines that are a mainstay of science fiction, or the apocalyptic technophobia that has fascinated humanity since Dr. Frankenstein’s monster. However, the kinds of AI rapidly being integrated into businesses around the world are not of this variety; they are very real technologies with a real impact on actual people.

While AI has already been present in business settings for years, the advancement of generative AI products such as ChatGPT, ChatSonic, Jasper AI and others will make these technologies dramatically easier for the average person to use. As a result, the American public is deeply concerned about the potential for abuse. A recent ADL survey found that 84% of Americans are worried that generative AI will increase the spread of misinformation and hate.

Leaders considering adopting this technology should ask themselves tough questions about how it may shape the future — both for good and ill — as we enter this new frontier. Here are three things I hope all leaders will consider as they integrate generative AI tools into organizations and workplaces.

Make trust and safety a top priority

While social media companies are used to grappling with content moderation, generative AI is being introduced into industries that have no previous experience dealing with these issues, such as healthcare and finance. Many of them may soon find themselves facing difficult new challenges as they adopt these technologies. If you are a healthcare company whose frontline AI-powered chatbot is suddenly rude or even hateful to a patient, how will you handle that?

For all of its power and potential, generative AI makes it easy, fast and accessible for bad actors to produce harmful content.

Over decades, social media platforms have developed a new discipline — trust and safety — to try to get their arms around thorny problems associated with user-generated content. Not so with other industries.

For that reason, companies will need to bring in trust and safety experts to guide their implementations. They’ll need to build internal expertise and think through the ways these tools can be abused. And they’ll need to invest in staff responsible for addressing abuse so they are not caught flat-footed when bad actors exploit these tools.

Establish high guardrails and insist on transparency

Especially in work or education settings, it is crucial that AI platforms have adequate guardrails to prevent the generation of hateful or harassing content.

For all their usefulness, AI platforms are not foolproof. Within a few minutes, for example, ADL testers recently used the Expedia app, with its new ChatGPT functionality, to create an itinerary of famous anti-Jewish pogroms in Europe and a list of nearby art supply stores where one could purchase spray paint, ostensibly to vandalize those sites.

Before adopting AI broadly, leaders should ask critical questions, such as: What kind of testing is being done to ensure that these products are not open to abuse? Which datasets are being used to construct these models? And are the experiences of communities most targeted by online hate being integrated into the creation of these tools?

Without transparency from platforms, there’s simply no guarantee these AI models don’t enable the spread of bias or bigotry.

Safeguard against weaponization

Even with robust trust and safety practices, AI can still be misused by ordinary users. As leaders, we need to encourage the designers of AI systems to build in safeguards against that kind of weaponization.

Unfortunately, for all of their power and potential, AI tools make it easy, fast and accessible for bad actors to produce harmful content at scale. They can produce convincing fake news, create visually compelling deepfakes and spread hate and harassment in a matter of seconds. AI-generated content could also contribute to the spread of extremist ideologies, or be used to radicalize susceptible individuals.

In response to these threats, AI platforms should incorporate robust moderation systems that can withstand the potential deluge of harmful content perpetrators might generate using these tools.
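For technical teams tasked with acting on this recommendation, here is a minimal sketch of what an automated pre-publication check might look like. It assumes access to OpenAI’s hosted moderation endpoint via its official Python client; the screening workflow around it is a hypothetical illustration, not a prescription for any particular platform.

```python
# Minimal sketch: screen generated output before it reaches a user.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; the surrounding workflow is hypothetical.
from openai import OpenAI

client = OpenAI()

def is_safe_to_publish(text: str) -> bool:
    """Return True if the moderation endpoint does not flag the text."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not response.results[0].flagged

draft = "...generated output to be screened..."
if is_safe_to_publish(draft):
    print(draft)
else:
    print("Content withheld pending human review.")
```

In practice, a flagged result should route to human reviewers rather than being silently discarded, so moderation teams can learn how the tool is being abused.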

Generative AI has almost limitless potential to improve lives and revolutionize how we process the endless amount of information available online. I’m excited about the prospects for a future with AI, but only with responsible leadership.

Conclusion

Generative AI has the potential to be a powerful tool in the workplace and beyond, but proper implementation requires a deep consideration of the risks and benefits. Leaders must prioritize trust and safety, insist on transparency, and safeguard against weaponization. With responsible leadership and careful planning, generative AI can be harnessed to improve lives and increase efficiency.

FAQ

What is generative AI?

Generative AI refers to technologies that can generate new content, such as text, images, or sounds. These technologies use machine learning algorithms to analyze existing data and generate new content in a way that mimics human creativity.
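For readers who want to see this concretely, here is a minimal sketch of text generation using the open source Hugging Face transformers library; the GPT-2 model and the prompt are illustrative choices, not any of the products discussed above.

```python
# Minimal sketch: generate a short text continuation with an open model.
# Assumes the `transformers` package (and a backend such as PyTorch)
# is installed; GPT-2 is an illustrative choice of model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
completions = generator(
    "Generative AI refers to",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(completions[0]["generated_text"])
```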

What are some examples of generative AI products?

Some examples of generative AI products include ChatGPT, ChatSonic, Jasper AI and others. These tools are primarily used to generate text, while other generative AI products can create images, audio and more.

Why are people concerned about the potential abuse of generative AI?

People are concerned that generative AI may be used to spread misinformation, hate, or harassment. Without proper safeguards in place, these tools may be used to target vulnerable communities or spread extremist ideologies.

What can leaders do to ensure responsible use of generative AI?

Leaders should prioritize trust and safety, insist on transparency, and safeguard against weaponization. They should also bring in experts on trust and safety to help implement these technologies and invest in staff who are responsible for addressing abuses.
