Anthropic's efforts against election misinformation

Published by: Nazarii Bezkorovainyi

16 February 2024, 10:04AM

Updated by: Alina Chernomorets

16 February 2024, 12:00AM

In Brief

Anthropic, a well-funded AI startup, is testing Prompt Shield, a technology aimed at steering users away from political misinformation when they use its GenAI chatbot, Claude.

Prompt Shield detects when users inquire about political topics and redirects them to authoritative voting information sources like TurboVote.

Anthropic acknowledges that Claude falls short on real-time political updates, which can lead to misinformation.

OpenAI has implemented similar measures for ChatGPT, directing users to reliable voting resources like CanIVote.org.

Despite proactive efforts by AI vendors, legislation regarding AI's role in politics remains absent, highlighting the importance of responsible AI use in elections.

Anthropic, the AI startup armed with hefty funding, is gearing up for the 2024 U.S. presidential race with a groundbreaking tech experiment. The company is trialing a system, aptly named Prompt Shield, aimed at steering users away from political pitfalls when they chat with its GenAI chatbot, Claude.

This ingenious tool, a blend of AI smarts and predefined rules, swoops in with a helpful pop-up whenever a Claude user based in the U.S. dares to dip into political waters. Instead of diving headfirst into potential misinformation, users are gently nudged towards reliable voting info on TurboVote, courtesy of Democracy Works.
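To make the mechanics a bit more concrete, here is a minimal sketch of how a rules-plus-classifier intervention of this kind could work: a simple keyword check stands in for the detection step, and a pop-up notice pointing to TurboVote stands in for the redirect. The keyword list, function names, and notice text below are illustrative assumptions, not Anthropic's actual implementation.

```python
# Minimal sketch of a "prompt shield"-style intervention (illustrative only,
# not Anthropic's implementation). A rule-based check stands in for the
# "blend of AI smarts and predefined rules"; when it fires for a U.S. user,
# the system shows a notice pointing to TurboVote instead of answering.

from dataclasses import dataclass

# Hypothetical keyword list; a production system would pair rules like these
# with a trained classifier.
ELECTION_KEYWORDS = {
    "vote", "voting", "ballot", "polling place", "election", "register to vote",
}

TURBOVOTE_NOTICE = (
    "For accurate, up-to-date voting information, please visit "
    "https://turbovote.org (run by Democracy Works)."
)


@dataclass
class ShieldDecision:
    triggered: bool
    notice: str | None = None


def looks_political(prompt: str) -> bool:
    """Crude rule-based detector for election-related prompts."""
    text = prompt.lower()
    return any(keyword in text for keyword in ELECTION_KEYWORDS)


def prompt_shield(prompt: str, user_in_us: bool) -> ShieldDecision:
    """Decide whether to interrupt the chat with a voting-info pop-up."""
    if user_in_us and looks_political(prompt):
        return ShieldDecision(triggered=True, notice=TURBOVOTE_NOTICE)
    return ShieldDecision(triggered=False)


if __name__ == "__main__":
    decision = prompt_shield("Where is my polling place this election?", user_in_us=True)
    if decision.triggered:
        print(decision.notice)  # show the pop-up instead of a model answer
```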

Why the need for Prompt Shield? Well, Anthropic admits Claude isn't exactly a political pundit: the chatbot falls short on real-time political updates, so it could serve up outdated or incorrect information.

We’ve had ‘prompt shield’ in place since we launched Claude — it flags a number of different types of harms, based on our acceptable user policy. We’ll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations … We’ve spoken to a variety of stakeholders including policymakers, other companies, civil society and nongovernmental agencies and election-specific consultants [in developing this].

An Anthropic spokesperson told TechCrunch.

Anthropic's move is just the latest in a series of efforts by AI vendors to keep their chatbots out of election trouble. OpenAI, for instance, has clamped down on political misuse of ChatGPT, while also directing users to legitimate voting resources like CanIVote.org.

Despite these proactive steps, Congress has yet to legislate on AI's role in politics. Still, with more countries heading to the polls in 2024 than ever before, responsible AI in the political arena has never been more crucial.

As the tech world braces for a whirlwind election season, it's clear that the race to combat misinformation is well and truly on. Whether through innovative tools like Prompt Shield or strict platform policies, one thing's for sure – the fight against political meddling in the digital age is heating up.
