Google’s and Microsoft’s chatbots making up Super Bowl stats
1 min
Published by: Markus Ivakha
16 February 2024, 10:05PM
Google's Gemini chatbot, originally Bard, falsely reported that the 2024 Super Bowl had occurred, offering detailed but fictional statistics.
Microsoft's Copilot chatbot also mistakenly announced the Super Bowl's outcome, but with a different winner, highlighting AI's potential for error.
These mistakes underscore the limitations of artificial intelligence models that generate responses based on data patterns without real understanding.
Such incidents emphasize the need for caution in relying on AI for accurate information, as these systems can propagate errors found in their training data.
Google and Microsoft acknowledge that their AI technologies are imperfect and advise users to verify information provided by AI systems, pointing to the broader challenges and limitations of current AI models.
Google's Gemini chatbot, previously known as Bard, mistakenly claimed that the 2024 Super Bowl had already taken place, complete with made-up statistics. According to a discussion on Reddit, Gemini provided detailed results and performance statistics for a Super Bowl game that had not yet occurred, favoring the Kansas City Chiefs over the San Francisco 49ers.
In a similar vein, Microsoft's Copilot chatbot also falsely reported the game's outcome, but with the 49ers winning against the Chiefs, suggesting a final score of 24-21. This error reflects the limitations of current artificial intelligence models, which generate responses based on patterns learned from vast amounts of data without understanding truth or falsehood.
These AI models, including the one behind Copilot, which is built on the same OpenAI GPT-4 technology as ChatGPT, are designed to predict the most likely next text based on context and patterns in their training data. While they can produce text that seems coherent, they can also generate nonsensical or inaccurate statements, as the fictional Super Bowl results demonstrate.
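The "predict the likely next word" mechanism can be illustrated with a deliberately minimal sketch: a toy bigram model, a stand-in far simpler than the transformer networks behind Gemini and Copilot, but one that exhibits the same core behavior. It learns only which word tends to follow which in its training text, so it can emit locally fluent sequences while having no concept of whether the resulting claims are true.

```python
from collections import Counter, defaultdict

# Tiny, invented training corpus -- purely illustrative.
corpus = (
    "the chiefs won the super bowl . "
    "the 49ers won the super bowl . "
    "the chiefs played the 49ers ."
).split()

# Count word -> next-word frequencies (a bigram model).
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def generate(start, max_words=8):
    """Greedily emit the most frequent observed next word at each step."""
    words = [start]
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Every adjacent word pair in the output was seen in training, so the
# text looks statistically plausible -- but the model has no way to
# check the sentence against reality.
print(generate("the"))
```

The point of the sketch is that "plausible given the data" and "true" are different properties: the generator only ever optimizes the first, which is why a chatbot can confidently report a game that never happened.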
The incident highlights the critical importance of not overly relying on AI for factual information, as these models can propagate inaccuracies from their training data. Google and Microsoft acknowledge that their AI applications are not infallible and may commit errors, underscoring the need for users to verify AI-generated information. This example of misinformation about the Super Bowl serves as a reminder of the broader limitations and potential pitfalls of current AI technology.