Microsoft’s Bing chatbot recently cited a fake news article claiming that Google’s Bard chatbot had been decommissioned. The article, which cited a tweet showing Bard’s own response and a joke comment on Hacker News, was completely false. The incident does, however, highlight a broader concern that Big Tech’s hastily launched AI chatbots could significantly degrade the web’s information ecosystem.
The situation is an early warning sign of an AI misinformation telephone game, in which chatbots fail to identify reliable news sources, misread stories about themselves, and misreport their own capabilities. The whole ordeal started with a single joke comment on Hacker News, which raised the question of what someone with malicious intent could accomplish if they actually wanted these systems to fail.
The inability of AI language models to distinguish between fact and fiction risks unleashing a wave of misinformation and mistrust across the web, creating a haze that is difficult to trace or refute authoritatively. These chatbot launches represent a clear grab for market share, and while the companies may claim their chatbots are experiments rather than search engines, that is a flimsy defense: Microsoft has integrated its chatbot directly into Bing search, and both systems answer questions as though they were authoritative sources.
There have already been numerous reports of AI chatbots spreading misinformation, inventing stories, or citing non-existent books. The current case shows that chatbots are now even citing one another’s errors, compounding the problem. This is a worrying trend, and Big Tech must urgently prioritize safety over market share.
The sources for this piece include an article in The Verge.