From ExtremeTech: Google wasted no time releasing its Bard AI chatbot in early 2023 after Microsoft revealed ChatGPT-powered Bing, but maybe it should have taken a little more time. Bard launched with embarrassing glitches, and Google still finds itself apologizing for the AI. It's not that Bard is worse than other generative AIs, but Google itself is a hub of information. Shipping a product that makes things up is a new experience for Google, so anyone using Bard is advised to fact-check it with, you guessed it, a Google search.
Debbie Weinstein, the head of Google's UK operations, recently explained the reliability problem to the BBC. Bard is "not really the place that you go to search for specific information," Weinstein said. "We're encouraging people to actually use Google as the search engine to actually reference information they found."
Google labels Bard as an "experiment," just like the Search Generative Experience (SGE) it is testing in search results. Weinstein says Bard is best used for problem-solving and tinkering with new ideas. However, Google is where people go for answers, which makes this lying robot something of a liability.