Interestingly, Google’s AI search feature appears to be stuck repeating its most notorious mistake: suggesting glue for pizza. AI Overviews came under fire for advising users to add glue to pizza sauce to keep the cheese from sliding off, and even after that incident, it is still surfacing the same advice.
Google began testing its AI Overviews tool last year, initially requiring users to opt in before summaries would appear in their search results. But after its I/O developer conference last month, the company decided to roll the feature out to a wider audience. Since then, it has faced criticism for the AI serving unsuspecting users dangerous and inaccurate advice. Less well known than the glue-on-pizza example, AI Overviews has also recommended eating rocks and “adding more oil to a cooking oil fire.”
At the time, Google only partially took responsibility for the incident, attributing it in a blog post to the search queries that triggered the viral responses. “Not much online material thoughtfully addresses that query,” it stated, alluding to a user’s search for “the optimal quantity of rocks to consume.” Many of the widely shared screenshots, according to Google, were fake. After the uproar, however, the company did limit the share of search results in which AI Overviews appeared.
AI Overviews are now estimated to appear in 11% of Google search results, down from 27% in the early days of the feature’s deployment. It seems, though, that the company may not have addressed the underlying issue that sparked the entire dispute.
Former Google employee Colin McMillen noticed that the AI Overviews feature was repeating its most notorious mistake: suggesting glue on pizza. This time, the source was not some obscure decade-old forum thread, but news reports detailing Google’s own blunder. McMillen simply typed in “how much glue to add to pizza,” an absurd query of exactly the kind the company’s earlier justification was meant to cover.
We were unable to trigger an AI Overview with the identical search query. But Google’s featured snippet responded, citing a recent news piece and recommending “an eighth of a cup” in bold text. Featured snippets, in which the search engine highlights likely answers to common questions, are not driven by generative AI.
Large language models have long been known to produce bizarre or misleading text, and ChatGPT gained notoriety for its hallucinations soon after its introduction in 2022. But OpenAI was able to put effective guardrails in place. With Bing Chat (now Copilot), Microsoft went one step further, enabling the chatbot to conduct web searches and double-check its answers. This approach produced strong results, probably because GPT-4 exhibits “emergent behavior” that allows it to reason more reliably.
Google’s PaLM 2 and Gemini models, on the other hand, excel at writing and creative tasks but have struggled with factual accuracy, even when given access to web browsing.