Google is finally explaining what the heck happened with its AI Overviews.
For those who aren't caught up, AI Overviews were introduced to Google's search engine on May 14, taking the beta Search Generative Experience and making it live for everyone in the U.S. The feature was supposed to give an AI-powered answer at the top of almost every search, but it wasn't long before it started suggesting that people put glue in their pizzas or follow potentially fatal health advice. While they're technically still active, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an answer from Google's robots.
In a blog post yesterday, Google Search VP Liz Reid clarified that while the feature underwent testing, "there's nothing quite like having millions of people using the feature with many novel searches." The company acknowledged that AI Overviews hasn't had the most stellar reputation (the blog is titled "About last week"), but it also said it discovered where the breakdowns happened and is working to fix them.
"AI Overviews work very differently than chatbots and other LLM products," Reid said. "They're not simply generating an output based on training data," but instead running "traditional 'search' tasks" and providing information from "top web results." In other words, she attributes errors not to hallucinations so much as to the model misreading what's already on the web.
"We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she continued. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice." In other words, because the robot can't distinguish between sarcasm and actual help, it can sometimes present the former as the latter.
Similarly, when there are "data voids" on certain topics, meaning not a lot has been written seriously about them, Reid said Overviews was accidentally pulling from satirical sources instead of legitimate ones. To combat these errors, the company has now supposedly made improvements to AI Overviews, saying:

- "We built better detection mechanisms for nonsensical queries that shouldn't show an AI Overview, and limited the inclusion of satire and humor content."
- "We updated our systems to limit the use of user-generated content in responses that could offer misleading advice."
- "We added triggering restrictions for queries where AI Overviews were not proving to be as helpful."
- "For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections."
All these changes mean AI Overviews probably aren't going anywhere soon, even as people keep finding new ways to remove Google AI from search. Despite social media buzz, the company said "user feedback shows that with AI Overviews, people have higher satisfaction with their search results," going on to talk about how dedicated Google is to "strengthening [its] protections, including for edge cases."
That said, it looks like there's still some disconnect between Google and users. Elsewhere in its post, Google called out users for "nonsensical new searches, seemingly aimed at producing erroneous results."
Specifically, the company questioned why someone would search for "How many rocks should I eat?" The point was to illustrate where data voids might pop up, and while Google said these questions "highlighted some specific areas that we needed to improve," the implication seems to be that problems mostly appear when people go looking for them.
Similarly, Google denied responsibility for several AI Overview answers, saying that "dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression" were faked.
There's certainly a tone of defensiveness to the post, even as Google spends billions on AI engineers who are presumably paid to find these kinds of mistakes before they go live. Google says AI Overviews only "misinterpret language" in "a small number of cases," but we do feel bad for anyone sincerely trying to up their workout routine who might have followed its "squat plug" advice.