Google Search AI hallucinations push Google to hire “AI Answers Quality” engineers

AI, including AI Overviews on Google Search, can hallucinate, making things up or offering contradictory answers when the same question is asked in two different ways.

A new job listing suggests that Google is hiring engineers to verify AI answers and improve their quality, as it’s reimagining the search experience.

“In Google Search, we’re reimagining what it means to search for information – any way and anywhere. To do that, we need to solve complex engineering challenges and expand our infrastructure while maintaining a universally accessible and useful experience that people around the world rely on,” Google explained in the job listing.

The job title is “AI Answers Quality,” and those who join the Google Search team will be tasked with improving the quality of AI answers, specifically AI Overviews.

“Help the AI Answers Quality team deliver AI Overviews to users’ hard and complicated queries on the SRP and in AI Mode,” the job listing reads.

This is the first time Google has indirectly acknowledged that its AI Overviews need improvement, and the timing is interesting.

Google is forcing users to view AI answers, but it’s doing little to improve the quality

After the recent update, Google has been pushing more and more users to AI Mode and AI answers.

In fact, Google has also updated its Discover feed with AI Overviews for news and is even rewriting headlines of news publications using AI.

While Google AI Overviews have come a long way and answers are better than they were in the past, there are still some rough edges.

For example, a week ago, when I Googled the valuation of a certain startup, Google made up a figure of $4 million. Then, I opened another tab and asked the same question but worded it slightly differently, and AI Overviews said the company is valued at over $70 million.

I cross-verified Google’s answers against the links (citations) it generated, and neither of the figures Google cited appeared in the sources it allegedly referred to.

This is just one example of how Google can get things wrong.

Recently, The Guardian reported that AI Overviews offer health advice that is misleading or straight-up incorrect.

Google AI answers have improved over the past several months, but they still need to get better, as most users tend to believe everything Google shows.
