Google is enhancing its AI-powered Search with an updated Lens feature that lets users search with their camera and voice instead of typed queries. Users can point their camera at an object and ask a question out loud to get instant information.
For instance, if you’re visiting an aquarium and are curious about a specific fish, you can open Google Lens, hold the shutter button, and ask something like, “Why are they swimming together?” Google explained in a blog post on Thursday, October 3.
This voice-activated video recognition feature, first shown at Google’s I/O event earlier this year, is now available in English to Search Labs participants enrolled in the “AI Overviews and more” experiment on both Android and iOS. Voice search in Google Lens, by contrast, is already available globally in English to all Android and iOS users.
Google also announced a new feature aimed at shoppers that lets users point their camera at an item in the physical world and pull up details such as pricing, making shopping more interactive and seamless.
For users in the US, search result pages will begin to incorporate AI for better organization and more intuitive results.
What exactly is Google Lens? First introduced about seven years ago, Google Lens lets users search with images instead of text. It can recognize objects in photos, and among its uses is translating foreign text on signs and documents.
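Google Lens itself has no public developer API, but the kind of object recognition it performs can be sketched with Google’s Cloud Vision API, which offers comparable label detection. The snippet below is a minimal, illustrative sketch, not Lens’s actual implementation: the file name is hypothetical, and it assumes the google-cloud-vision client library is installed and Google Cloud credentials are configured.

```python
# Illustrative sketch of image-based object recognition, using the
# Cloud Vision API as a stand-in for Google Lens (which has no public API).
# Assumes google-cloud-vision is installed and credentials are configured.
from google.cloud import vision

def describe_image(path: str) -> None:
    """Print the objects and concepts Cloud Vision recognizes in an image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    # Label detection returns a ranked list of recognized entities.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.0%}")

describe_image("fish.jpg")  # hypothetical sample photo
```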
With more than 20 billion visual searches conducted every month via Google Lens, the feature is particularly popular among younger users aged 18 to 24, according to the company.
“The overarching goal is to make search more intuitive and effortless, allowing people to search in any context, wherever they are,” said Rajan Patel, Google’s vice president of search engineering and a co-founder of Google Lens, in comments shared by the Associated Press.