Google used an event in Paris to unveil some of the latest AI advancements coming to its Search and Maps products.
The last-minute event was largely seen as a response to Microsoft’s integration of OpenAI’s models into its products. Just yesterday, Microsoft held an even more impromptu event where it announced that a new version of OpenAI’s ChatGPT chatbot – based on GPT-4 – will be integrated into the Edge browser and Bing search engine.
Google was expected to make a large number of AI announcements at its I/O developer conference in May. The event this week felt like a rushed and unpolished attempt by Google to remind the world (or, more likely, investors) that it’s also an AI leader and hasn’t been left behind.
OpenAI’s ChatGPT reportedly set off alarm bells at Google. At the invitation of CEO Sundar Pichai, Google’s founders – Larry Page and Sergey Brin – returned for a series of meetings to review the company’s AI product strategy.
In the wake of those meetings, it was allegedly decided that Google would speed up its AI review process so it can deploy solutions more quickly. Amid those reports, and Google’s firing of high-profile ethics researchers, many are concerned that the company will rush unsafe products to market.
Prabhakar Raghavan, SVP at Google, led proceedings. In his opening remarks, he stated that Google’s goal is to “significantly improve the lives of as many people as possible”. Throughout the event, various speakers seemed keen to push the narrative that Google won’t take risks.
“When it comes to AI, it’s critical that we bring models to the world responsibly,” said Raghavan.
Google Search
Search is Google’s bread and butter. The threat that a ChatGPT-enhanced Bing could pose appears to be what caused such alarm within the company.
“Search is still our biggest moonshot,” said Raghavan, adding: “the moon keeps moving.”
Google used this section to highlight some of the advancements it’s been making in the background that most users won’t be aware of. These include the use of zero-shot machine translation to add two dozen new languages to Google Translate over the past year.
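Google hasn’t published the details of that pipeline, but the general idea of a single multilingual model serving many translation directions can be sketched with open tooling. Below is a minimal, illustrative example using Meta’s open NLLB checkpoint via Hugging Face transformers; note that Google’s zero-shot approach learns new languages largely from monolingual text, which this snippet doesn’t reproduce.

```python
# Illustrative sketch only: Google has not published its zero-shot
# translation pipeline. This uses the open NLLB multilingual model to
# show one model serving many language directions.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",   # FLORES-200 language codes
    tgt_lang="lug_Latn",   # Luganda, a lower-resource target
)

print(translator("The age of visual search is here.")[0]["translation_text"])
```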
Another product that continues to be enhanced by AI is Google Lens, which is now used more than 10 billion times per month.
“The camera is the next keyboard,” explains Raghavan. “The age of visual search is here.”
Liz Reid, VP of Engineering at Google, took the stage to provide an update on what the company is doing in this area.
Google Lens is being expanded to support video content. A user can activate Lens, touch something they want to learn more about in a video clip (such as a landmark), and Google will bring up more information about it.
“If you can see it, you can search it,” says Reid.
Multi-search is another impressive visual search enhancement that Google showed off. The feature allows users to search with both an image and text so, for example, you could try and find a specific chair or item of clothing in a different colour.
Google was going to give a live demo of multi-search but awkwardly lost the phone. Fortunately, the company says that it’s now live globally so you can give it a go yourself.
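Google hasn’t disclosed how multisearch works under the hood. A common open-source analogue is composed image retrieval over a joint image-text embedding space such as CLIP’s: embed the query photo and the modifier text, combine them, and rank catalogue images by similarity. The file names below are placeholders.

```python
# Illustrative sketch only: Google has not disclosed multisearch internals.
# Baseline composed retrieval with CLIP: embed the query image and the
# modifier text into one space, combine, and rank catalogue images.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(img):
    feats = model.get_image_features(**processor(images=img, return_tensors="pt"))
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_text(text):
    feats = model.get_text_features(**processor(text=[text], return_tensors="pt", padding=True))
    return feats / feats.norm(dim=-1, keepdim=True)

# "This chair, but in green": average the image and text embeddings (a
# simple baseline; production systems train a dedicated fusion model).
query = embed_image(Image.open("chair.jpg")) + embed_text("a green chair")
query = query / query.norm(dim=-1, keepdim=True)

catalogue = [Image.open(p) for p in ["item1.jpg", "item2.jpg"]]
scores = torch.cat([embed_image(img) @ query.T for img in catalogue])
print(int(scores.argmax()))  # index of the best-matching catalogue item
```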
Few companies have access to the amount of information about the world and its citizens that Google does. Privacy arguments aside, it enables the company to offer powerful services that complement one another.
Reid says that users will be able to take a photo of something like a bakery item and ask Google to find a nearby place on Google Maps where they can buy an equivalent. Google says that feature is coming soon to images on mobile search results pages.
Bard
Prabhakar retook the stage to discuss Google’s response to ChatGPT.
Google’s conversational AI service is called Bard and it’s powered by LaMDA (Language Model for Dialogue Applications).
LaMDA is a model that’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. Instead of relying on pre-defined responses like older chatbots, LaMDA is trained on dialogue for more open-ended natural interactions and can deliver up-to-date information from the web.
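For context, the core of that 2017 Transformer architecture is scaled dot-product self-attention, which lets every token in a dialogue weigh every other token when building its representation. A minimal, self-contained sketch (not Google’s code):

```python
# Minimal sketch of scaled dot-product self-attention, the mechanism at
# the heart of the Transformer (Vaswani et al., 2017) that LaMDA builds
# on. Real models stack many multi-head attention layers.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model); w_*: (d_model, d_k) learned projections
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (k.shape[-1] ** 0.5)  # pairwise token similarity
    weights = F.softmax(scores, dim=-1)      # attention distribution
    return weights @ v                       # weighted mix of value vectors

x = torch.randn(4, 8)                        # 4 tokens, 8-dim embeddings
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([4, 8])
```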
In an example interaction, Prabhakar asked Bard what he should consider when buying a new car. He then asked for the pros and cons of an electric car. Finally, he asked Bard to help him plan a road trip.
Bard is now available to trusted testers but Prabhakar says that Google is going to check that it meets the company’s “high bar” for safety before a broader rollout.
The company says that it’s embracing NORA (No One Right Answer) for subjective questions like, “What is the best constellation to look for when stargazing?” In such instances, generative AI will be used to bring multiple viewpoints to results, which sounds similar to what Google News has been doing for some time to help address bias concerns.
Prabhakar goes on to highlight that the potential of generative AI extends far beyond text. The SVP notes that Google can use generative AI to create a 360-degree view of items like sneakers from just a handful of images.
Next month, Google will begin onboarding developers for its Generative Language API to help them access some powerful capabilities. Initially, the API will be powered by LaMDA. Prabhakar says that “a range of models” will follow.
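Google hadn’t published the API surface at the time of writing, so any example is speculative. Purely as a hypothetical sketch, a text-generation call might look something like the REST request below; the endpoint, parameters, and response shape are all placeholders, not the real API.

```python
# Hypothetical sketch only: Google had not published the Generative
# Language API at the time of writing. Endpoint, payload, and response
# fields below are invented placeholders for illustration.
import requests

API_KEY = "your-api-key"  # placeholder credential
ENDPOINT = "https://generativelanguage.example.com/v1/generateText"  # hypothetical

payload = {
    "model": "lamda",  # the event said LaMDA would power the API first
    "prompt": "What should I consider when buying a new car?",
    "temperature": 0.7,
}

resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json().get("text", ""))  # assumed response field
```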
Google Maps
Chris Phillips, Head of Google’s Geo Group, took to the stage to give an overview of some of the AI enhancements the company is bringing to Google Maps.
Phillips says that AI is “powering the next generation of Google Maps”. Google is using AI to fuse billions of Street View and real-world images to evolve 2D maps into “multi-dimensional views” that will enable users to virtually soar over buildings if they’re planning a visit.
However, most impressive is how AI is enabling Google to take 2D images of indoor locations and turn them into 3D views that people can explore. One example given of where this could be useful is checking out a restaurant ahead of a date to see whether the lighting and general ambience are romantic.
Additional enhancements are being made to ‘Search with Live View’ which uses AR to help people find things nearby like ATMs.
When searching for things like coffee shops, you can see if they’re open and even how busy they typically are, all from the AR view.
Google says that it’s making its largest expansion of Indoor Live View today, with the feature rolling out to 1,000 new airports, train stations, and shopping centres.
Finally, Google is helping users make more sustainable transport choices. Phillips says that Google wants to “make the sustainable choice, the easy choice”.
New Google Maps features for electric vehicle owners will help with trip planning by factoring in traffic, charge level, and energy consumption. Charging stop recommendations will be improved and a “Very fast” charging filter will help EV owners pick somewhere they can get topped up quickly and be on their way.
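Google hasn’t published how its EV routing weighs these factors. As a toy illustration only, a planner might score candidate charging stops by combining detour time with time spent charging, with a power threshold standing in for the “Very fast” filter:

```python
# Toy illustration only: Google has not published its EV routing model.
# Score candidate charging stops by detour time plus charging time, and
# apply a power threshold as a stand-in for the "Very fast" filter.
from dataclasses import dataclass

@dataclass
class Charger:
    name: str
    detour_min: float  # extra driving time to reach the charger
    kw: float          # charger power output

def stop_cost(c: Charger, needed_kwh: float) -> float:
    charging_min = needed_kwh / c.kw * 60  # time to add the needed energy
    return c.detour_min + charging_min

chargers = [Charger("A", 5, 50), Charger("B", 9, 150), Charger("C", 12, 350)]
very_fast = [c for c in chargers if c.kw >= 150]       # "Very fast" filter
best = min(very_fast, key=lambda c: stop_cost(c, 30))  # need ~30 kWh
print(best.name)
```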
Even more sustainable than EV driving is walking. Google is making walking directions more “glanceable” from your route overview. The company says that the feature is rolling out globally on Android and iOS over the coming months.
Prabhakar retakes the stage to highlight that Google is “25 years into search” but teases that in some ways it’s “only just beginning.” He goes on to say that more is in the works and the “best is yet to come.”
Google I/O 2023 just got that much more exciting.
(Photo by Mitchell Luo on Unsplash)