Google recently held I/O 2022, announcing a host of software updates during the event. Drawing on its progress in artificial intelligence, Google is adding new features across its products, including Google Maps, Google Meet, and Google Assistant. Keep reading to learn more about the features coming to these products in the near future.
Google Maps gets an immersive view
Google is adding a new feature to Maps called Immersive View. Google calls it a “whole new way to explore with Maps.” The company says Immersive View will help users experience “what a neighborhood, landmark, restaurant, or popular place looks like — and even feel like you’re there.”
Essentially, Google is using artificial intelligence to fuse billions of Street View and aerial images into a digital model of a location. The feature will also let users see how a landmark, building, or place looks at different times of the day. Alongside this, Google is adding features like eco-friendly routing and an improved Live View.
New features in Google Meet
First, Google Meet is getting a feature called Portrait Restoration, which uses Google AI technology to improve video quality. If a user is in a poorly lit environment or on a low-bandwidth connection, Google's AI will detect it and enhance the user's video feed.
Second, Google is adding a feature called Portrait Light, which lets users apply AI-simulated, studio-quality lighting to their video stream. Other additions to Google Meet include audio de-reverberation, live sharing, automated transcriptions, and new security protections.
No more saying “Hey Google”
Google is also launching new features for its voice assistant. Starting May 11, 2022, Nest Hub Max users in the US will be able to give Google Assistant commands without saying "Hey Google."
The feature is called Look and Talk, and it uses both facial and voice recognition to identify the user and execute voice commands. Google explained the technology behind the feature in an official blog post.
The company notes that "it takes six machine learning models to process more than 100 signals from both the camera and the microphone – such as proximity, head orientation, gaze direction, movement of lips, context awareness and intention classification – all in real time."
Apart from this, Google is also launching a brand-new Google Wallet for users in over 40 countries.