Google is reportedly rolling out screen sharing and live video, two of Gemini's headline features. The Mountain View-based company first showed these capabilities at Google I/O 2024. Developed by Google DeepMind as part of Project Astra, they let the artificial intelligence (AI) chatbot respond in real time to questions about the user's device and surroundings, and they involve live processing of multisensory data. Google had previously said the new functionality would arrive by March. Notably, only Gemini Advanced subscribers can access these capabilities in the mobile apps at this time.
Google is launching new Gemini features.
Reddit user Kien_PS earlier posted a screenshot of the "Share screen with Live" function on the Bard (Gemini's former name) subreddit, which was first spotted by 9to5Google. On Sunday, the same user shared a video demonstrating how the feature works.
Separately, Google spokesperson Alex Joseph told The Verge that Gemini Live will soon gain new AI capabilities. In addition to screen sharing, Gemini will have real-time access to the device's camera and will be able to answer questions about anything the user points it at.
With this real-time information analysis, users can ask Gemini for outfit suggestions by showing it their closet, or point the camera at a landmark or a store while out and about. The screen-sharing ability, an enhanced version of the existing "Talk about the screen" feature, will let Gemini help the user navigate between different screens on their smartphone.
Both capabilities are coming to Gemini Live, which was made available to users last year and supports two-way live voice conversations. Google has previously said it wants to make Gemini more useful in real-time scenarios.
Notably, Gemini's live video capability is comparable to the real-time video functionality of the Ray-Ban Meta glasses and OpenAI's Advanced Voice Mode with vision in ChatGPT. As cloud servers grow more powerful and AI models and the infrastructure behind them improve, technology companies can now deliver faster inference for real-time applications.
Notably, only Gemini Advanced subscribers currently have access to these two capabilities. The company has not said whether or when they will be extended to the free tier. A Gemini Advanced subscription is included in the Google One AI Premium plan, which costs Rs. 1,950.