Google’s Robotics Push: Two New AI Models Unveiled
On Wednesday, Alphabet's Google unveiled two new artificial intelligence (AI) models built on its Gemini 2.0 framework and designed specifically for robotics applications, positioning the company to meet the demands of the fast-evolving robotics sector.
Recent advancements in AI and machine learning have significantly accelerated the commercialisation of robots, particularly in industrial settings, according to experts.
The launch comes just a month after robotics startup Figure AI ended its partnership with OpenAI, a decision that followed a key AI breakthrough of its own in robotics.
Google CEO Sundar Pichai also expressed his excitement about the launch.
Gemini Robotics & Gemini Robotics-ER
Google’s Gemini Robotics is an advanced vision-language-action model that produces physical actions as outputs, enabling robots to carry out tasks directly.
The second model, Gemini Robotics-ER (embodied reasoning), gives robots a deeper understanding of their surroundings and allows developers to implement custom programs using the reasoning capabilities of Gemini 2.0.
These models are versatile, supporting various robot types, from humanoids to those used in factories and warehouses.
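Developers typically reach Gemini models through Google's Gen AI SDK. The sketch below is illustrative rather than taken from Google's announcement: it shows how a robot's control stack might ask a Gemini Robotics-ER-style model to reason about a manipulation task before handing the result to its own motion planner. The model identifier, prompt, and parsing step are assumptions for the example.

```python
# Illustrative sketch: querying a Gemini model for embodied reasoning about a scene,
# using the google-genai Python SDK. The model name "gemini-robotics-er" is a
# placeholder, not a confirmed identifier from the announcement.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

prompt = (
    "A table holds a red mug about 30 cm to the left of the robot gripper and "
    "an empty tray about 20 cm to the right. List, as short numbered steps, "
    "how to pick up the mug and place it on the tray."
)

response = client.models.generate_content(
    model="gemini-robotics-er",  # placeholder model name for illustration
    contents=prompt,
)

# The model returns a plain-text plan; the developer's own control stack would
# parse these steps and map them onto the robot's motion primitives.
plan_steps = [line for line in response.text.splitlines() if line.strip()]
for step in plan_steps:
    print(step)
```

In a setup like this, the language model supplies the high-level plan and spatial reasoning, while the robot-specific code remains responsible for perception, safety checks, and low-level motion execution.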
By leveraging AI models like Gemini, robotics startups, often constrained by limited resources, can reduce development costs and accelerate time-to-market.
Google has tested its Gemini Robotics model on ALOHA 2, a bi-arm robotics platform, but the model can also be adapted to more complex platforms, such as Apptronik’s Apollo humanoid robot.
Apptronik recently raised $350 million to scale its AI-powered humanoid robots, with Google participating in the funding.
While Google once owned robotics leader Boston Dynamics, it sold the company to SoftBank Group Corp in 2017.