Google’s AI robots are learning from watching movies – just like the rest of us



Google DeepMind’s robotics team is teaching robots to learn the way a new human intern would: by watching a video. The team has published a new paper demonstrating how Google’s RT-2 robots, powered by the Gemini 1.5 Pro generative AI model, can absorb information from videos to learn how to navigate a space and even carry out requests at their destination.

Training a robot like a new intern is possible thanks to the Gemini 1.5 Pro model’s long context window, which lets the AI process extensive amounts of information at once. The researchers filmed a video tour of a designated area, such as a home or office; the robot then watched the video and learned about the environment.


Eric Hal Schwartz