Embodied AI spins a pen and helps clean the living room in new research
Sure, AI can write sonnets and do a passable Homer Simpson Nirvana cover. But if anyone is going to welcome our new techno-overlords, they’ll need to be capable of something more practical — which is why Meta and Nvidia have their systems practicing everything from pen tricks to collaborative housework.
The two tech giants coincidentally both published new research this morning pertaining to teaching AI models to interact with the real world, basically through clever use of a simulated one.
Turns out the real world is not only a complex and messy place, but a slow-moving one. Agents learning to control robots and perform a task like opening a drawer and putting something inside might have to repeat that task hundreds or thousands of times. That would take days — but if you have them do it in a reasonably realistic simulacrum of the real world, they could learn to perform almost as well in just a minute or two.
Using simulators is nothing new, but Nvidia has added a layer of automation, applying a large language model to help write the reinforcement learning reward code that guides a naive AI toward performing a task better. They call it Evolution-driven Universal REward Kit for Agent, or Eureka. (Yes, it’s a stretch.)
Say you wanted to teach an agent to pick up and sort objects by color. There are lots of ways to define and code this task, but some might be better than others. For instance, should a robot prioritize fewer movements or lower completion time? Humans are fine at coding these, but finding out which is best can sometimes come down to trial and error. What the Nvidia team found was that a code-trained LLM was surprisingly good at it, outperforming humans much of the time in the effectiveness of the reward function. It even iterates on its own code, improving as it goes and helping it generalize to different applications.
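To make the idea concrete, here is a minimal sketch of the kind of reward function a code-trained LLM might propose for that pick-and-sort task. Everything here — the function name, the observation fields, and the weights — is a hypothetical illustration, not code from Nvidia's Eureka.

```python
def compute_reward(obs: dict) -> float:
    """Hypothetical dense reward for 'pick up and sort objects by color'.

    obs is assumed to contain:
      - 'gripper_to_object': distance (m) from gripper to the nearest unsorted object
      - 'object_to_bin':     distance (m) from the held object to its color-matched bin
      - 'correctly_sorted':  number of objects already in the right bin
      - 'step_time':         seconds elapsed this episode
    """
    # Encourage approaching an object, then carrying it toward the right bin.
    reach_reward = -1.0 * obs["gripper_to_object"]
    place_reward = -0.5 * obs["object_to_bin"]

    # Bonus for each correctly sorted object.
    sort_bonus = 5.0 * obs["correctly_sorted"]

    # Small time penalty: exactly the kind of trade-off (fewer movements vs. faster
    # completion) whose weighting is usually left to human trial and error.
    time_penalty = -0.01 * obs["step_time"]

    return reach_reward + place_reward + sort_bonus + time_penalty


example_obs = {"gripper_to_object": 0.2, "object_to_bin": 0.5,
               "correctly_sorted": 2, "step_time": 12.0}
print(compute_reward(example_obs))  # -0.2 - 0.25 + 10.0 - 0.12 = 9.43
```

The iteration the researchers describe would amount to generating many candidate functions like this, checking how well agents trained on each one actually do, and then rewriting the best candidates — tuning terms and weights automatically instead of leaving that to a human engineer.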
The impressive pen-spinning trick is only simulated, but it was created with far less human time and expertise than it would have taken without Eureka. Using the technique, agents also performed well on a set of other virtual dexterity and locomotion tasks. Apparently it can use scissors pretty well, which is… probably good.
Getting these actions to work in the real world — actually ‘embodying’ the AI — is, of course, another challenge entirely. But it’s a clear sign that Nvidia’s embrace of generative AI isn’t just talk.
New Habitats for future robot companions
Meta is hot on the trail of embodied AI as well, and it announced a couple of advances today, starting with a new version of its ‘Habitat’ dataset. The first version came out back in 2019: basically a set of nearly photorealistic, carefully annotated 3D environments that an AI agent could navigate around. Again, simulated environments are not new, but Meta was trying to make them a bit easier to come by and work with.
Version 2.0 followed later, with environments that were far more interactive and physically realistic. Meta had also started building up a library of objects that could populate these environments — something many AI companies have found worthwhile to do.
Now we have Habitat 3.0, which adds in the possibility of human avatars sharing the space via VR. That means people, or agents trained on what people do, can get in the simulator with the robot and interact with it or the environment at the same time.
It sounds simple, but it’s a really important capability. Say you wanted to train a robot to clean up the living room by bringing dishes from the coffee table to the kitchen and putting stray clothing items in a hamper. If the robot is alone, it might develop a strategy that could easily be disrupted by a person walking around nearby, perhaps even doing some of the work for it. But with a human or human-esque agent sharing the space, it can do the task thousands of times in a few seconds and learn to work with or around them.
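Schematically, that training setup looks something like the sketch below: a robot policy and a human avatar act in the same simulated scene at every step, so the robot’s experience always includes the person helping or getting in the way. Every class and function name here is a hypothetical stand-in, not Habitat 3.0’s actual API.

```python
import random

class SharedScene:
    """Toy stand-in for a simulated living room shared by a robot and a human avatar."""
    def __init__(self):
        self.dishes_on_table = 4

    def reset(self):
        self.dishes_on_table = 4
        return self.dishes_on_table

    def step(self, robot_action, human_action):
        # Either agent can carry a dish to the kitchen this step.
        if robot_action == "carry_dish" and self.dishes_on_table > 0:
            self.dishes_on_table -= 1
        if human_action == "carry_dish" and self.dishes_on_table > 0:
            self.dishes_on_table -= 1  # the human may do some of the work
        done = self.dishes_on_table == 0
        reward = 1.0 if done else 0.0
        return self.dishes_on_table, reward, done

def human_avatar_policy(_obs):
    # A scripted human who sometimes helps and sometimes just wanders around.
    return random.choice(["carry_dish", "walk_around"])

def robot_policy(obs):
    # Placeholder for the policy being trained.
    return "carry_dish" if obs > 0 else "wait"

# Because the scene is simulated, the robot can repeat the chore thousands of
# times, each time with the human behaving a little differently.
scene = SharedScene()
for episode in range(10_000):
    obs, done = scene.reset(), False
    while not done:
        obs, reward, done = scene.step(robot_policy(obs), human_avatar_policy(obs))
```

The point of the shared scene is that the robot’s learning signal comes from episodes where the human’s behavior varies, so whatever strategy it settles on has to tolerate interference and cooperation rather than assuming an empty room.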
They call the cleanup task ‘social rearrangement,’ and another important one ‘social navigation.’ This is where the robot needs to unobtrusively follow someone around in order to, say, stay in audible range or watch them for safety reasons — think of a little bot that accompanies someone in the hospital to the bathroom.
[Image: A Spot robot in the real world doing a pick-and-place task.]
A new database of 3D interiors they call HSSD-200 improves on the fidelity of the environments as well. They found that training in around a hundred of these high-fidelity scenes produced better results than training in 10,000 lower-fidelity ones.
Meta also talked up a new robotics simulation stack, HomeRobot, for Boston Dynamics’ Spot and Hello Robot’s Stretch. Their hope is that by standardizing some basic navigation and manipulation software, they will allow researchers in this area to focus on higher-level stuff where innovation is waiting.
Habitat and HomeRobot are available under an MIT license on their GitHub pages, and HSSD-200 is under a Creative Commons non-commercial license — so go to town, researchers.
Source: TechCrunch