Our team shares the most interesting trends, themes, and observations from CoRL 2025.
The Conference on Robot Learning is an annual, premier international conference dedicated to presenting and discussing cutting-edge research in robot learning. Launched in 2017, it has quickly become a highly selective, single-track venue where the world's top researchers, practitioners, and industry experts converge to explore the increasingly important role of learning in robotics.
This year, the conference was held in Seoul, South Korea, in conjunction with the Humanoids conference. Needless to say, it was inspiring to see what experts in the world of robot learning have been cooking up, and rewarding for members of the Agility team to share their own world-class work.
Here are a handful of the top takeaways from CoRL 2025.
Dyna Robotics, Google DeepMind, and Franka Emika all had live demos of autonomous robot operation driven by end-to-end neural networks. Dyna’s robot folded clothes all day, and Google DeepMind’s Gemini-powered robot could follow spoken or written instructions, open drawers, and draw on a page. It’s powerful validation that these robots can travel across the globe and perform on demand in front of the public.
Beyond building credibility, it’s just really exciting to see the physical manifestation of this work in person. You can’t help but get excited for what’s happening in the industry. If you are an engineer on the sidelines, it’s time to join one of these companies; you won’t be disappointed. We are currently hiring for several positions.
Last year, very few robots moved around; this year, robots like the Unitree R1 were everywhere, cartwheeling and even boxing with conference attendees. Even though CoRL’s scope is much broader, you can’t deny the impact that humanoids are having on the industry. More investment, new companies, and a variety of approaches to hardware are bringing more robots into the world. The market for robot learning is booming, and humanoids are a big part of that.
People were using humanoids to perform various tasks, from interacting with rough terrain to venturing into the wild. The Best Student Paper award, in fact, went to VideoMimic, fantastic work that taught humanoid robots to perform tasks across different terrains by learning from video. As in our own work, whole-body control is a key focus for many companies, and they are making significant progress.
While large companies like Galaxea, NVIDIA, and Physical Intelligence discussed their vision-language-action models (VLAs), far fewer researchers presented language-based work, opting instead for real-world reinforcement learning projects like "steering your diffusion model" and whole-body control projects like VideoMimic.
At the conference, we saw several examples of work aimed at speeding up policy execution or using reinforcement learning to refine skill policies. Similarly, there was considerable excitement about UMI-style handheld data-collection tools (in the vein of the Universal Manipulation Interface), including a couple of companies that now sell such tools to robotics researchers and companies. There was also significant interest in using egocentric data for robot training.
With such a diverse group of experts, you would be right to assume there was plenty of debate on just about every topic. One point of consensus, though, was the need for significantly more data, spanning a variety of modalities. That explains the surge of interest in egocentric and UMI data, as well as robot teleoperation data. We are also seeing increased excitement around video models for generating robot data; think Veo 3 and similar tools.
The Data Pyramid