Stretch Community News - November 2025
Happy holidays from Hello Robot!
We’ve got some very cool new work from the Stretch community this month, including new research papers from IROS, CoRL and more! CoRI helps robots explain their actions in clear natural language, DynaMem gives robots dynamic memory to track changing environments, and Stretch helps with a number of new tasks including 3D photogrammetric reconstruction.
Read on for more details! And if you’d like your work featured in a future newsletter, we’d love to hear from you! Drop us a line at community@hello-robot.com.
Cheers,
Aaron Edsinger
CEO - Hello Robot
Understanding what a robot is about to do can make all the difference in human-robot collaboration. Researchers at Carnegie Mellon University and the Honda Research Institute introduced CoRI, a system that translates a robot’s motion plans and visual data into clear, natural-language explanations, allowing people to anticipate and trust the robot’s next move. In studies of assistive tasks like feeding and bathing, CoRI significantly improved communication clarity and user confidence.
A team at the Technical University of Munich presents GOPLA, a new way for robots to learn object placement by combining semantic reasoning, spatial mapping, and diffusion-based planning. The framework transforms language and visual inputs into structured, 3D-aware plans, bridging human intuition with robotic precision.
Imagine a robot that never forgets its surroundings and continually updates its memory as objects move, appear, or disappear. Researchers from New York University and Meta created DynaMem, software that builds a dynamic spatio-semantic memory, letting a robot explore open spaces, answer language queries, and locate objects even in changing scenes. The result? A pick-and-drop success rate of ~70%, more than double that of previous static systems.
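For readers curious what a dynamic spatio-semantic memory might look like, here is a minimal, hypothetical Python sketch (not DynaMem's actual implementation): a voxel map that stores semantic embeddings with timestamps, so entries can be refreshed when objects appear and dropped when they vanish, and queried with a language embedding.

```python
import numpy as np

class SpatioSemanticMemory:
    """Toy dynamic spatio-semantic memory: maps voxels to semantic embeddings."""

    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        # (i, j, k) voxel index -> {"embedding": vector, "last_seen": time}
        self.voxels = {}

    def _key(self, xyz):
        return tuple((np.asarray(xyz) // self.voxel_size).astype(int))

    def observe(self, xyz, embedding, t):
        """Add or refresh an entry when an object is detected at xyz."""
        self.voxels[self._key(xyz)] = {"embedding": embedding, "last_seen": t}

    def forget(self, visible_keys, t):
        """Remove entries the robot can currently see but did not re-detect,
        so the map tracks objects that have moved or disappeared."""
        for key in list(self.voxels):
            if key in visible_keys and self.voxels[key]["last_seen"] < t:
                del self.voxels[key]

    def query(self, text_embedding):
        """Return the voxel whose stored embedding best matches a language query."""
        def score(entry):
            e = entry["embedding"]
            return float(np.dot(e, text_embedding) /
                         (np.linalg.norm(e) * np.linalg.norm(text_embedding) + 1e-8))
        if not self.voxels:
            return None
        return max(self.voxels.items(), key=lambda kv: score(kv[1]))
```

The class and method names here are illustrative only; the real system pairs this kind of memory with open-vocabulary perception and navigation on the robot.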
Researchers from the University of Tulsa are developing a framework for personal assistive robots that can recognize people, understand their preferences, and adapt to changing environments in real time. Using the Stretch 3 platform, the system combines deep learning, facial recognition, and conversational AI to deliver context-aware help, bringing us closer to truly intelligent, human-centered robotics.
Capturing real-world spaces for virtual experiences just got easier. Researchers from Oakland University and the US Army have developed an automated 3D reconstruction system using the Stretch 3 robot, equipped with a high-resolution camera and adaptive lighting to scan cluttered indoor environments with remarkable precision. Achieving up to 98% image alignment accuracy, this system streamlines photogrammetry for immersive VR, training, and digital preservation, bridging robotics and virtual reality like never before.
As the need for caregiving support grows, assistive robots like Stretch 3 are showing real promise in helping reduce caregiver strain. Researchers from the University of Illinois Urbana-Champaign and the University of Waterloo conducted hands-on trials with professional caregivers, during which Stretch successfully handled tasks like video calls and item delivery while earning high marks for usability and trust. This research highlights how human-centered robotics could soon play a meaningful role in everyday caregiving, enhancing care without adding burden.
In a powerful demonstration of how AI and robotics can connect people across generations and miles, seven South Florida high school students at Nova Southeastern University remotely operated a Stretch mobile robot located 3,250 miles away at Our Place Social Center in Oregon. Guided by healthcare professionals, the students used Stretch to hand out hydration candies and interact with older adults living with dementia, showcasing how emerging technologies like AI and assistive robotics are transforming healthcare, education, and human connection.
What if robots learned alongside students in real campus buildings instead of in sterile labs? A team at Northeastern University is doing just that. They deployed a helper robot nicknamed “Marlo” inside a residential building to navigate corridors, recognize human routines, and take on real-world tasks, with students coding and refining its behavior as they go.
Robots that truly understand their surroundings need to connect what they see with what’s happening. The new Event-Grounding Graph (EGG) framework, developed by Phuoc Nguyen, Francesco Verdoja, and Ville Kyrki, does just that, linking spatial features with dynamic events to help robots reason about actions in context. By grounding events like “washing a mug” to real-world objects and spaces, EGG enables robots to recall, interpret, and answer complex questions about their environments with human-like understanding.
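As a rough illustration only (not the authors' implementation), this kind of grounding can be sketched as event nodes connected by labeled edges to the object and place nodes they refer to; the names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str          # "event", "object", or "place"
    label: str         # e.g. "washing a mug", "mug", "kitchen sink"
    time: float = 0.0  # when an event node was observed

@dataclass
class EventGroundingGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (event_idx, target_idx, relation)

    def add_event(self, label, time, groundings):
        """Link an observed event to the objects and places it involves."""
        event_idx = len(self.nodes)
        self.nodes.append(Node("event", label, time))
        for target_idx, relation in groundings:
            self.edges.append((event_idx, target_idx, relation))
        return event_idx

    def events_about(self, label):
        """Answer questions like 'what happened to the mug?'."""
        targets = {i for i, n in enumerate(self.nodes)
                   if n.kind != "event" and n.label == label}
        return [self.nodes[e].label for e, t, _ in self.edges if t in targets]


graph = EventGroundingGraph()
graph.nodes += [Node("object", "mug"), Node("place", "kitchen sink")]
graph.add_event("washing a mug", time=12.5,
                groundings=[(0, "acted_on"), (1, "located_at")])
print(graph.events_about("mug"))  # ['washing a mug']
```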
Open-source tools aren’t just for coding; they’re powering the next wave of robotics. From community-built hardware to shared AI frameworks, this movement is making robots more accessible, flexible, and collaborative than ever. The article by Chris Paxton highlights Stretch AI as a leading example of how open-source innovation is shaping the future of intelligent, human-centered robots.
Finding misplaced or hidden items is second nature to humans, but a major challenge for robots. This new open-vocabulary search and retrieval framework from researchers at the University of Bonn, Germany, enables robots to locate objects even when maps are outdated or items have been moved. By combining spatial, semantic, and geometric reasoning, the system allows the Stretch SE3 robot to detect concealed spaces and plan new viewpoints, cutting search time by 68% while maintaining 100% navigation accuracy. It's a meaningful step toward affordable, intelligent service robots for everyday homes.