Stretch Community News - June/July 2024


Welcome to the Hello Robot monthly community update!

Want to win your own Stretch 3 robot? Check out the PhyRC Challenge - a new competition for robotic caregiving, hosted by the Cornell EmPRISE lab. The winner will take home their very own Stretch! The deadline for registration is September 8th - don’t miss it!

Read on for more details on the PhyRC Challenge, as well as new policies for navigation and manipulation, improving grasping in warehouses, semantic placement, and much more! If you’d like to see your work featured in a future newsletter, please let us know!


Want a robot to navigate a cluttered room and fetch you something? SPIN is a new end-to-end learned policy from Carnegie Mellon University that navigates and manipulates using active vision and whole-body coordination, optimizing action and perception simultaneously for smooth, reactive motion in cluttered, dynamic environments.


The EmPRISE lab at Cornell University is hosting the PhyRC Challenge, a new competition to facilitate innovation in physical robotic caregiving. The competition has a simulation stage followed by a real-robot phase, and the winner of the Mobile Manipulation for Robot-assisted Bed Bathing track will take home their very own Stretch 3 robot! Make sure to register before the deadline on September 8th!


In a modern warehouse, robots must be able to pick diverse objects from dense, cluttered containers with a high degree of success. New work at the University of Washington introduces Interactive Visual Failure Prediction, a method that allows robots to visually evaluate and refine grasps in clutter during execution, outperforming policies trained with human supervision alone and reducing reliance on human intervention.


PoliFormer is an RGB-only indoor navigation agent, developed at the Allen Institute for AI, that is trained end-to-end with reinforcement learning at scale. Despite being trained purely in simulation, the transformer-based model generalizes to the real world without adaptation.


New work from Georgia Tech and the Allen Institute for AI focuses on Semantic Placement - a visual “common sense” task to predict natural locations where a given object could be placed in a scene. Large amounts of training data were collected by removing objects from natural scenes using inpainting, and the resulting prediction model was tested in simulation with a robot performing Embodied Semantic Placement.


Stretch appeared in three recent papers from the Verifiable Robotics group at Cornell - a framework that enables robots to automatically recover from assumption violations of high-level specifications during task execution, a control synthesis framework for performing collaborative tasks with heterogeneous multi-robot systems, and a method for online modification of Event-based Signal Temporal Logic specifications.


At the University of Bremen, researchers in Prof. Michael Beetz’s lab have been working on several projects with Stretch, including integrating the robot with their Virtual Research Building, a Blockly-based interface for visual programming, and a live pick-and-place demo at the recent Bremen AI Days.

