Robots are extremely effective in environments like factory floors that are structured for them, and currently ineffective in environments like our homes that are structured for humans. The Personal Robotics Lab at The Robotics Institute at Carnegie Mellon University is developing the fundamental building blocks of perception, navigation, manipulation, and interaction that will enable robots to perform useful tasks in environments structured for humans.

The lab was founded by Prof. Siddhartha Srinivasa in 2006 with funding from Intel Pittsburgh and the Quality of Life Technology (QoLT) NSF Engineering Research Center.

Our current research focuses on two topics: Physics-based Manipulation and The Mathematics of Human-Robot Interaction. The two are heavily intertwined, both born of the goal of enabling robots to perform complex manipulation tasks with and around people.

Physics-based Manipulation: We focus on using physics in the design of actions, algorithms, and hands for manipulation. We are developing nonprehensile physics-based actions and algorithms that reconfigure clutter blocking the primary task by pulling, pushing, sweeping, and sliding it out of the way. We have also recently shown that a class of tactile localization problems can be formulated to be submodular. We also focus on functional gradient methods, which have a long history of success in physics. We have developed CHOMP, a functional gradient optimizer for robot motion planning, and variants like GSCHOMP that exploit the structure of manipulation problems. We are also working on building robustness into the design and algorithms of simple hands, embracing physics and underactuation to stabilize objects.
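Below is a minimal sketch of the functional gradient update at the heart of a CHOMP-style optimizer, written against a discretized trajectory. The names (chomp_style_update, obstacle_grad, step, lam) are illustrative placeholders, not the lab's implementation, and the obstacle cost gradient is assumed to be supplied by the caller.

    import numpy as np

    def chomp_style_update(xi, obstacle_grad, n_iters=100, step=0.05, lam=1.0):
        # xi            : (N, D) array of trajectory waypoints; endpoints held fixed
        # obstacle_grad : callable mapping an (N, D) trajectory to the (N, D)
        #                 gradient of a (hypothetical) obstacle cost
        N = xi.shape[0]
        # Finite-difference matrix K so that K @ xi approximates waypoint
        # velocities; A = K^T K is the metric on trajectory space.
        K = np.eye(N) - np.eye(N, k=-1)
        A = K.T @ K
        A_inv = np.linalg.inv(A)
        for _ in range(n_iters):
            grad = lam * (A @ xi) + obstacle_grad(xi)  # smoothness + obstacle terms
            grad[0] = grad[-1] = 0.0                   # keep start and goal fixed
            # Precondition by A^{-1}: spread each update smoothly along xi
            xi = xi - step * (A_inv @ grad)
        return xi

The key design choice in such methods is the A^{-1} preconditioner: a gradient acting on a single waypoint is propagated along the whole trajectory, so each update stays a smooth trajectory rather than a jagged pointwise change.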

The Mathematics of Human-Robot Interaction: We focus on formalizing human-robot interaction principles using machine learning, motion planning, and functional gradient algorithms. We have been working on enabling seamless and fluent human-robot handovers. We have developed a taxonomy of human and dog handovers, designed expressive grasps and motions, and used time-series analysis to learn the communication of intent; our JHRI paper summarizes this work. We are formalizing assistive teleoperation, framing it as a tightly coupled problem of prediction, which we solve with machine learning, and arbitration via policy blending, which we solve with control theory. We are also working on the online adaptation of teleoperation interfaces with kernel machines. Our latest work formalizes predictable and legible motion as inference problems, bringing together concepts from psychology, animation, and machine learning. We are presently working on generating legible motion via functional gradient optimization.
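As one illustration of the arbitration step, here is a minimal policy-blending sketch. The confidence measure (agreement between the user's command and the direction to each candidate goal) and the assistance policy (move straight toward the predicted goal) are simplifying assumptions standing in for the learned prediction and control-theoretic arbitration described above; blend, goals, and max_assist are hypothetical names.

    import numpy as np

    def blend(user_cmd, goals, ee_pos, max_assist=0.8):
        # user_cmd : (D,) velocity command from the teleoperation interface
        # goals    : (G, D) candidate goal positions
        # ee_pos   : (D,) current end-effector position
        # Prediction: score each goal by agreement between the user's command
        # and the direction toward that goal (a stand-in for learned prediction).
        dirs = goals - ee_pos
        dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        u = user_cmd / (np.linalg.norm(user_cmd) + 1e-9)
        scores = dirs @ u
        g = int(np.argmax(scores))
        confidence = float(np.clip(scores[g], 0.0, 1.0))
        # Arbitration: blend the user's command with an autonomous policy that
        # moves toward the predicted goal, weighted by prediction confidence.
        robot_cmd = dirs[g] * np.linalg.norm(user_cmd)
        alpha = max_assist * confidence
        return alpha * robot_cmd + (1.0 - alpha) * user_cmd

When the robot is unsure of the user's goal, alpha stays small and the user retains control; as confidence grows, the robot contributes more of the motion.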

We are also working on Manipulation Planning, extending randomized planners to constraint manifolds, and on Perception for Manipulation, where we have developed MOPED, an efficient object recognition and pose estimation system for manipulation.
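One common way to extend randomized planners to constraint manifolds is to project each newly sampled or extended configuration back onto the manifold before accepting it. The sketch below shows a Newton-style least-squares projection; project_to_constraint, constraint, and jac are hypothetical placeholders for the projection routine, a task-constraint function, and its Jacobian, not the lab's planner.

    import numpy as np

    def project_to_constraint(q, constraint, jac, tol=1e-4, max_iters=50):
        # q          : (n,) sampled configuration to pull onto the manifold
        # constraint : callable, (n,) -> (k,) constraint error, zero on the
        #              manifold {q : constraint(q) = 0}
        # jac        : callable, (n,) -> (k, n) Jacobian of the constraint
        for _ in range(max_iters):
            err = constraint(q)
            if np.linalg.norm(err) < tol:
                return q                            # q now lies on the manifold
            # Newton-style least-squares step toward the constraint manifold
            q = q - np.linalg.pinv(jac(q)) @ err
        return None  # projection failed; the planner rejects this sample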

We have a growing interest in expressive and legible motion, and are working on putting our robot into a theatrical play as a way of exploring how humans and robots can communicate stories through prosody and movement.

We integrate all of our algorithms on our technology testbed HERB, short for Home Exploring Robot Butler, a bimanual mobile manipulator composed of two Barrett WAM arms on a Segway base, equipped with a suite of image and range sensors. HERB serves both as a realistic testbed for our algorithms and as a focal point of our industry collaborations, with Intel's Embedded Communications Group, the Quality of Life Technology Center, Willow Garage, and Barrett Technology, among many others, and of our academic collaborations, with the University of Pittsburgh, the University of Washington in Seattle, Georgia Tech, and the Technical Universities of Munich, Berlin, and Aachen, among many others.

For more details on Personal Robotics, please contact Prof. Srinivasa (email: siddh@cs.cmu.edu).

HERB has a new CMU webpage!