Our research interests are in Inference, Modeling, Control, and Learning applied to Robotic Manipulation. We’re interested in the complex and exciting world of physical interactions, interactions that are fundamental to realizing the promise of robotics: a solution to some of society’s biggest challenges, including patient care, industrial automation, and disaster response.

Central to these challenges is a robot’s ability to control its environment through selective contact, and yet, despite its importance, manipulation is still an open problem. As robots touch their environment, they change it. The characteristic challenge in manipulation is that robots have to reason about and cause this change in partially unknown environments using noisy and incomplete sensory information. Our lab’s focus is to build agents that reason about this change intelligently.

Why is manipulation hard? Some important challenges include:

1- Hybrid and Multi-Modal Dynamics: The dynamics of the robot change as it comes into contact with its environment. Contact itself can further be complicated by sliding vs. sticking vs. separation interactions. The environment also changes due to the robot’s actions. Inferring the current state of the robot and its environment from partially observable sensory feedback is challenging. Planning and control through these complex physical interactions are also difficult.

Frictional interaction is a difficult phenomenon to model. For the interactions we can model, we find ourselves constantly trading off computational complexity against predictive accuracy. Unfortunately, there are also many interactions that we cannot model; for instance, some physical interactions are governed by physical properties we cannot observe (e.g. blocks in a Jenga tower). Our robots should learn and build representations that help them not only predict the changes their actions make to their environment, but also plan and control those changes.

2- Challenging Perception: As our robots move from structured environments into the real world, they have to learn to perceive their surroundings and make sense of their environment. They have to answer fundamental questions like: “What is the state representation of shoelaces as I try to tie them?” or “How do I measure success when scooping ice cream?” These challenges call for new tools that bridge computer vision, tactile sensing, and active perception, enabling the physics-based reasoning that is foundational to robotics in the real world.

3- Hardware and End-effectors: End-effectors are aptly named: they are the instruments at the “end” of the robot that “effect” change in the world. Despite advances in hardware, our robotic end-effectors are primitive compared to the dexterity and sensory feedback of our hands. Developing dexterous end-effectors with high-quality tactile sensing is an open problem, and knowing what to do with this additional dexterity and sensing is an equally important challenge.

Though contact is complex, it provides a wealth of information that our robots can use to better understand their world. The idea is that robots change their environment through contact. By monitoring these changes, robots can learn about their world and make inferences, and with this understanding they can better plan and control their physical interactions.

Our lab develops algorithms, models, and hardware that enable robots to intelligently and autonomously interact with and learn from their environment in the physical world. We use a combination of model-based and machine learning approaches, augmented with inference, planning, and control algorithms, to enable contact mastery.