Our lab develops inference, modeling, control, and learning methods for robotic manipulation: robots that can reliably change the world through purposeful contact. We focus on the messy, high-impact settings where manipulation matters most, from assistive care and industrial automation to field robotics in hazardous environments.

Manipulation remains an open problem because contact both reveals and alters the state of the world. As robots touch their environment, they must decide what to do while simultaneously inferring what is happening, under partial observability, noise, and complex physics. Our goal is to build agents that reason through interaction: they act to gain information, update beliefs, and use that understanding to plan and control physical outcomes.

Why is manipulation hard? Key challenges include:

  1. Contact-rich dynamics and uncertainty: Contact induces hybrid, discontinuous dynamics (stick–slip, impacts, frictional transitions) that are difficult to model and even harder to control robustly. Success requires uncertainty-aware estimation, planning, and control, as well as methods that transfer from simulation to the real world.
  2. Challenging perception for interaction: In unstructured environments, robots must build task-relevant state from multimodal, incomplete signals. They face questions like “What is the right representation of shoelaces while tying a knot?” or “What does success mean when scooping ice cream?” These problems demand new tools spanning vision, touch, force, and active perception, increasingly powered by self-supervised learning and physics-grounded reasoning.
  3. Hardware and end-effectors: End-effectors are where decisions become reality, yet today’s hands are still primitive compared to human dexterity and tactile acuity. Developing dexterous, robust hands with high-quality tactile sensing and controllers that can fully exploit that capability remains a central bottleneck.
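
To make the "hybrid, discontinuous dynamics" of item 1 concrete, here is a minimal, self-contained sketch of one-dimensional stick–slip: a block driven through a spring across a surface with Coulomb friction. The function name and all parameter values are illustrative choices for this sketch, not from any particular system; the point is that the dynamics switch between a discrete "stick" mode and a "slip" mode, which is what makes contact hard to model and control.

```python
def simulate_stick_slip(mu_s=0.6, mu_k=0.4, m=1.0, g=9.81, k=50.0,
                        v_drive=0.1, dt=1e-3, steps=20000):
    """Block on a frictional surface, pulled through a spring whose free
    end moves at constant speed. Hypothetical parameters: mu_s/mu_k are
    static/kinetic friction coefficients, k the spring stiffness."""
    x, v = 0.0, 0.0      # block position and velocity
    x_drive = 0.0        # position of the spring's driven end
    modes = []           # discrete mode at each step: "stick" or "slip"
    for _ in range(steps):
        x_drive += v_drive * dt
        f_spring = k * (x_drive - x)
        if abs(v) < 1e-6:
            # Stick mode: static friction balances the spring force
            # until it exceeds the breakaway threshold mu_s * m * g.
            if abs(f_spring) <= mu_s * m * g:
                v = 0.0
                modes.append("stick")
            else:
                # Breakaway: switch to kinetic friction and start slipping.
                a = (f_spring - mu_k * m * g * (1 if f_spring > 0 else -1)) / m
                v += a * dt
                modes.append("slip")
        else:
            # Slip mode: kinetic friction opposes the direction of motion.
            a = (f_spring - mu_k * m * g * (1 if v > 0 else -1)) / m
            v_new = v + a * dt
            if v * v_new < 0:   # velocity crossed zero: re-stick
                v_new = 0.0
            v = v_new
            x += v * dt
            modes.append("slip" if v != 0 else "stick")
    return x, modes
```

Even this toy system exhibits the discontinuity that frustrates smooth-dynamics assumptions: the mode sequence alternates between sticking and slipping, and a small change in applied force near the breakaway threshold produces a qualitatively different trajectory.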

Although contact is complex, it is also information-rich: by monitoring how the world changes under interaction, robots can infer latent properties (pose, friction, compliance) and improve their decisions. We build algorithms, models, and hardware that enable robots to learn from contact and control through contact, combining model-based methods with modern machine learning to achieve reliable autonomy in the physical world.
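
As one illustration of how interaction can be information-rich, the sketch below runs a discrete Bayes filter over a latent friction coefficient: the robot applies tangential pushes of known magnitude, observes whether the object slipped, and updates its belief after each probe. The slip model (slip when tangential force exceeds friction times normal force, softened by a sigmoid to stand in for sensing noise), the grid, and all numbers are illustrative assumptions, not a description of our specific methods.

```python
import math

def update_belief(belief, mu_grid, f_t, f_n, slipped, noise=0.05):
    """One Bayes update over a grid of friction hypotheses.
    Likelihood model (assumed): slip occurs when f_t > mu * f_n,
    with a sigmoid softening the threshold to model noise."""
    likes = []
    for mu in mu_grid:
        margin = f_t - mu * f_n
        p_slip = 1.0 / (1.0 + math.exp(-margin / (noise * f_n)))
        likes.append(p_slip if slipped else 1.0 - p_slip)
    post = [b * l for b, l in zip(belief, likes)]
    z = sum(post)
    return [p / z for p in post]

# Hypotheses for the friction coefficient, with a uniform prior.
mu_grid = [0.1 + 0.01 * i for i in range(91)]
belief = [1.0 / len(mu_grid)] * len(mu_grid)

# Simulated probes against an assumed true coefficient of 0.45.
true_mu, f_n = 0.45, 10.0
for f_t in [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]:
    slipped = f_t > true_mu * f_n      # ground-truth slip outcome
    belief = update_belief(belief, mu_grid, f_t, f_n, slipped)

# Maximum a posteriori estimate after six pushes.
mu_map = mu_grid[belief.index(max(belief))]
```

Each probe rules out part of the hypothesis space (no-slip outcomes push probability mass toward higher friction, slip outcomes toward lower), so the belief concentrates near the true coefficient, the same act-to-infer loop described above, in miniature.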