Abstract

Collocated tactile sensing is a fundamental enabling technology for dexterous manipulation. However, deformable sensors introduce complex dynamics between the robot, grasped object, and environment that must be considered for fine manipulation. Here, we propose a method to learn soft tactile sensor membrane dynamics that accounts for sensor deformations caused by the physical interaction between the grasped object and environment.
Our method combines the perceived 3D geometry of the membrane with proprioceptive reaction wrenches to predict future deformations conditioned on robot action. Grasped object poses are recovered from membrane geometry and reaction wrenches, decoupling interaction dynamics from the tactile observation model.
We benchmark our approach on two real-world contact-rich tasks: drawing with a grasped marker and in-hand pivoting. Our results suggest that explicitly modeling membrane dynamics achieves better task performance and generalization to unseen objects than baselines.

Membrane Deformation Visualization: The sensor membranes deform significantly as a result of their interaction with the grasped object and environment. (top) real deformation, (bottom) perceived deformation.
Control Pipeline: Given a measured state, our controller queries the membrane dynamics model with sampled actions to obtain predicted membrane states. The object pose is estimated from each predicted membrane state and compared with the desired pose to compute the cost associated with each sampled action. The costs are aggregated, and the resulting optimal action is executed by the robot.
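The control loop described above can be sketched as a simple sampling-based controller. This is a minimal illustration, not the paper's implementation: the function and parameter names (`dynamics_model`, `pose_estimator`, `num_samples`, `action_dim`) are hypothetical, and the cost here is a plain Euclidean distance between predicted and desired poses.

```python
import numpy as np

def sample_based_controller(state, dynamics_model, pose_estimator,
                            desired_pose, num_samples=32, action_dim=3,
                            rng=None):
    """Illustrative sampling-based controller (all names hypothetical).

    Samples candidate actions, queries the learned membrane dynamics
    model for the predicted membrane state under each action, estimates
    the grasped-object pose from each prediction, and returns the
    lowest-cost action.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample candidate actions uniformly from a bounded action space.
    actions = rng.uniform(-1.0, 1.0, size=(num_samples, action_dim))
    costs = []
    for action in actions:
        pred_state = dynamics_model(state, action)   # predicted membrane state
        pred_pose = pose_estimator(pred_state)       # estimated object pose
        costs.append(np.linalg.norm(pred_pose - desired_pose))
    # Execute the action with the lowest predicted cost.
    return actions[int(np.argmin(costs))]
```

In practice, each model query would consume the perceived membrane geometry and reaction wrenches described in the abstract; here both are stood in for by a generic `state` vector.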

Paper

In 6th Conference on Robotic Learning (CoRL 2022), Auckland, New Zealand (poster)

Video

Code