Many robotic systems rely on vision to interpret their environment, but vision-only perception often fails in constrained manipulation tasks such as opening cabinet doors. Handleless furniture, increasingly common in modern homes, adds further difficulty because the robot must insert its gripper into a narrow gap, which makes execution sensitive to perception and modeling errors. This work presents an autonomous door-opening framework for handleless cabinets that integrates visual, force, and tactile sensing to improve robustness and support failure recovery. Central to the approach is a correction method that uses information from unsuccessful actions to refine the camera parameters. The corrected parameters are then used to update the environment model, enabling replanning of a more reliable door-opening trajectory after a failure. The correction method is evaluated in simulation through a sensitivity analysis of key parameters and validated on a real UR5 robot equipped with a 3D camera, a force-torque sensor, and a tactile sensor.
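
The abstract does not spell out how the camera parameters are refined, so the following is only an illustrative sketch, not the authors' method: it assumes the correction can be expressed as a rigid transform fitted (here with a standard Kabsch/Procrustes step in NumPy) between contact points predicted from the vision-based model and the contacts actually measured by the force-torque and tactile sensors during the failed attempt. All function names and the example data are hypothetical.

import numpy as np

def estimate_correction(predicted_pts, measured_pts):
    """Least-squares rigid transform (Kabsch) that maps contact points
    predicted from the vision-based model onto the contact points
    actually sensed during failed insertion attempts."""
    P = np.asarray(predicted_pts, dtype=float)  # N x 3, from the camera/model
    Q = np.asarray(measured_pts, dtype=float)   # N x 3, from force/tactile data
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)           # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

def refine_extrinsics(T_base_cam, R, t):
    """Left-compose the correction with the camera extrinsic so that
    points observed by the camera re-project onto the sensed contacts."""
    T_corr = np.eye(4)
    T_corr[:3, :3], T_corr[:3, 3] = R, t
    return T_corr @ T_base_cam

# Hypothetical usage: after a failed insertion, collect corresponding
# predicted/measured contact points, correct the extrinsic, then rebuild
# the cabinet model and replan the opening trajectory from it.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_offset = np.array([0.004, -0.002, 0.003])   # a few mm of camera bias
    predicted = rng.uniform(-0.1, 0.1, size=(5, 3))
    measured = predicted + true_offset
    R, t = estimate_correction(predicted, measured)
    print(refine_extrinsics(np.eye(4), R, t))

In practice a failed insertion may yield only one or two contacts, in which case fewer degrees of freedom (e.g., a translational offset alone) would be estimated; the paper itself evaluates its correction method through a sensitivity analysis in simulation and on the real UR5 setup.
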
****
Šimundić, Valentin; Petrović, Luka; Džijan, Matej; Cupec, Robert
Framework for Robot Door Opening Based on Visual, Force and Tactile Integration //
IEEE Access, 14 (2026), 1-1. doi: 10.1109/access.2026.3655617
****