Research
My research focuses on building Physical AI: robot systems that integrate embodiment, sensing, and control so they can operate in real-world environments such as labs, industrial sites, and everyday settings.
Physical AI for real-world inspection and automation
I develop end-to-end robotics systems that bridge the gap between AI capabilities and deployment, encompassing perception, task specification, control, and evaluation under real-world constraints (including latency, safety, partial observability, and imperfect calibration).
Dexterous manipulation in clutter
My focus is on robust grasping and manipulation of unknown objects in clutter, combining vision with proprioceptive feedback and practical sensing, and aiming for performance that does not depend on fragile assumptions such as accurate object models or unobstructed views.
Human–robot interaction for close-proximity tasks
I study interaction where humans and robots share space, timing, and intent. This includes designing interfaces and behaviors that are predictable, safe, and efficient for users, especially in tasks that require continuous adjustment rather than pre-scripted motions.
Teleoperation and shared control
I design teleoperation pipelines in which a human operator provides intent while the robot handles timing, collision avoidance, and local motion generation. The goal is reliable task execution, not one-off demonstrations.
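To make the division of labor concrete, here is a minimal sketch of one blending scheme in this spirit: the operator supplies a velocity command, and the robot layers in a local repulsive correction near obstacles. The gains and the potential-field avoidance term are illustrative stand-ins, not a description of a specific deployed pipeline.

```python
import numpy as np

def shared_control_step(v_operator, ee_pos, obstacles,
                        d_safe=0.15, k_rep=0.5, alpha=0.8):
    """One control tick of a simple shared-control blend.

    v_operator : (3,) velocity the operator is commanding
    ee_pos     : (3,) current end-effector position
    obstacles  : list of (3,) obstacle positions
    Returns a commanded end-effector velocity. All gains are illustrative.
    """
    v_avoid = np.zeros(3)
    for obs in obstacles:
        diff = ee_pos - np.asarray(obs)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d_safe:
            # Repulsive velocity grows as the end effector nears the obstacle.
            v_avoid += k_rep * (1.0 / d - 1.0 / d_safe) * diff / d
    # The operator keeps authority over intent; the robot adds local safety.
    return alpha * v_operator + v_avoid

# Example: the operator pushes +x toward a nearby obstacle; the command bends away.
v_cmd = shared_control_step(np.array([0.1, 0.0, 0.0]),
                            ee_pos=np.array([0.40, 0.0, 0.30]),
                            obstacles=[np.array([0.50, 0.05, 0.30])])
```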
Wearable sensing and biosignal-driven intent (EMG and beyond)
I use wearable sensing, especially EMG, to infer user intent for robot control and assistance. This includes signal processing, intent decoding, and control interfaces that remain usable under day-to-day variability (electrode placement, fatigue, motion artifacts).
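As a concrete example of the signal path, here is a classic envelope pipeline (band-pass, rectify, low-pass) followed by a deliberately simple nearest-template decoder. The cutoffs and the decoder are textbook placeholders, not tuned settings from my systems; they serve as the baseline a learned decoder has to beat under day-to-day variability.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0):
    """Classic surface-EMG envelope: band-pass, rectify, low-pass.

    raw : (n_samples, n_channels) raw EMG. Cutoffs are textbook values
    (20-450 Hz band, ~6 Hz envelope), not tuned settings.
    """
    b, a = butter(4, [20.0, 450.0], btype="band", fs=fs)
    rectified = np.abs(filtfilt(b, a, raw, axis=0))
    b, a = butter(4, 6.0, btype="low", fs=fs)
    return filtfilt(b, a, rectified, axis=0)

def decode_intent(envelope, templates):
    """Nearest-template decoding over mean-absolute-value features:
    a stand-in for a learned decoder, useful as a robustness baseline."""
    feature = envelope.mean(axis=0)  # one MAV feature per channel
    return min(templates, key=lambda k: np.linalg.norm(feature - templates[k]))

# Example with synthetic data: 1 s of 8-channel noise against stored templates.
rng = np.random.default_rng(0)
env = emg_envelope(rng.standard_normal((1000, 8)))
label = decode_intent(env, {"rest": np.zeros(8), "grasp": np.ones(8)})
```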
Multimodal perception for manipulation
I integrate multiple sensing streams (vision, motion capture when available, robot proprioception, and task context) to support manipulation pipelines that are resilient to occlusion, clutter, and changing lighting.
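A toy version of the fallback logic this implies is sketched below, with a confidence-weighted average standing in for a proper estimator (an EKF or factor graph in a real system); the interface and threshold are hypothetical.

```python
import numpy as np

def fuse_object_estimate(vision_pos, vision_conf, predicted_pos, conf_floor=0.2):
    """Confidence-weighted fusion of a vision detection with an estimate
    propagated from proprioception and task context. A real pipeline would
    track full uncertainty (e.g., an EKF); this shows only the fallback logic.

    vision_pos    : (3,) detection, or None when the object is occluded
    vision_conf   : detector confidence in [0, 1]
    predicted_pos : (3,) estimate carried forward from the last fused state
    """
    if vision_pos is None or vision_conf < conf_floor:
        return predicted_pos  # occluded or unreliable: coast on the prediction
    w = vision_conf
    return w * np.asarray(vision_pos) + (1.0 - w) * np.asarray(predicted_pos)

# Under occlusion, the prediction carries the estimate through.
est = fuse_object_estimate(None, 0.0, predicted_pos=np.array([0.3, 0.1, 0.05]))
```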
Locomanipulation on mobile platforms
I extend manipulation to mobile platforms (e.g., legged robots), where navigation and manipulation must be solved in the same loop. This requires whole-body planning and control, as well as controllers and interfaces that keep the system stable while it interacts with the environment.
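One way to read "the same loop": resolve a desired end-effector velocity jointly over base and arm, so base motion is recruited only when the arm alone cannot achieve the task. Below is a minimal weighted least-squares sketch of that idea; the Jacobians, weights, and damping are illustrative, and a full whole-body controller would also handle limits, balance, and contact.

```python
import numpy as np

def resolve_whole_body(J_base, J_arm, v_ee_des, w_base=3.0, damping=1e-3):
    """Split a desired end-effector twist across base and arm joints.

    Solves min_u u^T W u  s.t.  [J_base J_arm] u ~= v_ee_des, with a larger
    weight on base coordinates so the base stays quiet when the arm suffices.
    Illustrative only: no joint limits, balance, or contact constraints.
    """
    J = np.hstack([J_base, J_arm])                    # (6, n_base + n_arm)
    n_base = J_base.shape[1]
    w = np.concatenate([np.full(n_base, w_base), np.ones(J_arm.shape[1])])
    Winv = np.diag(1.0 / w)
    JWJt = J @ Winv @ J.T + damping * np.eye(J.shape[0])
    u = Winv @ J.T @ np.linalg.solve(JWJt, v_ee_des)  # damped weighted pinv
    return u[:n_base], u[n_base:]                     # base DOFs, arm joints

# Example: planar base (x, y, yaw) plus a 6-DOF arm, random Jacobians.
rng = np.random.default_rng(1)
v_base, qdot = resolve_whole_body(rng.standard_normal((6, 3)),
                                  rng.standard_normal((6, 6)),
                                  np.array([0.05, 0, 0, 0, 0, 0.1]))
```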
Robot self-learning in the real world
In the long term, I envision robots that continually improve through their own experience, collecting data from interactions, detecting failure modes, and adapting to new environments with minimal human intervention. The emphasis is on safety, repeatability, and measurable gains over time.