What I Work On
Research
My research focuses on building Physical AI: robot systems that integrate embodiment, sensing, and control for real-world environments.
🤖 Physical AI for real-world inspection and automation
I develop end-to-end robotics systems that bridge the gap between AI capabilities and deployment, encompassing perception, task specification, control, and evaluation under real-world constraints (including latency, safety, partial observability, and imperfect calibration).
🖐 Dexterous manipulation in clutter
My focus is on robust grasping and manipulation of unknown objects in clutter, combining vision with proprioceptive feedback and practical sensing, and aiming for performance that does not rest on fragile assumptions such as precise object models or perfect calibration.
🤝 Human–robot interaction for close-proximity tasks
I study interaction where humans and robots share space, timing, and intent. This includes designing interfaces and behaviors that are predictable, safe, and efficient for users, especially in tasks that require continuous adjustment rather than pre-scripted motions.
🎮 Teleoperation and shared control
I design teleoperation pipelines that let a human operator provide high-level intent while the robot handles timing, collision avoidance, and local motion generation. The goal is reliable task execution, not just demonstrations.
⌚ Wearable sensing and biosignal-driven intent (EMG and beyond)
I use wearable sensing, especially EMG, to infer user intent for robot control and assistance. This includes signal processing, intent decoding, and control interfaces that remain usable under day-to-day variability (electrode placement, fatigue, motion artifacts).
🗣 Vision-language models for robot locomanipulation
I study how vision-language models can support robot locomanipulation by grounding task-relevant language into perception and action in real-world environments. This includes identifying objects and affordances in clutter, guiding shared autonomy, and supporting manipulation on mobile platforms under partial observability, latency constraints, and changing task conditions.
♿ Assistive human–robot interaction
I develop assistive human–robot interaction methods that translate human intent into reliable robot assistance through adaptive interfaces, shared autonomy, and real-time perception. This includes collaborative work on assistive quadruped robots for mobility support, where onboard sensing and intention-aware control provide context-sensitive assistance in everyday environments. I am also interested in how vision-language models can strengthen these systems through better scene interpretation, user adaptation, and natural interaction with robotic assistants.
📷 Multimodal perception for manipulation
I integrate multiple sensing streams (vision, motion capture when available, robot proprioception, and task context) to support manipulation pipelines that are resilient to occlusion, clutter, and changing lighting.
🐕 Locomanipulation on mobile platforms
I extend manipulation to mobile platforms (e.g., legged robots) where navigation and manipulation must be solved in the same loop. This requires whole-body planning and control, as well as interfaces that maintain system stability while interacting with the environment.
🔄 Robot self-learning in the real world
In the long term, I envision robots that continually improve through their own experience, collecting data from interactions, detecting failure modes, and adapting to new environments with minimal human intervention. The emphasis is on safety, repeatability, and measurable gains over time.