Combining Augmented Reality with Semi-autonomous, Lightmyography Based Control to Improve Usability of Prostheses
Published in IEEE Access, 2026
Amputees often struggle to adopt new prostheses for many reasons. One issue is the increasing complexity of control in modern prostheses and the steep learning curve that a new prosthetic control system can present. It is therefore important to design control interfaces that retain the simplicity of traditional myoelectric prostheses while harnessing the increased dexterity of modern robotic hands. This paper expands upon a semi-autonomous control system that combines human-in-the-loop control with lightmyography decoding, together with affordance-based filtering using object detection and IMU data. An augmented reality display built on the capabilities of the Apple Vision Pro gives the user useful information about the control system, aiming to reduce the mental workload associated with using the robotic hand. A user study showed that, for individuals who were open to using augmented reality, including the augmented reality display in the framework significantly lowered workload (p=0.0036) as measured by the NASA-TLX and significantly raised usability (p=0.0062) as measured by the System Usability Scale. The framework shows promise for the training of prosthetic systems and may reduce the chance of prosthesis rejection by lowering perceived workload. However, many people remain reluctant to use augmented or virtual reality headsets, finding them impractical, and everyday use of such headsets is still uncommon.
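The usability result above is based on the System Usability Scale (SUS), which maps ten 1–5 Likert responses to a 0–100 score using a fixed scoring rule (odd-numbered items are positively worded, even-numbered items negatively worded). As a minimal illustration of that standard scoring rule (not code from the paper), the computation looks like:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses (each 1-5), using the standard SUS scoring rule:
    odd items contribute (response - 1), even items (5 - response),
    and the sum is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Neutral answers (all 3s) land at the midpoint of the scale.
print(sus_score([3] * 10))  # → 50.0
```

Per-participant SUS scores computed this way are what a between-condition significance test (such as the p=0.0062 comparison reported above) would operate on.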
Recommended citation: B. Guan, Z. Wang, R. V. Godoy, M. Owen and M. Liarokapis, "Combining Augmented Reality with Semi-autonomous, Lightmyography Based Control to Improve Usability of Prostheses," in IEEE Access, doi: 10.1109/ACCESS.2026.3677129.
Download Paper