2023
Journal Articles
1.
Pinardi, Mattia; Noccaro, Alessia; Raiano, Luigi; Formica, Domenico; Di Pino, Giovanni
Comparing end-effector position and joint angle feedback for online robotic limb tracking Journal Article
In: PLOS ONE, vol. 18, no. 6, pp. e0286566, 2023, ISSN: 1932-6203, (Publisher: Public Library of Science).
@article{pinardi_comparing_2023,
title = {Comparing end-effector position and joint angle feedback for online robotic limb tracking},
author = { Mattia Pinardi and Alessia Noccaro and Luigi Raiano and Domenico Formica and Giovanni {Di Pino}},
url = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0286566},
doi = {10.1371/journal.pone.0286566},
issn = {1932-6203},
year = {2023},
date = {2023-06-01},
urldate = {2023-06-01},
journal = {PLOS ONE},
volume = {18},
number = {6},
pages = {e0286566},
abstract = {Somatosensation greatly increases the ability to control our natural body. This suggests that supplementing vision with haptic sensory feedback would also be helpful when a user aims at controlling a robotic arm proficiently. However, whether the position of the robot and its continuous update should be coded in an extrinsic or intrinsic reference frame is not known. Here we compared two different supplementary feedback contents concerning the status of a robotic limb in a 2-DoF configuration: one encoding the Cartesian coordinates of the end-effector of the robotic arm (i.e., Task-space feedback) and another encoding the robot joint angles (i.e., Joint-space feedback). Feedback was delivered to blindfolded participants through vibrotactile stimulation applied to the participants’ leg. After a 1.5-hour training with both feedback types, participants were significantly more accurate with Task-space compared to Joint-space feedback, as shown by lower position and aiming errors, albeit not faster (i.e., similar onset delay). However, the learning index during training was significantly higher with Joint-space feedback than with Task-space feedback. These results suggest that Task-space feedback is probably more intuitive and better suited for activities that require short training sessions, while Joint-space feedback showed potential for long-term improvement. We speculate that the latter, despite performing worse in the present work, might ultimately be better suited for applications requiring long training, such as the control of supernumerary robotic limbs for surgical robotics, heavy industrial manufacturing, or, more generally, in the context of human movement augmentation.},
note = {Publisher: Public Library of Science},
keywords = {Body limbs, Motion, Prosthetics, Robotics, Robots, Sensory perception, Vibration, Vision},
pubstate = {published},
tppubtype = {article}
}
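A minimal sketch of the two feedback contents compared in this paper, for a planar 2-DoF arm: Joint-space feedback encodes the joint angles directly, while Task-space feedback encodes the Cartesian end-effector position obtained through forward kinematics. The link lengths and angle values below are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np

def forward_kinematics(q, l1=0.3, l2=0.25):
    """End-effector (x, y) of a planar 2-DoF arm; link lengths are illustrative."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# The same robot state expressed in the two reference frames compared in the study
q = np.array([np.deg2rad(40.0), np.deg2rad(60.0)])  # joint angles (rad)

joint_space_feedback = q                     # Joint-space: encode the two joint angles
task_space_feedback = forward_kinematics(q)  # Task-space: encode the Cartesian end-effector position

print("Joint-space content (rad):", joint_space_feedback)
print("Task-space content (m):   ", task_space_feedback)
```

Either vector could then be mapped onto vibrotactile stimulation intensities; the sketch only shows that the two conditions convey the same robot state in different coordinates.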
2022
Journal Articles
2.
Khoramshahi, Mahdi; Roby-Brami, Agnes; Parry, Ross; Jarrassé, Nathanaël
Identification of inverse kinematic parameters in redundant systems: Towards quantification of inter-joint coordination in the human upper extremity Journal Article
In: PLOS ONE, vol. 17, no. 12, pp. e0278228, 2022, ISSN: 1932-6203, (Publisher: Public Library of Science).
@article{khoramshahi_identification_2022b,
title = {Identification of inverse kinematic parameters in redundant systems: Towards quantification of inter-joint coordination in the human upper extremity},
author = { Mahdi Khoramshahi and Agnes Roby-Brami and Ross Parry and Nathanaël Jarrassé},
url = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0278228},
doi = {10.1371/journal.pone.0278228},
issn = {1932-6203},
year = {2022},
date = {2022-01-01},
urldate = {2024-02-12},
journal = {PLOS ONE},
volume = {17},
number = {12},
pages = {e0278228},
abstract = {Understanding and quantifying inter-joint coordination is valuable in several domains such as neurorehabilitation, robot-assisted therapy, robotic prosthetic arms, and control of supernumerary arms. Inter-joint coordination is often understood as a consistent spatiotemporal relation among kinematically redundant joints performing functional and goal-oriented movements. However, most approaches in the literature to investigate inter-joint coordination are limited to analysis of the end-point trajectory or correlation analysis of the joint rotations without considering the underlying task; e.g., creating a desirable hand movement toward a goal as in reaching motions. This work goes beyond this limitation by taking a model-based approach to quantifying inter-joint coordination. More specifically, we use the weighted pseudo-inverse of the Jacobian matrix and its associated null space to explain human kinematics in reaching tasks. We propose a novel algorithm to estimate such Inverse Kinematics weights from observed kinematic data. These estimated weights serve as a quantification of spatial inter-joint coordination; i.e., how costly a redundant joint is in its contribution to creating an end-effector velocity. We apply our estimation algorithm to datasets obtained from two different experiments. In the first experiment, the estimated Inverse Kinematics weights pinpoint how individuals change their Inverse Kinematics strategy when exposed to a viscous field while wearing an exoskeleton. The second experiment shows how the resulting Inverse Kinematics weights can quantify a robotic prosthetic arm’s contribution (or the level of assistance).},
note = {Publisher: Public Library of Science},
keywords = {Body weight, Hip, Kinematics, Prosthetics, Robotics, Shoulders, Skeletal joints, Velocity},
pubstate = {published},
tppubtype = {article}
}
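The central quantity in this work, the weighted pseudo-inverse of the Jacobian, can be sketched as below. The planar three-joint arm, link lengths, and weight values are illustrative assumptions and this is not the paper's weight-estimation algorithm; the sketch only shows how a joint-wise weight matrix W shapes the inverse-kinematics solution q̇ = W⁻¹Jᵀ(JW⁻¹Jᵀ)⁻¹ẋ, with costlier (higher-weight) joints contributing less to the end-effector velocity.

```python
import numpy as np

def planar_jacobian(q, lengths):
    """Jacobian of the end-point position of a planar serial arm (redundant for >2 joints)."""
    n = len(q)
    J = np.zeros((2, n))
    for i in range(n):
        # The (x, y) partial derivative w.r.t. joint i involves all links from i onward
        for k in range(i, n):
            angle = np.sum(q[:k + 1])
            J[0, i] -= lengths[k] * np.sin(angle)
            J[1, i] += lengths[k] * np.cos(angle)
    return J

def weighted_pinv(J, w):
    """Weighted pseudo-inverse W^-1 J^T (J W^-1 J^T)^-1; higher weight = costlier joint."""
    W_inv = np.diag(1.0 / np.asarray(w, dtype=float))
    return W_inv @ J.T @ np.linalg.inv(J @ W_inv @ J.T)

# Illustrative 3-joint planar arm: redundant for a 2-D end-point task
lengths = np.array([0.30, 0.25, 0.20])
q = np.deg2rad([30.0, 40.0, 20.0])
x_dot = np.array([0.10, 0.05])           # desired end-effector velocity (m/s)

J = planar_jacobian(q, lengths)
weights = np.array([1.0, 1.0, 5.0])      # hypothetical weights: the third joint is "costlier"
q_dot = weighted_pinv(J, weights) @ x_dot

print("Joint velocities (rad/s):", q_dot)
print("Reconstructed end-effector velocity:", J @ q_dot)  # matches x_dot
```

The paper's contribution runs in the opposite direction: given observed joint and end-effector velocities, it estimates the weights that best explain the recorded coordination, which the sketch above does not attempt.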