GLORIA

GEOMAR Library Ocean Research Information Access

Filter
  • Publisher: MDPI AG (2)
  • Person/Organisation: Yan, Bing-Yan (2)
  • 1
    Online Resource
    MDPI AG ; 2022
    In: Sensors, MDPI AG, Vol. 22, No. 5 (2022-03-02), p. 1954-
    Abstract: Using upper limb motion information to control a prosthetic hand has become a focus of current research. The operation of the prosthetic hand must also be coordinated with the user’s intention. Therefore, identifying the action intention of the upper limb from its motion information is key to controlling the prosthetic hand. Since a wearable inertial sensor has the advantages of small size, low cost, and little interference from the external environment, we use an inertial sensor to collect angle and angular velocity data during upper limb movement. For the classification of putting on socks, putting on shoes, and tying shoelaces, this paper proposes a recognition model based on a motion-unit Dynamic Time Warping (DTW) algorithm. Based on whether the upper limb is moving, the complete motion data are divided into several motion units. Considering the delay associated with controlling the prosthetic hand, this paper extracts features only from the first and second motion units and recognizes the action with different classifiers. The experimental results reveal that the motion-unit DTW algorithm achieves a higher recognition rate and a lower running time: the recognition rate reaches 99.46%, and the average running time is 8.027 ms. To enable the prosthetic hand to understand the grasping intention of the upper limb, this paper also proposes a Generalized Regression Neural Network (GRNN) model based on 10-fold cross-validation. The motion state of the upper limb is subdivided, and the static state is used as the signal for controlling the prosthetic hand. A 10-fold cross-validation method is applied to train the neural network model and find the optimal smoothing parameter, and the recognition performance of different neural networks is compared. The experimental results show that the GRNN model based on 10-fold cross-validation achieves a high accuracy of 98.28%. Finally, the two proposed algorithms are applied in an experiment in which the prosthetic hand reproduces an action, and their feasibility and practicality are verified experimentally. (A minimal illustrative sketch of the motion-unit DTW idea follows this results list.)
    Type of Medium: Online Resource
    ISSN: 1424-8220
    Language: English
    Publisher: MDPI AG
    Publication Date: 2022
    ZDB ID: 2052857-7
  • 2
    Online Resource
    MDPI AG ; 2022
    In: Electronics, MDPI AG, Vol. 11, No. 19 (2022-09-27), p. 3078-
    Abstract: A prosthetic hand is one of the main ways to help patients with upper limb disabilities regain their daily living abilities. Prosthetic hand manipulation must be coordinated with the user’s action intention. Therefore, the key to controlling the prosthetic hand is recognizing the action intention of the upper limb. At present, identifying action intention from EMG and EEG signals still suffers from problems such as difficulty in decoding the information and low recognition rates. Inertial sensors have the advantages of low cost and high accuracy, and posture information can characterize the motion state of the upper limb; visual information is rich and can identify the type of target object, so the two can be fused complementarily to better capture the user’s motion requirements. Therefore, this paper proposes an upper limb action intention recognition method based on the fusion of posture information and visual information. An inertial sensor is used to collect attitude angle data during upper limb movement and, exploiting the similarity of the human upper limb to a linkage mechanism, a model of the upper limb is established using the forward kinematics of a mechanical arm to solve for the upper limb end position. The upper limb end positions are classified into three categories: in front of the torso, near the upper body, and the initial position, and a multilayer perceptron model is trained to learn the classification. In addition, a miniature camera is installed on the hand to obtain visual image information during upper limb movement. Target objects are detected using the YOLOv5 deep learning method and then classified into two categories: wearable items and non-wearable items. Finally, the upper limb intention is decided jointly from the upper limb motion state, the target object type, and the upper limb end position to control the prosthetic hand. We applied the upper limb intention recognition method to the experimental system of a mechanical prosthetic hand and invited several volunteers to test it. The experimental results showed that the intention recognition success rate reached 92.4%, verifying the feasibility and practicality of the fusion-based upper limb action intention recognition method. (A minimal two-link forward-kinematics sketch of the end-position idea follows this results list.)
    Type of Medium: Online Resource
    ISSN: 2079-9292
    Language: English
    Publisher: MDPI AG
    Publication Date: 2022
    ZDB ID: 2662127-7
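The abstract of the first record (Sensors 22(5)) describes segmenting inertial angle/angular-velocity data into motion units according to whether the limb is moving, then recognizing the action from the first one or two units with Dynamic Time Warping (DTW). The Python sketch below only illustrates that general idea under assumptions of our own: a single angular-velocity channel, a fixed movement threshold, and a nearest-template classifier. It is not the authors' implementation.

import numpy as np

def segment_motion_units(angular_velocity, threshold=0.05, min_len=10):
    """Split a 1-D angular-velocity trace into motion units, i.e. maximal
    runs of samples where the limb is considered to be moving."""
    moving = np.abs(angular_velocity) > threshold
    units, start = [], None
    for i, is_moving in enumerate(moving):
        if is_moving and start is None:
            start = i
        elif not is_moving and start is not None:
            if i - start >= min_len:
                units.append(angular_velocity[start:i])
            start = None
    if start is not None and len(angular_velocity) - start >= min_len:
        units.append(angular_velocity[start:])
    return units

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_action(first_unit, templates):
    """Return the label of the template (dict: label -> reference unit)
    with the smallest DTW distance to the observed first motion unit."""
    return min(templates, key=lambda label: dtw_distance(first_unit, templates[label]))

A multivariate recording (several joint angles per sample) would replace the absolute difference with a per-sample vector norm; deciding on the first unit alone mirrors the abstract's concern with control delay.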
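The abstract of the second record (Electronics 11(19)) models the upper limb as a linkage and applies forward kinematics to the measured attitude angles to solve for the end (hand) position, which is then assigned to one of three regions (in front of the torso, near the upper body, initial position). The sketch below is a minimal planar two-link illustration of that step; the link lengths, the planar simplification, and the region thresholds are assumptions for illustration, not values from the paper.

import numpy as np

UPPER_ARM_LEN = 0.30  # metres, assumed link length
FOREARM_LEN = 0.25    # metres, assumed link length

def end_position(shoulder_pitch, elbow_pitch):
    """Planar forward kinematics with the shoulder at the origin.
    Angles are in radians, measured from the hanging (vertical) pose."""
    elbow = np.array([UPPER_ARM_LEN * np.sin(shoulder_pitch),
                      -UPPER_ARM_LEN * np.cos(shoulder_pitch)])
    hand = elbow + np.array([FOREARM_LEN * np.sin(shoulder_pitch + elbow_pitch),
                             -FOREARM_LEN * np.cos(shoulder_pitch + elbow_pitch)])
    return hand

def region_label(hand_xy):
    """Coarse assignment into the three regions named in the abstract;
    the thresholds on the forward (x) coordinate are illustrative only."""
    x, _ = hand_xy
    if x > 0.35:
        return "in front of the torso"
    if x > 0.10:
        return "near the upper body"
    return "initial position"

# Example: shoulder raised 60 degrees, elbow flexed 30 degrees.
print(region_label(end_position(np.radians(60.0), np.radians(30.0))))

In the paper, these end positions feed a multilayer perceptron and are fused with YOLOv5 detections of wearable versus non-wearable objects; the final intention decision combines motion state, object type, and end position.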