In:
PLOS ONE, Public Library of Science (PLoS), Vol. 17, No. 2 (2022-02-23), p. e0264302
Abstract:
Cross-view 3D human pose estimation models have made significant progress: through multi-view fusion they perform human joint localization and skeleton modeling in 3D. The multi-view 2D pose estimation stage of such a model is essential, but its training cost is also very high, since it uses deep learning networks to generate heatmaps for each view. In this article, we therefore evaluated several recent deep networks on pose estimation tasks, including MobileNetV2, MobileNetV3, EfficientNetV2, and ResNet. Based on the performance and drawbacks of these networks, we then built deep learning networks with better performance, which we call LHPE-nets; they mainly comprise a Low-Span network and an RDNS network. LHPE-nets use a network structure with evenly distributed channels, inverted residuals, external residual blocks, and a framework for processing small-resolution samples, which lets training reach saturation faster. We also designed a static-pose sample simplification method for 3D pose data; it enables low-cost sample storage and makes the samples convenient for models to read. In the experiments, we used several recent models and two public estimation metrics. The results show the advantages of this work in fast start-up and network lightness: it trains about 1-5 epochs faster than ResNet-34. They also show improved accuracy in estimating individual joints: estimation performance improves for approximately 60% of the joints, and its overall human pose estimation error is more than 7 mm lower than that of the other networks. The experiments analyze the network size, fast start-up, and 2D and 3D pose estimation performance of the model in detail. Compared with other pose estimation models, its performance also reaches a higher level of applicability.
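The abstract credits inverted residuals (the MobileNetV2-style block) for keeping the network lightweight. As a rough illustration of why, the sketch below compares weight counts for a standard two-layer 3x3 residual block and an inverted residual block (1x1 expansion, 3x3 depthwise convolution, 1x1 projection). The channel width and expansion factor are illustrative assumptions, not values taken from the paper.

```python
def standard_residual_params(c: int, k: int = 3) -> int:
    """Two k x k convolutions with c input/output channels (biases ignored)."""
    return 2 * (k * k * c * c)

def inverted_residual_params(c: int, t: int = 6, k: int = 3) -> int:
    """1x1 expansion to t*c channels, k x k depthwise conv, 1x1 projection."""
    expand = c * (t * c)          # 1x1 pointwise expansion
    depthwise = k * k * (t * c)   # one k x k filter per expanded channel
    project = (t * c) * c        # 1x1 pointwise projection back to c channels
    return expand + depthwise + project

c = 64  # illustrative channel width (an assumption, not from the paper)
std = standard_residual_params(c)
inv = inverted_residual_params(c)
print(std, inv, round(inv / std, 2))  # → 73728 52608 0.71
```

The depthwise convolution applies one spatial filter per channel instead of a full dense filter bank, which is where most of the savings come from even though the block temporarily expands to six times the channel width.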
Type of Medium:
Online Resource
ISSN:
1932-6203
DOI:
10.1371/journal.pone.0264302
Component DOIs:
10.1371/journal.pone.0264302.g001 through .g008 (figures)
10.1371/journal.pone.0264302.t001 through .t007 (tables)
10.1371/journal.pone.0264302.s001 through .s004 (supporting information)
Language:
English
Publisher:
Public Library of Science (PLoS)
Publication Date:
2022
ZDB ID:
2267670-3