GLORIA

GEOMAR Library Ocean Research Information Access


  • 1
    In: Sensors, MDPI AG, Vol. 21, No. 19 ( 2021-09-23), p. 6358-
    Abstract: Multi-object tracking is a significant field in computer vision, since it provides essential information for video surveillance and analysis. Several different deep learning-based approaches have been developed to improve the performance of multi-object tracking by applying the most accurate and efficient combinations of object detection models and appearance embedding extraction models. However, two-stage methods show a low inference speed, since the embedding extraction can only be performed after the object detection. To alleviate this problem, single-shot methods, which perform object detection and embedding extraction simultaneously, have been developed and have drastically improved the inference speed. However, there is a trade-off between accuracy and efficiency. Therefore, this study proposes an enhanced single-shot multi-object tracking system that achieves improved accuracy while maintaining a high inference speed. With strong feature extraction and fusion, the object detection of our model achieves an AP score of 69.93% on the UA-DETRAC dataset and outperforms previous state-of-the-art methods such as FairMOT and JDE. Based on the improved object detection performance, our multi-object tracking system achieves a MOTA score of 68.5% and a PR-MOTA score of 24.5% on the same dataset, also surpassing the previous state-of-the-art trackers.
    Type of Medium: Online Resource
    ISSN: 1424-8220
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    detail.hit.zdb_id: 2052857-7
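Single-shot trackers such as the JDE and FairMOT systems named above associate new detections with existing tracks by comparing appearance embeddings. As a minimal sketch of that association step only (not the paper's full pipeline; the function name and distance threshold are illustrative), a cosine-distance cost matrix can be solved with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_embs, det_embs, max_cost=0.4):
    """Match track embeddings to detection embeddings by cosine distance.

    track_embs: (T, D) array, det_embs: (N, D) array.
    Returns a list of (track_index, detection_index) pairs.
    """
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                      # cosine distance per pair
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm (SciPy)
    # Discard assignments whose appearance distance is too large.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```

Real trackers additionally gate this cost with motion cues (e.g., IoU or a Kalman prediction), which this sketch omits.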
  • 2
    In: Healthcare, MDPI AG, Vol. 8, No. 2 ( 2020-04-21), p. 104-
    Abstract: Saccadic eye movement is an important ability in daily life, especially in driving and sports. Traditionally, the Developmental Eye Movement (DEM) test and the King–Devick (K-D) test have been used to measure saccadic eye movement, but these only yield an overall “adjusted time”. A different approach is therefore required to capture eye movement speed and reaction rate in detail, since some eye movements are rapid while others are slow. This study proposes an extended method that can acquire the “rest time” and “transfer time”, as well as the “adjusted time”, by implementing a virtual reality-based DEM test on a FOVE virtual reality (VR) head-mounted display (HMD) equipped with an eye-tracking module. The approach was tested in 30 subjects with normal vision and no ophthalmologic disease at a 2-diopter (50-cm) viewing distance. This allowed for measurements of the “adjusted time” and the “rest time” for focusing on each target number character and the “transfer time” for moving to the next target number character, as well as recording of the gaze-tracking log. The results of this experiment showed that the proposed method can analyze more parameters of saccadic eye movement than the traditional methods.
    Type of Medium: Online Resource
    ISSN: 2227-9032
    Language: English
    Publisher: MDPI AG
    Publication Date: 2020
    detail.hit.zdb_id: 2721009-1
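Once each gaze sample is labelled with the fixated target, the per-target dwell ("rest") and in-transit ("transfer") times fall out of the log directly. A minimal sketch, assuming a hypothetical log format of (timestamp_seconds, target_id) samples with target_id = None while the gaze is between targets (the paper's actual log format is not specified here):

```python
def rest_and_transfer(log):
    """Split a gaze log into per-target rest time and total transfer time.

    log: list of (timestamp_s, target_id) samples in time order, where
    target_id identifies the fixated number character, or is None while
    the gaze is moving between targets.
    """
    rest, transfer = {}, 0.0
    for (t0, tgt0), (t1, _) in zip(log, log[1:]):
        dt = t1 - t0
        if tgt0 is None:
            transfer += dt                          # gaze in transit
        else:
            rest[tgt0] = rest.get(tgt0, 0.0) + dt   # dwell on a target
    return rest, transfer
```

The "adjusted time" of the DEM test would then be a function of these totals over the full character sequence.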
  • 3
    In: Symmetry, MDPI AG, Vol. 11, No. 5 ( 2019-05-03), p. 621-
    Abstract: In this paper, we propose a method that calculates the dynamic background region in a video and removes the resulting false positives, in order to overcome the false positives caused by dynamic backgrounds and the frame drops that occur at slow processing speeds. This requires an efficient algorithm with robust performance, including processing speed. The foreground is separated from the background by comparing the similarities between false positives and the foreground. To improve the processing speed, the median filter was optimized for binary images. The proposed method was evaluated on the CDnet 2012/2014 dataset and achieved a precision of 76.68%, an FPR of 0.90%, an FNR of 18.02%, and an F-measure of 75.35%. The average ranking across categories is 14.36, which is superior to existing background subtraction methods. The proposed method runs at 45 fps (CPU) and 150 fps (GPU) at 320 × 240 resolution. Therefore, we expect that the proposed method can be applied to currently commercialized CCTV systems without any hardware upgrades.
    Type of Medium: Online Resource
    ISSN: 2073-8994
    Language: English
    Publisher: MDPI AG
    Publication Date: 2019
    detail.hit.zdb_id: 2518382-5
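For a binary foreground mask, a median filter reduces to a majority count over the window, which is presumably the kind of optimization the abstract refers to. A minimal sketch of plain background subtraction followed by such a binary 3 × 3 median filter (the difference threshold is illustrative, not a value from the paper):

```python
import numpy as np

def binary_median3(mask):
    """3x3 median of a 0/1 mask: a pixel stays foreground iff the majority
    (>= 5 of 9) of its neighborhood is foreground. For binary images the
    median reduces to this counting test, which is cheap to compute."""
    h, w = mask.shape
    p = np.pad(mask.astype(np.int32), 1)
    # Sum of the nine shifted copies = per-pixel neighborhood count.
    s = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (s >= 5).astype(np.uint8)

def foreground(frame, background, thresh=25):
    """Threshold the absolute frame/background difference, then clean the
    resulting binary mask with the binary median filter."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return binary_median3((diff > thresh).astype(np.uint8))
```

Isolated single-pixel detections (typical dynamic-background noise) are removed by the majority test, while solid foreground regions survive.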
  • 4
    In: Sensors, MDPI AG, Vol. 22, No. 11 ( 2022-05-26), p. 4026-
    Abstract: Gaze is an excellent indicator, with the utility that it can express interest, intention, and the condition of an object. Recent deep-learning methods are mainly appearance-based methods that estimate gaze through a simple regression from entire face and eye images. However, this approach does not always give satisfactory results for gaze estimation in the low-resolution and noisy images obtained in unconstrained real-world settings (e.g., places with severe lighting changes). In this study, we propose a method that estimates gaze by detecting eye region landmarks in a single eye image; this approach is shown to be competitive with recent appearance-based methods. Our approach acquires rich information by extracting more landmarks, including the iris and eye edges, similar to existing feature-based methods. To acquire strong features even at low resolutions, we used the HRNet backbone network to learn representations of images at various resolutions. Furthermore, we used the self-attention module CBAM to obtain a refined feature map with better spatial information, which enhanced the robustness to noisy inputs and yielded a landmark localization error of 3.18%, a 4% improvement over the existing error. The large number of acquired landmarks was then used as input to a lightweight neural network to estimate the gaze. We conducted a within-dataset evaluation on MPIIGaze, which was obtained in a natural environment, and achieved a state-of-the-art performance of 4.32 degrees, a 6% improvement over the existing performance.
    Type of Medium: Online Resource
    ISSN: 1424-8220
    Language: English
    Publisher: MDPI AG
    Publication Date: 2022
    detail.hit.zdb_id: 2052857-7
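For illustration only: the paper regresses gaze from many learned landmarks via a neural network, but the underlying idea that landmark geometry encodes gaze direction can be shown with a toy formula relating the iris centre to the eye corners (the function, coordinate convention, and scaling factor are all assumptions, not the paper's model):

```python
import math

def gaze_from_landmarks(inner, outer, iris, fov_deg=50.0):
    """Toy gaze estimate: yaw/pitch (degrees) from the iris centre's offset
    relative to the midpoint of the two eye corners, normalized by eye
    width. Illustrative only; real systems learn this mapping."""
    cx = (inner[0] + outer[0]) / 2          # eye-corner midpoint
    cy = (inner[1] + outer[1]) / 2
    width = math.hypot(inner[0] - outer[0], inner[1] - outer[1])
    yaw = (iris[0] - cx) / width * fov_deg   # horizontal offset -> yaw
    pitch = (iris[1] - cy) / width * fov_deg  # vertical offset -> pitch
    return yaw, pitch
```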
  • 5
    In: Sensors, MDPI AG, Vol. 20, No. 16 ( 2020-08-14), p. 4566-
    Abstract: The use of human gestures to interact with devices such as computers or smartphones has presented several problems. This form of interaction relies on gesture-interaction technology such as the Leap Motion controller from Leap Motion, Inc., which enables humans to use hand gestures to interact with a computer. The technology has excellent hand detection performance and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph by simply folding and opening the palm. Research on interaction with other devices via hand gestures is in progress. Similarly, studies on creating a hologram display from objects that actually exist are also underway. We propose a hand gesture recognition system that can control a tabletop holographic display based on an actual object. The depth image obtained using the Time-of-Flight-based depth camera Azure Kinect is used to obtain information about the hand and hand joints with the deep-learning model CrossInfoNet. Using this information, we developed a real-time system that defines and recognizes gestures indicating the basic rotations left, right, up, and down, as well as zoom in, zoom out, and continuous rotation to the left and right.
    Type of Medium: Online Resource
    ISSN: 1424-8220
    Language: English
    Publisher: MDPI AG
    Publication Date: 2020
    detail.hit.zdb_id: 2052857-7
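A minimal sketch of how one of the basic directional gestures could be recognized from a tracked palm-centre trajectory (an illustrative stand-in, not the recognizer described in the abstract; the coordinate convention and motion threshold are assumptions):

```python
def swipe_direction(track, min_move=0.1):
    """Classify a palm-centre trajectory [(x, y), ...] as a basic gesture
    (left/right/up/down) by its dominant net displacement; returns None
    when the hand barely moved. Assumes y increases upward."""
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if max(abs(dx), abs(dy)) < min_move:
        return None                      # too little motion: no gesture
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"
```

Zoom and continuous-rotation gestures would need additional state (e.g., inter-hand distance or angular velocity over time), which this sketch omits.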
  • 6
    In: Sensors, MDPI AG, Vol. 21, No. 3 ( 2021-02-02), p. 1013-
    Abstract: RGB-D cameras have been commercialized, and many applications using them have been proposed. In this paper, we propose a robust registration method for multiple RGB-D cameras. We use the human body tracking system provided by the Azure Kinect SDK to estimate a coarse global registration between cameras. As this coarse global registration has some error, we refine it using feature matching. However, the matched feature pairs include mismatches, hindering good performance. Therefore, we propose a registration refinement procedure that removes these mismatches and refines the global registration. In an experiment, the ratio of inliers among the matched features is greater than 95% for all tested feature matchers. Thus, we experimentally confirm that mismatches can be eliminated via the proposed method even in difficult situations and that a more precise global registration of RGB-D cameras can be obtained.
    Type of Medium: Online Resource
    ISSN: 1424-8220
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    detail.hit.zdb_id: 2052857-7
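The refinement loop the abstract describes (fit a transform, discard mismatched pairs, refit) can be sketched with a standard Kabsch rigid-transform fit over matched 3D points. This is a generic illustration, not the authors' procedure; the residual tolerance is an assumption and would be scene-dependent:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ P @ R.T + t (Kabsch, no scale).
    P, Q: (N, 3) arrays of matched 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is -1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def refine(P, Q, tol=0.02):
    """One refinement pass: fit, drop matched pairs whose residual exceeds
    tol (mismatch removal), then refit on the surviving inliers."""
    R, t = rigid_transform(P, Q)
    resid = np.linalg.norm((P @ R.T + t) - Q, axis=1)
    inlier = resid < tol
    return rigid_transform(P[inlier], Q[inlier]) + (inlier,)
```

In practice this is iterated (or wrapped in RANSAC) until the inlier set stabilizes.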
  • 7
    In: Applied Optics, Optica Publishing Group, Vol. 57, No. 1 ( 2018-01-01), p. A91-
    Type of Medium: Online Resource
    ISSN: 1559-128X , 2155-3165
    Language: English
    Publisher: Optica Publishing Group
    Publication Date: 2018
    detail.hit.zdb_id: 207387-0