GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Online Resource
    Association for Computing Machinery (ACM) ; 2023
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 7, No. 3 (2023-09-27), p. 1-26
    Abstract: Technical advances in the smart device market have fixated smartphones at the heart of our lives, warranting an ever more secure means of authentication. Although most smartphones have adopted biometrics-based authentication, after a couple of failed attempts, most users are given the option to quickly bypass the system with passcodes. To add a layer of security, two-factor authentication (2FA) has been implemented but has proven vulnerable to various attacks. In this paper, we introduce VibPath, a simultaneous 2FA scheme that can understand the user's hand neuromuscular system through touch behavior. VibPath captures the individual's vibration-path responses between the hand and the wrist with an attention-based encoder-decoder network, unobtrusively distinguishing genuine users from impostors. In a user study with 30 participants, VibPath achieved an average performance of 0.98 accuracy, 0.99 precision, 0.98 recall, and 0.98 F1-score for user verification, and 94.3% accuracy for user identification across five passcodes. Furthermore, we conducted several extensive studies, including in-the-wild, permanence, vulnerability, usability, and system overhead studies, to assess the practicability and viability of VibPath from multiple aspects.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2023
    ZDB ID: 2892727-8
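The VibPath abstract above describes an attention-based encoder-decoder that authenticates users from vibration responses. As a rough illustration only, here is a minimal Python sketch of that general pattern: encode a vibration-response sequence into an embedding and verify by similarity to an enrolled template. The architecture, dimensions, and threshold are assumptions, not details from the paper.

```python
# Hypothetical sketch (not the authors' code): an attention-based
# encoder mapping a vibration-response sequence to an embedding,
# with verification by cosine similarity against an enrolled template.
import torch
import torch.nn as nn

class VibEncoder(nn.Module):
    def __init__(self, in_dim=64, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 64)  # embedding size is an assumption

    def forward(self, x):  # x: (batch, time, in_dim) vibration features
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))  # mean-pool over time

def verify(model, probe, template, threshold=0.8):
    """Accept if the probe embedding is close to the enrolled template."""
    with torch.no_grad():
        sim = torch.cosine_similarity(model(probe), template, dim=-1)
    return (sim > threshold).item()

model = VibEncoder()
template = model(torch.randn(1, 100, 64))  # enrollment pass (stand-in data)
print(verify(model, torch.randn(1, 100, 64), template))
```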
  • 2
    Online Resource
    Association for Computing Machinery (ACM) ; 2021
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 5, No. 4 (2021-12-27), p. 1-33
    Abstract: Accurate recognition of facial expressions and emotional gestures is promising for understanding an audience's feedback on and engagement with entertainment content. Existing methods are primarily based on various cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose SonicFace, a novel ubiquitous sensing system based on a commodity microphone array, which provides an accessible, unobtrusive, contact-free, and privacy-preserving solution for continuously monitoring the user's emotional expressions without emitting audible sound. SonicFace uses a speaker and a microphone array to recognize various fine-grained facial expressions and emotional hand gestures from emitted ultrasound and its received echoes. In a set of experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reached around 80%. In addition, extensive system evaluations with distinct configurations and an extended real-life case study demonstrate the robustness and generalizability of the proposed SonicFace system.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2021
    ZDB ID: 2892727-8
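SonicFace (record 2) senses facial motion from emitted ultrasound and its echoes. Below is a minimal sketch of the underlying acoustic idea, assuming an arbitrary 20 kHz carrier and synthetic signals (neither from the paper): motion shows up as Doppler sidebands around the carrier in the received spectrum.

```python
# Illustrative sketch under stated assumptions: emit a near-ultrasonic
# tone and inspect the received band around the carrier, where motion
# appears as Doppler components offset from the carrier.
import numpy as np

FS = 48_000  # sample rate (assumption)
F0 = 20_000  # inaudible carrier frequency (assumption)

t = np.arange(FS) / FS
tx = np.sin(2 * np.pi * F0 * t)  # 1 s emitted tone

# Stand-in "received" signal: carrier plus a small Doppler component.
rx = tx + 0.05 * np.sin(2 * np.pi * (F0 + 40) * t)

spectrum = np.abs(np.fft.rfft(rx * np.hanning(len(rx))))
freqs = np.fft.rfftfreq(len(rx), 1 / FS)
band = (freqs > F0 - 200) & (freqs < F0 + 200)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant component near carrier: {peak:.1f} Hz")
```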
  • 3
    Online Resource
    American Chemical Society (ACS) ; 2011
    In: ACS Applied Materials & Interfaces, American Chemical Society (ACS), Vol. 3, No. 3 (2011-03-23), p. 789-794
    Type of Medium: Online Resource
    ISSN: 1944-8244 , 1944-8252
    Language: English
    Publisher: American Chemical Society (ACS)
    Publication Date: 2011
    ZDB ID: 2467494-1
  • 4
    Online Resource
    Institute of Electrical and Electronics Engineers (IEEE) ; 2021
    In: IEEE Transactions on Information Forensics and Security, Institute of Electrical and Electronics Engineers (IEEE), Vol. 16 (2021), p. 2805-2820
    Type of Medium: Online Resource
    ISSN: 1556-6013 , 1556-6021
    Language: Unknown
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2021
    ZDB ID: 2209730-2
  • 5
    Online Resource
    Association for Computing Machinery (ACM) ; 2020
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 4, No. 3 (2020-09-04), p. 1-27
    Abstract: With the rapid growth of artificial intelligence and mobile computing, intelligent speech interfaces have recently become prevalent and present huge potential to the public. To address privacy leakage during speech interaction or to accommodate special demands, silent speech interfaces have been proposed to enable communication without vocalized sound (e.g., lip reading, tongue tracking). However, most existing silent speech mechanisms require either background illumination or additional wearable devices. In this study, we propose EchoWhisper, a novel user-friendly, smartphone-based silent speech interface. The proposed technique takes advantage of the micro-Doppler effect of the acoustic wave resulting from mouth and tongue movements and assesses the acoustic features of beamformed reflected echoes captured by the dual microphones in the smartphone. With human subjects performing a daily conversation task covering over 45 different words, our system achieves a word error rate (WER) of 8.33%, which shows the effectiveness of inferring silent speech content. Moreover, EchoWhisper also demonstrates its reliability and robustness across a variety of configuration settings and environmental factors, such as smartphone orientation and distance, ambient noise, and body motion.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2020
    ZDB ID: 2892727-8
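EchoWhisper (record 5) assesses "beamformed reflected echoes captured by the dual microphones." One plausible reading is a delay-and-sum beamformer over the two-microphone pair; the sketch below assumes a 2 cm mic spacing and synthetic signals, neither taken from the paper.

```python
# A minimal delay-and-sum beamformer for a two-microphone array --
# one possible interpretation of "beamformed reflected echoes".
import numpy as np

FS = 48_000  # sample rate (assumption)
D = 0.02     # mic spacing in meters (assumption)
C = 343.0    # speed of sound, m/s

def delay_and_sum(mic1, mic2, angle_deg):
    """Steer a two-mic array toward angle_deg (0 = broadside)."""
    tau = D * np.sin(np.radians(angle_deg)) / C  # inter-mic delay, s
    shift = int(round(tau * FS))                 # delay in samples
    mic2_aligned = np.roll(mic2, -shift)         # undo the arrival delay
    return 0.5 * (mic1 + mic2_aligned)

# Stand-in signals: the same tone arriving slightly later at mic 2.
t = np.arange(FS) / FS
src = np.sin(2 * np.pi * 1000 * t)
out = delay_and_sum(src, np.roll(src, 2), angle_deg=30)
print(out.shape)
```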
  • 6
    Online Resource
    Association for Computing Machinery (ACM) ; 2021
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 5, No. 2 (2021-06-23), p. 1-30
    Abstract: We propose SonicASL, a real-time gesture recognition system that can recognize sign language gestures on the fly, leveraging front-facing microphones and speakers added to commodity earphones worn by someone facing the person making the gestures. In a user study (N=8), we evaluate the recognition performance of various sign language gestures at both the word and sentence levels. Given 42 frequently used individual words and 30 meaningful sentences, SonicASL achieves an accuracy of 93.8% and 90.6% for word-level and sentence-level recognition, respectively. The proposed system is tested in two real-world scenarios: indoor (apartment, office, and corridor) and outdoor (sidewalk) environments with pedestrians walking nearby. The results show that our system provides users with an effective gesture recognition tool that is highly reliable against environmental factors such as ambient noise and nearby pedestrians.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2021
    ZDB ID: 2892727-8
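SonicASL (record 6) recognizes gestures from earphone-emitted sound and its echoes. As a hedged sketch of a feature pipeline commonly used in such acoustic sensing systems (not necessarily the authors' own), the code below extracts a Doppler spectrogram around an assumed ultrasonic carrier with an STFT.

```python
# Doppler-spectrogram extraction around an assumed 20 kHz carrier;
# the synthetic echo and all parameters are stand-ins.
import numpy as np
from scipy.signal import stft

FS, F0 = 48_000, 20_000  # sample rate and carrier (assumptions)
t = np.arange(FS) / FS
# Stand-in echo: carrier with a slowly varying Doppler offset.
rx = np.sin(2 * np.pi * (F0 + 30 * np.sin(2 * np.pi * 2 * t)) * t)

f, times, Z = stft(rx, fs=FS, nperseg=2048)
band = (f > F0 - 500) & (f < F0 + 500)
doppler = np.abs(Z[band])  # (freq bins near carrier, time frames)
print(doppler.shape)       # features a gesture classifier might consume
```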
  • 7
    Online Resource
    Association for Computing Machinery (ACM) ; 2023
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 7, No. 2 (2023-06-12), p. 1-21
    Abstract: Sign language builds an important bridge between d/Deaf and hard-of-hearing (DHH) people and hearing people. Regrettably, most hearing people face challenges in comprehending sign language, necessitating sign language translation. However, state-of-the-art wearable-based techniques mainly concentrate on recognizing manual markers (e.g., hand gestures), while frequently overlooking non-manual markers, such as negative head shaking, question markers, and mouthing. This oversight results in the loss of substantial grammatical and semantic information in sign language. To address this limitation, we introduce SmartASL, a novel proof-of-concept system that can 1) recognize both manual and non-manual markers simultaneously using a combination of earbuds and a wrist-worn IMU, and 2) translate the recognized American Sign Language (ASL) glosses into spoken language. Our experiments demonstrate the SmartASL system's significant potential to accurately recognize manual and non-manual markers in ASL, effectively bridging the communication gap between ASL signers and hearing people using commercially available devices.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2023
    ZDB ID: 2892727-8
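SmartASL (record 7) combines earbuds with a wrist-worn IMU to capture manual and non-manual markers together. A hypothetical late-fusion sketch of such a combination: average per-class scores from the two modality models. The class names, weights, and score vectors are invented for illustration and are not from the paper.

```python
# Hypothetical late fusion of an earbud (acoustic) model and a wrist
# IMU model by weighted averaging of per-class probabilities.
import numpy as np

CLASSES = ["hello", "thanks", "question-marker", "head-shake"]  # made up

def fuse(earbud_probs, imu_probs, w_earbud=0.5):
    p = w_earbud * earbud_probs + (1 - w_earbud) * imu_probs
    return CLASSES[int(np.argmax(p))]

earbud = np.array([0.10, 0.20, 0.60, 0.10])  # stand-in model outputs
imu = np.array([0.05, 0.15, 0.70, 0.10])
print(fuse(earbud, imu))  # -> "question-marker"
```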
  • 8
    Online Resource
    Association for Computing Machinery (ACM) ; 2021
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 5, No. 1 (2021-03-19), p. 1-25
    Abstract: With the rapid growth of wearable computing and the increasing demand for mobile authentication, voiceprint-based authentication has become one of the prevalent technologies and has already shown tremendous potential to the public. However, it is vulnerable to voice spoofing attacks (e.g., replay attacks and synthetic voice attacks). To address this threat, we propose a new biometric authentication approach, named EarPrint, which aims to extend the voiceprint and build a hidden and secure user authentication scheme on earphones. EarPrint builds on speaking-induced body sound transmission from the throat to the ear canal, i.e., different users have different body sound conduction patterns in each ear. As the first exploratory study, extensive experiments on 23 subjects show that EarPrint is robust against ambient noise and body motion. EarPrint achieves an Equal Error Rate (EER) of 3.64% with 75 seconds of enrollment data. We also evaluate the resilience of EarPrint against replay attacks. A major contribution of EarPrint is that it leverages two-level uniqueness, including the body sound conduction from the throat to the ear canal and the body asymmetry between the left and right ears, taking advantage of earphones' paired form factor. Compared with other mobile and wearable biometric modalities, EarPrint is a low-cost, accurate, and secure authentication solution for earphone users.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2021
    ZDB ID: 2892727-8
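The EarPrint abstract reports an Equal Error Rate (EER) of 3.64%. For readers unfamiliar with the metric, here is the standard way an EER is computed from genuine and impostor similarity scores: it is the operating point where the false accept rate equals the false reject rate. The score distributions below are stand-ins, not data from the paper.

```python
# Standard EER computation from genuine/impostor score arrays.
import numpy as np

def eer(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= th).mean() for th in thresholds])  # false accepts
    frr = np.array([(genuine < th).mean() for th in thresholds])    # false rejects
    i = int(np.argmin(np.abs(far - frr)))   # point where FAR ~= FRR
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)  # synthetic, well-separated score sets
print(eer(rng.normal(0.8, 0.1, 500), rng.normal(0.4, 0.1, 500)))
```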
  • 9
    Online Resource
    Association for Computing Machinery (ACM) ; 2022
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 6, No. 2 (2022-07-04), p. 1-28
    Abstract: Intelligent speech interfaces have developed rapidly to support the growing demand for convenient control of and interaction with wearable/earable and portable devices. To avoid privacy leakage during speech interactions and to strengthen resistance to ambient noise, silent speech interfaces have been widely explored to enable interaction with mobile/wearable devices without audible sound. However, most existing silent speech solutions require either controlled background illumination or hand involvement to hold the device or perform gestures. In this study, we propose a novel earphone-based, hands-free silent speech interaction approach, named EarCommand. Our technique discovers the relationship between the deformation of the ear canal and the movements of the articulators and takes advantage of this link to recognize different silent speech commands. Our system achieves a word error rate (WER) of 10.02% for word-level recognition and 12.33% for sentence-level recognition when tested on human subjects with 32 word-level commands and 25 sentence-level commands, which indicates the effectiveness of inferring silent speech commands. Moreover, EarCommand shows high reliability and robustness across a variety of configuration settings and environmental conditions. We anticipate that EarCommand can serve as an efficient, intelligent speech interface for hands-free operation, significantly improving the quality and convenience of interaction.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2022
    ZDB ID: 2892727-8
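EarCommand (record 9), like EchoWhisper above, reports word error rates (WER). WER is the word-level edit distance (substitutions, insertions, deletions) between reference and hypothesis, divided by the reference length; a standard implementation follows (not code from the paper).

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):  # deleting all reference words
        d[i][0] = i
    for j in range(len(h) + 1):  # inserting all hypothesis words
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("turn on the light", "turn the light"))  # one deletion -> 0.25
```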
  • 10
    Online Resource
    Association for Computing Machinery (ACM) ; 2022
    In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Association for Computing Machinery (ACM), Vol. 6, No. 2 (2022-07-04), p. 1-32
    Abstract: Recognition of facial expressions has been widely explored to represent people's emotional states. Existing facial expression recognition systems primarily rely on external cameras, which makes it difficult to monitor an individual's facial expression conveniently and unobtrusively in many real-life scenarios. To this end, we propose PPGface, a ubiquitous, easy-to-use, user-friendly facial expression recognition platform that leverages earable devices with a built-in PPG sensor. PPGface understands facial expressions through the dynamic PPG patterns resulting from facial muscle movements. With the aid of the accelerometer, PPGface can detect and recognize the user's seven universal facial expressions and the relevant body posture unobtrusively. We conducted a user study (N=20) using a multimodal ResNet to evaluate the performance of PPGface, and showed that PPGface can detect different facial expressions with 93.5% accuracy and a 0.93 F1-score. In addition, to explore the robustness and usability of the proposed platform, we conducted several comprehensive experiments under real-world settings. The overall results of this work validate PPGface's great potential to be employed in future commodity earable devices.
    Type of Medium: Online Resource
    ISSN: 2474-9567
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2022
    ZDB ID: 2892727-8
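PPGface (record 10) classifies expressions with a "multimodal ResNet" over PPG and accelerometer streams. As a loose illustration of two-branch sensor fusion (not the authors' architecture), the sketch below encodes each stream separately and concatenates the features before classification; all layer choices and dimensions are assumptions.

```python
# Hypothetical two-branch fusion skeleton: one branch per sensor
# stream, concatenated before a softmax classifier head.
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, n_classes=7):  # seven universal expressions
        super().__init__()
        self.ppg = nn.Sequential(nn.Conv1d(1, 16, 5), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1))   # 1-channel PPG
        self.acc = nn.Sequential(nn.Conv1d(3, 16, 5), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1))   # 3-axis accel
        self.fc = nn.Linear(32, n_classes)

    def forward(self, ppg, acc):  # ppg: (B, 1, T), acc: (B, 3, T)
        z = torch.cat([self.ppg(ppg).flatten(1),
                       self.acc(acc).flatten(1)], dim=1)
        return self.fc(z)

model = TwoBranchNet()
print(model(torch.randn(2, 1, 256), torch.randn(2, 3, 256)).shape)  # (2, 7)
```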