GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    In: The Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 144, No. 3_Supplement ( 2018-09-01), p. 1936-1936
    Abstract: Sensory processing abnormalities are a hallmark of autism spectrum disorder (ASD). In the auditory domain, hyper- and hypo-sensitivity to sound, reduced orientation to speech, and difficulty listening in noise are commonly reported. However, the etiology of these auditory processing abnormalities is poorly understood. In this preliminary study, multi-talker speech perception thresholds, otoacoustic emissions, electrophysiological responses, as well as standardized measures of cognition, language, and adaptive function were measured in each individual subject with ASD and their age-matched controls. Speech perception was assessed by estimating target-to-masker ratios at 50% correct for speech targets (0° azimuth) presented with two spatially separated (±45° azimuth) simultaneous speech maskers. The electrophysiological battery used to characterize the transmission and representation of sound included: (1) a click-evoked supra-threshold auditory brainstem response, (2) an envelope following response recorded to a 400-ms-long 4 kHz pure tone carrier amplitude modulated at 100 Hz at two modulation depths (0 and -6 dB), and (3) a binaural potential evoked by an interaural phase difference embedded in an amplitude-modulated carrier tone. Together, these measures provide a behavioral and physiological assay of auditory function and show the potential to further our understanding of the mechanisms underlying auditory abnormalities in individuals with ASD.
    Type of Medium: Online Resource
    ISSN: 0001-4966 , 1520-8524
    Language: English
    Publisher: Acoustical Society of America (ASA)
    Publication Date: 2018
    ZDB-ID: 1461063-2
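The envelope following response stimulus in entry 1 is specified concretely enough (400-ms, 4 kHz carrier, 100 Hz amplitude modulation, 0 and -6 dB modulation depths) to synthesize directly. The sketch below is a minimal illustration, not the authors' code; the sampling rate, the onset/offset ramps, and the mapping of dB depth to modulation index m = 10^(depth/20) are assumptions.

```python
import numpy as np

def am_tone(dur=0.4, fc=4000.0, fm=100.0, depth_db=0.0, fs=48000, ramp_ms=5.0):
    """400-ms, 4-kHz tone with 100-Hz amplitude modulation; depth_db = 0 gives
    full modulation (m = 1), depth_db = -6 gives m = 0.5."""
    t = np.arange(int(dur * fs)) / fs
    m = 10.0 ** (depth_db / 20.0)                        # modulation index from dB depth
    env = (1.0 + m * np.sin(2.0 * np.pi * fm * t)) / (1.0 + m)
    sig = env * np.sin(2.0 * np.pi * fc * t)
    n_ramp = int(ramp_ms * fs / 1000.0)                  # cosine-squared ramps (assumed)
    ramp = np.sin(np.linspace(0.0, np.pi / 2.0, n_ramp)) ** 2
    sig[:n_ramp] *= ramp
    sig[-n_ramp:] *= ramp[::-1]
    return sig

# The two modulation depths named in the abstract.
stimuli = {d: am_tone(depth_db=d) for d in (0.0, -6.0)}
```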
  • 2
    In: The Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 150, No. 4_Supplement ( 2021-10-01), p. A271-A271
    Abstract: The ability to selectively attend to one talker in the presence of competing talkers is a crucial skill employed in everyday life. In this study, multitalker speech perception thresholds were measured in three groups: an Autism Spectrum Disorder (ASD) group, a Fetal Alcohol Spectrum Disorder (FASD) group, and an age- and sex-matched typically functioning (TF) group. Participants listened to three simultaneous sentences from the Coordinate Response Measure corpus: the target stream to be attended (0° azimuth) and two spatially separated (±45° azimuth) masker streams. Participants were asked to identify the color and number associated with the callsign “Charlie.” Target-to-masker ratios (TMRs) were estimated based on the average of four runs in which the target was fixed at 40 dB SPL and maskers were adaptively varied using a one-up-one-down procedure to estimate 50% correct. The target speaker was always male; the two maskers were either male/male or female/female. Overall, TMR thresholds were higher in both the ASD and FASD groups than in the TF group. Additionally, a negative correlation between intellectual ability and TMR thresholds was observed. These preliminary results suggest intellectual ability may impact how well listeners perceive speech in multitalker situations, especially in neurodiverse populations.
    Type of Medium: Online Resource
    ISSN: 0001-4966 , 1520-8524
    Language: English
    Publisher: Acoustical Society of America (ASA)
    Publication Date: 2021
    ZDB-ID: 1461063-2
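Entry 2 estimates 50%-correct target-to-masker ratios with a one-up-one-down adaptive procedure averaged over four runs. The following is a minimal sketch of such a staircase, not the authors' implementation; the step size, number of reversals, starting TMR, and the use of the last reversals to compute the threshold are assumptions, and the simulated listener merely stands in for real trial responses.

```python
import numpy as np

def one_up_one_down_run(respond, start_tmr=10.0, step=2.0, n_reversals=10):
    """One adaptive track: lower the TMR (harder) after a correct response and
    raise it (easier) after an error, so the track converges near 50% correct.
    `respond(tmr)` returns True if the listener identified the target at that TMR."""
    tmr, last_correct, reversals = start_tmr, None, []
    while len(reversals) < n_reversals:
        correct = respond(tmr)
        if last_correct is not None and correct != last_correct:
            reversals.append(tmr)                 # direction change counts as a reversal
        tmr += -step if correct else step         # one-up-one-down rule
        last_correct = correct
    return float(np.mean(reversals[-6:]))         # threshold from the final reversals

# Example with a simulated listener whose true 50% point sits at 0 dB TMR.
rng = np.random.default_rng(0)
simulated_listener = lambda tmr: rng.random() < 1.0 / (1.0 + np.exp(-tmr))
threshold = np.mean([one_up_one_down_run(simulated_listener) for _ in range(4)])  # four runs, as in the abstract
```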
  • 3
    In: Trends in Neurosciences, Elsevier BV, Vol. 39, No. 2 ( 2016-02), p. 74-85
    Type of Medium: Online Resource
    ISSN: 0166-2236
    Language: English
    Publisher: Elsevier BV
    Publication Date: 2016
    ZDB-ID: 2011000-5
  • 4
    In: eLife, eLife Sciences Publications, Ltd, Vol. 4 ( 2015-02-05)
    Abstract: In the noisy din of a cocktail party, there are many sources of sound that compete for our attention. Even so, we can easily block out the noise and focus on a conversation, especially when we are talking to someone in front of us. This is possible in part because our sensory system combines inputs from our senses. Scientists have proposed that our perception is stronger when we can hear and see something at the same time, as opposed to just being able to hear it. For example, if we tried to talk to someone on a phone during a cocktail party, the background noise would probably drown out the conversation. However, when we can see the person we are talking to, it is easier to hold a conversation. Maddox et al. have now explored this phenomenon in experiments that involved human subjects listening to an audio stream that was masked by background sound. While listening, the subjects also watched completely irrelevant videos that moved in sync with either the audio stream or with the background sound. The subjects then had to perform a task that involved pushing a button when they heard random changes (such as subtle changes in tone or pitch) in the audio stream. The experiment showed that the subjects performed well when they saw a video that was in sync with the audio stream. However, their performance dropped when the video was in sync with the background sound. This suggests that when we hold a conversation during a noisy cocktail party, seeing the other person's face move as they talk creates a combined audio–visual impression of that person, helping us separate what they are saying from all the noise in the background. However, if we turn to look at other guests, we become distracted and the conversation may become lost.
    Type of Medium: Online Resource
    ISSN: 2050-084X
    Language: English
    Publisher: eLife Sciences Publications, Ltd
    Publication Date: 2015
    ZDB-ID: 2687154-3
  • 5
    In: Neuron, Elsevier BV, Vol. 97, No. 3 ( 2018-02), p. 640-655.e4
    Type of Medium: Online Resource
    ISSN: 0896-6273
    Language: English
    Publisher: Elsevier BV
    Publication Date: 2018
    ZDB-ID: 2001944-0
    SSG: 12
  • 6
    In: eneuro, Society for Neuroscience, Vol. 5, No. 1 ( 2018-01), p. ENEURO.0441-17.2018-
    Abstract: Speech is an ecologically essential signal, whose processing crucially involves the subcortical nuclei of the auditory brainstem, but there are few experimental options for studying these early responses in human listeners under natural conditions. While encoding of continuous natural speech has been successfully probed in the cortex with neurophysiological tools such as electroencephalography (EEG) and magnetoencephalography, the rapidity of subcortical response components combined with unfavorable signal-to-noise ratios has prevented application of those methods to the brainstem. Instead, experiments have used thousands of repetitions of simple stimuli such as clicks, tone-bursts, or brief spoken syllables, with deviations from those paradigms leading to ambiguity in the neural origins of measured responses. In this study, we developed and tested a new way to measure the auditory brainstem response (ABR) to ongoing, naturally uttered speech, using EEG to record from human listeners. We found a high degree of morphological similarity between the speech-derived ABRs and the standard click-evoked ABR, in particular, a preserved Wave V, the most prominent voltage peak in the standard click-evoked ABR. Because this method yields distinct peaks that recapitulate the canonical ABR, at latencies too short to originate from the cortex, the responses measured can be unambiguously determined to be subcortical in origin. The use of naturally uttered speech to measure the ABR allows the design of engaging behavioral tasks, facilitating new investigations of the potential effects of cognitive processes like language and attention on brainstem processing.
    Type of Medium: Online Resource
    ISSN: 2373-2822
    Language: English
    Publisher: Society for Neuroscience
    Publication Date: 2018
    ZDB-ID: 2800598-3
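Entry 6 derives an ABR from EEG recorded while listeners hear continuous, naturally uttered speech. One common way to compute such a response is to cross-correlate the EEG with a regressor derived from the stimulus; the sketch below assumes a half-wave rectified copy of the speech waveform as that regressor. The abstract does not specify the regressor or the analysis window, so treat this as an illustrative assumption rather than the published method.

```python
import numpy as np

def speech_abr(eeg, speech, fs, t_min=-0.005, t_max=0.015):
    """Cross-correlate continuous EEG with a half-wave rectified copy of the
    speech waveform (regressor choice is an assumption) and keep lags in a
    brainstem-relevant range, here -5 to +15 ms. `eeg` and `speech` are 1-D
    arrays of equal length sampled at the same rate `fs`."""
    reg = np.maximum(speech, 0.0)
    reg -= reg.mean()                      # zero-mean rectified regressor
    sig = eeg - eeg.mean()
    n = len(sig)
    # Circular cross-correlation via the FFT: c[k] = sum_t sig[t + k] * reg[t]
    c = np.fft.irfft(np.fft.rfft(sig) * np.conj(np.fft.rfft(reg)), n) / n
    lags = np.arange(int(round(t_min * fs)), int(round(t_max * fs)))
    return lags / fs, c[lags % n]          # (times in seconds, response waveform)
```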
  • 7
    In: Frontiers in Neuroscience, Frontiers Media SA, Vol. 8 ( 2014-10-20)
    Type of Medium: Online Resource
    ISSN: 1662-453X
    Language: Unknown
    Publisher: Frontiers Media SA
    Publication Date: 2014
    ZDB-ID: 2411902-7
  • 8
    In: Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 140, No. 4_Supplement ( 2016-10-01), p. 3207-3208
    Abstract: Both the comprehension and detection of speech in noise are improved when the listener sees the talker’s mouth. There are multiple reasons for this, from basic physical temporal correlations to higher order linguistic cues; we have recently performed several experiments investigating the former. They were based on artificial stimuli with speech-like dynamics but no linguistic information. Auditory stimuli were a tone or tone complex with randomly modulated amplitude. Visual stimuli were a disc with a randomly modulated radius. We manipulated the correlation between the visual stimulus and each auditory stimulus. In all experiments, the visual stimulus provided no information about the task. In the first study, we presented two competing auditory stimuli and had listeners respond to events in the target stimulus (brief pitch or timbre fluctuations). Performance was better when the visual stimulus matched the auditory target than when it matched the masker. The second study employed a two-interval two-alternative forced choice detection task. Despite a range of stimulus variations, no effect of audio-visual coherence on auditory detection was ever observed. Taken together, these results suggest that listening improvements provided by visual stimuli derive from improvements in segregation and scene analysis, more than overcoming simple energetic masking.
    Type of Medium: Online Resource
    ISSN: 0001-4966 , 1520-8524
    Language: English
    Publisher: Acoustical Society of America (ASA)
    Publication Date: 2016
    ZDB-ID: 1461063-2
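Entry 8 describes auditory stimuli with randomly modulated amplitude and a visual disc with a randomly modulated radius, where the audio-visual correlation is the manipulated variable. Below is a minimal sketch of generating such a matched or mismatched pair; the envelope bandwidth (about 7 Hz), carrier frequency, duration, and video frame rate are assumptions not stated in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def slow_envelope(dur, rate, cutoff, rng):
    """Low-pass filtered noise scaled to the range 0..1 (the random modulator)."""
    b, a = butter(2, cutoff / (rate / 2.0))
    env = filtfilt(b, a, rng.standard_normal(int(dur * rate)))
    env -= env.min()
    return env / env.max()

def audiovisual_pair(dur=4.0, fs=48000, frame_rate=60, fc=500.0, coherent=True, seed=0):
    """Tone with a randomly modulated amplitude plus a disc radius that either
    follows the same envelope (coherent) or an independent one (incoherent)."""
    rng = np.random.default_rng(seed)
    env_rate = 200.0                                   # envelopes generated at a low rate
    env_t = np.arange(int(dur * env_rate)) / env_rate
    aud_env = slow_envelope(dur, env_rate, 7.0, rng)   # ~7 Hz cutoff, an assumed value
    t = np.arange(int(dur * fs)) / fs
    audio = np.interp(t, env_t, aud_env) * np.sin(2.0 * np.pi * fc * t)
    frame_t = np.arange(int(dur * frame_rate)) / frame_rate
    vis_env = aud_env if coherent else slow_envelope(dur, env_rate, 7.0, rng)
    radius = np.interp(frame_t, env_t, vis_env)        # disc radius per video frame, 0..1
    return audio, radius
```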
  • 9
    In: The Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 132, No. 5 ( 2012-11-01), p. EL385-EL390
    Abstract: Listeners are good at attending to one auditory stream in a crowded environment. However, is there an upper limit of streams present in an auditory scene at which this selective attention breaks down? Here, participants were asked to attend one stream of spoken letters amidst other letter streams. In half of the trials, an initial primer was played, cueing subjects to the sound configuration. Results indicate that performance increases with token repetitions. Priming provided a performance benefit, suggesting that stream selection, not formation, is the bottleneck associated with attention in an overcrowded scene. Results' implications for brain-computer interfaces are discussed.
    Type of Medium: Online Resource
    ISSN: 0001-4966 , 1520-8524
    Language: English
    Publisher: Acoustical Society of America (ASA)
    Publication Date: 2012
    ZDB-ID: 1461063-2
  • 10
    In: Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 140, No. 4_Supplement ( 2016-10-01), p. 3267-3267
    Abstract: The sound-induced flash illusion (SIFI) provides a way of testing multisensory integration through perceptual illusion. Studies disagree on the influence of auditory-visual spatial (in)congruence on the SIFI. To better assess the possible influence of spatial proximity, we manipulated the spatial congruence of competing auditory stimuli. Study participants were presented with two timbrally distinct concurrent auditory stimuli of one or two beeps and a visual stimulus composed of one or two flashes. One auditory stimulus always matched the number of visual flashes, and the other did not. The auditory-matching stimulus was manipulated to be either spatially congruent or incongruent with the visual flashes, which could occur centrally or to the left or right. Participants were instructed to report the number of flashes they saw. For half of each session, participants' attention was not directed and they had no knowledge of the flash location on the coming trial, while in the other half a visual spatial cue indicating the location of the visual flashes was presented prior to the trial. We compare and contrast the results with and without spatial attention cueing in order to examine the effect of spatial attention on this multisensory illusion.
    Type of Medium: Online Resource
    ISSN: 0001-4966 , 1520-8524
    Language: English
    Publisher: Acoustical Society of America (ASA)
    Publication Date: 2016
    ZDB-ID: 1461063-2