User profiles for R. K. Maddox

Ross K Maddox

University of Michigan
Verified email at umich.edu
Cited by 1718

Using neuroimaging to understand the cortical mechanisms of auditory selective attention

AKC Lee, E Larson, RK Maddox… - Hearing research, 2014 - Elsevier
Over the last four decades, a range of different neuroimaging tools have been used to study
human auditory attention, spanning from classic event-related potential studies using …

Defining auditory-visual objects: behavioral tests and physiological mechanisms

JK Bizley, RK Maddox, AKC Lee - Trends in neurosciences, 2016 - cell.com
Crossmodal integration is a term applicable to many phenomena in which one sensory
modality influences task performance or perception in another sensory modality. We distinguish …

Hierarchical cross-modal talking face generation with dynamic pixel-wise loss

L Chen, RK Maddox, Z Duan… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
We devise a cascade GAN approach to generate talking face video, which is robust to different
face shapes, view angles, facial characteristics, and noisy audio conditions. Instead of …

Lip movements generation at a glance

L Chen, Z Li, RK Maddox, Z Duan… - Proceedings of the …, 2018 - openaccess.thecvf.com
Cross-modality generation is an emerging topic that aims to synthesize data in one modality
based on information in a different modality. In this paper, we consider a task of such: given …

The parallel auditory brainstem response

MJ Polonenko, RK Maddox - Trends in hearing, 2019 - journals.sagepub.com
… The same measures were quantified for 38% of the responses by the other author (RKM).
The intraclass correlation coefficient (ICC3) for each frequency and measure was ≥ 0.9 (the …

Integration of visual information in auditory cortex promotes auditory scene analysis through multisensory binding

H Atilgan, SM Town, KC Wood, GP Jones, RK Maddox… - Neuron, 2018 - cell.com
How and where in the brain audio-visual signals are bound to create multimodal objects
remains unknown. One hypothesis is that temporal coherence between dynamic multisensory …

Auditory selective attention is enhanced by a task-irrelevant temporally coherent visual stimulus in human listeners

RK Maddox, H Atilgan, JK Bizley, AKC Lee - Elife, 2015 - elifesciences.org
In noisy settings, listening is aided by correlated dynamic visual
cues gleaned from a talker's face—an improvement often attributed to visually reinforced …

Generating talking face landmarks from speech

SE Eskimez, RK Maddox, C Xu, Z Duan - … /ICA 2018, Guildford, UK, July 2 …, 2018 - Springer
The presence of a corresponding talking face has been shown to significantly improve
speech intelligibility in noisy conditions and for the hearing-impaired population. In this paper, we …

Auditory brainstem responses to continuous natural speech in human listeners

RK Maddox, AKC Lee - eneuro, 2018 - eneuro.org
Speech is an ecologically essential signal, whose processing crucially involves the subcortical
nuclei of the auditory brainstem, but there are few experimental options for studying these …

Directing eye gaze enhances auditory spatial cue discrimination

RK Maddox, DA Pospisil, GC Stecker, AKC Lee - Current Biology, 2014 - cell.com
The present study demonstrates, for the first time, a specific enhancement of auditory spatial
cue discrimination due to eye gaze. Whereas the region of sharpest visual acuity, called the …