Integration of visual and auditory information is commonplace in both humans and rhesus macaques, and is deficient in individuals with ASD. (A) Behavioral and fMRI studies reveal differences in multisensory integration in ASD. Left, ASD and TD individuals perform similarly when discriminating speech sounds using auditory information alone, but ASD individuals are significantly impaired relative to TD individuals when visual information is added to the task. Speech stimuli consisted of short sentences read aloud, overlaid on a background of auditory noise. Y-axis, speech reception threshold: the speech-to-noise ratio at which individuals accurately report the speech signal. More negative values indicate better performance. Right, activity in the superior temporal sulcus (STS) during audiovisual integration of speech is absent in ASD subjects. Images modified from [46, 47]. (B) Single neurons in the rhesus macaque STS integrate audiovisual information during perception of meaningful vocalizations. Left, image and corresponding spectrogram of a rhesus macaque producing a coo vocalization. The black dot on a gray background is a visual control stimulus. Right, firing of a single STS neuron in response to hearing a coo (green), observing a coo (blue), or simultaneously hearing and observing a coo (red). Y-axis indicates the firing rate of the neuron (spikes/second); X-axis indicates time, with the coo stimulus presented at time zero. Note that higher neuronal firing is elicited when auditory and visual information are presented simultaneously. Images reproduced from .