Research

Research over the past decades has clarified how early representations of visual inputs in retinotopic visual areas are transformed into compact, high-level visual object codes in inferotemporal (IT) cortex. But how these high-level visual object codes are transformed into even higher-level semantic codes that can directly guide internal action goals remains unclear. We propose to tackle the problem of semantic visual coding through experiments leveraging the macaque face patch system. Two key questions are: 1) What is the high-level semantic code for objects in the brain? 2) How can high-level semantic representations reactivate or modulate sensory representations?

Functional magnetic resonance imaging (fMRI) studies have shown that visual imagery activates a network of frontoparietal regions, as well as visual areas that respond selectively to faces and scenes during passive viewing. But the coarse spatial and temporal resolution of fMRI gives only a blurry picture of the underlying neural activity. We propose to use fMRI-guided electrophysiology to investigate the detailed neural dynamics underlying semantic representation across prefrontal cortex, the medial temporal lobe, and inferotemporal cortex. Our ability to target face patches gives us powerful leverage to dissect how high-level sensory cortex responds to internally driven, “offline” activation of sensory representations.