In the human brain, many different types of information are mapped systematically across the cortical sheet. However, much of this information is represented at a scale far finer than can be measured in typical fMRI localization experiments. Measuring this fine-scale information could provide important new insights into cognitive functions such as language. In this talk I will summarize recent studies from my laboratory that reveal, in unprecedented detail, how semantic information in speech is represented across the human neocortex. We first used fMRI to record brain activity while subjects listened to stories. We then used a voxel-wise modeling and decoding (VMD) approach to estimate what specific semantic information is represented in each individual voxel. Analysis of the voxel-wise models within and between subjects suggests that all humans share a common, low-dimensional semantic space. Projecting the semantic weights for each voxel onto the cortical surface reveals highly complex maps of semantic representation in the temporal, parietal, and frontal lobes. To identify these complex maps across subjects, we developed a new statistical algorithm that does not require normalization of individual brains into a common cortical space. The resulting maps indicate that semantic information in speech is represented in ~80 distinct functional areas and gradients that are found consistently in all subjects. Our voxel-wise modeling approach provides a powerful new method for mapping the representation of many different perceptual and cognitive processes across the human brain, and for investigating how these representations are modulated by attention, memory, and learning.
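
To make the voxel-wise modeling idea concrete, the sketch below shows one common way such an encoding model can be set up: a regularized linear regression that predicts each voxel's BOLD time course from semantic features of the stimulus, with the fitted weights serving as that voxel's "semantic weights." This is a minimal illustration under assumed inputs (randomly generated placeholders for the stimulus features and fMRI data, and arbitrary dimensions), not the laboratory's actual pipeline or parameter choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical dimensions: T time points, D semantic features, V voxels.
T, D, V = 1200, 300, 2000

# Placeholders for real data:
#   stim_features - semantic feature vector of the story at each time point
#                   (e.g., word-embedding features aligned to the fMRI timing)
#   bold          - recorded BOLD response at each voxel for the same time points
rng = np.random.default_rng(0)
stim_features = rng.standard_normal((T, D))
bold = rng.standard_normal((T, V))

# Hold out the final 20% of time points for evaluation (no shuffling,
# since fMRI data are a time series).
split = int(0.8 * T)
X_train, X_test = stim_features[:split], stim_features[split:]
Y_train, Y_test = bold[:split], bold[split:]

# Fit one regularized linear model per voxel (Ridge handles multi-output Y).
# Each column of model.coef_ is a voxel's semantic weight vector, which can
# later be projected onto that subject's cortical surface.
model = Ridge(alpha=100.0)
model.fit(X_train, Y_train)

# Prediction accuracy on held-out data indicates how well the semantic
# features explain each voxel's response (per-voxel Pearson correlation).
pred = model.predict(X_test)
pred_z = (pred - pred.mean(0)) / pred.std(0)
test_z = (Y_test - Y_test.mean(0)) / Y_test.std(0)
r = (pred_z * test_z).mean(0)
print("best held-out correlation:", r.max())
```

In practice the regularization strength would be chosen by cross-validation and the feature matrix would be built from the actual story transcripts, but the core structure, one linear model per voxel mapping stimulus features to response, is what the abstract refers to as voxel-wise modeling.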