We describe a model of neural recoding in spatial vision that specifies how the outputs of selected units akin to V1 cells are normalized and combined to signal information about particular stimulus attributes. The recoding portion of the model is linked to psychophysical behavior via a two-stage signal-detection decision module that specifies how the outputs of the combining mechanisms are used in making fine spatial discriminations. We describe how masking and cue-summation experiments isolate each of the processing stages and how earlier results from such studies guided development of the model, and we demonstrate how these procedures permit empirical estimates of model parameters as well as tests of alternative formulations. An important part of our work characterizes two complementary types of higher-level mechanisms isolated from previously published discrimination data: one sums normalized primary-level responses over disparate spatial frequencies to signal precise information about the orientation of a stimulus; the other sums over all orientations to signal the spatial grain of texture-like patterns. We demonstrate how the model accounts for a large body of published discrimination data and present the results of a new quantitative test of model predictions.
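The recoding scheme summarized above can be illustrated with a minimal sketch: divisive normalization of primary-level responses, followed by the two complementary pooling mechanisms. This is not the authors' implementation; the exponent `p`, the constant `sigma`, the global normalization pool, and the toy response grid are all illustrative assumptions.

```python
def normalize(responses, p=2.0, sigma=1.0):
    """Divisively normalize each response by the pooled activity.
    `responses` is a grid: rows = spatial frequencies, columns = orientations.
    The exponent p and semi-saturation constant sigma are assumed values."""
    pooled = sigma ** p + sum(r ** p for row in responses for r in row)
    return [[r ** p / pooled for r in row] for row in responses]

def sum_over_frequencies(norm, orientation_index):
    """Pool normalized responses across spatial frequencies at one
    orientation -- the mechanism signaling stimulus orientation."""
    return sum(row[orientation_index] for row in norm)

def sum_over_orientations(norm, frequency_index):
    """Pool normalized responses across all orientations at one
    spatial frequency -- the mechanism signaling spatial grain."""
    return sum(norm[frequency_index])

# Toy primary-level responses (2 frequencies x 2 orientations).
responses = [[1.0, 0.5],
             [2.0, 0.2]]
norm = normalize(responses)
orientation_signal = sum_over_frequencies(norm, orientation_index=0)
texture_signal = sum_over_orientations(norm, frequency_index=1)
```

In a full model, these pooled signals would feed the two-stage signal-detection decision module that maps mechanism outputs onto discrimination performance.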