Modeling the role of context and prediction error in encoding variability
Numerous studies of the neural correlates of episodic memory indicate that encoding efficacy during study contributes to whether information will be remembered (Sederberg et al., 2003, 2006, 2007). However, most models of memory, including the temporal context model (TCM) and its variants (TCM-A, Sederberg et al. 2008; CMR, Polyn et al. 2009), lack a clear mechanism for generating variance at encoding, such that the only sources of variance in their simulations are retrieval noise and, possibly, variability in semantic relatedness between words. In this talk I present a new two-stage theory of episodic memory formation inspired by recent neuroimaging work in visual perception (Turk-Browne et al., in prep) and normative models of reinforcement learning (Dayan, 1993). First, the memory system makes a prediction based on the salient features of the current context. Second, the resulting prediction error helps determine what we subsequently encode from our experience and how strongly we encode it. I instantiate this theory in the TCM framework by modifying the traditional Hebbian learning rule so that learning is driven by prediction error, which is yoked to the learning rate to modulate encoding efficacy. I will illustrate the explanatory power of prediction-based encoding variability with simulations and new interpretations of changes in memory performance as a function of a variety of factors, including list position (primacy), spaced vs. massed repetitions, semantic relatedness of items, and mid-list changes in encoding task.
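The core mechanism can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the published model: the context vector, association matrix `M`, and `base_rate` parameter are assumptions for exposition. Each study event generates a prediction from the current context, and the norm of the resulting prediction error scales the Hebbian outer-product update.

```python
import numpy as np

# Illustrative sketch of prediction-error-modulated Hebbian learning in a
# TCM-style framework. All names and parameters here are hypothetical.

rng = np.random.default_rng(0)
n = 8                      # feature dimensionality
M = np.zeros((n, n))       # context-to-item association matrix

def encode(M, item, context, base_rate=0.1):
    """One study event: predict the item from context, then strengthen
    the item-context association in proportion to the prediction error."""
    prediction = M @ context                 # what the current context predicts
    error = item - prediction                # prediction error vector
    eta = base_rate * np.linalg.norm(error)  # error yoked to the learning rate
    M = M + eta * np.outer(item, context)    # modulated Hebbian update
    return M, np.linalg.norm(error)

context = rng.normal(size=n); context /= np.linalg.norm(context)
item = rng.normal(size=n); item /= np.linalg.norm(item)

# Massed repetitions of the same item in an unchanged context: the
# prediction error shrinks with each presentation, so later repetitions
# are encoded more weakly.
errors = []
for _ in range(3):
    M, e = encode(M, item, context)
    errors.append(e)
```

Under this toy setup the error declines geometrically across massed repetitions, which gives one intuition for why spaced repetitions (where context has drifted and prediction error is restored) are encoded more strongly than massed ones.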