Figure 6.
Bayesian model.

*A*, Message-passing algorithm for the full model. Run length (*r*) refers to the number of data points obtained previously from the current generative distribution. On each trial, either the generative distribution changes and *r* is reset to zero, or it does not change and *r* is incremented by one. After *t* trials, the algorithm must maintain and update *t* + 1 predictive distributions (one for each possible *r*) as well as the probability distribution over these possible values of *r*.

*B*, Message-passing algorithm for the reduced model. Instead of considering all possible values of *r*, the model considers only the possibility that a change point did occur (represented by solid lines from *r* = 0 to *r* = 1) or did not occur (represented by all other solid lines). Posterior probabilities of these two alternatives are computed according to Bayes' rule and then combined by taking the expected run length, *r̂* (small, gray, filled circles). Only a single, approximate predictive distribution is maintained and updated on a trial-by-trial basis. This approach greatly reduces computational complexity and leads the algorithm to take the form of a delta rule (see Materials and Methods).

*C*, Learning rates used by the reduced Bayesian model can be described analytically in terms of *r̂* and change-point probability. Lines indicate the relationship between learning rate and change-point probability for a given *r̂* (darker lines correspond to larger *r̂*). The dotted black line reflects the theoretical limit of the function as *r̂* goes to infinity.

*D*, Performance of subjects and models. Mean absolute errors made by the full Bayesian model (FB), the reduced Bayesian model (RB), a delta-rule model using the best fixed learning rate possible for each session (FA), subjects (S), and a delta-rule model using subject learning rates in random order (rS).
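As a concrete illustration of the reduced model in *B*, the following Python sketch performs one trial of the delta-rule update. The specific likelihood choices (Gaussian if no change, uniform over the outcome range if a change occurred), the default variance, hazard rate, and outcome range, and the particular learning-rate form α = Ω + (1 − Ω)/(*r̂* + 1) (which approaches Ω as *r̂* → ∞, consistent with the dotted line in *C*) are illustrative assumptions standing in for the equations in the paper's Materials and Methods, not the authors' exact formulation.

```python
import math

def reduced_bayesian_update(mu, r_hat, x, sigma2=100.0, hazard=0.1,
                            outcome_range=300.0):
    """One trial of a reduced Bayesian delta-rule update (illustrative sketch).

    mu            : current prediction (single approximate predictive mean)
    r_hat         : expected run length
    x             : newly observed outcome
    sigma2        : assumed variance of the generative distribution
    hazard        : assumed prior change-point probability per trial
    outcome_range : assumed width of the uniform outcome space
    """
    # Likelihood of x if the generative distribution did NOT change: Gaussian
    # centered on the current prediction.
    like_stay = (math.exp(-(x - mu) ** 2 / (2.0 * sigma2))
                 / math.sqrt(2.0 * math.pi * sigma2))
    # Likelihood of x if a change point DID occur: uniform over the outcome range.
    like_change = 1.0 / outcome_range
    # Posterior change-point probability via Bayes' rule.
    cpp = (like_change * hazard) / (like_change * hazard
                                    + like_stay * (1.0 - hazard))
    # Learning rate: near 1 when a change is likely (fully adopt the new sample);
    # otherwise ~1/(r_hat + 1), the weight a running average gives its newest point.
    alpha = cpp + (1.0 - cpp) / (r_hat + 1.0)
    # Delta-rule update of the single predictive mean.
    mu_new = mu + alpha * (x - mu)
    # Expected run length: reset toward 1 on a change, increment otherwise
    # (the combination step marked by the gray filled circles in panel B).
    r_new = cpp * 1.0 + (1.0 - cpp) * (r_hat + 1.0)
    return mu_new, r_new, cpp, alpha
```

An outcome near the current prediction yields a small change-point probability and a small, running-average-like learning rate, while a surprising outcome drives both toward 1, so the prediction jumps to the new sample and the run length resets.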