Introduction
When making a perceptual decision, we commit to a proposition about a specific world state. One basic form of such commitment is binary classification: deciding whether an item's magnitude lies on the smaller or larger side of the magnitude distribution across items of interest (Fig. 1A). For example, when uttering “this tree is tall” while walking in a wood, we implicitly judge the height of that tree to exceed the typical height of the trees in the wood (Klein, 1980; Bierwisch, 1989), where the “typical height” serves as the boundary dividing the “short” and “tall” classes. Binary classification of this kind is exercised in our daily language use whenever we modify a subject with relative adjectives (Rips and Turnbull, 1980; Tribushinina, 2011; Solt, 2015; Lassiter and Goodman, 2017), and it has been adopted as an essential paradigm for studying perceptual decision-making (Lages and Treisman, 1998; Grinband et al., 2006; Kepecs et al., 2008; Nahum et al., 2010; Lak et al., 2014; Bosch et al., 2020; Hachen et al., 2021).
Figure 1. Two contending hypotheses on the origin of the repulsive bias in binary classification. A, Task structure (left) and statistical knowledge (right) for binary classification. For any given item, its class is determined by its position relative to the class boundary in the distribution of feature magnitudes relevant to a given task (e.g., a tree is classified as “tall” if its height falls on the side greater than the typical height of the trees in the wood of interest). Because of this relativity, “biased sensory encoding” and “biased knowledge about the boundary position” due to previous stimuli are, in principle, on an equal footing in inducing the repulsive bias. B, Sensory-adaptation hypothesis. It points to the adaptation of a low-level stimulus-encoding signal to past stimuli (arrow 1) as the origin of the repulsive bias (arrow 2). In the case of visual classification tasks, the task-relevant sensory signals in the early visual cortex (blue patch), which are subject to adaptation, have been hypothesized to mediate the repulsive bias. C, Boundary-updating hypothesis. It points to the attractive shift of a classifier's internal class boundary toward previous stimuli (arrow 3) as the origin of the repulsive bias (arrow 4). Such boundary-representing signals are expected to reside not in the early sensory cortex but in the high-tier associative cortices (red patch).
Humans and nonhuman animals show various forms of history bias in binary classification. One frequent form of such history biases is the tendency to classify an item into the class opposite to that of its preceding items, dubbed the repulsive bias (Lages and Treisman, 1998, 2010; Bosch et al., 2020; Hachen et al., 2021). For instance, we tend to classify a tree of intermediate height as “tall” after seeing a short tree. Why and how the repulsive bias occurs remains unclear.
In the most straightforward scenario for repulsive bias, previous stimuli may repel our perception of the current stimulus away from themselves because the sensory system adapts to earlier stimuli (Gibson and Radner, 1937; Stocker and Simoncelli, 2006; Clifford et al., 2007; Knapen et al., 2010; Pavan et al., 2012; Morgan, 2014; Nakashima and Sugita, 2017; Fig. 1B). According to this “sensory-adaptation” hypothesis, the current tree is classified as “tall” in a biased manner because the sensory system's adaptation to the previous short tree makes the current tree appear taller than its physical height. An alternative scenario, however, considers the possibility that the internal class boundary adaptively shifts toward recent samples of property magnitude (Treisman and Williams, 1984; Lages and Treisman, 1998, 2010; Dyjas et al., 2012; Raviv et al., 2014; Norton et al., 2017; Hachen et al., 2021; Fig. 1C). According to this “boundary-updating” hypothesis, the current tree is classified as “tall” in a biased manner because the shift of the class boundary toward the previous short tree places the current tree on the taller side of the boundary.
As discussed previously (Hachen et al., 2021), it is hard to assess which hypothesis is more viable based on behavioral data. This difficulty arises because binary classification is inherently relative: it depends on the relation between the perceived stimulus and the class boundary, so an identical bias in classification can be caused either by sensory adaptation or by boundary updating. However, the two hypotheses involve distinct neural routes through which the repulsive bias transpires. The sensory-adaptation hypothesis predicts that the sensory brain signals subject to adaptation, such as those in the early sensory cortex with substantive adaptation to earlier stimuli, contribute to the choice variability. By contrast, the boundary-updating hypothesis predicts that the brain signals of the shifting boundary, such as those in the high-tier cortices involved in the working memory of previous stimuli, contribute to the choice variability.
Here, we tested these two predictions by analyzing functional magnetic resonance imaging (fMRI) data. We found that the stimulus-encoding signal in primary visual cortex (V1) exhibited adaptation, but its adaptation-induced bias was dissociated from current choices. By contrast, the boundary-representing signals in the posterior-superior-temporal gyrus and the inferior-parietal lobe not only shifted toward previous stimuli but also covaried with current choices. Our findings contribute to the resolution of the competing ideas regarding the source of the repulsive bias by providing the first neural evidence supporting the boundary-updating scenario.
Materials and Methods
The data of experiment 1 (Exp1) and experiment 2 (Exp2) were acquired from 19 (nine females, aged 20–30 years) and 18 (nine females, aged 20–30 years) participants, respectively. Seventeen participants took part in both experiments. The Research Ethics Committee of Seoul National University approved the experimental procedures. All participants gave informed consent and were naive to the purpose of the experiments. High-spatial-resolution images were acquired only from the early visual cortex in Exp1, whereas the images in Exp2 were acquired from the entire brain at a conventional spatial resolution. The 17 people who provided data for both experiments participated in three to six behavior-only sessions for training and stimulus calibration, one fMRI session for retinotopy, and two experimental fMRI sessions (one for each experiment). The remaining participants completed the behavioral and retinotopy fMRI sessions with the same protocols but participated in only one of the two experiments.
The data from Exp1 were used in our previous work (Choe et al., 2014). The data from Exp2 have never been used in any previous publication. In the current paper, we describe some basic procedures of Exp1; for more details, please refer to the original work (Choe et al., 2014).
Experimental setup
MRI data were collected using a 3 Tesla Siemens Tim Trio scanner equipped with a 12-channel Head Matrix coil at the Seoul National University Brain Imaging Center. Stimuli were generated using MATLAB (MathWorks) in conjunction with MGL (http://justingardner.net/mgl) on a Macintosh computer. Observers looked through an angled mirror attached to the head coil to view the stimuli displayed via an LCD projector (Canon XEED SX60) onto a back-projection screen at the end of the magnet bore at a viewing distance of 87 cm, yielding a field of view of 22 × 17°.
Behavioral data acquisition
Figure 2 illustrates the experimental procedures. On each trial, the observer initially viewed a small fixation dot (diameter in visual angle, 0.12°; luminance, 321 cd/m²) appearing at the center of a dark (luminance, 38 cd/m²) screen. A slight increase in the size of the fixation dot (from 0.12° to 0.18° in diameter), which was readily detected with foveal vision, forewarned the observer of an upcoming presentation of a test stimulus. The test stimulus was a brief (0.3 s) presentation of a thin (full-width at half-maximum of a Gaussian envelope, 0.17°), white (321 cd/m²), dashed (radial frequency, 32 cycles/360°) ring that counter-phase-flickered at 10 Hz. After each presentation, participants classified the ring size as “small” or “large” using a left-hand or right-hand key, respectively, within 1.5 s from stimulus onset. They were instructed to maintain strict fixation on the fixation dot throughout experimental runs. This behavioral task was performed in three different environments, in the following order: (1) the training sessions, (2) the practice runs of trials inside the MR scanner, and (3) the main scan runs inside the MR scanner.
In the training sessions, participants practiced the task intensively over several (three to six) sessions (∼1000 trials per session) in a dim room outside the scanner until they reached an asymptotic level of accuracy. Note that we opted to train observers with the stimuli that were much larger than those for the main experiments (mean radius of 9°) to avoid any unwanted perceptual learning effects at low sensory levels and to train participants to learn the task structure of classification.
In the practice runs inside the MR scanner, participants performed 54 practice trials and then 180 threshold-calibration trials while lying in the magnet bore. On each threshold-calibration trial (consecutive trials were separated by 2.7 s), one of 20 different-sized rings was presented according to a multiple random staircase procedure (four randomly interleaved one-up-two-down staircases, two starting from the easiest stimulus and the other two starting from the hardest one) with trial-to-trial feedback based on a class boundary with a radius of 2.84°. A Weibull function was fit to the psychometric curves obtained from the threshold-calibration trials using a maximum-likelihood procedure. From the fitted Weibull function, the threshold difference in size (Δ in Fig. 2B) associated with a 70.7% proportion of correct responses was estimated. With this threshold estimated for each participant, three threshold-level ring sizes were individually tailored: 2.84−Δ° (S-ring), 2.84° (M-ring), and 2.84+Δ° (L-ring).
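The threshold-estimation step can be sketched as follows. This is a minimal illustration, not the authors' code: the Weibull parameterization with a 0.5 guess rate and the helper names `weibull` and `fit_threshold` are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def weibull(x, alpha, beta, gamma=0.5):
    """Weibull psychometric function with guess rate gamma (0.5 for a binary task)."""
    return gamma + (1 - gamma) * (1 - np.exp(-(x / alpha) ** beta))

def neg_log_lik(params, x, correct):
    alpha, beta = np.abs(params)  # keep scale and slope positive
    p = np.clip(weibull(x, alpha, beta), 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

def fit_threshold(x, correct, target=0.707):
    """Maximum-likelihood Weibull fit; returns the size difference at the
    `target` proportion correct (70.7%, the 1-up-2-down convergence point)."""
    res = minimize(neg_log_lik, x0=[np.median(x), 2.0],
                   args=(x, correct), method="Nelder-Mead")
    alpha, beta = np.abs(res.x)
    # invert the Weibull: p = target -> x = alpha * (-ln((1-target)/(1-gamma)))^(1/beta)
    return alpha * (-np.log((1 - target) / 0.5)) ** (1 / beta)
```

Here `x` would hold the per-trial size differences and `correct` the binary accuracies from the 180 calibration trials; the returned value plays the role of Δ.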
In the main scan runs, one of these rings with threshold-level differences was presented on each trial in the order defined by an m-sequence (base = 3, power = 3; nine S-rings, nine L-rings, and eight M-rings were presented per run; all scan runs started with two M-rings; Buracas and Boynton, 2002) to null the autocorrelation between stimuli. Participants were not informed of the existence of the M-ring. Importantly, participants did not receive trial-to-trial feedback. Instead, only their run-averaged percent correct on the S-ring and L-ring trials was shown during a break after each run, to prevent trial-to-trial feedback from evoking any unwanted brain responses associated with rewards (Marco-Pallarés et al., 2007; Carlson et al., 2011) or errors (Carter et al., 1998; Holroyd et al., 2004; Cavanagh and Frank, 2014). Consecutive trials were separated by 13.2 s. In the main scan runs of Exp1 and Exp2, observers performed 156 (six runs × 26 trials) and 208 (eight runs × 26 trials) trials in total, respectively.
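For readers who want to reproduce the stimulus ordering, a base-3, power-3 m-sequence can be sketched as below. The original study followed Buracas and Boynton (2002); the brute-force search over recurrence coefficients and the helper name `mseq_base3` are our own illustrative shortcut. A full-period sequence of length 3³ − 1 = 26 contains each nonzero symbol 9 times and the zero symbol 8 times, matching the nine S-rings, nine L-rings, and eight M-rings per run.

```python
from itertools import product

def mseq_base3(power=3):
    """Brute-force search for a full-period (length 3**power - 1) ternary
    m-sequence generated by a linear recurrence over GF(3)."""
    n = 3 ** power - 1  # 26 for power = 3
    for coefs in product(range(3), repeat=power):
        state = (0,) * (power - 1) + (1,)  # any nonzero seed works
        seq, seen = [], set()
        while state not in seen and len(seen) < n:
            seen.add(state)
            seq.append(state[-1])
            new = sum(c * s for c, s in zip(coefs, state)) % 3
            state = state[1:] + (new,)
        if len(seen) == n:  # visited every nonzero state: full period
            return seq
    return None
```

In the experiment, the zero symbol would map onto the M-ring and the two nonzero symbols onto the S- and L-rings.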
MRI equipment and acquisition
We acquired three types of MRI images. (1) 3D, T1-weighted, whole-brain images were acquired at the beginning of each functional session: MPRAGE; resolution, 1 × 1 × 1 mm; field of view (FOV), 256 mm; repetition time (TR), 1.9 s; inversion time (TI), 700 ms; echo time (TE), 2.36 ms; flip angle (FA), 9°. (2) 2D, T1-weighted, in-plane images were acquired at the beginning of each functional session. The parameters for the retinotopy mapping, the V1 mapping, and the whole-brain mapping differed slightly, as follows (listed in that order): MPRAGE; resolution, 1.078 × 1.078 × 2.0 mm, 1.083 × 1.083 × 2.3 mm, 1.08 × 1.08 × 3.3 mm; TR, 1.5 s; TI, 700 ms; TE, 2.79 ms; FA, 9°. (3) 2D, T2*-weighted, functional images were acquired during each functional session: gradient EPI; TR, 2.7, 2.2, 2.2 s; TE, 40 ms; FA, 77°, 73°, 73°; FOV, 208, 207, 208 mm; image matrix, 104 × 104, 90 × 90, 90 × 90; slice thickness, 1.8 mm with 11% gap, 2 mm with 15% gap, 3 mm with 10% gap; slices, 30, 22, 32 oblique transverse slices; bandwidth, 858, 750, 790 Hz/px; effective voxel size, 2.0 × 2.0 × 1.998 mm, 2.3 × 2.3 × 2.3 mm, 3.25 × 3.25 × 3.3 mm.
Retinotopy-mapping protocol
Standard traveling wave methods (Engel et al., 1994; Sereno et al., 1995) were used to define V1, to estimate each participant's hemodynamic impulse response function (HIRF) of V1, and to estimate V1 voxels' receptive field center and width. High-contrast, flickering (1.33 Hz) dartboard patterns were presented either as 0.89°-thick expanding or contracting rings (two scan runs), as 40°-wide clockwise or counterclockwise rotating wedges (four runs), or as four stationary, 15°-wide wedges forming two bowties centered on the vertical and horizontal meridians (one run). Each scanning run consisted of nine repetitions of a 27-s stimulation period. Fixation during the scans was assured by monitoring participants' performance on a fixation task in which they had to detect any reversal in the direction of a small dot rotating around the fixation.
Data preprocessing of V1 images in the retinotopy-mapping session and the main session of Exp1
All functional EPI images were motion-corrected using SPM8 (http://www.fil.ion.ucl.ac.uk/spm; Friston et al., 1996; Jenkinson et al., 2002) and then co-registered to the high-resolution reference anatomic volume of the same participant's brain via the high-resolution in-plane image (Nestares and Heeger, 2000). After co-registration, the images of the retinotopy-mapping scan were resliced, but not spatially smoothed, to the spatial dimensions of the main experimental scans. The area V1 was manually defined on the flattened gray-matter cortical surface, mainly based on the meridian representations, resulting in 825.4 ± 140.7 (mean ± SD across observers) voxels. The individual voxels' time series were divided by their means to convert them from arbitrary intensity units to percentage modulations, and were linearly detrended and high-pass filtered (Smith et al., 1999) using custom scripts in MATLAB (MathWorks). The cutoff frequency was 0.0185 Hz for the retinotopy-mapping session and 0.0076 Hz for the main session. The first 10 (of 90; the length of a cycle) and 6 (of 156; the length of a trial) frames of each run of the retinotopy-mapping session and the main session, respectively, were discarded to minimize the effect of transient magnetic saturation and to allow the hemodynamic response to reach a steady state. The “blood-vessel-clamping” voxels, which show unusually high variances of fMRI responses, were discarded (Olman et al., 2007; Shmuel et al., 2007); a voxel was classified as “blood-vessel-clamping” if its variance exceeded 10 times the median variance across all voxels. As the final step of data preprocessing, we removed a stimulus-nonspecific (untuned) component from the detrended BOLD time series by subtracting the across-eccentricity-bin average from the individual bins' time series at each time frame t, which resulted in the tuned responses (TRi):
TRi(t) = RRi(t) − (1/ne) Σj=1..ne RRj(t), where RRi is the i-th bin's BOLD time series, and ne is the number of eccentricity bins (21). This subtraction procedure is exactly the same as in our previous work (Choe et al., 2014). We used TRi(t) to extract the size-encoding signal in V1.
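The untuned-component subtraction amounts to mean-centering across eccentricity bins at every time frame. A minimal numpy sketch, with a hypothetical helper name:

```python
import numpy as np

def tuned_responses(RR):
    """RR: (n_bins, n_frames) array of detrended BOLD series, one row per
    eccentricity bin. Subtracting the across-bin mean at each time frame
    removes the stimulus-nonspecific (untuned) component."""
    return RR - RR.mean(axis=0, keepdims=True)
```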
Data preprocessing of whole-brain images in the main session of Exp2
The whole-brain images of the participants in Exp2 were normalized to the MNI template in the following steps: motion correction, co-registration to whole-brain anatomic images via the in-plane images (Nestares and Heeger, 2000), spike elimination, slice timing correction, resampling to 3 × 3 × 3-mm voxel size with the SPM DARTEL Toolbox (Ashburner, 2007). Spatial smoothing was not applied to avoid the blurring of the patterns of activity. All the procedures were implemented using SPM8 and SPM12 (https://www.fil.ion.ucl.ac.uk/spm-statistical-parametric-mapping/; Friston et al., 1996; Jenkinson et al., 2002), except for spike elimination, for which we used the AFNI toolbox (Cox, 1996). The first 6 frames of each functional scan, which correspond to the first trial of each run, were discarded to allow the hemodynamic responses to reach a steady state. Then, the normalized BOLD time series at each voxel, each run, and each brain underwent linear detrending, high-pass filtering (0.0076-Hz cutoff frequency with a Butterworth filter), conversion into percent-change signals, and correction for non-neural nuisance signals, which was done by regressing out the mean BOLD activity of CSF.
The anatomic masks of CSF, white matter, and gray matter were defined by generating the probability tissue maps for individual participants from T1-weighted images, by smoothing those maps to the normalized MNI space using SPM12, and then by averaging them across participants. Finally, the masks were defined as respective groups of voxels whose probabilities exceed 0.5.
Unfortunately, in a few sessions, the functional images did not cover the entire brain. In particular, the missing coverage was much larger in one participant's session than in the others, including the orbitofrontal cortex and posterior cerebellum. Thus, to avoid losing too many voxels for analysis because of this single session, we slightly relaxed the voxel-selection criterion by including the voxels that were shared by >16 brains in the normalized MNI space. As a result, some voxels in the temporal pole, ventral orbitofrontal cortex, and posterior cerebellum were excluded from data analysis.
Estimation of the eccentricities in retinotopic space for V1 voxels
For each V1 voxel in Exp1, its eccentricity (e, as shown in Fig. 3E,H) was defined by fitting a one-dimensional Gaussian function simultaneously to the time-series of fMRI responses to the expanding and contracting ring stimuli in the retinotopy session, which were also used for the definition of V1. The essence of this procedure is as follows (additional details can be found in the original paper; Choe et al., 2014).
First, the time series of fMRI were extracted only from a relevant group of voxels with SNR > 3 in both of the ring scan runs. Second, an eccentricity-tuning curve (gain over eccentricity, in other words) of a single voxel, g(ε), was modeled by a Gaussian as a function of the eccentricity in a visuotopic space, ε, and it was parameterized by a peak eccentricity, e, and a tuning width, σ:
ge(ε) = exp(−(ε − e)² / (2σ²)).
Third, the collective responses of the visual neurons within that voxel with a particular g(ε) at a given time frame t, n(t), were predicted by multiplying g(ε) by the spatial layout of the stimulus input at that time frame, s(ε,t):
n(t) = Σε s(ε,t)·g(ε).
Fourth, the predicted time series of fMRI responses of that voxel, fMRIp(t), were generated by convolving n(t) with a scaled (by β) copy of the HIRF acquired from the meridian scans, β·h(t), plus a baseline response, b:
fMRIp(t) = n(t) ∗ [β·h(t)] + b.
Fifth, the eccentricity e and the other model parameters (σ, β, b) were found by fitting fMRIp(t) to the observed time series of fMRI responses to the actual stimulation, fMRIo(t), by minimizing the residual sum of squared errors between fMRIp(t) and fMRIo(t) over all time frames, RSS:
RSS = Σt (fMRIo(t) − fMRIp(t))².
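The five steps above can be sketched as a least-squares fit. This is an illustrative reimplementation under simplifying assumptions, not the authors' code: the stimulus layout, the HIRF shape, and the multi-start grid over candidate eccentricities are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def predict_bold(params, stim, hirf, ecc_grid):
    """stim: (n_frames, n_ecc) layout of the ring stimulus over eccentricity;
    hirf: 1D hemodynamic impulse response kernel."""
    e, sigma, beta, b = params
    gain = np.exp(-((ecc_grid - e) ** 2) / (2 * sigma ** 2))  # g_e(eps)
    n = stim @ gain                                           # neural drive n(t)
    return beta * np.convolve(n, hirf)[: len(n)] + b          # n(t) * beta*h(t) + b

def fit_voxel(bold_obs, stim, hirf, ecc_grid):
    """Minimize the residual sum of squares over (e, sigma, beta, b),
    restarting from several candidate eccentricities."""
    def rss(params):
        return np.sum((bold_obs - predict_bold(params, stim, hirf, ecc_grid)) ** 2)
    fits = [minimize(rss, [e0, 1.0, 1.0, 0.0], method="Nelder-Mead")
            for e0 in np.linspace(ecc_grid.min(), ecc_grid.max(), 5)]
    return min(fits, key=lambda r: r.fun).x  # e, sigma, beta, b
```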
Extraction of the size-encoding signal from V1 voxels
The three different weighting profiles, each representing the contributions of the individual eccentricity bins assessed by the three different schemes (the uniform, the discriminability, and the log-likelihood ratio schemes), were defined as follows. The uniform scheme (Fig. 4B, blue) assigned three discrete values to the eccentricity bins depending on which flanking side of the M-ring (rM) their preferred eccentricities (e) belonged to:
w(e) = −1 for e < rM; 0 for e = rM; +1 for e > rM.
The discriminability scheme (Fig. 4B, red) defined the weights in proportion to the differential responses of given eccentricity bins to the L (rL) and the S-rings (rS), which were derived from the eccentricity-tuning curves defined from the retinotopy-mapping session:
w(e) = ge(rL) − ge(rS) − δ, where ge is the eccentricity-tuning curve of the eccentricity bin with preferred eccentricity e, and the baseline offset δ is as follows:
δ = Σe [ge(rL) − ge(rS)] / ne.
The log-likelihood ratio scheme (Fig. 4B, yellow) defined the weights by taking the difference between the log-likelihood of obtaining a given response if the stimulus were the L-ring, log LL, and that if the stimulus were the S-ring, log LS. Because the eccentricity-tuning curves were assumed to be Gaussian, the log-likelihood-ratio weight at preferred eccentricity e simplifies to the following formula:
w(e) = log LL − log LS = −(e − rL)²/(2σL²) + (e − rS)²/(2σS²) − δ, where σL and σS are the tuning widths associated with rL and rS, and the baseline offset δ is as follows:
δ = Σe [−(e − rL)²/(2σL²) + (e − rS)²/(2σS²)] / ne.
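The three weighting schemes can be sketched in numpy as follows. This is a minimal illustration; the argument names and the assumption that σL and σS are scalars are ours.

```python
import numpy as np

def weight_profiles(e, g_L, g_S, sigma_L, sigma_S, r_M, r_L, r_S):
    """e: preferred eccentricities of the bins; g_L, g_S: each bin's tuning-curve
    responses to the L- and S-rings; sigma_L, sigma_S: tuning widths at r_L, r_S."""
    # uniform scheme: -1 / 0 / +1 by which side of the M-ring e falls on
    w_uniform = np.sign(e - r_M)
    # discriminability scheme: differential response to the L- vs S-ring,
    # minus the across-bin mean (the baseline offset delta)
    diff = g_L - g_S
    w_discrim = diff - diff.mean()
    # log-likelihood-ratio scheme, also offset by its across-bin mean
    llr = -(e - r_L) ** 2 / (2 * sigma_L ** 2) + (e - r_S) ** 2 / (2 * sigma_S ** 2)
    w_llr = llr - llr.mean()
    return w_uniform, w_discrim, w_llr
```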
A Bayesian model of boundary-updating (BMBU)
The generative model
The generative model is the observers' causal account for noisy sensory measurements, where the true ring size, S, causes a noisy sensory measurement on a current trial, m(t), which becomes noisier as i trials elapse, thus turning into a noisy retrieved measurement of the value of S on trial t−i, r(t−i) (Fig. 5D). Hence, the generative model can be specified with the following three probabilistic terms: a prior of S, p(S), a likelihood of S given m(t), p(m(t)|S), and a likelihood of S given r(t−i), p(r(t−i)|S). These three terms were all modeled as normal distribution functions, the shape of which is specified with mean and standard deviation parameters, μ and σ: μ0 and σ0 for the prior, μm(t) and σm(t) for the likelihood for m(t), and μr(t−i) and σr(t−i) for the likelihood for r(t−i). The mean parameters of the two likelihoods, μm(t) and μr(t−i), are identical to m(t) and r(t−i); therefore, the parameters that must be learned are reduced to μ0, σ0, σm(t), and σr(t−i).
σm(t) is assumed to be invariant across different values of m(t), as well as across trials; therefore, σm(t) reduces to a constant σm. Finally, because σr(t−i) is assumed to originate from σm and to increase as trials elapse (Gorgoraptis et al., 2011; Zokaei et al., 2015), σr(t−i) is also reduced to the following parametric function: σr(t−i) = σm(1 + κ)^i, where κ > 0. As a result, the generative model is completely specified by the four parameters, Θ = {μ0, σ0, σm, κ}.
The primary purpose of BMBU is to build a generative Bayesian model which allows us to estimate the trial-to-trial latent states of the class boundary variable that are likely to be used by human observers whose class boundary is continually attracted to previous stimuli as posited by the boundary-updating hypothesis on “repulsive bias.” In doing so, we intended to build a parsimonious model with minimal free parameters as long as the model implements the strategy essential to the boundary-updating hypothesis. For this reason, we had to introduce several arbitrary assumptions in building BMBU. For example, although we assumed that memory precision decays exponentially, other forms of decay function are also possible, such as hyperbolic, power, and logarithmic ones. We also assumed that the noisy sensory measurement on a current trial, m(t), becomes the noisy retrieved measurement of the value of S as trials elapse. However, it is equally possible that the memory measurements of S in the elapsed trials can be retrieved independently from the sensory measurement used for decision-making. Whether or not these assumptions are valid might be an interesting research question but is beyond the scope of the current work, especially in that the alternative assumptions about such detailed modeling aspects are unlikely to affect the way BMBU shifts the class boundary toward previous stimuli.
Stimulus inference (s)
A Bayesian estimate of the value of S on a current trial, s(t), was distributed as a posterior function of a given sensory measurement m(t):
p(s(t)) = p(S|m(t)) ∝ p(m(t)|S)·p(S).
The posterior p(S|m(t)) is a conjugate normal distribution of the prior and likelihood of S given the evidence m(t) whose mean μs(t) and standard deviation σs(t) were calculated as follows (Fig. 5D):
μs(t) = (σ0²·m(t) + σm²·μ0) / (σ0² + σm²); σs(t) = σ0σm / √(σ0² + σm²).
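This conjugate-normal update can be sketched as follows (the helper name is hypothetical):

```python
import numpy as np

def stimulus_posterior(m, mu0, sigma0, sigma_m):
    """Conjugate normal posterior p(S | m), combining the prior N(mu0, sigma0)
    with the likelihood of the sensory measurement m (noise sigma_m)."""
    mu_s = (sigma0 ** 2 * m + sigma_m ** 2 * mu0) / (sigma0 ** 2 + sigma_m ** 2)
    sigma_s = sigma0 * sigma_m / np.sqrt(sigma0 ** 2 + sigma_m ** 2)
    return mu_s, sigma_s
```

With a very broad prior the posterior mean follows the measurement; with very noisy measurements it falls back on the prior mean.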
Class boundary inference (b)
The Bayesian observer infers the value of class boundary on a current trial, b(t), by inferring the posterior function of a given set of retrieved sensory measurements r→(t)={r(t−1),r(t−2),...r(t−n)}:
b(t) = S̃ = argmaxS p(S|r→(t)), where the maximum number of measurements that can be retrieved, n, was set to 7 because it is much longer than the effective trial lags of the previous-stimulus effect (Fig. 5C). Here, p(S|r→(t)) is a conjugate normal distribution of the prior and likelihoods of S given the evidence r→(t):
p(S|r→(t)) ∝ p(r→(t)|S)·p(S) = p(r(t−1)|S)·p(r(t−2)|S) ⋯ p(r(t−7)|S)·p(S), whose mean and standard deviation were calculated (Bromiley, 2003) based on the knowledge of how the retrieved stimulus becomes noisier as trials elapse:
μb(t) = β0μ0 + Σi=1..7 βi·r(t−i); σb(t) = √(β0²σ0² + Σi=1..7 βi²σr(t−i)²), where β0 = σ0⁻² / (σ0⁻² + Σi=1..7 σr(t−i)⁻²) and βi = σr(t−i)⁻² / (σ0⁻² + Σi=1..7 σr(t−i)⁻²). We postulated that the uncertainty of b(t) is equivalent to σb(t) (Fig. 5G).
Deduction of decision variable (v), decision (d), and decision uncertainty (u)
On each trial, the Bayesian observer makes a binary decision d(t) by calculating the probability that s(t) is larger than b(t), which is called the decision variable, v(t), defined as
v(t) = p(s(t) > b(t)) = Φ[(s(t) − b(t)) / √(σs(t)² + σb(t)²)].
Then, if v(t) is larger than 0.5, d(t) is large. Otherwise, d(t) is small. Also, we defined the decision uncertainty, u(t), which represents the odds that the current decision will be incorrect (Sanders et al., 2016), as follows:
u(t) = Φ[−|s(t) − b(t)| / √(σs(t)² + σb(t)²)].
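Boundary inference and the decision rule can be sketched together. This is an illustrative reimplementation: the function names are ours and the retrieved-measurement vector is passed in directly.

```python
import numpy as np
from scipy.stats import norm

def boundary_posterior(r, mu0, sigma0, sigma_m, kappa):
    """Precision-weighted combination of the prior and the retrieved
    measurements r = [r(t-1), ..., r(t-n)], whose noise grows by a factor
    (1 + kappa) per elapsed trial: sigma_r(t-i) = sigma_m * (1 + kappa)**i."""
    r = np.asarray(r, dtype=float)
    sig_r = sigma_m * (1 + kappa) ** np.arange(1, len(r) + 1)
    prec = np.concatenate(([sigma0 ** -2], sig_r ** -2))
    w = prec / prec.sum()                       # beta_0, beta_1, ..., beta_n
    mu_b = w[0] * mu0 + np.sum(w[1:] * r)
    sigma_b = np.sqrt(w[0] ** 2 * sigma0 ** 2 + np.sum(w[1:] ** 2 * sig_r ** 2))
    return mu_b, sigma_b

def decide(s, sigma_s, b, sigma_b):
    """Decision variable v(t), choice d(t), and uncertainty u(t)."""
    pooled = np.sqrt(sigma_s ** 2 + sigma_b ** 2)
    v = norm.cdf((s - b) / pooled)
    d = "large" if v > 0.5 else "small"
    u = norm.cdf(-abs(s - b) / pooled)
    return v, d, u
```

Note that the boundary is attracted toward the retrieved stimuli, which is the mechanism that produces the repulsive bias in choices.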
Fitting the parameters of BMBU
For each human participant, the parameters of the generative model, Θ={μ0,σ0,σm,κ}, were estimated as those maximizing the sum of log-likelihoods for T individual choices made by the observer, D→(T)=[D(1),D(2),...,D(T)]:
Θ̂ = argmaxΘ Σt=1..T log p(D(t)|Θ).
For each participant, estimation was conducted in the following steps. First, we found local minima of parameters using a MATLAB function, fminsearchbnd.m, with the iterative evaluation number set to 50. We repeated this step with 1000 different initial parameter sets, randomly sampled within uniform prior bounds, and acquired 1000 candidate sets of parameter estimates. Second, from these candidate sets of parameters, we selected the top 20 in terms of goodness-of-fit (sum of log-likelihoods) and searched for the minima using each of those 20 sets as initial parameters, increasing the iterative evaluation number to 100,000 and setting the function and parameter tolerances to 10⁻⁷ for reliable estimation. Finally, using the parameters fitted in the second step, we repeated the second step one more time. We then selected the parameter set with the largest sum of log-likelihoods as the final parameter estimates. For parameter estimation and all further analyses, we discarded (1) the first trial of each run and (2) the trials with too-short RTs (less than 0.3 s), because (1) the first trial of each run has no preceding trial, which is necessary for investigating the repulsive bias, and (2) responses made while the stimulus was still being shown (0–0.3 s) can be considered too hasty to reflect a normal cognitive decision-making process.
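The two-stage multi-start procedure can be sketched with scipy standing in for MATLAB's fminsearchbnd. This sketch omits fminsearchbnd's bound-constraining transformation; the bounds are used only to sample starting points.

```python
import numpy as np
from scipy.optimize import minimize

def fit_multistart(neg_log_lik, bounds, n_coarse=1000, n_refine=20, seed=0):
    """Stage 1: many short Nelder-Mead runs from random starts within the
    prior bounds. Stage 2: long, tight refinements of the best candidates."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    starts = rng.uniform(lo, hi, size=(n_coarse, len(bounds)))
    # stage 1: cheap searches capped at 50 function evaluations
    coarse = [minimize(neg_log_lik, x0, method="Nelder-Mead",
                       options={"maxfev": 50}) for x0 in starts]
    # stage 2: refine the best candidates with tight tolerances
    best = sorted(coarse, key=lambda r: r.fun)[:n_refine]
    refined = [minimize(neg_log_lik, r.x, method="Nelder-Mead",
                        options={"maxfev": 100_000, "xatol": 1e-7, "fatol": 1e-7})
               for r in best]
    return min(refined, key=lambda r: r.fun).x
```

Here `neg_log_lik` would be the negative sum of log-likelihoods of the observed choices under BMBU as a function of Θ = {μ0, σ0, σm, κ}.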
A constant-boundary model
The constant-boundary model has two parameters, the bias of the class boundary μ0 and the measurement noise σm. Stimulus estimates, s(t), were assumed to be sampled from a normal distribution, N(S(t), σm); each stimulus estimate has uncertainty σs(t) = σm. The class boundary b(t) was assumed to be a constant, μ0, so σb(t) = 0.
Estimation of the latent states of the variables of BMBU
Fitting the model parameters separately for each human participant (Θ̂ = {μ̂0, σ̂0, σ̂m, κ̂}) allowed us to create the same number of Bayesian observers, each tailored to one human individual. We repeated the experiment on these Bayesian observers using the stimulus sequences identical to those presented to their human partners for the following two purposes. First, we wanted to examine whether BMBU's choice (d(t)) can reproduce the human partners' repulsive bias. Second, we needed to estimate the trial-to-trial latent states of the model variables (s(t), b(t), v(t), u(t)) that were used by the human partners and thus presumably represented in their brains during the binary classification task. We acquired a sufficient number (10⁶ repetitions) of simulated choices, d(t), and decision uncertainty values, u(t), which were determined by the corresponding number of the stimulus estimates, s(t), and the boundary estimates, b(t), for each Bayesian observer. Then, the averages across those 10⁶ simulations were taken as the final outcomes. When estimating s(t), b(t), v(t), and u(t) for the observed choice D(t), we only included the simulation outcomes in which the simulated choice d(t) matched the observed choice D(t).
Recovery of the true states of the model variables
To ascertain the validity of our procedure for estimating the latent variables of BMBU described above, we checked how accurately it recovers the true states of the variables. This recovery test was conducted as follows.
First, we created 256 different sets of parameter values by taking the possible combinations of four different values of each of the four model parameters, where the four values corresponded to the 20th, 40th, 60th, and 80th percentiles of the parameter values fitted to the observers' choices. Second, we acquired the synthetic choices and the true model variables b, s, v, and u by plugging one parameter set into BMBU and simulating it on the actual stimulus sequence presented to the observers. Third, we fitted the parameters of BMBU to the synthetic choices in the same procedure conducted for fitting BMBU to the observed choices. Fourth, we simulated a set of the recovered states of the model variables using the fitted model parameters. Fifth, we calculated the R² between the true and the recovered variables to assess how reliably our model-fitting procedure can recover the true states of the model variables. Finally, we repeated the above procedure for all the remaining parameter sets and used the R² averaged across the 256 parameter sets as the performance measure of the recovery test.
The multiple logistic regression model for capturing the repulsive bias
To capture the repulsive bias in human classification, we logistically regressed the current choice onto the preceding stimuli and choices using the following regression model to obtain regression coefficients p→ = {p(0), p(1), ⋯, p(11)} for each observer:
P(D(t) = 1) = e^K(t) / (1 + e^K(t)), where K(t) = p(0) + p(1)S(t) + Σi=1..5 [p(1+i)S(t−i) + p(6+i)D(t−i)]. The independent variables were each standardized to z scores for each participant. S(t) and D(t) are the stimulus and the observed choice at trial t; S(t−i) and D(t−i) are the stimulus and the observed choice at the i-th trial lag from trial t.
To capture the repulsive bias of the Bayesian observers, their choices were also regressed with the logistic regression model by substituting the simulated choices, d(t) and d(t−i), for the observed choices, D(t) and D(t−i). The regression was conducted repeatedly for each simulation, and the regression coefficients averaged across simulations were taken as the final outcomes. The simulation was repeated 10⁵ times; we confirmed that this number was sufficiently large to produce stable outcomes.
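The history regression can be sketched as a maximum-likelihood logistic fit. This is a self-contained illustration rather than the authors' code; the z-scoring and regressor construction follow the text, and the helper names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def zscore(X):
    return (X - X.mean(axis=0)) / X.std(axis=0)

def fit_history_bias(S, D, n_lags=5):
    """Logistic regression of the current choice D(t) on S(t), the lagged
    stimuli S(t-1..t-5), and the lagged choices D(t-1..t-5).
    Returns [p(0), p(1), ..., p(11)] (intercept first)."""
    t = np.arange(n_lags, len(S))
    X = np.column_stack([S[t]]
                        + [S[t - i] for i in range(1, n_lags + 1)]
                        + [D[t - i] for i in range(1, n_lags + 1)])
    Xd, y = np.column_stack([np.ones(len(t)), zscore(X)]), D[t]

    def nll(p):
        k = Xd @ p
        return np.sum(np.logaddexp(0.0, k) - y * k)  # negative log-likelihood

    return minimize(nll, np.zeros(Xd.shape[1]), method="BFGS").x
```

A repulsive bias shows up as negative coefficients on the lagged stimuli S(t−i).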
The average marginal effect analysis
Average marginal effect (AME) was calculated by using the R package "margins" (Leeper et al., 2018). AME quantifies the average effect of an independent variable on a binary (or ordinal) dependent variable (here, the binary choice) in a multiple logistic (or probit) regression model (Williams and Jorgensen, 2023). To calculate the AME of any given variable on the current choice (D(t)) without controlling the previous (S(t−1)) and current (S(t)) stimuli (i.e., the baseline AME), we implemented a logistic regression model with two regressors, the variable of interest X (i.e., V1, b, s, or v) and the previous choice (D(t−1)):
D(t)∼logit(β0 + βXX + βD(t−1)D(t−1)).
We always included D(t−1) as a regressor because the effect of D(t−1) would confound the effect of S(t−1), if D(t−1) is not included in the regression model. Specifically, because S(t−1) and D(t−1) are highly correlated, it would be unclear whether the AME difference before and after controlling S(t−1) is ascribed to the effect of S(t−1) or that of D(t−1), if D(t−1) is not controlled. The effect of D(t−1) was controlled in all regression models.
To test whether the AME of X decreased after controlling S(t−1) (or S(t)), we calculated the AME of X from the logistic regression model including S(t−1) (or S(t)) as an additional regressor, as follows:
D(t)∼logit(β0 + βXX + βD(t−1)D(t−1) + βS(t−1)S(t−1)), with S(t) in place of S(t−1) where applicable, and subtracted the new AME from the baseline AME to see whether the baseline AME significantly changed after controlling the previous or current stimulus.
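A minimal numerical sketch of this logic, on synthetic data in which V1's choice-predictive signal comes entirely from the current stimulus (the generative weights and noise levels are assumptions): the AME of V1 on D(t) is sizable in the baseline model and collapses once S(t) is added as a regressor.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000

# Stand-in data: V1 encodes the current stimulus plus noise, and the choice
# depends on the stimulus, so V1's effect on D is inherited from S.
S = rng.choice([-1.0, 0.0, 1.0], size=T)
V1 = S + rng.normal(scale=1.0, size=T)
D_prev = rng.integers(0, 2, size=T).astype(float)
D = (1.5 * S + rng.logistic(size=T) > 0).astype(float)

def fit_logit(X, y, iters=25):
    """Newton's method for logistic regression; X must include an intercept."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-6 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def ame(X, beta, j):
    """AME of regressor j in a logistic model: mean of beta_j * p * (1 - p)."""
    p = 1 / (1 + np.exp(-X @ beta))
    return np.mean(beta[j] * p * (1 - p))

ones = np.ones(T)
# Baseline model: D(t) ~ V1 + D(t-1).
Xb = np.column_stack([ones, V1, D_prev])
ame_base = ame(Xb, fit_logit(Xb, D), 1)
# Controlled model: D(t) ~ V1 + D(t-1) + S(t).
Xc = np.column_stack([ones, V1, D_prev, S])
ame_ctrl = ame(Xc, fit_logit(Xc, D), 1)
```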
Searching for the multivoxel patterns of activity representing the latent variables of BMBU
We assumed that (1) the activity patterns of the neural populations representing the latent variables differ between participants, but that their (2) locations and (3) timings overlap across participants. Therefore, to identify the brain signals of the latent variables of BMBU in fMRI responses, we conducted support vector regression (SVR) decoding for each participant within specific spatial and temporal windows.
As for the spatial window, we implemented a searchlight technique (Kahnt et al., 2011b; Haynes, 2015). Each searchlight had a radius of 9 mm (= 3 voxels; Soon et al., 2008) and thus could contain at most 123 voxels. Of these, we excluded the voxels located in CSF or white matter because they reflect non-neural signals. Thus, the effective number of voxels used in the analysis varied from searchlight to searchlight.
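The 123-voxel figure can be checked directly: it is the number of voxels whose integer offsets from the searchlight center fall within a radius of 3 voxels.

```python
import itertools

# Count voxels whose centers lie within a 3-voxel (9 mm) radius of the
# center voxel: all integer offsets (i, j, k) with i^2 + j^2 + k^2 <= 3^2.
n_voxels = sum(1 for i, j, k in itertools.product(range(-3, 4), repeat=3)
               if i * i + j * j + k * k <= 9)
# n_voxels == 123, the maximum searchlight size quoted in the text.
```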
As for the temporal windows, we implemented a time-resolved decoding technique in which a target variable is decoded from the BOLD responses at each of the within-trial time points (Fig. 6B). We used the first four time points (out of six in total) because the BOLD responses associated with the button press, the last process of the sensory-to-motor decision-making stream, peaked at the fourth time point (data not shown). In sum, SVR was trained for each participant, each time point, and each searchlight.
Table 1. The sets of regressions that BMBU requires the brain signals of its latent variables to satisfy
Before training SVR, the BOLD responses in a searchlight and a target latent variable were z-scored across trials. Then, the z-scored variable was decoded for each searchlight using leave-one-run-out cross-validation (eightfold). As a result, for each searchlight and at each time point, we acquired a set of decoded latent variables over all trials. In other words, at each time point, we acquired a 4-dimensional map of the decoded variable (three spatial dimensions and one trial dimension). The three spatial dimensions of the decoded variables were smoothed with a 5 mm FWHM Gaussian kernel on each trial.
After this subject-wise decoding analysis, we conducted an across-subject analysis to test whether the decoded variables were significantly informative. To do so, for each searchlight locus and each time point, we regressed the smoothed decoded variable onto the regression conditions of the target variable by using a generalized linear mixed-effect regression model (GLMM) with a random effect of subjects. The number of regression conditions was 14, 14, and 17 for b(t), s(t), and v(t), respectively (Table 1). Those regression models were deduced from the causal structure between the variables of BMBU (see the next section). We accepted a given cluster as carrying the brain signal of b(t), s(t), or v(t) only when it satisfied those regression models over more than 12 contiguous searchlights. For the ROI analysis, the decoded variables were averaged over all searchlights within each ROI.
SVR was conducted using LIBSVM (https://www.csie.ntu.edu.tw/∼cjlin/libsvm/) with a linear kernel and constant regularization parameter of 1 (Soon et al., 2008; Kahnt et al., 2011b). The brain imaging results were visualized using Connectome Workbench (Marcus et al., 2011) and xjview.
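The decoding step can be sketched with scikit-learn's linear-kernel ε-SVR standing in for LIBSVM (the same algorithm family, with C = 1 as above). The data, the number of voxels, and the encoding weights below are synthetic stand-ins; only the z-scoring and the leave-one-run-out scheme mirror the procedure.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(11)
n_trials, n_voxels, n_runs = 232, 80, 8   # 8 runs -> eightfold leave-one-run-out

# Stand-in searchlight data: voxel patterns carrying a weak linear code for
# the latent variable (a stand-in for b, s, or v), buried in noise.
latent = rng.normal(size=n_trials)
weights = rng.normal(size=n_voxels)
bold = np.outer(latent, weights) + 5.0 * rng.normal(size=(n_trials, n_voxels))

# z-score each voxel and the target across trials, as in the analysis.
bold_z = (bold - bold.mean(0)) / bold.std(0)
latent_z = (latent - latent.mean()) / latent.std()

# Linear-kernel epsilon-SVR with C = 1, decoded with leave-one-run-out CV.
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)
decoded = cross_val_predict(SVR(kernel="linear", C=1.0), bold_z, latent_z,
                            groups=runs, cv=LeaveOneGroupOut())
r = np.corrcoef(latent_z, decoded)[0, 1]
```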
The regression-model test for verifying the brain signals of b(t), s(t), and v(t)
To identify the brain signals of b(t), s(t), and v(t), we defined three respective lists of regressions that must be satisfied by the brain signals. We stress that each of these lists consists of necessary conditions because the conditions are deduced from the causal structure of the variables in BMBU (Fig. 5G). Below, we specify the regression tests for s(t) and v(t) that constitute these lists. For the tests for b(t), see Results.
The 14 regressions for the brain signal of s(t) (Table 1): (#1–4), ys, s decoded from brain signals, must be regressed positively onto s, the variable it represents, even when the false discovery rate is controlled (Benjamini and Hochberg, 1995), and s orthogonalized to v or d because it should reflect the variance irreducible to the offspring variables of s; (#5), ys must not be regressed onto b because s and b are independent of each other (b↮s; Fig. 5G); (#6, 7), ys must be positively regressed onto v (s→v; Fig. 5G) but not when v is orthogonalized to s because the influence of s on v is removed; (#8, 9) ys must be positively regressed onto d (s→v→d; Fig. 5G) but not onto u because u cannot be linearly correlated with s (s→v→u is blocked by the interaction between u and v; Fig. 5G); (#10–12), ys must be positively regressed onto the current stimuli and not the past stimuli because s is inferred solely from the current stimulus measurement; (#13, 14), ys must not be regressed onto previous decisions because s is inferred solely from the current stimulus measurement. #10–14 were investigated by a multiple regression with regressors [S(t),S(t−1),S(t−2),D(t−1),D(t−2)]. We did not include D(t) as a regressor because D(t) may induce a spurious correlation between b and s by controlling the collider v (Elwert and Winship, 2014; b→v←s and v→d; Fig. 5G).
The 17 regressions for the brain signal of v(t) (Table 1). (#1–5), yv, v decoded from brain signals, must be positively regressed onto v, the variable it represents, even when the false discovery rate is controlled (Benjamini and Hochberg, 1995), and v orthogonalized to b, s, or d, because it should reflect the variance irreducible to the offspring variables of v; (#6, 7), yv must be negatively regressed onto one of its parents b (b→v; Fig. 5G), but not when b is orthogonalized to v, because the influence of b on v is removed; (#8, 9), yv must be positively regressed onto its other parent s (s→v; Fig. 5G), but not when s is orthogonalized to v, because the influence of s on v is removed; (#10, 11), yv must be regressed onto d but not onto u because u's correlation with its parent v cannot be revealed without holding the variability of d (the interaction between u and v); (#12–14), yv must be positively regressed onto the current stimulus because the influence of the current stimulus on v is propagated via s (S(t)→s→v), and negatively regressed onto the past stimuli because the influence of the past stimuli on v is propagated via b (S(t−1)→b→v), strongly onto the one-back stimulus and more weakly onto the two-back stimulus (for the two-back stimulus, we thus required only that a one-tailed regression in the opposite sign be nonsignificant); (#15–17), yv must be regressed onto the current decision and not the past decisions because the current decision is a dichotomous translation of v (v→d; Fig. 5G), whereas past decisions have nothing to do with the current state of v. #12–17 were investigated by a multiple regression with regressors [S(t),S(t−1),S(t−2),D(t),D(t−1),D(t−2)]. D(t) was included as a regressor because, in this case, there is no collider variable whose control would induce a spurious correlation for v.
Bayesian network analysis
To investigate whether the relationship between decoded b, s, and v is consistent with the causal structure postulated by BMBU, we calculated the BIC values for all the three-node networks consisting of the time series of three brain signals {yb, ys, yv} (Scutari, 2010) and determined the causal graph whose likelihood is maximal. The three-node network has 162 possible structures, as follows. A total of 27 edge structures can be created out of three nodes since three types of edges are possible for any given pair of nodes (i.e., x→y, x←y or x↮y) and there are three pairs (i.e., {b,v}, {v,s}, {s,b}; 33). Also, a total of 6 combinations of three nodes exist for {yb, ys, yv} since we have three (IPLb1, pSTGb3, pSTGb5), two (DLPFCs3, Cerebs5), and one (aSTGv5) brain signals of b, s, and v, respectively (3×2×1). Thus, because each of the 6 possible node combinations can have 27 edge structures, there are 162 possible three-node causal networks.
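The counting argument above can be reproduced directly (the signal labels are those given in the text):

```python
import itertools

# Three possible edge types per node pair, and three pairs -> 27 structures.
edge_types = ["->", "<-", "none"]
pairs = [("b", "v"), ("v", "s"), ("s", "b")]
edge_structures = list(itertools.product(edge_types, repeat=len(pairs)))

# Brain signals per variable: three for b, two for s, one for v.
signals = {"b": ["IPLb1", "pSTGb3", "pSTGb5"],
           "s": ["DLPFCs3", "Cerebs5"],
           "v": ["aSTGv5"]}
node_combos = list(itertools.product(signals["b"], signals["s"], signals["v"]))

n_networks = len(edge_structures) * len(node_combos)   # 27 * 6 = 162
```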
We opted to apply this Bayesian network analysis to the three-node networks instead of the six-node network consisting of all six brain signals identified by the searchlight analysis because the number of possible six-node networks (N=3^(6C2)=3^15=14,348,907) was prohibitively large, making the statistical results likely to suffer from type I errors. In addition, guided by BMBU, we were interested in identifying the causal structure of the three brain signals, each corresponding to one of the three model variables (b, s, and v). In other words, we were not interested in the causal relationship between the brain signals representing the same model variable (e.g., between pSTGb3 and pSTGb5).
Statistics
We used the searchlight technique to look for brain signals related to the latent variables of BMBU. To make the searchlight analysis statistically powerful by reducing the effect of noise in the BOLD signals, we applied a generalized linear mixed-effect model (GLMM) with a random effect of observers to calculate the association between the true and the decoded model variables. We applied the mixed-effect model only to the searchlight analysis (Fig. 6; Table 1). The other regression analyses were conducted separately for each individual because the mixed-effect model was too computationally demanding to be applied to all of them. For instance, applying the GLMM to the model simulation depicted in Figure 5C would require 10^5 repetitions of the regression analysis. The significance tests were two-tailed except for the searchlight analysis, as specified in Table 1. Also, for the time-resolved searchlight analysis, we applied a multiple-comparison correction (false discovery rate correction; Benjamini and Hochberg, 1995) for each of the fMRI time frames. In the figures summarizing statistical results, all confidence intervals are the 95% confidence intervals of the mean across individual observers.
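For reference, the Benjamini–Hochberg step used for the multiple-comparison control can be sketched in a few lines (the p-values below are made up for illustration):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of rejected tests.

    Sort the m p-values, find the largest k with p_(k) <= (k / m) * q,
    and reject the hypotheses corresponding to p_(1), ..., p_(k).
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

# Toy example: a few small p-values among mostly null tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.34, 0.51, 0.62, 0.77, 0.89]
mask = fdr_bh(pvals, q=0.05)
```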
Results
Experimental paradigm
Over consecutive trials, participants sorted ring sizes into two classes, small and large, under moderate time pressure (Fig. 2A). To ensure that decisions were made under uncertainty, we presented three rings (small, medium, and large) differing by a threshold size (Δ), which was tailored to each individual (Fig. 2B; see Materials and Methods). The ring sizes were presented in an m-sequence to rule out any correlation between consecutive stimulus sizes (Buracas and Boynton, 2002). We provided participants with feedback after each scan run by summarizing their performance as the proportion of correct trials.
Figure 2. Binary classification task on ring size. A, Within-trial procedure. With their eyes fixed, human participants were prewarned (2.2 s), by an increase in the fixation dot, to get ready for the upcoming trial after a long intertrial interval (9.5 s), briefly viewed the ring stimulus (0.3 s), and judged its size as large or small with respect to the medium-size ring within a limited window of time (1.5 s). B, Ring stimuli with threshold-level differences in size. On each trial, a participant viewed one of the three rings, small (S), medium (M), or large (L), the size contrast (Δ) of which was optimized to ensure threshold-level classification performance on a participant-to-participant basis in a separate calibration run inside the MR scanner, right before the main session of fMRI scan runs. The order of ring sizes over trials was constrained with an m-sequence to preclude temporal correlation among stimuli. Here, the luminance of the rings is inverted for illustrative purposes.
To verify the sensory-adaptation hypothesis, we conducted experiment 1, where 19 participants performed the classification task while BOLD measurements with a high spatial resolution were acquired only from their early visual cortices. To verify the boundary-updating hypothesis, we conducted experiment 2, where 18 participants performed the same task while their whole brains were imaged. The data of experiment 1 had been used in our published work (Choe et al., 2014).
Repulsive bias in experiment 1
The participants in experiment 1 displayed a substantial amount of repulsive bias. As anticipated, the proportion of large choices (PL) increased as the ring size on the current trial (S(t)) increased. Importantly, when the psychometric curves were conditioned on the previous stimulus (S(t−1)), they shifted upward as the ring size in the previous trial decreased (the contrasts between the solid, dotted, and dashed lines in Fig. 3A), which indicates the presence of repulsive bias. By contrast, the psychometric curves were not affected much by the previous choice (the contrasts between the gray and black lines in Fig. 3A). To quantify the impact of the previous stimulus on the current choice, we subtracted the PLs acquired when the previous ring size was S from those acquired when it was L, separately for each of the six combinatorial conditions of the current stimulus (three sizes) and previous choice (two alternatives), and then averaged those six PL differences. The averaged PL difference (−0.20) was significantly smaller than zero (t(18)=−8.9,p=5.1×10−8; Fig. 3B, left). We quantified the impact of the previous choice on the current choice similarly: the PL differences between previous large and small choices were calculated separately for the nine combinatorial conditions of the current and previous stimulus and then averaged. The averaged PL difference (−0.018) did not significantly differ from zero (t(18)=−0.68,p=0.50; Fig. 3B, right).
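The conditioning-and-averaging scheme just described can be sketched on synthetic choices carrying a built-in repulsive weight (the generative weights are assumptions, not the fitted values): the average PL difference between previous L-ring and S-ring trials comes out negative.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 20000

# Synthetic session with a repulsive influence of the previous stimulus
# (weights 1.5 and -0.5 are assumptions for illustration).
S = rng.choice([-1, 0, 1], size=T)
D = np.zeros(T, dtype=int)
for t in range(T):
    k = 1.5 * S[t] - (0.5 * S[t - 1] if t > 0 else 0.0)
    D[t] = int(rng.random() < 1 / (1 + np.exp(-k)))      # 1 = "large"

# P(large) difference between previous L-ring and previous S-ring, computed
# separately for each (current stimulus, previous choice) cell, then averaged.
diffs = []
for s_now in (-1, 0, 1):
    for d_prev in (0, 1):
        base = (S[1:] == s_now) & (D[:-1] == d_prev)
        pl_after_L = D[1:][base & (S[:-1] == 1)].mean()
        pl_after_S = D[1:][base & (S[:-1] == -1)].mean()
        diffs.append(pl_after_L - pl_after_S)
avg_pl_diff = np.mean(diffs)   # negative -> repulsive bias
```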
Figure 3. Influences of previous and current stimuli on classification behavior and V1 activity in experiment 1. A–C, Repulsive bias in psychometric curves (A, B) and regression analysis (C). The psychometric curves, where the fractions of large choices are plotted against the current stimulus, are shown separately for the six possible combinations defined by the previous stimulus and choice (A). To summarize the effects of the previous stimulus on the current choice, the differences in the fractions of large choices between trials in which the previous stimulus was L-ring and those in which it was S-ring (p(D(t)=large|S(t−1)=1)−p(D(t)=large|S(t−1)=−1)) are computed separately for the six combinations of the current stimulus and previous choice and then averaged (B, left). To summarize the effects of the previous choice on the current choice, the differences in the fractions of large choices between trials in which the previous choice was large and those in which it was small (p(D(t)=large|D(t−1)=large)−p(D(t)=large|D(t−1)=small)) are computed separately for the nine combinations of the current and previous stimuli and then averaged (B, right). The small gray circles represent the individual observers. The multiple logistic regression coefficients of the current choice are plotted against trial lags (C). In the inset, the regression coefficients for the previous-stimulus (S(t−1)) regressor are plotted against those for the previous-choice (D(t−1)) regressor for individual observers, where the red error bars demarcate the 95% CIs of the means. D, Eccentricity map of V1 on the flattened left occipital cortex of a representative brain, S08. The dot, curves, and colors correspond to those in the inset depicting the visual field. The image is borrowed from our previous work (Choe et al., 2014). E, H, Spatiotemporal BOLD V1 responses to L-ring (left) and S-ring (middle), and their differentials (right), presented on the current (E) and previous (H) trials.
The color bars indicate BOLD changes in units of % signal, averaged across all participants. The vertical dashed line marks the time point of stimulus onset. The horizontal dashed line corresponds to the eccentricity of M-ring, splitting the voxels into "L-prefer" and "S-prefer" groups based on their preferred ring size. F, The differential of BOLD responses at peak between the small and large ring on the current trial. The vertical dashed line marks the eccentricity of M-ring. The horizontal red and blue lines mark the average BOLD signals of the L-prefer and S-prefer voxels, respectively. The vertical orange line quantifies the stimulus-driven gain of V1 responses. G, I, Time courses of the stimulus-driven gain of V1 responses to the current (G) and previous (I) stimuli. The stimulus duration and response window are demarcated by the light and dark gray bars (G, I). The 95% CIs of the mean across observers are indicated by the shaded areas (F) or by the vertical error bars (B, C, G, I). Asterisks indicate the statistical significance (*P<0.05, **P<10−3, ***P<10−4; B, C, G, I). The orange boxes and arrows are drawn to help relate the panels (E–G).
To confirm this repulsive effect of the previous stimulus on the current choice, we logistically regressed each participant's current choice (D(t)) simultaneously onto the current and previous stimuli and the previous choices. The regression coefficients for the previous stimuli were significant up to two trial lags across participants (S(t−1), β=−0.39, t(18)=−9.6, p=1.6×10−8; S(t−2), β=−0.11, t(18)=−2.4,p=0.026), as was that for the previous choice (D(t−1), β=−0.17, t(18)=−2.8,p=0.012), which confirms the robust presence of repulsive bias in experiment 1 (Fig. 3C).
Sensory adaptation in V1
As a first step toward verifying the sensory-adaptation hypothesis, we defined the size-encoding signal in V1. As our group showed previously (Choe et al., 2014), the eccentricity-tuned BOLD responses in V1 (Fig. 3D) readily resolved the threshold-level differences in ring size, as anticipated by the retinotopic organization of V1 (Fig. 3E). Accordingly, the difference between the BOLD responses at the voxels preferring L-ring over S-ring and those at the voxels preferring S-ring over L-ring (Fig. 3F) was significantly greater when S(t) was large than when it was small (the third and fourth time points, β=0.11, t(18)=4.8, p=1.5×10−4 and β=0.13, t(18)=6.6, p=3.7×10−6; Fig. 3G).
Next, having defined the size-encoding signal in V1, which will be referred to as "V1," we sought evidence of sensory adaptation in that signal. Based on previous work on sensory adaptation (Clifford et al., 2007; Kohn, 2007; Solomon and Kohn, 2014; Weber et al., 2019), we expected V1 to decrease following a large size and to increase following a small size because of the selective gain reduction at the sensory neurons tuned to previous stimuli. In line with this expectation, V1 was indeed significantly smaller when preceded by L-ring than when preceded by S-ring (the fourth time point, β=−0.45, t(18)=−2.2, p=0.040; Fig. 3H,I). Although we had eliminated the autocorrelation between consecutive stimuli using an m-sequence (see Materials and Methods), we additionally checked the possibility that the observed adaptation of V1 might have spuriously occurred because of an imbalance in the ring size of the current stimuli. To do so, we first calculated the differences in V1 between trials preceded by S-ring and those preceded by L-ring, separately for the three current stimuli, and then averaged those three differences. We confirmed that V1 remained smaller when preceded by L-ring than when preceded by S-ring (the fourth time point, β=−0.44, t(18)=−2.1, p=0.049).
In sum, the V1 population activity reliably encoded the ring size and exhibited sensory adaptation.
The variability of V1 associated with previous stimuli fails to contribute to the choice variability
Next, we tested the critical prediction of the sensory-adaptation hypothesis on repulsive bias. Below, we define this critical prediction and explain how we empirically examined it.
Above, we confirmed that the ring size, not only on the current trial (S(t)) but also on the previous trial (S(t−1)), affects V1 on the current trial (S(t−1)→V1←S(t) in Fig. 4A). What we do not know yet is whether the variabilities of V1 that originate from S(t) and S(t−1), respectively, flow all the way into the observer's current choice (S(t)→V1→D(t) and S(t−1)→V1→D(t) in Fig. 4A). Critically, if the sensory-adaptation hypothesis is true, the variability of V1 associated with S(t−1) must contribute to the current choice (D(t); S(t−1)→V1→D(t)), just as that associated with S(t) must do so (S(t)→V1→D(t)). Here, it is important to realize that the mere association between S(t) and V1 (S(t)→V1) does not warrant their contribution to D(t) (S(t)→V1→D(t)). Likewise, the association between S(t−1) and V1 (S(t−1)→V1) does not warrant their contribution to D(t) (S(t−1)→V1→D(t)).
Figure 4. Origin of the covariation between the stimulus-encoding signal of V1 and the current choice. A, The causal structure of the variables implied by the sensory-adaptation hypothesis. The stimulus-encoding signal of V1 (V1) is influenced by the current stimulus (S(t)), the previous stimulus (S(t−1)), and unknown sources (UV1). In turn, V1 influences the current choice (D(t)). If the sensory-adaptation hypothesis is true, part of the causal influence of V1 on D(t) must originate from S(t−1), as indicated by the connected chain of the dotted arrows. B, Extraction of the stimulus-encoding signal of V1. For any given run from any participant, the matrix of spatiotemporal BOLD responses in V1 (top left) was multiplied by one of the three weighting vectors (right; blue, red, and yellow lines represent the uniform, discriminability, and log-likelihood ratio readout schemes, respectively) to yield the vector of the stimulus-encoding signal (V1) of the same trial length (bottom left). The positive and negative values of V1 indicate the larger and smaller sizes of the ring, respectively. C, Multiple linear regression of the stimulus-encoding signal of V1 on S(t), S(t−1), and D(t−1). The colors correspond to the three different readout schemes in B. D–F, The average marginal effects (AMEs) of V1 on D(t), with V1 extracted by the uniform (D), discriminability (E), and log-likelihood ratio (F) readout schemes. In each panel, the influences of V1 on D(t) that can be ascribed to S(t−1) and S(t) were assessed by checking (i) whether the AME of V1 on D(t) (left) significantly decreased or not after controlling the influence of S(t−1) (second from the left) and S(t) (second from the right), respectively, or (ii) whether the AME of V1 on D(t) controlling the influence of both S(t−1) and S(t) (right) significantly increased or not after controlling only the influence of S(t) (second from the right) and S(t−1) (second from the left), respectively.
Asterisks indicate the statistical significance (*P<0.05, **P<0.01, ***P<0.001), and “n.s.” stands for the nonsignificance of the test (C–F). The 95% CIs of the mean across participants are indicated by the vertical error bars (C–F).
We can test the critical implication of the sensory-adaptation hypothesis by comparing the average marginal effect (AME; Williams and Jorgensen, 2023) of V1 on D(t) (V1→D(t)) to that of V1 on D(t) with S(t−1) controlled (S(t−1)↛V1→D(t)). The rationale behind this comparison is that the contribution of V1 to D(t) must be substantially smaller when S(t−1) was controlled than when not if the contribution of S(t−1) to D(t) via V1 (i.e., S(t−1)→V1→D(t)) is substantial. In addition, the critical implication can also be tested by comparing the AME of V1 on D(t) with S(t) only controlled (S(t)↛V1→D(t)) to that of V1 on D(t) with S(t−1) and S(t) both controlled (S(t−1)&S(t)↛V1→D(t)). In this case, the contribution of V1 to D(t) must be greater when only S(t) is controlled than when both S(t−1) and S(t) are controlled if the contribution of S(t−1) to D(t) via V1 is substantial. AME was adopted instead of comparing regression coefficients because it does not suffer from the scale problem, unlike logistic and probit regression coefficients (Mize et al., 2019).
To this end, the trial-to-trial measures of V1 were acquired by taking the sum of BOLD responses across the eccentricity bins with the same readout weights used in the previous section (Fig. 4B). First, we confirmed that V1 contains both current-stimulus and adaptation signals by regressing V1 onto S(t), S(t−1), and D(t−1) concurrently for each participant (Fig. 4C). This multiple regression analysis indicated that the previously observed adaptation to S(t−1) (Fig. 3H,I) remained significant across participants (β=−0.047, t(18)=−2.2, p=0.039), even when we controlled the variability of D(t−1) (β=0.021, t(18)=0.82, p=0.42), a potential confounding variable.
The AME of V1 on D(t) was significant across participants (β=0.020,t(18)=2.3,p=0.031; Fig. 4D, the first bar). Importantly, it did not significantly decrease across participants when the influence of S(t−1) was controlled (t(18)=−1.6,p=0.13; Fig. 4D, the change of the first to second bars). Given the significant repulsive bias associated with S(t−1) presented on the two-back trial, we also controlled S(t−2) in addition to S(t−1). Despite this additional control, the AME of V1 on D(t) did not significantly decrease (t(18)=−1.5, p=0.15). By contrast, the AME of V1 on D(t) substantially decreased across participants, almost to none, when the influence of S(t) was controlled (t(18)=−6.0, p=1.1×10−5; Fig. 4D, the change of the first to third bars). Likewise, the AME of V1 on D(t) with S(t) only controlled did not differ from that of V1 on D(t) with S(t−1) and S(t) both controlled (t(18)=1.4, p=0.17; Fig. 4D, the change of the fourth to third bars), whereas the AME of V1 on D(t) with S(t−1) controlled was greater than that of V1 on D(t) with S(t−1) and S(t) both controlled (t(18)=6.02, p=1.1×10−5; Fig. 4D, the change of the fourth to second bars). These results coherently indicate that the contribution of the previous stimuli to D(t) via V1 is absent or negligible, which is at odds with the sensory-adaptation hypothesis.
The analyses above were conducted on V1 acquired at the fourth time point, where sensory adaptation was significant. However, a nonsignificant but substantial amount of sensory adaptation also occurred at the preceding (third) time point (Fig. 3I). To check the possibility that the contribution of S(t−1) to D(t) via V1 might be present if V1 were defined alternatively, we redefined V1 by averaging the signals acquired at the third and fourth time points and repeated the same AME analyses as above. However, the contribution of the previous stimuli to D(t) via V1 was still absent or negligible: the AME of V1 on D(t) did not differ from that of V1 on D(t) with S(t−1) controlled (t(18)=−1.4, p=0.19); the AME of V1 on D(t) with S(t) only controlled did not differ from that of V1 on D(t) with S(t−1) and S(t) both controlled (t(18)=1.03, p=0.32).
Furthermore, the same pattern of AMEs was observed when we used two alternative readout schemes for extracting V1. The AME of V1 on D(t) decreased after S(t) was controlled (the discriminability scheme: t(18)=−5.4, p=4.3×10−5; the log likelihood scheme: t(18)=−6.0, p=1.1×10−5; Fig. 4E,F, the change of the first to third bars) but not after S(t−1) was controlled (the discriminability scheme: t(18)=−1.4, p=0.19; the log likelihood scheme: t(18)=−1.5, p=0.14; Fig. 4E,F, the change of the first to second bars). Likewise, the AME of V1 on D(t) with S(t−1) only controlled was larger than that of V1 on D(t) with S(t−1) and S(t) both controlled (the discriminability scheme: t(18)=5.4, p=4.0×10−5; the log likelihood scheme: t(18)=6.0, p=1.2×10−5; Fig. 4E,F, the change of the fourth to second bars), while that with S(t) only controlled did not differ from that of V1 on D(t) with S(t−1) and S(t) both controlled (the discriminability scheme: t(18)=1.3, p=0.22; the log likelihood scheme: t(18)=1.4, p=0.18; Fig. 4E,F, the change of the fourth to third bars). Put together, the AME analyses suggest that the contribution of V1 to the current choice is ascribed mostly to the current stimulus but hardly to the previous stimuli, which is inconsistent with the sensory-adaptation hypothesis.
Repulsive bias in experiment 2
Having failed to find the evidence supporting the sensory-adaptation hypothesis in experiment 1, we conducted experiment 2 to search the whole brain for the signal representing the class boundary and to test whether that signal relates to the previous stimuli and the current choice in a manner consistent with the boundary-updating hypothesis. As mentioned earlier (see above, Experimental paradigm), the experimental procedure in experiment 2 was the same as in experiment 1, except for the fMRI protocol.
The behavioral performance in experiment 2 closely matched that in experiment 1 (Fig. 3A–C) in many aspects. The PL difference induced by the previous stimulus (−0.25) substantially differed from zero (t(17)=−7.3, p=1.3×10−6) indicating the existence of repulsive bias, whereas that by the previous choice (0.027) did not significantly differ from zero (t(17)=1.3, p=0.19; Fig. 5A,B). The logistic regression analysis confirmed the significant presence of repulsive bias across participants (S(t−1), β=−0.54, t(17)=−7.9, p=4.6×10−7; S(t−2), β=−0.24, t(17)=−4.7,p=2.3×10−4; D(t−1), β=0.0055, t(17)=0.13,p=0.90; Fig. 5C).
Figure 5. Repulsive bias in experiment 2 and a Bayesian model of boundary updating (BMBU). A–C, Repulsive bias in psychometric curves (A, B) and regression analysis (C). The formats were identical to those in the corresponding figure panels for experiment 1 (Fig. 3A–C), except that the ex post model simulation results (green lines and symbols) are added. In the bottom insets of B, the observed (x-axis) and simulated (y-axis) average differences in the fractions of large choices between the trials in which the previous stimulus was L-ring and those in which it was S-ring are plotted against one another, where the red diagonal demarcates the identity line. In the bottom insets of C, the observed (x-axis) and simulated (y-axis) regression coefficients for the previous-stimulus (S(t−1)) regressor are plotted against one another for individual observers, where the red diagonal demarcates the identity line. D–G, The measurement generation (D), stimulus inference (E), class-boundary inference (F), and decision-variable deduction (G) processes of BMBU. BMBU posits that the Bayesian decision-maker has an internal causal model of how a physical stimulus size (S) engenders a current sensory measurement (m(t)) and a retrieved memory measurement from the ith preceding trial (r(t−i); D, top), which specifies the probability distribution of m(t) and r(t−i) conditioned on S, respectively (D, bottom). In turn, p(m(t)|S) allows the Bayesian decision-maker to infer S on observing m(t) by combining it with the prior knowledge about S, p(S), to compute the posterior probability of S given m(t), p(S|m(t)) (E). Similarly, p(r(t−i)|S) allows for inferring the class boundary (b(t)) on retrieving the memory of previous sensory measurements (r→(t)=[r(t−1),r(t−2),...]) by combining it with p(S) to compute the posterior probability of S given r→(t), p(S|r→(t)) (F).
In D–F, black dotted curves, p(m(t)|S); gray dotted curves, p(r(t−i)|S) (the darker the dotted curve, the more recent the memory); gray dashed curves, p(S); black solid curve, p(S|m(t)); red solid curve, p(S|r→(t)). Finally, the inferred stimulus, s(t), and the inferred class boundary, b(t), allow for deducing the decision variable, v(t), the choice variable, d(t), and the uncertainty variable, u(t) (G, top), as illustrated in an example bivariate distribution of s(t) and b(t), from which v(t), d(t), and u(t) are derived (G, bottom). H, An example temporal trajectory of the class boundary inferred by BMBU in a single scan run of a representative subject (subject 04). The black and red lines indicate the sizes of the physical stimulus and the boundary inferred by BMBU, respectively. I, J, Ex post simulation results of the constant-boundary model. The formats are identical to those of A and B.
Bayesian model of boundary-updating (BMBU)
Just as we identified V1 as the locus of the adaptation-susceptible sensory signal in experiment 1, we first needed to identify the brain signal that reliably represents the class boundary. However, identifying such a signal is challenging in two respects. First, unlike in experiment 1, where V1 was the obvious cortical region to bear the size-encoding signal susceptible to adaptation given a large volume of previous work (Kohn, 2007; Patterson et al., 2013; Morgan, 2014; Solomon and Kohn, 2014; Weber et al., 2019; Fritsche et al., 2022) and our own work (Choe et al., 2014), we have no a priori region where the boundary-representing signal resides. This requires us to explore the whole brain. Second, unlike in experiment 1, where the size variable was physically prescribed by the experimental design, we need to infer the trial-to-trial states (i.e., sizes) of the class boundary, an unobservable, thus latent, variable. This requires us to build a model. To address these challenges, we inferred the latent state of the class boundary using a Bayesian model of boundary-updating (BMBU) and searched the whole brain for the boundary-representing signal using a searchlight multivariate pattern analysis technique.
We developed BMBU by formalizing the binary classification task in terms of Bayesian decision theory (Knill and Richards, 1996), a powerful framework for modeling human decision-making behavior under uncertainty. Binary classification requires judging whether the "ring size on the current trial t (S(t))" is larger or smaller than the "typical size of rings appearing across the entire trials (S∼)." Therefore, a classifier must infer both quantities based on the measurements of stimulus size in the sensory and memory systems.
The generative model
On trial t, S(t) is randomly sampled from a probability distribution p(S) and engenders a measurement in the sensory system, m(t), which is a random sample from a probability distribution p(m(t)|S(t)) (Fig. 5D, bottom, black dotted curve). Critically, as i trials elapse, m(t) is re-encoded into a mnemonic measurement in the working-memory system, r(t−i), which is a random sample from a probability distribution p(r(t−i)|S(t)) (Fig. 5D, bottom, light-gray dotted curve). Here, we assumed that the width of p(r(t−i)|S(t)) increases as i increases, reflecting working-memory decay (Gorgoraptis et al., 2011; Zokaei et al., 2015).
Inferring the current stimulus size
On trial t, the Bayesian classifier infers S(t) by inversely propagating m(t) through the generative model (Fig. 5E, top). As a result, the inferred size (s(t)) follows the posterior distribution of S given m(t), as captured by the following equation:
p(s(t)) = p(S|m(t)), (1)
where the width of p(S|m(t)) reflects the precision of s(t) (Fig. 5E, bottom).
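When the prior p(S) and the likelihood p(m(t)|S) are both Gaussian, the posterior in Equation 1 has a closed form as a precision-weighted combination of the prior mean and the measurement. The sketch below illustrates this; the parameter names and values are illustrative, not the paper's actual fitted parameterization.

```python
import numpy as np

def infer_stimulus(m_t, prior_mean, prior_sd, sensory_sd):
    """Gaussian posterior p(S | m_t): precision-weighted combination of
    the prior on S and the sensory measurement m_t (illustrative)."""
    prior_prec = 1.0 / prior_sd ** 2
    lik_prec = 1.0 / sensory_sd ** 2
    post_prec = prior_prec + lik_prec
    post_mean = (prior_prec * prior_mean + lik_prec * m_t) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)  # mean and width of p(S | m_t)

# A vague prior leaves the inferred size near the measurement;
# a tight prior pulls it toward the prior mean.
s_vague, _ = infer_stimulus(m_t=2.0, prior_mean=0.0, prior_sd=10.0, sensory_sd=1.0)
s_tight, _ = infer_stimulus(m_t=2.0, prior_mean=0.0, prior_sd=0.5, sensory_sd=1.0)
```

The posterior width returned here is what the text refers to as the precision of s(t).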
Inferring the class boundary
On trial t, the Bayesian classifier infers the class boundary (b(t)), i.e., the inferred value of S∼, by inversely propagating a set of retrieved measurements in the working memory system r→(t)={r(t−1),r(t−2),r(t−3),...,r(t−n)} (Fig. 5F, top). b(t) is defined as the most probable value of S given r→(t), as captured by the following equation:
b(t) = argmaxS p(S|r→(t)), (2)
where the width of p(S|r→(t)) reflects the precision of b(t). Notably, Equation 2 implies that b(t) must be attracted more to recent stimuli than to old ones because (1) the precision of working-memory evidence decreases as trials elapse (Fig. 5F, bottom, dotted curves) and (2) the more uncertain the evidence is, the less heavily it is weighted in class-boundary inference.
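Under Gaussian assumptions, the MAP boundary in Equation 2 reduces to a precision-weighted average of the prior mean and the remembered measurements, with older memories down-weighted by their decayed precision. A minimal sketch, assuming an illustrative linear decay rule for the memory noise (not necessarily the paper's):

```python
import numpy as np

def infer_boundary(memories, prior_mean, prior_sd, mem_sd0, decay):
    """MAP estimate of the class boundary b_t given retrieved measurements
    [r_(t-1), r_(t-2), ...] (most recent first). The memory SD grows with
    lag as mem_sd0 * (1 + decay * lag) -- an assumed, illustrative rule."""
    precs = [1.0 / prior_sd ** 2]
    vals = [prior_mean]
    for lag, r in enumerate(memories):
        sd = mem_sd0 * (1.0 + decay * lag)
        precs.append(1.0 / sd ** 2)
        vals.append(r)
    precs, vals = np.array(precs), np.array(vals)
    # For Gaussians, the MAP coincides with the precision-weighted mean
    return float(precs @ vals / precs.sum())

# The boundary is attracted more to the recent measurement (+1) than to
# the older one (-1), as Equation 2 implies.
b = infer_boundary([1.0, -1.0], prior_mean=0.0, prior_sd=10.0, mem_sd0=1.0, decay=1.0)
```

Because the older memory carries less precision, the resulting boundary lands on the side of the more recent measurement.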
Making a decision with the inferred current stimulus size and the inferred class boundary
Having estimated s(t) and b(t), the Bayesian classifier deduces a decision variable (v(t)) from s(t) and b(t) and translates it into a binary decision (d(t)) with a degree of uncertainty (u(t); Fig. 5G). Here, v(t) is the probability that s(t) is greater than b(t) (v(t)=p(s(t)>b(t))); d(t) is large or small if v(t) is greater or smaller than 0.5, respectively; u(t) is the probability that d(t) is incorrect (u(t)=p(s(t)<b(t)|d(t)=large) or p(s(t)>b(t)|d(t)=small); Sanders et al., 2016).
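For Gaussian s(t) and b(t), v(t)=p(s(t)>b(t)) has a closed form via the normal CDF of their difference, from which d(t) and u(t) follow directly. A minimal sketch under the assumption of independent Gaussian estimates (the paper's exact parameterization is in its Materials and Methods):

```python
import math

def decide(s_mean, s_sd, b_mean, b_sd):
    """Decision variable v = p(s > b), choice d, and uncertainty u for
    independent Gaussian s and b (illustrative)."""
    # s - b is Gaussian with mean s_mean - b_mean and variance s_sd^2 + b_sd^2
    z = (s_mean - b_mean) / math.sqrt(s_sd ** 2 + b_sd ** 2)
    v = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF at z
    d = "large" if v > 0.5 else "small"
    u = 1.0 - v if d == "large" else v  # probability the choice is incorrect
    return v, d, u

v, d, u = decide(s_mean=1.0, s_sd=1.0, b_mean=0.0, b_sd=1.0)
```

Note that u is a deterministic, nonlinear (V-shaped) function of v, which is why u is not linearly correlated with the variables upstream of v.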
In sum, BMBU models a human decision-maker as the Bayesian classifier who, over consecutive trials, continuously infers the class boundary (b) and the current stimulus size (s), deduces the decision variable (v) from s and b, and makes a decision (d) with a varying degree of uncertainty (u). As shown below, BMBU well predicts human participants' choices and reproduces their repulsive bias.
The prediction and simulation of human choices and repulsive bias by BMBU
We assessed how well BMBU accounts for human behavior in the binary classification task in two respects, comparing (1) how well it predicts choices and (2) how well it reproduces the repulsive bias against a control model that does not update the class boundary (the "constant-boundary model"; see Materials and Methods).
We assessed the predictive power of BMBU and the constant-boundary model by fitting them to human choices using maximum-likelihood estimation (see Materials and Methods). BMBU outperformed the constant-boundary model in goodness of fit. The average AIC difference across participants was −10.48, significantly below the conventional threshold of −4 (Anderson and Burnham, 2004; t(17)=−2.6, p=0.020). The variance explained by BMBU, measured by Nagelkerke's R2, was 132% of that explained by the constant-boundary model.
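Both model-comparison metrics can be computed from each model's log-likelihood. The functions below follow the standard formulas; the input log-likelihoods and parameter counts are hypothetical, not the paper's data:

```python
import numpy as np

def aic(log_lik, n_params):
    """Akaike information criterion."""
    return 2 * n_params - 2 * log_lik

def nagelkerke_r2(ll_model, ll_null, n_trials):
    """Nagelkerke's R^2: Cox-Snell R^2 rescaled so its maximum is 1."""
    cox_snell = 1.0 - np.exp(2.0 * (ll_null - ll_model) / n_trials)
    max_r2 = 1.0 - np.exp(2.0 * ll_null / n_trials)
    return cox_snell / max_r2

# Hypothetical log-likelihoods for one participant (n = 100 binary choices)
ll_null = 100 * np.log(0.5)                    # chance-level null model
r2_bmbu = nagelkerke_r2(-55.0, ll_null, 100)   # assumed BMBU fit
r2_const = nagelkerke_r2(-60.0, ll_null, 100)  # assumed constant-boundary fit
delta_aic = aic(-55.0, 5) - aic(-60.0, 3)      # 5 vs 3 parameters, both assumed
```

A negative AIC difference favors the first model even after penalizing its extra parameters.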
After equipping the models with their best-fit parameters, we assessed their ability to reproduce the repulsive bias by making them simulate decisions over the same sequences of ring sizes presented to the human participants (see Materials and Methods). From this simulation, we can also vividly appreciate how BMBU updates its class boundary (b(t)) depending on the ring sizes encountered over a sequence of classification trials (Fig. 5H). As implied by Equation 2, BMBU continuously shifts b(t) toward the ring sizes shown in previous trials. Such attractive shifts are especially pronounced when streaks of S-rings (Fig. 5H, solid arrow) or L-rings (Fig. 5H, dashed arrow) appeared over trials. Importantly, we confirmed that this boundary-updating of BMBU reproduces the repulsive bias displayed by the human participants with a remarkable level of resemblance across participants, both for the psychometric curves (the R2 of the effect of the previous stimulus on PL between humans and BMBU was 0.89; Fig. 5A,B) and for the coefficients of the stimulus and choice regressors (the R2 of the coefficients of the immediately preceding stimulus between humans and BMBU was 0.94; Fig. 5C). None of the simulated PLs and coefficients, a total of 17 points, fell outside the 95% confidence intervals of the corresponding human PLs and coefficients. Not surprisingly, the constant-boundary model failed to show even the slightest hint of repulsive bias (Fig. 5I,J). Although we used m-sequences to prevent any autocorrelation among ring sizes, the failure of the constant-boundary model to reproduce the repulsive bias reassures us that the actual stimulus sequences used in the experiment do not contain any unwanted statistics that might induce spurious kinds of repulsive bias.
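The kind of boundary trajectory shown in Figure 5H can be qualitatively mimicked with a toy simulation in which a running boundary estimate weights recent noisy measurements more heavily. The exponential-decay update and all parameter values below are illustrative stand-ins, not BMBU's exact update:

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.choice([-2.0, -1.0, 0.0, 1.0, 2.0], size=200)  # hypothetical size levels

decay, mem_sd = 0.7, 1.0   # assumed decay rate and measurement noise
num, den = 0.0, 1e-9       # decaying precision-weighted sums
boundary = np.zeros(len(stimuli))
for t, S in enumerate(stimuli):
    boundary[t] = num / den            # boundary in effect on trial t (past trials only)
    m = S + rng.normal(0.0, mem_sd)    # noisy remembered measurement of the stimulus
    num = decay * num + m / mem_sd ** 2
    den = decay * den + 1.0 / mem_sd ** 2

# The boundary on trial t covaries with the preceding stimulus, which is
# what produces a repulsive bias on the current choice.
corr_prev = float(np.corrcoef(boundary[1:], stimuli[:-1])[0, 1])
```

Streaks of small or large stimuli drag the running boundary in their direction, reproducing the attractive boundary shifts described in the text.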
In sum, BMBU's inferences of the class boundary based on past stimuli accounted for a substantive fraction of the choice variability of human classifiers and successfully captured their repulsive bias.
Brain signals of the class boundary and the other latent variables
In the previous section, we demonstrated that BMBU accounted well for the variability of human choices and successfully reproduced the observed repulsive bias. However, such correspondences between the humans' and the model's choices do not by themselves warrant the validity of our procedure for estimating the latent states of the model variables (b, s, and v), which is crucial for testing the boundary-updating hypothesis. To validate this procedure, we tested whether it could accurately recover the true states of the model variables from synthetic datasets simulated with 256 ground-truth model parameter sets (see Materials and Methods). The recovered states of the model variables closely matched the corresponding true states (R2 = 0.98±0.0044, 0.96±0.0073, and 0.96±0.0040 for b, s, and v, respectively; mean±95% confidence interval), confirming the validity of our estimation procedure.
Then, with the trial-to-trial states of the simulated latent variables, we identified the brain signals of those variables with the following rationale and procedure. On any given trial t, a classifier makes a decision in a manner constrained by the causal structure of BMBU (Fig. 5G). This causal structure implies two important points to consider when identifying the neural representations of b, s, and v. First, for any cortical activity, a significant correlation with the variable of interest does not necessarily imply that the activity represents that variable per se; the activity may instead represent other variables that are associated with the variable of interest. Second, if any given cortical activity represents the variable of interest, that activity must not violate any of the relationships with the other variables that are implied by the causal structure (Table 1; see Materials and Methods).
We incorporated these two points into our search for the brain signals of b, s, and v, as follows. Initially, we identified the candidate brain signals of b, s, and v by localizing the patterns of activity that closely reflect the trial-to-trial states of b, s, and v. For localization, we used support-vector regression decoding with the searchlight technique (Kahnt et al., 2011a; Hebart et al., 2016), which is highly effective in detecting the local patterns of population fMRI responses associated with the latent variables of computational models (Kriegeskorte et al., 2006). Next, we put those candidate brain signals to a strong test of whether their trial-to-trial states satisfy the causal relationships with the other variables. Specifically, we converted those causal relationships into empirically testable sets of regression models (Table 1), respectively for b (14 regressions), s (14 regressions), and v (17 regressions), and checked whether all the regressors' coefficients derived from the brain signals were consistent with the regression models (see Materials and Methods). In what follows, we describe how the regression tests for the brain signal of b (yb) were derived from the causal structure of the variables defined by BMBU (see Materials and Methods for those of the two remaining variables, s and v).
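The localization step can be sketched as cross-validated linear decoding of a latent variable from a searchlight's voxel pattern. In this sketch, ridge regression stands in for the paper's support-vector regression decoder, and the searchlight definition and cross-validation scheme are deliberately simplified:

```python
import numpy as np

def decode_searchlight(X, y, alpha=1.0, k=5):
    """Cross-validated linear decoding: X is (n_trials, n_voxels) BOLD
    patterns from one searchlight, y the trial-wise latent states (b, s, or v).
    Returns the correlation between held-out predictions and y.
    Ridge regression is an assumed stand-in for the paper's SVR decoder."""
    n = len(y)
    pred = np.empty(n)
    for test_idx in np.array_split(np.arange(n), k):
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        Xtr, ytr = X[train_idx], y[train_idx]
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
        pred[test_idx] = X[test_idx] @ w
    return float(np.corrcoef(pred, y)[0, 1])

# Synthetic check: voxel patterns that linearly encode the latent decode well
rng = np.random.default_rng(1)
latent = rng.normal(size=120)
patterns = np.outer(latent, rng.normal(size=30)) + 0.5 * rng.normal(size=(120, 30))
score = decode_searchlight(patterns, latent)
```

In the actual pipeline this decoding would be repeated for every searchlight across the brain, with the decoding accuracy map thresholded to define candidate clusters.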
According to the causal relationship of b with the latent variables, yb must satisfy the following single linear regression models: yb must be positively regressed onto b (#1) and be so even when the false discovery rate (Benjamini and Hochberg, 1995) is applied (#2); yb must be positively regressed onto b even when b is orthogonalized to v (#3) or d (#4) because yb should reflect the variance irreducible to the offspring variables of b; yb must not be regressed onto s because b and s are independent of one another (b↮s; Fig. 5G, #5); yb must be negatively regressed onto v (b→v; Fig. 5G, #6) but not when v is orthogonalized to b because such orthogonalization removes the influence of b on v (#7); yb must be negatively regressed onto d (b→v→d; Fig. 5G, #8) but not onto u because u is not linearly correlated with b (b→v→u is blocked by the nonlinear relationship between u and v; Fig. 5G, #9). In addition, according to the causal relationship of the latent variables with the stimuli and choices (Fig. 5D–G), yb must satisfy the following multiple linear regression model defined by the observable variables [S(t),S(t−1),S(t−2),D(t−1),D(t−2)]: yb must not be regressed onto the current stimulus (#10) because b is independent of S(t); yb must be positively regressed onto the 1-back stimulus (#11) because b firmly shifts toward S(t−1); the regression of yb onto the 2-back stimulus must be weaker than that onto the 1-back stimulus (#12) because of memory decay (Fig. 5D; accordingly, the sign of the regression coefficient of S(t−2) was defined as the complementary part of that of S(t−1)); yb must not be regressed onto previous decisions because previous decisions do not have any influence on b (#13, #14). We did not include D(t) as a regressor in the multiple regression because D(t) may induce a spurious correlation between b and s by conditioning on the collider (common offspring) variable v (Elwert and Winship, 2014; b→v←s; Fig. 5G) via its relationship with v (v→d; Fig. 5G).
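The orthogonalization used in several of these tests amounts to regressing one variable out of another and re-running a simple regression on the residual. A minimal sketch with synthetic latent variables (here v is proxied by s − b plus noise, an assumed simplification of BMBU's decision variable):

```python
import numpy as np

def orthogonalize(x, z):
    """Residual of x after regressing out z (with intercept)."""
    Z = np.column_stack([np.ones(len(z)), z])
    return x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

def slope(y, x):
    """Regression coefficient of y onto x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    return float(np.linalg.lstsq(X, y, rcond=None)[0][1])

rng = np.random.default_rng(2)
b = rng.normal(size=1000)                  # latent boundary states
s = rng.normal(size=1000)                  # latent inferred sizes, independent of b
v = (s - b) + 0.3 * rng.normal(size=1000)  # decision-variable proxy (assumed)
y_b = b + 0.5 * rng.normal(size=1000)      # candidate boundary signal

coef_b = slope(y_b, b)                           # cf. test #1: positive
coef_b_orth_v = slope(y_b, orthogonalize(b, v))  # cf. test #3: still positive
coef_s = slope(y_b, s)                           # cf. test #5: ~0 (b and s independent)
```

A signal that merely tracked v would fail test #3, because orthogonalizing b to v would strip away everything such a signal shares with b.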
As a result, the brain signals that survived the exhaustive regression tests clustered in six separate regions (Fig. 6; Table 2). The signal of b appeared in three separate regions at different time points relative to stimulus onset: a region in the left inferior parietal lobe at 1.1 s (IPLb1) and two regions in the left posterior superior temporal gyrus at 3.3 and 5.5 s (pSTGb3, pSTGb5). The signal of s appeared in the left dorsolateral prefrontal cortex at 3.3 s (DLPFCs3) and in the right cerebellum at 5.5 s (Cerebs5). The signal of v appeared in the left anterior superior temporal gyrus at 5.5 s (aSTGv5). To ascertain the robustness of the neural representations of the latent variables in these six areas, we repeated the searchlight decoding analysis using a smaller searchlight size (87 voxels vs the original 123 voxels). Despite the change in searchlight size, we detected clusters that survived all regression tests around the six regions (Table 2).
Figure 6. Brain signals of the latent variables of BMBU. A, Loci of the brain signals. The brain regions where BOLD activity patterns satisfied all the regressions implied by the causal structure of the variables in BMBU are overlaid on the inflated cortex and the axial view of the cerebellum of the template brain. B, Within-trial time courses of the number of satisfied regressions. The within-trial task phases are displayed (top panel) to help appreciate when the brain signals become pronounced, given the hemodynamic delay (4–5 s) in BOLD (bottom three panels). C, The coefficients and the 95% CIs of the generalized linear mixed-effects model (GLMM) of the decoded variable averaged across the searchlights of each ROI at the time points at which each ROI was detected. The regression index refers to the index specified in Table 1. B, C, The colors of the symbols and lines correspond to those of the brain regions shown in A. Asterisks indicate statistical significance (*p < 0.05, **p < 0.01, ***p < 0.001). The 95% CIs of the mean across participants are indicated by the vertical error bars.
Table 2. Specification of the brain signals of the latent variables of BMBU
Lastly, we investigated whether the probable causal structures between the brain signals of b, s, and v are consistent with BMBU in the following two critical aspects. First, the brain signal of v should be concurrently affected by the brain signals of b and s: b→v←s. Second, there should be no causal connection between b and s because BMBU is built on the assumption that b and s are independent of one another (i.e., b and s are biased by previous and current stimuli, respectively): b↮s (Fig. 5G). To examine these aspects, we investigated all of the three-node networks (N = 162) composed of the brain signals of b, s, and v, and calculated their Bayesian Information Criterion (BIC; see Materials and Methods).
The outcomes of the BIC evaluation were consistent with BMBU. First, out of the 162 possible causal graphs, the smallest (best) BIC value was found for "pSTGb5→aSTGv5←Cerebs5" (Fig. 7). Second, we found that any graph with causal arrows between b(t) and s(t) was significantly less likely than the best causal graph (ΔBIC > 2; shown at the bottom of Fig. 7; Kass and Raftery, 1995). These results indicate that the relationship between the identified brain signals faithfully reflects the causal relationships of the latent variables implied by BMBU.
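Each candidate three-node graph can be scored by summing node-wise linear-Gaussian BICs, one regression per node on its parents. A minimal sketch on synthetic signals obeying b→v←s; the data-generating equation and sample size are assumptions, not the paper's exact search:

```python
import numpy as np

def node_bic(y, parents_data):
    """BIC of a linear-Gaussian regression of y on its parents (plus intercept)."""
    n = len(y)
    X = np.column_stack([np.ones(n)] + parents_data)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = resid @ resid / n
    k = X.shape[1] + 1  # regression weights + noise variance
    return n * np.log(sigma2) + k * np.log(n)

def graph_bic(data, graph):
    """Score a DAG given as {node: [parent, ...]} by summing node-wise BICs."""
    return sum(node_bic(data[c], [data[p] for p in ps]) for c, ps in graph.items())

rng = np.random.default_rng(4)
data = {"b": rng.normal(size=500), "s": rng.normal(size=500)}
data["v"] = data["s"] - data["b"] + 0.5 * rng.normal(size=500)  # assumed b->v<-s

bic_true = graph_bic(data, {"b": [], "s": [], "v": ["b", "s"]})
bic_missing = graph_bic(data, {"b": [], "s": [], "v": ["b"]})  # drops s -> v
```

In the full analysis this score would be computed for all 162 candidate graphs and the differences from the best graph reported as ΔBIC.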
Figure 7. The probable causal structures between the brain signals of the latent variables in BMBU. For each row, the value on the left indicates the BIC score of the causal structure relative to the most probable one at the top.
The variability of the class-boundary brain signals associated with previous stimuli contributes to the variability of choice
Finally, having identified the brain signals that represent the class boundary (IPLb1, pSTGb3, and pSTGb5), we verified the boundary-updating hypothesis with the rationale and analysis identical to those used to verify the sensory-adaptation hypothesis.
We stress that the respective associations of the brain signal of b with the previous stimulus (S(t−1); Table 1, eleventh row) and with the variable d (Table 1, eighth row) do not necessarily imply that the variability of the brain signal of b that is associated with S(t−1) contributes to the choice variability (as implied by the causal information flows through b depicted in Fig. 8A), for the same reasons mentioned when verifying the sensory-adaptation hypothesis. To verify such contribution, we need to compare the AME of the brain signals of b on the current choice (D(t); pSTGb5→D(t)) to the AME of the brain signals of b on D(t) with S(t−1) controlled (S(t−1)↛pSTGb5→D(t)).
Figure 8. Origin of the covariation between the current choice and the brain signals of the latent variables in BMBU. A, The causal structure of the variables implied by the boundary-updating hypothesis. The brain signal of the decision variable (v(t)) is influenced by the brain signal of the inferred class boundary (b(t)), the brain signal of the inferred stimulus (s(t)), and unknown sources (Uv). In turn, b(t) is influenced by the previous stimulus (S(t−1)) and unknown sources (Ub), whereas s(t) is influenced by the current stimulus (S(t)) and unknown sources (Us). Lastly, v(t) influences the current choice (D(t)). If the boundary-updating hypothesis is true, part of the causal influence of b(t) on D(t) must originate from S(t−1), as indicated by the connected chain of the dotted arrows. B–G, The average marginal effects (AMEs) of the brain signals on D(t), with the brain signals of b(t) from pSTGb5 (B), IPLb1 (C), and pSTGb3 (D), s(t) from DLPFCs3 (E) and Cerebs5 (F), and v(t) from aSTGv5 (G). In each panel, the influences of the given brain signal on D(t) that can be ascribed to S(t−1) and S(t) were assessed by checking (1) whether the AME of the given brain signal on D(t) (left) is significantly reduced or not after controlling the influence of S(t−1) (second from the left) or S(t) (second from the right), respectively, or (2) whether the AME of the given brain signal on D(t) controlling the influence of both S(t−1) and S(t) (right) significantly increased or not after controlling only the influence of S(t) (second from the right) or S(t−1) (second from the left), respectively. The colors of the bars correspond to those of the brain regions shown in Figure 6A. Asterisks indicate statistical significance (*p < 0.05, **p < 0.01, ***p < 0.001), and "n.s." stands for nonsignificance of the test. The 95% CIs of the mean across participants are indicated by the vertical error bars.
As anticipated, the AME of pSTGb5 on D(t) was significantly negative across participants (t(17)=−4.8, p=1.7×10−4; Fig. 8B, the first bar). Importantly, unlike the size-encoding signal in V1, the negative AME significantly weakened across participants when the contribution of S(t−1) was controlled (t(17)=2.8, p=0.012; Fig. 8B, the change from the first to the second bar). On the other hand, controlling S(t) did not affect the AME of pSTGb5 on D(t) at all (t(17)=0.29, p=0.77; Fig. 8B, the change from the first to the third bar), which is consistent with the absence of a contribution of S(t) to b in the causal relationship defined by BMBU (Fig. 5G). Likewise, the null effect of S(t) on the AME of pSTGb5 on D(t) was confirmed by the insignificant difference between the AME with S(t−1) controlled and that with both S(t) and S(t−1) controlled (t(17)=−0.31, p=0.77; Fig. 8B, the change from the fourth to the second bar). Also, the effect of S(t−1) on the AME of pSTGb5 on D(t) was confirmed by the significant difference between the AME with S(t) controlled and that with both S(t) and S(t−1) controlled (t(17)=−2.7, p=0.014; Fig. 8B, the change from the fourth to the third bar).
The same patterns were also observed for IPLb1 and pSTGb3 (Fig. 8C,D). In particular, the AMEs of pSTGb3 and IPLb1 on D(t) both weakened after controlling S(t−1) (pSTGb3: t(17)=2.2, p=0.046; Fig. 8D, the change from the first to the second bar; IPLb1: t(17)=2.1, p=0.0503; Fig. 8C, the change from the first to the second bar), but not after controlling S(t) (pSTGb3: t(17)=−0.57, p=0.58; Fig. 8D, the change from the first to the third bar; IPLb1: t(17)=0.22, p=0.83; Fig. 8C, the change from the first to the third bar). The null effect of S(t) was confirmed by the insignificant difference between the AME with S(t−1) controlled and that with both S(t) and S(t−1) controlled (pSTGb3: t(17)=0.43, p=0.67, Fig. 8D; IPLb1: t(17)=−0.41, p=0.69, Fig. 8C, the change from the fourth to the second bar). Also, the effect of S(t−1) was confirmed by the marginally significant or significant difference between the AME with S(t) controlled and that with both S(t) and S(t−1) controlled (pSTGb3: t(17)=−1.9, p=0.081, Fig. 8D, the change from the fourth to the third bar; IPLb1: t(17)=−2.2, p=0.045, Fig. 8C, the change from the fourth to the third bar). Taken together, the AME analyses suggest that the contribution of the class boundary to the current choice is significantly ascribed to the previous stimuli, supporting the boundary-updating hypothesis on repulsive bias.
Having found evidence supporting the boundary-updating hypothesis in the brain signals of b, we conducted the same AME analysis on the signals of s and v. Given the causal structure of b, s, and v, the validity of the boundary-updating hypothesis would be reinforced if the brain signals of s and v also turned out to fulfill the causal roles defined for them by BMBU. According to BMBU, the contribution of s to D(t) must originate not from S(t−1) but from S(t) (the causal route indicated by the solid arrows in Fig. 8A). In line with this implication, the AMEs of DLPFCs3 and Cerebs5 on D(t) were both significant across participants (t(17)=3.8, p=0.0014 for DLPFCs3; t(17)=3.3, p=0.0041 for Cerebs5; Fig. 8E,F, first bars) and significantly decreased after controlling S(t) (t(17)=−4.4, p=4.1×10−4 for DLPFCs3; t(17)=−3.7, p=0.0019 for Cerebs5; Fig. 8E,F, the change from the first to the third bar) but not after controlling S(t−1) (t(17)=1.2, p=0.26 for DLPFCs3; t(17)=0.69, p=0.50 for Cerebs5; Fig. 8E,F, the change from the first to the second bar). Likewise, the AMEs of DLPFCs3 and Cerebs5 on D(t) with S(t−1) controlled were both larger than those with both S(t) and S(t−1) controlled (t(17)=4.3, p=0.0050 for DLPFCs3; t(17)=3.8, p=0.0016 for Cerebs5; Fig. 8E,F, the change from the fourth to the second bar), whereas the AMEs of DLPFCs3 and Cerebs5 on D(t) with S(t) controlled did not differ from those with both S(t) and S(t−1) controlled (t(17)=−0.92, p=0.37 for DLPFCs3; t(17)=−0.057, p=0.96 for Cerebs5; Fig. 8E,F, the change from the fourth to the third bar). Taken together, the AME analyses suggest that the contribution of the inferred stimulus to the current choice is significantly ascribed to the current but not to the previous stimuli, supporting the boundary-updating hypothesis.
On the contrary, the contribution of v to D(t) must originate both from S(t−1) and from S(t) (Fig. 8A). In line with this implication, the AME of aSTGv5 on D(t) was significant (t(17)=5.1, p=9.7×10−5; Fig. 8G, the first bar) and significantly decreased both after controlling S(t−1) (t(17)=−2.8, p=0.012; Fig. 8G, the change from the first to the second bar) and after controlling S(t) (t(17)=−4.1, p=7.5×10−4; Fig. 8G, the change from the first to the third bar). Likewise, relative to the AME of aSTGv5 on D(t) with both S(t) and S(t−1) controlled, the AME significantly increased when only S(t−1) was controlled (t(17)=4.1, p=6.7×10−4; Fig. 8G, the change from the fourth to the second bar) and when only S(t) was controlled (t(17)=2.8, p=0.012; Fig. 8G, the change from the fourth to the third bar). Taken together, the AME analyses suggest that the contribution of the decision variable to the current choice is significantly ascribed to both the current and previous stimuli, supporting the boundary-updating hypothesis.
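The AME logic above can be sketched with a plain logistic regression: fit the choice on a brain signal with and without S(t−1) as a covariate, and compare the signal's average marginal effect. The simulation below uses an assumed noisy boundary proxy driven partly by the previous stimulus; all parameter values are illustrative, not the paper's data:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Logistic regression by Newton-Raphson (IRLS); X excludes the intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        H = X1.T @ (X1 * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X1.shape[1])
        w += np.linalg.solve(H, X1.T @ (y - p))
    return w

def ame(X, y, j):
    """Average marginal effect of column j of X on p(choice = large)."""
    w = fit_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-(np.column_stack([np.ones(len(y)), X]) @ w)))
    return float(np.mean(p * (1 - p)) * w[j + 1])

rng = np.random.default_rng(5)
n = 4000
S_prev = rng.choice([-1.0, 1.0], size=n)
S_cur = rng.choice([-1.0, 1.0], size=n)
b_true = 0.8 * S_prev + 0.6 * rng.normal(size=n)  # boundary attracted to S(t-1)
y_b = b_true + 0.8 * rng.normal(size=n)           # noisy boundary brain signal
p_large = 1.0 / (1.0 + np.exp(-1.5 * (S_cur - b_true)))
choice = (rng.random(n) < p_large).astype(float)

ame_alone = ame(y_b[:, None], choice, 0)                    # negative AME
ame_ctrl = ame(np.column_stack([y_b, S_prev]), choice, 0)   # S(t-1) controlled
```

Because part of the noisy boundary signal's predictive power is carried by S(t−1), adding S(t−1) as a covariate shrinks the signal's AME, mirroring the pattern reported for pSTGb5, pSTGb3, and IPLb1.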
On a separate note, the six loci of the brain signals of b, s, and v were defined by applying the conservative criterion that any cluster satisfying all the regression tests (Table 1) should contain at least 12 voxels. We note that there was a focal region in the right-hemisphere medial visual cortex that survived the regression tests for s(t) at 3.3 s after stimulus onset (VCs3) but failed to reach the threshold cluster size (N voxels = 6).
To examine the neural loci of the inferred stimulus further, we checked the possibility that VCs3 might carry the signal via which the current stimulus (S(t)) contributes to the current choice (D(t)). The AME of VCs3 on D(t) was significant (t(17)=3.0,p=0.0074), but no longer when S(t) was controlled (t(17)=1.6, p=0.14), which indicates that the noise variability of VCs3 is not tightly linked to the variability of the current choice. However, the AME of DLPFCs3 on D(t) with S(t) controlled was significant (t(17)=2.5,p=0.023; Fig. 8E, the third bar) and that of Cerebs5 was marginally significant (t(17)=2.0,p=0.063; Fig. 8F, the third bar). The results indicate that DLPFCs3 and Cerebs5 carry the signal via which the current stimulus (S(t)) contributes to the current choice (D(t)), whereas such contribution is not evident for VCs3.
Furthermore, to test the sensory-adaptation hypothesis, we examined whether VCs3 carries the stimulus signal via which the previous stimulus (S(t−1)) contributes to the current choice (D(t)). However, the AME of VCs3 on D(t) did not decrease when the contribution of S(t−1) was controlled (t(17)=0.28, p=0.78). Likewise, the AME of VCs3 on D(t) with S(t) controlled did not differ from that with both S(t−1) and S(t) controlled (t(17)=0.70,p=0.49). These results corroborate the AME analyses on V1 in experiment 1 (Fig. 4D–F), confirming that the previous stimulus is unlikely to contribute to the current choice via the stimulus-related signals in the early visual cortex.
In sum, the results suggest that the neural signals of b and s relayed the previous and current stimuli, respectively, to current decisions, and that the neural signal of v relayed both previous and current stimuli to current decisions, as BMBU implies, which is consistent with the boundary-updating hypothesis.
Discussion
Here, we explored the two possible origins of repulsive bias, sensory-adaptation versus boundary-updating, in binary classification tasks. Although V1 adapted to the previous stimulus, its variability associated with the previous stimulus failed to contribute to the choice variability. By contrast, the variability associated with the previous stimulus in the boundary-representing signals in IPL and pSTG contributed to the choice variability. These results suggest that the repulsive bias in binary classification is likely to arise as the internal class boundary continuously shifts toward the previous stimulus.
Dissociation between sensory-adaptation in V1 and repulsive bias
What makes sensory adaptation a viable origin of repulsive bias is not its mere presence but its contribution to repulsive bias. The presence of sensory adaptation in V1 has been firmly established (Clifford et al., 2007; Kohn, 2007; Solomon and Kohn, 2014; Weber et al., 2019) and is the necessary premise for the sensory-adaptation hypothesis to work. What matters is whether the trial-to-trial variability of V1 due to such adaptation exerts an influence on the current choice. Such an influence was not observed in our data.
From a general perspective, our findings demonstrate a dissociation between the impact of previous decision-making episodes on sensory-cortical activity and the contribution of that sensory-cortical activity to decision-making behavior. In this regard, V1 in the current work acts like the binocular-disparity-encoding signal of V2 neurons in a recent single-cell study on monkeys (Lueckmann et al., 2018), where, despite the impact of the history on V2 activity, the variability of V2 activity associated with the history failed to contribute to the history effects on decision-making behavior. Similarly, our findings echo the failure of the sensory adaptation of V1 to influence visual orientation estimation in an fMRI study on human participants (Sheehan and Serences, 2022). There, while sensory adaptation was evident along the hierarchy of visual areas including V1, V2, V3, V4, and the intraparietal sulcus (IPS), the history effect of the previous stimulus on the current estimation behavior was opposite to that expected from sensory adaptation, which suggests that a downstream mechanism compensates for sensory adaptation. Such a mechanism was also called for when single-cell-recording work on monkeys tried to explain the intriguing adaptation effects found along the visual processing hierarchy (McLelland et al., 2009). For instance, static visual stimuli engendered prolonged adaptation, on the order of tens of seconds, in the lateral geniculate nucleus, but the adaptation in V1 was paradoxically short-lived, on the order of 100 ms.
The representations of the class boundary in IPL and pSTG
To account for the repulsive bias in binary classification, previous studies proposed descriptive models based on the common idea that the internal boundary continuously shifts toward the previous stimuli (Treisman and Williams, 1984; Lages and Treisman, 1998, 2010; Dyjas et al., 2012; Raviv et al., 2014; Norton et al., 2017; Hachen et al., 2021). However, the neural concomitant of class-boundary updating has rarely been demonstrated.
To the best of our knowledge, this issue has so far been addressed by only one fMRI study (White et al., 2012), which reported a class-boundary signal in the left inferior temporal pole. However, several aspects of that work make it hard to consider the reported brain signal a representation of the class boundary that induces repulsive bias. First, the class boundary was experimentally manipulated in a block-by-block manner. Thus, it is unclear whether the reportedly boundary-representing signal was updated by previous stimuli on a trial-to-trial basis, which is required to induce repulsive bias. Second, the class-boundary size correlated with the average stimulus size block-by-block in their experiments. Because of this confound, one cannot rule out the possibility that the reported brain signal reflects the sensory signal associated with the average stimulus size induced by the current stimulus. By contrast, the brain signal of the class boundary in our work is free from these methodological limitations, because it is updated on a trial-to-trial basis and survived the rigorous set of tests, including those addressing possible confounding variables (Table 1). In this sense, the current work can be considered the first demonstration of brain signals representing a class boundary that is dynamically updated in such a way that it can account for repulsive bias.
We emphasize that we developed BMBU to infer the trial-to-trial latent states of the class boundary used by human observers, for the purpose of verifying the boundary-updating hypothesis on the repulsive bias. In this sense, BMBU should not be taken as a unified account of the history effects reported by previous studies. For example, BMBU does not account for the influence of previous decisions on subsequent decision-making, another significant contributor to history effects (Akaishi et al., 2014; Urai and Donner, 2022). To be sure, we are open to the possibility of a unified mechanism relating previous (as well as current) stimuli and previous decisions to the current decision in an integrative manner. To incorporate previous decisions into such a unified mechanism, it is important to distinguish the influence of the previous choice from that of the previous motor response, which we could not do in the current work because choices and motor responses covaried. In this regard, the weak but significant negative regression coefficient of the previous decision in experiment 1 (Fig. 3C) may have reflected the influence of the previous motor response, as previously suggested (Zhang and Alais, 2020).
The representations of inferred stimuli in DLPFC and cerebellum
The brain signals of the inferred ring size (s(t)) in the dorsolateral prefrontal cortex (DLPFC) and cerebellum share many features with V1 in that their covariation with the current choice did not decrease after controlling for the previous stimulus but did decrease after controlling for the current stimulus (Figs. 4D–F, 8E,F). This commonality suggests that DLPFC, cerebellum, and V1 alike route the flow of information originating from the current stimulus. What, then, made V1 ineligible as the brain signal of s(t)?
It is notable that BMBU treats s(t) as a random variable that carries noise variability in addition to being influenced by the physical stimulus (Fig. 8A). This means that the brain signal of s(t) should remain associated with the choice even when the current stimulus is controlled for, because the noise variability can also influence the current choice, as captured by the concept of "choice probability" (Macke and Nienborg, 2019). However, unlike those of DLPFC and cerebellum, the AME of V1 on the current choice disappeared after controlling for the current stimuli, which disqualifies V1 as the brain signal of s(t). In line with this, the AME of VCs3 on D(t) also disappeared after S(t) was controlled for in experiment 2, which likewise disqualifies VCs3 as a valid brain signal of s(t).
The residence of the representation of the inferred (i.e., subjective or perceived) stimulus in DLPFC and cerebellum, rather than in the visual cortex, seems consistent with previous reports. DLPFC and the cerebellum are well known for their critical involvement in visual awareness (Gao et al., 1996; Rees et al., 2002; Dehaene and Changeux, 2011; Lau and Rosenthal, 2011; Baumann et al., 2015). By contrast, the visual cortex is likely more involved in a faithful representation of the physical input than in its subjective representation (Renart and Machens, 2014), consistent with previous findings from our group (Lee et al., 2007; Choe et al., 2014).
The representation of the decision variable in aSTG
Whereas previous single-cell studies have reported that the decision variable is represented in the prefrontal cortex (Kim and Shadlen, 1999; Hanks et al., 2015; Hebart et al., 2016), we identified the brain signal of v only in aSTG, not in PFC. This inconsistency may reflect the limited spatial and temporal resolution of fMRI measurements. For example, if a given signal of interest is encoded in sequential or dynamic activity patterns across a neural population, as recently demonstrated theoretically (Orhan and Ma, 2019) and empirically (Wutz et al., 2018), such signals cannot be decoded from fMRI responses. Alternatively, the inconsistency may have arisen because, unlike our study, the previous studies did not take the history effect into account when defining the decision variable, despite the prevalence of diverse history effects across decision-making tasks (Fründ et al., 2014; Lak et al., 2020). In this scenario, the brain signal of the inferred stimulus in DLPFC in our study hints at the possibility that the previously reported decision-variable signal in PFC reflected the inferred stimulus, which is closely associated with the decision variable when the decision boundary is assumed to be fixed (Gold and Shadlen, 2007). Understanding the functional role of DLPFC in perceptual decision-making will require further studies, especially ones that consider history effects when defining the decision variable while probing neural responses at sufficiently high spatiotemporal resolution.