Research Articles, Behavioral/Cognitive

Neural Evidence for Boundary Updating as the Source of the Repulsive Bias in Classification

Heeseung Lee, Hyang-Jung Lee, Kyoung Whan Choe and Sang-Hun Lee
Journal of Neuroscience 21 June 2023, 43 (25) 4664-4683; https://doi.org/10.1523/JNEUROSCI.0166-23.2023
Department of Brain and Cognitive Sciences, Seoul National University, Seoul 08826, Republic of Korea

This article has a correction. Please see:

  • Lee et al., “Neural Evidence for Boundary Updating as the Source of the Repulsive Bias in Classification” - September 13, 2023

Abstract

Binary classification, an act of sorting items into two classes by setting a boundary, is biased by recent history. One common form of such bias is repulsive bias, a tendency to sort an item into the class opposite to its preceding items. Sensory-adaptation and boundary-updating are considered as two contending sources of the repulsive bias, yet no neural support has been provided for either source. Here, we explored human brains of both men and women, using functional magnetic resonance imaging (fMRI), to find such support by relating the brain signals of sensory-adaptation and boundary-updating to human classification behavior. We found that the stimulus-encoding signal in the early visual cortex adapted to previous stimuli, yet its adaptation-related changes were dissociated from current choices. Contrastingly, the boundary-representing signals in the inferior-parietal and superior-temporal cortices shifted to previous stimuli and covaried with current choices. Our exploration points to boundary-updating, rather than sensory-adaptation, as the origin of the repulsive bias in binary classification.

SIGNIFICANCE STATEMENT Many animal and human studies on perceptual decision-making have reported an intriguing history effect called “repulsive bias,” a tendency to classify an item as the opposite class of its previous item. Regarding the origin of repulsive bias, two contending ideas have been proposed: “bias in stimulus representation because of sensory adaptation” versus “bias in class-boundary setting because of belief updating.” By conducting model-based neuroimaging experiments, we verified their predictions about which brain signal should contribute to the trial-to-trial variability in choice behavior. We found that the brain signal of class boundary, but not stimulus representation, contributed to the choice variability associated with repulsive bias. Our study provides the first neural evidence supporting the boundary-based hypothesis of repulsive bias.

  • Bayesian inference
  • history effect
  • perceptual decision
  • repulsive bias
  • sensory adaptation

Introduction

We commit to a proposition about a specific world state when making a perceptual decision. One basic form of such commitment is binary classification: deciding whether an item's magnitude lies on the smaller or larger side of the magnitude distribution across items of interest (Fig. 1A). For example, when uttering “this tree is tall” while walking in a wood, we are implicitly judging the height of that tree to be taller than the typical height of the trees in the wood (Klein, 1980; Bierwisch, 1989), where “typical height” works as the boundary dividing the “short” and “tall” classes. In this way, binary classification is exercised in our daily language use whenever we modify a subject with relative adjectives (Rips and Turnbull, 1980; Tribushinina, 2011; Solt, 2015; Lassiter and Goodman, 2017), and it has been adopted as an essential paradigm for studying perceptual decision-making (Lages and Treisman, 1998; Grinband et al., 2006; Kepecs et al., 2008; Nahum et al., 2010; Lak et al., 2014; Bosch et al., 2020; Hachen et al., 2021).

Figure 1.
Two contending hypotheses on the origin of the repulsive bias in binary classification. A, Task structure (left) and statistical knowledge (right) for binary classification. For any given item, its class is determined by its position relative to the class boundary in the distribution of feature magnitudes relevant to a given task (e.g., a tree is classified as “tall” if its height falls on the side greater than the typical height of the trees in the wood of interest). Because binary classification is inherently relative, “biased sensory encoding” and “biased knowledge about the boundary position” caused by previous stimuli can, in principle, induce the repulsive bias on an equal footing. B, Sensory-adaptation hypothesis. It points to the adaptation of a low-level stimulus-encoding signal to past stimuli (arrow 1) as the origin of the repulsive bias (arrow 2). In the case of visual classification tasks, the task-relevant sensory signals in the early visual cortex (blue patch), which are subject to adaptation, have been hypothesized to mediate the repulsive bias. C, Boundary-updating hypothesis. It points to the attractive shift of a classifier's internal class boundary toward previous stimuli (arrow 3) as the origin of the repulsive bias (arrow 4). Such boundary-representing signals are expected to reside not in the early sensory cortex but in the high-tier associative cortices (red patch).

Humans and nonhuman animals show various forms of history bias in binary classification. One frequent form of such history biases is a tendency to classify an item as the class opposite to its preceding items, dubbed repulsive bias (Lages and Treisman, 1998, 2010; Bosch et al., 2020; Hachen et al., 2021). For instance, we tend to classify a tree of intermediate height as “tall” after seeing a short tree. Currently, it remains unclear why and how repulsive bias occurs.

As the most straightforward scenario for repulsive bias, previous stimuli may repel our perception of the current stimulus away from themselves because the sensory system adapts to earlier stimuli (Gibson and Radner, 1937; Stocker and Simoncelli, 2006; Clifford et al., 2007; Knapen et al., 2010; Pavan et al., 2012; Morgan, 2014; Nakashima and Sugita, 2017; Fig. 1B). According to this “sensory-adaptation” hypothesis, the current tree is biased toward being classified as “tall” because the sensory system's adaptation to the previous short tree makes the current tree appear taller than its physical height. There is, however, an alternative scenario, which considers the possibility that the internal class boundary adaptively shifts toward recent samples of property magnitude (Treisman and Williams, 1984; Lages and Treisman, 1998, 2010; Dyjas et al., 2012; Raviv et al., 2014; Norton et al., 2017; Hachen et al., 2021; Fig. 1C). According to this “boundary-updating” hypothesis, the current tree is biased toward being classified as “tall” because the shift of the class boundary toward the previous short tree places the current tree on the taller side of the boundary.

As discussed previously (Hachen et al., 2021), it is hard to assess which hypothesis is more viable based on behavioral data. This difficulty arises because binary classification is a matter of the relativity between the perceived stimulus and the class boundary: the identical bias in classification can be caused either by sensory-adaptation or boundary-updating. However, the two hypotheses involve distinct neural routes through which repulsive bias transpires. The sensory-adaptation hypothesis predicts that the sensory brain signals subject to adaptation, such as those in the early sensory cortex with substantive adaptation to earlier stimuli, contribute to the choice variability. By contrast, the boundary-updating hypothesis predicts that the brain signals of the shifting boundary, such as those in the high-tier cortices involved in the working memory of previous stimuli, contribute to the choice variability.

Here, we tested these two predictions by analyzing functional magnetic resonance imaging (fMRI) data. We found that the stimulus-encoding signal in primary visual cortex (V1) exhibited adaptation, but its adaptation-induced bias was dissociated from current choices. By contrast, the boundary-representing signals in the posterior-superior-temporal gyrus and the inferior-parietal lobe not only shifted toward previous stimuli but also covaried with current choices. Our findings contribute to the resolution of the competing ideas regarding the source of repulsive bias by providing the first neural evidence supporting the boundary-updating scenario.

Materials and Methods

The data of experiment 1 (Exp1) and experiment 2 (Exp2) were acquired from 19 (nine females, aged 20–30 years) and 18 (nine females, aged 20–30 years) participants, respectively. Seventeen of the participants took part in both experiments. The Research Ethics Committee of Seoul National University approved the experimental procedures. All participants gave informed consent and were naive to the purpose of the experiments. High-spatial-resolution images were acquired only from the early visual cortex in Exp1, whereas the images in Exp2 were acquired from the entire brain at a conventional spatial resolution. The 17 people who provided the data for both experiments participated in three to six behavior-only sessions for training and stimulus calibration, one fMRI session for retinotopy, and two experimental fMRI sessions (one for each experiment). The remaining people also completed the behavioral and retinotopy fMRI sessions with the same protocols but participated in only one of the two experiments.

The data from Exp1 were used in our previous work (Choe et al., 2014). The data from Exp2 have not been used in any previous publication. In the current paper, we describe some basic procedures of Exp1; for more details, please refer to the original work (Choe et al., 2014).

Experimental setup

MRI data were collected using a 3 Tesla Siemens Tim Trio scanner equipped with a 12-channel Head Matrix coil at the Seoul National University Brain Imaging Center. Stimuli were generated using MATLAB (MathWorks) in conjunction with MGL (http://justingardner.net/mgl) on a Macintosh computer. Observers looked through an angled mirror attached to the head coil to view the stimuli displayed via an LCD projector (Canon XEED SX60) onto a back-projection screen at the end of the magnet bore at a viewing distance of 87 cm, yielding a field of view of 22 × 17°.

Behavioral data acquisition

Figure 2 illustrates the experimental procedures. On each trial, the observer initially viewed a small fixation dot (diameter in visual angle, 0.12°; luminance, 321 cd/m2) appearing at the center of a dark (luminance, 38 cd/m2) screen. A slight increase in the size of the fixation dot (from 0.12° to 0.18° in diameter), which was readily detected with foveal vision, forewarned the observer of an upcoming presentation of a test stimulus. The test stimulus was a brief (0.3 s) presentation of a thin (full-width at half-maximum of a Gaussian envelope, 0.17°), white (321 cd/m2), dashed (radial frequency, 32 cycles/360°) ring that counter-phase-flickered at 10 Hz. After each presentation, participants classified the ring size into small or large using a left-hand or right-hand key, respectively, within 1.5 s from stimulus onset. They were instructed to maintain strict fixation on the fixation dot throughout experimental runs. This behavioral task was performed in three different environments: (1) the training sessions, (2) the practice runs of trials inside the MR scanner, and (3) the main scan runs inside the MR scanner, in the following order.

In the training sessions, participants practiced the task intensively over several (three to six) sessions (∼1000 trials per session) in a dim room outside the scanner until they reached an asymptotic level of accuracy. Note that we opted to train observers with stimuli that were much larger (mean radius of 9°) than those used in the main experiments, both to avoid any unwanted perceptual learning effects at low sensory levels and to help participants learn the task structure of classification.

In the practice runs inside the MR scanner, participants performed 54 practice trials and then 180 threshold-calibration trials while lying in the magnet bore. On each threshold-calibration trial (consecutive trials were separated by 2.7 s), one of 20 different-sized rings was presented according to a multiple random staircase procedure (four randomly interleaved one-up-two-down staircases, two starting from the easiest stimulus and the other two starting from the hardest one) with trial-to-trial feedback based on a class boundary with a radius of 2.84°. A Weibull function was fit to the psychometric curves obtained from the threshold-calibration trials using a maximum-likelihood procedure. From the fitted Weibull function, the threshold difference in size (Δ in Fig. 2B) associated with a 70.7% proportion of correct responses was estimated. Using this threshold, three threshold-level ring sizes were individually tailored for each participant: 2.84−Δ° (S-ring), 2.84° (M-ring), and 2.84+Δ° (L-ring).
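For illustration, the threshold Δ could be obtained by maximum-likelihood fitting of a Weibull psychometric function and inverting it at 70.7% correct. The sketch below is a minimal Python stand-in (not the authors' MATLAB code); the variable names, the fixed guess and lapse rates, and the optimizer are assumptions.

```python
# Hypothetical sketch: maximum-likelihood Weibull fit and 70.7%-correct threshold.
# Assumes `size_diff` holds |ring radius - 2.84| per trial and `correct` holds 0/1 outcomes.
import numpy as np
from scipy.optimize import minimize

def weibull_pc(x, alpha, beta, guess=0.5, lapse=0.02):
    """Proportion correct as a function of size difference x (Weibull form)."""
    return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(x / alpha) ** beta))

def neg_log_likelihood(params, size_diff, correct):
    alpha, beta = params
    p = np.clip(weibull_pc(size_diff, alpha, beta), 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

def fit_threshold(size_diff, correct, target=0.707, guess=0.5, lapse=0.02):
    fit = minimize(neg_log_likelihood, x0=[0.1, 2.0],
                   args=(size_diff, correct), method="Nelder-Mead")
    alpha, beta = fit.x
    # Invert the fitted curve to find the size difference yielding `target` percent correct.
    delta = alpha * (-np.log(1 - (target - guess) / (1 - guess - lapse))) ** (1 / beta)
    return delta  # individually tailored Delta used to build the S/M/L rings
```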

In the main scan runs, one of these rings with threshold-level differences was presented on each trial in an order defined by an m-sequence (base = 3, power = 3; nine S-rings, nine L-rings, and eight M-rings were presented; all scan runs started with two M-rings; Buracas and Boynton, 2002) to null the autocorrelation between stimuli. Participants were not informed of the existence of the M-ring. Importantly, participants did not receive trial-to-trial feedback. Instead, only their run-averaged percent correct, based on the S-ring and L-ring trials, was shown during a break after each run, to prevent trial-to-trial feedback from evoking any unwanted brain responses associated with rewards (Marco-Pallarés et al., 2007; Carlson et al., 2011) or errors (Carter et al., 1998; Holroyd et al., 2004; Cavanagh and Frank, 2014). Consecutive trials were separated by 13.2 s. In the main scan runs of Exp1 and Exp2, observers performed 156 (six runs × 26 trials) and 208 (eight runs × 26 trials) trials in total, respectively.
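As a side note, a base-3, power-3 m-sequence has period 3^3 − 1 = 26, matching the 26 trials per run, and contains one symbol eight times and the other two symbols nine times each, matching the eight M-rings and nine S- and L-rings. The sketch below generates such a sequence with a ternary linear-feedback shift register; the particular primitive polynomial (x^3 + 2x + 1 over GF(3)), the seed, and the symbol-to-ring mapping are our own illustrative assumptions, not taken from Buracas and Boynton (2002), and the authors' additional constraint that runs start with two M-rings is not enforced here.

```python
# Hypothetical sketch of a base-3, power-3 m-sequence (period 3**3 - 1 = 26).
# Recurrence s[n] = (s[n-2] + 2*s[n-3]) mod 3 corresponds to the primitive
# polynomial x^3 + 2x + 1 over GF(3) (an assumption; any primitive polynomial works).
def ternary_m_sequence(seed=(1, 0, 0), length=26):
    s = list(seed)                      # nonzero initial register state
    while len(s) < length:
        s.append((s[-2] + 2 * s[-3]) % 3)
    return s[:length]

seq = ternary_m_sequence()
rings = ["M", "S", "L"]                 # illustrative mapping: 0 -> M, 1 -> S, 2 -> L
trial_order = [rings[x] for x in seq]
print(trial_order.count("M"), trial_order.count("S"), trial_order.count("L"))  # 8 9 9
```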

MRI equipment and acquisition

We acquired three types of MRI images. (1) 3D, T1-weighted, whole-brain images were acquired at the beginning of each functional session: MPRAGE; resolution, 1 × 1 × 1 mm; field of view (FOV), 256 mm; repetition time (TR), 1.9 s; inversion time (TI), 700 ms; echo time (TE), 2.36 ms; and flip angle (FA), 9°. (2) 2D, T1-weighted, in-plane images were acquired at the beginning of each functional session. The parameters for the retinotopy-mapping, V1-mapping, and whole-brain-mapping sessions differed slightly, as follows (listed in that order): MPRAGE; resolution, 1.078 × 1.078 × 2.0 mm, 1.083 × 1.083 × 2.3 mm, and 1.08 × 1.08 × 3.3 mm; TR, 1.5 s; TI, 700 ms; TE, 2.79 ms; and FA, 9°. (3) 2D, T2*-weighted, functional images were acquired during each functional session: gradient EPI; TR, 2.7, 2.2, and 2.2 s; TE, 40 ms; FA, 77°, 73°, and 73°; FOV, 208, 207, and 208 mm; image matrix, 104 × 104, 90 × 90, and 90 × 90; slice thickness, 1.8 mm with an 11% gap, 2 mm with a 15% gap, and 3 mm with a 10% gap; slices, 30, 22, and 32 oblique transverse slices; bandwidth, 858, 750, and 790 Hz/px; and effective voxel size, 2.0 × 2.0 × 1.998 mm, 2.3 × 2.3 × 2.3 mm, and 3.25 × 3.25 × 3.3 mm.

Retinotopy-mapping protocol

Standard traveling-wave methods (Engel et al., 1994; Sereno et al., 1995) were used to define V1, to estimate each participant's hemodynamic impulse response function (HIRF) of V1, and to estimate V1 voxels' receptive field centers and widths. High-contrast, flickering (1.33 Hz) dartboard patterns were presented as 0.89°-thick expanding or contracting rings in two scan runs, as 40°-wide clockwise or counterclockwise rotating wedges in four runs, or as four stationary, 15°-wide wedges forming two bowties centered on the vertical and horizontal meridians in one run. Each scanning run consisted of nine repetitions of a 27-s period of stimulation. Fixation behavior during the scans was assured by monitoring participants' performance on a fixation task, in which they had to detect any reversal in the direction of a small dot rotating around the fixation.

Data preprocessing of V1 images in the retinotopy-mapping session and the main session of Exp1

All functional EPI images were motion-corrected using SPM8 (http://www.fil.ion.ucl.ac.uk/spm; Friston et al., 1996; Jenkinson et al., 2002) and then co-registered to the high-resolution reference anatomic volume of the same participant's brain via the high-resolution in-plane image (Nestares and Heeger, 2000). After co-registration, the images of the retinotopy-mapping scan were resliced, but not spatially smoothed, to the spatial dimensions of the main experimental scans. The area V1 was manually defined on the flattened gray-matter cortical surface, mainly based on the meridian representations, resulting in 825.4±140.7 (mean±SD across observers) voxels. The individual voxels' time series were divided by their means to convert them from arbitrary intensity units to percentage modulations and were linearly detrended and high-pass filtered (Smith et al., 1999) using custom scripts in MATLAB (MathWorks). The cutoff frequency was 0.0185 Hz for the retinotopy-mapping session and 0.0076 Hz for the main session. The first 10 (of 90; the length of a cycle) and 6 (of 156; the length of a trial) frames of each run of the retinotopy-mapping session and the main session, respectively, were discarded to minimize the effect of transient magnetic saturation and allow the hemodynamic response to reach a steady state. “Blood-vessel-clamping” voxels, which show unusually high variances of fMRI responses, were discarded (Olman et al., 2007; Shmuel et al., 2007); a voxel was classified as “blood-vessel-clamping” if its variance exceeded 10 times the median variance across all voxels. As the final step of data preprocessing, we removed a stimulus-nonspecific (untuned) component from the detrended BOLD time series by subtracting the across-eccentricity-bin average from the individual bins' time series at each time frame $t$, which resulted in the tuned responses $TR_i$: $TR_i(t) = RR_i(t) - \sum_{i=1}^{n_e} RR_i(t)/n_e$, where $RR_i$ is the $i$-th bin's BOLD time series and $n_e$ is the number of eccentricity bins (21). This subtraction procedure is identical to that used in our previous work (Choe et al., 2014). We used $TR_i(t)$ to extract the size-encoding signal in V1.
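The untuned-component removal amounts to subtracting, at each time frame, the mean across eccentricity bins. A one-line sketch (array name and shape are illustrative, not the authors' code):

```python
# Hypothetical sketch: remove the stimulus-nonspecific (untuned) component.
# `RR` is an (n_e x T) array of detrended BOLD time series, one row per eccentricity bin.
import numpy as np

def tuned_responses(RR):
    # TR_i(t) = RR_i(t) - mean over eccentricity bins at time t
    return RR - RR.mean(axis=0, keepdims=True)
```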

Data preprocessing of whole-brain images in the main session of Exp2

The whole-brain images of the participants in Exp2 were normalized to the MNI template in the following steps: motion correction, co-registration to whole-brain anatomic images via the in-plane images (Nestares and Heeger, 2000), spike elimination, slice timing correction, resampling to 3 × 3 × 3-mm voxel size with the SPM DARTEL Toolbox (Ashburner, 2007). Spatial smoothing was not applied to avoid the blurring of the patterns of activity. All the procedures were implemented using SPM8 and SPM12 (https://www.fil.ion.ucl.ac.uk/spm-statistical-parametric-mapping/; Friston et al., 1996; Jenkinson et al., 2002), except for spike elimination, for which we used the AFNI toolbox (Cox, 1996). The first 6 frames of each functional scan, which correspond to the first trial of each run, were discarded to allow the hemodynamic responses to reach a steady state. Then, the normalized BOLD time series at each voxel, each run, and each brain underwent linear detrending, high-pass filtering (0.0076-Hz cutoff frequency with a Butterworth filter), conversion into percent-change signals, and correction for non-neural nuisance signals, which was done by regressing out the mean BOLD activity of CSF.

The anatomic masks of CSF, white matter, and gray matter were defined by generating the probability tissue maps for individual participants from T1-weighted images, by smoothing those maps to the normalized MNI space using SPM12, and then by averaging them across participants. Finally, the masks were defined as respective groups of voxels whose probabilities exceed 0.5.

Unfortunately, in a few of the sessions, the functional images did not cover the entire brain. The missing coverage was much larger in one participant's session than in the others and included the orbitofrontal cortex and posterior cerebellum. To avoid losing too many voxels for analysis because of this single session, we relaxed the voxel-selection criterion slightly by including the voxels that were shared by >16 brains in the normalized MNI space. As a result, some voxels in the temporal pole, ventral orbitofrontal cortex, and posterior cerebellum were excluded from data analysis.

Estimation of the eccentricities in retinotopic space for V1 voxels

For each V1 voxel in Exp1, its eccentricity (e, as shown in Fig. 3E,H) was defined by fitting a one-dimensional Gaussian function simultaneously to the time-series of fMRI responses to the expanding and contracting ring stimuli in the retinotopy session, which were also used for the definition of V1. The essence of this procedure is as follows (additional details can be found in the original paper; Choe et al., 2014).

First, the time series of fMRI were extracted only from a relevant group of voxels with SNR > 3 in both of the ring scan runs. Second, an eccentricity-tuning curve (gain over eccentricity, in other words) of a single voxel, $g_e(\varepsilon)$, was modeled by a Gaussian as a function of the eccentricity in a visuotopic space, $\varepsilon$, and it was parameterized by a peak eccentricity, $e$, and a tuning width, $\sigma$: $g_e(\varepsilon) = \exp\left(-\frac{(\varepsilon - e)^2}{2\sigma^2}\right)$.

Third, the collective responses of visual neurons within that voxel with a particular $g_e(\varepsilon)$ at a given time frame $t$, $n(t)$, were predicted by multiplying $g_e(\varepsilon)$ by the spatial layout of the stimulus input at that time frame, $s(\varepsilon,t)$: $n(t) = \sum_{\varepsilon} s(\varepsilon,t)\, g_e(\varepsilon)$.

Fourth, the predicted time series of fMRI responses of that voxel, $\mathrm{fMRI}_p(t)$, was generated by convolving $n(t)$ with a scaled (by $\beta$) copy of the HIRF acquired from the meridian scans, $h(t)\beta$, plus a baseline response, $b$: $\mathrm{fMRI}_p(t) = n(t) * h(t)\beta + b$.

Fifth, the eccentricity $e$ and the other model parameters ($\sigma$, $\beta$, $b$) were found by fitting $\mathrm{fMRI}_p(t)$ to the observed time series of fMRI responses to the actual stimulation, $\mathrm{fMRI}_o(t)$, by minimizing the residual sum of squared errors between $\mathrm{fMRI}_p(t)$ and $\mathrm{fMRI}_o(t)$ over all time frames: $\mathrm{RSS} = \sum_t \left(\mathrm{fMRI}_o(t) - \mathrm{fMRI}_p(t)\right)^2$.
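A compact sketch of steps two through five, assuming the stimulus layout s(ε, t), the HIRF h(t), and the observed voxel time series are already available as arrays; the function names, starting values, and optimizer are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: fit a voxel's preferred eccentricity by minimizing the RSS
# between predicted and observed fMRI time series.
import numpy as np
from scipy.optimize import minimize

def predict_fmri(params, ecc_grid, stim, hirf):
    """params = (e, sigma, beta, b); stim is an (n_ecc x n_frames) layout s(eps, t)."""
    e, sigma, beta, b = params
    sigma = abs(sigma) + 1e-6                                # keep the tuning width positive
    g = np.exp(-(ecc_grid - e) ** 2 / (2 * sigma ** 2))      # eccentricity-tuning curve g_e(eps)
    n = g @ stim                                             # neural response n(t)
    fmri_p = np.convolve(n, hirf * beta)[: stim.shape[1]] + b  # convolve with scaled HIRF, add baseline
    return fmri_p

def fit_voxel_eccentricity(ecc_grid, stim, hirf, fmri_obs):
    rss = lambda p: np.sum((fmri_obs - predict_fmri(p, ecc_grid, stim, hirf)) ** 2)
    fit = minimize(rss, x0=[3.0, 1.0, 1.0, 0.0], method="Nelder-Mead")
    return fit.x[0]  # estimated eccentricity e
```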

Extraction of the size-encoding signal from V1 voxels

The three different weighting profiles, each representing the contributions of the individual eccentricity bins assessed by the three different schemes (the uniform, the discriminability, and the log-likelihood ratio schemes), were defined as follows. The uniform scheme (Fig. 4B, blue) assigned three discrete values to the eccentricity bins depending on which flanking side of the M-ring ($r_M$) their preferred eccentricities ($e$) belonged to: $w(e) = \begin{cases} -1, & e < r_M \\ 0, & e = r_M \\ 1, & e > r_M \end{cases}$

The discriminability scheme (Fig. 4B, red) defined the weights in proportion to the differential responses of given eccentricity bins to the L-ring ($r_L$) and the S-ring ($r_S$), which were derived from the eccentricity-tuning curves defined from the retinotopy-mapping session: $w(e) = g_e(r_L) - g_e(r_S) - \delta$, where $g_e$ is the eccentricity-tuning curve of the eccentricity bin with preferred eccentricity $e$, and the baseline offset $\delta$ is $\delta = \sum_e \left[g_e(r_L) - g_e(r_S)\right]/n_e$.

The log-likelihood ratio scheme (Fig. 4B, yellow) defined the weights by taking the differences between the log-likelihoods of obtaining a given response if the stimulus were the L-ring, $\log L_L$, and if the stimulus were the S-ring, $\log L_S$. Because the eccentricity-tuning curves were assumed to be described by a Gaussian function, the log-likelihood ratio weights at preferred eccentricity $e$ can be simplified to the following formula: $w(e) = \log L_L - \log L_S = -\frac{(e-r_L)^2}{2\sigma_L^2} + \frac{(e-r_S)^2}{2\sigma_S^2} - \delta$, where $\sigma_L$ and $\sigma_S$ are the tuning widths with $r_L$ and $r_S$, and the baseline offset $\delta$ is $\delta = \sum_e \left[-\frac{(e-r_L)^2}{2\sigma_L^2} + \frac{(e-r_S)^2}{2\sigma_S^2}\right]/n_e$.
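The three readout schemes can be written compactly; below is a sketch assuming `e` is the vector of the bins' preferred eccentricities and the `sigma` arguments are the corresponding (possibly per-bin) tuning widths. Names are illustrative, not the authors' code.

```python
# Hypothetical sketch of the three weighting profiles used to read out ring size from V1.
import numpy as np

def uniform_weights(e, r_M):
    return np.sign(e - r_M)                       # -1 below, 0 at, +1 above the M-ring radius

def discriminability_weights(e, sigma, r_L, r_S):
    g = lambda r: np.exp(-(r - e) ** 2 / (2 * sigma ** 2))   # tuning curve evaluated at ring radius r
    w = g(r_L) - g(r_S)
    return w - w.mean()                           # subtract the baseline offset delta

def loglikelihood_ratio_weights(e, sigma_L, sigma_S, r_L, r_S):
    w = -((e - r_L) ** 2) / (2 * sigma_L ** 2) + ((e - r_S) ** 2) / (2 * sigma_S ** 2)
    return w - w.mean()                           # subtract the baseline offset delta
```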

A Bayesian model of boundary-updating (BMBU)

The generative model

The generative model is the observers' causal account for noisy sensory measurements, where the true ring size, S, causes a noisy sensory measurement on a current trial, m(t), which becomes noisier as i trials elapse, thus turning into a noisy retrieved measurement of the value of S on trial t−i, r(t−i) (Fig. 5D). Hence, the generative model can be specified with the following three probabilistic terms: a prior of S, p(S), a likelihood of S given m(t), p(m(t)|S), and a likelihood of S given r(t−i), p(r(t−i)|S). These three terms were all modeled as normal distribution functions, the shape of which is specified with mean and standard deviation parameters, μ and σ: μ0 and σ0 for the prior, μm(t) and σm(t) for the likelihood for m(t), and μr(t−i) and σr(t−i) for the likelihood for r(t−i). The mean parameters of the two likelihoods, μm(t) and μr(t−i), are identical to m(t) and r(t−i); therefore, the parameters that must be learned are reduced to μ0, σ0, σm(t), and σr(t−i).

$\sigma_{m(t)}$ is assumed to be invariant across different values of $m(t)$, as well as across trials; therefore, $\sigma_{m(t)}$ is reduced to a constant $\sigma_m$. Finally, because $\sigma_{r(t-i)}$ is assumed to originate from $\sigma_m$ and to increase as trials elapse (Gorgoraptis et al., 2011; Zokaei et al., 2015), $\sigma_{r(t-i)}$ is also reduced to the following parametric function: $\sigma_{r(t-i)} = \sigma_m(1+\kappa)^i$, where $\kappa > 0$. As a result, the generative model is completely specified by the four parameters, $\Theta = \{\mu_0, \sigma_0, \sigma_m, \kappa\}$.

The primary purpose of BMBU is to build a generative Bayesian model which allows us to estimate the trial-to-trial latent states of the class boundary variable that are likely to be used by human observers whose class boundary is continually attracted to previous stimuli as posited by the boundary-updating hypothesis on “repulsive bias.” In doing so, we intended to build a parsimonious model with minimal free parameters as long as the model implements the strategy essential to the boundary-updating hypothesis. For this reason, we had to introduce several arbitrary assumptions in building BMBU. For example, although we assumed that memory precision decays exponentially, other forms of decay function are also possible, such as hyperbolic, power, and logarithmic ones. We also assumed that the noisy sensory measurement on a current trial, m(t), becomes the noisy retrieved measurement of the value of S as trials elapse. However, it is equally possible that the memory measurements of S in the elapsed trials can be retrieved independently from the sensory measurement used for decision-making. Whether or not these assumptions are valid might be an interesting research question but is beyond the scope of the current work, especially in that the alternative assumptions about such detailed modeling aspects are unlikely to affect the way BMBU shifts the class boundary toward previous stimuli.

Stimulus inference (s)

A Bayesian estimate of the value of $S$ on a current trial, $s(t)$, was distributed as a posterior function given a sensory measurement $m(t)$: $p(s(t)) = p(S|m(t)) \propto p(m(t)|S)\,p(S)$.

The posterior $p(S|m(t))$ is a conjugate normal distribution of the prior and the likelihood of $S$ given the evidence $m(t)$, whose mean $\mu_{s(t)}$ and standard deviation $\sigma_{s(t)}$ were calculated as follows (Fig. 5D): $\mu_{s(t)} = \frac{\sigma_0^2\, m(t) + \sigma_m^2\, \mu_0}{\sigma_0^2 + \sigma_m^2}; \quad \sigma_{s(t)} = \frac{\sigma_0\, \sigma_m}{\sqrt{\sigma_0^2 + \sigma_m^2}}$.

Class boundary inference (b)

The Bayesian observer infers the value of the class boundary on a current trial, $b(t)$, from the posterior over $S$ given the set of retrieved sensory measurements $\vec{r}(t) = \{r(t-1), r(t-2), \ldots, r(t-n)\}$: $b(t) = \tilde{S} = \arg\max_S p(S|\vec{r}(t))$, where the maximum number of measurements that can be retrieved, $n$, was set to 7 because this is much longer than the effective trial lags of the previous-stimulus effect (Fig. 5C). Here, $p(S|\vec{r}(t))$ is a conjugate normal distribution of the prior and the likelihoods of $S$ given the evidence $\vec{r}(t)$: $p(S|\vec{r}(t)) \propto p(\vec{r}(t)|S)\,p(S) = p(r(t-1)|S)\,p(r(t-2)|S)\cdots p(r(t-7)|S)\,p(S)$, whose mean and standard deviation were calculated (Bromiley, 2003) based on the knowledge of how the retrieved stimulus becomes noisier as trials elapse: $\mu_{b(t)} = \beta_0\mu_0 + \sum_{i=1}^{7}\beta_i\, r(t-i); \quad \sigma_{b(t)} = \sqrt{\beta_0^2\sigma_0^2 + \sum_{i=1}^{7}\beta_i^2\sigma_{r(t-i)}^2}$, where $\beta_0 = \frac{\sigma_0^{-2}}{\sigma_0^{-2} + \sum_{i=1}^{7}\sigma_{r(t-i)}^{-2}}$ and $\beta_i = \frac{\sigma_{r(t-i)}^{-2}}{\sigma_0^{-2} + \sum_{i=1}^{7}\sigma_{r(t-i)}^{-2}}$. We postulated that the uncertainty of $b(t)$ is equivalent to $\sigma_{b(t)}$ (Fig. 5G).

Deduction of decision variable (v), decision (d), and decision uncertainty (u)

On each trial, the Bayesian observer makes a binary decision $d(t)$ by calculating the probability that $s(t)$ is larger than $b(t)$, which is called the decision variable, $v(t)$, defined as $v(t) = p(s(t) > b(t)) = \Phi\!\left[\frac{s(t) - b(t)}{\sqrt{\sigma_{s(t)}^2 + \sigma_{b(t)}^2}}\right]$.

Then, if $v(t)$ is larger than 0.5, $d(t)$ is large; otherwise, $d(t)$ is small. We also defined the decision uncertainty, $u(t)$, which represents the odds that the current decision will be incorrect (Sanders et al., 2016), as follows: $u(t) = \Phi\!\left[\frac{-|s(t) - b(t)|}{\sqrt{\sigma_{s(t)}^2 + \sigma_{b(t)}^2}}\right]$.
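A condensed sketch of the inference equations above (stimulus inference, boundary update from retrieved measurements, decision variable, choice, and uncertainty), written in Python rather than the authors' MATLAB; the function signature and variable names are illustrative, and the square-root form of the boundary uncertainty follows the product-of-Gaussians combination (Bromiley, 2003).

```python
# Hypothetical sketch of one BMBU trial: boundary inference, decision variable,
# choice, and decision uncertainty, given the model parameters and past measurements.
import numpy as np
from scipy.stats import norm

def bmbu_trial(m_t, r_past, mu0, sigma0, sigma_m, kappa):
    """m_t: current sensory measurement; r_past: retrieved measurements r(t-1), ..., r(t-n)."""
    # Stimulus inference: conjugate posterior of the prior and the current likelihood.
    mu_s = (sigma0**2 * m_t + sigma_m**2 * mu0) / (sigma0**2 + sigma_m**2)
    sigma_s = sigma0 * sigma_m / np.sqrt(sigma0**2 + sigma_m**2)

    # Boundary inference: precision-weighted combination of the prior and the retrieved
    # measurements, whose noise grows with trial lag i as sigma_m * (1 + kappa)**i.
    lags = np.arange(1, len(r_past) + 1)
    sigma_r = sigma_m * (1 + kappa) ** lags
    precisions = np.concatenate(([sigma0**-2], sigma_r**-2))
    betas = precisions / precisions.sum()
    mu_b = betas[0] * mu0 + np.sum(betas[1:] * np.asarray(r_past))
    sigma_b = np.sqrt(np.sum(betas**2 * np.concatenate(([sigma0**2], sigma_r**2))))

    # Decision variable, choice, and decision uncertainty.
    pooled = np.sqrt(sigma_s**2 + sigma_b**2)
    v = norm.cdf((mu_s - mu_b) / pooled)          # p(s(t) > b(t))
    d = "large" if v > 0.5 else "small"
    u = norm.cdf(-abs(mu_s - mu_b) / pooled)      # odds of being incorrect
    return mu_s, mu_b, v, d, u
```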

Fitting the parameters of BMBU

For each human participant, the parameters of the generative model, $\Theta = \{\mu_0, \sigma_0, \sigma_m, \kappa\}$, were estimated as those maximizing the sum of log-likelihoods for the $T$ individual choices made by the observer, $\vec{D}(T) = [D(1), D(2), \ldots, D(T)]$: $\hat{\Theta} = \arg\max_{\Theta} \sum_{t=1}^{T} \log p(D(t)|\Theta)$.

For each participant, estimation was conducted in the following steps. First, we found local minima using the MATLAB function fminsearchbnd.m with the iterative evaluation number set to 50. We repeated this step for 1000 different initial parameter sets that were randomly sampled within uniform prior bounds, acquiring 1000 candidate sets of parameter estimates. Second, from these candidate sets, we selected the top 20 in terms of goodness of fit (sum of log-likelihoods) and searched for the minima using each of those 20 sets as initial parameters, increasing the iterative evaluation number to 100,000 and setting the function and parameter tolerances to $10^{-7}$ for reliable estimation. Finally, using the parameters fitted in the second step, we repeated the second step one more time and selected the parameter set with the largest sum of log-likelihoods as the final parameter estimates. We excluded (1) the first trial of each run and (2) trials with RTs shorter than 0.3 s from parameter estimation and all further analyses because (1) the first trial of each run has no preceding trial, which is necessary for investigating the repulsive bias, and (2) responses made while the stimulus was still being shown (0–0.3 s) can be considered too hasty to reflect a normal cognitive decision-making process.
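The multistart fitting procedure might be sketched as follows, with scipy in place of MATLAB's fminsearchbnd; the negative log-likelihood function, bounds, and counts are placeholders, and the final repetition of the refinement stage is omitted for brevity.

```python
# Hypothetical sketch of the two-stage multistart fit of Theta = {mu0, sigma0, sigma_m, kappa}.
import numpy as np
from scipy.optimize import minimize

def fit_bmbu(neg_log_likelihood, bounds, n_starts=1000, n_refine=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    # Stage 1: coarse search from many random starting points (few iterations each).
    starts = rng.uniform(lo, hi, size=(n_starts, len(lo)))
    coarse = [minimize(neg_log_likelihood, x0, method="Nelder-Mead",
                       options={"maxiter": 50}) for x0 in starts]
    # Stage 2: refine the best candidates with tight tolerances and many iterations.
    best = sorted(coarse, key=lambda r: r.fun)[:n_refine]
    refined = [minimize(neg_log_likelihood, r.x, method="Nelder-Mead",
                        options={"maxiter": 100000, "xatol": 1e-7, "fatol": 1e-7})
               for r in best]
    return min(refined, key=lambda r: r.fun).x   # parameter set with the best fit
```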

A constant-boundary model

The constant-boundary model has two parameters, the bias of the class boundary, $\mu_0$, and the measurement noise, $\sigma_m$. Stimulus estimates, $s(t)$, were assumed to be sampled from a normal distribution, $N(S(t), \sigma_m)$, so each stimulus sample has uncertainty $\sigma_{s(t)} = \sigma_m$. The class boundary $b(t)$ was assumed to be a constant, $\mu_0$, so $\sigma_{b(t)} = 0$.

Estimation of the latent states of the variables of BMBU

Fitting the model parameters separately for each human participant ($\hat{\Theta} = \{\hat{\mu}_0, \hat{\sigma}_0, \hat{\sigma}_m, \hat{\kappa}\}$) allowed us to create the same number of Bayesian observers, each tailored to one human individual. We repeated the experiment on these Bayesian observers using the stimulus sequences identical to those presented to their human partners for the following two purposes. First, we wanted to examine whether BMBU's choice ($d(t)$) can reproduce the human partners' repulsive bias. Second, we needed to estimate the trial-to-trial latent states of the model variables ($s(t)$, $b(t)$, $v(t)$, $u(t)$) that were presumably used by the human partners and thus represented in their brains during the binary classification task. We acquired a sufficient number ($10^6$ repetitions) of simulated choices, $d(t)$, and decision uncertainty values, $u(t)$, which were determined by the corresponding number of stimulus estimates, $s(t)$, and boundary estimates, $b(t)$, for each Bayesian observer. Then, the averages across those $10^6$ simulations were taken as the final outcomes. When estimating $s(t)$, $b(t)$, $v(t)$, and $u(t)$ for the observed choice $D(t)$, we only included the simulation outcomes in which the simulated choice $d(t)$ matched the observed choice $D(t)$.

Recovery of the true states of the model variables

To ascertain the validity of our procedure of estimating the latent variables of BMBU described above, we checked how accurately it recovers the true states of the variables. This recovery test was conducted in the following procedure.

First, we created 256 different sets of parameter values by taking all possible combinations of four different values of each of the four model parameters, where the four values corresponded to the 20th, 40th, 60th, and 80th percentiles of the parameter values fitted to the observers' choices. Second, we acquired the synthetic choices and the true model variables b, s, v, and u by plugging one parameter set into BMBU and simulating it on the actual stimulus sequence presented to the observers. Third, we fitted the parameters of BMBU to the synthetic choices using the same procedure conducted for fitting BMBU to the observed choices. Fourth, we simulated a set of the recovered states of the model variables using the fitted model parameters. Fifth, we calculated the R2 between the true and the recovered variables to assess how reliably our model-fitting procedure can recover the true states of the model variables. Finally, we repeated the above procedure for all the remaining parameter sets and used the R2 averaged across the 256 parameter sets as the performance measure of the recovery test.

The multiple logistic regression model for capturing the repulsive bias

To capture the repulsive bias in human classification, we logistically regressed the current choice onto past and current stimuli and past choices, using the following regression model to obtain regression coefficients $\vec{p} = \{p^{(1)}, \ldots, p^{(11)}\}$ for each observer: $p(D(t)) = \frac{e^{K(t)}}{1 + e^{K(t)}}$, where $K(t) = p^{(0)} + p^{(1)}S(t) + \sum_{i=1}^{5}\left(p^{(1+i)}S(t-i) + p^{(6+i)}D(t-i)\right)$. The independent variables were each standardized to z scores for each participant. $S(t)$ and $D(t)$ are the stimulus and the observed choice values at trial $t$; $S(t-i)$ and $D(t-i)$ are the stimulus and the observed choice at the $i$th trial lag from trial $t$.

To capture the repulsive bias of the Bayesian observers, the Bayesian observers' choices were also regressed with the logistic regression model by substituting the simulated choices, $d(t)$ and $d(t-i)$, for the observed choices, $D(t)$ and $D(t-i)$. The regression was repeated for each simulation, and the regression coefficients averaged across simulations were taken as the final outcomes. The simulation was repeated $10^5$ times. We confirmed that this number of simulations was sufficiently large to produce stable outcomes.
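A sketch of this regression, using statsmodels as a stand-in for the authors' implementation, with the current stimulus and five lags of past stimuli and choices as z-scored regressors; coding conventions and names are assumptions.

```python
# Hypothetical sketch: multiple logistic regression of the current choice on the
# current stimulus and the previous five stimuli and choices.
import numpy as np
import statsmodels.api as sm

def repulsive_bias_coefficients(S, D, n_lags=5):
    """S: stimulus codes per trial; D: binary choices (1 = large, 0 = small)."""
    S, D = np.asarray(S, float), np.asarray(D, float)
    rows, y = [], []
    for t in range(n_lags, len(D)):
        past_S = [S[t - i] for i in range(1, n_lags + 1)]
        past_D = [D[t - i] for i in range(1, n_lags + 1)]
        rows.append([S[t]] + past_S + past_D)
        y.append(D[t])
    X = np.asarray(rows)
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # z-score each regressor
    X = sm.add_constant(X)
    fit = sm.Logit(np.asarray(y), X).fit(disp=0)
    return fit.params   # [intercept, S(t), S(t-1..t-5), D(t-1..t-5)]
```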

The average marginal effect analysis

Average marginal effect (AME) was calculated using the R package “margins” (Leeper et al., 2018). The AME quantifies the average effect of an independent variable on an ordinal dependent variable (i.e., the binary choice) in a multiple logistic (or probit) regression model (Williams and Jorgensen, 2023). To calculate the AME of any given variable on the current choice ($D(t)$) without controlling the previous ($S(t-1)$) and current ($S(t)$) stimuli (i.e., the baseline AME), we implemented a logistic regression model with two regressors, the variable of interest $X$ (i.e., V1, b, s, or v) and the previous choice ($D(t-1)$): $D(t) \sim \mathrm{logit}\left(\beta_0 + \beta_X X + \beta_{D(t-1)} D(t-1)\right)$.

We always included D(t−1) as a regressor because the effect of D(t−1) would confound the effect of S(t−1), if D(t−1) is not included in the regression model. Specifically, because S(t−1) and D(t−1) are highly correlated, it would be unclear whether the AME difference before and after controlling S(t−1) is ascribed to the effect of S(t−1) or that of D(t−1), if D(t−1) is not controlled. The effect of D(t−1) was controlled in all regression models.

To test whether the AME of $X$ decreased after controlling $S(t-1)$ (or $S(t)$), we calculated the AME of $X$ from the logistic regression model including $S(t-1)$ (or $S(t)$) as an additional regressor, $D(t) \sim \mathrm{logit}\left(\beta_0 + \beta_X X + \beta_{D(t-1)} D(t-1) + \beta_{S(t-1)} S(t-1)\right)$ (with $S(t)$ substituted for $S(t-1)$ when controlling the current stimulus), and subtracted the new AME from the baseline AME to see whether the baseline AME significantly changed after controlling the previous or current stimulus.
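The AME of a continuous regressor in a logistic model is the derivative of the predicted choice probability with respect to that regressor, averaged over trials. A minimal Python stand-in for the R “margins” computation might look like the sketch below; the column names and data frame layout are illustrative assumptions.

```python
# Hypothetical sketch: average marginal effect (AME) of a decoded variable X on the
# current choice, before and after controlling the previous stimulus S(t-1).
import statsmodels.formula.api as smf

def ame_of_X(df, control_prev_stim=False):
    """df columns (assumed): D (0/1 current choice), X, D_prev, S_prev."""
    formula = "D ~ X + D_prev" + (" + S_prev" if control_prev_stim else "")
    fit = smf.logit(formula, data=df).fit(disp=0)
    margins = fit.get_margeff(at="overall", method="dydx")   # average marginal effects
    names = [n for n in fit.model.exog_names if n != "Intercept"]
    return dict(zip(names, margins.margeff))["X"]

# Usage: compare the baseline AME with the AME after controlling S(t-1).
# delta_ame = ame_of_X(df) - ame_of_X(df, control_prev_stim=True)
```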

Searching for the multivoxel patterns of activity representing the latent variables of BMBU

We assumed that (1) the activity patterns of the neural populations representing the latent variables differ between participants, but (2) the locations and (3) the timings of those activity patterns overlap across participants. Therefore, to identify the brain signals of the latent variables of BMBU in fMRI responses, support vector regression (SVR) decoding was conducted for each human participant within specific spatial and temporal windows.

As for the spatial window, we implemented a searchlight technique (Kahnt et al., 2011b; Haynes, 2015). A searchlight has a radius of 9 mm (= 3 voxels; Soon et al., 2008) and thus can contain 123 voxels at most. Of the 123 voxels, we excluded the voxels located in CSF or white matter because they reflect non-neural signals. Thus, the effective number of voxels in a searchlight used for the analysis can vary searchlight by searchlight.

As for the temporal windows, we implemented a time-resolved decoding technique in which a target variable is decoded from the BOLD responses at each of the within-trial time points (Fig. 6B). We used the first four time points (out of six in total) because the BOLD response associated with the button press, the last process of the sensory-to-motor decision-making stream, is maximized at the fourth time point (data not shown). In sum, an SVR was trained for each participant, each time point, and each searchlight.

Table 1.

The sets of regressions that BMBU requires the brain signals of its latent variables to satisfy

Before training the SVR, the BOLD responses in a searchlight and the target latent variable were z-scored across trials. Then, the z-scored variable was decoded for each searchlight using leave-one-run-out cross-validation (eightfold). As a result, for each searchlight and at each time point, we acquired a set of decoded latent variables across all trials. In other words, at each time point, we acquired a four-dimensional map of the decoded variable (i.e., three spatial dimensions and one trial dimension). The three spatial dimensions of the decoded variables were smoothed with a 5 mm FWHM Gaussian kernel on each trial.

After this subject-wise decoding analysis, we conducted an across-subject analysis to test whether the decoded variables are significantly informative. To do so, for each searchlight locus and each time point, we regressed the smoothed decoded variable onto the regression conditions of the target variable using a generalized linear mixed-effect regression model (GLMM) with a random effect of subjects. The number of regression conditions was 14, 14, and 17 for b(t), s(t), and v(t), respectively (Table 1). These regression models were deduced from the causal structure between the variables of BMBU (see the next section). We accepted a given cluster as carrying the brain signal of b(t), s(t), or v(t) only when it satisfied those regression models over >12 contiguous searchlights. For the ROI analysis, the decoded variables were averaged over all searchlights within each ROI.

SVR was conducted using LIBSVM (https://www.csie.ntu.edu.tw/∼cjlin/libsvm/) with a linear kernel and constant regularization parameter of 1 (Soon et al., 2008; Kahnt et al., 2011b). The brain imaging results were visualized using Connectome Workbench (Marcus et al., 2011) and xjview.
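A sketch of the per-searchlight decoding step, using scikit-learn's SVR (which wraps LIBSVM) with a linear kernel, C = 1, and leave-one-run-out cross-validation; the input shapes and variable names are illustrative.

```python
# Hypothetical sketch: leave-one-run-out SVR decoding of a latent variable within one
# searchlight at one within-trial time point.
import numpy as np
from scipy.stats import zscore
from sklearn.svm import SVR

def decode_searchlight(bold, target, run_labels):
    """bold: (n_trials x n_voxels) responses; target: latent variable; run_labels: run id per trial."""
    X = zscore(bold, axis=0)
    y = zscore(target)
    decoded = np.full(len(y), np.nan)
    for run in np.unique(run_labels):                 # eightfold leave-one-run-out
        train, test = run_labels != run, run_labels == run
        model = SVR(kernel="linear", C=1.0).fit(X[train], y[train])
        decoded[test] = model.predict(X[test])
    return decoded   # decoded variable across all trials for this searchlight
```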

The regression-model test for verifying the brain signals of b(t), s(t), and v(t)

To identify the brain signals of b(t), s(t), and v(t), we defined three respective lists of regressions that must be satisfied by the brain signals. We stress that each of these lists consists of the necessary conditions to be satisfied because the conditions are deduced from the causal structure of the variables in BMBU (Fig. 5G). Below, we specify the specific regression tests for s(t) and v(t) that constitute these lists. For the tests for b(t), see Results.

The 14 regressions for the brain signal of s(t) (Table 1): (#1–4), ys, s decoded from brain signals, must be regressed positively onto s, the variable it represents, even when the false discovery rate is controlled (Benjamini and Hochberg, 1995), and s orthogonalized to v or d because it should reflect the variance irreducible to the offspring variables of s; (#5), ys must not be regressed onto b because s and b are independent of each other (b↮s; Fig. 5G); (#6, 7), ys must be positively regressed onto v (s→v; Fig. 5G) but not when v is orthogonalized to s because the influence of s on v is removed; (#8, 9) ys must be positively regressed onto d (s→v→d; Fig. 5G) but not onto u because u cannot be linearly correlated with s (s→v→u is blocked by the interaction between u and v; Fig. 5G); (#10–12), ys must be positively regressed onto the current stimuli and not the past stimuli because s is inferred solely from the current stimulus measurement; (#13, 14), ys must not be regressed onto previous decisions because s is inferred solely from the current stimulus measurement. #10–14 were investigated by a multiple regression with regressors [S(t),S(t−1),S(t−2),D(t−1),D(t−2)]. We did not include D(t) as a regressor because D(t) may induce a spurious correlation between b and s by controlling the collider v (Elwert and Winship, 2014; b→v←s and v→d; Fig. 5G).

The 17 regressions for the brain signal of v(t) (Table 1). (#1–5), yv, v decoded from brain signals, must be positively regressed onto v, the variable it represents, even when the false discovery rate is controlled (Benjamini and Hochberg, 1995), and v orthogonalized to b,s, or d, because it should reflect the variance irreducible to the offspring variables of v; (#6, 7), yv must be negatively regressed onto one of its parents b (b→v; Fig. 5G), but not when b is orthogonalized to v, because the influence of b on v is removed; (#8, 9), yv must be positively regressed onto one of another parent s (s→v; Fig. 5G), but not when s is orthogonalized to v, because the influence of s on v is removed; (#10, 11), yv must be regressed onto d but not onto u because u's correlation with its parent v cannot be revealed without holding the variability of d (the interaction between u and v); (#12–14), yv must be positively regressed onto the current stimulus because the influence of the current stimulus on v is propagated via s (S(t)→s→v), and negatively regressed onto the past stimuli because the influence of the past stimuli on v is propagated via b (S(t−1)→b→v) —strongly onto the 1-back stimulus and more weakly onto the two-back stimulus (thus, nonsignificant regression with one-tailed regression in the opposite sign is modeled moderately); (#15–17), yv must be regressed onto the current decision and not the past decisions because the current decision is a dichotomous translation of v (v→d; Fig. 5G), whereas past decisions have nothing to do with the current state of v. #12–17 were investigated by a multiple regression with regressors [S(t),S(t−1),S(t−2),D(t),D(t−1),D(t−2)]. D(t) was included as a regressor because v does not suffer from a spurious correlation that arises by controlling a collider variable which is absent in this case.

Bayesian network analysis

To investigate whether the relationship between decoded b, s, and v is consistent with the causal structure postulated by BMBU, we calculated the BIC values for all the three-node networks consisting of the time series of three brain signals {yb, ys, yv} (Scutari, 2010) and determined the causal graph whose likelihood is maximal. The three-node network has 162 possible structures, as follows. A total of 27 edge structures can be created out of three nodes, since three types of edges are possible for any given pair of nodes (i.e., x→y, x←y, or x↮y) and there are three pairs (i.e., {b,v}, {v,s}, {s,b}; $3^3$). Also, a total of 6 combinations of three nodes exist for {yb, ys, yv}, since we have three (IPLb1, pSTGb3, pSTGb5), two (DLPFCs3, Cerebs5), and one (aSTGv5) brain signals of b, s, and v, respectively (3×2×1). Thus, because each of the 6 possible node combinations can have 27 edge structures, there are 162 possible three-node causal networks.

We opted to apply this Bayesian network analysis to the three-node networks instead of the six-node network consisting of all six brain signals identified by the searchlight analysis because the number of possible six-node networks ($N = 3^{\binom{6}{2}} = 3^{15} = 14{,}348{,}907$) was so large that the statistical results would be likely to suffer from type I errors. In addition, guided by BMBU, we were interested in identifying the causal structure of the three brain signals, each corresponding to one of the three model variables (b, s, and v). In other words, we were not interested in the causal relationship between the brain signals representing the same model variable (e.g., between pSTGb3 and pSTGb5).

Statistics

We used the searchlight technique to look for brain signals related to the latent variables of BMBU. To make the searchlight analysis statistically powerful by reducing the noise effect in the BOLD signals, we applied a generalized linear mixed-effect model (GLMM) with a random effect of observers to calculate the association between the true and the decoded model variables. We applied the mixed-effect model only to the searchlight analysis (Fig. 6; Table 1). For the other regression analyses, we conducted the analysis for each individual separately, because the mixed-effect model was too computationally demanding to be applied to all other analyses. For instance, applying the GLMM to the model simulation depicted in Figure 5C would require $10^5$ repetitions of the regression analysis. The significance tests were two-tailed except for the searchlight analysis, as specified in Table 1. Also, for the time-resolved searchlight analysis, we implemented the multiple-comparison correction (false discovery rate (FDR) correction; Benjamini and Hochberg, 1995) for each of the fMRI time frames. In the figures summarizing statistical results, all confidence intervals are the 95% confidence intervals of the mean across individual observers.

Results

Experimental paradigm

Over consecutive trials, participants sorted ring sizes into two classes, small and large, under moderate time pressure (Fig. 2A). To ensure decision-making under uncertainty, we presented three rings (small, medium, and large) differing by a threshold size (Δ), which was tailored to each individual (Fig. 2B; see Materials and Methods). The ring sizes were presented in an m-sequence to rule out any correlation between consecutive stimulus sizes (Buracas and Boynton, 2002). We provided participants with feedback after each scan run by summarizing their performance as the proportion of correct trials.

Figure 2.

Binary classification task on ring size. A, Within-trial procedure. With their eyes fixed, human participants were forewarned (2.2 s), by an increase in the size of the fixation dot, to get ready for the upcoming trial after a long intertrial interval (9.5 s), briefly viewed the ring stimulus (0.3 s), and judged its size as large or small with respect to the medium-size ring within a limited window of time (1.5 s). B, Ring stimuli with threshold-level differences in size. On each trial, a participant viewed one of the three rings, small (S), medium (M), or large (L), the size contrast (Δ) of which was optimized to ensure threshold-level classification performance on a participant-to-participant basis in a separate calibration run inside the MR scanner, right before the main session of fMRI scan runs. The order of ring sizes over trials was constrained with an m-sequence to preclude temporal correlation among stimuli. The luminance of the rings is inverted here for illustrative purposes.

To verify the sensory-adaptation hypothesis, we conducted experiment 1, where 19 participants performed the classification task while BOLD measurements with a high spatial resolution were acquired only from their early visual cortices. To verify the boundary-updating hypothesis, we conducted experiment 2, where 18 participants performed the same task while their whole brains were imaged. The data of experiment 1 had been used in our published work (Choe et al., 2014).

Repulsive bias in experiment 1

The participants in experiment 1 displayed a substantive amount of repulsive bias. As anticipated, the proportion of large choices (PL) increased as the ring size on the current trial ($S(t)$) increased. Importantly, when the psychometric curves were conditioned on the previous stimulus ($S(t-1)$), they shifted upward as the ring size on the previous trial decreased (the contrasts between the solid, dotted, and dashed lines in Fig. 3A), which indicates the presence of repulsive bias. By contrast, the psychometric curves were not affected much by the previous choice (the contrasts between the gray and black lines in Fig. 3A). To quantify the impact of the previous stimulus on the current choice, we subtracted the PL acquired when the previous ring was the S-ring from that acquired when it was the L-ring, separately for each of the six combinatorial conditions of the current stimulus (three sizes) and previous choice (two alternatives), and then averaged those six PL differences. The averaged PL difference (−0.20) was significantly smaller than zero ($t_{(18)} = -8.9$, $p = 5.1\times10^{-8}$; Fig. 3B, left). We also quantified the impact of the previous choice on the current choice similarly: the PL differences between previous large and small choices were calculated separately for the nine combinatorial conditions of the current and previous stimuli and then averaged. The averaged PL difference (−0.018) did not significantly differ from zero ($t_{(18)} = -0.68$, $p = 0.50$; Fig. 3B, right).
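For concreteness, the conditioned PL-difference measure described above could be computed as in the sketch below (illustrative array names and coding, not the authors' code; conditions are assumed to be nonempty).

```python
# Hypothetical sketch: average PL difference between previous L-ring and S-ring trials,
# computed separately for each (current stimulus, previous choice) condition and then averaged.
import numpy as np

def mean_pl_difference(S, D):
    """S: stimulus per trial coded -1/0/1 (S/M/L); D: choice per trial coded 0/1 (small/large)."""
    S, D = np.asarray(S), np.asarray(D)
    S_prev, D_prev = S[:-1], D[:-1]
    S_cur, D_cur = S[1:], D[1:]
    diffs = []
    for s in (-1, 0, 1):                 # current stimulus
        for d in (0, 1):                 # previous choice
            sel = (S_cur == s) & (D_prev == d)
            pl_after_L = D_cur[sel & (S_prev == 1)].mean()
            pl_after_S = D_cur[sel & (S_prev == -1)].mean()
            diffs.append(pl_after_L - pl_after_S)
    return np.mean(diffs)                # negative values indicate repulsive bias
```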

Figure 3.

Influences of previous and current stimuli on classification behavior and V1 activity in experiment 1. A–C, Repulsive bias in psychometric curves (A, B) and regression analysis (C). The psychometric curves, where the fractions of large choices are plotted against the current stimulus, are shown separately for the six possible combinations defined by the previous stimulus and choice (A). To summarize the effects of the previous stimulus on the current choice, the differences in the fraction of large choices between trials whose previous stimulus was the L-ring versus the S-ring ($p(D(t)=\text{large}|S(t-1)=1) - p(D(t)=\text{large}|S(t-1)=-1)$) were computed separately for the six combinations of the current stimulus and previous choice and then averaged (B, left). To summarize the effects of the previous choice on the current choice, the differences in the fraction of large choices between trials whose previous choice was large versus small ($p(D(t)=\text{large}|D(t-1)=\text{large}) - p(D(t)=\text{large}|D(t-1)=\text{small})$) were computed separately for the nine combinations of the current and previous stimuli and then averaged (B, right). The small gray circles represent the individual observers. The multiple logistic regression coefficients of the current choice are plotted against trial lags (C). In the inset, the regression coefficients for the previous-stimulus ($S(t-1)$) regressor are plotted against those for the previous-choice ($D(t-1)$) regressor for individual observers, where the red error bars demarcate the 95% CIs of the means. D, Eccentricity map of V1 on the flattened left occipital cortex of a representative brain, S08. The dot, curves, and colors correspond to those in the inset depicting the visual field. The image is borrowed from our previous work (Choe et al., 2014). E, H, Spatiotemporal BOLD V1 responses to the L-ring (left) and S-ring (middle), and their differentials (right), presented on the current (E) and previous (H) trials. The color bars indicate BOLD changes in units of % signal, averaged across all participants. The vertical dashed line marks the time point of stimulus onset. The horizontal dashed line corresponds to the eccentricity of the M-ring, splitting the voxels into “L-prefer” and “S-prefer” groups based on their preferred ring size. F, The differential of BOLD responses at peak between the small and large rings on the current trial. The vertical dashed line marks the eccentricity of the M-ring. The horizontal red and blue lines mark the average BOLD signals of the L-prefer and S-prefer voxels, respectively. The vertical orange line quantifies the stimulus-driven gain of V1 responses. G, I, Time courses of the stimulus-driven gain of V1 responses to the current (G) and previous (I) stimuli. The stimulus duration and response window are demarcated by the light and dark gray bars (G, I). The 95% CIs of the mean across observers are indicated by the shaded areas (F) or by the vertical error bars (B, C, G, I). Asterisks indicate statistical significance (*$p<0.05$, **$p<10^{-3}$, ***$p<10^{-4}$; B, C, G, I). The orange boxes and arrows are drawn to highlight the relationships between the panels (E–G).

To confirm this repulsive effect of the previous stimulus on the current choice, we logistically regressed each participant's current choice ($D(t)$) simultaneously onto the previous stimuli and choices. The regression coefficients for the previous stimuli were significant up to two trial lags across participants ($S(t-1)$, $\beta = -0.39$, $t_{(18)} = -9.6$, $p = 1.6\times10^{-8}$; $S(t-2)$, $\beta = -0.11$, $t_{(18)} = -2.4$, $p = 0.026$; $D(t-1)$, $\beta = -0.17$, $t_{(18)} = -2.8$, $p = 0.012$), which confirms the robust presence of repulsive bias in experiment 1 (Fig. 3C).

Sensory adaptation in V1

As a first step toward the verification of the sensory-adaptation hypothesis, we defined the size-encoding signal in V1. As our group showed previously (Choe et al., 2014), the eccentricity-tuned BOLD responses in V1 (Fig. 3D) readily resolved the threshold-level differences in ring size, as anticipated by the retinotopic organization of the V1 architecture (Fig. 3E). Thus, the subtraction of the BOLD responses at the voxels preferring the S-ring over the L-ring from those at the voxels preferring the L-ring over the S-ring (Fig. 3F) was significantly greater when $S(t)$ was large than when it was small (the third and fourth time points, $\beta = 0.11$, $t_{(18)} = 4.8$, $p = 1.5\times10^{-4}$ and $\beta = 0.13$, $t_{(18)} = 6.6$, $p = 3.7\times10^{-6}$; Fig. 3G).

Next, having defined the size-encoding signal in V1, which will be referred to as "V1," we sought evidence of sensory-adaptation in that signal. According to previous work on sensory-adaptation (Clifford et al., 2007; Kohn, 2007; Solomon and Kohn, 2014; Weber et al., 2019), we expected V1 to decrease following a large ring and to increase following a small ring because of the selective gain reduction at the sensory neurons tuned to previous stimuli. In line with this expectation, V1 was indeed significantly smaller when preceded by L-ring than when preceded by S-ring (the fourth time point, β=−0.45, t(18)=−2.2, p=0.040; Fig. 3H,I). Although we used an m-sequence to eliminate the autocorrelation between consecutive stimuli (see Materials and Methods), we additionally checked the possibility that the observed adaptation of V1 had spuriously arisen from an imbalance in the ring sizes of the current stimuli. To do so, we first calculated the differences in V1 between the trials preceded by S-ring and those preceded by L-ring separately for the three current stimuli and then averaged those three differences. This averaged difference confirmed that V1 remained smaller when preceded by L-ring than when preceded by S-ring (the fourth time point, β=−0.44, t(18)=−2.1, p=0.049).

In sum, the V1 population activity reliably encoded the ring size and exhibited sensory adaptation.

The variability of V1 associated with previous stimuli fails to contribute to the choice variability

Next, we verified the critical prediction of the sensory-adaptation hypothesis on repulsive bias. Below, we define this critical prediction and describe how we empirically examined it.

Above, we confirmed that the ring size not only on the current trial (S(t)) but also on the previous trial (S(t−1)) affects V1 on the current trial (S(t−1)→V1←S(t) in Fig. 4A). What we do not know yet is whether the variabilities of V1 that originate from S(t) and S(t−1), respectively, flow all the way into the observer's current choice (S(t)→V1→D(t) and S(t−1)→V1→D(t) in Fig. 4A). Critically, if the sensory-adaptation hypothesis is true, the variability of V1 associated with S(t−1) must contribute to the current choice (D(t); S(t−1)→V1→D(t)), just as that associated with S(t) must do so (S(t)→V1→D(t)). Here, it is important to realize that the mere association between S(t) and V1 (S(t)→V1) does not guarantee that S(t) contributes to D(t) via V1 (S(t)→V1→D(t)). Likewise, the association between S(t−1) and V1 (S(t−1)→V1) does not guarantee that S(t−1) contributes to D(t) via V1 (S(t−1)→V1→D(t)).

Figure 4.

Origin of the covariation between the stimulus-encoding signal of V1 and the current choice. A, The causal structure of the variables implied by the sensory-adaptation hypothesis. The stimulus-encoding signal of V1 (V1) is influenced by the current stimulus (S(t)), the previous stimulus (S(t−1)), and the unknown sources (UV1). In turn, V1 influences the current choice (D(t)). If the sensory-adaptation hypothesis is true, part of the causal influence of V1 on D(t) must originate from S(t−1), as indicated by the connected chain of the dotted arrows. B, Extraction of the stimulus-encoding signal of V1. For any given run from any participant, the matrix of spatiotemporal BOLD responses in V1 (top left) was multiplied by one of the three weighting vectors (right; blue, red, and yellow lines represent the uniform, discriminability, and log-likelihood ratio readout schemes, respectively) to result in the vector of the stimulus-encoding signal (V1) in the same trial length (bottom left). The positive and negative values of V1 indicate the larger and smaller sizes of the ring, respectively. C, Multiple linear regression of the stimulus-encoding signal of V1 on S(t), S(t−1), and D(t−1). The colors correspond to the three different readout schemes in B. D–F, The average marginal effects (AMEs) of V1 on D(t), with V1 extracted by the uniform (D), discriminability (E), and log-likelihood ratio (F) readout schemes. In each panel, the influences of V1 on D(t) that can be ascribed to S(t−1) and S(t) were assessed by checking (1) whether the AME of V1 on D(t) (left) significantly decreased or not after controlling the influence of S(t−1) (second from the left) and S(t) (second from the right), respectively, or (2) whether the AME of V1 on D(t) controlling the influence of both S(t−1) and S(t) (right) significantly increased or not after only controlling the influence of S(t) (second from the right) and S(t−1) (second from the left), respectively. Asterisks indicate statistical significance (*P<0.05, **P<0.01, ***P<0.001), and "n.s." stands for the nonsignificance of the test (C–F). The 95% CIs of the mean across participants are indicated by the vertical error bars (C–F).

We can test the critical implication of the sensory-adaptation hypothesis by comparing the average marginal effect (AME; Williams and Jorgensen, 2023) of V1 on D(t) (V1→D(t)) to that of V1 on D(t) with S(t−1) controlled (S(t−1)↛V1→D(t)). The rationale behind this comparison is that the contribution of V1 to D(t) must be substantially smaller when S(t−1) was controlled than when not if the contribution of S(t−1) to D(t) via V1 (i.e., S(t−1)→V1→D(t)) is substantial. In addition, the critical implication can also be tested by comparing the AME of V1 on D(t) with S(t) only controlled (S(t)↛V1→D(t)) to that of V1 on D(t) with S(t−1) and S(t) both controlled (S(t−1)&S(t)↛V1→D(t)). In this case, the contribution of V1 to D(t) must be greater when only S(t) is controlled than when both S(t−1) and S(t) are controlled if the contribution of S(t−1) to D(t) via V1 is substantial. AME was adopted instead of comparing regression coefficients because it does not suffer from the scale problem, unlike logistic and probit regression coefficients (Mize et al., 2019).
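
The logic of this comparison can be illustrated with a short sketch. Here, "controlling" a variable is implemented by residualizing the V1 signal against it before the logistic fit, and the AME of a continuous regressor under a logit model is computed as the mean of β·p·(1−p) across trials; both choices are our illustrative assumptions rather than the exact procedure used here (see Materials and Methods; Williams and Jorgensen, 2023; Mize et al., 2019).

```python
import numpy as np
import statsmodels.api as sm

def residualize(x, controls):
    """Remove the part of x linearly explained by the control variables."""
    C = sm.add_constant(np.column_stack(controls))
    return x - C @ np.linalg.lstsq(C, x, rcond=None)[0]

def ame_logit(signal, choice):
    """AME of a single continuous regressor on a binary choice under a logit model."""
    X = sm.add_constant(signal)
    fit = sm.Logit(choice, X).fit(disp=0)
    p = fit.predict(X)
    return np.mean(fit.params[1] * p * (1 - p))

# e.g., compare the AME of the raw V1 signal with the AME of V1 residualized
# against the previous stimulus (variable names are illustrative):
# ame_raw = ame_logit(v1, choice)
# ame_ctrl_prev = ame_logit(residualize(v1, [s_prev]), choice)
```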

To do so, we obtained the trial-to-trial measures of V1 by taking the weighted sum of BOLD responses across the eccentricity bins, using the same readout weights as in the previous section (Fig. 4B). First, we confirmed that V1 carries both the current-stimulus and adaptation signals by regressing V1 onto S(t), S(t−1), and D(t−1) concurrently for each participant (Fig. 4C). This multiple regression analysis indicates that the previously observed adaptation to S(t−1) (Fig. 3H,I) was still significant across participants (β=−0.047, t(18)=−2.2, p=0.039), even when we controlled the variability of D(t−1) (β=0.021, t(18)=0.82, p=0.42), a potential confounding variable.
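
The readout step in Figure 4B amounts to a weighted sum over eccentricity bins, as sketched below; the matrix orientation and variable names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def readout_v1(bold, weights):
    """Collapse the eccentricity dimension into one scalar per trial.

    bold: (n_trials, n_eccentricity_bins) BOLD responses at the chosen time point;
    weights: one of the readout vectors (uniform, discriminability, or
    log-likelihood ratio), length n_eccentricity_bins.
    Positive outputs indicate evidence for a larger ring."""
    return np.asarray(bold, dtype=float) @ np.asarray(weights, dtype=float)
```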

The AME of V1 on D(t) was significant across participants (β=0.020,t(18)=2.3,p=0.031; Fig. 4D, the first bar). Importantly, it did not significantly decrease across participants when the influence of S(t−1) was controlled (t(18)=−1.6,p=0.13; Fig. 4D, the change of the first to second bars). Given the significant repulsive bias associated with S(t−1) presented on the two-back trial, we also controlled S(t−2) in addition to S(t−1). Despite this additional control, the AME of V1 on D(t) did not significantly decrease (t(18)=−1.5, p=0.15). By contrast, the AME of V1 on D(t) substantially decreased across participants, almost to none, when the influence of S(t) was controlled (t(18)=−6.0, p=1.1×10−5; Fig. 4D, the change of the first to third bars). Likewise, the AME of V1 on D(t) with S(t) only controlled did not differ from that of V1 on D(t) with S(t−1) and S(t) both controlled (t(18)=1.4, p=0.17; Fig. 4D, the change of the fourth to third bars), whereas the AME of V1 on D(t) with S(t−1) controlled was greater than that of V1 on D(t) with S(t−1) and S(t) both controlled (t(18)=6.02, p=1.1×10−5; Fig. 4D, the change of the fourth to second bars). These results coherently indicate that the contribution of the previous stimuli to D(t) via V1 is absent or negligible, which is at odds with the sensory-adaptation hypothesis.

The analyses above were conducted for V1 acquired at the fourth time point, where sensory adaptation was significant. However, an insignificant but substantial amount of sensory adaptation also occurred at the preceding (third) time point (Fig. 3I). To check the possibility that the contribution of S(t−1) to D(t) via V1 might be present if V1 is alternatively defined, we redefined V1 by averaging the signals acquired at the third and fourth time points and repeated the same AME analyses as above. However, the contribution of the previous stimuli to D(t) via V1 was still absent or negligible: the AME of V1 on D(t) did not differ from that of V1 on D(t) with S(t−1) controlled (t(18)=−1.4, p=0.19); the AME of V1 on D(t) with S(t) only controlled did not differ from that of V1 on D(t) with S(t−1) and S(t) both controlled (t(18)=1.03, p=0.32).

Furthermore, the same pattern of AMEs was observed when we used two alternative readout schemes for extracting V1. The AME of V1 on D(t) decreased after S(t) was controlled (the discriminability scheme: t(18)=−5.4, p=4.3×10−5; the log likelihood scheme: t(18)=−6.0, p=1.1×10−5; Fig. 4E,F, the change of the first to third bars) but not after S(t−1) was controlled (the discriminability scheme: t(18)=−1.4, p=0.19; the log likelihood scheme: t(18)=−1.5, p=0.14; Fig. 4E,F, the change of the first to second bars). Likewise, the AME of V1 on D(t) with S(t−1) only controlled was larger than that of V1 on D(t) with S(t−1) and S(t) both controlled (the discriminability scheme: t(18)=5.4, p=4.0×10−5; the log likelihood scheme: t(18)=6.0, p=1.2×10−5; Fig. 4E,F, the change of the fourth to second bars), while that with S(t) only controlled did not differ from that of V1 on D(t) with S(t−1) and S(t) both controlled (the discriminability scheme: t(18)=1.3, p=0.22; the log likelihood scheme: t(18)=1.4, p=0.18; Fig. 4E,F, the change of the fourth to third bars). Put together, the AME analyses suggest that the contribution of V1 to the current choice is ascribed mostly to the current stimulus but hardly to the previous stimuli, which is inconsistent with the sensory-adaptation hypothesis.

Repulsive bias in experiment 2

Having failed to find the evidence supporting the sensory-adaptation hypothesis in experiment 1, we conducted experiment 2 to search the whole brain for the signal representing the class boundary and to test whether that signal relates to the previous stimuli and the current choice in a manner consistent with the boundary-updating hypothesis. As mentioned earlier (see above, Experimental paradigm), the experimental procedure in experiment 2 was the same as in experiment 1, except for the fMRI protocol.

The behavioral performance in experiment 2 closely matched that in experiment 1 (Fig. 3A–C) in many respects. The PL difference induced by the previous stimulus (−0.25) substantially differed from zero (t(17)=−7.3, p=1.3×10−6), indicating the existence of repulsive bias, whereas that induced by the previous choice (0.027) did not significantly differ from zero (t(17)=1.3, p=0.19; Fig. 5A,B). The logistic regression analysis confirmed the significant presence of repulsive bias across participants (S(t−1), β=−0.54, t(17)=−7.9, p=4.6×10−7; S(t−2), β=−0.24, t(17)=−4.7, p=2.3×10−4; D(t−1), β=0.0055, t(17)=0.13, p=0.90; Fig. 5C).

Figure 5.

Repulsive bias in experiment 2 and a Bayesian model of boundary updating (BMBU). A–C, Repulsive bias in psychometric curves (A, B) and regression analysis (C). The formats are identical to those in the corresponding figure panels for experiment 1 (Fig. 3A–C), except that the ex post model simulation results (green lines and symbols) are added. In the bottom insets of B, the observed (x-axis) and simulated (y-axis) average differences in the fractions of large choices between the trials in which the previous stimulus was L-ring and those in which it was S-ring are plotted against one another, where the red diagonal demarcates the identity line. In the bottom insets of C, the observed (x-axis) and simulated (y-axis) regression coefficients for the previous-stimulus (S(t−1)) regressor are plotted against one another for individual observers, where the red diagonal demarcates the identity line. D–G, The measurement-generation (D), stimulus-inference (E), class-boundary-inference (F), and decision-variable-deduction (G) processes of BMBU. BMBU posits that the Bayesian decision-maker has an internal causal model of how a physical stimulus size (S) engenders a current sensory measurement (m(t)) and a memory measurement retrieved from the ith preceding trial (r(t−i); D, top), which specifies the probability distributions of m(t) and r(t−i) conditioned on S, respectively (D, bottom). In turn, p(m(t)|S) allows the Bayesian decision-maker to infer S on observing m(t) by combining it with the prior knowledge about S, p(S), to compute the posterior probability of S given m(t), p(S|m(t)) (E). Similarly, p(r(t−i)|S) allows for inferring the class boundary (b(t)) on retrieving the memory of previous sensory measurements (r→(t)=[r(t−1),r(t−2),...]) by combining it with p(S) to compute the posterior probability of S given r→(t), p(S|r→(t)) (F). In D–F, black dotted curves, p(m(t)|S); gray dotted curves, p(r(t−i)|S) (the darker the dotted curve, the more recent the memory); gray dashed curves, p(S); black solid curve, p(S|m(t)); red solid curve, p(S|r→(t)). Finally, the inferred stimulus, s(t), and the inferred class boundary, b(t), allow for deducing the decision variable, v(t), the choice variable, d(t), and the uncertainty variable, u(t) (G, top), as illustrated in an example bivariate distribution of s(t) and b(t), from which v(t), d(t), and u(t) are derived (G, bottom). H, An example temporal trajectory of the class boundary inferred by BMBU in a single scan run of a representative subject, S04. The black and red lines indicate the size of the physical stimulus and the boundary inferred by BMBU, respectively. I, J, Ex post simulation results of the constant-boundary model. The formats are identical to those of A and B.

Bayesian model of boundary-updating (BMBU)

As we identified V1 in experiment 1, we first need to identify the brain signal that reliably represents the class boundary. However, identifying such a signal is challenging in two respects. First, unlike in experiment 1, where V1 was the obvious cortical region to bear the size-encoding signal susceptible to adaptation, given a large volume of previous work (Kohn, 2007; Patterson et al., 2013; Morgan, 2014; Solomon and Kohn, 2014; Weber et al., 2019; Fritsche et al., 2022) and our own work (Choe et al., 2014), we have no a priori region in which the boundary-representing signal should reside. This aspect requires us to explore the whole brain. Second, unlike in experiment 1, where the size variable was physically prescribed by the experimental design, we need to infer the trial-to-trial states (i.e., sizes) of the class boundary, which is an unobservable, and thus latent, variable. This aspect requires us to build a model. To address these challenges, we inferred the latent state of the class boundary using a Bayesian model of boundary-updating (BMBU) and searched the whole brain for the boundary-representing signal using a searchlight multivariate pattern analysis technique.

We developed BMBU by formalizing the binary classification task in terms of Bayesian decision theory (Knill and Richards, 1996), a powerful framework for modeling human decision-making behavior under uncertainty. Binary classification requires judging whether the ring size on the current trial t (S(t)) is larger or smaller than the typical size of the rings appearing across the entire set of trials (S∼). Therefore, a classifier must infer both quantities from the measurements of stimulus size available in the sensory and memory systems.

The generative model

On trial t, S(t) is randomly sampled from a probability distribution p(S) and engenders a measurement in the sensory system, m(t), which is a random sample from the probability distribution p(m(t)|S(t)) (Fig. 5D, bottom, black dotted curve). Critically, as i trials elapse, m(t) is re-encoded into a mnemonic measurement in the working-memory system, r(t−i), which is a random sample from the probability distribution p(r(t−i)|S(t)) (Fig. 5D, bottom, light-gray dotted curve). Here, we assumed that the width of p(r(t−i)|S(t)) increases as i increases, reflecting working-memory decay (Gorgoraptis et al., 2011; Zokaei et al., 2015).
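
The generative model can be illustrated with the following sketch, which assumes Gaussian sensory and memory noise with a memory standard deviation that grows linearly with the number of elapsed trials; the specific noise parameters and the linear growth rule are placeholders, not the fitted values (see Materials and Methods).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_measurements(S, sigma_m=0.5, sigma_r0=0.6, decay=0.2, max_lag=3):
    """Return the sensory measurement m for stimulus size S and the memory
    measurements r_i that would be retrieved 1..max_lag trials later,
    with memory noise growing as the lag i increases."""
    m = rng.normal(S, sigma_m)
    r = [rng.normal(S, sigma_r0 * (1 + decay * i)) for i in range(1, max_lag + 1)]
    return m, r
```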

Inferring the current stimulus size

On trial t, the Bayesian classifier infers S(t) by inversely propagating m(t) through the generative model (Fig. 5E, top). As a result, the inferred size (s(t)) is distributed according to the posterior probability of S given m(t), as captured by the following equation: p(s(t)) = p(S|m(t)) (Eq. 1), where the width of p(S|m(t)) reflects the precision of s(t) (Fig. 5E, bottom).

Inferring the class boundary

On trial t, the Bayesian classifier infers the class boundary (b(t)), i.e., the inferred value of S∼, by inversely propagating a set of measurements retrieved from the working-memory system, r→(t)={r(t−1),r(t−2),r(t−3),...,r(t−n)} (Fig. 5F, top). b(t) is defined as the most probable value of S given r→(t), as captured by the following equation: b(t) = argmaxS p(S|r→(t)) (Eq. 2), where the width of p(S|r→(t)) reflects the precision of b(t). Notably, Equation 2 implies that b(t) must be attracted more to recent stimuli than to old ones because (1) the precision of the working-memory evidence decreases as trials elapse (Fig. 5F, bottom, dotted curves) and (2) the more uncertain the evidence is, the less weight it receives in class-boundary inference.
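
Under Gaussian assumptions, Equation 2 reduces to a precision-weighted average of the retrieved measurements and the prior, which makes the attraction toward recent stimuli explicit. The sketch below uses placeholder parameters and is not the authors' exact implementation.

```python
import numpy as np

def infer_boundary(r, sigma_r0=0.6, decay=0.2, prior_mean=0.0, prior_sd=1.0):
    """Posterior mode of S given retrieved measurements r (r[0] = 1-back,
    r[1] = 2-back, ...), assuming Gaussian prior and Gaussian memory noise
    whose standard deviation grows with lag; recent memories get more weight."""
    precisions = [1.0 / (sigma_r0 * (1 + decay * (i + 1))) ** 2 for i in range(len(r))]
    precisions.append(1.0 / prior_sd ** 2)   # the prior acts as one more noisy observation
    values = list(r) + [prior_mean]
    return float(np.average(values, weights=precisions))
```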

Making a decision with the inferred current stimulus size and the inferred class boundary

Having estimated s(t) and b(t), the Bayesian classifier deduces a decision variable (v(t)) from s(t) and b(t) and translates it into a binary decision (d(t)) with a degree of uncertainty (u(t); Fig. 5G). Here, v(t) is the probability that s(t) is greater than b(t) (v(t)=p(s(t)>b(t))); d(t) is large or small if v(t) is greater or smaller than 0.5, respectively; u(t) is the probability that d(t) is incorrect (u(t)=p(s(t)<b(t)|d(t)=large) or p(s(t)>b(t)|d(t)=small); Sanders et al., 2016).
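
Treating s(t) and b(t) as independent Gaussian variables (an assumption made here only for illustration), the decision step can be sketched as follows.

```python
import numpy as np
from scipy.stats import norm

def decide(mu_s, sd_s, mu_b, sd_b):
    """Decision variable, choice, and uncertainty from Gaussian s and b."""
    v = norm.cdf((mu_s - mu_b) / np.sqrt(sd_s ** 2 + sd_b ** 2))  # p(s > b)
    d = "large" if v > 0.5 else "small"
    u = 1 - v if d == "large" else v  # probability that the chosen class is wrong
    return v, d, u
```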

In sum, BMBU models a human decision-maker as a Bayesian classifier who, over consecutive trials, continuously infers the class boundary (b) and the current stimulus size (s), deduces the decision variable (v) from s and b, and makes a decision (d) with a varying degree of uncertainty (u). As shown below, BMBU predicts human participants' choices well and reproduces their repulsive bias.

The prediction and simulation of human choices and repulsive bias by BMBU

We assessed how well BMBU accounts for human behavior in the binary classification task in two respects, comparing (1) its ability to predict choices and (2) its ability to reproduce repulsive bias against those of a control model that does not update the class boundary ("constant-boundary model"; see Materials and Methods).

We assessed the predictability of BMBU and the constant-boundary model by fitting them to human choices using the maximum-likelihood rule (see Materials and Methods). BMBU excelled over the constant-boundary model in goodness of fit: the average AIC difference across participants was −10.48, significantly below the conventional threshold of −4 (Anderson and Burnham, 2004; t(17)=−2.6, p=0.020). The variance explained by BMBU, measured by the Nagelkerke R2, was 132% of that explained by the constant-boundary model.
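
For reference, the two goodness-of-fit metrics can be computed from the maximized log-likelihoods as sketched below; the function names and the use of an intercept-only model as the null for the Nagelkerke R2 are our assumptions.

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion from a maximized log-likelihood."""
    return 2 * n_params - 2 * loglik

def nagelkerke_r2(loglik_model, loglik_null, n):
    """Nagelkerke R^2: Cox-Snell R^2 rescaled by its maximum attainable value."""
    cox_snell = 1 - np.exp(2 * (loglik_null - loglik_model) / n)
    max_cox_snell = 1 - np.exp(2 * loglik_null / n)
    return cox_snell / max_cox_snell

# e.g., delta_aic = aic(ll_bmbu, k_bmbu) - aic(ll_constant, k_constant);
# values below -4 favor BMBU by the conventional criterion cited in the text.
```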

After equipping the models with their best-fit parameters, we assessed their reproducibility by having them simulate decisions over the same sequences of ring sizes presented to the human participants (see Materials and Methods). From this simulation, we can also vividly appreciate how BMBU updates its class boundary (b(t)) depending on the ring sizes encountered over a sequence of classification trials (Fig. 5H). As implied by Equation 2, BMBU continuously shifts b(t) toward the ring sizes shown on previous trials. Such attractive shifts are especially pronounced when streaks of S-ring (Fig. 5H, solid arrow) or L-ring (Fig. 5H, dashed arrow) appeared over trials. Importantly, we confirmed that this boundary-updating of BMBU reproduces the repulsive bias displayed by the human participants with a remarkable level of resemblance across participants, both for the psychometric curves (the R2 of the effect of the previous stimulus on PL between humans and BMBU was 0.89; Fig. 5A,B) and for the coefficients of the stimulus and choice regressors (the R2 of the coefficients of the immediately preceding stimulus between humans and BMBU was 0.94; Fig. 5C). None of the simulated PLs and coefficients, a total of 17 points, fell outside the 95% confidence intervals of the corresponding human PLs and coefficients. Not surprisingly, the constant-boundary model failed to show even the slightest hint of repulsive bias (Fig. 5I,J). Although we used m-sequences to prevent any autocorrelation among ring sizes, the failure of the constant-boundary model to reproduce repulsive bias reassures us that the actual stimulus sequences used in the experiment did not contain any unwanted statistics that might induce spurious kinds of repulsive bias.

In sum, BMBU's inferences of the class boundary based on past stimuli accounted for a substantive fraction of the choice variability of human classifiers and successfully captured their repulsive bias.

Brain signals of the class boundary and the other latent variables

In the previous section, we demonstrated that BMBU accounted well for the variability of human choices and successfully reproduced the observed repulsive bias. However, such correspondences between the humans' and the model's choices do not necessarily warrant the validity of our procedure for estimating the latent states of the model variables (b, s, and v), which is crucial for testing the boundary-updating hypothesis. To validate the estimation procedure, we tested whether it could accurately recover the true states of the model variables from synthetic datasets simulated with 256 ground-truth model parameter sets (see Materials and Methods). The recovered states of the model variables closely matched the corresponding true states (R2 = 0.98±0.0044, 0.96±0.0073, and 0.96±0.0040 for b, s, and v, respectively; mean±95% confidence interval), which confirms the validity of our procedure for estimating the latent states of the model variables.
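
The recovery analysis follows the standard simulate-and-refit recipe sketched below; simulate() and fit() stand in for the BMBU simulation and fitting routines, which are not specified here, and only the R2 helper is fully defined.

```python
import numpy as np

def r_squared(true, recovered):
    """Coefficient of determination between true and recovered latent states."""
    true, recovered = np.asarray(true, float), np.asarray(recovered, float)
    ss_res = np.sum((true - recovered) ** 2)
    ss_tot = np.sum((true - true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# For each ground-truth parameter set (hypothetical helpers):
#   latents_true, choices = simulate(params_true, stimulus_sequence)
#   params_fit = fit(choices, stimulus_sequence)
#   latents_rec, _ = simulate(params_fit, stimulus_sequence, choices=choices)
#   score_b = r_squared(latents_true["b"], latents_rec["b"])
```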

Then, with the trial-to-trial states of the simulated latent variables, we identified the brain signals of those variables with the following rationale and procedure. On any given trial t, a classifier makes a decision in the manner constrained by the causal structure of BMBU (Fig. 5G). This causal structure implies two important points to be considered when identifying the neural representations of b, s, and v. First, for any cortical activity, a significant correlation with the variable of interest does not necessarily imply that it represents that variable per se; it may instead represent other variables that are associated with the variable of interest. Second, if any given cortical activity represents the variable of interest, that activity must not violate any of its relationships with the other variables implied by the causal structure (Table 1; see Materials and Methods).

We incorporated these two points in our search of the brain signals of b, s and v, as follows. Initially, we identified the candidate brain signals of b, s, and v by localizing the patterns of activities that closely reflect the trial-to-trial states of b, s, and v. For localization, we used the support vector regressor decoding with the searchlight technique (Kahnt et al., 2011a; Hebart et al., 2016), which is highly effective in detecting the local patterns of population fMRI responses associated with the latent variables of computational models (Kriegeskorte et al., 2006). Next, we put those candidate brain signals to a strong test of whether their trial-to-trial states satisfy the causal relationships with the other variables. Specifically, we converted those causal relationships into the empirically testable sets of regression models (Table 1), respectively for b (14 regressions), s (14 regressions), and v (17 regressions) and checked whether all the regressors' coefficients derived from the brain signals were consistent with the regression models (see Materials and Methods). In what follows, we will describe how the regression tests for the brain signal of b (yb) were derived from the causal structure of the variables defined by BMBU (see Materials and Methods for those for the two remaining variables s and v).
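
Before turning to those regression tests, the decoding step within a single searchlight can be sketched as follows, using a linear support vector regressor with leave-one-run-out cross-validation; the estimator settings, cross-validation scheme, and variable names are assumptions for illustration rather than the exact pipeline (see Materials and Methods).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

def decode_latent(patterns, latent, runs):
    """Decode a model-derived latent variable from local voxel patterns.

    patterns: (n_trials, n_voxels) BOLD patterns within one searchlight;
    latent: (n_trials,) trial-to-trial states of, e.g., b;
    runs: (n_trials,) run labels used for leave-one-run-out cross-validation."""
    decoded = cross_val_predict(SVR(kernel="linear"), patterns, latent,
                                groups=runs, cv=LeaveOneGroupOut())
    return decoded  # the decoded trial-by-trial signal (e.g., y_b) for later tests
```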

According to the causal relationships of b with the latent variables, yb must satisfy the following single linear regression models: yb must be positively regressed onto b (#1) and remain so when the false discovery rate (Benjamini and Hochberg, 1995) is applied (#2); yb must be positively regressed onto b even when b is orthogonalized to v (#3) or d (#4) because yb should reflect the variance irreducible to the offspring variables of b; yb must not be regressed onto s because b and s are independent of one another (b↮s; Fig. 5G, #5); yb must be negatively regressed onto v (b→v; Fig. 5G, #6) but not when v is orthogonalized to b, because such orthogonalization removes the influence of b on v (#7); yb must be negatively regressed onto d (b→v→d; Fig. 5G, #8) but not onto u, because u is not linearly correlated with b (b→v→u is blocked by the nonlinear relationship between u and v; Fig. 5G, #9). In addition, according to the causal relationships of the latent variables with the stimuli and choices (Fig. 5D–G), yb must satisfy the following multiple linear regression model defined by the observable variables [S(t),S(t−1),S(t−2),D(t−1),D(t−2)]: yb must not be regressed onto the current stimulus (#10) because b is independent of S(t); yb must be positively regressed onto the 1-back stimulus (#11) because b shifts toward S(t−1); the regression of yb onto the 2-back stimulus must be weaker than that onto the 1-back stimulus (#12) because of memory decay (Fig. 5D; accordingly, the sign of the regression coefficient of S(t−2) was defined as the complementary part of that of S(t−1)); yb must not be regressed onto previous decisions because previous decisions have no influence on b (#13, 14). We did not include D(t) as a regressor in the multiple regression because conditioning on D(t), via its relationship with v (v→d; Fig. 5G), may induce a spurious correlation between b and s by acting on the collider (common offspring) variable v (Elwert and Winship, 2014; b→v←s; Fig. 5G).
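
Each of these tests boils down to checking the sign and significance of a regression coefficient, as sketched below for a few representative cases; the per-participant helper is a simplified illustration rather than the group-level GLMM procedure reported in Figure 6C.

```python
import numpy as np
import statsmodels.api as sm

def slope_test(y_b, regressors):
    """Fit a linear regression of the candidate boundary signal y_b onto one
    or more regressors and return the slope estimates and their p-values."""
    X = sm.add_constant(np.column_stack(regressors))
    fit = sm.OLS(y_b, X).fit()
    return fit.params[1:], fit.pvalues[1:]

# Test #1:  the slope of y_b on the model-derived b must be positive and significant.
# Test #5:  the slope of y_b on s must not be significant.
# Test #11: in the multiple regression on [S(t), S(t-1), S(t-2), D(t-1), D(t-2)],
#           the coefficient for S(t-1) must be positive and significant.
```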

As a result, the brain signals that survived the exhaustive regression tests clustered in six separate regions (Fig. 6; Table 2). The signal of b appeared in three separate regions at different time points relative to stimulus onset: a region in the left inferior parietal lobe at 1.1 s (IPLb1) and two regions in the left posterior superior temporal gyrus at 3.3 and 5.5 s (pSTGb3, pSTGb5). The signal of s appeared in the left dorsolateral prefrontal cortex at 3.3 s (DLPFCs3) and in the right cerebellum at 5.5 s (Cerebs5). The signal of v appeared in the left anterior superior temporal gyrus at 5.5 s (aSTGv5). To ascertain the robustness of the neural representations of the latent variables in these six areas, we repeated the searchlight decoding analysis using a different searchlight size (87 voxels, smaller than the original 123 voxels). Despite the change in searchlight size, we detected clusters that survived all regression tests around the six regions (Table 2).

Figure 6.

Brain signals of the latent variables of BMBU. A, Loci of the brain signals. The brain regions where BOLD activity patterns satisfied all the regressions implied by the causal structure of the variables in BMBU are overlaid on the inflated cortex and the axial view of the cerebellum of the template brain. B, Within-trial time courses of the number of satisfied regressions. The within-trial task phases are displayed (top panel) to help appreciate when the brain signals become pronounced, given the hemodynamic delay (4–5 s) of BOLD (bottom three panels). C, The coefficients and 95% CIs of the generalized linear mixed-effects model (GLMM) of the decoded variable averaged across the searchlights of each ROI, at the time points at which each ROI was detected. The regression index corresponds to the index specified in Table 1. B, C, The colors of the symbols and lines correspond to those of the brain regions shown in A. Asterisks indicate statistical significance (*P<0.05, **P<0.01, ***P<0.001). The 95% CIs of the mean across participants are indicated by the vertical error bars.

Table 2.

Specification of the brain signals of the latent variables of BMBU

Lastly, we investigated whether the probable causal structures between the brain signals of b, s, and v are consistent with BMBU in the following two critical aspects. First, the brain signal of v should be concurrently affected by the brain signals of b and s: b→v←s. Second, there should be no causal connection between b and s because BMBU is built on the assumption that b and s are independent of one another (i.e., b and s are biased by previous and current stimuli, respectively): b↮s (Fig. 5G). To examine these aspects, we investigated all of the three-node networks (N = 162) composed of the brain signals of b, s, and v, and calculated their Bayesian Information Criterion (BIC; see Materials and Methods).
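
One way to score such candidate graphs is to fit a linear-Gaussian regression of each node on its parents and sum the node-wise BICs, as sketched below; this scoring rule is an assumption made for illustration (see Materials and Methods for the exact procedure).

```python
import numpy as np

def node_bic(y, parents):
    """BIC of a linear-Gaussian regression of node y on its parent signals."""
    y = np.asarray(y, float)
    n = len(y)
    X = np.column_stack([np.ones(n)] + parents) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1  # regression weights plus the noise variance
    return k * np.log(n) - 2 * loglik

def graph_bic(signals, graph):
    """signals: dict node -> (n_trials,) array; graph: dict node -> list of parents."""
    return sum(node_bic(signals[node], [signals[p] for p in parents])
               for node, parents in graph.items())

# e.g., the graph b -> v <- s with no edge between b and s:
# score = graph_bic(sig, {"v": ["b", "s"], "b": [], "s": []})
```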

The outcomes of the BIC evaluation were consistent with BMBU. First, out of the 162 possible causal graphs, the smallest (best) BIC value was found for "pSTGb5→aSTGv5←Cerebs5" (Fig. 7). Second, we found that any graph with a causal arrow between b(t) and s(t) was substantially less likely than the best causal graph (ΔBIC > 2; shown at the bottom of Fig. 7; Kass and Raftery, 1995). These results indicate that the relationships among the identified brain signals faithfully reflect the causal relationships of the latent variables implied by BMBU.

Figure 7.

The probable causal structures between the brain signals of the latent variables in BMBU. For each row, the value on the left indicates the BIC score of the causal structure relative to the most probable one, shown at the top.

The variability of the class-boundary brain signals associated with previous stimuli contributes to the variability of choice

Finally, with the brain signals that represent the class boundary (IPLb1, pSTGb3, and pSTGb5) in our hands, we verified the boundary-updating hypothesis with the rationale and analysis identical to those for the verification of the sensory-adaptation hypothesis.

We stress that the respective associations of the brain signal of b with the previous stimulus (S(t−1); Table 1, eleventh row) and with the variable d (Table 1, eighth row) do not necessarily imply that the variability of the brain signal of b that is associated with S(t−1) contributes to the choice variability (as implied by the causal information flows through b depicted in Fig. 8A), for the same reasons mentioned when verifying the sensory-adaptation hypothesis. To verify such contribution, we need to compare the AME of the brain signals of b on the current choice (D(t); pSTGb5→D(t)) to the AME of the brain signals of b on D(t) with S(t−1) controlled (S(t−1)↛pSTGb5→D(t)).

Figure 8.

Origin of the covariation between the current choice and the brain signals of the latent variables in BMBU. A, The causal structure of the variables implied by the boundary-updating hypothesis. The brain signal of the decision variable (v(t)) is influenced by the brain signal of the inferred class boundary (b(t)), the brain signal of the inferred stimulus (s(t)), and the unknown sources (Uv). In turn, b(t) is influenced by the previous stimulus (S(t−1)) and the unknown sources (Ub), whereas s(t) is influenced by the current stimulus (S(t)) and the unknown sources (Us). Lastly, v(t) influences the current choice (D(t)). If the boundary-updating hypothesis is true, part of the causal influence of b(t) on D(t) must originate from S(t−1), as indicated by the connected chain of the dotted arrows. B–G, The average marginal effects (AMEs) of the brain signals on D(t), with the brain signals of b(t) from pSTGb5 (B), IPLb1 (C), and pSTGb3 (D), s(t) from DLPFCs3 (E) and Cerebs5 (F), and v(t) from aSTGv5 (G). In each panel, the influences of the given brain signal on D(t) that can be ascribed to S(t−1) and S(t) were assessed by checking (1) whether the AME of the given brain signal on D(t) (left) is significantly reduced or not after controlling the influence of S(t−1) (second from the left) and S(t) (second from the right), respectively, or (2) whether the AME of the given brain signal on D(t) controlling the influence of both S(t−1) and S(t) (right) significantly increased or not after only controlling the influence of S(t) (second from the right) and S(t−1) (second from the left), respectively. The colors of the bars correspond to those of the brain regions shown in Figure 6A. Asterisks indicate statistical significance (*P<0.05, **P<0.01, ***P<0.001), and "n.s." stands for the nonsignificance of the test. The 95% CIs of the mean across participants are indicated by the vertical error bars.

As anticipated, the AME of pSTGb5 on D(t) was significantly negative across participants (t(17)=−4.8, p=1.7×10−4; Fig. 8B, the first bar). Importantly, unlike the size-encoding signal in V1, this negative AME significantly weakened across participants when the contribution of S(t−1) was controlled (t(17)=2.8, p=0.012; Fig. 8B, the change of the first to second bars). On the other hand, controlling S(t) did not affect the AME of pSTGb5 on D(t) at all (t(17)=0.29, p=0.77; Fig. 8B, the change of the first to third bars), which is consistent with the absence of a contribution of S(t) to b in the causal relationship defined by BMBU (Fig. 5G). Likewise, the null effect of S(t) on the AME of pSTGb5 on D(t) was confirmed by the insignificant difference between the AME with S(t−1) controlled and that with S(t) and S(t−1) both controlled (t(17)=−0.31, p=0.77; Fig. 8B, the change of the fourth to second bars). Also, the effect of S(t−1) on the AME of pSTGb5 on D(t) was confirmed by the significant difference between the AME with S(t) controlled and that with S(t) and S(t−1) both controlled (t(17)=−2.7, p=0.014; Fig. 8B, the change of the fourth to third bars).

The same patterns were also observed for IPLb1 and pSTGb3 (Fig. 8C,D). Specifically, the AMEs of pSTGb3 and IPLb1 on D(t) both weakened after controlling S(t−1) (pSTGb3: t(17)=2.2, p=0.046; Fig. 8D, the change of the first to second bars; IPLb1: t(17)=2.1, p=0.0503; Fig. 8C, the change of the first to second bars), but not after controlling S(t) (pSTGb3: t(17)=−0.57, p=0.58; Fig. 8D, the change of the first to third bars; IPLb1: t(17)=0.22, p=0.83; Fig. 8C, the change of the first to third bars). The null effect of S(t) was confirmed by the insignificant differences between the AME with S(t−1) controlled and that with S(t) and S(t−1) both controlled (pSTGb3: t(17)=0.43, p=0.67, Fig. 8D; IPLb1: t(17)=−0.41, p=0.69, Fig. 8C, the change of the fourth to second bars). Also, the effect of S(t−1) was confirmed by the significant or marginally significant differences between the AME with S(t) controlled and that with S(t) and S(t−1) both controlled (pSTGb3: t(17)=−1.9, p=0.081, Fig. 8D, the change of the fourth to third bars; IPLb1: t(17)=−2.2, p=0.045, Fig. 8C, the change of the fourth to third bars). Put together, the AME analyses suggest that the contribution of the class boundary to the current choice is significantly ascribed to the previous stimuli, supporting the boundary-updating hypothesis on repulsive bias.

Having found evidence supporting the boundary-updating hypothesis in the brain signals of b, we also conducted the same AME analysis on the signals of s and v. Given the causal structure of b, s, and v, the validity of the boundary-updating hypothesis would be reinforced if the brain signals of s and v also turned out to fulfill the causal roles defined for them by BMBU. According to BMBU, the contribution of s to D(t) must originate not from S(t−1) but from S(t) (the causal route indicated by the solid arrows in Fig. 8A). In line with this implication, the AMEs of DLPFCs3 and Cerebs5 on D(t) were both significant across participants (t(17)=3.8, p=0.0014 for DLPFCs3; t(17)=3.3, p=0.0041 for Cerebs5; Fig. 8E,F, first bars) and significantly decreased after controlling S(t) (t(17)=−4.4, p=4.1×10−4 for DLPFCs3; t(17)=−3.7, p=0.0019 for Cerebs5; Fig. 8E,F, the change of the first to third bars) but not after controlling S(t−1) (t(17)=1.2, p=0.26 for DLPFCs3; t(17)=0.69, p=0.50 for Cerebs5; Fig. 8E,F, the change of the first to second bars). Likewise, the AMEs of DLPFCs3 and Cerebs5 on D(t) with S(t−1) controlled were both larger than those with both S(t) and S(t−1) controlled (t(17)=4.3, p=0.0050 for DLPFCs3; t(17)=3.8, p=0.0016 for Cerebs5; Fig. 8E,F, the change of the fourth to second bars), whereas the AMEs of DLPFCs3 and Cerebs5 on D(t) with S(t) controlled did not differ from those with both S(t) and S(t−1) controlled (t(17)=−0.92, p=0.37 for DLPFCs3; t(17)=−0.057, p=0.96 for Cerebs5; Fig. 8E,F, the change of the fourth to third bars). Put together, the AME analyses suggest that the contribution of the inferred stimulus to the current choice is significantly ascribed to the current but not to the previous stimuli, supporting the boundary-updating hypothesis.

By contrast, the contribution of v to D(t) must originate not only from S(t−1) but also from S(t) (Fig. 8A). In line with this implication, the AME of aSTGv5 on D(t) was significant (t(17)=5.1, p=9.7×10−5; Fig. 8G, the first bar) and significantly decreased both after controlling S(t−1) (t(17)=−2.8, p=0.012; Fig. 8G, the change of the first to second bars) and after controlling S(t) (t(17)=−4.1, p=7.5×10−4; Fig. 8G, the change of the first to third bars). Likewise, compared with the AME of aSTGv5 on D(t) with both S(t) and S(t−1) controlled, the AME was significantly greater when only S(t−1) was controlled (t(17)=4.1, p=6.7×10−4; Fig. 8G, the change of the fourth to second bars) and when only S(t) was controlled (t(17)=2.8, p=0.012; Fig. 8G, the change of the fourth to third bars). Put together, the AME analyses suggest that the contribution of the decision variable to the current choice is significantly ascribed to both the current and previous stimuli, supporting the boundary-updating hypothesis.

On a separate note, the six loci of the brain signals of b, s, and v were defined by applying the conservative criterion that any given cluster satisfying all the regression tests (Table 1) should contain 12 or more voxels. We note that there was a focal region in the right-hemisphere medial visual cortex that survived the regression tests for s(t) at 3.3 s after stimulus onset (VCs3) but failed to reach the threshold size (N voxels = 6).

To examine the neural loci of the inferred stimulus further, we checked the possibility that VCs3 might carry the signal via which the current stimulus (S(t)) contributes to the current choice (D(t)). The AME of VCs3 on D(t) was significant (t(17)=3.0,p=0.0074), but no longer when S(t) was controlled (t(17)=1.6, p=0.14), which indicates that the noise variability of VCs3 is not tightly linked to the variability of the current choice. However, the AME of DLPFCs3 on D(t) with S(t) controlled was significant (t(17)=2.5,p=0.023; Fig. 8E, the third bar) and that of Cerebs5 was marginally significant (t(17)=2.0,p=0.063; Fig. 8F, the third bar). The results indicate that DLPFCs3 and Cerebs5 carry the signal via which the current stimulus (S(t)) contributes to the current choice (D(t)), whereas such contribution is not evident for VCs3.

Furthermore, to test the sensory-adaptation hypothesis, we examined whether VCs3 carries the stimulus signal via which the previous stimulus (S(t−1)) contributes to the current choice (D(t)). However, the AME of VCs3 on D(t) did not decrease when the contribution of S(t−1) was controlled (t(17)=0.28, p=0.78). Likewise, the AME of VCs3 on D(t) with S(t) controlled did not differ from that with both S(t−1) and S(t) controlled (t(17)=0.70,p=0.49). These results corroborate the AME analyses on V1 in experiment 1 (Fig. 4D–F), confirming that the previous stimulus is unlikely to contribute to the current choice via the stimulus-related signals in the early visual cortex.

In sum, these results suggest that the neural signals of b and s transferred the influences of the previous and current stimuli, respectively, to current decisions, and that the neural signal of v transferred the influences of both, as BMBU implies, which is consistent with the boundary-updating hypothesis.

Discussion

Here, we explored the two possible origins of repulsive bias, sensory-adaptation versus boundary-updating, in binary classification tasks. Although V1 adapted to the previous stimulus, its variability associated with the previous stimulus failed to contribute to the choice variability. By contrast, the variability associated with the previous stimulus in the boundary-representing signals in IPL and pSTG contributed to the choice variability. These results suggest that the repulsive bias in binary classification is likely to arise as the internal class boundary continuously shifts toward the previous stimulus.

Dissociation between sensory-adaptation in V1 and repulsive bias

What makes sensory-adaptation a viable origin of repulsive bias is not its mere presence but its contribution to repulsive bias. The presence of sensory-adaptation in V1 has been firmly established (Clifford et al., 2007; Kohn, 2007; Solomon and Kohn, 2014; Weber et al., 2019) and is the necessary premise for the sensory-adaptation hypothesis to work. What matters is whether the trial-to-trial variability of V1 that arises from such adaptation exerts an influence on the current choice. Such an influence was not observed in our data.

From a general perspective, our findings demonstrate a dissociation between the impact of previous decision-making episodes on sensory-cortical activity and the contribution of that sensory-cortical activity to decision-making behavior. In this regard, V1 in the current work acts like the binocular-disparity-encoding signal of V2 neurons in a recent single-cell study on monkeys (Lueckmann et al., 2018), where, despite the impact of the history on V2 activity, the variability of V2 activity associated with the history failed to contribute to the history effects on decision-making behavior. Similarly, our findings echo the failure of sensory-adaptation in V1 to influence visual orientation estimation in an fMRI study on human participants (Sheehan and Serences, 2022). There, although sensory-adaptation was evident along the hierarchy of visual areas including V1, V2, V3, V4, and the intraparietal sulcus (IPS), the history effect of the previous stimulus on the current estimation behavior was opposite to that expected from sensory-adaptation, which suggests that a downstream mechanism compensates for sensory-adaptation. Such a mechanism was also called for when single-cell-recording work on monkeys tried to explain the intriguing adaptation effects found along the visual processing hierarchy (McLelland et al., 2009). For instance, static visual stimuli engendered prolonged adaptation (on the order of tens of seconds) in the lateral geniculate nucleus, whereas the adaptation in V1 was paradoxically short-lived (on the order of 100 ms).

The representations of the class boundary in IPL and pSTG

To account for the repulsive bias in binary classification, previous studies proposed descriptive models based on the common idea that the internal boundary continuously shifts toward the previous stimuli (Treisman and Williams, 1984; Lages and Treisman, 1998, 2010; Dyjas et al., 2012; Raviv et al., 2014; Norton et al., 2017; Hachen et al., 2021). However, the neural concomitant of class-boundary updating has rarely been demonstrated.

To the best of our knowledge, this issue has so far been addressed by only one fMRI study (White et al., 2012), which reported a class-boundary signal in the left inferior temporal pole. However, several aspects of this work make it difficult to consider the reported brain signal a representation of the class boundary that induces repulsive bias. First, they experimentally manipulated the class boundary in a block-by-block manner. Thus, it is unclear whether the reportedly boundary-representing signal was updated by previous stimuli on a trial-to-trial basis, which is required to induce repulsive bias. Second, the class-boundary size correlated with the average stimulus size block-by-block in their experiments. Because of this confounding factor, one cannot rule out the possibility that the reported brain signal reflects the sensory signal associated with the average stimulus size induced by the current stimulus. By contrast, the brain signal of the class boundary in our work is free from these methodological limitations because it is updated on a trial-to-trial basis and survived the rigorous set of tests, including those addressing possible confounding variables (Table 1). In this sense, the current work can be considered the first demonstration of brain signals representing a class boundary that is dynamically updated in such a way that it can account for repulsive bias.

We emphasize that we developed BMBU to infer the trial-to-trial latent states of the class boundary used by human observers, for the purpose of verifying the boundary-updating hypothesis on repulsive bias. In this sense, BMBU should not be taken as a unified account of the history effects reported by previous studies. For example, BMBU does not account for the influence of previous decisions on subsequent decision-making, another significant contributor to history effects (Akaishi et al., 2014; Urai and Donner, 2022). To be sure, we are open to the possibility that there might be a unified mechanism relating the previous and current stimuli, as well as previous decisions, to the current decision in an integrative manner. To incorporate previous decisions into such a unified mechanism, it is important to distinguish the influence of the previous choice from that of the previous motor response, which we could not do in the current work because choices and motor responses covaried. In this regard, the weak but significant negative regression coefficient of the previous decision in experiment 1 (Fig. 3C) could have reflected the influence of the previous motor response, as previously suggested (Zhang and Alais, 2020).

The representations of inferred stimuli in DLPFC and cerebellum

The brain signals of the inferred ring size (s(t)) in the dorsolateral prefrontal cortex (DLPFC) and cerebellum share many features with V1 in that their covariation with the current choice did not decrease after controlling the previous stimulus but decreased after controlling the current stimulus (Figs. 4D–F, 8E,F). This commonality suggests that DLPFC, the cerebellum, and V1 alike route the flow of information originating from the current stimulus. Then, what made V1 ineligible as the brain signal of s(t)?

It is notable that BMBU treats s(t) as a random variable that carries noise variability in addition to being influenced by the physical stimulus (Fig. 8A). This means that the brain signal of s(t) is expected to be associated with the choice even when the current stimulus is controlled, because the noise variability can also influence the current choice, as captured by the concept of "choice probability" (Macke and Nienborg, 2019). However, unlike DLPFC and the cerebellum, the AME of V1 on the current choice disappeared after controlling the current stimulus, which disqualifies V1 as the brain signal of s(t). In line with this, the AME of VCs3 on D(t) also disappeared after S(t) was controlled in experiment 2, which again disqualifies VCs3 as a valid brain signal of s(t).

The fact that the inferred (i.e., subjective or perceived) stimulus representation resides in DLPFC and the cerebellum, rather than in the visual cortex, seems consistent with previous reports. DLPFC and the cerebellum have been well known for their critical involvement in visual awareness (Gao et al., 1996; Rees et al., 2002; Dehaene and Changeux, 2011; Lau and Rosenthal, 2011; Baumann et al., 2015). By contrast, the visual cortex is likely to be involved more in the faithful representation of physical input than in its subjective representation (Renart and Machens, 2014), consistent with the previous findings of our group (Lee et al., 2007; Choe et al., 2014).

The representation of the decision variable in aSTG

Whereas previous single-cell studies have reported that the decision variable is represented in the prefrontal cortex (Kim and Shadlen, 1999; Hanks et al., 2015; Hebart et al., 2016), we identified the brain signal of v only in aSTG but not in PFC. This inconsistency may reflect the limited spatial and temporal resolution of fMRI measurements. For example, if a signal of interest is encoded in the sequential or dynamic activity patterns across a neural population, as recently demonstrated theoretically (Orhan and Ma, 2019) and empirically (Wutz et al., 2018), such a signal cannot be decoded from fMRI responses. Alternatively, the inconsistency may have arisen because the previous studies, unlike ours, did not take the history effect into account when defining the decision variable, despite the prevalence of diverse history effects across decision-making tasks (Fründ et al., 2014; Lak et al., 2020). In this scenario, the brain signal of the inferred stimulus in DLPFC in our study hints at the possibility that the previously reported decision-variable signal in PFC could have reflected the inferred stimulus, which is closely associated with the decision variable when the decision boundary is assumed to be fixed (Gold and Shadlen, 2007). Understanding the functional role of DLPFC in perceptual decision-making will require further studies, especially those in which history effects are considered in defining the decision variable while neural responses are probed at a sufficiently high spatiotemporal resolution.

Footnotes

  • This research was supported by the Seoul National University Research Grant 339-20220013 and by the Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and Information and Communications Technology Grant No. NRF-2021R1F1A1052020.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Sang-Hun Lee at visionsl{at}snu.ac.kr

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

1. Akaishi R, Umeda K, Nagase A, Sakai K (2014) Autonomous mechanism of internal choice estimate underlies decision inertia. Neuron 81:195–206. https://doi.org/10.1016/j.neuron.2013.10.018
2. Anderson D, Burnham K (2004) Model selection and multi-model inference. New York: Springer.
3. Ashburner J (2007) A fast diffeomorphic image registration algorithm. Neuroimage 38:95–113. https://doi.org/10.1016/j.neuroimage.2007.07.007
4. Baumann O, Borra RJ, Bower JM, Cullen KE, Habas C, Ivry RB, Leggio M, Mattingley JB, Molinari M, Moulton EA, Paulin MG, Pavlova MA, Schmahmann JD, Sokolov AA (2015) Consensus paper: the role of the cerebellum in perceptual processes. Cerebellum 14:197–220. https://doi.org/10.1007/s12311-014-0627-7
5. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Series B Stat Methodol 57:289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x
6. Bierwisch M (1989) The semantics of gradation. In: Dimensional adjectives: grammatical structure and conceptual interpretation (Bierwisch M, Lang E, eds), pp 71–261. Berlin: Springer.
7. Bosch E, Fritsche M, Ehinger BV, de Lange FP (2020) Opposite effects of choice history and evidence history resolve a paradox of sequential choice bias. J Vis 20:9. https://doi.org/10.1167/jov.20.12.9
8. Bromiley P (2003) Products and convolutions of Gaussian probability density functions. Tina-Vision Memo 3:1.
9. Buracas GT, Boynton GM (2002) Efficient design of event-related fMRI experiments using M-sequences. Neuroimage 16:801–813. https://doi.org/10.1006/nimg.2002.1116
10. Carlson JM, Foti D, Mujica-Parodi LR, Harmon-Jones E, Hajcak G (2011) Ventral striatal and medial prefrontal BOLD activation is correlated with reward-related electrocortical activity: a combined ERP and fMRI study. Neuroimage 57:1608–1616. https://doi.org/10.1016/j.neuroimage.2011.05.037
11. Carter CS, Braver TS, Barch DM, Botvinick MM, Noll D, Cohen JD (1998) Anterior cingulate cortex, error detection, and the online monitoring of performance. Science 280:747–749. https://doi.org/10.1126/science.280.5364.747
12. Cavanagh JF, Frank MJ (2014) Frontal theta as a mechanism for cognitive control. Trends Cogn Sci 18:414–421. https://doi.org/10.1016/j.tics.2014.04.012
13. Choe KW, Blake R, Lee S-H (2014) Dissociation between neural signatures of stimulus and choice in population activity of human V1 during perceptual decision-making. J Neurosci 34:2725–2743. https://doi.org/10.1523/JNEUROSCI.1606-13.2014
14. Clifford CW, Webster MA, Stanley GB, Stocker AA, Kohn A, Sharpee TO, Schwartz O (2007) Visual adaptation: neural, psychological and computational aspects. Vision Res 47:3125–3131. https://doi.org/10.1016/j.visres.2007.08.023
15. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29:162–173. https://doi.org/10.1006/cbmr.1996.0014
16. Dehaene S, Changeux J-P (2011) Experimental and theoretical approaches to conscious processing. Neuron 70:200–227. https://doi.org/10.1016/j.neuron.2011.03.018
17. Dyjas O, Bausenhart KM, Ulrich R (2012) Trial-by-trial updating of an internal reference in discrimination tasks: evidence from effects of stimulus order and trial sequence. Atten Percept Psychophys 74:1819–1841. https://doi.org/10.3758/s13414-012-0362-4
18. Elwert F, Winship C (2014) Endogenous selection bias: the problem of conditioning on a collider variable. Annu Rev Sociol 40:31–53. https://doi.org/10.1146/annurev-soc-071913-043455
19. Engel SA, Rumelhart DE, Wandell BA, Lee AT, Glover GH, Chichilnisky E-J, Shadlen MN (1994) fMRI of human visual cortex. Nature 369:525. https://doi.org/10.1038/369525a0
20. Friston KJ, Williams S, Howard R, Frackowiak RS, Turner R (1996) Movement-related effects in fMRI time-series. Magn Reson Med 35:346–355. https://doi.org/10.1002/mrm.1910350312
21. Fritsche M, Solomon SG, de Lange FP (2022) Brief stimuli cast a persistent long-term trace in visual cortex. J Neurosci 42:1999–2010. https://doi.org/10.1523/JNEUROSCI.1350-21.2021
22. Fründ I, Wichmann FA, Macke JH (2014) Quantifying the effect of intertrial dependence on perceptual decisions. J Vis 14:9. https://doi.org/10.1167/14.7.9
23. Gao JH, Parsons LM, Bower JM, Xiong J, Li J, Fox PT (1996) Cerebellum implicated in sensory acquisition and discrimination rather than motor control. Science 272:545–547. https://doi.org/10.1126/science.272.5261.545
24. Gibson JJ, Radner M (1937) Adaptation, after-effect and contrast in the perception of tilted lines. I. Quantitative studies. J Exp Psychol 20:453–467. https://doi.org/10.1037/h0059826
25. Gold JI, Shadlen MN (2007) The neural basis of decision making. Annu Rev Neurosci 30:535–574. https://doi.org/10.1146/annurev.neuro.29.051605.113038
26. Gorgoraptis N, Catalao RF, Bays PM, Husain M (2011) Dynamic updating of working memory resources for visual objects. J Neurosci 31:8502–8511. https://doi.org/10.1523/JNEUROSCI.0208-11.2011
27. Grinband J, Hirsch J, Ferrera VP (2006) A neural representation of categorization uncertainty in the human brain. Neuron 49:757–763. https://doi.org/10.1016/j.neuron.2006.01.032
28. Hachen I, Reinartz S, Brasselet R, Stroligo A, Diamond M (2021) Dynamics of history-dependent perceptual judgment. Nat Commun 12:15. https://doi.org/10.1038/s41467-021-26104-2
29. Hanks TD, Kopec CD, Brunton BW, Duan CA, Erlich JC, Brody CD (2015) Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature 520:220–223. https://doi.org/10.1038/nature14066
30. Haynes JD (2015) A primer on pattern-based approaches to fMRI: principles, pitfalls, and perspectives. Neuron 87:257–270. https://doi.org/10.1016/j.neuron.2015.05.025
31. Hebart MN, Schriever Y, Donner TH, Haynes JD (2016) The relationship between perceptual decision variables and confidence in the human brain. Cereb Cortex 26:118–130. https://doi.org/10.1093/cercor/bhu181
32. Holroyd CB, Nieuwenhuis S, Yeung N, Nystrom L, Mars RB, Coles MG, Cohen JD (2004) Dorsal anterior cingulate cortex shows fMRI response to internal and external error signals. Nat Neurosci 7:497–498. https://doi.org/10.1038/nn1238
33. Jenkinson M, Bannister P, Brady M, Smith S (2002) Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17:825–841. https://doi.org/10.1006/nimg.2002.1132
34. Kahnt T, Grueschow M, Speck O, Haynes JD (2011a) Perceptual learning and decision-making in human medial frontal cortex. Neuron 70:549–559. https://doi.org/10.1016/j.neuron.2011.02.054
    OpenUrlCrossRefPubMed
  35. ↵
    1. Kahnt T,
    2. Heinzle J,
    3. Park SQ,
    4. Haynes JD
    (2011b) Decoding different roles for vmPFC and dlPFC in multi-attribute decision making. Neuroimage 56:709–715. https://doi.org/10.1016/j.neuroimage.2010.05.058 pmid:20510371
    OpenUrlCrossRefPubMed
  36. ↵
    1. Kass RE,
    2. Raftery AE
    (1995) Bayes factors. J Am Stat Assoc 90:773–795. https://doi.org/10.1080/01621459.1995.10476572
    OpenUrlCrossRefPubMed
  37. ↵
    1. Kepecs A,
    2. Uchida N,
    3. Zariwala HA,
    4. Mainen ZF
    (2008) Neural correlates, computation and behavioural impact of decision confidence. Nature 455:227–231. https://doi.org/10.1038/nature07200 pmid:18690210
    OpenUrlCrossRefPubMed
  38. ↵
    1. Kim JN,
    2. Shadlen MN
    (1999) Neural correlates of a decision in the dorsolateral prefrontal cortex of the macaque. Nat Neurosci 2:176–185. https://doi.org/10.1038/5739 pmid:10195203
    OpenUrlCrossRefPubMed
  39. ↵
    1. Klein E
    (1980) A semantics for positive and comparative adjectives. Linguist Philos 4:1–45. https://doi.org/10.1007/BF00351812
    OpenUrlCrossRef
  40. ↵
    1. Knapen T,
    2. Rolfs M,
    3. Wexler M,
    4. Cavanagh P
    (2010) The reference frame of the tilt aftereffect. J Vis 10:8–13. https://doi.org/10.1167/10.1.8
    OpenUrlAbstract/FREE Full Text
  41. ↵
    1. Knill DC,
    2. Richards W
    (1996) Perception as Bayesian inference. Cambridge: Cambridge University Press.
  42. ↵
    1. Kohn A
    (2007) Visual adaptation: physiology, mechanisms, and functional benefits. J Neurophysiol 97:3155–3164. https://doi.org/10.1152/jn.00086.2007 pmid:17344377
    OpenUrlCrossRefPubMed
  43. ↵
    1. Kriegeskorte N,
    2. Goebel R,
    3. Bandettini P
    (2006) Information-based functional brain mapping. Proc Natl Acad Sci U S A 103:3863–3868. https://doi.org/10.1073/pnas.0600244103 pmid:16537458
    OpenUrlAbstract/FREE Full Text
  44. ↵
    1. Lages M,
    2. Treisman M
    (1998) Spatial frequency discrimination: visual long-term memory or criterion setting? Vision Res 38:557–572. https://doi.org/10.1016/s0042-6989(97)88333-2 pmid:9536378
    OpenUrlCrossRefPubMed
  45. ↵
    1. Lages M,
    2. Treisman M
    (2010) A criterion setting theory of discrimination learning that accounts for anisotropies and context effects. Seeing Perceiving 23:401–434. https://doi.org/10.1163/187847510x541117 pmid:21466134
    OpenUrlPubMed
  46. ↵
    1. Lak A,
    2. Costa GM,
    3. Romberg E,
    4. Koulakov AA,
    5. Mainen ZF,
    6. Kepecs A
    (2014) Orbitofrontal cortex is required for optimal waiting based on decision confidence. Neuron 84:190–201. https://doi.org/10.1016/j.neuron.2014.08.039 pmid:25242219
    OpenUrlCrossRefPubMed
  47. ↵
    1. Lak A,
    2. Hueske E,
    3. Hirokawa J,
    4. Masset P,
    5. Ott T,
    6. Urai AE,
    7. Donner TH,
    8. Carandini M,
    9. Tonegawa S,
    10. Uchida N,
    11. Kepecs A
    (2020) Reinforcement biases subsequent perceptual decisions when confidence is low, a widespread behavioral phenomenon. Elife 9:e49834. https://doi.org/10.7554/eLife.49834
    OpenUrlCrossRefPubMed
  48. ↵
    1. Lassiter D,
    2. Goodman ND
    (2017) Adjectival vagueness in a Bayesian model of interpretation. Synthese 194:3801–3836. https://doi.org/10.1007/s11229-015-0786-1
    OpenUrl
  49. ↵
    1. Lau H,
    2. Rosenthal D
    (2011) Empirical support for higher-order theories of conscious awareness. Trends Cogn Sci 15:365–373. https://doi.org/10.1016/j.tics.2011.05.009 pmid:21737339
    OpenUrlCrossRefPubMed
  50. ↵
    1. Lee SH,
    2. Blake R,
    3. Heeger DJ
    (2007) Hierarchy of cortical responses underlying binocular rivalry. Nat Neurosci 10:1048–1054. https://doi.org/10.1038/nn1939 pmid:17632508
    OpenUrlCrossRefPubMed
  51. ↵
    1. Leeper TJ,
    2. Arnold J,
    3. Arel-Bundock V
    (2018) Margins: marginal effects for model objects. R package version 03 23. https://cran.r-project.org/web/packages/margins/margins.pdf.
  52. ↵
    1. Lueckmann J-M,
    2. Macke JH,
    3. Nienborg H
    (2018) Can serial dependencies in choices and neural activity explain choice probabilities? J Neurosci 38:3495–3506. https://doi.org/10.1523/JNEUROSCI.2225-17.2018 pmid:29440531
    OpenUrlAbstract/FREE Full Text
  53. ↵
    1. Macke JH,
    2. Nienborg H
    (2019) Choice (-history) correlations in sensory cortex: cause or consequence? Curr Opin Neurobiol 58:148–154. https://doi.org/10.1016/j.conb.2019.09.005 pmid:31581052
    OpenUrlPubMed
  54. ↵
    1. Marco-Pallarés J,
    2. Müller SV,
    3. Münte TF
    (2007) Learning by doing: an fMRI study of feedback-related brain activations. Neuroreport 18:1423–1426. https://doi.org/10.1097/WNR.0b013e3282e9a58c pmid:17712267
    OpenUrlCrossRefPubMed
  55. ↵
    1. Marcus D,
    2. Harwell J,
    3. Olsen T,
    4. Hodge M,
    5. Glasser M,
    6. Prior F,
    7. Jenkinson M,
    8. Laumann T,
    9. Curtiss S,
    10. Van Essen D
    (2011) Informatics and data mining tools and strategies for the human connectome project. Front Neuroinform 5:4. https://doi.org/10.3389/fninf.2011.00004 pmid:21743807
    OpenUrlCrossRefPubMed
  56. ↵
    1. McLelland D,
    2. Ahmed B,
    3. Bair W
    (2009) Responses to static visual images in macaque lateral geniculate nucleus: implications for adaptation, negative afterimages, and visual fading. J Neurosci 29:8996–9001. https://doi.org/10.1523/JNEUROSCI.0467-09.2009 pmid:19605637
    OpenUrlAbstract/FREE Full Text
  57. ↵
    1. Mize TD,
    2. Doan L,
    3. Long JS
    (2019) A general framework for comparing predictions and marginal effects across models. Sociol Methodol 49:152–189. https://doi.org/10.1177/0081175019852763
    OpenUrlCrossRef
  58. ↵
    1. Morgan M
    (2014) A bias-free measure of retinotopic tilt adaptation. J Vis 14:7. https://doi.org/10.1167/4.1.7
    OpenUrlAbstract/FREE Full Text
  59. ↵
    1. Nahum M,
    2. Daikhin L,
    3. Lubin Y,
    4. Cohen Y,
    5. Ahissar M
    (2010) From comparison to classification: a cortical tool for boosting perception. J Neurosci 30:1128–1136. https://doi.org/10.1523/JNEUROSCI.1781-09.2010 pmid:20089921
    OpenUrlAbstract/FREE Full Text
  60. ↵
    1. Nakashima Y,
    2. Sugita Y
    (2017) The reference frame of the tilt aftereffect measured by differential Pavlovian conditioning. Sci Rep 7:40525. https://doi.org/10.1038/srep40525 pmid:28094321
    OpenUrlPubMed
  61. ↵
    1. Nestares O,
    2. Heeger DJ
    (2000) Robust multiresolution alignment of MRI brain volumes. Magn Reson Med 43:705–715. https://doi.org/10.1002/(SICI)1522-2594(200005)43:5<705::AID-MRM13>3.0.CO;2-R
    OpenUrlCrossRefPubMed
  62. ↵
    1. Norton EH,
    2. Fleming SM,
    3. Daw ND,
    4. Landy MS
    (2017) Suboptimal criterion learning in static and dynamic environments. PLoS Comput Biol 13:e1005304. https://doi.org/10.1371/journal.pcbi.1005304 pmid:28046006
    OpenUrlPubMed
  63. ↵
    1. Olman CA,
    2. Inati S,
    3. Heeger DJ
    (2007) The effect of large veins on spatial localization with GE BOLD at 3 T: displacement, not blurring. Neuroimage 34:1126–1135. https://doi.org/10.1016/j.neuroimage.2006.08.045 pmid:17157534
    OpenUrlCrossRefPubMed
  64. ↵
    1. Orhan AE,
    2. Ma WJ
    (2019) A diverse range of factors affect the nature of neural representations underlying short-term memory. Nat Neurosci 22:275–283. https://doi.org/10.1038/s41593-018-0314-y pmid:30664767
    OpenUrlCrossRefPubMed
  65. ↵
    1. Patterson CA,
    2. Wissig SC,
    3. Kohn A
    (2013) Distinct effects of brief and prolonged adaptation on orientation tuning in primary visual cortex. J Neurosci 33:532–543. https://doi.org/10.1523/JNEUROSCI.3345-12.2013 pmid:23303933
    OpenUrlAbstract/FREE Full Text
  66. ↵
    1. Pavan A,
    2. Marotti RB,
    3. Campana G
    (2012) The temporal course of recovery from brief (sub-second) adaptations to spatial contrast. Vision Res 62:116–124. https://doi.org/10.1016/j.visres.2012.04.001 pmid:22521510
    OpenUrlCrossRefPubMed
  67. ↵
    1. Raviv O,
    2. Lieder I,
    3. Loewenstein Y,
    4. Ahissar M
    (2014) Contradictory behavioral biases result from the influence of past stimuli on perception. PLoS Comput Biol 10:e1003948. https://doi.org/10.1371/journal.pcbi.1003948 pmid:25474117
    OpenUrlCrossRefPubMed
  68. ↵
    1. Rees G,
    2. Kreiman G,
    3. Koch C
    (2002) Neural correlates of consciousness in humans. Nat Rev Neurosci 3:261–270. https://doi.org/10.1038/nrn783 pmid:11967556
    OpenUrlCrossRefPubMed
  69. ↵
    1. Renart A,
    2. Machens CK
    (2014) Variability in neural activity and behavior. Curr Opin Neurobiol 25:211–220. https://doi.org/10.1016/j.conb.2014.02.013 pmid:24632334
    OpenUrlCrossRefPubMed
  70. ↵
    1. Rips LJ,
    2. Turnbull W
    (1980) How big is big? Relative and absolute properties in memory. Cognition 8:145–174. https://doi.org/10.1016/0010-0277(80)90010-4 pmid:7389286
    OpenUrlPubMed
  71. ↵
    1. Sanders JI,
    2. Hangya B,
    3. Kepecs A
    (2016) Signatures of a statistical computation in the human sense of confidence. Neuron 90:499–506. https://doi.org/10.1016/j.neuron.2016.03.025 pmid:27151640
    OpenUrlCrossRefPubMed
  72. ↵
    1. Scutari M
    (2010) Learning Bayesian Networks with the bnlearn R Package. Journal of Statistical Software 35:1–22.
    OpenUrl
  73. ↵
    1. Sereno MI,
    2. Dale A,
    3. Reppas J,
    4. Kwong K,
    5. Belliveau J,
    6. Brady T,
    7. Rosen B,
    8. Tootell R
    (1995) Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268:889–893. https://doi.org/10.1126/science.7754376 pmid:7754376
    OpenUrlAbstract/FREE Full Text
  74. ↵
    1. Sheehan TC,
    2. Serences JT
    (2022) Attractive serial dependence overcomes repulsive neuronal adaptation. PLoS Biol 20:e3001711. https://doi.org/10.1371/journal.pbio.3001711 pmid:36067148
    OpenUrlCrossRefPubMed
  75. ↵
    1. Shmuel A,
    2. Yacoub E,
    3. Chaimow D,
    4. Logothetis NK,
    5. Ugurbil K
    (2007) Spatio-temporal point-spread function of fMRI signal in human gray matter at 7 Tesla. Neuroimage 35:539–552. https://doi.org/10.1016/j.neuroimage.2006.12.030 pmid:17306989
    OpenUrlCrossRefPubMed
  76. ↵
    1. Smith AM,
    2. Lewis BK,
    3. Ruttimann UE,
    4. Frank QY,
    5. Sinnwell TM,
    6. Yang Y,
    7. Duyn JH,
    8. Frank JA
    (1999) Investigation of low frequency drift in fMRI signal. Neuroimage 9:526–533. https://doi.org/10.1006/nimg.1999.0435 pmid:10329292
    OpenUrlCrossRefPubMed
  77. ↵
    1. Solomon SG,
    2. Kohn A
    (2014) Moving sensory adaptation beyond suppressive effects in single neurons. Curr Biol 24:R1012–R1022. https://doi.org/10.1016/j.cub.2014.09.001 pmid:25442850
    OpenUrlCrossRefPubMed
  78. ↵
    1. Solt S
    (2015) Vagueness and imprecision: empirical foundations. Annu Rev Linguist 1:107–127. https://doi.org/10.1146/annurev-linguist-030514-125150
    OpenUrl
  79. ↵
    1. Soon CS,
    2. Brass M,
    3. Heinze H-J,
    4. Haynes J-D
    (2008) Unconscious determinants of free decisions in the human brain. Nat Neurosci 11:543–545. https://doi.org/10.1038/nn.2112 pmid:18408715
    OpenUrlCrossRefPubMed
  80. ↵
    1. Stocker AA,
    2. Simoncelli EP
    (2006) Sensory adaptation within a Bayesian framework for perception. In: Advances in neural information processing systems, pp 1289–1296. MIT Press, Vancouver.
  81. ↵
    1. Treisman M,
    2. Williams TC
    (1984) A theory of criterion setting with an application to sequential dependencies. Psychol Rev 91:68–111. https://doi.org/10.1037/0033-295X.91.1.68
    OpenUrlCrossRef
  82. ↵
    1. Tribushinina E
    (2011) Once again on norms and comparison classes. Linguistics 49:525–553. https://doi.org/10.1515/ling.2011.016
    OpenUrl
  83. ↵
    1. Urai AE,
    2. Donner TH
    (2022) Persistent activity in human parietal cortex mediates perceptual choice repetition bias. Nat Commun 13:6015. https://doi.org/10.1038/s41467-022-33237-5
    OpenUrlCrossRef
  84. ↵
    1. Weber AI,
    2. Krishnamurthy K,
    3. Fairhall AL
    (2019) Coding principles in adaptation. Annu Rev Vis Sci 5:427–449. https://doi.org/10.1146/annurev-vision-091718-014818 pmid:31283447
    OpenUrlCrossRefPubMed
  85. ↵
    1. White CN,
    2. Mumford JA,
    3. Poldrack RA
    (2012) Perceptual criteria in the human brain. J Neurosci 32:16716–16724. https://doi.org/10.1523/JNEUROSCI.1744-12.2012 pmid:23175825
    OpenUrlAbstract/FREE Full Text
  86. ↵
    1. Williams R,
    2. Jorgensen A
    (2023) Comparing logit and probit coefficients between nested models. Soc Sci Res 109:102802. https://doi.org/10.1016/j.ssresearch.2022.102802 pmid:36470637
    OpenUrlPubMed
  87. ↵
    1. Wutz A,
    2. Loonis R,
    3. Roy JE,
    4. Donoghue JA,
    5. Miller EK
    (2018) Different levels of category abstraction by different dynamics in different prefrontal areas. Neuron 97:716–726.e8. https://doi.org/10.1016/j.neuron.2018.01.009 pmid:29395915
    OpenUrlCrossRefPubMed
  88. ↵
    1. Zhang H,
    2. Alais D
    (2020) Individual difference in serial dependence results from opposite influences of perceptual choices and motor responses. J Vis 20:2. https://doi.org/10.1167/jov.20.8.2 pmid:32744618
    OpenUrlCrossRefPubMed
  89. ↵
    1. Zokaei N,
    2. Burnett Heyes S,
    3. Gorgoraptis N,
    4. Budhdeo S,
    5. Husain M
    (2015) Working memory recall precision is a more sensitive index than span. J Neuropsychol 9:319–329. https://doi.org/10.1111/jnp.12052 pmid:25208525
    OpenUrlCrossRefPubMed
Keywords

  • Bayesian inference
  • history effect
  • perceptual decision
  • repulsive bias
  • sensory adaptation
