Shared Neural Representations of Cognitive Conflict and Negative Affect in the Dorsal Anterior Cingulate Cortex

Influential theories of dorsal anterior cingulate cortex (dACC) function suggest that the dACC registers cognitive conflict as an aversive signal, but no study has directly tested this idea. In this pre-registered human fMRI study, we used multivariate pattern analyses to identify which brain regions respond similarly to conflict and to aversive signals. Of all conflict- and value-related regions, only the dACC/pre-SMA showed shared representations, directly supporting recent theories of dACC function.

… shared neural representations of conflict and affect in the brain. Namely, using multivariate cross-classification analyses, we assessed whether and where a classifier algorithm trained to discern conflict (incongruent vs. congruent events) can successfully predict affect (negative vs. positive events), and vice versa. Successful classification would indicate a similarity between the neural pattern responses, and thus a shared representational code between these two domains 18,19. Specifically, 38 human subjects performed a color Stroop 20 and a flanker task 21 in the conflict domain, and two closely matched tasks in the affective domain (Fig. 1A). Importantly, we used two tasks in each domain in order to demonstrate an abstract representation of conflict (and affect) that is independent of conflict type (and affect source) 22. Conflict- and affect-related brain signals were used to perform a leave-one-run-out cross-classification analysis using a linear Support Vector Machine (see Methods). We performed preregistered Region of Interest (ROI) and whole-brain searchlight analyses (Supplementary Table 1).

Behavioral results (Supplementary Table 2) from the conflict tasks showed the typical differences between congruent and incongruent trials. In the affective tasks, catch trials (where subjects had to make a valence judgement instead of a color judgement) and a post-experiment incidental memory test were used to inform processing of the (task-irrelevant) affective stimuli (see Supplementary Table 4 for behavioral results). We observed above-chance catch trial performance (chance level = 50 %).

Finally, we evaluated our main hypothesis by training a classifier to discern conflict (incongruent vs. congruent) and testing its performance at discerning affect (negative vs. positive), and vice versa. For this analysis, we focussed on the cross-domain cross-task decoding (train and test in different domains on different tasks), as this analysis also controls for low-level features shared between the two tasks (Fig. 2C, right panel). The cross-domain cross-task …

The study was pre-registered with the pre-registration template from AsPredicted.org on the Open Science Framework (https://osf.io/p5frq/). As pre-registered, 40 participants participated in our study. Two participants were excluded (one due to excessive head motion [>2.5 mm] …).

The conflict version of the color-word naming task was a Stroop task 20, where the meaning of the words could be either congruent or incongruent with the actual color of the word. For example, participants could see the words "BLUE", "RED", "GREEN" or "YELLOW" (Dutch: "BLAUW", "ROOD", "GROEN" or "GEEL") presented in a blue, red, green or yellow font.
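To make the cross-classification logic concrete, the following is a minimal sketch of a leave-one-run-out cross-domain analysis with a linear SVM. It is illustrative only, not the study's actual pipeline, and all inputs (X, y, domain, run) are hypothetical stand-ins for single-event activity patterns and their labels:

```python
# A minimal sketch of leave-one-run-out cross-domain decoding, assuming
# hypothetical inputs: X is an (n_events, n_voxels) array of single-event
# activity patterns, with per-event arrays for domain ("conflict"/"affect"),
# condition label y (1 = incongruent/negative, 0 = congruent/positive),
# and run index. Illustrative only, not the study's actual pipeline.
import numpy as np
from sklearn.svm import SVC

def cross_domain_accuracy(X, y, domain, run,
                          train_domain="conflict", test_domain="affect"):
    """Train on one domain, test on the other, leaving one run out at a time."""
    fold_accuracies = []
    for held_out in np.unique(run):
        train = (domain == train_domain) & (run != held_out)
        test = (domain == test_domain) & (run == held_out)
        clf = SVC(kernel="linear")
        clf.fit(X[train], y[train])
        fold_accuracies.append(clf.score(X[test], y[test]))
    # Accuracy minus chance (two balanced classes -> chance = 0.5)
    return np.mean(fold_accuracies) - 0.5
```

Running this in both directions (conflict-to-affect and affect-to-conflict) and averaging yields a single cross-classification score per region, as described above.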

Each trial started with a fixation sign ("+") that was presented for 3 to 6.5 seconds (in steps of 0.5 s; M = 3.5 s; drawn from an exponential distribution). Next, the target stimulus was presented for 1.5 seconds (fixed presentation time, regardless of RT). In order to increase the saliency of the … stimulus could be repeated. This restriction was used to investigate confound-free congruency sequence effects (see 33; this was not the aim of the current study and will not be discussed further). In total, each participant completed 640 trials (i.e., five runs of four blocks of 32 trials).
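As an illustration, jitter with these properties (3 to 6.5 s in 0.5 s steps, mean ≈ 3.5 s, exponentially distributed) can be generated as in the sketch below. The scale parameter is our own choice to approximate the reported mean, not a value taken from the study:

```python
# A sketch of generating jittered fixation durations: exponentially
# distributed, discretized to 0.5 s steps, truncated to [3, 6.5] s.
# The scale value (0.5 s) is an assumption chosen to approximate the
# reported mean of ~3.5 s.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_fixation_duration(minimum=3.0, maximum=6.5, step=0.5, scale=0.5):
    """Draw one duration from {3.0, 3.5, ..., 6.5} seconds."""
    while True:
        duration = minimum + rng.exponential(scale)  # exponential tail
        duration = round(duration / step) * step     # 0.5 s steps
        if duration <= maximum:
            return duration

durations = [sample_fixation_duration() for _ in range(10000)]
print(np.mean(durations))  # ~3.5 s
```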

In each task context (block), we also included one catch trial (at random, but not in the first two or last two trials of each block). In these catch trials, the presentation of the task-irrelevant word, picture, or colored square was not followed by the presentation of the target color; instead, the stimulus remained on screen for three seconds. Participants were instructed that during these catch trials, …

… crossed random effects for Participant and Item. We also pre-registered exclusion criteria based on behavioral performance. Participants with a mean RT outside 3 SD of the sample mean, or with a hit rate more than 3 SD below the sample mean or below 60 % (chance level = 25 %), were excluded. Participants who performed poorly on the post-scanning recognition memory test (i.e., hit rate or false alarm rate outside 3 SD of the sample mean) were also excluded. In the end, no exclusions based on task performance had to be made. While performance on catch trials was not a pre-registered exclusion criterion, we found that two participants responded at chance level in the catch trials of the affective domain (chance level = 50 %, positive vs. negative judgement).
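A minimal sketch of these pre-registered exclusion rules, assuming a hypothetical per-participant summary table with columns mean_rt and hit_rate (neither name comes from the study):

```python
# Flag participants violating the pre-registered behavioral criteria,
# given a hypothetical per-participant summary DataFrame with columns
# "mean_rt" and "hit_rate" (proportion correct; chance = 0.25).
import pandas as pd

def flag_exclusions(df: pd.DataFrame) -> pd.Series:
    rt_z = (df["mean_rt"] - df["mean_rt"].mean()) / df["mean_rt"].std()
    hit_z = (df["hit_rate"] - df["hit_rate"].mean()) / df["hit_rate"].std()
    # Exclude: RT outside 3 SD, hit rate > 3 SD below the mean, or < 60 %
    return (rt_z.abs() > 3) | (hit_z < -3) | (df["hit_rate"] < 0.60)
```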

Results were analyzed using a mass-univariate approach. Although we pre-registered that we would not normalize and smooth the data for our classification analyses, we found that the Signal-to-Noise Ratio (SNR) was significantly improved by these additional preprocessing steps (Supplementary Fig. 3A). In addition, an independent classification analysis (classifying left vs. …

… analyses crossed task-type combinations (i.e., from color-circle naming to color-word naming, or from color-word naming to color-circle naming) to further control for low-level task features, following the same reasoning as the within-domain cross-task classification analyses. The results from these classification analyses were then averaged to return the cross-domain cross-task decoding results. For each of these three decoding analyses, we also ran ANOVAs to evaluate whether the result differed depending on the task (e.g., color-circle naming versus color-word naming) or the task-to-task direction (i.e., from color-circle naming to color-word naming, or from color-word naming to color-circle naming); a sketch of this step is given below. Finally, we also report an "overall decoding" analysis, where the classifier was trained across the two task types at once, thereby ignoring whether the event featured words or pictures/colored backgrounds.
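The following sketch illustrates the averaging and direction ANOVA just described: per-subject accuracies for the two train-to-test directions are averaged into one score, and a repeated-measures ANOVA tests for a direction effect. The DataFrame holds placeholder values for illustration only:

```python
# Average the two task-to-task directions into one cross-domain cross-task
# score per subject, then test for a direction effect. Placeholder data.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4],
    "direction": ["word->circle", "circle->word"] * 4,
    "accuracy":  [0.55, 0.53, 0.58, 0.51, 0.52, 0.56, 0.54, 0.50],
})

# Final per-subject decoding score: mean over the two directions
score = df.groupby("subject")["accuracy"].mean()

# Does decoding accuracy depend on the train-to-test direction?
aov = pg.rm_anova(data=df, dv="accuracy", within="direction", subject="subject")
print(aov)
```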

Each classification analysis resulted in 'accuracy-minus-chance' decoding maps for each subject.

These maps were then entered into a group second-level GLM analysis in SPM12. Here, a one-sample t-test determined which voxels showed decoding accuracy significantly above chance level.
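The study ran this step in SPM12; as an illustrative equivalent, the group-level one-sample t-test on the accuracy-minus-chance maps could look like the following nilearn sketch (file names are placeholders):

```python
# Illustrative group-level test on the 'accuracy-minus-chance' maps,
# equivalent in spirit to the SPM12 one-sample t-test described above.
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

maps = [f"sub-{i:02d}_accuracy_minus_chance.nii.gz" for i in range(1, 39)]
design = pd.DataFrame({"intercept": [1] * len(maps)})

model = SecondLevelModel().fit(maps, design_matrix=design)
# Which voxels decode above chance across subjects?
z_map = model.compute_contrast("intercept", output_type="z_score")
```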

Next to the MVPA, we also conducted classic univariate analyses. Here, we constructed a set of …

… activations). Although this ROI was based on the "dacc" search term, the peak effect of studies reporting dACC activity actually lies more dorsally than the cingulate gyrus, overlapping with the pre-SMA 11. Therefore, we refer to this ROI as the dACC/pre-SMA. Next, we built a 10 mm sphere around the peak activation point in this activation map (association map). Because the dACC ROI was spherical (in contrast to the other six atlas ROIs), we also re-analyzed our results from the atlas ROIs with 10 mm spherical alternatives retrieved from Neurosynth, which returned highly similar results and did not change our statistical conclusions.
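Building such a spherical ROI around a peak coordinate can be sketched as follows with nilearn. The MNI coordinate below is a placeholder, not the actual Neurosynth peak used in the study, and the input file name is hypothetical:

```python
# Build a 10 mm spherical ROI around a peak coordinate and extract its
# signal. Coordinate and file name are placeholders for illustration.
from nilearn.maskers import NiftiSpheresMasker

peak_mni = [(0, 20, 44)]  # placeholder dACC/pre-SMA peak (MNI, mm)
masker = NiftiSpheresMasker(seeds=peak_mni, radius=10.0)
roi_signal = masker.fit_transform("sub-01_bold.nii.gz")
```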

403
In addition to the pre-registered ROI analyses, which were based on anatomically determined ROIs, we also ran a second set of ROI analyses with functionally informed ROIs. Namely, we created 10 mm sphere ROIs for all conflict-sensitive regions based on the most recent and inclusive meta-analysis on cognitive conflict we could find 23.

Each ROI decoding analysis returned one accuracy-minus-chance value per ROI and participant. We tested whether these values were significantly higher than zero (one-tailed) with the non-parametric Wilcoxon signed-rank test and a Bayesian t-test (using the default priors from the BayesFactor package in R; Cauchy prior width: r = .707); a minimal sketch of this test is given below. We report the Bayes Factor (BF) that quantifies the evidence for the alternative hypothesis (i.e., that decoding accuracy is higher than zero). Our pre-registered stopping criterion was a BF > 6 for the main finding, or reaching 40 subjects (a limit set for financial reasons); we note that when the BF criterion was met, the result was typically also significant at p < .00714, the Bonferroni-corrected alpha for the main set of 7 ROIs. Finally, we investigated whether the significant cross-task cross-domain classification accuracy …

The minimal data necessary to replicate the reported findings can be found on the Open Science Framework (https://osf.io/p5frq/). Raw fMRI data and preprocessing scripts will be uploaded to a repository in the near future.
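Returning to the per-ROI statistics described above, the sketch below pairs a one-tailed Wilcoxon signed-rank test against zero with a Bayesian one-sample t-test. The study used the R BayesFactor package; pingouin (same default Cauchy prior width, r = .707) serves here as an illustrative Python equivalent, and the accuracy values are placeholders, not the study's data:

```python
# Per-ROI group test: one-tailed Wilcoxon signed-rank against zero plus a
# Bayesian one-sample t-test. Placeholder data; pingouin stands in for the
# R BayesFactor package used in the study (same default Cauchy prior).
import numpy as np
import pingouin as pg
from scipy.stats import wilcoxon

rng = np.random.default_rng(seed=1)
acc = rng.normal(0.03, 0.05, size=38)           # placeholder accuracy-minus-chance

stat, p = wilcoxon(acc, alternative="greater")  # H1: accuracy > chance
bf10 = pg.ttest(acc, 0, alternative="greater")["BF10"].iloc[0]
print(f"Wilcoxon p = {p:.4f}, BF10 = {bf10}")
```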