Abstract
Localizing touch relies on the activation of skin-based and externally defined spatial frames of reference. Psychophysical studies have demonstrated that early visual deprivation prevents the automatic remapping of touch into external space. We used fMRI to characterize how visual experience impacts the brain circuits dedicated to the spatial processing of touch. Sighted and congenitally blind humans performed a tactile temporal order judgment (TOJ) task, either with the hands uncrossed or crossed over the body midline. Behavioral data confirmed that crossing the hands has a detrimental effect on TOJ performance in sighted but not in early blind people. Crucially, the crossed hand posture elicited enhanced activity, when compared with the uncrossed posture, in a frontoparietal network in the sighted group only. Psychophysiological interaction analysis revealed, however, that the congenitally blind showed enhanced functional connectivity between parietal and frontal regions in the crossed versus uncrossed hand postures. Our results demonstrate that visual experience scaffolds the neural implementation of the location of touch in space.
SIGNIFICANCE STATEMENT In daily life, we seamlessly localize touch in external space for action planning toward a stimulus making contact with the body. For efficient sensorimotor integration, the brain has therefore to compute the current position of our limbs in the external world. In the present study, we demonstrate that early visual deprivation alters the brain activity in a dorsal parietofrontal network typically supporting touch localization in the sighted. Our results therefore conclusively demonstrate the intrinsic role that developmental vision plays in scaffolding the neural implementation of touch perception.
- blindness
- crossed-hands deficit
- spatial frames of reference
- tactile localization
- temporal order judgment
Introduction
Quickly and accurately localizing touch in space is crucial for efficient action planning toward an external stimulus making contact with the body. Although we seamlessly do it in daily life, it is not a trivial operation because the hands move constantly within the peripersonal space as different postures are adopted. Therefore, the brain must transform tactile coordinates from an initial skin-based representation to a representation that is defined by coordinates in external space (Yamamoto and Kitazawa, 2001; Shore et al., 2002; Azañón and Soto-Faraco, 2008; Azañón et al., 2010a, 2015). For example, when sighted individuals have to judge which of their two hands received a tactile stimulation first [a temporal order judgment task (TOJ)], they make more errors when their hands are crossed over the body midline compared with when the hands are uncrossed (Yamamoto and Kitazawa, 2001; Shore et al., 2002; Heed and Azañón, 2014). This crossed-hands deficit has been attributed to the misalignment of anatomical and external frames of reference (Yamamoto and Kitazawa, 2001; Shore et al., 2002). Because nothing in the task requirements is spatial (in theory, the task could be solved by using somatotopic coordinates only), this crossed-hands effect compellingly illustrates how the external remapping of touch is automatic in sighted people (Heed and Azañón, 2014). Specific brain networks including parietal and premotor areas support this automatic remapping of touch into an external spatial coordinate system (Lloyd et al., 2003; Matsumoto et al., 2004; Azañón et al., 2010b; Wada et al., 2012; Takahashi et al., 2013).
Congenitally blind people, in contrast, do not show any crossed-hands deficit when involved in a tactile TOJ task (Röder et al., 2004; Crollen et al., 2017). This suggests that the default remapping of passive touch into external spatial coordinates is acquired during development as a consequence of visual experience. Interestingly, a similar reduction in the default external remapping of touch was also observed in patients who had been totally deprived of early vision but whose sight was restored during childhood (Ley et al., 2013; Azañón et al., 2017). This suggests the presence of a sensitive period early in life for the development of the automatic use of an external visuospatial frame of reference for coding touch in space (Pagel et al., 2009).
Does the absence of visual experience also alter the neural network typically recruited when people experience a conflict between skin-based and external spatial coordinates of touch? Investigating how congenital blindness reorganizes the brain network supporting touch localization is crucial to conclusively determine the intrinsic role vision plays in scaffolding the neural implementation of the perception of touch location. To address this question, we used functional magnetic resonance imaging (fMRI) to characterize the brain activity of congenitally blind individuals and sighted control subjects performing a tactile TOJ task with either their hands uncrossed or with the hands crossed over the body midline.
Materials and Methods
Participants
Eleven sighted control subjects (SCs; four females; age range, 22–64 years; mean ± SD age, 46 ± 14 years) and eight congenitally blind (CB) participants (two females; age range, 24–63 years; mean ± SD age, 47 ± 13 years) took part in the study (see Table 1 for a detailed description of the CB participants). The mean age of the SC and CB groups did not statistically differ (t(17) = 0.11, p = 0.92). At the time of testing, the participants in the blind group were totally blind or had only rudimentary sensitivity for brightness differences and no patterned vision. In all cases, blindness was attributed to peripheral deficits with no additional neurological problems. Procedures were approved by the Research Ethics Boards of the University of Montreal. Experiments were undertaken with the understanding and written consent of each participant. Both groups of participants were blindfolded when performing the task.
Task and general experimental design
In this task, two successive tactile stimuli were presented for 50 ms to the left and right middle fingers at the following six different stimulus onset asynchronies (SOAs): −120, −90, −60, 60, 90, and 120 ms. Negative values indicated that the first stimulus was presented to the participant's left hand; positive values indicated that the first stimulus was presented to the participant's right hand. Tactile stimuli were delivered using a pneumatic tactile stimulator (Institute for Biomagnetism and Biosignal Analysis, University of Muenster, Germany). A plastic membrane (1 cm in diameter) was attached to the distal phalanges of the left and right middle fingers and was inflated by a pulse of air pressure delivered through a rigid plastic tube. Participants had to press a response button placed below the index finger of the hand that they perceived to have been stimulated first. They had 3550 ms to respond, otherwise the trial was terminated. Participants were asked to perform the task either with their hands in a parallel posture (i.e., uncrossed posture) or with their arms crossed over the body midline. Stimuli were delivered and responses were recorded using Presentation software (Neurobehavioral Systems) running on a Dell XPS computer using a Microsoft Windows 7 operating system.
Participants were scanned in two fMRI sessions using a block design. One run consisted of 16 successive blocks (22 s duration each) separated by rest periods ranging from 11 to 14 s (median, 12.5 s), during which participants had to perform the TOJ task either with the hands uncrossed or with the hands crossed. The starting run (uncrossed or crossed) was counterbalanced across participants. Each block, either uncrossed or crossed, consisted of six successive pairs of stimulations (each SOA was randomly presented once in each block).
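For illustration, the block structure described above (16 blocks per run, each of the six SOAs presented exactly once per block in random order) could be generated as sketched below; the function name and the block-ordering scheme are our own illustrative assumptions, not the authors' actual stimulus code.

```python
import random

# SOAs in ms; negative values mean the left hand was stimulated first
SOAS = [-120, -90, -60, 60, 90, 120]

def make_run(n_blocks=16, start_posture="uncrossed"):
    """Sketch of one run: n_blocks TOJ blocks, each SOA presented once per
    block in random order. Alternating postures within a run is an
    illustrative assumption."""
    other = "crossed" if start_posture == "uncrossed" else "uncrossed"
    postures = [start_posture, other] * (n_blocks // 2)
    return [{"posture": p, "soas": random.sample(SOAS, len(SOAS))}
            for p in postures]

run = make_run()
```

Sampling without replacement (`random.sample`) guarantees that every SOA appears exactly once per block, as in the design above.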
Before the fMRI acquisition, all participants underwent a training session in a mock scanner, with recorded scanner noise played in the bore of the simulator, to familiarize them with the fMRI environment and to ensure that the participants understood the task.
Behavioral data analyses
Behavioral data were analyzed by using a procedure similar to the one described by Shore et al. (2002), Röder et al. (2004), Azañón et al. (2017), and Crollen et al. (2017). The mean percentages of “right-hand-first” responses were calculated for each participant, SOA, and posture. These raw proportions were transformed into their standardized z-score equivalents and then used to calculate the best-fitting linear regression lines of each participant (Shore et al., 2002). The standard score was obtained by subtracting the group mean from an individual raw score and then dividing the difference by the group SD. This probit analysis allowed us to transform the sigmoid response curve to a straight line that can then be analyzed by regression.
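One common implementation of the probit step described above converts response proportions to z-score equivalents with the inverse normal CDF and then fits a straight line per participant; this is a sketch under that assumption (the function name and the clipping value used to keep the probit finite are ours):

```python
import numpy as np
from scipy.stats import norm

def toj_slope(soas, p_right_first, eps=0.01):
    """Slope of the best-fitting line relating SOA to the probit-transformed
    proportion of 'right-hand-first' responses (steeper slope = better TOJ)."""
    p = np.clip(np.asarray(p_right_first, float), eps, 1 - eps)  # avoid infinite probits
    z = norm.ppf(p)                        # probit: inverse normal CDF
    slope, _ = np.polyfit(soas, z, deg=1)  # linear fit of z on SOA
    return slope

soas = [-120, -90, -60, 60, 90, 120]
# hypothetical observers, for illustration only
steep = toj_slope(soas, [0.02, 0.05, 0.15, 0.85, 0.95, 0.98])  # sharp discrimination
flat = toj_slope(soas, [0.30, 0.35, 0.45, 0.55, 0.65, 0.70])   # impaired discrimination
```

A steeper regression line indicates sharper temporal order discrimination, which is why the slopes serve as the dependent measure in the ANOVA reported in the Results.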
fMRI data acquisition and analyses
Acquisition.
Functional MRI series were acquired using a 3-T TRIO TIM system (Siemens), equipped with a 12-channel head coil. Multislice T2*-weighted fMRI images were obtained with a gradient echoplanar sequence using axial slice orientation [TR = 2200 ms; TE = 30 ms; flip angle = 90°; 35 transverse slices; 3.2 mm slice thickness; 0.8 mm interslice gap; field of view (FoV) = 192 × 192 mm2; matrix size = 64 × 64 × 35; voxel size = 3 × 3 × 3.2 mm3]. Slices were sequentially acquired along the z-axis in feet-to-head direction. The four initial scans were discarded to allow for steady-state magnetization. Participants' heads were immobilized with foam pads that applied pressure onto the headphones. A structural T1-weighted 3D MP-RAGE sequence (voxel size = 1 × 1 × 1.2 mm3; matrix size = 240 × 256; TR = 2300 ms; TE = 2.91 ms; TI = 900 ms; FoV = 256; 160 slices) was also acquired for all participants.
Analyses.
Functional volumes from the uncrossed and crossed conditions were preprocessed and analyzed separately using SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8; Wellcome Department of Imaging Neuroscience, London, UK), implemented in MATLAB (MathWorks). Preprocessing included slice timing correction of the functional time series (Sladky et al., 2011), realignment of functional time series, coregistration of functional and anatomical data, spatial normalization to an echoplanar imaging template conforming to the Montreal Neurological Institute (MNI) space, and spatial smoothing [Gaussian kernel, 8 mm full-width at half-maximum (FWHM)]. Serial autocorrelation, assuming a first-order autoregressive model, was estimated using the pooled active voxels with a restricted maximum likelihood procedure, and the estimates were used to whiten the data and design matrices.
Following preprocessing steps, the analysis of fMRI data, based on a mixed-effects model, was conducted in two serial steps accounting, respectively, for fixed and random effects. For each subject, changes in regional brain responses were estimated through a general linear model including the responses to the two experimental conditions (uncrossed, crossed). These regressors consisted of a boxcar function convolved with the canonical hemodynamic response function. Movement parameters derived from realignment of the functional volumes (translations in x, y, and z directions and rotations around x-, y-, and z-axes) and a constant vector were also included as covariates of no interest. We used a high-pass filter with a discrete cosine basis function and a cutoff period of 128 s to remove artifactual low-frequency trends.
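The boxcar-convolved-with-HRF regressors described above can be sketched as follows. This uses a double-gamma HRF with SPM-like shape parameters (peak around 6 s, undershoot around 16 s); SPM's exact canonical defaults differ slightly, so treat this as an approximation with our own function names:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.2  # s, matching the acquisition parameters above

def canonical_hrf(tr=TR, duration=32.0):
    """Approximate double-gamma HRF sampled at the TR."""
    t = np.arange(0.0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak minus undershoot

def block_regressor(n_scans, onsets_s, block_dur_s=22.0, tr=TR):
    """Boxcar over one condition's blocks, convolved with the canonical HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets_s:  # block onsets in seconds
        boxcar[int(round(onset / tr)):int(round((onset + block_dur_s) / tr))] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr))[:n_scans]  # trim convolution tail

# hypothetical timing: two 22 s blocks starting at 30 s and 100 s
reg = block_regressor(n_scans=200, onsets_s=[30.0, 100.0])
```

The resulting regressor rises and falls with the hemodynamic lag rather than with the raw boxcar, which is what the GLM fits to the voxel time series.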
Linear contrasts tested the main effect of each condition ([Uncrossed], [Crossed]), the main effects of general involvement in a tactile TOJ task ([Uncrossed + Crossed]), the specific effect of the uncrossed condition ([Uncrossed > Crossed]), and the specific effect of the crossed condition ([Crossed > Uncrossed]). These linear contrasts generated statistical parametric maps [SPM(T)]. The resulting contrast images were then further spatially smoothed (Gaussian kernel 8 mm FWHM) and entered in a second-level analysis, corresponding to a random-effects model, accounting for intersubject variance. One-sample t tests were run on each group separately. Analyses characterized the main effect of each condition ([Uncrossed], [Crossed]), the main effect of general TOJ ([Uncrossed + Crossed]), the specific effect of the uncrossed condition ([Uncrossed > Crossed]), and the specific effect of the crossed condition ([Crossed > Uncrossed]). Two-sample t tests were then performed to compare these effects between groups (Blind vs Sighted).
Statistical inferences.
Statistical inferences were performed at a threshold of p < 0.05 after correction for multiple comparisons [familywise error (FWE) method] over either the entire brain volume or small spherical volumes (15 mm radius) located in structures of interest. Coordinates of interest for small volume corrections (SVCs) were selected from three articles precisely examining brain activations related to the external representation of space in sighted participants (Lloyd et al., 2003; Matsumoto et al., 2004; Takahashi et al., 2013).
Standard stereotactic coordinates (x, y, z) used for SVC (in MNI space)
Frontal locations.
The frontal locations were as follows: left precentral gyrus: −46, 8, 46; 24, 4, 58 (Lloyd et al., 2003); −40, 4, 40 (Takahashi et al., 2013); and dorsolateral prefrontal cortex: −52, 14, 26 (Takahashi et al., 2013).
Parietal locations.
The parietal locations were as follows: left superior parietal lobule/precuneus: −8, −56, 58; −14, −66, 52 (Matsumoto et al., 2004); right posterior parietal cortex (PPC): 26, −54, 42 (Lloyd et al., 2003); left medial intraparietal area (MIP): −46, −52, 50 (Lloyd et al., 2003); and superior parietal gyrus: −26, −72, 32 (Takahashi et al., 2013).
Temporal locations.
The temporal locations were as follows: right middle temporal gyrus: 46, −40, 2 (Takahashi et al., 2013).
Psychophysiological interaction
Psychophysiological interaction (PPI) analyses were computed to identify any brain regions showing a significant change in the functional connectivity with seed regions (the left precuneus and the left MIP) that showed a significant activation in the [(SC > CB) × (Crossed > Uncrossed)] contrast. In each individual, time series of activity (first eigenvariate) were extracted from a 10 mm sphere centered on the local maxima detected within 10 mm of the identified peaks in the second-level analysis [(SC > CB) × (Crossed > Uncrossed)]. New linear models were generated at the individual level, using three regressors. One regressor represented the condition (Crossed > Uncrossed). The second regressor was the activity extracted in the reference area. The third regressor represented the interaction of interest between the first (psychological) and the second (physiological) regressors. To build this regressor, the underlying neuronal activity was first estimated by a parametric empirical Bayes formulation, combined with the psychological factor and subsequently convolved with the hemodynamic response function (Gitelman et al., 2003). The design matrix also included movement parameters. A significant PPI indicated a change in the regression coefficients between any reported brain area and the reference region, related to the experimental condition (Crossed > Uncrossed). Next, individual summary statistic images obtained at the first-level (fixed-effects) analysis were spatially smoothed (6 mm FWHM Gaussian kernel) and entered in a second-level (random-effects) analysis using a one-sample t test contrasting CB > SC and SC > CB. Statistical inferences were conducted as for the main-effect analysis described above.
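Conceptually, the three PPI regressors can be sketched like this. Note that this simplified version forms the interaction directly at the BOLD level and omits the neural-level deconvolution of Gitelman et al. (2003) that the actual analysis used; the function name and coding scheme are ours:

```python
import numpy as np

def ppi_design(seed_ts, psych, hrf):
    """Three-column PPI design: psychological, physiological, interaction.

    seed_ts : seed-region time series (first eigenvariate), one value per scan
    psych   : condition coding per scan, e.g. +1 (crossed) / -1 (uncrossed)
    hrf     : canonical HRF sampled at the TR
    """
    psych_reg = np.convolve(psych, hrf)[: len(psych)]  # psychological regressor
    phys_reg = seed_ts - seed_ts.mean()                # centered physiological regressor
    ppi_reg = phys_reg * psych_reg                     # interaction term (simplified)
    return np.column_stack([psych_reg, phys_reg, ppi_reg])
```

A region whose regression coefficient on the interaction column differs reliably from zero shows condition-dependent coupling with the seed, which is what the between-group PPI contrasts test.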
Results
Behavioral data
The slopes of each individual line (calculated from the z-scores of the mean percentages of right-hand-first responses) were submitted to an ANOVA with posture (uncrossed vs crossed) as the within-subject factor and group (SC, CB) as the between-subject variable. Results showed the following (see Fig. 1A): (1) a significant effect of posture (F(1,17) = 6.52, p = 0.02, ηP2 = 0.28), the regression line for the uncrossed posture being steeper [mean (M) ± SE = 0.95 ± 0.01] than the regression line for the crossed posture (M ± SE = 0.58 ± 0.14); (2) a significant effect of group (F(1,17) = 8.27, p = 0.01, ηP2 = 0.33), the CB (M = 0.97 ± 0.11) performing better (steeper regression) than the SC (M = 0.57 ± 0.09); and (3) a significant posture × group interaction (F(1,17) = 6.75, p = 0.02, ηP2 = 0.28). To further examine this interaction, paired-samples t tests compared hand positions in each group separately. In the SC group, participants' performances were better in the uncrossed posture (M = 0.94 ± 0.02) than in the crossed posture (M = 0.20 ± 0.24; t(10) = −3.04, p = 0.01, d = 0.92). In sharp contrast, the CB group did not show any effect of posture (t(7) = 1.05, p = 0.33, d = 0.37), as the slope of the regression lines was similar in the uncrossed postures (M = 0.96 ± 0.004) and the crossed postures (M = 0.97 ± 0.004).
fMRI data
We first tested whether our paradigm allowed us to observe the activation of the external remapping network in the SC group. Results revealed that the crossed posture, compared with the uncrossed posture, elicited brain responses in a large frontoparietal network including the left superior parietal gyrus, the right PPC, the left precuneus, the left precentral gyrus, the left dorsolateral prefrontal cortex, and the right middle temporal gyrus (Fig. 1B, Table 2). The same contrast (crossed > uncrossed) performed in the CB group did not reveal any significant result. When the [crossed > uncrossed] contrast was directly compared between groups (SC vs CB), the SC group showed significantly more activity than the CB group in the left precuneus, the left MIP, the left dorsolateral prefrontal cortex, and the right middle temporal gyrus (Fig. 1C, Table 2). The CB group did not show more activity than the sighted group for this contrast in any region.
PPI analyses were computed to identify between-group differences in the functional connectivity maps of the regions involved in the automatic external remapping of touch identified in the sighted group. For these analyses, the left precuneus (−20, −66, 60 mm) was selected as the seed region since it displayed the strongest between-group differences for the contrast [SC > CB] × [Crossed > Uncrossed] and also because this region was already reported in the literature as the neural basis of the external remapping of touch (Lloyd et al., 2003; Matsumoto et al., 2004; Azañón et al., 2010b; Wada et al., 2012; Takahashi et al., 2013). Interestingly, the results revealed that the seed region showed stronger connectivity with an extended parietal network in the CB group compared with the SC group in the crossed versus the uncrossed posture (Fig. 1D, Table 2).
We performed regression analyses between a crossed-hand index (measuring the impact of crossing the hands on the TOJ task: percentage of correct responses in the crossed condition − percentage of correct responses in the uncrossed condition) and (1) the individual t-maps of the Crossed > Uncrossed contrast and (2) the functional connectivity (PPIs) maps in both groups separately. No significant results were found.
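The crossed-hand index defined above is simply the accuracy difference between postures, computed per subject and then correlated with a neural measure across subjects; a sketch with hypothetical values (all numbers below are for illustration only, not data from the study):

```python
import numpy as np

def crossed_hand_index(pct_correct_crossed, pct_correct_uncrossed):
    """Behavioral cost of crossing the hands (more negative = larger deficit)."""
    return pct_correct_crossed - pct_correct_uncrossed

# hypothetical per-subject accuracies (crossed, uncrossed), in percent correct
idx = np.array([crossed_hand_index(c, u)
                for c, u in [(55.0, 95.0), (70.0, 92.0), (88.0, 90.0)]])
neural = np.array([2.1, 1.4, 0.3])  # e.g., contrast estimates at a peak voxel
r = np.corrcoef(idx, neural)[0, 1]  # Pearson correlation across subjects
```

In the study this brain-behavior correlation was computed voxelwise against the individual t-maps and PPI maps, and no significant relationship was found.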
Discussion
We assessed the role that visual experience plays in shaping the neural correlates of tactile localization. For this purpose, SC and CB participants were scanned while performing the TOJ task with the hands uncrossed or crossed over the body midline. At a behavioral level, we observed that crossing the hands massively disrupted TOJ performance in the SC group but not in the CB group (Fig. 1A), replicating the previous demonstration by Röder et al. (2004). While exploring the neurophysiological underpinning of this effect, we observed that the crossed posture, when compared with the uncrossed posture, elicited significantly more activity in the parietal and premotor areas in sighted participants, but not in blind participants. Our findings thus compellingly demonstrated that visual experience plays a crucial role in the development and/or automatic engagement of a parietofrontal network involved in remapping touch in external space.
In sighted individuals, vision is a dominant sense for processing space due to the typically higher reliability, when compared with other senses, of the signal it provides for such a process. For instance, auditory or tactile information is typically remapped toward visual positions if inputs are spatially misaligned (Alais and Burr, 2004; Charbonneau et al., 2013); owls reared with prisms deviating their vision show permanent biases in auditory localization (Knudsen and Knudsen, 1989); and short-term adaptation to spatially conflicting visual and auditory stimuli biases subsequent auditory localization (Recanzone, 1998; Zwiers et al., 2003). In the "rubber hand illusion" (Botvinick and Cohen, 1998), seeing a rubber hand being brushed in synchrony with brushing of the person's own hidden hand produces a feeling of ownership at the spatial location of the fake hand. Actually, when we hear or feel something approaching or touching the body, we typically orient our vision toward this event and then use our motor system to guide appropriate action plans based on a precise location of the target in the external world (Goodale, 2011). As a result of their lack of visual experience, congenitally blind people have to rely exclusively on spatial information delivered by the remaining intact senses, such as hearing and touch (Collignon et al., 2006; Collignon and De Volder, 2009; Dormal et al., 2012). Thus, it seems likely that spatial perception in congenitally blind and in sighted people develops along different trajectories and operates in a qualitatively different way in adulthood (Crollen and Collignon, 2012). Several studies have indeed pointed toward a reduced sense of external space in the early blind (Andersen et al., 1984; Bigelow, 1987; Dunlea, 1989; Millar, 1994; Ruggiero et al., 2012). The early blind seem, for example, to be immune to the nonvisual version of the rubber hand illusion (Petkova et al., 2012; Nava et al., 2014).
Moreover, while the detection of deviant tactile stimuli at the hand induced event-related potentials that varied between the crossed and the uncrossed postures in sighted people, changing hand posture had no influence on the early blind subjects' brain activity (Röder et al., 2008; Schubert et al., 2015).
It has been shown that parietal and dorsal premotor regions play a crucial role in coregistering spatial information collected from various senses and frames of reference into a common coordinate system for the guidance of both eye and limb movements onto the external world (Graziano et al., 1994, 1997; Duhamel et al., 1998; Colby and Goldberg, 1999; Lloyd et al., 2003; Mullette-Gillman et al., 2005; Makin et al., 2007; Brozzoli et al., 2012). For instance, it was shown that the position of the arm is represented in the premotor (Graziano, 1999) and parietal (Graziano et al., 2000) cortices of the monkey by means of a convergence of visual and proprioceptive cues onto the same neurons. More particularly, these regions are thought to be part of a network that is responsible for the remapping of skin-based touch representations located in somatosensory regions into external spatial coordinates (Lloyd et al., 2003; Matsumoto et al., 2004; Bolognini and Maravita, 2007; Zhaele et al., 2007; Azañón et al., 2010a; Longo et al., 2010; Wada et al., 2012; Gentile et al., 2013; Takahashi et al., 2013; Limanowski and Blankenburg, 2016). Accordingly, transiently disrupting the activity of the right posterior parietal cortex with transcranial magnetic stimulation selectively impairs the tactile remapping process but does not disrupt proprioceptive and somatosensory localization processes, highlighting the causal role of this region in remapping touch into external space (Azañón et al., 2010b).
When the hands are crossed, the conflict between external and anatomical representations of the hands increases the computational demands of the external remapping process, which is typically observed in the “default” uncrossed posture (Melzack and Bromage, 1973; Bromage and Melzack, 1974). Crossing the hands therefore triggers enhanced activity in the dorsal parietofrontal network in sighted people (Fig. 1B). In early blind people, however, the absence of a mandatory external remapping process prevents the increased recruitment of this neural network while crossing the hands. It is therefore possible that the total absence of visual experience strongly decreases the automatic coupling of external and proprioceptive representations of the body, explaining why the CB group does not present any decrease of performance when crossing the hands in the TOJ task (Röder et al., 2004; Crollen et al., 2017) or why the CB group is less susceptible to the nonvisual version of the rubber hand illusion (Nava et al., 2014; Petkova et al., 2012). Therefore, by using blindness as a model system, we demonstrated that developmental vision plays a causal role in developing the computational architecture of parietal and dorsal premotor regions for the automatic transformation of tactile coordinates from an initial skin-based representation to a representation that is defined by coordinates in external space.
Interestingly, it has recently been suggested that the integration of spatial information from different reference frames actually depends on the relative weight attributed to the internal and external coordinates during a specific task (Azañón et al., 2010a; Badde et al., 2015; Badde and Heed, 2016). While integration seems mandatory in the SC group (Yamamoto and Kitazawa, 2001; Shore et al., 2002; Azañón et al., 2010b), the relative weight attributed to each coordinate system seems to be more dependent on task demands and instructions in the CB group (Heed and Röder, 2014; Heed et al., 2015a,b; Crollen et al., 2017). Actually, the default use of internal versus external spatial coordinates in the CB group seems to also depend on the requirement of the task (parity judgment vs magnitude comparison) when processing symbolic information (e.g., numbers; Crollen et al., 2013). Further studies should therefore examine whether the external remapping network could be active in the CB group while performing a task emphasizing external instructions. It is indeed possible that the external coordinate system is less automatically activated in the CB group than in the SC group without meaning that this system is not readily accessible when the task necessarily requires its activation (e.g., when people perform an action directed toward the external world; Crollen et al., 2017; Fiehler et al., 2009; Lingnau et al., 2014), which is not the case in a TOJ task (Heed and Azañón, 2014).
A recent study in the sighted demonstrated that the crossed-arms posture elicited stronger functional connectivity between the left intraparietal sulcus and both the right frontal gyrus and the left PPC (Ora et al., 2016). By performing task-dependent functional connectivity analyses (psychophysiological interactions), we demonstrate that blind individuals rely on enhanced integration between dorsal regions while experiencing a conflict between body-centered and world-centered coordinates (Fig. 1D). This raises the intriguing possibility that changes in the connectivity pattern of the parietal cortex gate the activation of the external remapping process in congenitally blind people depending on task demands. Enhanced parietofrontal connectivity in the crossed posture in the blind may therefore prevent the automatic remapping process from occurring in a task that does not necessitate such a computation (the TOJ task can be resolved by using pure skin-based coordinates). This could potentially explain the enhanced performance of the blind population in the crossed condition of the TOJ task (Fig. 1A).
In conclusion, we demonstrate that early visual deprivation alters the development of the brain network involved in the automatic multisensory integration of touch and proprioception into a common, external, spatial frame of reference. Moreover, the enhanced connectivity between dorsal regions in the CB group may provide a mechanistic framework to understand how blind people differently weight specific spatial coordinate systems depending on the task at play (Badde et al., 2015; Heed et al., 2015a; Badde and Heed, 2016). These results have important implications for our understanding of how visual experience calibrates the development of brain networks dedicated to the spatial processing of touch. Since early visual deprivation has also been shown to have specific effects on the spatial mapping of higher-level cognitive functions like arithmetic or working memory (Crollen et al., 2013, 2014; Bottini et al., 2015, 2016), it would be interesting to see whether the brain network associated with the spatial grounding of those computations is similarly altered in early blind people.
Footnotes
This research and the authors were supported by the Canada Research Chair Program (F.L.), the Canadian Institutes of Health Research (F.L.), the Belgian National Funds for Scientific Research (V.C.), a Wallonie-Bruxelles International WBI-World Excellence Fellowship (V.C.), the European Union Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 700057 (V.C.), and the MADVIS European Research Council Starting Grant (O.C.; ERC-StG 337573). O.C. is a research associate at the Belgian National Fund for Scientific Research. We thank Giulia Dormal for her help in implementing the experimental design.
The authors declare no competing financial interests.
- Correspondence should be addressed to either of the following: Virginie Crollen, CIMeC—Center for Mind/Brain Sciences, University of Trento, via delle Regole 101, 38123 Mattarello (TN), Italy, virginie.crollen{at}unitn.it; or Olivier Collignon, Institut de Recherche en Sciences Psychologiques (IPSY), Université Catholique de Louvain, Place Cardinal Mercier 10, 1348 Louvain-la-Neuve, Belgium, olivier.collignon{at}uclouvain.be