DOI 10.1007/s002210100743
RESEARCH ARTICLE
Abstract Previous research has demonstrated that vision of a body site, without proprioceptive orienting of eye and head to that site, could affect tactile perception. The body site viewed was the hand, which can be seen directly under normal viewing conditions. The current research asked three further questions. First, can vision similarly affect tactile perception at a body site that cannot normally be viewed directly, such as the face or neck? Second, does prior experience of seeing a body site, such as occurs when viewing the face in mirrors, produce larger effects of viewing than for rarely seen sites such as the back of the neck? And third, how quickly can visual information affect tactile target detection? We observed that detection of tactile targets at these body sites was influenced by whether or not they were viewed; that this effect was greater when viewing the more familiar site of the face than the neck; and that significant effects were observed when the stimulus onset asynchrony between visual display and tactile target was as little as 200 ms.

Keywords Proprioceptive orienting · Target detection · Vision–touch interactions · Prior experience · Human
Introduction
It is essential for goal-directed behaviour that perceptual
systems are able to rapidly deliver accurate information
about the external environment. This information can ar…

S.P. Tipper (✉) · L.A. Howard
School of Psychology, University of Wales, Bangor, Gwynedd, Wales, LL57 2DG, UK
e-mail: s.tipper@bangor.ac.uk
Fax: +44-1248-382599

N. Phillips · C. Dancer · D. Lloyd · F. McGlone
Cognitive Neuroscience Group, Unilever Research, Port Sunlight, Wirral, CH63 3JW, UK

F. McGlone
Centre for Cognitive Neuroscience, University of Wales, Bangor, Gwynedd, Wales, LL57 2DG, UK
Experiment 1
Materials and methods
Subjects
Subjects were 24 right-handed women (12 in the pure tactile condition and 12 in the tactile + auditory condition) selected from the Unilever subject panel, who were paid a small fee. They were in the age range 22–35 years (mean 25.6 years) and had normal or corrected-to-normal visual acuity. All subjects gave informed consent.
Stimuli
Stimuli were delivered via two electromagnetic contact transducers (bone conductors normally used in hearing aids) secured to the
skin with double-sided adhesive tape in two positions: one on the
centre of the right cheek just below the cheekbone, the other on
the back of the neck to the right of the spine (see Fig. 1 for details). A third stimulator was fixed to the thenar eminence (fleshy
base of thumb) of the right hand, but subjects did not receive any
stimulation to this site, as it was placed there to maintain consistency when viewing this neutral condition. The target was an isointense 200-Hz, 150-ms stimulus, set at 5 times the detection
threshold for each site (14 dB attenuation), reflecting the relative
sensitivities of these areas. Headphones providing white noise
masked any auditory cues generated by the stimulators.
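As an illustration of the target specification above, the following sketch generates a 200-Hz, 150-ms sinusoid with amplitude expressed in units of each site's detection threshold (5× threshold, as in the text). The sampling rate is an assumption for illustration, not a value from the study:

```python
import math

def vibrotactile_target(freq_hz=200.0, dur_ms=150.0, amp=5.0, fs=8000):
    """200-Hz, 150-ms sinusoid; amp is in units of the site's detection
    threshold (amp=5.0 -> 5x threshold). fs is an assumed sampling rate."""
    n = int(fs * dur_ms / 1000)
    return [amp * math.sin(2 * math.pi * freq_hz * i / fs) for i in range(n)]
```

Scaling the amplitude per site, rather than using a fixed drive level, is what allows the differing sensitivities of the face and neck to be respected.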
Design
Whilst undertaking the tactile target-detection task, subjects experienced three visual-tactile compatibility conditions: (1) compatible, for example detecting a tactile target at the face while viewing
the face, or detecting a tactile target at the neck while viewing the
neck; (2) neutral, for example detecting a tactile target at the face
or neck while viewing the hand which was never stimulated; (3)
incompatible, for example detecting tactile targets on the face
while viewing the neck, or vice versa.
Each subject was tested in all three conditions (compatible,
neutral and incompatible) at both test sites on each of two consecutive days. Six test blocks (three test conditions per stimulated
site) were undertaken each day, which took approximately 30 min.
Within each block, 20 stimuli were randomly presented: 18 to the
target site and 2 to the distractor site (totalling 60 trials per test site
per day); however, trials were repeated in the case of errors. The
order of test blocks was counterbalanced between subjects and
across days.
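The block structure just described (20 randomly ordered stimuli per block: 18 to the target site, 2 to the distractor site) can be sketched as follows; the function and site names are illustrative, not from the original study:

```python
import random

def build_block(target_site, distractor_site, n_target=18, n_distractor=2,
                rng=random):
    """One test block: n_target stimuli at the attended site plus
    n_distractor at the other site, presented in random order."""
    trials = [target_site] * n_target + [distractor_site] * n_distractor
    rng.shuffle(trials)
    return trials
```

For example, `build_block("face", "neck")` yields a shuffled list of 20 trials in which 10% are distractor stimulations, matching the catch-trial rate reported in the Procedure.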
Procedure
After the subject depressed the foot pedal, tactile targets were presented randomly between 850 and 1850 ms later. This variability prevented prediction of tactile-target onset. The subject's primary task was to release the foot pedal when a vibration was detected at the target site and to refrain from responding when stimulated at the distractor site (10% of all trials). Error feedback was provided in the form of a beep if subjects responded to a distractor, responded before onset of the target or failed to respond to a target within 1,000 ms, and the trial was repeated. Vision of the three body sites
(face, neck and hand) was achieved by means of a monochrome
video camera, trained only on the site to be tested, which presented a real-time image on a monitor situated at eye level 50 cm in
front of the subject at their midline. Subjects rested their heads on
an adjustable chin rest. The surrounding body areas, and especially the eyes, were never seen on the monitor, as a pilot study
showed that this was too distracting for the subjects and increased
error rates. In addition, when the camera was moved between
sites, subjects were told to close their eyes.
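A minimal sketch of the trial logic described above (random 850–1850 ms foreperiod; responses to distractors, anticipations, and responses slower than 1,000 ms are errors that are fed back and repeated). The helper names are hypothetical:

```python
import random

def draw_foreperiod(rng=random):
    """Random 850-1850 ms delay between pedal press and target onset,
    preventing prediction of tactile-target onset."""
    return rng.uniform(850, 1850)

def classify_trial(rt_ms, is_distractor, deadline_ms=1000):
    """Label one trial; rt_ms is None when no pedal release occurred.
    Responses to distractors and responses slower than the deadline
    count as errors, triggering the feedback beep and a repeat."""
    if is_distractor:
        return "false_alarm" if rt_ms is not None else "correct_rejection"
    if rt_ms is None or rt_ms > deadline_ms:
        return "miss"
    return "hit"
```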
Subjects were advised at the beginning of each block as to where the target site was, and were instructed to respond only to stimuli at that site and to ignore stimuli at the distractor site. They were also instructed to keep looking at the video monitor throughout each test block during each of the three test conditions (compatible, looking at the target site; neutral, looking at the hand; and incompatible, looking at the distractor site). Trimmed means were obtained for each subject in each cell of the analyses by identifying their median score, discarding values more than 150% or less than 66% of the median, and taking the mean of the remaining sample.

A practice session was included at the start of each day's test session (consisting of five stimuli to the face and five to the neck) in which subjects also rated the intensity of the stimuli at both sites so that these could be equated. In tactile-only conditions, the auditory mask was raised above the level of the noise from the stimulators for each subject during the practice trials.

Results
Table 1 Mean and standard deviation of foot-pedal response times in milliseconds to tactile or tactile + auditory stimuli on the face and neck under compatible, neutral or incompatible viewing conditions over 2 days of testing. The proportions of errors are given as misses (Miss) and false alarms (FA)

Stimulation site:          Face                                     Neck
                           Compatible  Neutral  Incompatible        Compatible  Neutral  Incompatible
                           (face)      (hand)   (neck)              (neck)      (hand)   (face)
Tactile only
  Day 1   Mean             432         444      453                 441         441      460
          SD               90          77       89                  70          71       64
          Miss             0.02        0.03     0.01                0.02        0.02     0.02
          FA               0.20        0.14     0.20                0.17        0.17     0.33
  Day 2   Mean             433         445      437                 420         408      429
          SD               69          81       77                  64          54       71
          Miss             0.02        0.02     0.03                0.02        0.01     0.01
          FA               0.20        0.11     0.11                0.14        0.23     0.17
Tactile + auditory
  Day 1   Mean             481         482      489                 402         421      460
          SD               129         126      132                 76          77       83
          Miss             0.12        0.08     0.15                0.06        0.06     0.03
          FA               0.33        0.25     0.29                0.27        0.29     0.33
  Day 2   Mean             461         469      466                 433         413      446
          SD               115         98       91                  99          85       104
          Miss             0.05        0.14     0.10                0.07        0.06     0.09
          FA               0.38        0.23     0.31                0.14        0.25     0.25
Experiment 2
Therefore, in experiment 2, three cameras were used to
project displays of the three body sites (face, neck,
hand). At the beginning of each trial, one of these visual
displays was randomly presented and then the tactile target (to the face or neck) was presented 200 ms or 700 ms
later. This procedure had a number of advantages: First,
we predicted that the sudden onset of an unpredictable
visual stimulus, rather than continuous viewing for
3 min, would engage attention and produce more robust
cross-modal interactions. Second, we could test whether
visual inputs affected tactile processing at short stimulus-onset asynchronies (SOAs). Third, the short, 200-ms SOA between vision and touch helps reduce proprioceptive orienting of eyes and head to particular sides of space.
Subjects

Subjects were 20 naïve, right-handed female volunteers with a mean age of 33 years, all employees of Unilever Research, Port Sunlight. Subjects were paid £5 for their time.

Stimuli

Tactile stimuli were delivered to two body sites – the left back of the neck and the left cheek below the cheekbone – via two moving-magnet tactile stimulators attached to the skin with double-sided adhesive tape. These tactile stimulators were smaller and quieter devices than those used during experiment 1, and capable of delivering a more readily controllable tactile stimulation. A third tactile stimulator, from which no stimulation was delivered, was attached to the top of the forearm (close to the wrist), to attain consistency in the neutral compatibility condition. Targets were 100-Hz, 50-ms stimulations delivered via the tactors.

Unlike experiment 1, in which the relative intensity levels for the tactors were pre-set at 5 times the detection threshold for each site, in experiment 2 the relative intensity of the stimuli delivered to the face and neck sites was controlled through a pre-trial, using an improved procedure for equalising stimulus intensity at these body sites with their differing sensory acuities. This was attained through a novel, subjective level equalisation (SLEEQ) paradigm. This employed a modified parameter estimation by sequential testing (PEST) paradigm, using a suprathreshold stimulus delivered to the neck as a reference level (see Taylor et al. 1983). During this test, subjects were asked to rate the intensity of a stimulus delivered to the face as stronger or weaker than a reference stimulus to the neck, using a simple yes-or-no response button. Multiple pairs of individual face and/or neck stimulations were delivered. SLEEQ uses a tracking paradigm to home in on the stimulus level for the face that is subjectively equivalent to the reference stimulus delivered to the neck, attaining a rapid equalisation of levels for the stimulation at the face and neck.

The sound produced by the tactors was masked by white noise, played into the soundproof booth through speakers. This sound was significantly quieter in experiment 2 than in experiment 1, owing to the use of more sophisticated tactors. Unlike experiment 1, which used a single camera, experiment 2 used three CCD colour cameras, trained on the three body sites, capturing real-time images in colour. The images from these three cameras were presented not in blocks, as in experiment 1, but with switching within each block between the three body sites. As in experiment 1, these images were presented at subjects' midlines but, unlike experiment 1, they were presented in colour and as mirror images.

Design

Subjects were tested individually in single sessions (lasting 45–55 min), each tested under all test conditions. Each participant was presented with six blocks of trials, including two practice blocks (with the participant attending to the face or the neck during separate practice blocks) and four test blocks (two each with the participant attending to the face and neck). Subjects sustained attention to one of the stimulated body sites throughout a single test block, with attention being switched to the other site for the next block. Attention to body sites was alternated and counterbalanced across subjects: ABAB (BABA). Ninety trials were presented per block, of which 72 were target trials and 18 distractor trials (presenting tactile stimulation to the non-attended site). Of these trials, half used the short (200 ms) SOA between visual onset and tactile target, while half used the long SOA (700 ms). Presentation of these trials was randomised within each block.

Procedure
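The SLEEQ equalisation described under Stimuli relies on an adaptive (PEST-style) tracking staircase. A minimal sketch follows; the starting level, step sizes, and stopping rule are assumptions for illustration, as none of these values is given in the text:

```python
import random

def sleeq_match(judge_stronger, start_db=0.0, step_db=4.0, min_step_db=0.5):
    """PEST-style tracking of the face level (in dB relative to the
    neck reference): step down after a 'stronger' judgement, up after
    a 'weaker' one, and halve the step at each response reversal until
    the step falls below min_step_db. All numeric values are assumed."""
    level, step = start_db, step_db
    last = None
    while step >= min_step_db:
        stronger = judge_stronger(level)
        level += -step if stronger else step
        if last is not None and stronger != last:
            step /= 2.0
        last = stronger
    return level

# Simulated observer whose point of subjective equality is -6 dB,
# with a little judgement noise
rng = random.Random(1)
observer = lambda level: level > -6.0 + rng.gauss(0, 0.5)
matched = sleeq_match(observer)
```

The halving-on-reversal rule is what makes the procedure converge quickly, which is presumably why the authors describe the equalisation as "rapid".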
Table 2 Mean and standard deviation of foot-pedal response times in milliseconds to tactile stimuli on the face and neck under compatible, neutral or incompatible camera views, at the short (SOA 1) and long (SOA 2) stimulus onset asynchronies, for each of the two test blocks. The proportions of errors are given as misses (Miss) and false alarms (FA)

Stimulation site:          Face                                     Neck
Camera:                    Compatible  Neutral  Incompatible        Compatible  Neutral  Incompatible
                           (face)      (hand)   (neck)              (neck)      (hand)   (face)
SOA 1
  Block 1  Mean            406         420      419                 423         420      437
           SD              69          72       76                  63          77       70
           Miss            0.004       0.000    0.017               0.004       0.008    0.009
           FA              0.083       0.083    0.050               0.133       0.117    0.117
  Block 2  Mean            398         416      407                 419         429      428
           SD              78          71       65                  62          68       69
           Miss            0.008       0.000    0.013               0.004       0.000    0.004
           FA              0.017       0.100    0.117               0.067       0.050    0.067
SOA 2
  Block 1  Mean            366         382      377                 387         387      373
           SD              75          71       83                  53          63       59
           Miss            0.004       0.000    0.004               0.013       0.004    0.008
           FA              0.017       0.050    0.100               0.067       0.067    0.067
  Block 2  Mean            364         385      375                 369         400      380
           SD              87          63       74                  65          63       61
           Miss            0.004       0.004    0.008               0.004       0.000    0.000
           FA              0.067       0.033    0.083               0.067       0.017    0.017

General discussion
Fig. 4A, B Difference scores (viewing neutral hand minus viewing face or neck) revealing overall facilitation (positive) and inhibition (negative) scores. A represents the facilitation and inhibition effects when viewing the face for experiment 1, and the short
and long SOA conditions for experiment 2. B shows the data when
viewing the neck
such cells. More recently, the flexibility of these bi-modal neural units has been demonstrated. When the animal
uses a tool to reach for a stimulus, the visual receptive
field expands around the full length of the tool (Iriki et
al. 1996). Interestingly, Farne and Ladavas (2000) have
recently demonstrated the same phenomenon in humans.
As proposed by Tipper et al. (1998), the experiments reported here suggest that this flexibility may be even more
remarkable when proprioception and vision are dissociated.
Integration can normally be achieved because the environmental loci of somatosensory and visual inputs remain constant. Thus the spider walking across the hand is spatially
localised in the visual system (e.g. retinotopic and/or head-centred frames) and in the somatosensory system via proprioceptive inputs concerning hand location. That is, the visual and somatosensory properties of the spider come from
the same place in the world. But facilitation and interference between modalities can also take place when the spatial link is broken, when visual and somatosensory inputs
from the same body site come from different locations.
Rorden et al. (1999) have similarly argued that effects
of vision on touch cannot be attributed solely to the simple
spatial proximity between visual and tactile stimuli. Rather,
tactile perception can be modulated by high-level semantic
information. Of course, the ability of humans to fluently
integrate spatially dissociated vision and action (proprioception and somatosensation) should not come as such a
surprise. Such fluent interactions between somatosensation
or proprioception and vision can easily be observed in our
daily use of mirrors and in human-machine interactions.
Take, for example, the use of a computer mouse: the hand
moving across a horizontal surface and vision of the cursor
moving across the vertically oriented surface of the computer screen are in completely different locations.
References

Driver J, Grossenbacher PG (1996) Multimodal spatial constraints on tactile selective attention. In: Inui T, McClelland JL (eds) Information integration in perception and communication (Attention and performance XVI). MIT Press, Cambridge, MA, pp 209–235
Driver J, Spence C (1998) Attention and the crossmodal construction of space. Trends Cogn Sci 2:254–262
Farne A, Ladavas E (2000) Dynamic size-change of hand peripersonal space following tool use. Neuroreport 11:1645–1649
Graziano MS, Gross CG (1996) Multiple pathways for processing visual space. In: Inui T, McClelland JL (eds) Information integration in perception and communication (Attention and performance XVI). MIT Press, Cambridge, MA, pp 181–207
Groh JM, Sparks DL (1996) Saccades to somatosensory targets. 1. Behavioural characteristics. J Neurophysiol 75:412–427
Gross CG, Graziano MS (1995) Multiple representations of space in the brain. Neuroscientist 1:43–50
Harris LR (1980) Superior colliculus and movement of the head and eyes in cats. J Physiol (Lond) 300:376–391
Honore J, Bourdeaud'hui M, Sparrow L (1988) Reduction of cutaneous reaction time by directing eyes towards the source of stimulation. Neuropsychologia 27:367–371
Iriki A, Tanaka M, Iwamura Y (1996) Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport 7:2325–2330
Ladavas E, Zeloni G, Farne A (1998) Visual peripersonal space on the face in humans. Brain 121:2317–2326
Ladavas E, Farne A, Zeloni G, Pellegrino G di (2000) Seeing or not seeing where your hands are. Exp Brain Res 31:458–467
Larmande P, Cambier J (1981) Effect of the state of activation of the cerebral hemisphere on sensory extinction: a study in 10 patients with right-hemisphere lesions. J Rev Neurol 137:285–290
Macaluso E, Frith C, Driver J (2000) Selective spatial attention in vision and touch: unimodal and multimodal mechanisms revealed by PET. J Neurophysiol 83:3062–3075
Maravita A, Spence C, Clarke K, Husain M, Driver J (2000) Vision and touch through the looking glass in a case of crossmodal extinction. Neuroreport 11:3521–3526
Pellegrino G di, Frassinetti F (2000) Direct evidence from parietal extinction of enhancement of visual attention near a visible hand. Curr Biol 10:1475–1477
Pierson JM, Bradshaw JL, Meyer TF, Howard MJ, Bradshaw JA (1991) Direction of gaze during vibrotactile choice reaction-time tasks. Neuropsychologia 29:925–928
Rorden C, Heutink J, Greenfield E, Robertson IH (1999) When the rubber hand feels what the real hand cannot. Neuroreport 10:135–138
Stein BE, Meredith MA (1993) The merging of the senses. MIT Press, Cambridge, MA
Taylor MM, Forbes SM, Creelman CD (1983) PEST reduces bias in forced choice psychophysics. J Acoust Soc Am 74:1367–1374
Tipper SP, Lloyd D, Shorland B, Dancer C, Howard LA, McGlone F (1998) Vision influences tactile perception without proprioceptive orienting. Neuroreport 9:1741–1744
Tomasello M, Call J (1997) Primate cognition. Oxford University Press, London