Emotional Judgments of Musical Stimuli

JUSTIN PIERRE BACHORIK, MARC BANGERT, AND PSYCHE LOUI
Beth Israel Deaconess Medical Center and Harvard Medical School

KEVIN LARKE
University of California, San Diego

JEFF BERGER
Sourcetone LLC, Brooklyn, NY

ROBERT ROWE
Steinhardt School, New York, NY

GOTTFRIED SCHLAUG
Beth Israel Deaconess Medical Center and Harvard Medical School

Music elicits profound emotions; however, the time-course of these emotional responses during listening sessions is unclear. We investigated the length of time required for participants to initiate emotional responses ("integration time") to 138 musical samples from a variety of genres by monitoring their real-time continuous ratings of emotional content and arousal level of the musical excerpts (made using a joystick). On average, participants required 8.31 s (SEM = 0.10) of music before initiating emotional judgments. Additionally, we found that: 1) integration time depended on familiarity of songs; 2) soul/funk, jazz, and classical genres were more quickly assessed than other genres; and 3) musicians did not differ significantly in their responses from those with minimal instrumental musical experience. Results were partially explained by the tempo of musical stimuli and suggest that decisions regarding musical structure, as well as prior knowledge and musical preference, are involved in the emotional response to music.

Received April 9, 2008, accepted December 10, 2008.

Key words: music, emotions, arousal, valence, time

Music, by its very nature, occurs over time. As such, the emotional response it elicits from listeners is inherently dynamic, and a clear understanding of exactly how and when this response unfolds is fundamental to our greater understanding of musical emotion. Though the time-course of this response (and how it is affected by musical features as they develop) has been investigated previously, it remains little understood. The present study is primarily concerned with clarifying the temporal nature of our emotional responses to musical stimuli by applying a temporally sensitive approach to measuring musical emotions.

The subject of emotion perception in music has received much interest in both theoretical and empirical studies (Juslin & Sloboda, 2001). More specifically, studies have attempted to identify the emotions elicited by music, as well as the axes along which music-induced emotions can be defined (Hevner, 1936; Sloboda, 1991; Terwogt & van Grinsven, 1991). Additional studies have investigated the emotional perception of music from a wide variety of perspectives, including the role of cultural cues in emotional perception (Balkwill & Thompson, 1999), the influence of age on emotional understanding (Cunningham & Sterling, 1988), and the physiological reactions (e.g., chills and shivers; Panksepp, 1995) that arise as a result of the emotional response to music. However, the operational definitions of "emotion," "feeling," and "affect," as well as the differences between perceived and felt emotions (Gabrielsson, 2002; Scherer, 2004), are unclear in these studies.

While autonomic and electrophysiological recordings have provided more time-sensitive biological markers for emotion perception in music (Bernardi, Porta, & Sleight, 2006; Blood & Zatorre, 2001; Krumhansl, 1997; Sammler, Grigutsch, Fritz, & Koelsch, 2007; Steinbeis, Koelsch, & Sloboda, 2006), the degree to which biological markers predict the multidimensional psychological experience of musical emotion is unclear. Similarly, the growing body of neuroimaging work investigating the neural basis of music-induced emotions has implicated brain regions involved in syntax processing, sequencing, and reward and motivation
Music Perception, Volume 26, Issue 4, pp. 355-364, ISSN 0730-7829, electronic ISSN 1533-8312. © 2009 by The Regents of the University of California. All rights reserved. Please direct all requests for permission to photocopy or reproduce article content through the University of California Press's Rights and Permissions website, http://www.ucpressjournals.com/reprintinfo.asp. DOI: 10.1525/mp.2009.26.4.355
perception, as well as primary sensory areas (Altenmüller, Schürmann, Lim, & Parlitz, 2002; Blood & Zatorre, 2001; Brown, Martinez, & Parsons, 2004; Koelsch, Fritz, von Cramon, Müller, & Friederici, 2006; Menon & Levitin, 2005; Mizuno & Sugishita, 2007; Pallesen et al., 2005). However, the extent to which these brain differences reflect the different axes of emotion is yet to be determined.

One question raised by both the cognitive and neuroscience studies concerns the temporal resolution of musical emotions. Several studies have examined responses to music over the course of a given musical stimulus. In an investigation using categorization and grouping techniques, Perrot and Gjerdingen (1999) reported that individuals are capable of classifying the genre to which a given piece of music belongs when exposed to as little as 250 ms of a musical excerpt. Results from this categorization study suggest that features in music may be available for recognition by the cognitive system in as little as 250 ms. In a similar study by Schellenberg, Iverson, and McKinnon (1999), the experimenters found that participants performed well above chance at matching song titles and artists to stimuli, even when the musical excerpts were only 100 ms in length. Though these studies do not directly address the time-course of the emotional response to music, they demonstrate the (relatively small) amount of stimulus exposure that individuals require in order to make cognitive decisions regarding musical content.

In a study directly investigating the effects of duration of musical stimuli on their emotional classification, Bigand, Filipic, and Lalitte (2005) employed a novel experimental paradigm in which participants were given a number of musical excerpts (represented on a computer screen as icons) and were asked to divide them into groups based upon the similarity of the emotional responses they provoked. Multidimensional scaling methods revealed that participants, when given musical excerpts only 1 s in length, grouped songs similarly to those who were given much longer excerpts (up to 250 s). However, the rapid classification of emotions (as in the above study) may not be fully representative of the musical experience, since music is of a dynamic nature and requires the integration and evaluation of continuous information over time. As many have pointed out, music perception involves a time-course that begins with an initial emotional percept and continues with subsequent appraisal, regulation, and evolution of this emotion over time (Bigand, Filipic, & Lalitte, 2005; Krumhansl, 1996; Krumhansl, 1997; Krumhansl, 2002; Vines, Krumhansl, Wanderley, Dalca, & Levitin, 2005; Vines, Krumhansl, Wanderley, & Levitin, 2006).

To that end, Schubert (2004) conducted a study in which participants continuously responded in two-dimensional emotion space to four different musical stimuli whose lower-level acoustic characteristics and higher-level musical features were modeled against participants' responses (see Schubert, 1999, for an extensive examination of this approach). Each stimulus was thus used to construct a separate model of emotional response to acoustic and musical cues, and the models' approximation of "lag-time" between causal musical/acoustic cue and emotional event ranged from zero to three seconds. The stimulus set used in this experiment was somewhat limited, however, as the four songs were all classical pieces drawn from the romantic repertoire.

More recently, Grewe, Nagel, Kopiez, and Altenmüller (2007) investigated the effects of six full-length pieces of music on participants' continuous two-dimensional ratings (as in Schubert, 2004, and Nagel, Kopiez, Grewe, & Altenmüller, 2007) as well as their subjective reports of feelings and biological signals (skin conductance and facial electromyography). This triple-measurement approach was chosen in response to some of the criticism of previous studies by Scherer (2004), who put forth the component process model of emotion. This model consists of the "emotion response triad" of physiological arousal, motor expression, and subjective feelings. Though Grewe et al. (2007) were able to find significant correlations between the stimuli's musical features and each of the three outcome measures, there were no obvious "affective events that regularly occurred across all three components in response to any of [the] stimuli" (p. 786). They also analyzed the time-series of the continuous two-dimensional emotion space responses and were able to extract some causal relationships between musical features and emotional response (e.g., beginning of a new musical section or entrance of a new leading voice).

Despite this extensive research, however, neither of the above studies reported the amount of time participants required to integrate information from musical excerpts before initiating a physical movement to indicate an emotional judgment. It is this question that motivated the present study.

In the present study we used musical stimuli of various genres and levels of familiarity and instructed participants to respond to musical excerpts by making joystick movements in an onscreen two-dimensional grid whose x- and y-axes represented emotional valence (positive/negative) and arousal (calming/stimulating) respectively (see Russell, 1980, for a discussion of "emotion space" and these axes). This approach to gathering a continuous participant response was first used by
Schubert (1996) but to our knowledge has never been used to gauge the length of time participants require to make an initial judgment about the emotional content of a piece of music. We referred to this time as the "integration time." In addition to this real-time continuous feedback, we gathered data from participants regarding their musical background and familiarity with each stimulus song for additional analysis.

We posit that affective perception of music might work as an incremental-information-integration apparatus: as soon as initial sensory information is perceived, emotional processing begins, producing hypotheses about the genre and emotional content. As processing continues, it constantly incorporates additional musical information from the sensory stream, which in turn serves to test, refine, and retest the emotional hypothesis. As perceived musical information accumulates, more information is gathered about the target emotion of the song; thus, we expect to observe a significant correlation between the "integration time" (the amount of time from the beginning of playback till the first movement) and the agreement of its direction with that of the overall judgment that participants make at the end of each musical excerpt. The earlier the movement, the more uncertainty (less correlation between the angle of the first movement and the overall emotional judgment) such a movement might entail; conversely, the later the movement (i.e., after several incremental emotional hypothesis-iterations), the higher its correlation to the overall emotional judgment (assuming that each song maintains a relatively constant emotional ambience). Furthermore, we would predict that participants possessing a stronger musical background or greater familiarity with the stimulus genre or stimulus excerpt would need fewer iterations of comparing and testing the continuous auditory stream with their emotional hypothesis; therefore their integration times would be shorter. In the present study, we test the incremental-information-integration hypothesis by obtaining continuous emotion ratings for musical stimuli. Results validate the use of two-dimensional emotion ratings and provide support for musical emotion perception as an information-dependent, incremental process.

Method

Participants

Eighty-one participants (49 females and 32 males) were recruited from the greater Boston metropolitan area via advertisements in daily newspapers. Participants ranged from 19 to 82 years of age (median age = 29). All participants completed a questionnaire regarding their musical background (including whether or not they had previously studied a musical instrument, for how long, how intensely they practiced, and whether they had taken music theory courses) and musical preferences (what music they liked to listen to now and what music they grew up with, etc.). In order to differentiate people based upon their level of musicianship, we separated the participants into two groups—musicians with substantial instrumental musical experience and individuals with minimal instrumental musical experience (MIMEs, which may be referred to as nonmusicians in other publications)—based upon their pre-trial-block questionnaire responses. Of our participants, 35 (14 males and 21 females) were musicians, and 46 (18 males and 28 females) were MIMEs. The musician group consisted both of individuals that had chosen music as their profession and were actively performing musicians and of amateur musicians that had a different profession but had substantial instrumental training and regularly played/practiced a musical instrument. The two groups differed significantly in their years of instrumental musical experience: musicians averaged 19.57 years of instrumental musical experience (SEM = 1.72) and MIMEs averaged 0.83 years (SEM = 0.19).

This study was approved by the Institutional Review Board of Beth Israel Deaconess Medical Center. All participants signed a written informed consent form prior to engaging in the experiment.

Stimuli

The stimuli consisted of 138 musical excerpts 60 s long and were drawn from 11 genres: ambient, classical, electronic, hip-hop, jazz, Latin, movie soundtracks, pop, rock, soul/funk, and world. All excerpts were taken from recordings that are publicly available for purchase. The initial categorization of these stimuli into genres was performed by two independent investigators, and their categorizations were then cross-checked with existing Billboard classifications in order to verify their accuracy. Only songs that were categorized without any uncertainty as to genre were used in this experiment. The 60 s excerpts from each song were chosen by one investigator who had an extensive musical background and was blind with regard to the research questions and hypotheses of this study. The only instructions given to this investigator were to select the 60 s excerpts that best characterized each song. Approximately 10% of the stimuli contained vocals, and the rest were purely instrumental. Each excerpt was
Procedure

grown up listening to, their favorite genres, their favorite features of music, their favorite pieces, and their favorite performers/groups.

Due to technical and scheduling difficulties, some of the participants were unable to complete all three trial blocks. As our analysis showed no effects of trial block or session order, however, all data points were incorporated into our analyses in order to present the most complete data set possible, while accounting statistically for missing data by making the appropriate adjustments in degrees of freedom.

variables simultaneously in a multivariate model. A compound symmetry correlation structure was assumed among multiple measurements within a single participant, and empirical standard errors were used because they are known to be less sensitive to the chosen covariance structure of the repeated measures (Venables & Ripley, 1994). Analyses were carried out using the MIXED procedure in SAS v 9.1 (SAS Institute Inc., Cary, NC).

Results
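As a concrete illustration of the central measure reported below, integration time can be extracted from timestamped joystick samples as the latency of the first deflection away from the origin. The sketch below is our own, not the authors' code; in particular, the movement threshold is a hypothetical parameter chosen for illustration, since the article does not specify one.

```python
def integration_time(samples, threshold=0.05):
    """Time from playback onset to the first joystick movement.

    `samples` is a list of (t, x, y) tuples, with the joystick resting at
    the origin (0, 0) at onset and x/y normalized to [-1, 1].  The 5%
    deflection threshold is an assumption for illustration only.
    Returns None if the participant never moved during the excerpt."""
    for t, x, y in samples:
        # Euclidean deflection of the joystick from its rest position
        if (x * x + y * y) ** 0.5 > threshold:
            return t
    return None
```

On this definition, a participant whose joystick first crosses the threshold 8.3 s into playback has an integration time of 8.3 s for that excerpt.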
Table 1. Predictors of Initial Integration Time Considered Separately (Univariate Models) or Jointly (Multivariate Models).

Predictor             Coding               Univariate model     p       Multivariate model   p
Gender                2 categories         F(1, 79) = 0.03      .86     F(1, 72) = 1.14      .29
Age                   Continuous, linear   F(1, 79) = 2.58      .11     F(1, 72) = 3.66      .06
Musicianship          2 categories         F(1, 79) = 0.09      .76     F(1, 72) = 0.39      .54
Stimulus familiarity  5 categories         F(4, 280) = 19.80    < .01   F(4, 280) = 16.40    < .01
Stimulus genre        11 categories        F(10, 789) = 6.13    < .01   F(10, 789) = 5.38    < .01
Musical preference    6 categories         F(5, 75) = 1.19      .32     F(5, 72) = 0.95      .45
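The univariate entries in Table 1 are F tests. As a simplified illustration of how such a statistic is formed, the following sketch computes a one-way ANOVA F for integration times grouped by a categorical predictor (e.g., stimulus genre). It is a toy version only: it deliberately ignores the repeated-measures covariance structure (compound symmetry, empirical standard errors) that the authors handle with SAS PROC MIXED.

```python
def one_way_anova_f(times_by_group):
    """One-way ANOVA F statistic for integration times grouped by a
    categorical predictor.  Toy illustration: does NOT model the
    within-participant correlation used in the paper's mixed models.
    Returns (F, df_between, df_within)."""
    groups = list(times_by_group.values())
    all_vals = [t for g in groups for t in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((t - sum(g) / len(g)) ** 2 for g in groups for t in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

A random-intercept-per-participant model, as fit by PROC MIXED, induces exactly the compound-symmetric within-participant correlation described in the statistical analysis above; the toy F here corresponds to the univariate column when each observation is treated as independent.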
Of the 81 participants, 25% reported rock as their preferred musical genre, 17% pop, 14% rap/hip-hop, 9% orchestral music, 6% soul, and 30% reported preferring other musical genres. Given this categorization, integration times did not differ significantly based on self-reported preferred musical genre, F(5, 75) = 1.19, p = .32 (Figure 4). Interestingly, average integration times for participants that reported a preference for soul music were 3.19 s (SEM = 1.93) longer than the average integration time for participants that reported a preference for rock music. In a posthoc analysis based on this observed difference, we found that the integration time among those preferring soul was significantly different from those preferring "other" (p < .05) or pop (p < .05), but not significantly different from those preferring orchestral (p = .30), rap/hip-hop (p = .08), or rock music (p = .09). Initial integration times did not vary appreciably between participants preferring rock, pop, rap/hip-hop, or orchestral music. Interestingly, initial integration times were significantly shorter when the stimulus genre was the same as a participant's self-reported musical preference (mean for stimulus/preference mismatch: 8.41 s, SEM = 0.52; mean for stimulus/preference match: 7.04 s, SEM = 0.53; F(1, 6994) = 13.90, p < .01) (Figure 5).

The above results were not significantly different when predictors were considered simultaneously in a multivariate model (Table 1).
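The agreement between the direction of a participant's first joystick movement and the final overall judgment, introduced earlier as part of the incremental-information-integration hypothesis, could be quantified in several ways. One plausible choice (our assumption; the excerpt does not spell out the formula) is the cosine of the angle between the two vectors in valence/arousal space:

```python
import math

def direction_agreement(first_move, final_rating):
    """Cosine similarity between the direction of the first joystick
    movement and the final (valence, arousal) judgment.  A hypothetical
    metric for illustration; the paper's exact computation may differ.
    Returns 1.0 for perfectly aligned directions, -1.0 for opposite
    ones, and NaN if either vector is zero (direction undefined)."""
    (x1, y1), (x2, y2) = first_move, final_rating
    norm1, norm2 = math.hypot(x1, y1), math.hypot(x2, y2)
    if norm1 == 0.0 or norm2 == 0.0:
        return float("nan")
    return (x1 * x2 + y1 * y2) / (norm1 * norm2)
```

Under the hypothesis above, this agreement score should increase with integration time: later first movements, made after more hypothesis iterations, should point closer to the final judgment.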
decision (Bigand, Filipic, & Lalitte, 2005; Bigand, Vieillard, et al., 2005; Perrot & Gjerdingen, 1999), as the mean time to initial response in our study was 8.31 s. We believe that this is primarily due to our experimental paradigm, which required participants to make judgments that were not of a strictly categorical nature. Rather than simply classifying each stimulus as belonging to one of several listed musical genres, participants' decisions about arousal and valence may involve more cognitive appraisal processes as to which combination of several dimensions best represented their emotional response, based on a comparison between the presently-perceived stimulus and previously-experienced emotional stimuli (Bigand, Filipic, & Lalitte, 2005; Bigand, Vieillard, et al., 2005). This explanation also highlights another difference between previous investigations and the present study: judgments about emotion and affect might require more introspection on the part of the participant than judgments regarding properties inherent to the stimulus, such as genre. A given song might elicit one emotion from participant A and a completely different sentiment from participant B (see Blood & Zatorre, 2001), despite the fact that both would agree on the song's genre. This dichotomy between classification and emotion perception is corroborated by the result that our participants' integration times seem to be mediated by top-down factors, including song familiarity and participants' favorite musical genres. The latter is particularly interesting in light of the different musical preferences that might be correlated with potential personality differences (Rentfrow & Gosling, 2003), which in turn influence the time taken to make emotional judgments. It also is interesting to note that our participants responded more quickly to stimuli from their favorite musical genre; favorite genre may be a proxy for increased song familiarity, which also shortened integration times. At any rate, if participants were using strictly bottom-up strategies in responding to the music, we would not expect to see such variation in response times between different types of participants. The nature of participants' responses indicates that making graded, introspective judgments about complex auditory stimuli requires much more time than assessing stimuli on a purely categorical basis.

We were surprised to find no significant differences in integration time between the musicians and nonmusicians in our study. Although this finding is in keeping with studies suggesting that emotions induced by music are influenced little by background factors, such as level of music education, age, or gender (Bigand, Filipic, & Lalitte, 2005; Bigand, Vieillard, et al., 2005; Kratus, 1993; Kreutz, Bongard, & von Jussis, 2002), there is other evidence to suggest that musicianship can play a role in the time-course of musical perception; for example, a study using the gating paradigm (Dalla Bella, Peretz, & Aronoff, 2003) found that musicians were able to recognize familiar songs faster (i.e., with fewer notes) than nonmusicians. However, the notes used in their study were not all of the same duration, and drawing conclusions about the absolute time required to make such judgments is difficult in the absence of temporal consistency. To the best of our knowledge, previous studies using the continuous-rating two-dimensional emotion space paradigm did not analyze their results on the basis of participants' musicianship, though in some cases the participant pools were balanced across musicians and nonmusicians (Grewe et al., 2007; Schubert, 2004). It is possible that the present study suffered from a lack of power, or too much noise in the dataset; furthermore, actively recruiting musicians (as opposed to identifying them posthoc) might have yielded better balance between the musician and MIME groups both in terms of absolute number of participants (our two groups differed in size by 11) and in terms of music training and experience.

On another note, our study also revealed that the longer participants took to listen to the musical excerpts before moving (i.e., the longer they integrated the continuous auditory information), the higher the correspondence between the angle of the initial movement and the final overall judgment of the musical excerpt. We suggest that affective listening to streaming music might work as an incremental-information-integration apparatus—i.e., as soon as initial sensory information comes in, the processing starts producing hypotheses about the emotional effect. As integration time increases and processing continues, it continually incorporates additional musical information from the sensory stream, which in turn serves to refine the emotional hypothesis.

We also found that the variability in integration time across different songs was linked to the varying tempo of each stimulus; faster songs elicited shorter integration times. This finding is in keeping with the incremental-information-integration apparatus posited above; presumably, faster songs deliver sensory information at a higher rate and therefore require less listening time prior to evaluation. It also is likely that differences in response lags are caused by higher-level structural properties of individual musical excerpts (Grewe et al., 2007; Hevner, 1936; Schubert, 2004; Sloboda, 1991; Steinbeis et al., 2006), though such analyses are beyond the scope of the present
study. A theoretical analysis of the first 20 s or so of each song stimulus, including its harmonic structure (Piston & DeVoto, 1987), melodic processes (Narmour, 1990), and rhythmic structure might yield valuable insight as to how participants' response times are affected by higher-level factors. Also, while the present study assumes a two-dimensional interpretation of emotional space, further analyses and experiments could incorporate different perspectives, such as the separation of arousal into the subtypes of tension arousal and excitatory arousal (Ilie & Thompson, 2005).

In this study, we showed that individuals require between 8.31 and 11.00 s (depending on the measurement being used) to begin formulating emotional judgments regarding musical stimuli, and that this integration time can be modulated by participant-specific variables including familiarity with the musical excerpt and preference for the excerpt's genre. Future studies might consider these results and use stimuli at least 11.00 s in length in measuring the time-course of emotional responses to music. Our results highlight the elements of music that drive emotions and begin to put these emotion determinants in a more temporally sensitive context. Music is a temporal art; by using this method we can begin to understand, at a behavioral level, how and when such temporal features lead to the development of musical emotion.

Author Note

This work was supported by a research grant from Sourcetone, LLC, given to Beth Israel Deaconess Medical Center to support research on music and emotions. The authors have no conflict of interest and no financial interests in Sourcetone, LLC.

We would like to thank L. Cuddy, S. Koelsch, and two other reviewers for their helpful comments on this manuscript.

Correspondence concerning this article should be addressed to Gottfried Schlaug, Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Avenue, Boston, MA 02215. E-mail: gschlaug@bidmc.harvard.edu
References
Altenmüller, E., Schürmann, K., Lim, V. K., & Parlitz, D. (2002). Hits to the left, flops to the right: Different emotions during listening to music are reflected in cortical lateralisation patterns. Neuropsychologia, 40, 2242-2256.

Balkwill, L. L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception, 17, 43-64.

Bernardi, L., Porta, C., & Sleight, P. (2006). Cardiovascular, cerebrovascular, and respiratory changes induced by different types of music in musicians and non-musicians: The importance of silence. Heart, 92, 445-452.

Bigand, E., Filipic, S., & Lalitte, P. (2005a). The time course of emotional responses to music. Annals of the New York Academy of Sciences, 1060, 429-437.

Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005b). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition and Emotion, 19, 1113-1139.

Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences, 98, 11818-11823.

Brown, S., Martinez, M. J., & Parsons, L. M. (2004). Passive music listening spontaneously engages limbic and paralimbic systems. Neuroreport, 15, 2033-2037.

Cunningham, J. G., & Sterling, R. S. (1988). Developmental change in the understanding of affective meaning in music. Motivation and Emotion, 12, 399-413.

Dalla Bella, S., Peretz, I., & Aronoff, N. (2003). Time course of melody recognition: A gating paradigm study. Perception and Psychophysics, 65, 1019-1028.

Fitzmaurice, G. M., Laird, N. M., & Ware, J. H. (2004). Applied longitudinal analysis. Hoboken, NJ: Wiley-Interscience.

Gabrielsson, A. (2002). Emotion perceived and emotion felt: Same or different? Musicae Scientiae, Special issue 2001-2002, 123-147.

Grewe, O., Nagel, F., Kopiez, R., & Altenmüller, E. (2007). Emotions over time: Synchronicity and development of subjective, physiological, and facial affective reactions to music. Emotion, 7, 774-788.

Hevner, K. (1936). Experimental studies of the elements of the expression in music. American Journal of Psychology, 48, 246-268.

Ilie, G., & Thompson, W. F. (2005). A comparison of acoustic cues in music and speech for three dimensions of affect. Music Perception, 23, 319-329.

Juslin, P. N., & Sloboda, J. A. (2001). Music and emotion: Theory and research. Oxford: Oxford University Press.

Juslin, P. N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31, 559-575.

Koelsch, S., Fritz, T., von Cramon, D. Y., Müller, K., & Friederici, A. D. (2006). Investigating emotion with music: An fMRI study. Human Brain Mapping, 27, 239-250.

Kratus, J. (1993). A developmental study of children's interpretation of emotion in music. Psychology of Music, 21, 3-19.

Kreutz, G., Bongard, S., & von Jussis, J. (2002). Cardiovascular responses to music listening: The effects of musical expertise and emotional expression. Musicae Scientiae, 6, 257-278.

Krumhansl, C. L. (1996). A perceptual analysis of Mozart's Piano Sonata K.282: Segmentation, tension, and musical ideas. Music Perception, 13, 401-432.

Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology, 51, 336-353.

Krumhansl, C. L. (2002). Music: A link between cognition and emotion. Current Directions in Psychological Science, 11, 45-50.

Menon, V., & Levitin, D. J. (2005). The rewards of music listening: Response and physiological connectivity of the mesolimbic system. Neuroimage, 28, 175-184.

Mizuno, T., & Sugishita, M. (2007). Neural correlates underlying perception of tonality-related emotional contents. Neuroreport, 18, 1651-1655.

Nagel, F., Kopiez, R., Grewe, O., & Altenmüller, E. (2007). EMuJoy: Software for continuous measurement of perceived emotions in music. Behavior Research Methods, 39, 283-290.

Narmour, E. (1990). The analysis and cognition of basic melodic structures: The implication-realization model. Chicago: University of Chicago Press.

Pallesen, K. J., Brattico, E., Bailey, C., Korvenoja, A., Koivisto, J., Gjedde, A., et al. (2005). Emotion processing of major, minor, and dissonant chords: A functional magnetic resonance imaging study. Annals of the New York Academy of Sciences, 1060, 450-453.

Panksepp, J. (1995). The emotional sources of "chills" induced by music. Music Perception, 13, 171-207.

Perrot, D., & Gjerdingen, R. O. (1999). Scanning the dial: An exploration of factors in the identification of musical style [Abstract]. Proceedings of the 1999 Society for Music Perception and Cognition (p. 88). Evanston, IL: SMPC.

Piston, W., & DeVoto, M. (1987). Harmony. New York, NY: WW Norton.

Rentfrow, P. J., & Gosling, S. D. (2003). The do re mi's of everyday life: The structure and personality correlates of music preferences. Journal of Personality and Social Psychology, 84, 1236-1256.

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.

Sammler, D., Grigutsch, M., Fritz, T., & Koelsch, S. (2007). Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music. Psychophysiology, 44, 293-304.

Schellenberg, E. G., Iverson, P., & McKinnon, M. C. (1999). Name that tune: Identifying popular recordings from brief excerpts. Psychonomic Bulletin and Review, 6, 641-646.

Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33, 239-251.

Schubert, E. K. (1996). Continuous response to music using a two dimensional emotion space. Proceedings of the 4th International Conference of Music Perception and Cognition (pp. 263-268). Montreal, Canada: ICMPC.

Schubert, E. K. (1999). Measuring emotion continuously: Validity and reliability of the two-dimensional emotion-space. Australian Journal of Psychology, 51, 154-165.

Schubert, E. K. (2004). Modeling perceived emotion with continuous musical features. Music Perception, 21, 561-585.

Sloboda, J. A. (1991). Music structure and emotional response: Some empirical findings. Psychology of Music, 19, 110-120.

Steinbeis, N., Koelsch, S., & Sloboda, J. A. (2006). The role of harmonic expectancy violations in musical emotions: Evidence from subjective, physiological, and neural responses. Journal of Cognitive Neuroscience, 18, 1380-1393.

Terwogt, M. M., & van Grinsven, F. (1991). Musical expression of moodstates. Psychology of Music, 37, 99-109.

Venables, W. N., & Ripley, B. D. (1994). Modern applied statistics with S-Plus. New York, NY: Springer.

Vines, B. W., Krumhansl, C. L., Wanderley, M. M., Dalca, I. M., & Levitin, D. J. (2005). Dimensions of emotion in expressive musical performance. Annals of the New York Academy of Sciences, 1060, 462-466.

Vines, B. W., Krumhansl, C. L., Wanderley, M. M., & Levitin, D. J. (2006). Cross-modal interactions in the perception of musical performance. Cognition, 101, 80-113.