Correlations between Major/Minor Mode Key Signatures and Perceived Emotions

Abstract

People often listen to music because it makes them feel emotional. This could be caused by memories associated with songs or something about a piece that sparks interest in the listener; still, listeners may also feel something because of the characteristics of the music itself. Studies have shown that music can correlate with specific emotions, particularly through major and minor modes. This project investigates how different key signatures cue certain perceived emotions in the context of simple melodies. Data were collected through a voluntary response survey created by the author. The stimuli were developed by selecting a single six-second melody from an open-source music theory textbook, which used the melody as an example of a clear tonal center. Using an online music transposition platform, I created different versions of the melody by transposing it into keys built on all seven natural notes of the C major scale (C, D, E, F, G, A, and B) in both major and minor modes, resulting in 14 versions of the melody. Participants then completed a short online survey in which they listened to the melodies and chose the primary emotion each expressed from six options: happiness, sadness, fear, anger, disgust, and surprise. It was hypothesized that melodies in major key signatures would correlate with net positive emotions (happiness), while minor key signatures would correlate with net negative emotions (such as sadness and anger); this hypothesis was supported. Conclusions were drawn not only from strong visual correlations between major/minor mode melodies and net positive/negative emotions, as seen in column charts, but also from a Chi-Square Test of Independence used to determine whether these observed correlations were statistically significant. The analysis showed a statistically significant relationship between major mode melodies and net positive emotions, as well as between minor mode melodies and net negative emotions.

Introduction

This paper investigates how the perceived emotions of a melody correlate with the major or minor mode in which the piece is situated. Perceived emotions refer to the emotions that the listener feels the music is expressing; they are evoked by the music and determined by the listener’s interpretation of the tone and mood of the piece1. Additionally, the study examines net positive and negative emotions, which refer to the overall emotional impact of the music2. If the music evokes more positive emotions than negative ones, it is considered to have a net positive emotional impact. Conversely, if it evokes more negative emotions, it has a net negative emotional impact.

Key signatures are also central to this study. A key signature consists of the sharps (♯) or flats (♭) that appear at the beginning of a piece of music; it determines which key the piece is written in, that is, which notes will naturally be played in the harmony or melody. Major and minor modes in particular are foundational in determining the emotional tone of the music3. Because of the different interval patterns between the notes in these modes, the major mode typically produces a “happier” or brighter sound, as in the familiar “Happy Birthday” song, while the minor mode tends to evoke a “sadder” or darker sound, as in “The Imperial March” from Star Wars4.
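Those interval patterns can be made concrete. The sketch below is a minimal Python illustration written for this discussion, not part of the study’s materials: it builds the major and natural minor scales from their whole-step/half-step patterns, using MIDI note numbers simply as a convenient pitch encoding.

```python
# Build major and natural minor scales from their interval patterns.
# MIDI note numbers are used as a convenient pitch encoding (60 = middle C).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # whole-whole-half-whole-whole-whole-half
NATURAL_MINOR_STEPS = [2, 1, 2, 2, 1, 2, 2]  # whole-half-whole-whole-half-whole-whole

def build_scale(tonic_midi, steps):
    """Return the scale as MIDI note numbers, starting from the tonic."""
    notes = [tonic_midi]
    for step in steps:
        notes.append(notes[-1] + step)
    return notes

print(build_scale(60, MAJOR_STEPS))          # C major: [60, 62, 64, 65, 67, 69, 71, 72]
print(build_scale(60, NATURAL_MINOR_STEPS))  # C minor: [60, 62, 63, 65, 67, 68, 70, 72]
```

The single lowered third scale degree (64 versus 63 above) is the most salient difference between the two modes and the main driver of their contrasting “bright” and “dark” characters.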

The significance of this research lies in determining whether there is a statistically significant correlation between specific emotions and melodies in either major or minor modes. Understanding how key signatures and modes influence emotional responses could offer valuable insight into why individuals choose certain types of music for specific moods or situations, and how these musical elements might alter one’s emotional perspective. This study was inspired by the common experience of selecting music based on mood and by curiosity about whether a scientifically established relationship underlies these choices.

The findings of this research contribute to the broader conversation surrounding music and emotion, further solidifying or challenging existing theories on the subject. In an era where people frequently listen to music and experience emotional reactions—often without being aware of the musical elements driving these emotions—this study aims to clarify how perceived emotions are shaped by key signatures and modes. Individuals may use these insights to consciously choose music according to their mood; for example, they could choose songs in major keys to counteract stress. Therapists could likewise apply knowledge of how major and minor modes influence emotions, choosing music in a minor key, for instance, to create a safe space for processing grief or sadness.

The paper begins with an introduction to the topic, followed by a brief literature review that introduces the study within the context of pre-existing research. Then, the methodology used for data collection is discussed, and both quantitative and qualitative analyses of the results are presented. The paper concludes with key takeaways and recommendations for future research.

Background Literature

Researchers have found various ways in which music can evoke or regulate emotions2,5,6. Several key challenges have been highlighted, such as defining broad terms like “emotions” and distinguishing between emotions felt by the listener and those recognized within music5. Over time, research has shifted towards distinguishing between perceived (recognized) and induced (felt) emotions1, clarifying that emotions in music can be more complex than everyday emotions. This becomes more complex when considering how individual personality traits, such as empathy, impact emotional responses to music6.

Empathy, for example, has been a strong predictor of emotional responses to sad music, with high-empathy individuals engaging more deeply in emotional cues from music6. Sad music can also elicit both positive and negative emotional experiences, depending on factors like familiarity and the listener’s emotional state7. Similarly, other studies have examined how genres of music and musical characteristics, such as tempo and key signature, influence emotional regulation. Genres like classical music are often suggested to be associated with emotional regulation, yet the role of popular genres remains underexplored7. Studies have also suggested that faster tempos can lead to emotions such as excitement, while slower tempos can evoke feelings of sadness or calmness8. The harmony of a piece can also influence emotional responses, with consonant harmonies typically evoking positive emotions and dissonant harmonies typically evoking tension or discomfort8. Instrumentation can also play a role: string instruments have been linked to sadness or melancholy, while brass instruments have been linked to joy or triumph8. These factors must be considered as potential confounds in studies, such as this one, that explore the emotional aspects of music, since they could contribute to emotional perception beyond the key signature.

In the survey itself, the possibilities of emotions that participants could select included: happiness, sadness, fear, anger, disgust, and surprise. These emotions were chosen because they represent the six basic emotions identified by Ekman in 1992 as universally recognized across different cultures9. For the clarity of this paper, defining these emotions is necessary: happiness is typically associated with feelings of joy and contentment, sadness with sorrow and grief, fear with anxiety or apprehension, anger with frustration and aggression, disgust with revulsion, and surprise with unexpectedness or astonishment9. Studies have shown that these fundamental emotions are frequently used in music perception research to explore emotional responses, as they encompass both positive and negative emotional spectra that are easily relatable to musical stimuli10. Including these options allows for a comprehensive evaluation of the participants’ perceived emotional responses to the stimuli.

In a more recent study, researchers used methods such as digital interfaces to manipulate musical features and observe emotional responses11. These methodologies are advancing our understanding of how specific musical elements, like pitch and key, affect emotional perception. The current study adds to this body of research by focusing specifically on how simple melodies in different major and minor key signatures cue perceived emotions. Through voluntary response surveys and statistical analysis, this study seeks to build on the understanding of how musical characteristics shape emotional responses.

The predicted outcome was a statistically significant correlation between major mode key signatures and net positive emotions, such as happiness. It was also hypothesized that there would be correlations between minor mode key signatures and net negative emotions, such as fear, anger, and sadness.

Methods

This study was conducted through a cross-sectional observational design. A Microsoft Forms survey was distributed, and participant responses were collected using a voluntary sampling method. Gathering data at a single point in time allowed for descriptive data on the variables of interest without altering any experimental conditions. Voluntary participation and an online survey platform were chosen because they offered an accessible and effective method of data collection. In the survey, participants were shown the prompt, “After listening to the clip, which emotion do you perceive?” This was followed by an embedded six-second audio clip and six multiple-choice options, each a different emotion, from which participants could pick one. The prompt and multiple-choice options were identical for each participant throughout the entire survey, and the stimuli’s main melody, rhythm, duration, and timbre were held constant. Only the key signature and major/minor mode of the stimuli changed from question to question, so the perceived emotion that participants chose from the multiple-choice options depended on their perception of these independent variables. Although real-world music perception is influenced by a variety of factors (such as instrumentation, harmony, tempo, dynamics, and rhythm), these were all kept controlled and constant across the stimuli, excluding the key signature and major/minor mode. The emotions that participants could select were: happiness, sadness, fear, anger, disgust, and surprise9.

Creating the survey took many steps. To make the stimuli, a melody with a clear tonal center was needed, and an open-source music theory textbook provided an example melody that fit the requirements. The original melody was found in Open Music Theory, Version 2, by Mark Gotham et al., where it serves as an example of a strong tonal center that starts and ends on the same note. This was important because it allowed participants’ reactions to the stimuli to be based solely on the key signature and on whether the melody was in a major or minor mode. The melody notes from the textbook were manually entered into the Flat.io online notation software, and each note was manually transcribed to fit each desired key signature before being exported as an audio file. To ensure consistent musical qualities across transpositions, the process maintained the same tempo, dynamic levels, articulation, and note spacing as the original melody. The melody was first transcribed in C major, then transposed into 13 additional keys, totaling 14 unique stimuli: 7 major and 7 minor keys, with one of each mode for each of seven key signatures (C, D, E, F, G, A, and B). The transpositions followed standard key signature adjustments, maintaining the same melodic contour and rhythm. The final stimuli were exported as MP3 files at a consistent 44.1 kHz sample rate and embedded into the survey so participants did not have to leave the Forms tab to listen to the clips.
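The transpositions were done by hand in Flat.io, but the operation itself is mechanical and can be illustrated in code. The following is a minimal Python sketch written for this discussion, not the tool used in the study; the placeholder melody and the MIDI-number encoding are assumptions for illustration only.

```python
# Sketch of the transposition logic used to build the 14 stimuli.
# The melody below is a hypothetical placeholder, NOT the study's melody.

# Melody in C major as MIDI note numbers (60 = middle C), starting and ending on the tonic.
MELODY_C_MAJOR = [60, 62, 64, 65, 67, 65, 64, 62, 60]

# Semitone offsets from C for the seven natural-note tonics.
TONIC_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def transpose(melody, semitones):
    """Shift every note by a fixed number of semitones (preserves contour and rhythm)."""
    return [note + semitones for note in melody]

def to_parallel_minor(melody, tonic_midi):
    """Lower scale degrees 3, 6, and 7 by one semitone (natural minor)."""
    lowered = {4, 9, 11}  # semitone positions of degrees 3, 6, 7 above the tonic
    return [note - 1 if (note - tonic_midi) % 12 in lowered else note for note in melody]

stimuli = {}
for name, offset in TONIC_OFFSETS.items():
    major_version = transpose(MELODY_C_MAJOR, offset)
    stimuli[f"{name} major"] = major_version
    stimuli[f"{name} minor"] = to_parallel_minor(major_version, 60 + offset)

print(len(stimuli))  # 14 versions: 7 major + 7 minor
```

Because transposition is a uniform semitone shift and the minor conversion only alters three scale degrees, every version keeps the original contour, rhythm, and duration, matching the controls described above.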

The survey’s voluntary sampling through Microsoft Forms also utilized quota sampling to bring in a predetermined number of responses, ensuring a sufficient amount of data for meaningful analysis. Collecting the responses took eight days; one response was excluded from the data analysis because it was a duplicate entry in Microsoft Forms, with identical timestamps and responses, including the participant’s name, leaving 50 unique responses from 50 participants. The participants’ average age was 24.88 years, the median was 16, and the mode was 16 (that age value appeared 16 times). 78% of participants identified as female, 20% as male, and 2% preferred not to say. As for musical exposure, participants were asked to rate their familiarity with the genre of the stimuli on a Likert scale of 1 through 5 (1 being very unfamiliar and 5 being very familiar); the average rating was 3.22, with 36% of participants responding with a 3. Participants heard the stimuli in an order fully randomized by Forms. Each participant listened to the stimuli in a different order, which helped mitigate bias because each stimulus was independent of the others and there was no logical sequence in which they needed to be heard. Randomizing the question order mitigated biases that a fixed sequence could have introduced and avoided fatigue effects that might otherwise have led to less focused answers on the final questions across responses.
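As a worked illustration of these summary statistics, the snippet below applies Python’s statistics module to an age list. The ages shown are hypothetical placeholders (the study’s raw ages were not published), so the printed numbers do not reproduce the reported values.

```python
# Worked example of the descriptive statistics reported above.
# The ages below are hypothetical placeholders; the study's raw data
# were not published, so these do not reproduce the reported values.
import statistics

ages = [14, 15, 16, 16, 16, 16, 17, 18, 25, 42, 55, 61]

print(f"mean = {statistics.mean(ages):.2f}")  # average age
print(f"median = {statistics.median(ages)}")  # middle value when sorted
print(f"mode = {statistics.mode(ages)}")      # most frequent value
```

A mean well above the median, as reported in the study (24.88 versus 16), indicates a right-skewed age distribution: mostly teenage respondents with a smaller number of much older participants pulling the average up.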

Results

Each clip was six seconds long, and with 14 clips in total, participants listened to 84 seconds of stimuli. Including the time taken to read the prompts and select a perceived emotion, participants completed the survey in an average of 4 minutes and 21 seconds.

After excluding the duplicate response, a total of 50 valid responses were analyzed. Above is a column chart comparing the responses for all survey questions regarding major and minor mode stimuli, as well as the data table used to calculate the chi-squared value for assessing statistical significance. For all but three stimuli, happiness was the most selected of the six emotions; those three stimuli were melodies in a minor mode. As the column chart shows, the major mode stimuli had a strong relationship with the perceived emotion of happiness: 65% of responses to major mode stimuli selected happiness. The minor mode stimuli did not relate as strongly to any single perceived emotion; fear was selected in 26% of responses to minor mode stimuli, a higher percentage than any other emotion option for those melodies.
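The column chart itself is not reproduced here, but a chart of this kind can be rebuilt from the response counts. The sketch below uses matplotlib with hypothetical counts chosen to be consistent with the reported percentages (65% happiness for major mode, 26% fear for minor mode); they are not the study’s actual tallies.

```python
# Rebuild the major/minor response column chart from hypothetical counts.
# These placeholders match the reported percentages only approximately;
# they are NOT the study's actual data.
import numpy as np
import matplotlib.pyplot as plt

emotions = ["happiness", "sadness", "fear", "anger", "disgust", "surprise"]
major_counts = [228, 30, 25, 20, 17, 30]  # 350 major-mode responses (7 stimuli x 50 people)
minor_counts = [70, 80, 91, 40, 29, 40]   # 350 minor-mode responses

x = np.arange(len(emotions))
width = 0.4
plt.bar(x - width / 2, major_counts, width, label="Major mode")
plt.bar(x + width / 2, minor_counts, width, label="Minor mode")
plt.xticks(x, emotions)
plt.ylabel("Number of responses")
plt.title("Perceived emotion by mode (hypothetical counts)")
plt.legend()
plt.show()
```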

Statistical analysis was conducted with a Chi-Square Test of Independence, a statistical hypothesis test used to determine whether two categorical variables are likely related by comparing observed frequencies to expected ones. It is used to see whether patterns in categories, like key signatures and perceived emotions, are related or simply occur by chance. The null hypothesis stated that the emotions chosen for the specific stimuli were selected at random. The chi-squared value was calculated by summing all values in the far-right column of the data table, which equals 149.886. Once the chi-squared value is calculated, it must be compared to the critical chi-squared value, which can be found in standard reference tables using two values: the significance level and the degrees of freedom. The significance level is 0.05, which is standard, and the degrees of freedom equal five, since a contingency table with two modes and six emotions gives (2 − 1) × (6 − 1) = 5. For this combination of values, the critical chi-squared value equals 11.070. Since the calculated chi-squared value is larger than the critical chi-squared value, the null hypothesis can be rejected, meaning the correlations between major and minor modes and certain perceived emotions were statistically significant.
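This test is straightforward to reproduce in code. The sketch below runs scipy’s chi-square test of independence on a mode-by-emotion contingency table; the counts reuse the hypothetical placeholders from the chart sketch above, so the printed statistic will not equal the study’s reported 149.886.

```python
# Chi-Square Test of Independence on the mode x emotion contingency table.
# The counts are hypothetical placeholders (see the chart sketch above);
# the study's actual table produced chi-squared = 149.886.
from scipy.stats import chi2_contingency

observed = [
    [228, 30, 25, 20, 17, 30],  # major-mode responses per emotion
    [70, 80, 91, 40, 29, 40],   # minor-mode responses per emotion
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3g}")
# dof = (2 rows - 1) * (6 columns - 1) = 5; with alpha = 0.05 the critical
# value is 11.070, so any chi2 above that rejects the null hypothesis.
```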

Although the Chi-Square Test of Independence showed that the relationship between key signature (mode) and perceived emotion was statistically significant, this alone does not indicate the strength or practical importance of the relationship. To assess this, Cramér’s V was calculated to measure the strength of the association. It is scaled between 0 (no association) and 1 (perfect association), so values closer to 1 indicate a stronger relationship between variables. Cramér’s V was calculated to be approximately 0.774, which exceeds the 0.5 level conventionally taken to indicate a large effect. While the Chi-Square test confirmed that an association exists, Cramér’s V shows that the connection between key signature and perceived emotion is not only statistically significant but also strong enough to have practical significance. This suggests that the emotional responses to major and minor modes are not random but rather reflect a meaningful and consistent relationship with explanatory power in a real-world context.
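A common formula for Cramér’s V is V = √(χ² / (n(k − 1))), where n is the total number of observations and k is the smaller of the two category counts (here k = 2 modes). The sketch below applies that formula; because the paper does not state exactly how responses were tallied, the value of n used is an assumption for illustration, and the printed result is not meant to reproduce the reported 0.774.

```python
# Cramér's V: effect size for a chi-square association, scaled 0..1.
# V = sqrt(chi2 / (n * (k - 1))), where n is the number of observations
# and k is the smaller of the two category counts (2 modes vs. 6 emotions).
import math

def cramers_v(chi2, n, n_rows, n_cols):
    k = min(n_rows, n_cols)
    return math.sqrt(chi2 / (n * (k - 1)))

# Uses the study's reported chi-squared value, but n is an assumption here,
# so the output is illustrative rather than a reproduction of the paper's 0.774.
print(cramers_v(149.886, n=700, n_rows=2, n_cols=6))  # ~0.46 with n = 700
```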

The survey results indicate a range of familiarity with the stimuli among respondents. As seen in the figure above, the average familiarity rating was 3.22, with responses distributed across all levels. The most common rating was Level 3 (neutral familiarity) with 18 respondents, followed by Level 2 with 11 respondents and Level 5 with 10 respondents. This variation suggests that participants had diverse levels of prior exposure to the musical style in which the stimuli were presented.

Discussion

These results support the hypothesis: the major mode stimuli were most often perceived as expressing happiness, a net positive emotion. The same holds for the minor mode stimuli, which the hypothesis predicted would relate most strongly to net negative emotions; in this case, the dominant emotion was fear.

These results support previous findings that major mode melodies are commonly associated with positive emotions like happiness, while minor mode melodies are perceived as evoking negative emotions, such as sadness or fear12,13. However, while those studies focused on a broader range of emotional responses, this research specifically highlighted the dominance of fear in the minor mode, which adds a nuanced understanding of how different negative emotions may be prioritized within this mode. This suggests that not all negative emotions are perceived equally in minor mode stimuli, contributing to the growing body of research on how musical structure influences emotional interpretation.

One possible explanation for why fear was more strongly associated with minor mode stimuli than sadness could be structural elements of the melody itself, such as rhythm or dissonance; research has shown that such elements can amplify certain emotional responses14,15. Fear, being a more urgent or unsettling emotion, may have been more readily triggered by the minor key’s inherent tension, particularly its more dissonant intervals (tempo was held constant in this study). These findings suggest that fear’s prevalence in responses to minor mode stimuli may be linked to the interaction of mode with other musical elements, which emphasizes the importance of considering multiple factors when analyzing musical emotion.

This empirical research provided evidence that major mode melodies are often perceived as expressing net positive emotions, while minor mode melodies are often perceived as expressing net negative emotions. There were, however, several limitations in the data collection process, as well as data that had to be excluded. One option shown to respondents was the emotion of surprise, which can be a net negative or net positive emotion depending on context, introducing ambiguity into participant responses. Another limitation was playback volume: each respondent’s device and settings could have affected their ability to hear certain stimuli if the volume was turned down too low. Future studies may benefit from excluding emotions with such dual interpretations and could include additional blended emotions to give respondents more nuanced options; for example, differentiating between ‘anxious fear’ and ‘startled fear’ could clarify how different subtypes of fear interact with musical mode. To further investigate the emotional responses, future research could also examine the tempo and dynamics of minor mode melodies to see whether these elements contribute to the higher association with fear.

Additionally, the inherent self-selection bias of a voluntary online survey was a limitation: participants who chose to respond may have been more musically inclined or emotionally attuned to music, potentially skewing the results. Although participants’ musical familiarity varied, future studies could use a random sampling method to mitigate this bias and provide a more representative dataset. A randomized or stratified sampling method could also have addressed the gender imbalance (78% female participation) and age bias (a modal and median age of 16); these factors could have affected how emotions were perceived, so a sampling method that balances gender, age, and musical background should be considered in future extensions of the study. Finally, to improve the generalizability of the study, future iterations could validate the findings across different melodic patterns by including multiple distinct melodies in the stimuli.

While the hypothesis predicted a correlation between the minor mode and net negative emotions, fear emerged as the dominant emotion. This was somewhat unexpected, as sadness is often cited as the most frequent emotional response to minor modes in other studies16. This deviation suggests that the specific melodic structure or other contextual factors, such as tempo or timbre, may play a larger role in emotional interpretation than previously thought, and that some emotional responses, such as fear, may be more context-dependent than previously understood. Future research could explore these variables in more detail to better understand the full range of emotional responses elicited by different modes.

The findings of this study have implications for fields such as music therapy and emotional AI, where understanding the emotional impact of musical modes can aid in developing more effective therapeutic techniques or responsive technologies. For instance, in music therapy, minor mode melodies could be strategically used not only to evoke sadness for catharsis but also to manage fear-related emotional states, such as anxiety. Similarly, AI-driven music recommendation systems could refine their algorithms to differentiate between negative emotions like sadness and fear, enhancing personalization in mood-based music selection.

Furthermore, the findings contribute to the broader field of affective neuroscience by demonstrating how musical structure can systematically influence emotional perception. The observed dominance of fear in minor mode melodies suggests that specific features—such as tempo, rhythmic complexity, or harmonic dissonance—might act as intensifiers of emotional responses. Future interdisciplinary studies could further investigate how these variables interact with neurological and psychological mechanisms to shape musical emotion processing.

Future research could further investigate how different musical elements, such as rhythm and harmony, interact with mode to shape emotional perception, offering even more precise tools for emotional modulation through music. By expanding the study across different cultural contexts, researchers could also examine whether these emotional associations are universal or shaped by cultural exposure and musical training. The continued exploration of music’s emotional power could offer further insights into human perception, emotional processing, and the potential applications of music across healthcare, media, and artificial intelligence.

Conclusion

This study provided valuable insights into how musical modes influence the perception of emotions. Specifically, I found that major mode melodies are predominantly perceived as expressing net positive emotions, while minor mode melodies are associated with net negative emotions. These findings contribute to the understanding of the emotional language of music and reinforce the significance of key signatures and modes when considering perceived emotions.

Moving forward, the next steps in this research could involve examining the emotional associations of specific key signatures within each mode. By narrowing the focus to keys, future studies may uncover whether certain signatures within the major or minor mode evoke more nuanced emotional responses. This could deepen comprehension of how subtle differences in tonality contribute to emotional perception.

Additionally, expanding the research to explore felt emotions—those directly experienced by listeners while hearing the melodies—could provide new insights into the distinction between perceived and felt emotional responses. This direction could be particularly beneficial in areas such as music therapy or media composition, where understanding both perceived and felt emotions is crucial to influencing emotional states.

Acknowledgement

The author expresses gratitude to Natalie Miller at Princeton University for her valuable guidance and insight throughout the process.

References

  1. A. Lamont, T. Eerola, Music and emotion: themes and developments. Musicae Scientiae. 15, 212-213 (2011).
  2. P. N. Juslin, P. Laukka, Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin. 129, 770-814 (2003).
  3. R. Turek, The Complete Idiot’s Guide to Music Theory. Alpha Books (2002).
  4. B. R. Burnham, E. Long, J. Zeide, Pitch direction on the perception of major and minor modes. Attention, Perception, & Psychophysics. 83, 399-414 (2021).
  5. P. N. Juslin, M. R. Zentner, Current trends in the study of music and emotion: overture. Musicae Scientiae. 3-21 (2001-2002).
  6. J. K. Vuoskoski, T. Eerola, The pleasure evoked by sad music is mediated by feelings of being moved. Frontiers in Psychology. 8 (2017).
  7. N. Cook, Music as Creative Practice. Oxford University Press (2018).
  8. J. C. Hailstone, R. Omar, S. M. Henley, C. Frost, M. G. Kenward, J. D. Warren, It’s not what you play, it’s how you play it: timbre affects perception of emotion in music. Quarterly Journal of Experimental Psychology. 62, 2141-2155 (2009).
  9. P. Ekman, An argument for basic emotions. Cognition and Emotion. 6 (1992).
  10. P. N. Juslin, P. Laukka, Expression, perception, and induction of musical emotions: a review and a questionnaire study of everyday listening. Journal of New Music Research. 33, 563-590 (2004).
  11. A. M. Grimaud, T. Eerola, Emotional expression through musical cues: a comparison of production and perception approaches. PLoS One. 17 (2022).
  12. H.-A. Arjmand, J. Hohagen, B. Paton, N. S. Rickard, Emotional responses to music: the influence of lyrics and mode. Psychology of Music. 45, 457-472 (2017).
  13. J. C. Hailstone, R. Omar, S. M. Henley, C. Frost, M. G. Kenward, J. D. Warren, It’s not what you play, it’s how you play it: timbre affects perception of emotion in music. Quarterly Journal of Experimental Psychology. 62 (2009).
  14. E. Bigand, B. Poulin-Charronnat, Are we “experienced listeners”? A review of the musical capacities that do not depend on formal musical training. Cognition. 100, 100-130 (2006).
  15. P. N. Juslin, D. Västfjäll, Emotional responses to music: the need to consider underlying mechanisms. The Behavioral and Brain Sciences. 31, 559-575 (2008).
  16. G. Carraturo, V. Pando-Naude, M. Costa, P. Vuust, L. Bonetti, E. Brattico, The major-minor mode dichotomy in music perception: a systematic review and meta-analysis on its behavioural, physiological, and clinical correlates. Physics of Life Reviews. 52, 80-106 (2024).
