In Lesson 5, we're going to consider language, speech, and cultural differences in music. Even though, as we discussed in Lesson 4, there are similarities in the way emotion gets expressed across cultures, the music of different cultures obviously sounds different, and a fundamental question in musicology is why that should be the case. If you're a Westerner and have dined in a Chinese restaurant, you know that the music you hear in the background sounds very different from the pop music you hear on your car radio. And if you're an Easterner, you'll know that the traditional music you're used to in China is very different from the pop music that everybody worldwide is now exposed to, probably more than they want to be. What's the reason for those differences in the traditional music of different cultures? The implication of what we've been talking about is that it may well have something to do with the tonal nature of the speech that each culture uses in its language.

So let me make a point that most of you probably know anyway. I mentioned earlier that there are some 6,000 languages around the world, but across very different cultures they fall into two major categories: languages that are called tone languages, and languages that are called non-tone languages. What does that mean? In tone languages, examples being standard Mandarin, Thai, and Vietnamese, speakers use the tone of each syllable to convey a lexical meaning, an actual word meaning, by virtue of the pitch contour they give it. In standard Mandarin, for instance, the syllable "ma" can mean mother, hemp, horse, or scold depending on which of its four tones is used; across tone languages there are typically four, five, or six such tones, as many of you watching this will know. In a non-tone language, tones are not used in that way: syllables carry no lexical tonality in non-tone languages such as American English, German, and French. Again, I think all of us have been exposed to these kinds of languages and have heard them many times, but have probably not thought about what the differences are. I want you to hear the differences between tone languages, which use tones to express lexical meanings, and non-tone languages, which don't.

So let's listen to a tone and a non-tone language; in this case the tone language is standard Mandarin and the non-tone language is Korean. I'll use these two languages because most people watching this video probably don't understand the words being spoken in Mandarin or Korean. It's a little easier to hear the differences between standard Mandarin, a tone language, and Korean, a non-tone language, if you aren't presented with the complication of the meanings of the words that are actually being spoken.

>> Hi, my name is Jay, and I'm a native Korean speaker.

>> And I'm Jin, and I'm a native Mandarin speaker. This is the sentence we are going to be reading. We are both undergraduates majoring in neuroscience here, and we are having a good time assisting with this project.

>> In Korean, the sentence is [FOREIGN].

>> In Mandarin, the sentence is [FOREIGN].

>> So in thinking about how you compare tone and non-tone languages, and how the speech of those cultures, the vocalization of those cultures, might influence their music, let's come back to the issue of speech prosody.
I mentioned this to you before, but let me say it again, because this is the method we're going to use to compare the tonal differences in tone and non-tone languages. Speech prosody is basically the up-and-down pitch in a sequence of speech. So here is a sentence, and this is the time signal of the sentence being spoken. Each component of the speech is a word or syllable that is analyzed in terms of its frequency. Remember that speech contains voiced segments, segments that are tonal, that have a fundamental frequency; these are generally the vowels, as we've talked about before. And then you have consonants, which are non-tonal; they generally come at the beginning and end of syllables, bracketing the vowel nucleus, which is tonal for the most part. That's what's being recorded here in this time signal of a sentence being spoken. For any one of these tonal segments you can extract the fundamental frequency, and there are very user-friendly algorithms that do this. So the yellow lines that you see here are the frequencies of the tonal, voiced parts of speech. For each one you can compute the average fundamental frequency and represent it as a bar, transforming these messier frequency traces into a simple bar that represents the fundamental over that voiced-speech segment. And then you can ask, in the same way we did before for a tone language or a non-tone language: what are the intervals between the fundamentals of the voiced segments of speech? So these dotted lines go up a little, up a lot, down. Each dotted line is an interval, not a musical interval, but a prosodic interval, a pitch interval, expressed in that speech between one voiced segment and the next. The reason for doing this is that it's easy to compare these to the pitch intervals going on in music, and to ask the basic question: is the prosody exemplified in a tone language or a non-tone language related to the prosody, the interval characteristics, of the music? And the reason for making that comparison is to ask whether the language that people in a culture are exposed to every day influences the music that they have traditionally composed.

Here are some examples of speech prosody in different languages that can be analyzed in this way. This is an English monologue, a French monologue, a German monologue; those are all non-tone languages. So we're going to be asking, for native speakers speaking these languages in their native tongue, what is the prosody, the up-and-downness, of that language, and how can it eventually be compared to the music? The tone languages we're going to be looking at here, the monologues in the tone languages, are standard Mandarin, Vietnamese, and Thai, all tone languages that are different from each other, just as German, French, and English are different from each other. Of course they're related historically in some sense, but they're fundamentally different languages. So let's look at that, and compare the interval size in speech with the interval size in music for tone and non-tone languages. So here is the analysis of music in these terms, and the analysis of speech. In music, we're looking at the tonal intervals.
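To make the method concrete, here is a minimal sketch of this kind of prosodic-interval analysis in Python, assuming the librosa library and its pYIN pitch tracker (one of the "user-friendly algorithms" alluded to above, though not necessarily the one used in the original study). Grouping consecutive voiced frames into segments is a crude stand-in for the syllable-level segmentation described in the lecture, and the file name is a hypothetical placeholder.

    import numpy as np
    import librosa

    def prosodic_intervals(wav_path="monologue.wav", fmin=75.0, fmax=400.0):
        # Load the recording at its native sampling rate.
        y, sr = librosa.load(wav_path, sr=None)
        # Frame-by-frame fundamental frequency; f0 is NaN where the
        # frame is unvoiced (consonants, silence).
        f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
        # Group consecutive voiced frames into segments and average the
        # fundamental over each one (the "bars" described above).
        segment_means, current = [], []
        for hz, voiced in zip(f0, voiced_flag):
            if voiced:
                current.append(hz)
            elif current:
                segment_means.append(np.nanmean(current))
                current = []
        if current:
            segment_means.append(np.nanmean(current))
        # Pitch interval between successive voiced segments, expressed
        # in semitones: 12 * log2(f2 / f1).
        seg = np.asarray(segment_means)
        return 12.0 * np.log2(seg[1:] / seg[:-1])

Positive values are upward pitch jumps and negative values downward ones, so the distribution of their magnitudes characterizes the prosody of a monologue.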
In the music of tone-language-speaking cultures and non-tone-language-speaking cultures, the tone-language cultures are the red bars and the non-tone-language cultures are the blue bars. And here is the speech of these cultures: the prosodic intervals in tone languages in red, and the prosodic intervals in non-tone languages in blue. What you see is, I think, just what you might expect. First, take a look at the speech: the tone languages use larger intervals in the ordinary expression of speech, as exemplified by the monologues I just showed you, analyzed in the same way in both music and speech, looking at the jumps between one tonal interval and the next, whether in speech or in music. The bottom line is that the tone-language cultures use larger intervals in their speech than the non-tone-language cultures, shown in blue, and that this is reflected in the intervals in the music of tone-language cultures. The traditional music of the three cultures we've been talking about, Thai, Mandarin, and Vietnamese, again shows larger intervals for the tone languages, the red bars, than for the non-tone languages, the blue bars. The upshot is just what you might expect: the intervals used in the speech of a culture track the intervals used in the music of that culture.

Now, that's just a correlation, but the implication of that correlation is that speech and music are intimately related across tone and non-tone language cultures. The reason that, when you go into a Chinese restaurant, the music sounds different from the music you hear on your car radio is not that the scales are different, but that the use of the scales in the languages and cultures is different: the music of tone-language-speaking cultures uses larger intervals, in the same way that larger intervals are used in the ordinary speech of those cultures, and conversely, non-tone-language music uses smaller intervals, just as non-tone-language speech does. So again, it's a correlation, but it's a correlation that suggests a tight linkage in yet another dimension: even though the scales being used are the same, as we discussed, the character of the music reflects the influence of speech on music.

So let me sum up the main points I've tried to make in this module. The first is that excited and subdued emotions in speech are conveyed by the size of pitch intervals, and I think the evidence for that is really quite strong. The second point is that when this difference is imitated in music, it rationalizes the different emotional impact of major versus minor scales: the emotion conveyed in music apparently derives from the way tones are used to convey emotion in speech. Third, this difference is apparent across cultures; it's not limited to Eastern or Western cultures, as we saw in comparing Carnatic music with traditional Western music. The point I just finished making is that tone-language cultures use larger intervals in both speech and music than non-tone-language cultures, again demonstrating the link between music and speech. And finally, this difference contributes to the different character of music in these different cultures.
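As an illustration of how this group comparison works (a sketch, not the actual analysis pipeline behind the figure), here is code that pools the absolute interval sizes for each language and compares the tone-language and non-tone-language means. The arrays are random placeholders standing in for intervals measured from real recordings and scores, for example by a function like prosodic_intervals above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder interval data (semitones); in the real analysis these
    # would come from speech recordings and traditional-music scores.
    groups = {
        "tone (Mandarin, Thai, Vietnamese)":
            [rng.normal(0.0, 4.0, 200) for _ in range(3)],
        "non-tone (English, French, German)":
            [rng.normal(0.0, 2.0, 200) for _ in range(3)],
    }

    for label, per_language in groups.items():
        # Pool every language's intervals and take the mean magnitude,
        # which is what each bar in the figure summarizes.
        pooled = np.abs(np.concatenate(per_language))
        print(f"{label}: mean |interval| = {pooled.mean():.2f} semitones")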
So again, the link between speech and music throughout is a remarkable one. To go back to the theme we have used throughout these six modules, the idea is that similarity to speech is the rationale for tonality and for its different uses in the music of different cultures.

So at the end of these modules, let's briefly sum up some of the main points, the themes that have carried through all the modules. The first, and the one I would most hope you take away if you take away nothing else from the effort of listening to and thinking about the issues we've been discussing, is that musical tonality and its attraction can be explained, at least in part, by the biological importance of recognizing conspecific vocalization. Let me expand on that a little to reiterate what I said earlier. The idea is that we developed a sense of tonality in our human evolution to be able to distinguish sounds that are basically harmonic series. That's the definition of a tonal sound: a harmonic series with a fundamental, and integer multiples of that fundamental as the higher harmonics, or upper partials. The reason is the biological value of being able to identify human speech: just a second or two of exposure to an individual's utterance tells us about the size, gender, age, emotional state, and individual identity of the speaker. So it has enormous biological value, and the argument, which I think is pretty much an unimpeachable one, is that the reason for developing a sense of tonality in the first place is by and large to identify human vocalizations and extract the biologically important information they contain.

The second general point is that once you take this perspective to heart, a lot of musical phenomena can be rationalized on the basis of speech similarity. These include octave equivalence, the small number of scales that are preferred worldwide, the genesis of emotions by musical tones, and the differences in tonal music across cultures. These are all things we've talked about, along with others, and they have no other real explanation in music theory or musicology, but they can all be explained in this biological context.

The next general point is that any animal that generates a harmonic series in its social vocalizations can, in principle, be musical, but such animals are unlikely to go in this direction because of the limited social and cultural interactions they have. This is really quite an important point. There are lots of animals out there whose vocalizations generate harmonic series; we mentioned birdsong and frog calls, and there are many others you can think of. In principle, any of those animals could have music, because they generate a harmonic series, and the identification of a harmonic series and its characteristics is the essence of all of these issues in music that we need to explain. Well, they obviously don't, or have it only to a very limited degree. The best they can do, perhaps, is show in an operant conditioning paradigm that they can identify an octave difference, or something similarly rudimentary.
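To make "generating a harmonic series" concrete, here is a minimal sketch, in the same Python idiom as the earlier examples, that synthesizes a tonal sound as just defined: a fundamental plus integer multiples of it as the upper partials. The 1/k amplitude falloff is an illustrative assumption, not a claim about any particular vocalization.

    import numpy as np

    def harmonic_complex(f0=220.0, n_harmonics=10, sr=44100, dur=1.0):
        # A tonal sound in the sense used here: a fundamental frequency
        # f0 plus its integer multiples 2*f0, 3*f0, ... as upper partials.
        t = np.arange(int(sr * dur)) / sr
        return sum(np.sin(2 * np.pi * k * f0 * t) / k
                   for k in range(1, n_harmonics + 1))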
The reason they haven't gone further in this direction is presumably that they haven't had the evolutionary motivation to develop the degree of socialization and culture that we have. Evolution is, of course, still going on, and maybe over time other animals will develop the kind of musicality that we enjoy, but that's unlikely, certainly in the short term. And I would argue that this is the reason other animals don't have music expressed in the same way that we do. Thus music really is uniquely human, but that's an empirical fact; it's a fact that comes from the analysis I've just gone through. It's not demanded by biology, so we shouldn't consider ourselves particularly privileged in that sense.

So let me say at the end that I've very much enjoyed thinking about and putting together these lectures, and I hope you've gotten something out of them. I hope you'll go to the forums with lots of questions for me and suggestions, and point out the many mistakes I've no doubt made and the many controversial points that have come up in these modules. And I wish you well in your further efforts in thinking about music, both as biology and as the wonderful art form that we all enjoy.