Acoustic and Linguistic Properties of Turkish Whistle Language

Abstract

This paper presents the acoustic and linguistic properties of the Turkish Whistle Language. A whistle language is a natural communication method, usually used for long-distance interaction, in some regions of the world. In whistled speech, auditory features of the spoken language are transposed, so whistle languages carry properties of vocal speech together with their own vocabulary, grammar, phonology and prosodic features. There are only a few places in the world using this whistled style of communication, and the Kuskoy region in Turkey is one of them. Although there has been some research on the Turkish Whistle Language, only a limited number of scientific publications exist in the literature. On the other hand, the available results are striking: whistlers can still continue articulating some words while whistling, and intelligibility in communication remains high. It is therefore described as an incomplete form of the Turkish language. The research results also indicate that the Turkish Whistle Language is a non-tonal language that transposes formants, and it can therefore be used to study how formants change when a language is transposed into whistles. These results point to many other valuable properties of the Turkish Whistle Language, but unfortunately there has been no collective study combining all of them. Moreover, there are still unclear and conflicting points in the literature, as this paper points out. This study brings together the research results in order to underline the distinct features of the Turkish Whistle Language and to motivate researchers to go further on this subject. The paper focuses mainly on the acoustic and phonetic properties of the Turkish Whistle Language; historical and cultural details are out of the scope of this study.

Share and Cite:

Ozaydin, S. (2018) Acoustic and Linguistic Properties of Turkish Whistle Language. Open Journal of Modern Linguistics, 8, 99-107. doi: 10.4236/ojml.2018.84011.

1. Introduction

The Whistle Language (WsL) is a natural way of communication, typically used for long-distance interaction, secrecy, communication in noisy environments, or brief exchanges. This type of whistled communication allows a potentially unlimited set of messages to be transmitted and exchanged over long distances. People encode auditory features of the spoken language by transposing key components of speech sounds. In whistled speech, the basic amplitude envelope of the spoken utterance is transposed; this envelope provides a frame for the alignment of whistled melodies with phone boundaries ( Rialland, 2005 ). The phonological and acoustic structure of whistled languages is partly related to the spoken language, subject to the constraints of the whistled medium. Whistled speech retains properties of vocal speech along with its own vocabulary, grammar, phonology and prosodic features.

Whistle languages are usually found in mountainous or densely vegetated regions where speakers are unable to communicate by speaking or shouting. A WsL can overcome ambient noise much more efficiently than a normal or shouted voice. The people in these regions therefore use whistled speech in daily-life activities such as long-distance communication, secret messaging, chatting, warning, night communication, harvesting, gathering, hunting, or communication in noisy environments. Moreover, the high robustness of whistled speech against ambient noise makes it an efficient means of signaling in an emergency. Regions of the world using WsL, by continent, include Africa (Ewe, Ari), Asia (Southeast Asia: Akha, Hmong), America (Mexico: Chinantec, Mazatec, Mixtec; Alaska: Siberian Yupik; Brazil: Gaviao, Surui), Europe (Greece: Antia; Spain: La Gomera; Turkey: Kuskoy; French Pyrenees: Aas) and Oceania (Abu-Wam). A list of the twelve WsLs that have been studied linguistically to date can be found in ( Meyer, 2004 ; Meyer, 2007b ).

Although there are some studies on the Turkish Whistle Language (TWsL), there is no article evaluating the results of all of them. This article aims to present the main findings of these studies. The paper is organized as follows. Section 1 gives a general background on WsL. Section 2 presents the properties of tonal and non-tonal whistle languages. Section 3 presents the latest studies and the acoustic and phonetic properties of TWsL.

2. Properties of Tonal and Non-Tonal Whistle Languages

Spoken languages are converted into whistled form in two ways: pitch whistling for tonal languages (emulating pitch contours) and formant whistling for non-tonal languages (emulating formants). In both cases, people use the acoustic properties of whistles to convey their messages while keeping the syntax, vocabulary and grammar of the local spoken language. Despite the vocalic and consonantal reductions, whistled speech remains highly intelligible to trained speakers. The complexity of the converted speech depends on the complexity of the spoken language. The acoustic reduction takes place at the frequency level and relies on the selection of key phonetic cues from the spoken language. Non-tonal languages do not make lexical or grammatical use of tone. Spectral analysis of a non-tonal WsL can show how the formant distribution is reduced into whistled form. Because the acoustic resonance of the whistled signal occurs primarily in the front oral cavities of the reduced vocal tract, whistled speech signals have frequency shapes similar in several respects to the second formant of spoken speech. Moreover, in non-tonal languages the whistlers try to articulate every vowel and every consonant while whistling ( Meyer, 2015 ; Rialland, 2005 ).

The voice signal carries two perceptual qualities, pitch and timbre. Pitch characterizes the tone of a voice, while timbre characterizes the vowels through the formants. In spoken speech, the vibration of the vocal cords determines the fundamental frequency (F0, or pitch), and the vocal tract acts as a resonator for the harmonics of the vocal-fold vibrations, producing the formants that identify vowels and consonants. Whistled speech, in contrast, does not require vocal-cord vibration, and the fundamental frequency carries all of the useful linguistic information by emulating either the formants (in non-tonal languages) or the pitch (in tonal languages). Non-tonal whistled languages transpose formants, while tonal whistled languages transpose tones. In both types of WsL, the pronunciation of words is converted into whistles through an acoustic transformation from the multidimensional frequency spectrum of the voice to the mono-dimensional one of whistles. In order to increase the intelligibility of the message, whistlers adapt their whistle (pitch, timbre) according to the phonological rules of their local language. This also helps to overcome ambient noise and counteract reverberation over long distances ( Meyer, 2007b ; Rialland, 2005 ).
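To make this mono-dimensional transposition concrete, the short Python sketch below tracks the frequency contour of a whistle-like signal by picking the dominant spectral peak in each analysis frame. It is only an illustration of the principle; the sampling rate, frame and hop sizes, and the synthetic gliding tone are assumptions of this sketch, not values taken from the cited studies.

```python
import numpy as np

def whistle_contour(signal, sr=16000, frame=1024, hop=256):
    """Track the dominant frequency of a whistle-like signal frame by frame.

    A whistle is close to a narrowband tone, so the largest spectral peak in
    each frame approximates the single whistled frequency contour. Frame and
    hop sizes here are illustrative choices, not values from the paper.
    """
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    contour = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        contour.append(freqs[np.argmax(spectrum)])
    return np.array(contour)

# Usage with a synthetic tone gliding upward from about 2 kHz:
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
glide = np.sin(2 * np.pi * (2000.0 + 500.0 * t) * t)
print(np.round(whistle_contour(glide, sr)[:5]))
```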

Examples of non-tonal languages are Greek, Spanish and Turkish, and examples of tonal languages are Mazatec, Chinantec, Akha and Hmong. Whistled Turkish belongs to the non-tonal category and has the highest number of vowels and consonants among the whistled languages studied. While a non-tonal WsL carries segmental cues (the internal prosody of vowels and consonants) of spoken speech, a tonal WsL carries supra-segmental cues (such as pitch, duration, loudness, nasality). For non-tonal languages, the information is primarily encoded in the formants, such as vowel identity and consonant transitions, and pitch plays a secondary role for intelligibility. For tonal languages, the pitch varies within a frequency band of whistles, and the variation in pitch (tones, or fundamental frequency F0) is used to contrast word meanings. Although they fall into different categories, tonal and non-tonal languages share some properties. Both transpose the basic amplitude envelope of spoken speech, and both have a phonological structure partly related to the local spoken language. In non-tonal whistling, the frequencies of vowels and consonants are approximated in whistled form through a vocalic and consonantal reduction. In speech, vowels are characterized both by the fundamental frequency (F0) resulting from the vibration of the vocal folds and by the resonant structure of the vocal tract (formants). In whistled speech, the F0 is itself the whistled resonance of the vocal tract, so a separate control of F0 and formants is not possible; whistlers must choose either pitch or timbre to adapt to the phonetics of their language. In normal speech, by contrast, these two frequency levels are independently controllable and recoverable through pitch and timbre (formants) ( Meyer, 2004 ; Meyer, 2005 ; Meyer, 2007a ).

For tonal whistle languages, the pitch level of the main band of frequencies characterizes the composition of the phonemes; whistles therefore focus on supra-segmental features and reproduce mainly the fundamental frequency of the spoken language. For non-tonal languages like Spanish or Turkish, the pure frequency of whistles reproduces mainly segmental features of the language. The transitions of the consonants are influenced by the pitch of the neighboring vowels, and the consonantal modulations of the pitches of whistled vowels conform closely to the second formant of the spoken language ( Meyer & Gautheron, 2006 ).

A. Rialland performed a perception test for consonant phonemes in Silbo Gomero and compared Silbo Gomero with TWsL ( Rialland, 2005 ). She used nonsense tokens as test data to prevent listeners from being influenced by lexical content, and when selecting the test contexts she considered the contribution of consonants to recognition. Two whistlers took part in the tests. In order to compare Silbo Gomero with TWsL in terms of consonant locations, she used the data recorded by Busnel in 1967. Rialland also emphasizes the strategies applied for the survival of Silbo Gomero ( Rialland, 2005 ), among them the development of an instruction program for schoolteachers, the preparation of several documentary films, and the implementation of perception tests and functional magnetic resonance imaging studies ( Rialland, 2005 ; Carreras et al., 2005 ). These strategies could also be applied for the survival of TWsL.

3. Acoustic and Phonetic Properties of Turkish Whistle Language

The first scientific study on the Turkish Whistle Language was performed by R. G. Busnel and his research team in 1967 ( Busnel, 1976 ). They performed recognition tests in Kuskoy with spoken and whistled material, including isolated words and sentences, covering word, sex, age and speaker-identity recognition; the details of these tests can be found in ( Busnel, 1976 ). O. Baskan, a member of Busnel's research team, analysed TWsL and found that whistlers tended to render the phonetic alphabet of Turkish with three phonologically contrasting vowels and three consonants ( Baskan, 1968 ). Another study on TWsL was performed by D. Aksan and his team in the Kuskoy region ( Aksan, 1968 ). Their results showed that whistlers could convert nearly all the words in the test list into whistled form, even when they heard a foreign word. Aksan also stated that back ("thick") and front ("thin") vowels played an important role in the understanding of TWsL. Later, J. Meyer performed a statistical analysis of vowels and consonants in TWsL based both on the data recorded by Busnel in Kuskoy in 1967 ( Busnel, 1976 ) and on data recorded by Meyer in 2003 ( Meyer, 2007a ). In addition, Güntürkün et al. ( Güntürkün et al., 2015 ) performed a dichotic listening test on TWsL to evaluate its comprehension performance in the left and right hemispheres. The results of all these studies, and the acoustic and phonetic structure of TWsL that they reveal, are summarized in this paper.

Turkish has 8 vowels and 21 consonants; both numbers are higher than those of Spanish and Greek, and this complexity is reflected in TWsL. Compared to the other formant-based European whistle languages used in Greece and Spain, TWsL is seen as having richer phonetic and phonological properties due to its rich phonetic structure. Generally speaking, a whistle language has two to four vowels and about four consonants. For comparison, the Greek of Antia has 5 spoken vowels, [i, ε, a, o, u], whistled in statistically 3 main bands of frequencies, [(i), (ε, u), (a, o)]. Similarly, Silbo in Spain has 5 spoken vowels, (i, e, a, o, u), whistled in 3 main bands of frequencies, [i, e, (a, o, u)]; the three frequency bands of Silbo are around 2600 Hz for [i], 2100 Hz for [e] and 1600 Hz for [(a, o, u)] ( Meyer, 2005 ; Meyer, 2007a ; Rialland, 2005 ). The eight Turkish vowels are (i, ʏ, ɯ, e, œ, u, a, o) in IPA (written as (i, ü, ı, e, ö, u, a, o) in Turkish letters, respectively). These Turkish vowels are whistled in a decreasing order of mean frequencies in eight intervals, as shown in ( Meyer, 2007a ). A statistical ANOVA analysis of the frequency distribution of these 8 whistled vowels in ( Meyer, 2007a ) shows that some vowel frequencies overlap, so the frequencies of the whistled vowels combine into 4 main bands, [(i), (ʏ, ɯ), (e, œ, u), (a, o)], consistent with the vowel harmony rules of Turkish; the bands of these groups are statistically distinct. The four groups result from a phonetic reduction in the whistled sentences, while the phonological structure is preserved. The acoustic analysis of TWsL also confirms that there are phonetic reductions in the whistled signal compared to the spoken signal, while articulatory information is preserved as far as possible ( Meyer, 2005 ; Meyer, 2007a ); this supports the claim in ( Baskan, 1968 ) that Turkish whistled vowels can be grouped around (i, œ, o). Meyer groups whistled consonants into five classes according to the frequency shapes (close articulatory loci) produced by their whistled articulation ( Meyer, 2007a ).
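As an illustration of the kind of statistical comparison reported in ( Meyer, 2007a ), the sketch below runs a one-way ANOVA over whistled-vowel frequency measurements grouped into the four Turkish bands. The frequency values are randomly generated placeholders chosen only to resemble the band ordering described above; they are not the published measurements.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical whistled-vowel frequencies in Hz, one array per Turkish band
# [(i), (ü, ı), (e, ö, u), (a, o)]. Placeholder data, NOT Meyer's measurements.
rng = np.random.default_rng(0)
band_i = rng.normal(2600, 80, 20)     # highest band, vowel /i/
band_y_w = rng.normal(2300, 90, 20)   # /ü, ı/
band_e_oe = rng.normal(2000, 100, 20) # /e, ö, u/
band_a_o = rng.normal(1600, 110, 20)  # lowest band, /a, o/

# One-way ANOVA: are the mean whistled frequencies of the four bands distinct?
f_stat, p_value = f_oneway(band_i, band_y_w, band_e_oe, band_a_o)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```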

It is stated in ( Meyer, 2005 ) that, in all non-tonal whistled languages, the highest pitch is always attributed to [i] and the lowest to [u] or [o]. It is also stated that the first reduction is due to the impossibility of producing a whistle below 1 kHz. When a vowel might be confused with neighboring vowels, the whistler makes an effort to place the vowels at opposite extremes of their own bands. For example, in the Turkish word "kolay" (/kolaj/), /o/ and /a/ are effectively distinguished because /a/ is given a higher pitch, even though these two vowels are usually whistled in the same way ( Meyer, 2005 ).

The amplitude envelope of a whistled sentence reproduces the spoken speech units with a clearer syllable segmentation. Whistled consonants are modulations in the frequency and amplitude of the whistled speech. Vowel durations in whistled Turkish are longer than in the spoken form, which helps to increase intelligibility; in shouted or whistled speech, vowel lengthening is an adaptation to the difficult conditions of communication. However, when vowel duration goes beyond a tolerance threshold, recognition performance decreases ( Meyer, 2007b ).
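The claim that the whistled amplitude envelope segments syllables clearly can be visualized with a simple envelope extractor. The sketch below, assuming a mono recording given as a NumPy array, smooths the magnitude of the analytic signal; the 20 ms smoothing window and the synthetic two-burst test tone are illustrative assumptions, not taken from the cited work.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_envelope(signal, sr=16000, smooth_ms=20):
    """Smoothed amplitude envelope of a whistled utterance.

    The magnitude of the analytic signal (Hilbert transform) is averaged over
    a short window so that syllable-sized amplitude dips become visible.
    The 20 ms smoothing window is an illustrative choice, not from the paper.
    """
    envelope = np.abs(hilbert(signal))
    win = max(1, int(sr * smooth_ms / 1000))
    return np.convolve(envelope, np.ones(win) / win, mode="same")

# Usage with a synthetic two-"syllable" whistle (amplitude switches on/off every 250 ms):
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
bursts = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)
tone = bursts * np.sin(2 * np.pi * 2000 * t)
print(amplitude_envelope(tone, sr)[::2000].round(2))  # coarse view of the envelope
```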

Whistled speech carries well in noisy natural environments such as valleys, which form a natural guide, and it can reach long distances in mountains or forests (for example, the signal remains understandable at 8 km in La Gomera). A WsL is the result of the adaptation of human intelligence to a natural acoustic and linguistic environment. The vocalic scale is constrained by the communication distance: the farther the whistles must travel, the higher the frequencies, which also defines the scale of the vocalic intervals of whistled speech. Whistles occupy the central range of frequencies at which sounds resist reverberation in forests. Moreover, in natural conditions the background noise is weak at high frequencies (except in windy weather), so the signal-to-noise ratio is better than 6 dB at 1 km, enough for the signal to be clearly heard. In the analysis of the whistled speech signal, the acoustic profile of vowels and consonants shows that the intelligibility of spoken words is relative to the lexical environment and to the structure of the language concerned. Psychoacoustic tests conducted in Kuskoy in 1967 showed that words were easier to understand when they contained the most frequent segmental features ( Meyer, 2005 ).

A whistle has a narrow bandwidth modulated in frequency and amplitude. The advantage of a whistle over speech is due to its simple, narrowband tone. The pitches of whistles are concentrated in a narrow band that lies within the most sensitive and selective range of hearing. In this way, the wideband spectrum of speech (0.1 - 16 kHz) is reduced to a band of roughly 0.9 - 4 kHz (described as 1.5 - 2.5 kHz, and at most 4 kHz, in Kuskoy in ( Busnel, 1976 )). The audibility of whistles at long distances and in noisy conditions depends on this property. For example, in the mountains of La Gomera in Spain a whistle can travel up to 10 km while keeping the message intelligible. Furthermore, the whistled speech signal remains well above the natural background noise at relatively long distances: a whistle can reach an amplitude level of 100 dB (at 1 m), whereas normal speech is about 50 dB at this distance. The dynamic range in amplitude is also reduced compared to spoken speech: while the dynamic range of whistled speech is less than 20 dB, that of spoken speech is more than 50 dB ( Baskan, 1968 ; Busnel, 1976 ; Meyer, 2004 ; Meyer, 2015 ).
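A rough back-of-the-envelope check of these levels is sketched below, assuming free-field spherical spreading (the level drops by 20·log10 of the distance) and an illustrative rural noise floor of 30 dB. The noise value is an assumption of this sketch, not a figure from the cited sources, so the result should only be read as order-of-magnitude consistent with the better-than-6-dB SNR at 1 km mentioned above.

```python
import math

def level_at_distance(level_at_1m_db, distance_m):
    """Sound level after free-field spherical spreading: -20*log10(distance)."""
    return level_at_1m_db - 20.0 * math.log10(distance_m)

whistle_1km = level_at_distance(100.0, 1000.0)  # 100 dB at 1 m -> about 40 dB at 1 km
speech_1km = level_at_distance(50.0, 1000.0)    # 50 dB at 1 m -> about -10 dB at 1 km
noise_floor_db = 30.0                           # assumed quiet background, not a cited figure
print(f"whistle SNR at 1 km ~ {whistle_1km - noise_floor_db:.0f} dB")
print(f"speech  SNR at 1 km ~ {speech_1km - noise_floor_db:.0f} dB")
```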

When Meyer analyzed the frequency distribution of whistled vowels in various non-tonal languages, he found that each vowel position was whistled in a definite frequency interval and that this frequency distribution was related to the articulation of vowels in that language. The results in ( Meyer, 2015 ; Rialland, 2005 ) confirm that the second formant (F2) is most of the time the principal component emulated by whistling, and is therefore a key acoustic link between whistled and spoken speech, although it is not always the only one. For example, in bilabial whistling production the whistle frequency was captured by either the second (F2) or the third formant (F3) of the vocal tract, with a frequency jump between F2 and F3 when they were close. It is also stated in ( Meyer, 2015 ) that test results for Turkish whistled speech indicated that the whistlers were influenced by acoustic cues other than the second formant (F2) when emulating vowels. In non-tonal whistle languages, stress only slightly influences the whistled frequencies and is a secondary feature: it increases magnitude and frequency without changing the level-distribution of the vocalic frequency intervals ( Meyer, 2015 ).
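Since F2 is the main spoken-speech correlate of the whistled frequency, one way to compare the two is to estimate the formants of a spoken vowel with linear predictive coding (LPC) and read off the lower values (F1, F2). The sketch below uses the standard autocorrelation method; the LPC order, frame length and the synthetic two-resonance test signal are illustrative assumptions, not part of the cited analyses.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_formants(frame, sr=16000, order=12):
    """Estimate formant frequencies of a voiced speech frame with LPC.

    Autocorrelation-method LPC: roots of the prediction-error polynomial with
    positive angles give candidate formants. Order 12 is a common choice for
    16 kHz speech and is used here only for illustration.
    """
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    roots = [z for z in np.roots(np.concatenate(([1.0], -a))) if z.imag > 0.01]
    freqs = sorted(np.angle(z) * sr / (2 * np.pi) for z in roots)
    return [f for f in freqs if f > 90]  # drop near-DC candidates

# Quick check with a synthetic vowel-like signal having resonances near 700 and 1800 Hz:
sr = 16000
excitation = np.zeros(1600)
excitation[::80] = 1.0                            # 200 Hz impulse train as the "voice source"
den = [1.0]
for f, bw in [(700.0, 80.0), (1800.0, 120.0)]:    # two resonators in cascade
    r_pole = np.exp(-np.pi * bw / sr)
    den = np.convolve(den, [1.0, -2.0 * r_pole * np.cos(2.0 * np.pi * f / sr), r_pole ** 2])
vowel = lfilter([1.0], den, excitation)
print([round(f) for f in lpc_formants(vowel[200:712], sr)])
```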

The frequency of a WsL is modulated by the variation of the volume of the resonating cavity, related to the articulation of the equivalent spoken form; movements of the tongue and of the epiglottis tune the vowels and consonants. A pitch variation of vowels inside a sentence does not change the level-distribution of the vowel intervals, because it acts as a secondary feature and is realized in the highest part of the related vowel interval. Diphthongs are produced as a continuous modulation from the first to the second vowel frequency, with a frequency depth that differs across vowel types. The articulation of consonants while whistling produces simple frequency shapes: whistled consonants are modulations in the frequency and amplitude of the whistled speech, and when the amplitude modulation shuts off the whistle, consonants are also characterized by silent gaps. As a consequence, the signal-to-noise ratio (SNR) at the reception of a WsL is sufficiently high for good perception. Furthermore, the bandwidth of the fundamental frequency and the dynamic range in amplitude are reduced compared to spoken speech. Long-distance whistled speech is higher in frequency (by approximately 100 to 250 Hz) than short-distance speech, which underlines that the frequency range is relative to the communication distance. In WsL, the complex frequency spectrum of the voice is reduced to a pitch variation produced within a narrow frequency band of whistles ( Meyer, 2005 ; Meyer, 2007a ; Meyer, 2007b ).
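The silent gaps that cue certain consonants can be located with simple frame-energy thresholding, as in the minimal sketch below. The 10 ms frame length, the 5% energy threshold and the synthetic interrupted tone are assumptions of this illustration, not values from the cited studies.

```python
import numpy as np

def silent_gaps(signal, sr=16000, frame_ms=10, threshold_ratio=0.05):
    """Locate silent gaps in a whistled utterance by frame energy.

    Gaps where the whistle briefly shuts off are one of the consonant cues
    described above. Frame length and threshold are illustrative assumptions.
    """
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    silent = energy < threshold_ratio * energy.max()
    gaps, start = [], None
    for i, quiet in enumerate(silent):
        if quiet and start is None:
            start = i
        elif not quiet and start is not None:
            gaps.append((start * frame / sr, i * frame / sr))
            start = None
    if start is not None:
        gaps.append((start * frame / sr, n * frame / sr))
    return gaps  # list of (start_s, end_s) pairs in seconds

# A 2 kHz tone with a 60 ms interruption should yield one gap around 0.20-0.26 s:
sr = 16000
t = np.linspace(0.0, 0.5, sr // 2, endpoint=False)
tone = np.sin(2 * np.pi * 2000 * t)
tone[int(0.20 * sr):int(0.26 * sr)] = 0.0
print(silent_gaps(tone, sr))
```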

As is known, the areas responsible for understanding speech and language are located in the left hemisphere of the brain, while the right hemisphere is specialized in encoding supra-segmental prosodic properties such as spectral cues, intonation and melodic lines. Because the right ear projects mainly to the brain's left hemisphere and the left ear to the right hemisphere, the right ear is dominant for processing speech sounds. In order to investigate the neural processing areas of the brain in whistlers of Silbo Gomero, Carreras et al. ( Carreras et al., 2005 ) collected functional neuroimaging data from whistlers and demonstrated that the language-processing regions of the human brain can adapt to different kinds of signals. Similarly, in order to evaluate the brain's comprehension performance in whistlers of TWsL, Güntürkün et al. ( Güntürkün et al., 2015 ) performed a dichotic listening test. In this test, 31 participants listened to Turkish spoken syllables presented to the left and right ears and reported what they heard. As expected, participants reported the syllables presented to the right ear more accurately. The same test was then performed with Turkish whistled syllables; in this case, participants recognized syllables presented to the left and right ears at nearly equal rates, and the recognition rate for whistled syllables processed by the right hemisphere (left ear) increased compared to spoken syllables. Güntürkün attributes this increase in right-hemisphere comprehension of TWsL to the observation in ( Meyer, 2015 ) that formant transitions in TWsL are realized as modulated pitches. These results show that TWsL engages both sides of the brain, with increased use of the right hemisphere compared to vocal speech, and they challenge the theory of cerebral asymmetry for language and the view that the left-hemisphere specialization for language is input-invariant ( Güntürkün et al., 2015 ).
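The ear asymmetry measured in such dichotic tests is often summarized with a laterality index, as in the brief sketch below. The per-ear counts used here are hypothetical and are not the figures published by Güntürkün et al. (2015); the sketch only illustrates how a right-ear advantage versus a balanced result would appear.

```python
def laterality_index(right_ear_correct, left_ear_correct):
    """Ear-advantage (laterality) index used in dichotic listening studies:
    +1 means a complete right-ear (left-hemisphere) advantage, -1 the reverse."""
    total = right_ear_correct + left_ear_correct
    return (right_ear_correct - left_ear_correct) / total if total else 0.0

# Hypothetical counts for one listener (NOT the published data): a clear right-ear
# advantage for spoken syllables, a nearly balanced result for whistled syllables.
print(laterality_index(42, 28))  # spoken: clearly positive index
print(laterality_index(36, 34))  # whistled: close to zero
```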

Whistled Turkish is described as one of the best-preserved whistled forms of a language in ( Meyer, 2007a ). It is also stated that a spoken Turkish sentence transposed into whistles remains highly intelligible to a fluent whistler, even for non-standardized sentences ( Meyer, 2007a ). Even though Turkish is the whistled language with the highest numbers of vowels and consonants, how the magnitude and frequency modulations are combined to produce the consonants has not been detailed so far. It is likewise stated in ( Meyer, 2007a ) that it has not been explained how the phonological rules of the Turkish vocalic system are phonetically reduced in TWsL while balancing vowel harmony. Further analysis of TWsL is needed to address these issues. While other whistle languages in the world, such as that of Antia in Greece, are disappearing, TWsL is still used in the Kuskoy area and can provide reliable data for the further analysis of a non-tonal whistle language.

4. Conclusion

A whistle language can be considered a remarkable adaptation of human intelligence to the natural acoustic environment during communication. Whistled speech carries specific cues of the local spoken language, and the Turkish Whistle Language (TWsL) is a very good example of this. Even though the first tests on TWsL were performed nearly fifty years ago, there are still only a limited number of research studies on it. This study collects the acoustic and linguistic properties of TWsL and aims to motivate researchers to go further on this subject by presenting previous research results and underlining the distinct features of TWsL. In TWsL, the complex frequency spectrum of spoken sentences is reduced to a narrow frequency band while intelligibility is preserved. Because the pronunciation of words is converted into whistles syllable by syllable, the phonetic complexity of Turkish is reflected directly in TWsL, which makes it a good model for further analysis of the perception of phonetic information in a whistle language. While other whistle languages in the world are becoming extinct, TWsL is still alive, and many of its potentially distinctive properties remain unexplored. Further analysis of TWsL might therefore help us better understand the acoustic and phonetic structure of this kind of simplified communication method, and the outputs of such analyses can support other scientific areas such as phonetics, acoustics, speech processing and language modeling.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


References

[1] Aksan, D. (1968). Anadolu'da Islık Dili Araştırması Ön Raporu. Ankara Üniversitesi Dil ve Tarih-Coğrafya Fakültesi Türkoloji Dergisi, 1/3, 49-64.
[2] Baskan, O. (1968). Türkçe Islık Dili. İstanbul Üniversitesi Türk Dili ve Edebiyatı Dergisi, 16/1, 1-10.
[3] Busnel, R. G. (1976). Whistled Languages. Berlin: Springer-Verlag.
https://doi.org/10.1007/978-3-642-46335-8
[4] Carreras, M., Lopez, J., Rivero, F., & Corina, D. (2005). Neural Processing of a Whistled Language. Nature, 433, 31-32.
https://doi.org/10.1038/433031a
[5] Güntürkün, O., Güntürkün, M., & Hahn, C. (2015). Whistled Turkish Alters Language Asymmetries. Current Biology, 25, R706-R708.
https://doi.org/10.1016/j.cub.2015.06.067
[6] Meyer, J. (2004). Bioacoustics of Human Whistled Languages: An Alternative Approach to the Cognitive Processes of Language. Anais da Academia Brasileira de Ciências, 76.
https://doi.org/10.1590/S0001-37652004000200033
[7] Meyer, J. (2005). Whistled Speech: A Natural Phonetic Description of Languages Adapted to Human Perception and to the Acoustical Environment. In INTERSPEECH_2005, Eurospeech, 9th European Conference on Speech Communication and Technology, Lisbon, 4-8 September 2005.
[8] Meyer, J. (2007a). Whistled Turkish: Statistical Analysis of Vowel Distribution and Consonant Modulations. In Proceedings of the XVIth International Congress of Phonetic Sciences.
[9] Meyer, J. (2007b). Acoustic Features and Perceptive Cues of Songs and Dialogues in Whistled Speech: Convergences with Sung Speech. In Proceedings of the International Symposium on Musical Acoustics 2007, Barcelona.
[10] Meyer, J. (2015). Whistled Languages: A Worldwide Inquiry on Human Whistled Speech. Berlin: Springer-Verlag.
https://doi.org/10.1007/978-3-662-45837-2
[11] Meyer, J., & Gautheron, B. (2006). Whistled Speech and Whistled Languages. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (2nd ed., pp. 573-576). Oxford: Elsevier.
https://doi.org/10.1016/B0-08-044854-2/00034-1
[12] Rialland, A. (2005). Phonological and Phonetic Aspects of Whistled Languages. Phonology, 22, 237-271.
https://doi.org/10.1017/S0952675705000552
