
Automatic Speech Recognition

Samudravijaya K
Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005
Email:

1 Introduction

Information processing machines have become ubiquitous. However, the current modes of human-machine communication are geared more towards living with the limitations of computer input/output devices than towards the convenience of humans. Speech is the primary mode of communication among human beings, whereas the prevalent means of input to computers is a keyboard or a mouse. It would be convenient if computers could listen to human speech and carry out commands. Automatic Speech Recognition (ASR) is the process of deriving the transcription (word sequence) of an utterance, given the speech waveform. Speech understanding goes one step further and gleans the meaning of the utterance in order to carry out the speaker's command. This article gives a tutorial introduction to ASR. The reader may refer to [1] for an overview of speech recognition and understanding.

This article is organised as follows. This section mentions salient application areas of ASR and lists the types of speech recognition systems. After describing the basic steps of production of speech sounds, Section 2 illustrates the various sources of variability of the speech signal that make the task of speech recognition hard. Section 3 describes signal processing, the modelling of acoustic and linguistic knowledge, and the matching of a test pattern with trained models. A small sample of ASR application systems in use in India and abroad is given in Section 4. Section 5 lists some serious limitations of current ASR models and briefly discusses the challenges ahead on the road to the realisation of natural and ubiquitous speech I/O interfaces; it also mentions a few Indian efforts in this direction. Section 6 draws some conclusions.

1.1 Application areas of ASR

ASR systems enable a physically handicapped person to command and control a machine.
Even ordinary persons may prefer a voice interface over a keyboard or mouse; the advantage is more obvious in the case of small hand-held devices. The dictation machine is a well-known application of ASR. Thanks to ubiquitous telecommunication systems, a speech interface is very convenient for data entry, access of information from remote databases, and interactive services such as ticket reservation. ASR systems are expedient in cases where hands and eyes are busy, such as during driving or surgery. They are useful for teaching phonetics and for programmed teaching as well.

1.2 Types of ASR

Speech recognition systems can be categorised into different groups depending on the constraints imposed on the nature of the input speech.

• Number of speakers: A system is said to be speaker independent if it can recognise the speech of any and every speaker; such a system has learnt the characteristics of a large number of speakers. A large amount of a user's speech data is necessary for training a speaker dependent system; such a system does not recognise others' speech well. Speaker adaptive systems, on the other hand, are speaker independent systems to start with, but have the capability to adapt to the voice of a new speaker, provided a sufficient amount of his/her speech is provided for training the system. The popular dictation machine is a speaker adapted system.

• Nature of the utterance: A user is required to utter words with a clear pause between words in an Isolated Word Recognition system. A Connected Word Recognition system can recognise words, drawn from a small set, spoken without the need for a pause between words. On the other hand, Continuous Speech Recognition systems recognise sentences spoken continuously. A Spontaneous Speech recognition system can handle disfluencies such as "ah" and "am", false starts, and grammatical errors present in conversational speech.
A Keyword Spotting system keeps looking for a pre-specified set of words and detects the presence of any one of them in the input speech.

• Vocabulary size: An ASR system that can recognise a small number of words (say, 10 digits) is called a small vocabulary system. Medium vocabulary systems can recognise a few hundred words. Large and very large vocabulary ASR systems are trained with several thousand and several tens of thousands of words respectively. Example application domains of small, medium and very large vocabulary systems are telephone/credit card number recognition, command and control, and dictation systems respectively.

• Spectral bandwidth: The bandwidth of a telephone/mobile channel is limited to 300-3400 Hz; the channel therefore attenuates frequency components outside this passband. Such speech is called narrowband speech. In contrast, normal speech that does not go through such a channel is called wideband speech; it contains a wider spectrum, limited only by the sampling frequency. As a result, the recognition accuracy of ASR systems trained with wideband speech is better. Moreover, an ASR system trained with narrowband speech performs poorly on wideband speech, and vice versa.

2 Why is speech recognition difficult?

Despite decades of research in the area, the performance of ASR systems is nowhere near human capabilities. Why is speech recognition so difficult? This is primarily due to the variability of the speech signal. Speech recognition is essentially a decoding process. Figure 1 illustrates the encoding of a message into a speech waveform and the decoding of the message by a recognition system. Speech can be modelled as a sequence of linguistic units called phonemes. For example, each character of the Devanagari alphabet essentially represents a phoneme. In order to better appreciate the difficulties associated with ASR, it is necessary to understand the production of speech sounds and the sources of variability.
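Before turning to speech production, the narrowband/wideband distinction of Section 1.2 can be made concrete with a toy sketch: an ideal 300-3400 Hz passband applied in the frequency domain. The tones, sampling rate and naive DFT below are illustrative assumptions of mine, not taken from the article.

```python
# Toy illustration of narrowband (telephone) speech: components outside the
# 300-3400 Hz passband are removed. All numeric choices are illustrative.
import cmath
import math

fs, N = 8000, 800                             # 8 kHz sampling, 0.1 s of signal
x = [math.sin(2 * math.pi * 200 * n / fs)     # 200 Hz tone: outside passband
     + math.sin(2 * math.pi * 1000 * n / fs)  # 1 kHz tone: inside passband
     for n in range(N)]

def dft(x):
    """Naive discrete Fourier transform (O(N^2); fine for a small demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

X = dft(x)
for k in range(N):
    f = k * fs / N if k <= N // 2 else (k - N) * fs / N   # signed bin frequency
    if not (300 <= abs(f) <= 3400):
        X[k] = 0                     # the "telephone channel" removes this bin

# The 200 Hz component (bin 20) is now gone; the 1 kHz component (bin 100)
# survives -- exactly the attenuation pattern that makes speech "narrowband".
```

An ASR front end trained on such filtered spectra never sees energy below 300 Hz or above 3400 Hz, which is why narrowband and wideband models do not transfer to each other's data.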
2.1 Production of speech sounds

A knowledge of the generation of various speech sounds will help us understand the spectral and temporal properties of speech sounds. This, in turn, will enable us to characterise sounds in terms of features which aid in the recognition and classification of speech sounds.

Figure 1: Message encoding and decoding. (Source: [2])

Sounds are generated when air from the lungs excites the air cavity of the mouth. Figure 2 shows the human anatomy relevant to speech production. In the production of a voiced sound, say the vowel /a/, the glottis opens and closes periodically; consequently, puffs of air from the lungs excite the oral cavity. During the closure periods of the glottis, resonances are set up in the oral cavity. The waveform coming out of the lips bears the signature of both the excitation and the resonant cavity. The frequency of vibration of the glottis is popularly known as the pitch frequency.

For the production of a nasal sound, the oral passage is blocked, and the velum that normally blocks the nasal passage is lifted. During the production of unvoiced sounds, the glottis does not vibrate and remains open; the oral cavity is excited by an aperiodic source. For example, in the production of /s/, air rushing out of a narrow constriction between the tongue and the upper teeth excites the cavity in front of the teeth.

In order to produce different sounds, a speaker changes the size and shape of the oral cavity by moving articulators such as the tongue, jaw and lips. The resonant oral tract is generally modelled as a time-varying linear filter. Such a model of speech production is called a source-filter model. The excitation source can be periodic (as in voiced sounds), aperiodic (example: /s/), or both (example: /z/).

For the vowel /a/, the vocal tract can be approximated, during the closed phase of glottal vibration, as a uniform tube closed at one end. The fundamental mode of resonance corresponds to a quarter wavelength.
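The source-filter model above can be sketched in a few lines: a periodic impulse train stands in for the glottal source, and a two-pole resonator stands in for a single vocal-tract resonance. All parameter values (sampling rate, pitch, resonance frequency and bandwidth) are illustrative assumptions, not from the article.

```python
# Minimal source-filter sketch: periodic glottal source -> resonant filter.
# All numeric values are illustrative assumptions.
import math

fs = 8000            # sampling rate (Hz)
f0 = 100             # pitch: glottal vibration frequency (Hz)
fr, bw = 500, 100    # one vocal-tract resonance: centre frequency, bandwidth (Hz)

# Voiced excitation: one unit impulse per pitch period (every fs // f0 samples).
period = fs // f0
source = [1.0 if n % period == 0 else 0.0 for n in range(fs)]  # 1 s of source

# Two-pole resonator at fr Hz with bandwidth bw Hz, applied as the
# difference equation  y[n] = x[n] - a1*y[n-1] - a2*y[n-2].
r = math.exp(-math.pi * bw / fs)
a1 = -2.0 * r * math.cos(2.0 * math.pi * fr / fs)
a2 = r * r

speech, y1, y2 = [], 0.0, 0.0
for x in source:
    y = x - a1 * y1 - a2 * y2
    speech.append(y)
    y1, y2 = y, y1

# `speech` rings at roughly fr Hz after each glottal impulse and decays
# between pulses: the waveform carries the signature of source and filter.
```

Changing `f0` changes the pitch without moving the resonance, while changing `fr` moves the formant without changing the pitch, which is exactly the separation the source-filter model is meant to capture.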
If we assume 340 m/s as the speed of sound in air and 17 cm as the length L of the vocal tract from glottis to lips, the fundamental frequency of resonance can be calculated as

    ν = c/λ = c/(4L) = 34000/(4 × 17) = 500 Hz    (1)

The frequencies of the odd harmonics are 1500 Hz, 2500 Hz, etc. If the pitch frequency is 100 Hz, the spectrum of glottal vibration is a line spectrum with peaks at 100, 200, 300 Hz, etc. From the theory of digital filters, it can easily be shown that the log power spectrum of the output of the filter (the speech wave) is the sum of the log spectra of the source and the filter. Figure 3 shows these three spectra for the neutral vowel /a/, for the ideal case represented by Eqn. 1. The top and bottom panels of the figure correspond to pitch (F0) values of 100 and 200 Hz respectively. Notice that although the spectra of the speech waveforms (right panels) appear slightly different due to the different pitch, both correspond to the same vowel. Thus, variation in the speech spectrum due to pitch should be ignored while doing speech recognition.

Figure 2: Human vocal system. (Source: [3])

Figure 3: Ideal spectra of source, filter and output speech for vowel /a/. Top and bottom panels correspond to pitch values of 100 and 200 Hz respectively. (Source: [4])

Figure 4 shows the actual power spectra of two speech sounds of the Hindi word "ki" on a log scale. The light and dark curves show the spectra of the vowel (/i/) and the unvoiced consonant (/k/) respectively. One may note the periodicity of the spectrum of the vowel. This is due to the harmonics of the glottal vibration superimposed on the resonant spectrum of the vocal tract in the log scale. The resonances of the vocal tract give rise to broad major peaks (known as formants) in the spectrum. There is no periodicity in the spectrum of the unvoiced consonant (/k/) because the source of excitation is aperiodic in nature.
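The quarter-wave calculation in Eqn. 1 can be checked directly; a tube closed at one end resonates only at odd multiples of the fundamental:

```python
# Resonances of a uniform tube closed at one end (idealised vocal tract).
# c and L are the values used in the text: 340 m/s (= 34000 cm/s) and 17 cm.

def resonances(c=34000.0, L=17.0, n=3):
    """First n resonance frequencies (Hz): odd multiples of c / (4 L)."""
    f1 = c / (4.0 * L)            # quarter-wave fundamental (Eqn. 1)
    return [(2 * k + 1) * f1 for k in range(n)]

print(resonances())  # -> [500.0, 1500.0, 2500.0]
```

These are the idealised formant positions of the neutral vowel; real formants deviate because the vocal tract is not a uniform tube.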