UFPR Arts Department
Electronic Musicological Review
Vol. 6 / March 2001
USING NEW AND STABLE WAVELET COEFFICIENTS IN INSTRUMENT SOUND ANALYSIS
This paper explores the application of new wavelet coefficients to bassoon and French horn sound analysis. The coefficients were constructed based on musical chromatic intervals and applied to the dilation equation to yield wavelet filter coefficients. The examples provided compare recorded and inverse-transformed signals, suggesting the potential of the model when applied to sound analysis of data segments of interest.
Sound synthesis methods that can reproduce acoustic instruments are based on analysis methods that provide computationally efficient transfer functions. In addition to their application in the reproduction of natural sounds, powerful analysis tools can be used to produce new sounds that can be used in electroacoustic music composition.
During the last two decades there has been great interest in functions that, as transforms, can be used in the analysis and synthesis of sound events. Amongst them, wavelets have caught the attention of scientists in many areas who search for new methods for signal analysis, filtering and reconstruction [1,18]. Wavelets have been used in image processing [1,2], restoration of recordings [13], seismology [14], and economics [11], amongst other subjects. Like the Discrete Fourier Transform (DFT), the Discrete Wavelet Transform (DWT) is a set of linear operations performed on a vector of length 2^{n} that yields another vector of the same size; it is an orthogonal transform and therefore invertible, like other such transforms. The idea in wavelet filtering is to use variable scales in the time and frequency domains, which can sparsely represent each data set when a proper wavelet function is applied. This means that each data interval can be amplified for further analysis of each component in its corresponding scale.
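The linearity and invertibility mentioned above can be illustrated with the simplest wavelet, the Haar wavelet. The sketch below is our own illustration (not part of the original analysis): one transform level splits a length-2^{n} vector into pairwise averages and differences, and the inverse step recovers the signal exactly.

```python
import math

def haar_step(x):
    """One level of the Haar DWT: scaled pairwise sums (low-pass)
    and differences (high-pass); the 1/sqrt(2) factor keeps the
    transform orthogonal, hence energy-preserving."""
    s = 1 / math.sqrt(2)
    low  = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    high = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return low, high

def haar_inverse_step(low, high):
    """Invert one Haar step exactly (possible because the transform
    is orthogonal)."""
    s = 1 / math.sqrt(2)
    x = []
    for l, h in zip(low, high):
        x.append(s * (l + h))
        x.append(s * (l - h))
    return x

signal = [4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 0.0, 2.0]  # length 2^3
low, high = haar_step(signal)
restored = haar_inverse_step(low, high)
```

The low-pass half is a coarse version of the signal at twice the scale; the high-pass half holds the detail needed to reverse the step, which is the property the multiresolution scheme below relies on.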
Grossman and Morlet's work [12] at the end of the 1970s played an important role in engineering and mathematics research into bases for harmonic analysis in other function spaces. Working with square-integrable functions of the form ψ(ax + b), they tackled the problem of constructing a basis for L^{2}(R)^{1} from a discrete set of the parameters a and b. They showed that, if there is a suitable function ψ, then its dilated and translated copies

ψ_{a,b}(x) = |a|^{1/2} ψ(ax + b)   (1)

can be used as an orthogonal basis. Alternatively, each finite-energy signal f can be represented as a linear combination of the ψ_{a,b}, and its coefficients can be represented by the scalar products ⟨f, ψ_{a,b}⟩, which measure the fluctuation of the signal f(t) around b on scale a.
Meyer [17] found a special smooth function, which he called a wavelet due to its oscillation around the x axis. This function tends to zero at ±∞, and its discrete orbit yields a Hilbert basis for L^{2}(R) as well as an unconditional basis for all the Banach spaces^{2}. Lemarié [15] also used it to prove basic facts of the Calderón–Zygmund algebra, but it was only a few years ago that Daubechies [5] developed an algorithm to construct other wavelets for some particular spaces of functions, including orthonormal wavelets with compact support.
The advantage of the DWT over DFT methods is that Fourier bases are frequency-dependent but not time-dependent, which means that small changes in the frequency domain produce changes all over the time domain. Wavelets depend on the frequency domain via dilation^{3} and on the time domain via translation,^{4} which is an advantage. For a more detailed analysis please refer to [20].
The DWT is the most recent solution to some of the DFT's limitations, since it can localize specific events in a signal through scaling and modular windowing of the function. Wavelet transforms allow a more compact representation than other transform methods and can be used for the analysis, synthesis and compression of signals, images and other numerical data.
In recent research reports [7,9] we have presented a new set of coefficients based on a chromatic subdivision of the musical scale and shown its applicability to sound signal analysis. We have also presented results of a combined signal-processing method using the Short Time Fourier Transform (STFT) to test results achieved with the DWT [8].
Multiresolution analysis (MRA) [6] can be undertaken through a scaling function φ, which generates a sequence of nested subspaces V_{j} in L^{2}(R) such that

... ⊂ V_{2} ⊂ V_{1} ⊂ V_{0} ⊂ V_{−1} ⊂ V_{−2} ⊂ ...

and which satisfies the following properties: the union of the V_{j} is dense in L^{2}(R), their intersection is {0}, and f(x) ∈ V_{j} if and only if f(2x) ∈ V_{j−1}.

The MRA is generated by the scaling function φ, where, for each j, the subspace V_{j} is generated by the functions φ_{j,k}(x) = 2^{−j/2} φ(2^{−j}x − k), for k ∈ Z. Since {φ_{j,k}} is an orthonormal basis of V_{j} for each j ∈ Z, if we assume that φ ∈ V_{0} ⊂ V_{−1}, then we can write:
φ(x) = Σ_{k} h_{k} φ_{−1,k}(x)   (2)

or, expanding φ_{−1,k}(x) = √2 φ(2x − k),

φ(x) = √2 Σ_{k} h_{k} φ(2x − k)   (3)
Equation 3 is called the dilation equation, and its coefficients h_{k} are the wavelet filter coefficients of the MRA^{5}.
The Fourier transform of the dilation equation can be calculated from

Φ(ω) = m_{0}(ω/2) Φ(ω/2),   where   m_{0}(ω) = (1/√2) Σ_{k} h_{k} e^{−ikω}   (4)

and Φ denotes the Fourier transform of φ. As shown by Daubechies et al. [6], the coefficients of a wavelet filter must satisfy the equation:

|m_{0}(ω)|^{2} + |m_{0}(ω + π)|^{2} = 1   (5)
and, once {φ(2x − k)} is an orthogonal system [18], h_{k} can be written as

h_{k} = √2 ∫ φ(x) φ(2x − k) dx   (6)

     = ⟨φ, φ_{−1,k}⟩   (7)
The orthogonality of φ,

∫ φ(x) φ(x − k) dx = δ_{0,k}   (8)

where δ is the Kronecker symbol^{6}, can be calculated from the dilation equation.
Equation 9 can be expressed as

Σ_{n} h_{n} h_{n−2k} = δ_{0,k}   (10)

where k is an arbitrary integer^{7}. If we assume that φ can be integrated and

I = ∫ φ(x) dx ≠ 0,   (11)

we can integrate both sides of the dilation equation as

∫ φ(x) dx = √2 Σ_{k} h_{k} ∫ φ(2x − k) dx = (√2/2) Σ_{k} h_{k} ∫ φ(x) dx.   (12)
Therefore, if we keep the first and the third parts of equation 12 and divide the expression by I (see equation 11), we obtain:

Σ_{k} h_{k} = √2   (13)
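The two conditions derived above, the double-shift orthogonality of equation 10 and the normalization of equation 13, can be checked numerically. As a sketch (our illustration), the well-known Daubechies-4 filter coefficients satisfy both:

```python
import math

# Daubechies-4 filter coefficients (a standard published set, used here
# only to check the conditions derived in equations 10 and 13)
r3 = math.sqrt(3)
h = [(1 + r3), (3 + r3), (3 - r3), (1 - r3)]
h = [c / (4 * math.sqrt(2)) for c in h]

# Equation 13: the filter coefficients sum to sqrt(2)
total = sum(h)

def overlap(h, k):
    """Equation 10: sum_n h_n h_{n-2k}, which should equal the
    Kronecker delta delta_{0,k}."""
    return sum(h[n] * h[n - 2 * k]
               for n in range(len(h)) if 0 <= n - 2 * k < len(h))
```

Here `overlap(h, 0)` evaluates to 1 and `overlap(h, 1)` to 0, matching δ_{0,k}.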
We carried out the analysis of recorded notes of the French horn and bassoon, focusing on the attack and steady-state transients. Analyses were performed as described in [7]; in short, this entailed application of the DWT, splitting of the low-pass window, thresholding, high-pass recovery and the inverse DWT. Using Pollen's parametric equations [19] we calculated two sets of wavelet coefficients (chroma4 and chroma6). Cody and Daubechies [3,6] also describe the calculation of the space parameter^{8} of a system h_{n}, n = −2, ..., 3, which can be obtained from:
h_{−2}  =  [(1 + cos α + sin α)(1 − cos β − sin β) + 2 sin β cos α] / 4   (14)

h_{−1}  =  [(1 − cos α + sin α)(1 + cos β − sin β) − 2 sin β cos α] / 4   (15)

h_{0}  =  [1 + cos(α − β) + sin(α − β)] / 2   (16)

h_{1}  =  [1 + cos(α − β) − sin(α − β)] / 2   (17)

h_{2}  =  1 − h_{−2} − h_{0}   (18)

h_{3}  =  1 − h_{−1} − h_{1}   (19)
Using one choice of the angle parameters α and β we obtained a set of 4 coefficients we named chroma4, and using another choice we obtained a set of 6 coefficients we named chroma6.
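Pollen's two-angle parametrization can be sketched as a small function. The trigonometric form below follows Cody's published presentation; the specific chroma4/chroma6 angle values are not reproduced here, so the angles in the demonstration line are only a familiar reference point (β = 0, α = π/3 yields the Daubechies-4 coefficients in this Σh_{k} = 2 normalization), not the chroma values:

```python
import math

def pollen_coeffs(alpha, beta):
    """Six-coefficient wavelet filter from Pollen's two-angle
    parametrization (in the form given by Cody); indices run
    h_{-2} ... h_{3}.  When the leading coefficients vanish the
    system reduces to a shorter filter."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    h = [0.0] * 6
    h[0] = ((1 + ca + sa) * (1 - cb - sb) + 2 * sb * ca) / 4          # h_{-2}
    h[1] = ((1 - ca + sa) * (1 + cb - sb) - 2 * sb * ca) / 4          # h_{-1}
    h[2] = (1 + math.cos(alpha - beta) + math.sin(alpha - beta)) / 2  # h_0
    h[3] = (1 + math.cos(alpha - beta) - math.sin(alpha - beta)) / 2  # h_1
    h[4] = 1 - h[0] - h[2]                                            # h_2
    h[5] = 1 - h[1] - h[3]                                            # h_3
    return h

# beta = 0, alpha = pi/3 reproduces the Daubechies-4 filter
# (with coefficient sum 2 in this normalization)
d4 = pollen_coeffs(math.pi / 3, 0.0)
```

By construction the six coefficients always sum to 2 regardless of the angles, which is the normalization used in this form of the parametrization.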
Signals were converted from RIFF/WAV format to a numeric text vector and split into files of 4096 samples taken from the attack and steady-state parts. Using an algorithm similar to the one described by Mallat [16], each transformation was split into a high-pass and a low-pass segment. The high-pass segment was discarded and the low-pass segment was used as input to a new transformation. We studied the signal reconstruction from the low-pass segment at each step (decimated by a factor of 2, 4 and 8) until this segment could yield a signal reconstruction without significant loss of quality. Figure 1 shows the high-pass segments for each decimation. The results of the transform and the recovery of the signal were compared to the original sound using STFT analyses.
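The splitting scheme above can be sketched as a pyramid of filter-and-decimate steps. This is our minimal illustration of the idea, not the paper's implementation: it uses circular convolution, derives the high-pass filter from the low-pass one by the standard quadrature-mirror rule, and re-transforms only the low-pass band at each step.

```python
import math

def analyze(x, h):
    """One pyramid step: circular convolution with the low-pass filter h
    and its quadrature mirror g, each decimated by a factor of 2."""
    g = [((-1) ** k) * c for k, c in enumerate(reversed(h))]  # high-pass from h
    n = len(x)
    low  = [sum(h[k] * x[(2 * i + k) % n] for k in range(len(h)))
            for i in range(n // 2)]
    high = [sum(g[k] * x[(2 * i + k) % n] for k in range(len(g)))
            for i in range(n // 2)]
    return low, high

def pyramid(x, h, steps):
    """Repeatedly split off the high-pass band and re-transform the
    low-pass band, as in the splitting scheme described in the text."""
    bands = []
    for _ in range(steps):
        x, high = analyze(x, h)
        bands.append(high)
    return x, bands
```

With an orthogonal filter the energy of the signal is exactly distributed between the final low-pass band and the discarded high-pass bands, which is why reconstruction quality can be judged from the low-pass band alone.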
Figure 1: Example of the splitting algorithm applied to the transformed signal (the low-pass segment was used as the scaling function).
Wavelet coefficients were used as complementary low-pass and high-pass filters, which can be written as a Finite Impulse Response (FIR) filter by the equation:

y_{t} = Σ_{k} c_{k} x_{(t−k)}   (20)

where y_{t} is the output of the signal x_{k} convolved with the coefficients c_{k}.
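A direct-form FIR filter of this kind is a few lines of code. The sketch below is our illustration of the convolution sum, with the filter truncated at the signal boundary:

```python
def fir(x, c):
    """Direct-form FIR filter: y_t = sum_k c_k x_{t-k}, i.e. the signal
    convolved with the filter coefficients c (truncated at t < 0)."""
    y = []
    for t in range(len(x)):
        y.append(sum(c[k] * x[t - k] for k in range(len(c)) if t - k >= 0))
    return y
```

Feeding the filter a unit impulse returns the coefficients themselves, which is the usual sanity check for an FIR implementation.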
Although we analysed all the notes of each instrument, we present examples for low-fundamental-frequency sounds of the bassoon and characteristic sounds of the French horn, since these sounds are more difficult to analyse using traditional Fourier methods. Low sounds of the bassoon present an overlap in their spectra due to the small difference between their harmonics, which cannot be well visualized in Fourier spectra. Additionally, the high-energy harmonics are found around 500 Hz, which, for low sounds, can be located far away from the fundamental frequency. Sounds of the French horn were chosen because of their harmonic richness.
The harmonic energies of each segment refer to a 100-sample window of the STFT of the attack and steady-state sections of its envelope. Windows are represented by t_{n}, and values are absolute magnitudes around n for n = 1, ..., 5. Table 1 shows the harmonic frequency energies of each window (t_{1}, ..., t_{5}) obtained from the original recorded sound of a French horn playing an A and those yielded by the recovered signals after transformation with chroma4 and chroma6. Sound signals reconstructed after the chroma4 and chroma6 wavelet transforms showed no difference from the original recorded sound for any note of the French horn. As we have shown in an earlier study [8], these coefficients yield better results than other DWT coefficients (Haar, Daubechies-4 and Coiflet).
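The window magnitudes used in such comparisons can be computed with a direct DFT over each 100-sample window. The sketch below is our illustration (using a synthetic test tone rather than the instrument recordings), showing how the magnitude spectrum of one analysis window t_{n} is obtained:

```python
import cmath
import math

def window_magnitudes(x, start, size=100):
    """Magnitude spectrum of one 100-sample analysis window, computed
    with a direct DFT (a stand-in for one STFT frame); returns the
    magnitudes of the first size//2 frequency bins."""
    w = x[start:start + size]
    n = len(w)
    return [abs(sum(w[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)))
            for k in range(n // 2)]

# synthetic test tone: a sinusoid falling exactly on bin 5 of the window
tone = [math.sin(2 * math.pi * 5 * j / 100) for j in range(100)]
mags = window_magnitudes(tone, 0)
```

For a real sound, comparing these per-window magnitudes between the original and the inverse-transformed signal gives the kind of harmonic-energy table described above.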

Figure 2: Recorded sound spectrum of the attack section of a bassoon playing a low C.

Figure 3: Recovered sound spectrum of the attack section of a bassoon playing a low C, after transformation using the chroma4 wavelet coefficients.

Figure 4: Recovered sound spectrum of the attack section of a bassoon playing a low C, after transformation using the chroma6 wavelet coefficients.
We also tested transforms with more than four steps and compared them to other wavelet coefficients (e.g. Haar, Daubechies-4, Daubechies-6 and Coiflet), and we obtained some sound distortion. Analysis showed a pattern repetition of the centroid section on a harmonic basis. This is more characteristic when using the chroma4 coefficients and suggests that, once some standards are established, it can be used to study instrument sound signatures or quality. We are carrying out further research into this issue.
The set of new wavelet coefficients proved to be stable and yields good results for instrument sound analysis. For all tested signals, results were very consistent, allowing the decomposition, analysis and reconstruction of sound signals, as well as very accurate identification of sound events at specific data segments of interest. Signal compression needs to be studied to verify the model's application in this domain. Since the transforms and data recovery are precise, windowing methods and convolution with these coefficients can also ease spectral analysis and provide means for the calculation of transfer functions to be used in synthesizers and music composition. The distortion produced by transformations above 8 steps of the MRA needs to be studied to verify its applicability in sound centroid analysis and in quality control of acoustic instrument construction.
Available at http://www.demac.ufu.br/lab/papers/fftwav.ps
This research was sponsored by the Conselho Nacional de Pesquisa (CNPq), Universidade Federal de Uberlândia (UFU), and the Programa Institucional de Bolsas de Iniciação Científica (PIBIC/CNPq). We also thank the Centro de Processamento de Alto Desempenho de Minas Gerais (CENAPAD-MG), where a great part of this work was undertaken.
João Cândido Dovicchi is an Assistant Professor at the Federal University of Uberlândia and a Researcher at the Center for Advanced Sonic Computing (NACS/UFU).