Solutions Manual for Statistical and Adaptive Signal Processing, by Dimitris G. Manolakis, Vinay K. Ingle, and Stephen M. Kogon
Chapter 2
Discrete-Time Signals and Systems

2.1 Sampling frequency $F_s = 100$ samples/s.

(a) The continuous-time signal $x_c(t) = 2\cos(40\pi t + \pi/3)$ has a frequency of 20 Hz. Hence

$x(n) = x_c(t)\big|_{t=n/F_s} = 2\cos\!\left(\frac{40\pi n}{100} + \frac{\pi}{3}\right)$

which implies that $\omega_0 = \frac{40\pi}{100} = \frac{2\pi}{5}$.

(b) Steady-state response $y_{c,ss}(t)$: given that $h(n) = 0.8^n u(n)$, the frequency response function is

$H(e^{j\omega}) = \frac{1}{1 - 0.8e^{-j\omega}}$

Since $\omega_0 = 2\pi/5$, the system response at $\omega_0$ is

$H(e^{j\omega_0}) = \frac{1}{1 - 0.8e^{-j2\pi/5}} = 0.9343\, e^{-j0.2517\pi}$

Hence $y_{ss}(n) = 2(0.9343)\cos(2\pi n/5 + \pi/3 - 0.2517\pi)$, or

$y_{c,ss}(t) = 1.8686 \cos(40\pi t + \pi/3 - 0.2517\pi) = 1.8686\cos(40\pi t + 0.0816\pi)$

(c) Any $x_c(t)$ that has the same digital frequency $\omega_0$ after sampling and the same phase shift as above will have the same steady-state response. Since $F_s = 100$ samples/s, the two other frequencies are 120 and 220 Hz.

2.2 The discrete-time signal is $x(n) = A\cos(\omega_0 n)\, w_R(n)$, where $w_R(n)$ is an $N$-point rectangular window.

(a) The DTFT of $x(n)$ is determined as

$X(e^{j\omega}) = \mathcal{F}[A\cos(\omega_0 n)\, w_R(n)] = \frac{A}{2}\,\mathcal{F}[e^{j\omega_0 n} w_R(n)] + \frac{A}{2}\,\mathcal{F}[e^{-j\omega_0 n} w_R(n)] \quad (1)$

Using the DTFT of $w_R(n)$,

$\mathcal{F}[w_R(n)] = \sum_{n=0}^{N-1} e^{-j\omega n} = e^{-j\omega(N-1)/2}\,\frac{\sin(\omega N/2)}{\sin(\omega/2)} \quad (2)$

and the fact that a complex exponential causes a translation in the frequency domain, (1) can be written, after a fair amount of algebra and trigonometry, as $X(e^{j\omega}) = X_R(e^{j\omega}) + jX_I(e^{j\omega})$, where

$X_R(e^{j\omega}) = \frac{A}{2}\cos\!\left[\frac{(\omega-\omega_0)(N-1)}{2}\right]\frac{\sin[(\omega-\omega_0)N/2]}{\sin[(\omega-\omega_0)/2]} + \frac{A}{2}\cos\!\left[\frac{(\omega+\omega_0)(N-1)}{2}\right]\frac{\sin\{[\omega-(2\pi-\omega_0)]N/2\}}{\sin\{[\omega-(2\pi-\omega_0)]/2\}} \quad (3)$

$X_I(e^{j\omega}) = -\frac{A}{2}\sin\!\left[\frac{(\omega-\omega_0)(N-1)}{2}\right]\frac{\sin[(\omega-\omega_0)N/2]}{\sin[(\omega-\omega_0)/2]} - \frac{A}{2}\sin\!\left[\frac{(\omega+\omega_0)(N-1)}{2}\right]\frac{\sin\{[\omega-(2\pi-\omega_0)]N/2\}}{\sin\{[\omega-(2\pi-\omega_0)]/2\}} \quad (4)$

(b) $N = 32$ and $\omega_0 = \pi/4$. The DTFT plots are shown in Figure 2.2bc.

(c) The DFT samples are also shown in Figure 2.2bc.

[Figure 2.2bc: Real and imaginary parts of the 32-point DTFT and DFT ($\omega_0 = \pi/4$)]

(d) $N = 32$ and $\omega_0 = 1.1\pi/4$. The plots are shown in Figure 2.2d. The additional spectral content in this second case ($\omega_0 = 1.1\pi/4$) is a result of the periodic extension implied by the DFT: for a 32-point sequence the end of each period no longer lines up with the beginning of the next. This produces sharp edges in the periodic extension and hence additional frequencies (leakage) in the spectrum.

[Figure 2.2d: Real and imaginary parts of the 32-point DTFT ($\omega_0 = 1.1\pi/4$)]

2.3 The sequence is $x(n) = \cos(\pi n/4)$, $0 \le n \le 15$.

(a) The 16-point DFT is shown in the top-left plot of Figure 2.3.

(b) The 32-point DFT is shown in the top-right plot of Figure 2.3.

(c) The 64-point DFT is shown in the bottom plot of Figure 2.3.

(d) The zero padding results in a smaller frequency sampling interval, so there are more samples in the DFT representation; the shape of the underlying DTFT continues to fill in as $N$ increases from 16 to 64.

[Figure 2.3: The 16-, 32-, and 64-point DFTs of $x(n) = \cos(\pi n/4)$]
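The following MATLAB fragment is a minimal sketch (not part of the original solution) that reproduces these three DFTs and illustrates how zero padding samples the same underlying DTFT more densely:

% Zero-padded DFTs of x(n) = cos(pi*n/4), 0 <= n <= 15
n = 0:15;
x = cos(pi*n/4);
Nfft = [16 32 64];
for i = 1:length(Nfft)
    N = Nfft(i);
    X = fft(x, N);                           % fft zero-pads x to length N
    subplot(3,1,i);
    stem(2*pi*(0:N-1)/N, abs(X), 'filled');  % DFT magnitude at omega_k = 2*pi*k/N
    title(sprintf('%d-point DFT', N));
    xlabel('\omega_k (rad/sample)'); ylabel('|X(k)|');
end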
2.4 $x(n) = \{1, 2, 3, 4, 3, 2, 1\}$; $h(n) = \{-1, 0, 1\}$

(a) Convolution using the matrix-vector multiplication approach:

$y = Xh = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 2 & 1 \\ 4 & 3 & 2 \\ 3 & 4 & 3 \\ 2 & 3 & 4 \\ 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 & -2 & -2 & -2 & 0 & 2 & 2 & 2 & 1 \end{bmatrix}^T$

(b) The MATLAB function convtoep:

function [y] = convtoep(x,h)
% Convolution using Toeplitz arrays
% y = convtoep(x,h)
nx = length(x); x = reshape(x,nx,1);
nh = length(h); h = reshape(h,nh,1);
X = toeplitz([x; zeros(nh-1,1)],[x(1) zeros(1,nh-1)]);
y = X*h;

(c) Verification: the output of convtoep(x,h) agrees with that of the built-in conv(x,h).

2.5 $x(n) = 0.9^n u(n)$

(a) Analytical evaluation of $x(n) * x(n)$:

$y(n) = x(n)*x(n) = \sum_{k=-\infty}^{\infty} x(k)x(n-k) = \sum_{k=-\infty}^{\infty} (0.9)^k u(k)\,(0.9)^{n-k}u(n-k) = \sum_{k=0}^{n} (0.9)^n$

$y(n) = (n+1)(0.9)^n u(n)$

This sequence is shown in the leftmost plot of Figure 2.5.

(b) Convolution using the conv function: the sequence $x(n)$ is truncated to 51 samples, and the convolution is computed as

n = 0:50; x = (0.9).^n; y = conv(x,x);

This sequence is shown in the center plot of Figure 2.5.

(c) Convolution using the filter function: to use this function, one of the two copies of $x(n)$ must be represented by the coefficients of an equivalent difference equation,

$x(n) = \delta(n) + 0.9\, x(n-1)$

so the filter coefficients are b = 1, a = [1,-0.9], and the convolution is computed as

y = filter(1,[1,-0.9],x);

This sequence is shown in the rightmost plot of Figure 2.5.

(d) The three plots appear to be identical. However, the conv result has the largest error since both sequences are truncated; the filter function is best suited for infinite-length sequences.

[Figure 2.5: $x(n) = (0.9)^n u(n)$ convolved with itself, computed analytically, with conv, and with filter]

2.6 $H_{ap}(z)$ is a causal and stable all-pass system with input $x(n)$ and output $y(n)$. Then

$\sum_{n=0}^{\infty} |y(n)|^2 = \sum_{n=0}^{\infty} |x(n)|^2$

Given $n_0 \ge 0$, write $x(n) = x_0(n) + x_1(n)$, where $x_0(n) = x(n)$ for $0 \le n \le n_0$ and $x_0(n) = 0$ for $n > n_0$, and let $y_i(n) = h_{ap}(n) * x_i(n)$, so that $y(n) = y_0(n) + y_1(n)$. Since the system is causal and $x_1(n) = 0$ for $n \le n_0$, we have $y(n) = y_0(n)$ for $0 \le n \le n_0$. Hence

$\sum_{n=0}^{n_0} |y(n)|^2 = \sum_{n=0}^{n_0} |y_0(n)|^2 \le \sum_{n=0}^{\infty} |y_0(n)|^2 = \sum_{n=0}^{\infty} |x_0(n)|^2 = \sum_{n=0}^{n_0} |x(n)|^2$

that is, a causal all-pass system can only delay the energy of its input.
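A quick numerical check of this partial-energy inequality, sketched in MATLAB with an arbitrarily chosen first-order all-pass filter and a random input (not part of the original solution):

% Partial-energy check for a causal, stable all-pass system (sketch)
alpha = 0.8;                        % any real value with |alpha| < 1
b = [alpha 1];  a = [1 alpha];      % H(z) = (alpha + z^-1)/(1 + alpha*z^-1) is all-pass
x = randn(1,500);                   % arbitrary finite-energy causal input
y = filter(b, a, x);                % output over the same time span
Ex = cumsum(abs(x).^2);             % partial input energy  sum_{n=0}^{n0} |x(n)|^2
Ey = cumsum(abs(y).^2);             % partial output energy sum_{n=0}^{n0} |y(n)|^2
all(Ey <= Ex + 1e-10)               % returns 1: the inequality holds for every n0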
2.7 Monotone phase-response property of a causal and stable PZ-AP (pole-zero all-pass) system.

(a) A real first-order system: consider

$H(z) = \frac{p - z^{-1}}{1 - p z^{-1}}, \quad |p| < 1$

Then

$\angle H(e^{j\omega}) = \arctan\frac{\sin\omega}{\cos\omega - 1/p} - \arctan\frac{\sin\omega}{\cos\omega - p}$

Clearly, $\angle H(e^{j\omega})$ decreases from $\angle H(e^{j0}) = \pi$ to $\angle H(e^{j2\pi}) = -\pi$. To show that it decreases monotonically, consider

$\frac{d}{d\omega}\angle H(e^{j\omega}) = \frac{d}{d\omega}\left[\arctan\frac{\sin\omega}{\cos\omega - 1/p} - \arctan\frac{\sin\omega}{\cos\omega - p}\right] = \frac{p^2 - 1}{1 + p^2 - 2p\cos\omega}$

which is negative for $|p| < 1$. This proves that $\angle H(e^{j\omega})$ decreases monotonically from $\angle H(e^{j0}) = \pi$ to $\angle H(e^{j2\pi}) = -\pi$.

(b) A real second-order (complex-conjugate pole-pair) system: the system function is

$H(z) = \frac{re^{j\theta} - z^{-1}}{1 - re^{-j\theta} z^{-1}} \cdot \frac{re^{-j\theta} - z^{-1}}{1 - re^{j\theta} z^{-1}}, \quad 0 < r < 1$

Consider the term

$H_1(z) \triangleq \frac{re^{j\theta} - z^{-1}}{1 - re^{j\theta} z^{-1}}$

The angle calculations are similar to those for the real first-order case if we rotate the coordinate system by the angle $\theta$, that is,

$\angle H_1(e^{j\omega}) = \left[\arctan\frac{\sin\omega}{\cos\omega - 1/r} + \theta\right] - \left[\arctan\frac{\sin\omega}{\cos\omega - r} + \theta\right] = \arctan\frac{\sin\omega}{\cos\omega - 1/r} - \arctan\frac{\sin\omega}{\cos\omega - r}$

Thus, following the arguments in (a), we conclude that $\angle H_1(e^{j\omega})$ decreases monotonically from $\angle H_1(e^{j0})$ to $\angle H_1(e^{j0}) - \pi$. Similarly, for the conjugate term

$H_2(z) \triangleq \frac{re^{-j\theta} - z^{-1}}{1 - re^{-j\theta} z^{-1}}$

$\angle H_2(e^{j\omega})$ decreases monotonically from $\angle H_2(e^{j0})$ to $\angle H_2(e^{j0}) - \pi$. Finally, since

$\angle H(e^{j\omega}) = \angle H_1(e^{j\omega}) + \angle H_2(e^{j\omega})$

we conclude that $\angle H(e^{j\omega})$ decreases monotonically from $\angle H(e^{j0})$ to $\angle H(e^{j0}) - 2\pi$.

(c) Generalizing the above result to a real, causal, and stable PZ-AP system with $P$ pole-zero pairs, we can show that $\angle H(e^{j\omega})$ decreases monotonically from $\angle H(e^{j0})$ to $\angle H(e^{j0}) - 2\pi P$.

2.8 Minimum group delay property: consider the systems

$H_{min}(z) = \frac{1 - 0.25z^{-1}}{1 + 0.5z^{-1}}, \qquad H_{max}(z) = \frac{0.25 - z^{-1}}{0.5 + z^{-1}}, \qquad H_{mix}(z) = \frac{1 - 0.25z^{-1}}{0.5 + z^{-1}}$

(a) The group delay responses are shown in Figure 2.8.

[Figure 2.8: Group delay plots of Hmax, Hmin, and Hmix]

(b) Proof of the minimum group delay property.

2.9 Minimum- and maximum-phase components.

(a)

$R_y(z) = \frac{1 - 2.5z^{-1} + z^{-2}}{1 - 2.05z^{-1} + z^{-2}} = \frac{(1 - \tfrac{1}{2}z^{-1})(1 - 2z^{-1})}{(1 - \tfrac{4}{5}z^{-1})(1 - \tfrac{5}{4}z^{-1})}$

Hence the minimum-phase component is $\frac{1 - \frac{1}{2}z^{-1}}{1 - \frac{4}{5}z^{-1}}$ and the maximum-phase component is $\frac{1 - 2z^{-1}}{1 - \frac{5}{4}z^{-1}}$.

(b)

$R_y(z) = \frac{3z^{2} - 10 + 3z^{-2}}{3z^{2} + 10 + 3z^{-2}} = \frac{(1 - \tfrac{1}{3}z^{-2})(1 - 3z^{-2})}{(1 + \tfrac{1}{3}z^{-2})(1 + 3z^{-2})}$

Hence the minimum-phase component is $\frac{1 - \frac{1}{3}z^{-2}}{1 + \frac{1}{3}z^{-2}}$ and the maximum-phase component is $\frac{1 - 3z^{-2}}{1 + 3z^{-2}}$.

2.10 Consider the all-pass system function

$H_{ap}(z) = \frac{1 - \alpha z^{-1}}{z^{-1} - \alpha^*}, \quad |\alpha| < 1$

(a)

$|H_{ap}(z)|^2 = H_{ap}(z)H_{ap}^*(z) = \left(\frac{1 - \alpha z^{-1}}{z^{-1} - \alpha^*}\right)\left(\frac{1 - \alpha^* (z^{-1})^*}{(z^{-1})^* - \alpha}\right) = \left(\frac{1 - \alpha z^{-1}}{z^{-1} - \alpha^*}\right)\left(\frac{1 - \alpha^* \frac{z}{|z|^2}}{\frac{z}{|z|^2} - \alpha}\right)$

where $(z^{-1})^* = z/|z|^2$. Multiplying numerator and denominator by $|z|^2$,

$|H_{ap}(z)|^2 = \frac{|z|^2 - \alpha z^{-1}|z|^2 - \alpha^* z + |\alpha|^2}{1 - \alpha^* z - \alpha z^{-1}|z|^2 + |\alpha|^2 |z|^2}$

(b) With

$D_{|H|^2}(z) = |z|^2 - \alpha z^{-1}|z|^2 - \alpha^* z + |\alpha|^2, \qquad A_{|H|^2}(z) = 1 - \alpha^* z - \alpha z^{-1}|z|^2 + |\alpha|^2 |z|^2$

we obtain

$D_{|H|^2}(z) - A_{|H|^2}(z) = |z|^2 + |\alpha|^2 - 1 - |\alpha|^2|z|^2 = (|z|^2 - 1)(1 - |\alpha|^2)$

(c) Using the result of part (b) (for $|\alpha| < 1$):

$|H_{ap}(z)| > 1$ when $D_{|H|^2}(z) - A_{|H|^2}(z) > 0$, i.e., when $|z| > 1$;
$|H_{ap}(z)| = 1$ when $D_{|H|^2}(z) - A_{|H|^2}(z) = 0$, i.e., when $|z| = 1$;
$|H_{ap}(z)| < 1$ when $D_{|H|^2}(z) - A_{|H|^2}(z) < 0$, i.e., when $|z| < 1$.

2.11 Consider the system function

$H(z) = \frac{a + b z^{-1} + c z^{-2}}{c + b z^{-1} + a z^{-2}}$

(a) Magnitude of the frequency response function:

$|H(e^{j\omega})|^2 = H(e^{j\omega})H(e^{-j\omega}) = \left(\frac{a + b e^{-j\omega} + c e^{-j2\omega}}{c + b e^{-j\omega} + a e^{-j2\omega}}\right)\left(\frac{a + b e^{j\omega} + c e^{j2\omega}}{c + b e^{j\omega} + a e^{j2\omega}}\right)$

Simplifying, $|H(e^{j\omega})|^2 = 1$ for all $\omega$, so $|H(e^{j\omega})| = 1$ for all $\omega$.

(b) The magnitude and phase response plots are shown in Figure 2.11.

[Figure 2.11: Frequency response (phase in degrees and magnitude in dB) of $H(e^{j\omega})$]
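A brief numerical confirmation of part (a), sketched in MATLAB with arbitrarily chosen real coefficients (not part of the original solution; freqz requires the Signal Processing Toolbox):

% Verify that H(z) = (a + b z^-1 + c z^-2)/(c + b z^-1 + a z^-2) is all-pass
a = 0.5; b = -1.2; c = 2.0;           % arbitrary real coefficients
num = [a b c];
den = [c b a];                        % same coefficients in reverse order
[H, w] = freqz(num, den, 1024);
max(abs(abs(H) - 1))                  % maximum deviation from unit magnitude (machine precision)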
2.12 Consider the third-order FIR system

$H(z) = 12 + 28z^{-1} - 29z^{-2} - 60z^{-3} = 12(1 + 2.5z^{-1})(1 + 1.33z^{-1})(1 - 1.5z^{-1})$

(a) An FIR system with three zeros gives rise to eight ($2^3$) FIR systems with identical magnitude responses. They are obtained by reflecting each zero about the unit circle and applying the appropriate gain factor:

System      z1      z2      z3     Gain   H(z)
Hmax(z)    -2.5    -1.33    1.5     12    12 + 28z^-1 - 29z^-2 - 60z^-3
Hmin(z)    -0.4    -0.75    0.67    60    60 + 29z^-1 - 28z^-2 - 12z^-3
Hmix1(z)   -2.5    -1.33    0.67    18    18 + 57z^-1 + 14z^-2 - 40z^-3
Hmix2(z)   -2.5    -0.75    1.5     16    16 + 28z^-1 - 48z^-2 - 45z^-3
Hmix3(z)   -2.5    -0.75    0.67    24    24 + 62z^-1 - 7z^-2 - 30z^-3
Hmix4(z)   -0.4    -1.33    1.5     30    30 + 7z^-1 - 62z^-2 - 24z^-3
Hmix5(z)   -0.4    -1.33    0.67    45    45 + 48z^-1 - 28z^-2 - 16z^-3
Hmix6(z)   -0.4    -0.75    1.5     40    40 - 14z^-1 - 57z^-2 - 18z^-3

(b) Clearly, from the above table, Hmax(z) and Hmin(z) are the maximum- and minimum-phase systems, respectively.

(c) The energy delay plots are shown in Figure 2.12.

[Figure 2.12: Energy delay of the eight systems in part (a)]

2.13 Consider the system function

$H(z) = \frac{1 + z^{-1} - 6z^{-2}}{1 + \frac{1}{4}z^{-1} - \frac{1}{8}z^{-2}}$

(a) After factorization we obtain

$H(z) = \frac{(1 + 3z^{-1})(1 - 2z^{-1})}{(1 + \frac{1}{2}z^{-1})(1 - \frac{1}{4}z^{-1})}$

The zeros lie outside the unit circle, so the system is not minimum phase.

(b) Minimum-phase system: reflecting the zeros inside the unit circle,

$(1 + 3z^{-1})(1 - 2z^{-1}) \;\Rightarrow\; 6(1 + \tfrac{1}{3}z^{-1})(1 - \tfrac{1}{2}z^{-1}) = 6 - z^{-1} - z^{-2}$

Hence

$H_{min}(z) = \frac{6 - z^{-1} - z^{-2}}{1 + \frac{1}{4}z^{-1} - \frac{1}{8}z^{-2}}$

(c) The given system $H(z)$ is a maximum-phase system, since all of its zeros are outside the unit circle.

2.14 Parallel connection of two all-pass systems:

$H(z) = \frac{3(1 + z^{-1})^3}{12(1 + \frac{1}{2}z^{-1})(1 + \frac{1}{3}z^{-1})} = \frac{3}{12}\left[\frac{A(\frac{1}{2} + z^{-1})}{1 + \frac{1}{2}z^{-1}} + \frac{B z^{-1}(\frac{1}{3} + z^{-1})}{1 + \frac{1}{3}z^{-1}}\right] \;\Rightarrow\; A = 2,\ B = 2$

$H(z) = \frac{1}{2}\,\frac{\frac{1}{2} + z^{-1}}{1 + \frac{1}{2}z^{-1}} + \frac{1}{2}\,\frac{z^{-1}(\frac{1}{3} + z^{-1})}{1 + \frac{1}{3}z^{-1}}$

The block diagram is shown in Figure 2.14.

[Figure 2.14: Parallel structure of the two all-pass systems]

2.15 Impulse response of an all-pole system with lattice parameters

$k_1 = 0.2, \quad k_2 = 0.3, \quad k_3 = 0.5, \quad k_4 = 0.7$

The direct-form coefficients $\{a_k\}_{k=1}^{4}$ can be computed using the lat2dir function. The corresponding system function is

$H(z) = \frac{1}{1 + 0.76z^{-1} + 0.731z^{-2} + 0.787z^{-3} + 0.7z^{-4}}$

Using partial fraction expansion we obtain

$H(z) = \frac{0.3506 - 0.4334z^{-1}}{1 - 0.6654z^{-1} + 0.9109z^{-2}} + \frac{0.6492 + 0.3656z^{-1}}{1 + 0.4254z^{-1} + 0.7685z^{-2}}$

Hence, upon inverse transformation, we obtain

$h(n) = \left[0.3506\,(0.9544)^n \cos(0.3867\pi n) - 0.3540\,(0.9544)^n \sin(0.3867\pi n)\right]u(n) + \left[0.6494\,(0.8766)^n \cos(0.8021\pi n) - 0.1903\,(0.8766)^n \sin(0.8021\pi n)\right]u(n)$

The direct- and lattice-form structures are shown in Figure 2.15.

[Figure 2.15: Direct- and lattice-form structures of the all-pole system]
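The step-up (lattice-to-direct-form) recursion can be sketched in a few lines of MATLAB; this reproduces the denominator coefficients quoted above (the function name lat2dir_sketch is illustrative and is not the book's lat2dir routine):

function a = lat2dir_sketch(k)
% Step-up recursion: all-pole lattice coefficients k(1..P) -> direct-form
% coefficients a(1..P), where A(z) = 1 + a(1)z^-1 + ... + a(P)z^-P.
a = [];
for m = 1:length(k)
    a = [a + k(m)*a(end:-1:1), k(m)];   % a_m(i) = a_{m-1}(i) + k_m*a_{m-1}(m-i)
end

% Example usage:
% k = [0.2 0.3 0.5 0.7];
% a = lat2dir_sketch(k)                    % returns [0.76 0.731 0.787 0.7]
% h = filter(1, [1 a], [1 zeros(1,49)]);   % first 50 samples of the impulse response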
Chapter 3
Random Variables, Random Vectors, and Stochastic Processes

3.1 Exponential density: $f_x(x) = \frac{1}{a}e^{-x/a}u(x)$

(a) Density plot for $a = 1$: $f_x(x) = e^{-x}u(x)$ is shown in Figure 3.1.

[Figure 3.1: Exponential density function $f_x(x) = e^{-x}u(x)$]

(b) Moments:

i. Mean: $\mu_x = \int_{-\infty}^{\infty} x f_x(x)\,dx = a = 1$

ii. Variance: $\sigma_x^2 = \int_{-\infty}^{\infty} (x - \mu_x)^2 f_x(x)\,dx = a^2 = 1$

iii. Skewness: the third central moment is

$\gamma_x^{(3)} = \int_{-\infty}^{\infty} (x - \mu_x)^3 f_x(x)\,dx = \int_0^{\infty} (x - 1)^3 e^{-x}\,dx = 2$

Hence skewness $= \gamma_x^{(3)}/\sigma_x^3 = 2$ (the density leans toward the right).

iv. Kurtosis: the fourth central moment is

$\gamma_x^{(4)} = \int_{-\infty}^{\infty} (x - \mu_x)^4 f_x(x)\,dx = \int_0^{\infty} (x - 1)^4 e^{-x}\,dx = 9$

Hence kurtosis $= \gamma_x^{(4)}/\sigma_x^4 - 3 = 9 - 3 = 6$, which indicates a much sharper peak and heavier tails than the Gaussian shape.

(c) Characteristic function:

$\Phi_x(s) = E\{e^{s x(\zeta)}\} = \int_{-\infty}^{\infty} f_x(x)e^{sx}\,dx = \int_0^{\infty} \frac{1}{a}e^{-x(\frac{1}{a} - s)}\,dx = \frac{1/a}{1/a - s} = \frac{1}{1 - as}$

3.2 Rayleigh density: $f_x(x) = \frac{x}{\sigma^2}e^{-x^2/(2\sigma^2)}u(x)$

(a) Density function for $\sigma = 1$: $f_x(x) = x e^{-x^2/2}u(x)$ is shown in Figure 3.2.

[Figure 3.2: Rayleigh density function]

(b) Moments:

i. Mean: $\mu_x = \int_{-\infty}^{\infty} x f_x(x)\,dx = \int_0^{\infty} x^2 e^{-x^2/2}\,dx = \sqrt{\pi/2}$

ii. Variance: $\sigma_x^2 = \int_{-\infty}^{\infty}(x - \mu_x)^2 f_x(x)\,dx = \int_0^{\infty}(x - \sqrt{\pi/2})^2\, x e^{-x^2/2}\,dx = 2 - \frac{\pi}{2}$

iii. Skewness: the third central moment is

$\gamma_x^{(3)} = \int_0^{\infty}(x - \sqrt{\pi/2})^3\, x e^{-x^2/2}\,dx = \frac{\pi^{3/2}}{\sqrt{2}} - 3\sqrt{\frac{\pi}{2}} = \sqrt{\frac{\pi}{2}}\,(\pi - 3)$

Hence skewness $= \gamma_x^{(3)}/\sigma_x^3 = \frac{\sqrt{\pi/2}\,(\pi - 3)}{(2 - \pi/2)^{3/2}} = 0.6311$ (the density leans toward the right).

iv. Kurtosis: the fourth central moment is

$\gamma_x^{(4)} = \int_0^{\infty}(x - \sqrt{\pi/2})^4\, x e^{-x^2/2}\,dx = 8 - \frac{3\pi^2}{4}$

Hence kurtosis $= \gamma_x^{(4)}/\sigma_x^4 - 3 = \frac{8 - 3\pi^2/4}{(2 - \pi/2)^2} - 3 = 0.2451$, which indicates a shape that is slightly more peaked than, but very close to, the Gaussian shape.

(c) Characteristic function ($\sigma = 1$):

$\Phi_x(s) = E\{e^{s x(\zeta)}\} = \int_{-\infty}^{\infty} f_x(x)e^{sx}\,dx = \int_0^{\infty} x e^{-x^2}e^{sx}\,dx = \frac{1}{2} + \frac{\sqrt{\pi}}{4}\,s\,e^{s^2/4} + \frac{\sqrt{\pi}}{4}\,s\,e^{s^2/4}\,\mathrm{erf}\!\left(\frac{s}{2}\right)$

3.3 Relation between $r_x^{(m)}$ and $\gamma_x^{(m)}$:

$\gamma_x^{(m)} = E\{[x(\zeta) - \mu_x]^m\} = E\left\{\sum_{k=0}^{m}(-1)^k \binom{m}{k}\mu_x^k\, x(\zeta)^{m-k}\right\} = \sum_{k=0}^{m}(-1)^k \binom{m}{k}\mu_x^k\, E\{x(\zeta)^{m-k}\}$

$\gamma_x^{(m)} = \sum_{k=0}^{m}(-1)^k \binom{m}{k}\mu_x^k\, r_x^{(m-k)}$

Similarly,

$r_x^{(m)} = E\{[(x(\zeta) - \mu_x) + \mu_x]^m\} = E\left\{\sum_{k=0}^{m}\binom{m}{k}\mu_x^k\,[x(\zeta) - \mu_x]^{m-k}\right\} = \sum_{k=0}^{m}\binom{m}{k}\mu_x^k\, \gamma_x^{(m-k)}$

3.4 First four cumulants: the cumulants are given by

$\kappa_x^{(m)} \triangleq \left.\frac{d^m \bar{\Psi}_x(s)}{ds^m}\right|_{s=0}, \quad \text{where } \bar{\Psi}_x(s) = \ln \Phi_x(s) = \ln E\{e^{s x(\zeta)}\} \text{ and } \Phi_x(s) = \sum_{m=0}^{\infty}\frac{s^m}{m!}\,r_x^{(m)}$

Thus

$\kappa_x^{(1)} = \left.\frac{1}{\Phi_x(s)}\frac{d\Phi_x(s)}{ds}\right|_{s=0} = \left.\frac{\sum_{m=0}^{\infty}\frac{s^m}{m!}r_x^{(m+1)}}{\sum_{m=0}^{\infty}\frac{s^m}{m!}r_x^{(m)}}\right|_{s=0} = r_x^{(1)} = \mu_x = 0 \quad (\text{zero-mean } x(\zeta))$

Similarly,

$\kappa_x^{(2)} = \left.\frac{d}{ds}\frac{\sum_{m}\frac{s^m}{m!}r_x^{(m+1)}}{\sum_{m}\frac{s^m}{m!}r_x^{(m)}}\right|_{s=0} = \left.\frac{\left(\sum_{m}\frac{s^m}{m!}r_x^{(m+2)}\right)\left(\sum_{m}\frac{s^m}{m!}r_x^{(m)}\right) - \left(\sum_{m}\frac{s^m}{m!}r_x^{(m+1)}\right)^2}{\left(\sum_{m}\frac{s^m}{m!}r_x^{(m)}\right)^2}\right|_{s=0,\; r_x^{(1)}=0} = r_x^{(2)} = \sigma_x^2$

and, differentiating once more and evaluating at $s = 0$ with $r_x^{(1)} = 0$,

$\kappa_x^{(3)} = \gamma_x^{(3)}$

3.5 Random vector $x(\zeta) = [x_1(\zeta)\ \ x_2(\zeta)]^T$.

(a) Mean vector:

$\mu_y = A\mu_x = \begin{bmatrix} 1 & 3 \\ -1 & 2 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 7 \\ 3 \\ 8 \end{bmatrix}$

(b) Covariance matrix:

$\Gamma_y = A\Gamma_x A^H = \begin{bmatrix} 1 & 3 \\ -1 & 2 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 4 & 0.8 \\ 0.8 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 & 2 \\ 3 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 17.8 & 1.2 & 24.2 \\ 1.2 & 4.8 & -1.2 \\ 24.2 & -1.2 & 34.6 \end{bmatrix}$

(c) Cross-correlation matrix:

$R_x = \Gamma_x + \mu_x\mu_x^H = \begin{bmatrix} 5 & 2.8 \\ 2.8 & 5 \end{bmatrix}, \qquad R_{xy} = R_x A^H = \begin{bmatrix} 13.4 & 0.6 & 18.4 \\ 17.8 & 7.2 & 20.6 \end{bmatrix}$

3.6 Let $x(\zeta)$ be a Gaussian random vector with mean vector $\mu_x$ and covariance matrix $\Gamma_x$; its characteristic function is

$\Phi_x(\xi) = \exp\!\left(j\xi^T\mu_x - \tfrac{1}{2}\xi^T\Gamma_x\xi\right)$

Let $y(\zeta) = Ax(\zeta)$, where $A$ is a nonsingular matrix. Its characteristic function is given by

$\Phi_y(\xi) = E\{e^{j\xi^T y}\} = E\{e^{j\xi^T A x}\} = E\{e^{j(A^T\xi)^T x}\} = \exp\!\left[j(A^T\xi)^T\mu_x - \tfrac{1}{2}(A^T\xi)^T\Gamma_x(A^T\xi)\right] = \exp\!\left[j\xi^T(A\mu_x) - \tfrac{1}{2}\xi^T(A\Gamma_x A^T)\xi\right]$

which is the characteristic function of a Gaussian random vector with mean vector $A\mu_x$ and covariance matrix $A\Gamma_x A^T$.
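A Monte Carlo sketch (not part of the original solution) that illustrates this result numerically, reusing the matrices from Problem 3.5:

% Monte Carlo check: y = A*x of a Gaussian x(zeta) is Gaussian with
% mean A*mu_x and covariance A*Gamma_x*A' (values taken from Problem 3.5)
A     = [1 3; -1 2; 2 3];
mu_x  = [1; 2];
Gam_x = [4 0.8; 0.8 1];
N     = 1e6;                                      % number of realizations
x     = mu_x + chol(Gam_x, 'lower')*randn(2, N);  % Gaussian samples of x(zeta)
y     = A*x;
mean(y, 2)      % approaches [7; 3; 8]
cov(y')         % approaches A*Gam_x*A' = [17.8 1.2 24.2; 1.2 4.8 -1.2; 24.2 -1.2 34.6]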
3.7 Sum of independent exponential random variables $x_k(\zeta)$: $f_{x_k}(x) = e^{-x}u(x)$.

(a) $y_2(\zeta) = x_1(\zeta) + x_2(\zeta)$. Hence

$f_{y_2}(x) = f_{x_1}(x) * f_{x_2}(x) = \left[\int_0^x e^{-s}e^{-(x-s)}\,ds\right]u(x) = x e^{-x}u(x)$

The plot of this density is shown in Figure 3.7.

(b) $y_3(\zeta) = y_2(\zeta) + x_3(\zeta)$. Hence

$f_{y_3}(x) = f_{y_2}(x) * f_{x_3}(x) = \left[\int_0^x s\, e^{-s}e^{-(x-s)}\,ds\right]u(x) = \frac{1}{2}x^2 e^{-x}u(x)$

The plot of this density is shown in Figure 3.7.

(c) $y_4(\zeta) = y_3(\zeta) + x_4(\zeta)$. Hence

$f_{y_4}(x) = f_{y_3}(x) * f_{x_4}(x) = \frac{1}{2}\left[\int_0^x s^2 e^{-s}e^{-(x-s)}\,ds\right]u(x) = \frac{1}{6}x^3 e^{-x}u(x)$

The plot of this density is shown in Figure 3.7.

[Figure 3.7: Densities of sums of IID exponentially distributed random variables: y1 = x1, y2 = x1+x2, y3 = x1+x2+x3, y4 = x1+x2+x3+x4]

(d) As $k$ increases, the distribution of $y_k(\zeta)$ approaches a Gaussian distribution, with a mean equal to the sum of the means of the individual exponential distributions.

3.8 Test of WSS: $\mu_x(n) = \text{constant}$ and $r_x(n_1, n_2) = r_x(n_1 - n_2)$.

Test of m.s. ergodicity in the mean:

$E\left\{\frac{1}{2N+1}\sum_{n=-N}^{N} x(n,\zeta)\right\} = \mu_x \quad \text{and} \quad \mathrm{var}\left\{\frac{1}{2N+1}\sum_{n=-N}^{N} x(n,\zeta)\right\} \xrightarrow{N\to\infty} 0$

(a) $x(n,\zeta) = A(\zeta)$, where the random variable $A(\zeta)$ is uniformly distributed between 0 and 1. Now

$\mu_x(n) = E\{x(n,\zeta)\} = E\{A(\zeta)\} = \tfrac{1}{2}$

and

$r_x(n_1, n_2) = E\{x(n_1,\zeta)x(n_2,\zeta)\} = E\{A^2(\zeta)\} = \tfrac{1}{3}$

Hence the process is WSS. Consider

$E\left\{\frac{1}{2N+1}\sum_{n=-N}^{N} x(n,\zeta)\right\} = \frac{1}{2N+1}\sum_{n=-N}^{N} E\{x(n,\zeta)\} = E\{A(\zeta)\} = \tfrac{1}{2} = \mu_x$

However, for any given realization the time average remains at the observed value $x(n, \zeta_0) \triangleq a_0$ and does not converge to the true mean of 1/2. Hence the process is not m.s. ergodic in the mean.

(b) $x(n,\zeta) = A(\zeta)\cos\omega_0 n$, where the random variable $A(\zeta)$ is a unit Gaussian random variable. Then

$\mu_x(n) = E\{A(\zeta)\cos\omega_0 n\} = E\{A(\zeta)\}\cos\omega_0 n = 0$

and

$r_x(n_1, n_2) = E\{x(n_1,\zeta)x(n_2,\zeta)\} = E\{A^2(\zeta)\}\cos\omega_0 n_1 \cos\omega_0 n_2 = \cos\omega_0 n_1 \cos\omega_0 n_2$

which is not a function of $n_1 - n_2$ alone. Thus the process is not WSS, and m.s. ergodicity therefore cannot be assessed, because the process must be WSS.

(c) $x(n,\zeta)$ is a Bernoulli process with $\Pr[x(n,\zeta) = 1] = p$ and $\Pr[x(n,\zeta) = -1] = 1 - p$. Then

$\mu_x(n) = E\{x(n,\zeta)\} = p + (-1)(1 - p) = 2p - 1$

and

$r_x(n_1, n_2) = \begin{cases} (2p-1)^2, & n_1 \ne n_2 \\ 1, & n_1 = n_2 \end{cases} \quad (5)$

Hence it is WSS. Consider the random variable $\bar{x}_N(\zeta) = \frac{1}{2N+1}\sum_{n=-N}^{N} x(n,\zeta)$:

$E\{\bar{x}_N(\zeta)\} = \frac{1}{2N+1}\sum_{n=-N}^{N} E\{x(n,\zeta)\} = 2p - 1 = \mu_x(n)$

Now consider the second moment of the random variable $\bar{x}_N(\zeta)$:

$E\{\bar{x}_N^2(\zeta)\} = \frac{1}{(2N+1)^2} E\left\{\sum_{n_1=-N}^{N} x(n_1,\zeta)\sum_{n_2=-N}^{N} x(n_2,\zeta)\right\} = \frac{1}{(2N+1)^2}\sum_{n_1=-N}^{N}\sum_{n_2=-N}^{N} r_x(n_1, n_2)$

Let $M = 2N + 1$; then, using (5),

$E\{\bar{x}_N^2(\zeta)\} = \frac{1}{M^2}\left[M + (M^2 - M)(2p - 1)^2\right]$