# Notes for Communication: Formulas and Forms for Data Communication Systems and Computer Networks, University of Westminster


## Memoryless Modulation

**PAM:** $s_m(t) = \mathrm{Re}[A_m g(t) e^{j2\pi f_c t}] = A_m g(t)\cos 2\pi f_c t$, $m = 1, 2, \dots, M$, where $A_m = (2m-1-M)d$. Energy: $E_m = \frac{1}{2}A_m^2 E_g$.

Binary PAM: $P_b = Q\left(\sqrt{\dfrac{2E_b}{N_0}}\right) = Q\left(\sqrt{\dfrac{d^2}{2N_0}}\right)$

**M-ary PAM:** $P_{\mathrm{error}} = \dfrac{2(M-1)}{M}\, Q\left(\sqrt{\dfrac{d^2 E_g}{N_0}}\right)$
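The PAM error formulas above can be checked numerically. A minimal sketch (the function names are mine; $Q(x)$ is implemented via the standard identity $Q(x) = \frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$):

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_symbol_error(M, d2Eg_over_N0):
    # M-ary PAM: P_error = 2(M-1)/M * Q(sqrt(d^2 Eg / N0))
    return 2 * (M - 1) / M * Q(math.sqrt(d2Eg_over_N0))
```

At zero SNR binary PAM gives $Q(0) = 0.5$, and the error probability falls monotonically as the argument of $Q$ grows.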

**PSK:** $s_m(t) = \mathrm{Re}\left[g(t)\, e^{j2\pi(m-1)/M}\, e^{j2\pi f_c t}\right] = g(t)\cos\left[2\pi f_c t + \dfrac{2\pi}{M}(m-1)\right]$

$= g(t)\cos\dfrac{2\pi}{M}(m-1)\cos 2\pi f_c t - g(t)\sin\dfrac{2\pi}{M}(m-1)\sin 2\pi f_c t$

Constant envelope. Energy: $E_m = \frac{1}{2}E_g$.

**Orthogonal Signaling (FSK):** $s_m(t) = A\cos(2\pi[f_0 + m\,\Delta f]t) = \mathrm{Re}\left[A\, e^{j2\pi f_0 t}\, e^{j2\pi m \Delta f t}\right]$

Minimum spacing: $\Delta f_{\min} = \frac{1}{2T}$ (bandpass), $\Delta f_{\min} = \frac{1}{T}$ (lowpass). Distance: $d = \sqrt{2E}$, or $d = 2\sqrt{E}$ for biorthogonal.

Dimensionality theorem: $N \approx 2WT$, where $W$ = bandwidth and $T$ = duration.

Union bounds:

- For $\frac{E_b}{N_0} > 4\ln 2$: $P_M < 2^k e^{-kE_b/2N_0}$, given $\frac{E_b}{N_0} > 2\ln 2 = 1.39$ (1.42 dB)
- For $\frac{E_b}{N_0} < 4\ln 2$: $P_M < 2e^{-k\left(\sqrt{E_b/N_0} - \sqrt{\ln 2}\right)^2}$, given $\frac{E_b}{N_0} > \ln 2 = 0.693$ ($-1.6$ dB)

$P_e(M) \le (M-1)\, Q\left(\sqrt{\dfrac{d_{\min}^2}{2N_0}}\right)$

**Simplex Signaling:** if $\bar{s}(t) = \frac{1}{M}\sum_{i=1}^{M} s_i(t)$, then $s'_m(t) = s_m(t) - \bar{s}(t)$. Energy: $E' = \frac{M-1}{M}E$.
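The first union bound above shows why orthogonal signaling trades bandwidth for reliability: when $E_b/N_0 > 2\ln 2$ the bound shrinks as $k$ grows. A quick numeric sketch (function name is mine):

```python
import math

def union_bound(k, EbN0):
    # Orthogonal signaling union bound: P_M < 2^k * exp(-k * Eb/(2 N0)),
    # a decreasing function of k whenever Eb/N0 > 2 ln 2
    return 2**k * math.exp(-k * EbN0 / 2)
```

For example, at $E_b/N_0 = 2$ (above the $2\ln 2 \approx 1.39$ threshold) the bound tightens as more bits per symbol are used.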

## Modulation with Memory

**NRZI:** $b_k = a_k \oplus b_{k-1}$. Example: input $10110001$ with $b_0 = 0$ encodes to $11011110$.
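The NRZI recursion can be sketched directly (the helper name is mine):

```python
def nrzi_encode(bits, b0=0):
    # NRZI differential encoding: b_k = a_k XOR b_{k-1}
    out = []
    prev = b0
    for a in bits:
        prev = a ^ prev
        out.append(prev)
    return out

# Reproduces the worked example from the notes:
# nrzi_encode([1,0,1,1,0,0,0,1]) -> [1,1,0,1,1,1,1,0]
```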

**CPFSK:** $s(t) = \cos\left[2\pi f_0 t + 4\pi T f_d \int_{-\infty}^{t} v(\tau)\,d\tau\right]$, where $v(t) = \sum_n a_n g(t - nT_s)$.

At any instant, $f_i(t) = f_0 + f_d a_n$. Modulation index: $h = 2Tf_d$. Accumulated phase: $\theta_n = \pi h \sum_{k=-\infty}^{n-1} a_k$, and

$\theta(t; \mathbf{a}) = 4\pi T f_d \int_{-\infty}^{t} v(\tau)\,d\tau = \theta_n + 2\pi h\, a_n q(t - nT)$

**MSK:** $s(t) = \cos\left(2\pi f_0 t + \dfrac{2\pi t}{4T} a_n + \theta_n - \dfrac{n\pi}{2} a_n\right)$; this is CPFSK with $h = \frac{1}{2}$:

$\theta(t; \mathbf{a}) = \dfrac{\pi}{2}\sum_{k=-\infty}^{n-1} a_k + \pi a_n q(t - nT), \qquad \Delta f = \dfrac{1}{2T}$

## Power Spectral Density

$S_x(f) = \dfrac{1}{T} S_I(f)\, |G(f)|^2, \qquad R_I(m) = \dfrac{1}{2}E[I_{n+m} I_n^*]$

**Input/output signals.** For a random signal, let $x$ = input and $y$ = output:

$S_Y(f) = S_X(f)|H(f)|^2, \qquad S_{XY}(f) = S_X(f)H^*(f), \qquad R_Y(\tau) = R_X(\tau) * h(\tau) * h^*(-\tau)$

## Matched Filter Response

Let $s(t)$ be the signal. The matched filter is $h(t) = s(T - t)$, and its response is

$y(t) = \int_0^t s(\tau)\, s(T - t + \tau)\, d\tau$

With noise at the input, the filter output is

$y(t) = \int_0^t r(\tau)h(t-\tau)\,d\tau = \int_0^t s(\tau)h(t-\tau)\,d\tau + \int_0^t n(\tau)h(t-\tau)\,d\tau = y_s(T) + y_n(T)$ at $t = T$.

Variance of the noise: $E[y_n^2(T)] = \dfrac{N_0}{2}\int_0^T h^2(T-t)\,dt$, giving $\mathrm{SNR} = \dfrac{y_s^2(T)}{E[y_n^2(T)]} = \dfrac{2E}{N_0}$

Frequency domain: $H(f) = \int_0^T s(T-t)e^{-j2\pi ft}\,dt = S^*(f)e^{-j2\pi fT}$
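In discrete time the matched filter is just the time-reversed signal, and its output peaks at the end of the symbol with value equal to the signal energy. A small NumPy sketch (the test signal is my own choice):

```python
import numpy as np

# Discrete matched filter: h[n] = s[N-1-n]; for a noise-free input the
# output peaks at n = N-1 (i.e. t = T) with value equal to the energy E.
s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
h = s[::-1]                  # time-reversed copy of the signal
y = np.convolve(s, h)        # filter the received (noise-free) signal
peak = y[len(s) - 1]         # sample the output at t = T
```

Sampling at any other instant gives a smaller value, which is the sense in which the matched filter maximizes SNR at $t = T$.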

## MAP Detection, Maximum Likelihood Detection

Let $r_1 = s + n_1$ and $r_2 = n_1 + n_2$, with $s \in \{+\sqrt{E}, -\sqrt{E}\}$. We want the decision scheme that decides $s = +\sqrt{E}$ when

$P(s = +\sqrt{E} \mid r_1, r_2) > P(s = -\sqrt{E} \mid r_1, r_2)$

If we assume both signals are equally likely, we can use maximum likelihood detection:

$P(r_1, r_2 \mid s = +\sqrt{E}) > P(r_1, r_2 \mid s = -\sqrt{E})$

$P(r_1 = \sqrt{E} + n_1,\; r_2 = n_1 + n_2 \mid s = +\sqrt{E}) > P(r_1 = -\sqrt{E} + n_1,\; r_2 = n_1 + n_2 \mid s = -\sqrt{E})$

Rearrange the algebra to isolate the two noise terms; since they are independent, the joint probability splits into a product:

$P(n_1 = r_1 - \sqrt{E})\, P(n_2 = r_2 - r_1 + \sqrt{E}) > P(n_1 = r_1 + \sqrt{E})\, P(n_2 = r_2 - r_1 - \sqrt{E})$

Each factor is now an independent Gaussian density, so the two sides can be compared directly.
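Comparing products of Gaussian densities is equivalent to comparing sums of squared noise terms (smaller total squared noise means larger likelihood). A sketch of the resulting decision rule, assuming unit-variance noise on both $n_1$ and $n_2$ (the function name is mine):

```python
import math

def ml_decide(r1, r2, E):
    # Isolate the independent noise terms under each hypothesis:
    #   s = +sqrt(E): n1 = r1 - sqrt(E),  n2 = r2 - r1 + sqrt(E)
    #   s = -sqrt(E): n1 = r1 + sqrt(E),  n2 = r2 - r1 - sqrt(E)
    # With unit-variance Gaussians, the larger likelihood corresponds to
    # the smaller sum of squared noise values.
    rootE = math.sqrt(E)
    m_plus = (r1 - rootE) ** 2 + (r2 - r1 + rootE) ** 2
    m_minus = (r1 + rootE) ** 2 + (r2 - r1 - rootE) ** 2
    return +1 if m_plus < m_minus else -1
```

With zero noise, $r_1 = \sqrt{E}$ and $r_2 = 0$ decide $+1$, and the mirrored input decides $-1$.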

## Information Theory

**Individual information.** Mutual information: $I(x_i; y_j) = \log\dfrac{P(x_i \mid y_j)}{P(x_i)}$. Self-information: $I(x_i) = -\log P(x_i)$. Conditional information: $I(x_i \mid y_j) = -\log P(x_i \mid y_j)$. Hence $I(x_i; y_j) = I(x_i) - I(x_i \mid y_j)$.

**Average information.**

$I(X;Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\, I(x_i; y_j), \qquad H(X) = \sum_{i=1}^{n} P(x_i)\, I(x_i)$

$H(X \mid Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\, I(x_i \mid y_j), \qquad I(X;Y) = H(X) - H(X \mid Y)$

Chain rule: $H(X_1, X_2, X_3, \dots) = H(X_1) + H(X_2 \mid X_1) + \dots$
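The average-information definitions translate directly into code. A sketch in bits (base-2 logs; function names are mine, and the mutual-information form uses the equivalent identity $\log\frac{P(x|y)}{P(x)} = \log\frac{P(x,y)}{P(x)P(y)}$):

```python
import math

def entropy(p):
    # H(X) = -sum_i p(x_i) log2 p(x_i)
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint, px, py):
    # I(X;Y) = sum_{i,j} P(x_i,y_j) log2( P(x_i,y_j) / (P(x_i) P(y_j)) )
    total = 0.0
    for i, row in enumerate(joint):
        for j, pij in enumerate(row):
            if pij > 0:
                total += pij * math.log2(pij / (px[i] * py[j]))
    return total
```

A fair coin has $H(X) = 1$ bit; independent $X, Y$ give $I(X;Y) = 0$; a noiseless binary channel gives $I(X;Y) = 1$ bit.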

**Rate distortion function.**

$R_g(D) = \dfrac{1}{2}\log_2\dfrac{\sigma_x^2}{D} \quad (0 < D < \sigma_x^2), \qquad R^*(D) \le R(D) \le R_g(D)$

where $R^*(D) = H(X) - \frac{1}{2}\log_2 2\pi e D$ (pg 109).

## Channel Capacity

$C = \max_{P(x_j)} I(X;Y) = \max_{P(x_j)} \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_j, y_i)\log\dfrac{P(y_i \mid x_j)}{P(y_i)}$

With power and bandwidth constraints, Shannon's theorem gives

$C = W\log_2\left(1 + \dfrac{P_{\mathrm{avg}}}{W N_0}\right)$

Since $P_{\mathrm{avg}} = C E_b$, this can also be written as

$r = \dfrac{C}{W} = \log_2\left(1 + \dfrac{C}{W}\dfrac{E_b}{N_0}\right)$

where $r$ is measured in bits/second/Hz. Solving for SNR gives

$\dfrac{E_b}{N_0} = \dfrac{2^{C/W} - 1}{C/W} = \dfrac{2^r - 1}{r}, \qquad \lim_{C/W \to 0}\dfrac{2^{C/W} - 1}{C/W} = \ln 2 = -1.6\ \mathrm{dB}$

This means that as $E_b/N_0$ approaches $\ln 2$ ($-1.6$ dB), the bandwidth required approaches infinity.
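The capacity formula and the $E_b/N_0$ limit above can be sketched numerically (function names are mine):

```python
import math

def shannon_capacity(W, Pavg, N0):
    # C = W log2(1 + Pavg / (W N0)), in bits/second
    return W * math.log2(1 + Pavg / (W * N0))

def required_ebn0(r):
    # Minimum Eb/N0 = (2^r - 1)/r for spectral efficiency r = C/W
    return (2 ** r - 1) / r
```

As $r \to 0$, `required_ebn0(r)` approaches $\ln 2 \approx 0.693$, the Shannon limit.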


## R0 Theorem, Error Bounds

Average error probability for random coding: $\bar{P}_{\mathrm{error}} \le 2^{-n(R_0 - R)}$

$P(E \mid X_m) \le \sum_{\substack{m'=1 \\ m' \ne m}}^{M} \int p^{\lambda}(y \mid x_{m'})\, p^{1-\lambda}(y \mid x_m)\, dy$

Special case $\lambda = \frac{1}{2}$:

$ED_\lambda(1 \to 2) = \int\left(\sum_x p(x)\sqrt{P(y \mid x)}\right)^2 dy$

Special case $\lambda = \frac{1}{2}$ with a uniform input of $Q$ signals, $p(x) = \frac{1}{Q}$:

$ED_\lambda(1 \to 2) = \dfrac{1}{Q^2}\int\left(\sum_{i=1}^{Q}\sqrt{p(y \mid x_i)}\right)^2 dy$

$R_0 = 2\log_2 Q - \log_2 \int\left(\sum_{i=1}^{Q}\sqrt{p(y \mid x_i)}\right)^2 dy$

$R_0$ for the Binary Symmetric Channel: $R_0 = \log_2\dfrac{2}{1 + 2\sqrt{p(1-p)}}$

$R_0$ for the AWGN BSC: $R_0 = \log_2\dfrac{2}{1 + e^{-E_c/N_0}}$

## Linear Block Code

You start off with a generator matrix where $X_m G = \text{code}$; $X_m$ is the message bits and $G$ is the generator matrix. In reduced (systematic) form, $G = [I_k \mid P]$. The parity-check matrix $H$ is used to check for errors: $H = [-P^T \mid I_{n-k}]$.

**Hard decision procedure**

If $Y = C_{\mathrm{code}} + e_{\mathrm{error}}$, then $YH^T = (C_m + e)H^T = eH^T = S$, where $S$ is the syndrome bits. There are $2^{\#\text{syndrome bits}}$ syndromes, and we need to find the error pattern that corresponds to each one. To find the error patterns, first use up all the fewest-bit patterns (1-bit errors), then move to combinations of 2, 3, ... bits. With this information we can construct the standard array and the syndrome table. Multiplying $Y$ by $H^T$ gives the syndrome; look it up in the syndrome table for the most likely error, then subtract that error from the code.
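A sketch of this procedure for single-bit errors with a systematic (7,4) Hamming code (the particular $P$ matrix and function names are my own choices; over GF(2), subtraction is the same as addition):

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I4 | P], H = [P^T | I3] over GF(2)
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(msg):
    return (np.array(msg) @ G) % 2

def decode(y):
    # Syndrome S = Y H^T; match it against the 1-bit error patterns
    s = (y @ H.T) % 2
    if s.any():
        for i in range(7):
            e = np.zeros(7, dtype=int)
            e[i] = 1
            if np.array_equal((e @ H.T) % 2, s):
                y = (y + e) % 2   # "subtract" the error (mod-2 add)
                break
    return y[:4]                   # systematic code: first k bits = message
```

Flipping any single codeword bit produces a nonzero syndrome that identifies exactly that position, so the message is recovered.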

**Rate relationship.** Coding gain $= R_c\, d_{\min}$; $d_{\min}$ is the smallest number of columns of the $H$ matrix that are linearly dependent. If $k$ = input bits and $n$ = output bits: $R_c = \dfrac{k}{n}$, $E_b k = E_c n$, $E_b R_c = E_c$.

**Probability of error**, where $d_{\min}^H$ is the minimum Hamming distance:

Soft decision: $P_e \le (M-1)\, Q\left(\sqrt{2R_c\, d_{\min}^H \dfrac{E_b}{N_0}}\right)$

Hard decision: $P_e \le (M-1)\left[4p(1-p)\right]^{d_{\min}^H/2}$

**Hamming code** (a type of systematic code): $(n, k) = (2^m - 1,\; 2^m - 1 - m)$, where the columns of $H$ (the parity-check matrix) are the complete set of nonzero binary $m$-tuples.

## Convolutional Code

**Transfer function**

- The transfer function is defined as $T(D) = \dfrac{X_e}{X_a}$; solving the state equations yields a series expansion.
- If you start from state 000, $a_d$ tells you the number of paths with distance $d$. So in the example below there is 1 path with distance 6, 2 paths with distance 8, and so on.


$X_c = D^3 X_a + D X_b, \quad X_b = D X_c + D X_d, \quad X_d = D^2 X_c + D^2 X_d, \quad X_e = D^2 X_b$

$T(D) = \dfrac{D^6}{1 - 2D^2} = D^6 + 2D^8 + 4D^{10} + 8D^{12} + \dots = \sum_d a_d D^d$
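The series coefficients of this particular $T(D)$ follow a closed form: $\frac{1}{1-2D^2} = \sum_k (2D^2)^k$, so the coefficient of $D^{6+2k}$ is $2^k$. A sketch (function name is mine):

```python
def path_count(d):
    # a_d for T(D) = D^6 / (1 - 2 D^2): the number of paths with
    # Hamming distance d from the all-zero path
    if d < 6 or d % 2 != 0:
        return 0
    return 2 ** ((d - 6) // 2)
```

This reproduces the expansion above: 1 path at distance 6, then 2, 4, 8, ... at distances 8, 10, 12, ...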

**Probability of error.** If $T(D) = \sum_d a_d D^d$, then

$P_e \le \sum_d a_d\, \dfrac{1}{2} e^{-R_c d\, E_b/N_0} = \dfrac{1}{2}\, T(D)\Big|_{D = e^{-R_c E_b/N_0}}$

**Trellis Coded Modulation**

1. Draw the constellation.
2. Label the constellation points sequentially with numbers.
3. Set-partition $n_1$ times.
4. Use the size-$v$ convolutional code to decide the number of states needed: $\# = 2^{k(l-1)}$, where $k$ is the number of inputs and $l$ is the number of boxes (memory stages) in the convolutional code.
5. Use the set partition to assign signals to the trellis (branches of the trellis must have the same parents).
6. Convert the branch numbers into binary bits.
7. Notice that branches from the same state have equal bits; keep the equal bits and merge the branches into single branches.
8. Now we have the state-transition trellis for the convolutional code.

Figure 1: Convolutional code

## Bandlimited Channels

$s(t) = \sum_n I_n g(t - nT) \;\to\; c(t): \qquad r(t) = \sum_n I_n h(t - nT) + n(t)$, where $h(t) = g(t) * c(t)$, and

$y_k = I_k + \sum_{n \ne k} I_n x_{k-n} + v_k$

To avoid ISI we need to satisfy $\sum_{m=-\infty}^{\infty} X\left(f + \dfrac{m}{T}\right) = T$

**SNR:** $\dfrac{E_b}{N_0} = \dfrac{P_b T}{N_0} = \dfrac{P_b(1+\beta)}{N_0 W}$

**Raised cosine function:**

$X_{rc}(f) = T, \qquad 0 \le |f| \le \dfrac{1-\beta}{2T}$

$X_{rc}(f) = \dfrac{T}{2}\left\{1 + \cos\left[\dfrac{\pi T}{\beta}\left(|f| - \dfrac{1-\beta}{2T}\right)\right]\right\}, \qquad \dfrac{1-\beta}{2T} \le |f| \le \dfrac{1+\beta}{2T}$

$X_{rc}(f) = 0, \qquad |f| \ge \dfrac{1+\beta}{2T}$

**Probability of error (M-ary QAM):**

$P_M = 1 - \left(1 - P_{\sqrt{M}}\right)^2, \qquad P_{\sqrt{M}} = 2\left(1 - \dfrac{1}{\sqrt{M}}\right) Q\left(\sqrt{\dfrac{3E_{\mathrm{avg}}}{(M-1)N_0}}\right)$

The key equation to note is $\dfrac{1+\beta}{2T} = \dfrac{BW}{2}$, where $\dfrac{1}{T}$ is the symbol rate ($R_S$) measured in symbols/second. If you need to transmit 9600 bits/s with 8PSK (3 bits/symbol), you would have $\frac{9600\ \text{bits/s}}{3\ \text{bits/symbol}} = 3200$ symbols/s. In general we want a $\beta$ value greater than 0.5.

Examples:

1. If we have a 4000 Hz voice bandpass channel, what is the bit rate if we use BPSK? (Let $\beta = 1/2$.)

   $\dfrac{1}{2T}(1 + 1/2) = 2000$, from which $\dfrac{1}{T} \approx 2666.67$ symbols/s. Since this is BPSK with 1 bit per symbol, we also get about 2667 bits/s.

2. If we use 8QAM? We know the symbol rate is about 2666.67 symbols/s; with 3 bits/symbol we have $2666.67 \times 3 \approx 8000$ bits/s.

3. If we use 4FSK? We divide the total bandwidth by 4: $\dfrac{4000\ \text{Hz}}{4} = \dfrac{1}{T} = 1000$ symbols/s; multiplying by 2 bits/symbol gives the bit rate, 2000 bits/s.
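The bandwidth arithmetic in these examples follows one rule: for a bandpass raised cosine, $BW = (1+\beta)/T$. A small calculator sketch (function names are mine):

```python
def symbol_rate(bandwidth_hz, beta):
    # Bandpass raised cosine occupies BW = (1 + beta)/T,
    # so the symbol rate is Rs = 1/T = BW / (1 + beta)
    return bandwidth_hz / (1 + beta)

def bit_rate(bandwidth_hz, beta, bits_per_symbol):
    return symbol_rate(bandwidth_hz, beta) * bits_per_symbol
```

For the 4000 Hz channel with $\beta = 1/2$, this gives about 2666.67 symbols/s, and about 8000 bits/s for 8QAM at 3 bits/symbol.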
