Memoryless Modulation

PAM
$s_m(t) = \mathrm{Re}\!\left[A_m g(t) e^{j2\pi f_c t}\right] = A_m g(t)\cos 2\pi f_c t$, $m = 1, 2, \dots, M$
$A_m = (2m - 1 - M)d$, energy $E_m = \frac{1}{2}A_m^2 E_g$
$P_{error} = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right)$, $P_b = Q\!\left(\sqrt{\frac{d^2}{2N_0}}\right)$

M-ary PAM
$P_{error} = \frac{2(M-1)}{M}\,Q\!\left(\sqrt{\frac{d^2 E_g}{N_0}}\right)$
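As a quick sanity check on these Q-function expressions, here is a minimal Python sketch; the function names `qfunc` and `pam_symbol_error` are my own, and Q is computed from the complementary error function:

```python
import math

def qfunc(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_symbol_error(M, d2Eg_over_N0):
    # M-ary PAM: P_error = 2(M-1)/M * Q( sqrt(d^2 Eg / N0) )
    return 2.0 * (M - 1) / M * qfunc(math.sqrt(d2Eg_over_N0))
```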

PSK
$s_m(t) = \mathrm{Re}\!\left[g(t)\,e^{j2\pi(m-1)/M}\,e^{j2\pi f_c t}\right] = g(t)\cos\!\left[2\pi f_c t + \frac{2\pi}{M}(m-1)\right]$
$= g(t)\cos\frac{2\pi}{M}(m-1)\cos 2\pi f_c t - g(t)\sin\frac{2\pi}{M}(m-1)\sin 2\pi f_c t$
constant envelope, energy $E_m = \frac{1}{2}E_g$

Orthogonal Signaling (FSK)
$s_m(t) = A\cos(2\pi[f_0 + m\Delta f]t) = \mathrm{Re}\!\left[A e^{j2\pi f_0 t} e^{j2\pi m\Delta f t}\right]$
$\Delta f_{min}^{bandpass} = \frac{1}{2T}$, $\Delta f_{min}^{lowpass} = \frac{1}{T}$, distance $= \sqrt{2E}$, or $2\sqrt{E}$ for biorthogonal

Dimensionality theorem
$N \approx 2WT$ where W = bandwidth and T = duration

Union bound for SNR $> 4\ln 2$: $P_M < 2^k e^{-kE_b/2N_0}$, given $\frac{E_b}{N_0} > 2\ln 2 = 1.39 = 1.42\,\mathrm{dB}$
Union bound for SNR $< 4\ln 2$: $P_M < 2e^{-k\left(\sqrt{E_b/N_0} - \sqrt{\ln 2}\right)^2}$, given $\frac{E_b}{N_0} > \ln 2 = 0.693 = -1.6\,\mathrm{dB}$
$P_e(M) \le (M-1)\,Q\!\left(\sqrt{\frac{d_{min}^2}{2N_0}}\right)$

Simplex Signaling
If $\bar{s}(t) = \frac{1}{M}\sum_{i=1}^{M} s_i(t)$, then $s'_m(t) = s_m(t) - \bar{s}(t)$ with energy $E' = \frac{M-1}{M}E$

Memory Modulation

NRZI
$b_k = a_k \oplus b_{k-1}$; with initial state $b_{-1} = 0$: $10110001 \to 11011110$
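A one-line encoder makes the recursion concrete; this sketch (function name mine, initial state $b_{-1}=0$ assumed) reproduces the example above:

```python
def nrzi_encode(bits, prev=0):
    # NRZI: b_k = a_k XOR b_{k-1}; 'prev' is the initial state b_{-1}
    out = []
    for a in bits:
        prev = a ^ prev
        out.append(prev)
    return out

# reproduces the example: 10110001 -> 11011110 (with b_{-1} = 0)
```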

CPFSK
$s(t) = \cos\!\left[2\pi f_0 t + 4\pi T f_d \int_{-\infty}^{t} v(\tau)\,d\tau\right]$, $v(t) = \sum_n a_n g(t - nT_s)$
At any instant $f_i(t) = f_0 + f_d a_n$, with $h = 2Tf_d$ and $\theta_n = \pi h \sum_{k=-\infty}^{n-1} a_k$
$\theta(t; a) = 4\pi T f_d \int_{-\infty}^{t} v(\tau)\,d\tau = \theta_n + 2\pi h\,a_n q(t - nT)$

MSK
$s(t) = \cos\!\left(2\pi f_0 t + \frac{2\pi t}{4T}a_n + \theta_n - \frac{n\pi}{2}a_n\right)$; this is CPFSK with $h = \frac{1}{2}$
$\theta(t; a) = \frac{\pi}{2}\sum_{k=-\infty}^{n-1} a_k + \pi a_n q(t - nT)$, $\Delta f = \frac{1}{2T}$

Power spectral density

$S_x(f) = \frac{1}{T}S_I(f)|G(f)|^2$, $R_I(m) = \frac{1}{2}E[I_{n+m}I_n^*]$

Input/output signals

For a random signal, let x = input and y = output:
$S_Y(f) = S_X(f)|H(f)|^2$, $S_{XY}(f) = S_X(f)H^*(f)$, $R_Y(\tau) = R_X(\tau) * h(\tau) * h^*(-\tau)$

Matched filter response

Let s be the signal; the matched filter is $h(t) = s(T - t)$ and the response is
$y(t) = \int_0^t s(\tau)\,s(T - t + \tau)\,d\tau$

Noise through the matched filter
$y(t) = \int_0^t r(\tau)h(t-\tau)\,d\tau = \int_0^t s(\tau)h(t-\tau)\,d\tau + \int_0^t n(\tau)h(t-\tau)\,d\tau = y_s(T) + y_n(T)$
Variance of the noise: $E[y_n^2(T)] = \frac{N_0}{2}\int_0^T h^2(T-t)\,dt$, so $\mathrm{SNR} = \frac{y_s^2(T)}{E[y_n^2(T)]} = \frac{2E}{N_0}$
Frequency domain: $H(f) = \int_0^T s(T-t)e^{-j2\pi ft}\,dt = S^*(f)e^{-j2\pi fT}$
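A discrete-time analogue of the matched-filter response can be sketched as follows (pure Python, names mine); the peak of the output at sample $N-1$ (i.e. at $t = T$) equals the signal energy $E$, which is what yields the $2E/N_0$ SNR:

```python
def matched_filter_output(s):
    # Discrete analogue of y(t) = integral of s(tau) s(T - t + tau) dtau:
    # the matched filter is h[n] = s[N-1-n]; convolving it with s
    # gives the autocorrelation of s, which peaks at sample N-1.
    N = len(s)
    h = s[::-1]
    return [sum(s[k] * h[n - k]
                for k in range(max(0, n - N + 1), min(n, N - 1) + 1))
            for n in range(2 * N - 1)]

s = [1.0, 2.0, -1.0, 0.5]
y = matched_filter_output(s)
E = sum(v * v for v in s)
# the output at sample N-1 equals the signal energy E
```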

MAP detection, maximum likelihood detection

Let $r_1 = s + n_1$ and $r_2 = n_1 + n_2$. We want the best decision
scheme, such that $P(s = \sqrt{E} \mid r_1, r_2) > P(s = -\sqrt{E} \mid r_1, r_2)$. If we assume
both signals are equally likely, this reduces to maximum likelihood detection:
$P(r_1, r_2 \mid s = \sqrt{E}) > P(r_1, r_2 \mid s = -\sqrt{E})$
$P(r_1 = s + n_1, r_2 = n_1 + n_2 \mid s = \sqrt{E}) > P(r_1 = s + n_1, r_2 = n_1 + n_2 \mid s = -\sqrt{E})$
$P(r_1 = \sqrt{E} + n_1, r_2 = n_1 + n_2 \mid s = \sqrt{E}) > P(r_1 = -\sqrt{E} + n_1, r_2 = n_1 + n_2 \mid s = -\sqrt{E})$
You want to rearrange the algebra to isolate the two noise terms, since they are
independent and the joint probability can be split:
$P(n_1 = r_1 - \sqrt{E})\,P(n_2 = r_2 - n_1) > P(n_1 = r_1 + \sqrt{E})\,P(n_2 = r_2 - n_1)$
$P(n_1 = r_1 - \sqrt{E})\,P(n_2 = r_2 - r_1 + \sqrt{E}) > P(n_1 = r_1 + \sqrt{E})\,P(n_2 = r_2 - r_1 - \sqrt{E})$
From this you see that this becomes a comparison of independent Gaussian densities,
which we can evaluate directly.
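The final comparison can be coded directly: evaluate the two products of Gaussian densities and pick the larger. A minimal sketch (function names mine; unit noise variances assumed purely for illustration):

```python
import math

def gauss_pdf(x, var):
    # zero-mean Gaussian density
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def ml_decide(r1, r2, E, var1=1.0, var2=1.0):
    # Compare P(n1 = r1 - sqrt(E)) P(n2 = r2 - r1 + sqrt(E))
    # against P(n1 = r1 + sqrt(E)) P(n2 = r2 - r1 - sqrt(E))
    s = math.sqrt(E)
    like_plus = gauss_pdf(r1 - s, var1) * gauss_pdf(r2 - r1 + s, var2)
    like_minus = gauss_pdf(r1 + s, var1) * gauss_pdf(r2 - r1 - s, var2)
    return +1 if like_plus > like_minus else -1
```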

Information Theory

Individual information
mutual information: $I(x_i; y_j) = \log\frac{P(x_i \mid y_j)}{P(x_i)}$
self information: $I(x_i) = -\log P(x_i)$
conditional information: $I(x_i \mid y_j) = -\log P(x_i \mid y_j)$
$I(x_i; y_j) = I(x_i) - I(x_i \mid y_j)$

Average information
$I(X;Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\,I(x_i; y_j)$, $H(X) = \sum_{i=1}^{n} P(x_i)\,I(x_i) = -\sum_{i=1}^{n} P(x_i)\log P(x_i)$
$H(X \mid Y) = -\sum_{i=1}^{n}\sum_{j=1}^{m} P(x_i, y_j)\log P(x_i \mid y_j)$, $I(X;Y) = H(X) - H(X \mid Y)$
$H(X_1, X_2, X_3, \dots) = H(X_1) + H(X_2 \mid X_1) + \dots$
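These averages are easy to check numerically. A minimal sketch (function names mine, base-2 logs) computing $H(X)$, $H(X \mid Y)$ and $I(X;Y)$ from a joint pmf:

```python
import math

def entropy(p):
    # H(X) = -sum p(x) log2 p(x)
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    # I(X;Y) = H(X) - H(X|Y), computed from a joint pmf P(x_i, y_j)
    px = [sum(row) for row in joint]           # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]     # marginal of Y (columns)
    hx = entropy(px)
    # H(X|Y) = -sum P(x,y) log2 P(x|y), with P(x|y) = P(x,y)/P(y)
    hxy = -sum(p * math.log2(p / py[j])
               for row in joint
               for j, p in enumerate(row) if p > 0)
    return hx - hxy
```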
Rate distortion function
$R_g(D) = \frac{1}{2}\log_2\frac{\sigma_x^2}{D}$ for $0 < D < \sigma_x^2$, and $R^*(D) \le R(D) \le \frac{1}{2}\log_2\frac{\sigma_x^2}{D}$
$R^*(D) = H(X) - \frac{1}{2}\log_2 2\pi e D$ (pg 109)
Channel Capacity

$C = \max_{P(x_j)} I(X;Y) = \max_{P(x_j)} \sum_{i=1}^{n}\sum_{j=1}^{m} P(x_j, y_i)\log\frac{P(y_i \mid x_j)}{P(y_i)}$

With power and bandwidth constraints (Shannon's theorem):
$C = W\log_2\!\left(1 + \frac{P_{avg}}{W N_0}\right)$
Since $P_{avg} = C E_b$, this can also be written as
$r = \frac{C}{W} = \log_2\!\left(1 + \frac{C}{W}\frac{E_b}{N_0}\right)$, where r is measured in bits/second/Hz.
If we solve for SNR we have
$\frac{E_b}{N_0} = \frac{2^{C/W}-1}{C/W} = \frac{2^r - 1}{r}$, and $\lim_{C/W \to 0}\frac{2^{C/W}-1}{C/W} = \ln 2 = -1.6\,\mathrm{dB}$

This means that as the SNR approaches ln 2, the required bandwidth approaches infinity.
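A short numeric check of Shannon's formula and of the $\ln 2$ limit (function names mine):

```python
import math

def capacity(W, P_avg, N0):
    # Shannon: C = W log2(1 + P_avg / (W N0))
    return W * math.log2(1.0 + P_avg / (W * N0))

def ebn0_required(r):
    # Eb/N0 = (2^r - 1) / r for spectral efficiency r = C/W
    return (2.0 ** r - 1.0) / r
```

For small r, `ebn0_required(r)` approaches ln 2 ≈ 0.693, the −1.6 dB limit quoted above.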


$R_0$ Theorem, Error Bounds

Average error probability for random coding $\le 2^{-n(R_0 - R)}$
$P(E \mid X_m) \le \sum_{m' \ne m} \int p^{\lambda}(y \mid x_{m'})\,p^{1-\lambda}(y \mid x_m)\,dy$

Special case when $\lambda = \frac{1}{2}$:
$ED_{\lambda}(1 \to 2) = \int\left(\sum_x p(x)\sqrt{P(y \mid x)}\right)^2 dy$
Special case when $\lambda = \frac{1}{2}$ and $p(x)$ uniform, $\|x\| = Q$, $p(x) = \frac{1}{Q}$:
$ED_{\lambda}(1 \to 2) = \frac{1}{Q^2}\int\left(\sum_{i=1}^{Q}\sqrt{p(y \mid x_i)}\right)^2 dy$
$R_0 = 2\log_2 Q - \log_2\int\left(\sum_{i=1}^{Q}\sqrt{p(y \mid x_i)}\right)^2 dy$
$R_0$ for the Binary Symmetric Channel $= \log_2\frac{2}{1 + 2\sqrt{p(1-p)}}$
$R_0$ for the AWGN BSC $= \log_2\frac{2}{1 + e^{-E_c/N_0}}$
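The two closed-form $R_0$ expressions can be evaluated directly (function names mine):

```python
import math

def r0_bsc(p):
    # R0 for a binary symmetric channel with crossover probability p
    return math.log2(2.0 / (1.0 + 2.0 * math.sqrt(p * (1.0 - p))))

def r0_awgn_bsc(ec_over_n0):
    # R0 for the AWGN BSC in terms of Ec/N0
    return math.log2(2.0 / (1.0 + math.exp(-ec_over_n0)))
```

As expected, `r0_bsc(0)` gives 1 bit (noiseless channel) and `r0_bsc(0.5)` gives 0 (useless channel).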
Linear block code

You start off with a generator matrix G, where $x_m G = \mathrm{codeword}$; $x_m$ is the
message bits. In reduced (systematic) form, $G = [I_k \mid P]$. We use the parity
check matrix $H = [-P^T \mid I_{n-k}]$ to check for errors.

Hard decoding procedure

If $Y = C_{code} + e_{error}$, then $YH^T = (C_m + e)H^T = eH^T = S$,
where S is the syndrome. There are $2^{\#\,\mathrm{syndrome\ bits}}$ syndromes, and
we need to find the error pattern that corresponds to each syndrome.
To find the error patterns, we first use up all the fewest-bit
patterns (1 bit), then move to different combinations of 2, 3,
... bits. With this information, we can construct the standard array
and the syndrome table. When we multiply Y by $H^T$, we get the
syndrome, and we look up the syndrome table for the most likely
error. We then subtract the error from the received word.
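The whole hard-decision procedure fits in a short sketch. This example uses one particular (7,4) Hamming code; the parity matrix P below is one valid choice (its $H = [P^T \mid I_3]$ has all 7 nonzero columns), not necessarily the one from class:

```python
# Syndrome decoding for a (7,4) Hamming code in systematic form, G = [I_4 | P].
P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
n, k = 7, 4

def encode(msg):
    # codeword = [msg | msg . P] (mod 2)
    parity = [sum(msg[i] * P[i][j] for i in range(k)) % 2 for j in range(n - k)]
    return msg + parity

def syndrome(y):
    # S = y H^T (mod 2), with H = [P^T | I_3]
    return tuple((sum(y[i] * P[i][j] for i in range(k)) + y[k + j]) % 2
                 for j in range(n - k))

# syndrome table: start with the fewest-bit (single-bit) error patterns
table = {}
for pos in range(n):
    e = [0] * n
    e[pos] = 1
    table[syndrome(e)] = e

def decode(y):
    # look up the syndrome, subtract the error pattern, keep the message bits
    s = syndrome(y)
    if any(s):
        y = [(yi + ei) % 2 for yi, ei in zip(y, table[s])]
    return y[:k]
```

Flipping any single bit of a codeword produces a nonzero syndrome that the table maps back to that bit.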

Rate relationship
coding gain $= R_c d_{min}$
$d_{min}$ is the smallest number of columns of H that are linearly dependent.
If k = input bits and n = output bits, $R_c = \frac{k}{n}$; the total energy is equal, so $E_b k = E_c n$ and $E_b R_c = E_c$.

Probability of error
where $d_{min}^{H}$ is the minimum Hamming distance:
Soft decision: $P_e \le (M-1)\,Q\!\left(\sqrt{2R_c d_{min}^{H}\frac{E_b}{N_0}}\right)$
Hard decision: $P_e \le (M-1)\left[4p(1-p)\right]^{d_{min}^{H}/2}$

Hamming code (a type of systematic code)
$(n, k) = (2^m - 1,\ 2^m - 1 - m)$, where the columns of H (the parity check matrix) are the complete set of nonzero m-bit patterns.

Convolutional Code

Transfer Function

- The transfer function is defined as $T(D) = \frac{X_e}{X_a}$; solving the
system of state equations yields a closed-form expression.
- If you start from state 000, $a_d$ tells you the number of paths with
distance d. So in the example below there is 1 path with distance 6,
2 paths with distance 8, and so on.

$X_c = D^3 X_a + D X_b$, $X_b = D X_c + D X_d$
$X_d = D^2 X_c + D^2 X_d$, $X_e = D^2 X_b$
$T(D) = \frac{D^6}{1 - 2D^2} = D^6 + 2D^8 + 4D^{10} + 8D^{12} + \dots = \sum_{d=6}^{\infty} a_d D^d$
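The series expansion follows mechanically from the geometric series $\frac{D^6}{1-2D^2} = D^6\sum_k (2D^2)^k$, so $a_d = 2^{(d-6)/2}$ for even $d \ge 6$. A small sketch (function name mine):

```python
def transfer_coeffs(num_power=6, max_d=14):
    # T(D) = D^6 / (1 - 2D^2) = D^6 * sum_k (2 D^2)^k,
    # so a_d = 2^((d - 6)/2) for d = 6, 8, 10, ...
    coeffs = {}
    k = 0
    while num_power + 2 * k <= max_d:
        coeffs[num_power + 2 * k] = 2 ** k
        k += 1
    return coeffs
```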

Probability of error
If $T(D) = \sum_{d=d_{free}}^{\infty} a_d D^d$, then
$P_e \le \sum_{d=d_{free}}^{\infty} a_d\,\frac{1}{2}e^{-R_c d E_b/N_0} = \frac{1}{2}T(D)\Big|_{D = e^{-R_c E_b/N_0}} \approx a_{d_{free}}\,\frac{1}{2}e^{-R_c d_{free} E_b/N_0}$

Trellis Coded Modulation

1. Draw the constellation.
2. Label the constellation points sequentially with numbers.
3. Set-partition $n_1$ times.
4. Use the size-v convolutional code to decide the number of states needed: $\# = 2^{k(l-1)}$, where k is the number of inputs and l is the number of boxes in the convolutional code.
5. Use the set partition to assign signals to the trellis branches (branches from the same state must have the same parent partition).
6. Convert the branch numbers into binary bits.
7. Notice that branches from the same state have equal bits; keep the equal bits and merge the branches into single branches.
8. Now we have the state transition trellis for the convolutional code.

Figure 1: Convolutional code

Bandlimited Channels

$s(t) = \sum_{n=0}^{\infty} I_n g(t - nT)$ passes through the channel $c(t)$, and noise is added: $r(t) = \sum_n I_n h(t - nT) + n(t)$
$h(t) = g(t) * c(t)$, and after sampling $y_k = I_k + \sum_{n \ne k} I_n x_{k-n} + \nu_k$
To avoid ISI we need to satisfy $\sum_{m=-\infty}^{\infty} X\!\left(f + \frac{m}{T}\right) = T$

SNR
$\frac{E_b}{N_0} = \frac{P_b T}{N_0} = \frac{P_b(1-\beta)}{N_0 W}$
Raised cosine function

$X_{rc}(f) = T$, for $0 \le |f| \le \frac{1-\beta}{2T}$
$X_{rc}(f) = \frac{T}{2}\left\{1 + \cos\!\left[\frac{\pi T}{\beta}\left(|f| - \frac{1-\beta}{2T}\right)\right]\right\}$, for $\frac{1-\beta}{2T} \le |f| \le \frac{1+\beta}{2T}$
$X_{rc}(f) = 0$, for $|f| \ge \frac{1+\beta}{2T}$
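The piecewise definition translates directly into code (function name mine):

```python
import math

def raised_cosine(f, T, beta):
    # Piecewise raised-cosine spectrum X_rc(f)
    af = abs(f)
    f1 = (1.0 - beta) / (2.0 * T)   # end of the flat region
    f2 = (1.0 + beta) / (2.0 * T)   # edge of the band
    if af <= f1:
        return T
    if af <= f2:
        return (T / 2.0) * (1.0 + math.cos(math.pi * T / beta * (af - f1)))
    return 0.0
```

At $|f| = \frac{1}{2T}$ the spectrum passes through $T/2$, the Nyquist rolloff midpoint.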
Probability of error
$P_M = 1 - (1 - P_{\sqrt{M}})^2$, $P_{\sqrt{M}} = 2\left(1 - \frac{1}{\sqrt{M}}\right)Q\!\left(\sqrt{\frac{3E_{avg}}{(M-1)N_0}}\right)$

The key equation to note is $\frac{1+\beta}{2T} = \frac{BW}{2}$, where $\frac{1}{T}$ is the symbol rate
($R_S$) measured in symbols/second. If you need to transmit 9600 bits/s with
8PSK (3 bits/symbol), you would have $\frac{9600\ \text{bits/s}}{3\ \text{bits/symbol}} = 3200$ symbols/s. In general we
want a $\beta$ value greater than .5.
Example:
1. If we have a 4000 Hz voice-bandpass channel, what's the bit rate if we use
BPSK? (let $\beta = 1/2$)
$\frac{1}{2T}(1 + 1/2) = 2000$, from which we get $\frac{1}{T} = 2666.67$ symbols/sec.
Since this is BPSK with 1 bit per symbol, we also get about 2667 bits/sec.
2. If we use 8QAM?
We know that the symbol rate is 2666.67 symbols/sec; with 3 bits per symbol we
have $2666.67 \times 3 \approx 8000$ bits/sec.
3. If we use 4FSK?
We divide the total bandwidth by 4: $\frac{4000\,\mathrm{Hz}}{4} = \frac{1}{T} = 1000$ symbols/sec. With
this value we multiply by 2 bits/symbol to get a bit rate of 2000 bits/sec.
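The key equation and the worked numbers above can be checked in a few lines (function name mine):

```python
def symbol_rate(bandpass_bw_hz, beta):
    # key equation: (1 + beta) / (2T) = BW / 2  =>  Rs = 1/T = BW / (1 + beta)
    return bandpass_bw_hz / (1.0 + beta)

rs = symbol_rate(4000.0, 0.5)   # ~2666.7 symbols/s for the 4000 Hz channel
bpsk_bits = rs * 1              # BPSK: 1 bit/symbol
qam8_bits = rs * 3              # 8QAM: 3 bits/symbol, ~8000 bits/s
fsk4_bits = (4000.0 / 4) * 2    # 4FSK: divide BW among 4 tones, 2 bits/symbol
```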
