An in-depth analysis of TCP connection establishment and termination processes, including the 3-way handshake, sequence number prediction, piggybacking, and finite state machine representation. The document also covers important topics such as the 2-person consensus problem, call collision, and TCP data transfer. It explains the sliding window protocol, asynchrony between TCP module and application, and issues like letting the sender know of change in receiver window size and silly window syndrome.
TCP connection establishment (3-way handshake):
A → B:  SYN = 1, Seq. No. = X
B → A:  SYN = 1, Seq. No. = Y, ACK = 1, Ack. No. = X + 1
A → B:  ACK = 1, Ack. No. = Y + 1
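As a concrete illustration of the sequence/acknowledgment arithmetic above, here is a minimal Python sketch (the initial sequence numbers X and Y are arbitrary values chosen for illustration, not part of the notes):

# Sketch of the 3-way handshake's sequence/ack arithmetic (illustrative only).
X = 1000  # A's initial sequence number (arbitrary)
Y = 5000  # B's initial sequence number (arbitrary)

syn     = {"SYN": 1, "seq": X}                                   # A -> B
syn_ack = {"SYN": 1, "ACK": 1, "seq": Y, "ack": syn["seq"] + 1}  # B -> A
ack     = {"ACK": 1, "ack": syn_ack["seq"] + 1}                  # A -> B

# The SYN consumes one sequence number, hence the "+ 1" in each Ack. No.
assert syn_ack["ack"] == X + 1
assert ack["ack"] == Y + 1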
2-person consensus problem: are A and B in agreement about the state of affairs after 3-way handshake?
−→ in general: impossible
−→ can be proven
−→ “acknowledging the ACK problem”
−→ also TCP session ending
−→ lunch date problem
Call Collision:
A → B:  SYN = 1, Seq. No. = X
B → A:  SYN = 1, Seq. No. = Y
B → A:  SYN = 1, Seq. No. = Y, Ack. No. = X + 1
A → B:  SYN = 1, Seq. No. = X, Ack. No. = Y + 1
−→ only single TCB gets allocated
−→ unique full association
TCP connection termination:
A → B:  FIN = 1, Seq. No. = X
B → A:  Ack. No. = X + 1
B → A:  FIN = 1, Seq. No. = Y
A → B:  Ack. No. = Y + 1
More generally, finite state machine representation of TCP’s control mechanism:
−→ state transition diagram
Features to notice:
Basic TCP data transfer:
[Figure: basic data transfer between A and B. A sends 1 KB segments (Seq = 0, then Seq = 1024); B replies Ack = 1024, Win = 1024, then Ack = 2048, Win = 0 once its receive buffer fills; A's timer expires and it retransmits/probes; after the receiving application reads data, B advertises Ack = 2048, Win = 1024 and A may send again.]
TCP’s sliding window protocol
[Figure: sender- and receiver-side byte-stream buffers]
Sender pointers:   LastByteAcked ≤ LastByteSent ≤ LastByteWritten
Receiver pointers: LastByteRead < NextByteExpected ≤ LastByteRcvd + 1
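A rough Python sketch of the sender-side pointer bookkeeping (the class and its methods are illustrative, not from the notes; the receiver side is analogous):

class SendBuffer:
    """Sender-side pointers; invariant: LastByteAcked <= LastByteSent <= LastByteWritten."""
    def __init__(self):
        self.last_byte_acked = 0    # highest byte cumulatively acknowledged
        self.last_byte_sent = 0     # highest byte handed to the network
        self.last_byte_written = 0  # highest byte written by the application

    def app_write(self, nbytes):
        self.last_byte_written += nbytes  # application may run ahead of TCP

    def send(self, nbytes):
        # cannot send bytes the application has not written yet
        nbytes = min(nbytes, self.last_byte_written - self.last_byte_sent)
        self.last_byte_sent += nbytes

    def ack(self, ack_no):
        # cumulative ACK: never moves backwards, never beyond LastByteSent
        self.last_byte_acked = max(self.last_byte_acked,
                                   min(ack_no, self.last_byte_sent))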
Note asynchrony between TCP module and application.
Sender side: maintain invariants
−→ buffer flushing (advance window)
−→ application blocking
Thus,
EffectiveWindow = AdvertisedWindow − (LastByteSent − LastByteAcked)
−→ upper bound on new send volume
Actually, one additional refinement:
−→ CongestionWindow
EffectiveWindow update procedure:
EffectiveWindow = MaxWindow − (LastByteSent − LastByteAcked)
where
MaxWindow = min{ AdvertisedWindow, CongestionWindow }
How to set CongestionWindow?
−→ domain of TCP congestion control
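A small sketch of the window computation described above (variable names mirror the formulas; the example values are made up):

def effective_window(advertised_window, congestion_window,
                     last_byte_sent, last_byte_acked):
    max_window = min(advertised_window, congestion_window)  # MaxWindow
    in_flight = last_byte_sent - last_byte_acked             # unacknowledged bytes
    return max(0, max_window - in_flight)                    # upper bound on new sends

# e.g. 16 kB advertised, 8 kB congestion window, 3 kB in flight -> 5120 bytes may be sent
print(effective_window(16384, 8192, 3072, 0))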
Receiver side: maintain invariants
−→ buffer flushing (advance window)
−→ application blocking
Thus,
AdvertisedWindow = MaxRcvBuffer − (LastByteRcvd − LastByteRead)
Issues:
How to let sender know of change in receiver window size after AdvertisedWindow becomes 0?
−→ design choice: smart sender/dumb receiver
−→ same situation for congestion control
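With the smart-sender/dumb-receiver choice, the sender itself probes a zero window. A hedged Python sketch (the buffer size and probe interval are assumptions, not values from the notes):

import time

MAX_RCV_BUFFER = 65536  # assumed receive buffer size

def advertised_window(last_byte_rcvd, last_byte_read):
    # AdvertisedWindow = MaxRcvBuffer - (LastByteRcvd - LastByteRead)
    return MAX_RCV_BUFFER - (last_byte_rcvd - last_byte_read)

def zero_window_probe(current_window, send_one_byte, interval=5.0):
    # Persist-style probing: keep sending 1-byte probes until the ACK of a
    # probe reports a reopened window, so the update cannot be lost silently.
    while current_window() == 0:
        send_one_byte()
        time.sleep(interval)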
Silly window syndrome: Assuming receiver buffer is full, what if application reads one byte at a time with long pauses?
Do not want to send too many 1 B payload packets.
Nagle’s algorithm:
−→ useful for telnet-type applications
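The decision rule behind Nagle's algorithm can be sketched as follows (a simplification; a real implementation also interacts with the MSS and send-buffer management):

def nagle_ok_to_send(data_len, mss, effective_window, unacked_bytes_in_flight):
    # Full-sized segment and room in the window: always send.
    if data_len >= mss and effective_window >= mss:
        return True
    # Nothing outstanding: send whatever small amount we have (keeps telnet responsive).
    if unacked_bytes_in_flight == 0:
        return True
    # Otherwise hold the small data until the outstanding data is ACKed.
    return False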
Sequence number wrap-around problem: recall sufficient condition
SenderWindowSize < (MaxSeqNum + 1) / 2
−→ 32-bit sequence space/16-bit window space
However, more importantly, the time until wrap-around matters due to the possibility of roaming packets.
bandwidth              time until wrap-around†
T1 (1.5 Mbps)          6.4 hrs
Ethernet (10 Mbps)     57 min
T3 (45 Mbps)           13 min
F/E (100 Mbps)         6 min
OC-3 (155 Mbps)        4 min
OC-12 (622 Mbps)       55 sec
OC-24 (1.2 Gbps)       28 sec
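The entries follow from dividing the 2^32-byte sequence space by the link bandwidth; a quick check in Python (bandwidths in bits per second):

SEQ_SPACE_BYTES = 2 ** 32            # 32-bit sequence number space

def wraparound_time_s(bandwidth_bps):
    return SEQ_SPACE_BYTES * 8 / bandwidth_bps

print(wraparound_time_s(1.5e6) / 3600)   # T1: ~6.4 hours
print(wraparound_time_s(10e6) / 60)      # Ethernet: ~57 minutes
print(wraparound_time_s(622e6))          # OC-12: ~55 seconds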
Even more importantly, the “keeping-the-pipe-full” consideration.
bandwidth              delay-bandwidth product†
T1 (1.5 Mbps)          18 kB
Ethernet (10 Mbps)     122 kB
T3 (45 Mbps)           549 kB
FDDI (100 Mbps)        1.2 MB
OC-3 (155 Mbps)        1.8 MB
OC-12 (622 Mbps)       7.4 MB
OC-24 (1.2 Gbps)       14.8 MB
−→ 100 ms latency
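These figures are simply the bandwidth multiplied by the 100 ms latency, converted to bytes; a quick check:

LATENCY_S = 0.100                    # the 100 ms latency assumed above

def pipe_size_bytes(bandwidth_bps, latency_s=LATENCY_S):
    # data "in the pipe" that the window must cover to keep the link busy
    return bandwidth_bps * latency_s / 8

print(pipe_size_bytes(1.5e6) / 1024)      # T1: ~18 kB
print(pipe_size_bytes(100e6) / 2 ** 20)   # FDDI: ~1.2 MB
print(pipe_size_bytes(622e6) / 2 ** 20)   # OC-12: ~7.4 MB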
Also, throughput limitation imposed by TCP receiver window size.
−→ e.g., high-performance grid apps
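Since at most one receiver window of data can be outstanding per round trip, the window bounds throughput at roughly AdvertisedWindow / RTT. For example, with the classic 64 kB maximum window (an assumed figure, absent window scaling) and 100 ms RTT:

def max_throughput_bps(rcv_window_bytes, rtt_s):
    # at most one window's worth of data per round-trip time
    return rcv_window_bytes * 8 / rtt_s

# 64 kB window, 100 ms RTT: ~5.2 Mbps regardless of link speed,
# hence the problem for high-performance (e.g. grid) applications.
print(max_throughput_bps(65535, 0.100) / 1e6)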
RTT estimation
... important neither to underestimate nor to overestimate.
Karn/Partridge: Maintain running average with precautions
EstimatedRTT ← α · EstimatedRTT + β · SampleRTT
−→ need to be careful when taking SampleRTT
−→ infusion of complexity
−→ still remaining problems
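A sketch of the running-average estimator with Karn's precaution of discarding samples for retransmitted segments (α = 0.875 is an assumed, typical value; β = 1 − α):

ALPHA = 0.875                      # assumed typical weight on the old estimate
BETA = 1 - ALPHA

class KarnPartridgeEstimator:
    def __init__(self, initial_rtt):
        self.estimated_rtt = initial_rtt

    def sample(self, sample_rtt, segment_was_retransmitted):
        # Karn's precaution: an ACK for a retransmitted segment is ambiguous
        # (which transmission is it acknowledging?), so the sample is discarded.
        if segment_was_retransmitted:
            return
        self.estimated_rtt = ALPHA * self.estimated_rtt + BETA * sample_rtt

    def timeout(self):
        return 2 * self.estimated_rtt   # conservative multiple of the estimate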
Hypothetical RTT distribution:
[Figure: hypothetical RTT distributions (x-axis: RTT)]
−→ need to account for variance
−→ not nearly as nice
Jacobson/Karels:
Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + δ · Difference
Deviation = Deviation + δ · (|Difference| − Deviation)
Here 0 < δ < 1.
Finally,
TimeOut = μ · EstimatedRTT + φ · Deviation
where μ = 1, φ = 4.
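A sketch of the mean-plus-deviation estimator, with δ = 1/8 assumed (the notes only require 0 < δ < 1):

DELTA = 1 / 8        # assumed gain; commonly chosen so updates reduce to shifts
MU, PHI = 1, 4

class JacobsonKarelsEstimator:
    def __init__(self, initial_rtt):
        self.estimated_rtt = initial_rtt
        self.deviation = 0.0

    def sample(self, sample_rtt):
        difference = sample_rtt - self.estimated_rtt
        self.estimated_rtt += DELTA * difference
        self.deviation += DELTA * (abs(difference) - self.deviation)

    def timeout(self):
        # TimeOut = mu * EstimatedRTT + phi * Deviation
        return MU * self.estimated_rtt + PHI * self.deviation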
−→ persistence timer
−→ how to keep multiple timers in UNIX