End-to-end congestion control in TCP
is done by having each sender limit the rate at which it sends traffic
into the network. But how does a TCP sender detect congestion? In case
of congestion, one or more router buffers overflow and packets are dropped.
This, in turn, results in a timeout at the sender, which is taken as
an indication of congestion. The TCP congestion control algorithm has
two phases: slow start and congestion avoidance.
When a TCP connection begins or a timeout occurs, the congestion
window is typically initialized to 1 segment. As soon as an acknowledgement
is received, the window is doubled and two segments can be sent. This
leads to four segments in the next transmission round, then eight, and
so on. During slow start the sender thus begins by transmitting at a slow
rate but increases the congestion window exponentially.
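The doubling per transmission round can be sketched as follows (a minimal illustration in segments, matching the simplification noted at the end of this section; the function name and threshold value are ours, not part of the application):

```python
def slow_start_rounds(threshold, cwnd=1):
    """Return the congestion window (in segments) for each transmission
    round of slow start, until the threshold is reached."""
    rounds = []
    while cwnd < threshold:
        rounds.append(cwnd)
        cwnd *= 2  # window doubles every transmission round
    return rounds

print(slow_start_rounds(16))  # → [1, 2, 4, 8]
```

With a threshold of 16 segments, the sender passes through windows of 1, 2, 4, and 8 segments before leaving slow start.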
As soon as a threshold is reached, TCP enters the second phase:
congestion avoidance. Now the congestion window grows linearly: one
segment per transmission round. As soon as a loss occurs, the
threshold is set to one half of the current value of the congestion
window and the slow start phase is entered again.
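Putting both phases together, the evolution of the congestion window can be simulated round by round (a sketch under our own naming; the function, its parameters, and the initial threshold of 16 segments are illustrative assumptions, not taken from the application):

```python
def congestion_window_history(num_rounds, loss_rounds, threshold=16):
    """Simulate the congestion window (in segments) per transmission round.

    loss_rounds: set of round indices in which a segment loss occurs.
    On loss, the threshold is halved and slow start is entered again.
    """
    cwnd, history = 1, []
    for r in range(num_rounds):
        history.append(cwnd)
        if r in loss_rounds:
            threshold = max(cwnd // 2, 1)  # halve the threshold
            cwnd = 1                       # re-enter slow start
        elif cwnd < threshold:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: linear growth
    return history

print(congestion_window_history(10, loss_rounds={6}))
# → [1, 2, 4, 8, 16, 17, 18, 1, 2, 4]
```

The printed history shows the familiar sawtooth: exponential growth up to the threshold, linear growth afterwards, and a reset to one segment after the loss in round 6.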
The application offers two different views of TCP congestion control.
If you choose "congestion window", you can interact with the TCP
congestion control algorithm: selecting a transmission round causes a
loss in that round, and the algorithm reacts accordingly. Under "slow
start" you can see how many segments are sent in each transmission round.
Comment: Normally TCP operates on bytes, but for the
sake of simplicity our application shows segments.