Many reliable protocols, such as TCP (Transmission Control Protocol), use
a timer for retransmission. When a packet is sent, the timer is started;
if it expires before the acknowledgement for that packet arrives, the
packet is retransmitted. Two questions arise in this context: How long
should the timeout interval be? And what happens if the interval is too
long or too short?
In practice, a highly dynamic algorithm is used that constantly adjusts
the timeout interval based on measurements of network performance. If
the interval is set too short, unnecessary retransmissions occur,
congesting the network with duplicates of packets that have already been
received. If it is set too long, performance suffers from long
retransmission delays whenever a packet is lost.
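As a concrete illustration of such a dynamic algorithm, the following is a minimal sketch of the smoothed-RTT scheme TCP uses to derive its retransmission timeout (in the style of RFC 6298). The class name, the constants `ALPHA`, `BETA`, and `K`, and the one-second lower bound are illustrative assumptions, not part of the text above.

```python
class RetransmissionTimer:
    """Adaptive retransmission timeout based on measured round-trip times.

    Sketch of the TCP-style estimator: keep a smoothed RTT and an RTT
    variance, and set the timeout a few variances above the smoothed RTT.
    """

    ALPHA = 1 / 8   # weight of a new sample in the smoothed RTT
    BETA = 1 / 4    # weight of a new sample in the RTT variance
    K = 4           # how many variances above the smoothed RTT to time out

    def __init__(self):
        self.srtt = None     # smoothed round-trip time (seconds)
        self.rttvar = None   # round-trip time variance (seconds)

    def on_rtt_sample(self, rtt):
        """Fold a new RTT measurement into the running estimates."""
        if self.srtt is None:
            # First sample: initialize the estimates directly from it.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Update variance first (it uses the old smoothed RTT),
            # then update the smoothed RTT itself.
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    @property
    def rto(self):
        """Current retransmission timeout, clamped to at least 1 second."""
        if self.srtt is None:
            return 1.0
        return max(1.0, self.srtt + self.K * self.rttvar)
```

Because the timeout tracks both the average RTT and its variability, it shrinks on a fast, stable network (avoiding long retransmission delays) and grows when RTTs are large or jittery (avoiding spurious duplicates), which is exactly the trade-off described above.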
The application lets you observe these different scenarios. Just try it!