
TCP Protocol: Slow Start

In the last post, we explained the basic idea of using sequence and acknowledgement numbers to track how many bytes have been sent and received. We also encountered the term “slow start” and outlined how TCP uses this concept on the server to send a few segments of data to the receiver, instead of filling the receiver's full receive window (RWIN) and congesting the path between them. Today we will dive into slow start and, as in every previous post, relate the theory to a real capture file.

Have you ever wondered why the first few seconds of any download start at a low transfer rate that then ramps up? That ramp-up is the TCP slow start process at work.

In a nutshell: the TCP slow start algorithm initially sends a full ICW (Initial Congestion Window) of segments to the receiver, and for every ACK packet received, TCP increases the congestion window (CW) by one segment.
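The per-ACK rule above has a useful consequence: if every in-flight segment is acknowledged, the window doubles each round trip. Here is a minimal sketch in Python (the function name and the segment counts are hypothetical, for illustration only):

```python
# Slow-start sketch: cwnd counted in whole segments.
# Each round trip, every in-flight segment is ACKed, and each
# ACK grows cwnd by one segment -- so cwnd doubles per RTT.

def slow_start(icw, rtts):
    """Return cwnd (in segments) after the given number of round trips."""
    cwnd = icw
    for _ in range(rtts):
        acks = cwnd          # assume one ACK per segment, no loss
        cwnd += acks         # +1 segment per ACK => doubling
    return cwnd

print(slow_start(icw=10, rtts=3))  # 10 -> 20 -> 40 -> 80
```

Despite the name, this growth is exponential; the “slow” only refers to starting from a small initial window rather than blasting the full RWIN at once.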

Before applying this to the capture file from the last post, I want to introduce one of the graphs that Wireshark can draw for a TCP stream. It is called the Stevens time-sequence graph, and you can generate it by selecting a packet that carries actual data in the server-to-client direction and going to Statistics -> TCP Stream Graphs.

The Stevens graph simply plots two things: 1) how many bytes TCP has sent, on the Y-axis, and 2) when TCP sent those bytes, on the X-axis. The line in the middle is actually a line of “dots”, each dot representing one TCP segment that was sent.

We already know by now that every sequence number represents a single byte sent, and that I was downloading a 10 MB file in the last example. So if Wireshark starts its sequence number count (which is relative, don't forget that) from SEQ=1, the 10 MB file is represented by exactly 10,485,760 bytes on the Y-axis. I downloaded this file in around 2.8 seconds, and that is the time shown on the X-axis.

You can zoom in to see each dot of the line, that is, a single segment and when it was actually sent. After absolute time 0, at approximately 60 ms, TCP sent the first 10 segments, which constituted roughly the first 10,000 bytes of data. Notice the slight gaps in time (on the order of microseconds) between the segments; these are due to the serialization delay the network interface induced when writing the packets onto the wire, and they are negligible.

Now, applying slow start to our capture file from the last post on the Stevens graph, we find that the server paused for around 60 ms before it began to send another batch of segments. In this batch, TCP sent 12 segments and paused again.

In response to the second batch, the server received 6 ACK packets (1 ACK for every 2 segments, due to the delayed ACK concept we discussed). Doing the math, we conclude that in the next iteration the server should grow its CW to roughly 12 + 6 = 18 segments. And if you look at the third iteration, you will see that 21 segments were sent, which is in the same ballpark.
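With delayed ACKs (one ACK per two full-size segments), the window grows by half of itself per round trip, a factor of 1.5 rather than the doubling of the per-segment-ACK case. A quick sketch of the arithmetic, using the 12-segment batch from the capture:

```python
# Delayed-ACK growth: the receiver sends one ACK per two full-size
# segments, so cwnd grows by cwnd // 2 each round trip (x1.5, not x2).

def next_cwnd(cwnd, segments_per_ack=2):
    """Congestion window (in segments) after one round trip."""
    acks = cwnd // segments_per_ack
    return cwnd + acks

print(next_cwnd(12))  # 12 segments + 6 ACKs -> 18
```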


In the graph below, I have drawn a hypothetical line showing a “what if” scenario: what if slow start never ended and TCP kept increasing the CW exponentially on every iteration? Apparently this would reduce the download time; in this case, I might have downloaded the file in only 2 seconds instead of 2.8. However, the ever-increasing send rate would eventually congest the path.

Fortunately, the slow start process increases the CW exponentially on every iteration only until one of two things happens:

  1. Congestion builds up along the path and packet loss begins to appear, or
  2. The congestion window exceeds the receive window of the receiver.
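The two stop conditions above can be combined into one short sketch. Everything here is hypothetical (function name, RTT counter, the idea of a loss happening at a fixed iteration); it only illustrates which condition fires first:

```python
# Sketch: grow cwnd exponentially until it would exceed rwin,
# or until a (simulated) packet loss is detected -- whichever first.

def slow_start_until(icw, rwin, loss_at=None):
    """Return (final cwnd, reason) for a lossless or lossy run.

    loss_at: hypothetical RTT index at which a loss is detected,
             or None for a loss-free path.
    """
    cwnd, rtt = icw, 0
    while cwnd * 2 <= rwin:
        if loss_at is not None and rtt >= loss_at:
            return cwnd, "loss"
        cwnd *= 2            # idealized doubling per RTT
        rtt += 1
    return cwnd, "rwin"

print(slow_start_until(icw=10, rwin=1000))             # capped by rwin
print(slow_start_until(icw=10, rwin=1000, loss_at=3))  # stopped by loss
```

With a large RWIN, the `loss` branch is almost always the one that fires, which matches the observation below.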

The second case rarely happens in practice, because receivers usually advertise a large RWIN. So we are left with the first case: congestion and packet loss. Once TCP detects a packet loss (the mechanisms by which TCP detects loss will be discussed in future posts), it activates another algorithm called “congestion avoidance” to stop the exponential growth that slow start caused.

In our example, what stopped slow start was indeed packet loss. Notice the little hiccup, or gap, in the middle of the line: these are packets retransmitted by the server because the client failed to receive them.

Remember when I said that TCP activates the congestion avoidance algorithm when it detects packet loss; here is an example. When congestion avoidance kicks in, TCP halves its CW. So, assuming TCP's congestion window just before the congestion was 200 segments, TCP sets the CW to 100 and also sets another important variable called “ssthresh” (Slow Start Threshold) to 100, equal to the new CW.

The “ssthresh” variable can be thought of as a marker that TCP sets for itself when it detects congestion, reminding it of the window value at which congestion happened. It is initialized to a large value when TCP first creates its TCB (transmission control block), usually the value of the RWIN/AWIN (advertised window). When TCP detects congestion, it halves the CW and sets “ssthresh” to this value. More on this variable when we discuss congestion management and avoidance techniques in future posts.
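The bookkeeping described above can be sketched in a few lines. This follows the classic Reno-style behaviour as the post describes it (halve on loss, record ssthresh at the new value); it is an illustration in segment units, not any real stack's code:

```python
# On loss detection: halve cwnd and record ssthresh at the new value.
# (A sketch of the classic halving rule described in the text.)

def on_loss(cwnd):
    """Return (new cwnd, new ssthresh) after a detected loss."""
    ssthresh = max(cwnd // 2, 2)   # keep at least 2 segments
    return ssthresh, ssthresh      # cwnd and ssthresh both set here

cwnd, ssthresh = on_loss(200)
print(cwnd, ssthresh)  # 100 100
```

From that point on, growth below ssthresh may use slow start again, while growth above it switches to congestion avoidance's gentler increase.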

I hope you found this post useful and now understand the slow start process, how to read the stream graphs, and what causes slow start to stop. In the next posts we will discuss the remaining mechanisms TCP implements to guarantee reliable data delivery.


