[mininet-discuss] TCP Small Queues throttling never engages in Mininet?
nick.bastin at gmail.com
Sat Apr 8 05:58:00 PDT 2017
On Sat, Apr 8, 2017 at 8:15 AM, Max Weller <luelistan at gmail.com> wrote:
> If there is a stall below the TCP layer, the TCP Small Queues
> algorithm ensures that not too much data accumulates in the buffers at
> the queueing discipline level. To do this, the algorithm checks
> sk->sk_wmem_alloc, the amount of memory allocated for send buffers of
> this socket. In Mininet, sk_wmem_alloc never increases at all,
> regardless of how much data is buffered by the tc qdisc layer.
I believe this is because netem orphans the sk_buff from TCP when delay is
enabled (actually skb_orphan_partial, but it will likely have the same
effect for your purposes).
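For reference, skb_orphan_partial (net/core/sock.c) in kernels of roughly that era looks something like the following. This is a paraphrase from memory, not verbatim source - check your own tree - but it would explain why you see sk_wmem_alloc stuck at exactly "1": the function hands back all accounted bytes but one.

```c
/* Paraphrase (not verbatim) of skb_orphan_partial() from
 * net/core/sock.c, circa v4.x kernels. For skbs whose destructor
 * still charges the sending socket (sock_wfree / tcp_wfree), it
 * returns all but one byte of the truesize accounting, so TSQ's
 * check of sk_wmem_alloc sees almost nothing outstanding. */
void skb_orphan_partial(struct sk_buff *skb)
{
	if (skb->destructor == sock_wfree
#ifdef CONFIG_INET
	    || skb->destructor == tcp_wfree
#endif
	   ) {
		/* Give back truesize - 1 bytes; keep 1 byte charged so
		 * the socket stays referenced and is not freed early. */
		atomic_sub(skb->truesize - 1, &skb->sk->sk_wmem_alloc);
		skb->truesize = 1;
	} else {
		skb_orphan(skb);
	}
}
```

Since netem applies this when it queues delayed packets, the bytes sitting in the netem qdisc are no longer charged to the socket, and tcp_small_queue_check never sees enough outstanding memory to set TSQ_THROTTLED.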
> The minimal test case I've built is to set a fixed delay on host1 "tc
> qdisc add dev h1-eth0 root netem delay 5000ms", and run iperf between
> the hosts.
> I added a printk statement to the function tcp_small_queue_check in
> net/ipv4/tcp_output.c to see if it works.
> If running with Mininet / veth, sk_wmem_alloc is always at the value
> "1", and TSQ_THROTTLED never gets set. If doing the same on two machines
> connected over an ethernet cable, sk_wmem_alloc rises, and eventually
> TSQ_THROTTLED is set.
This happens with physical interfaces, even when just using netem delay?
If that's the case, then the distinction is likely in the veth driver,
which not only incurs no queueing or transmission delay of its own, but
also has no throughput limitation.
> Does anyone here have any tips on how I could simulate the TCP Small
> Queue throttling in Mininet?
I suspect you'll have better luck if you add delay as a bump in the wire
rather than on either of the TCP endpoints. You may also need to add
shaping to throttle the link to a speed slower than "pointer move".
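As a sketch of that suggestion, assuming a topology like h1 -- s1 -- h2 with
a switch-side interface named s1-eth1 (the interface names are hypothetical;
substitute your own), you could put both the shaper and the delay on the
switch port instead of on h1:

```
# Run as root in the relevant namespace; interface names are examples.
# Shape the link well below veth's native speed so a queue actually
# builds up at the qdisc...
tc qdisc add dev s1-eth1 root handle 1: htb default 10
tc class add dev s1-eth1 parent 1: classid 1:10 htb rate 10mbit
# ...and attach the netem delay behind the shaper, away from the TCP
# endpoint, so netem is not orphaning skbs still charged to the sender.
tc qdisc add dev s1-eth1 parent 1:10 handle 10: netem delay 50ms
```

With the delay moved off the sending host, the skbs queued on h1 itself
should keep their socket accounting, which is what TSQ needs in order to
throttle.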