[mininet-discuss] CPU scheduling among hosts and switches

Andrew Ferguson adf at cs.brown.edu
Sun Dec 8 16:13:52 PST 2013


hi Hugo,

check out Linux's CFS CPU bandwidth control. this gives you finer-grained CPU control than just relative weights (which is what CPU shares provide); a minimal sketch follows the links below.

some high-level overviews:
- https://lwn.net/Articles/428230/
- http://www.blaess.fr/christophe/2012/01/07/linux-3-2-cfs-cpu-bandwidth-english-version/

in-depth paper: https://www.kernel.org/doc/ols/2010/ols2010-pages-245-254.pdf

and RedHat's very useful documentation on Resource Management:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/
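For concreteness, a minimal sketch of the quota/period knobs those links describe, assuming cgroup v1 with the cpu controller mounted at /sys/fs/cgroup/cpu and root privileges; the group name and PID are placeholders, and this is not something Mininet sets up for you. Note that the quota is a per-period ceiling, while cpu.shares only assigns relative weights:

    import os

    # Sketch: cap a process (e.g. ovs-vswitchd) at ~50% of one core using
    # CFS bandwidth control.  Assumes cgroup v1 and root; names are placeholders.
    CGROUP = '/sys/fs/cgroup/cpu/switch_budget'
    os.makedirs(CGROUP, exist_ok=True)

    def write(fname, value):
        with open(os.path.join(CGROUP, fname), 'w') as f:
            f.write(str(value))

    write('cpu.cfs_period_us', 100000)   # 100 ms enforcement period
    write('cpu.cfs_quota_us', 50000)     # at most 50 ms of CPU time per period
    write('cgroup.procs', 12345)         # placeholder PID to move into the group

Setting cpu.cfs_quota_us to -1 removes the cap again.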


good luck!
Andrew


On Dec 5, 2013, at 10:36 PM, Hugo Sousa Pinto <hpinto at andrew.cmu.edu> wrote:
> Hey Bob.
> 
> As I understand it, high-performance Mininet only gives an upper bound on the CPU usage of hosts. What I actually care about here (since overall CPU usage is below 100%) is that processes are guaranteed a minimum share of CPU within a small window of time, so that they can operate in real time. I guess this is controlled by the Linux kernel. Does Mininet allow me to control this?
> 
> Also, how do you think I could check whether a process (e.g. iperf) is getting enough CPU bandwidth in real time?
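One rough way to watch that (a sketch, not a Mininet feature): sample the process's accumulated CPU time from /proc over short windows; the PID and window length are placeholders.

    import os, time

    # Print a process's CPU usage over short windows by sampling utime+stime
    # from /proc/<pid>/stat.  Assumes Linux /proc; PID and window are placeholders.
    CLK_TCK = os.sysconf('SC_CLK_TCK')

    def cpu_seconds(pid):
        with open('/proc/%d/stat' % pid) as f:
            fields = f.read().split()
        return (int(fields[13]) + int(fields[14])) / float(CLK_TCK)  # utime + stime

    pid, window = 12345, 0.5   # e.g. the iperf PID, 500 ms window
    prev = cpu_seconds(pid)
    while True:
        time.sleep(window)
        now = cpu_seconds(pid)
        print('%.1f%% CPU over the last %.1f s' % (100 * (now - prev) / window, window))
        prev = now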
> 
> Hugo
> 
> 
> 
> 
> On Thu, Dec 5, 2013 at 6:39 PM, Bob Lantz <rlantz at cs.stanford.edu> wrote:
> 1) I don't think so, but you may be able to call the python api or tc.
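Roughly, both options might look like this (a sketch, not an official recipe): the H1 - S1 - H2 topology and numbers mirror the thread, and it assumes root privileges and the reference controller.

    #!/usr/bin/env python
    # Sketch: set limits from the Python API, then poke at them after startup.
    from mininet.net import Mininet
    from mininet.node import CPULimitedHost
    from mininet.link import TCLink

    net = Mininet(host=CPULimitedHost, link=TCLink)
    h1 = net.addHost('h1', cpu=0.3)   # ~30% of system CPU, enforced via CFS
    h2 = net.addHost('h2', cpu=0.3)
    s1 = net.addSwitch('s1')
    net.addController('c0')
    net.addLink(h1, s1, bw=500)       # 500 Mbit/s shaped links, as in the test
    net.addLink(h2, s1, bw=500)
    net.start()

    h1.setCPUFrac(0.5)                              # Python API: change h1's CPU share
    print(h1.cmd('tc -s qdisc show dev h1-eth0'))   # tc: inspect the installed shaping

    net.stop()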
> 
> On Dec 5, 2013, at 10:58 AM, Hugo Sousa Pinto <hpinto at andrew.cmu.edu> wrote:
> 
>> Hey Bob,
>> 
>> Actually I was running some tests on my machine, but I have a couple of questions:
>> 
>> 1) It is not possible to set performance limits using a simple CLI command, right?
>> 2) How can I reserve a certain CPU bandwidth for the switch?
>> 
>> I have a network:
>> 
>> H1 - S1 - H2
>> 
>> I have 500 Mbps links, and I run an iperf UDP test from H1 to H2 (I can specify the bandwidth of the UDP flow). For faster flows, I start seeing packets being dropped, even though aggregate CPU usage stays below 70%. I believe the switch may not be getting enough CPU to process the packets in real time. I only found how to reserve CPU for hosts (with CPULimitedHost), but not for switches.
>> 
>> Thanks again!
>> Hugo
>> 
>> 
>> 
>> On Sun, Dec 1, 2013 at 4:56 PM, Bob Lantz <rlantz at cs.stanford.edu> wrote:
>> No, not really - the ratio is a coincidence. Those numbers were empirical, based on our test system(s), which had approximately 3 GHz CPUs and achieved about 3 Gb/s of switching bandwidth through OVS. Additional details are in the tech report.
>> 
>> Really the only way to determine what kind of performance you could get on your system is to run some tests.
>> 
>> ~Bob
>> 
>> On Dec 1, 2013, at 12:27 PM, Hugo Sousa Pinto <hpinto at andrew.cmu.edu> wrote:
>> 
>>> I will take a look at that. In the context of a single physical machine, could you please clarify the following paragraph in the paper:
>>> 
>>> "For example, on a server with 3 GHz of CPU and 3 GB RAM that can provide 3 Gb/s of internal packet bandwidth, one can create a network of 30 hosts with 100 MHz CPU and 100 MB memory each, connected by 100 Mb/s links".
>>> 
>>> How do you get from the 3 GHz of CPU to the 3 Gb/s of packet bandwidth? Does forwarding one bit take only one CPU cycle?
>>> 
>>> Also, 30 hosts with 100 MHz CPU each would already consume the whole 3 GHz available. How can you still have CPU capacity left for emulating the 100 Mb/s links?
>>> 
>>> Thank you,
>>> Hugo
>>> 
>>> 
>>> 
>>> 
>>> On Tue, Nov 26, 2013 at 4:49 AM, Philip Wette <wette at mail.upb.de> wrote:
>>> On 25.11.13 at 20:21, Bob Lantz wrote:
>>> 
>>> As has been discussed on the list:
>>> 
>>> 1) It's trivial to connect multiple mininet instances using tunnels (e.g. a couple of commands to ovs to set up gre tunnels; see the sketch after this list)
>>> 
>>> 2) The slightly tricky part is orchestration and abstraction, which Vitaly, Philip and I have all been working on independently (though maybe together in the future?)
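For reference, a minimal sketch of item 1 (not Bob's exact commands): one ovs-vsctl call per machine, each pointing at the other machine's physical IP. The switch name, port name, and peer address are placeholders.

    import subprocess

    # Sketch: add a GRE tunnel port to the local OVS bridge so that two Mininet
    # instances on different machines share a layer-2 segment.  Run the
    # equivalent on the peer, with remote_ip pointing back at this machine.
    PEER_IP = '192.0.2.2'   # placeholder: physical IP of the other machine

    subprocess.check_call([
        'ovs-vsctl', 'add-port', 's1', 'gre0', '--',
        'set', 'interface', 'gre0', 'type=gre',
        'options:remote_ip=%s' % PEER_IP,
    ])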
>>> 
>>> Yes, we should definitely try to find a way to integrate our developments into one project.
>>> 
>>> 
>>> -Bob
>>> 
>>> 
>>> On Nov 25, 2013, at 7:24 AM, Philip Wette <wette at mail.upb.de> wrote:
>>> 
>>> Hi,
>>> 
>>> Or take a look at MaxiNet, which distributes Mininet over several physical machines.
>>> 
>>> https://www.cs.upb.de/?id=maxinet
>>> 
>>> 
>>> On 25.11.13 at 13:59, Christian Esteve Rothenberg wrote:
>>> Hi Hugo,
>>> 
>>> I suggest looking into this work:
>>> 
>>> Global Network Modelling Based On Mininet Approach.
>>> Vitaly Antonenko and Ruslan Smelyanskiy
>>> http://conferences.sigcomm.org/sigcomm/2013/papers/hotsdn/p145.pdf
>>> 
>>> -Ch
>>> 
>>> On Mon, Nov 25, 2013 at 2:15 AM, Hugo Sousa Pinto <hpinto at andrew.cmu.edu> wrote:
>>> Hello Bob,
>>> 
>>> Thank you for your reply. I took a look at the CoNEXT paper and the
>>> technical report, and they answer exactly the issues I was trying to
>>> understand. As a way of addressing the processing limitations of a single
>>> machine, would it be possible to connect multiple Mininets, each on its own
>>> machine? Each Mininet could have its own controller.
>>> 
>>> Thank you,
>>> Hugo
>>> 
>>> 
>>> On Sat, Nov 16, 2013 at 7:02 PM, Bob Lantz <rlantz at cs.stanford.edu> wrote:
>>> Have you considered looking at our papers? We have links to several of
>>> them on mininet.org, and you might want to take a look at them all but
>>> especially the CoNEXT paper, our Stanford technical report, Brandon's Ph.D.
>>> thesis, and the monitoring code on GitHub.
>>> 
>>> Basically the answer is: whatever the subsystem you're using gives you.
>>> CFS is reasonably fair and also caps CPU time per period according to the
>>> quota you specify. tc has various queuing disciplines that work in different
>>> ways. It is not difficult to monitor things in the usual ways, for example
>>> by looking at packet or byte counters.
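As one concrete instance of the counter-based approach (a sketch; the interface name and sampling interval are placeholders): sample an interface's tx byte counter from sysfs and convert the delta to a rate. By default Mininet switch ports live in the root namespace, so they are visible directly.

    import time

    # Print throughput on a switch port by sampling its tx_bytes counter.
    # The interface name and sampling interval are placeholders.
    def tx_bytes(intf):
        with open('/sys/class/net/%s/statistics/tx_bytes' % intf) as f:
            return int(f.read())

    intf, interval = 's1-eth1', 1.0
    prev = tx_bytes(intf)
    while True:
        time.sleep(interval)
        cur = tx_bytes(intf)
        print('%s: %.2f Mbit/s' % (intf, 8 * (cur - prev) / interval / 1e6))
        prev = cur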
>>> 
>>> If you double the number of links, you usually get less than half the speed
>>> because of multiplexing and other overhead.
>>> 
>>> There is a lot that can be done to improve the understanding and
>>> implementation of performance guarantees, so we encourage you to work with
>>> the project to make things better!
>>> 
>>> -Bob
>>> 
>>> 
>>> On Nov 16, 2013, at 2:56 PM, Hugo Sousa Pinto <hpinto at andrew.cmu.edu>
>>> wrote:
>>> 
>>> Any ideas on this?
>>> 
>>> Thank you.
>>> 
>>> On Mon, Nov 11, 2013 at 11:11 PM, Hugo Sousa Pinto <hpinto at andrew.cmu.edu>
>>> wrote:
>>> Hello,
>>> 
>>> The general issue I am trying to understand is how Mininet shares the CPU
>>> resources among hosts and switches, and what guarantees I can get. Is there
>>> any reference I can look into to have a better understanding of this?
>>> 
>>> I understand that the link speeds and switching speeds are limited by the
>>> CPU resources, but more specifically I have the following questions:
>>> 
>>> 1) Let's assume at first that the number of nodes and link speeds are
>>> small, meaning that there will be enough CPU resources to serve all the
>>> processes on time. If a given link has 100kbps of bandwidth, and assuming
>>> there would always be packets in the queue, would the packets actually be
>>> served at that rate, i.e. 1 bit every 10 us? Or would the scheduler just
>>> guarantee an average of 100kbps, with bursts of packets being transmitted
>>> each time it runs the switch process? In that case, what would be the time
>>> window where I would see an average rate of 100kbps?
>>> 
>>> 2) Is there any tracing facility I can use to monitor the bandwidth
>>> utilization on the switches?
>>> 
>>> 3) How does Mininet scale to larger topologies, i.e. if I double the
>>> number of links does this mean that I can only have half the speed?
>>> 
>>> Thank you for your help.
>>> 
>>> Hugo
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Philip Wette, M.Sc.             E-Mail: wette at mail.upb.de
>>> University of Paderborn         Tel.:   05251 / 60-1716
>>> Department of Computer Science
>>> Computer Networks Group         http://wwwcs.upb.de/cs/ag-karl
>>> Warburger Straße 100            Room:   O3.152
>>> 33098 Paderborn
>>> 
>>> 
>> 
>> 
> 
> 
> _______________________________________________
> mininet-discuss mailing list
> mininet-discuss at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/mininet-discuss
