[mininet-discuss] DOT: A Distributed OpenFlow Testbed

Philip Wette wette at mail.upb.de
Wed May 13 13:59:31 PDT 2015


On 13.05.15 at 22:31, Phiho Hoang wrote:
> Hi Phillip,
>
> > We are currently fixing the last bugs and hope to release version 1.0
> > in the next 4 to 6 weeks.
>
> This is great!
>
> BTW, do you think MaxiNet would work with Docker containers (modulo
> the limitations of Mininet working in a container) as well as with VMs?
This is actually a good question. I'll check on that.
>
> It would be great if there were a couple of Dockerfiles for MaxiNet.
>
> Cheers,
>
> PhiHo
>
>
> On Wed, May 13, 2015 at 3:55 PM, Philip Wette <wette at mail.upb.de> wrote:
>
>     On 13.05.15 at 21:34, Phiho Hoang wrote:
>>     Hi Philip,
>>
>>     Thank you for the info.
>>
>>     Is MaxiNet still under active development?
>     Yes it is!
>>
>>     What's the road map towards 1.0?
>     The release of MaxiNet 1.0 is actually quite near.
>     We are currently fixing the last bugs and hope to release version 1.0
>     in the next 4 to 6 weeks.
>
>
>>
>>     Cheers,
>>
>>     PhiHo
>>
>>
>>     On Wed, May 13, 2015 at 3:16 PM, Philip Wette <wette at mail.upb.de> wrote:
>>
>>         Hi PhiHo,
>>
>>         if you plan to distribute a Mininet emulation across multiple
>>         physical machines, you may want to take a look at MaxiNet.
>>
>>         MaxiNet lets you set up and control a distributed emulation
>>         without having to worry about your distributed resources.
>>         MaxiNet uses GRE tunnels to interconnect switches emulated on
>>         different machines, so you do not need any routing between
>>         the different Mininet instances.
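>>
>>         To give an idea of the kind of wiring MaxiNet automates, here
>>         is a rough, untested sketch of connecting two plain Mininet
>>         instances on two machines by hand with a GRE tunnel between
>>         their OVS switches (the addresses 10.0.100.1/10.0.100.2 and
>>         the controller IP are placeholders for your own setup):
>>
>>         #!/usr/bin/env python
>>         # Run on machine A (10.0.100.1); mirror it on machine B
>>         # (10.0.100.2) with remote_ip=10.0.100.1 and different host
>>         # IPs.  Both switches then share one L2 segment over the
>>         # tunnel, so no routing between the instances is required.
>>         from mininet.net import Mininet
>>         from mininet.node import OVSSwitch, RemoteController
>>         from mininet.cli import CLI
>>
>>         net = Mininet(switch=OVSSwitch, controller=None)
>>         net.addController('c0', controller=RemoteController,
>>                           ip='10.0.100.10', port=6633)  # shared controller
>>         s1 = net.addSwitch('s1')
>>         h1 = net.addHost('h1', ip='10.0.0.1/24')
>>         net.addLink(h1, s1)
>>         net.start()
>>         # GRE port towards the switch emulated on machine B
>>         s1.cmd('ovs-vsctl add-port s1 s1-gre0 -- set interface s1-gre0 '
>>                'type=gre options:remote_ip=10.0.100.2')
>>         CLI(net)
>>         net.stop()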
>>
>>         http://www.cs.uni-paderborn.de/?id=maxinet
>>
>>         Best,
>>
>>         Philip
>>
>>         On 13.05.15 at 21:08, Phiho Hoang wrote:
>>>         Hi Nick,
>>>
>>>         Thank you for sharing your experience.
>>>
>>>         I am not so much concerned with the throughput as with how
>>>         to handle the massive number of hosts, subnets and
>>>         datapaths in the emulation using Mininet.
>>>         (I plan to use dozens of cheap CHIPs, the Raspberry Pi Killer ;-)
>>>
>>>         Did you find any need for DHCP, DNS, routers... when you
>>>         work with Mininet at a massive scale across multiple servers
>>>         so that all Mininet hosts in the emulation can communicate
>>>         with one another conveniently?
>>>
>>>         Some sample code would be much appreciated.
>>>
>>>         Again, thank you for sharing.
>>>
>>>
>>>         Regards,
>>>
>>>         PhiHo
>>>
>>>
>>>         On Tue, May 12, 2015 at 7:37 PM, Nicholas Bastin
>>>         <nick.bastin at gmail.com> wrote:
>>>
>>>             On Tue, May 12, 2015 at 6:55 PM, Phiho Hoang
>>>             <hohoangphi at gmail.com> wrote:
>>>
>>>                 Are they described in some research paper?
>>>
>>>
>>>             No, it's work, not research... :-)
>>>
>>>                 Would you be able to share the custom topologies
>>>                 used in these Mininet environments?
>>>
>>>
>>>             Most of the test topologies are proprietary, as they
>>>             mirror customer environments, but I can elaborate a bit
>>>             on the structure of the testbed.  We have a small
>>>             cluster of Dell C6100 servers (4-node 2U chassis -
>>>             no longer sold, but you can get them on the surplus
>>>             market, or a newer variant from Supermicro).  They have
>>>             12 cores each with quad-channel memory (very important),
>>>             and we put multiport 10G cards (2-4 ports) in each node
>>>             for the distributed interconnection.  These serve as the
>>>             leaf nodes of an infrastructure whose core
>>>             interconnection fabric is built from custom 1U servers
>>>             with 12x10G interfaces (you could of course also use
>>>             hardware OpenFlow 10G switches for this purpose, but we
>>>             provision datapaths and network functions on this
>>>             hardware, do packet analysis, etc., so it's more than
>>>             just moving packets around).
>>>
>>>             This is more of a systems design problem than anything
>>>             special about Mininet - Mininet itself is unmodified in
>>>             this environment (although you obviously must tweak the
>>>             OS for open file handles, etc.).  The real question is
>>>             not so much the number of datapaths as the amount of
>>>             traffic.  The amount of traffic the system can handle
>>>             is bounded by the available memory bandwidth - each
>>>             leaf node in our system has a little over 40 GBytes/sec
>>>             of memory bandwidth, so if you're doing packet copies
>>>             to move your packets around, you have to multiply your
>>>             data rate by the number of copies along a representative
>>>             average path to estimate the rough maximum throughput
>>>             per node.  For large-scale failover studies and such,
>>>             where you're not really running traffic but want to see
>>>             controller or switch behaviour ripple through your
>>>             topology, you can safely run high hundreds of datapaths
>>>             per core (given sufficient memory, of course).
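>>>
>>>             As a back-of-the-envelope example (the hop and copy
>>>             counts here are made up - plug in your own numbers):
>>>
>>>             # Rough per-node throughput bound from memory bandwidth.
>>>             mem_bw = 40e9         # bytes/sec per leaf node, as above
>>>             hops = 5              # hypothetical switch hops on an average path
>>>             copies_per_hop = 2    # hypothetical copies (rx + tx) per hop
>>>             max_rate = mem_bw / (hops * copies_per_hop)
>>>             print(max_rate / 1e9)  # ~4 GBytes/sec, i.e. ~32 Gbit/s per node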
>>>
>>>             I would not necessarily try to bring up 50k datapaths on
>>>             a single node (you'll probably start to run into OS
>>>             resource issues there, and might have to do it using
>>>             multiple VMs on the same metal - checking
>>>             /proc/sys/fs/file-max on a newer system shows over 2.5
>>>             million, so maybe it'd be possible, although there are
>>>             also limits on the number of non-patch interfaces you
>>>             can have), but I'd wager that on big modern hardware you
>>>             could probably do it with 2 nodes (particularly
>>>             considering the absolutely absurd number of cores you
>>>             can get per node these days).
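>>>
>>>             If you do try it, a quick (Linux-only) check of the
>>>             file-handle limits mentioned above looks something like:
>>>
>>>             # Print system-wide and per-process open-file limits.
>>>             import resource
>>>             with open('/proc/sys/fs/file-max') as f:
>>>                 print('system-wide file handles: ' + f.read().strip())
>>>             soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
>>>             print('per-process limit: soft=%d hard=%d' % (soft, hard))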
>>>
>>>             --
>>>             Nick
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>
>
>
>


-- 
Philip Wette, M.Sc.             E-Mail: wette at mail.upb.de
University of Paderborn         Tel.:   05251 / 60-1716
Department of Computer Science
Computer Networks Group         http://wwwcs.upb.de/cs/ag-karl
Warburger Straße 100            Room:   O3.152
33098 Paderborn


