[mininet-discuss] DOT: A Distributed OpenFlow Testbed

Phiho Hoang hohoangphi at gmail.com
Wed May 13 12:08:53 PDT 2015


Hi Nick,

Thank you for sharing your experience.

I am not as concerned with the throughput as with how to handle the
massive number of hosts, subnets, and datapaths in the emulation using
Mininet.
(I plan to use dozens of cheap CHIPs, the Raspberry Pi killer ;-)

Did you find any need for DHCP, DNS, routers, etc. when working with Mininet
at a massive scale across multiple servers, so that all Mininet hosts in the
emulation can communicate with one another conveniently?

Some sample code would be much appreciated.
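
For reference, Mininet 2.2's experimental cluster edition can already spread
an unmodified topology across several machines over SSH.  Below is a minimal
sketch along the lines of mininet/examples/clusterdemo.py - the server names
are placeholders, and passwordless ssh between the machines is assumed:

#!/usr/bin/env python
from mininet.examples.cluster import MininetCluster, SwitchBinPlacer
from mininet.topolib import TreeTopo
from mininet.log import setLogLevel
from mininet.cli import CLI

def demo():
    # Placeholder server names - replace with your own machines.
    servers = [ 'localhost', 'server2', 'server3' ]
    # depth=3, fanout=3: 27 hosts and 13 switches, bin-packed across servers.
    topo = TreeTopo( depth=3, fanout=3 )
    net = MininetCluster( topo=topo, servers=servers,
                          placement=SwitchBinPlacer )
    net.start()
    CLI( net )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    demo()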

Again, thank you for sharing.


Regards,

PhiHo


On Tue, May 12, 2015 at 7:37 PM, Nicholas Bastin <nick.bastin at gmail.com>
wrote:

> On Tue, May 12, 2015 at 6:55 PM, Phiho Hoang <hohoangphi at gmail.com> wrote:
>
>> Are they described in some research paper?
>>
>
> No, it's work, not research.. :-)
>
> Would you be able to share the custom topologies used in these mininet
>> environments?
>>
>
> Most of the test topologies are proprietary, as they mirror customer
> environments, but I can elaborate a bit on the structure of the
> testbed.  We have a small cluster of Dell C6100 servers (4-node 2U chassis
> - no longer sold, but you can get them on the surplus market, or a newer
> variant from Supermicro).  They're 12-core each with quad-channel memory
> (very important), and we put multiport 10G cards (2-4 ports) in each
> node for the distributed interconnection.  These function as the leaves of
> an infrastructure that also has custom-built 1U servers with 12x10G
> interfaces serving as the core interconnection fabric between the leaf
> nodes (you could of course also use hardware OpenFlow 10G switches for
> this purpose, but we provision datapaths and network functions on this
> hardware, do packet analysis, etc., so it's more than just moving packets
> around).
>
> This is more of a systems design problem than anything special about
> mininet - mininet itself is unmodified in this environment (although you
> obviously must tweak the OS for open file handles, etc.).  The real
> question is not so much the number of datapaths as the amount of traffic.
> The traffic the system can handle is bounded by the available memory
> bandwidth - each leaf node in our system has a little over 40 GBytes/sec
> of memory bandwidth, so if you're doing packet copies to move packets
> around, you have to multiply your data rate by the number of copies in a
> representative average path to estimate the rough maximum throughput per
> node.  For large-scale failover studies and the like, where you're not
> really running traffic but want to see controller or switch behaviour
> ripple through your topology, you can safely run high hundreds of
> datapaths per core (given sufficient memory, of course).
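
To make that back-of-the-envelope estimate concrete, here is a tiny sketch;
the numbers are illustrative assumptions, not measurements from the testbed
described above:

# Rough per-node throughput ceiling from memory bandwidth.
mem_bw = 40.0          # GBytes/sec of memory bandwidth per leaf node (assumed)
copies_per_path = 4    # assumed packet copies along an average path

max_tput = mem_bw / copies_per_path
print( 'rough ceiling: ~%.0f GBytes/s (~%.0f Gbit/s) of traffic per node'
       % ( max_tput, max_tput * 8 ) )
# -> rough ceiling: ~10 GBytes/s (~80 Gbit/s) of traffic per node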
>
> I would not necessarily try to bring up 50k datapaths on a single node
> (you would probably start to run into OS resource issues there, and might
> have to do it with multiple VMs on the same metal - checking
> /proc/sys/fs/file-max on a newer system shows over 2.5 million, so maybe
> it'd be possible, although there are also limits on the number of
> non-patch interfaces you can have), but I'd wager that on big modern
> hardware you could probably do it with 2 nodes (particularly considering
> the absolutely absurd number of cores you can get per node these days).
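
A quick way to check the limits mentioned above on a given box (the values
you will see are of course system-dependent):

import resource

# System-wide file handle ceiling (the /proc value referred to above).
with open( '/proc/sys/fs/file-max' ) as f:
    print( 'fs.file-max: %s' % f.read().strip() )

# Per-process open-file limit (the usual "ulimit -n" tweak).
soft, hard = resource.getrlimit( resource.RLIMIT_NOFILE )
print( 'RLIMIT_NOFILE: soft=%s hard=%s' % ( soft, hard ) )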
>
> --
> Nick
>