DragonFly users List (threaded) for 2008-02
DragonFly BSD

Re: Dragonfly Routers


From: Bill Hacker <wbh@xxxxxxxxxxxxx>
Date: Wed, 20 Feb 2008 00:44:39 +0000

Dave Hayes wrote:
Bill Hacker <wbh@conducive.org> writes:
Dave Hayes wrote:
Has anyone here tried to use DragonFly BSD as a router where the box had
more than 4 network interfaces? I'm wondering if too many network
interfaces on one machine would have performance issues?
What sort of hardware,

As yet unspecified hardware, which is why I am asking the list. :)


I seem to remember some very old idea that more than 4 network
interfaces on a PCI bus was a Bad Idea(tm).

The PCI bus itself needed a bridge chipset (originally from DEC) for each group of four slots.


Some 'commodity' MB have more than four slots, as much for positioning 'fat' cards (RAID or VGA cards with fans) as anything else, but they cannot assign full resources to all of them at once.

ISTR the IBM specs once listed an RS/6000 as being expandable to 53 PCI slots, but I doubt anyone has ever done it in anger.

Others have but one or two usable PCI slots (Asus was notorious for this), as they've used up the resources on onboard chipsets.

Many boards have the bridge chip (or the function of same) to support ever-growing arsenals of onboard stuff - but also use it up 'onboard'.

Ergo *very few* MB have user-accessible bridged PCI busses with more than 4 fully-usable slots.

PCI-X and PCI-e are a whole 'nuther - even more complex - story, but really fast interfaces (10 Gig-E) can stress the whole I/O infrastructure, if not the CPU and RAM as well.
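
If you want to see how a given board is actually wired, pciconf(8) will list every PCI device and the PCI-PCI bridges along with it - roughly like this (exact output varies by release):

    # list PCI devices verbosely, then pick out the PCI-PCI bridges
    pciconf -lv | grep -B 4 PCI-PCI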


I know conventional wisdom suggests specifying an application and expected load, but in this particular case I don't really know exact numbers in advance; I can only determine that the load is on the scale of ~100 machines and several gigE networks.


'..several GigE *networks*' ?


OS aside, even 'server-grade' or 'carrier-grade' MB are not well-suited to that. They place too many other demands on their I/O channels.

Go for bespoke hardware with a fast backplane fabric: 40 GB/s and up, dedicated to nothing but moving the data.

Buy medium to low-end; it is cheap enough to retire for better kit when need be, and it continues to drop in price as capability and functionality increase. There are rooms full of obsolete high-cost gear all over the place.

Look for those that do NOT run a *BSD or Linux OS. The closer it is to a bare-metal state machine, the faster it will run and the less admin work it will need.

I've run six PCI-bus 10/100 NICs as an ipfw(8) bridging router under FreeBSD 4.8 on a 1 GHz Celeron with 512 MB PC133 SDRAM, with acceptable performance.
Ergo I wouldn't expect DragonFly to take a back seat relative to any of the other *BSDs - or Linuces.
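
FWIW that box needed nothing exotic - from memory it was something along these lines, on a kernel built with 'options BRIDGE' and 'options IPFIREWALL', and with the fxp names only as stand-ins:

    # pass bridged frames through ipfw, tie two NICs into the bridge, turn it on
    sysctl net.link.ether.bridge_ipfw=1
    sysctl net.link.ether.bridge_cfg=fxp0,fxp1
    sysctl net.link.ether.bridge=1

    # a deliberately wide-open ruleset, just to exercise the forwarding path
    ipfw add 100 allow ip from any to any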

I'm not bridging, I'm actually routing...so that will take some of the
load off the idea. The downside is I'm routing gigE and I don't want too
much speed to be sacrificed.
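
Plain routing is even less work on the OS side - assuming a stock *BSD userland, the rc.conf knobs below are about all it takes to turn forwarding on; whether the NICs and the bus keep up at GigE rates is the real question:

    # /etc/rc.conf - turn on IP forwarding (net.inet.ip.forwarding=1) at boot
    gateway_enable="YES"

    # optional: start ipfw with a wide-open ruleset until real rules are written
    firewall_enable="YES"
    firewall_type="open"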


Serious router/firewall kit is on a different 'Planet' (or Cisco, or ..)
and better served with an RTOS.

Perhaps. I don't have any data to confirm or deny this, though it seems
reasonable.

Check for reviews and actual benchmarks, and decide whether you want/need multiple built-in VPNs, multiple segmenting, et al.


Routing and firewalling have become a high-volume hardware/ASIC/RTOS specialty. Any router a PC could at one time match on speed is now so cheap and flexible off the shelf that it is no longer worth the bother to roll yer own *and maintain it* for any serious throughput.

Bill


