DragonFly BSD
DragonFly kernel List (threaded) for 2004-08

Re: VFS ROADMAP (and vfs01.patch stage 1 available for testing)


From: David Cuthbert <dacut@xxxxxxxxx>
Date: Tue, 17 Aug 2004 22:29:57 -0400

References: <20040817054428.GN11861@xxxxxxxxxxxxxxxxxxxxxx>
In-Reply-To: <20040817054428.GN11861@xxxxxxxxxxxxxxxxxxxxxx>
Message-ID: <4122beb5$0$204$415eb37d@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Jeroen Ruigrok/asmodai wrote:
> Most high availability clusters have proprietary memory interlinks for this
> end.  Wasn't there some development/work on sharing memory state over
> (dedicated) Ethernet links?

Are you thinking of Mosix (and associated friends, such as OpenMosix)?

We tried running OpenMosix at work across a few machines; the results 
were underwhelming compared to LSF.  Our application, though, may be 
written in a way that defeats clustering: a control process runs on 
each node and forks off simulation processes, which run for a few 
seconds, return their results, and die.  Under OpenMosix we ran a 
single control process; it never migrated, and the simulation processes 
finished so quickly that OpenMosix was reluctant to migrate them.
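
For what it's worth, the shape of that workload is roughly the sketch 
below (Python; run_simulation, SIM_SECONDS, and the job count are all 
made up for illustration, not our actual code).  Each worker exists for 
only a few seconds, so a migration-based scheduler never gets a useful 
window in which moving it would pay off:

import multiprocessing
import time

SIM_SECONDS = 2   # each simulation lives only a few seconds

def run_simulation(job_id, results):
    # Stand-in for the real simulation work: burn a few seconds,
    # report a result, and exit.
    time.sleep(SIM_SECONDS)
    results.put((job_id, "done"))

def control_process(num_jobs=4):
    # The long-lived control process: forks short-lived workers and
    # collects their results.  By the time a scheduler has decided a
    # worker is worth migrating, the worker is already gone.
    results = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=run_simulation,
                                       args=(job_id, results))
               for job_id in range(num_jobs)]
    for w in workers:
        w.start()
    for _ in range(num_jobs):
        print(results.get())
    for w in workers:
        w.join()

if __name__ == "__main__":
    control_process()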


Personally, I'd be more interested in the reverse concept: a bunch of 
machines which act like a single machine with redundancy (for, say, a 
high availability storage/authentication/database server).  A read 
transaction could be satisfied by any single node; a write transaction 
would be broadcast to all nodes.  Individual nodes could be taken 
offline for maintenance; on returning to the cluster, a node would 
receive the missing transactions from the other nodes and resync itself.
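
Here is a minimal sketch of that read-any / write-all idea, assuming an 
in-memory key-value store and a monotonically numbered transaction log; 
every name here (Cluster, ClusterNode, resync, ...) is invented for 
illustration and doesn't correspond to any existing facility:

from __future__ import annotations

class ClusterNode:
    def __init__(self, name: str):
        self.name = name
        self.online = True
        self.data: dict[str, str] = {}
        self.log: list[tuple[int, str, str]] = []   # (seq, key, value)
        self.applied_seq = 0

    def apply(self, seq: int, key: str, value: str) -> None:
        self.data[key] = value
        self.log.append((seq, key, value))
        self.applied_seq = seq

class Cluster:
    def __init__(self, names: list[str]):
        self.nodes = [ClusterNode(n) for n in names]
        self.seq = 0

    def write(self, key: str, value: str) -> None:
        # A write is broadcast to every node that is currently online.
        self.seq += 1
        for node in self.nodes:
            if node.online:
                node.apply(self.seq, key, value)

    def read(self, key: str) -> str:
        # A read can be satisfied by any single online node.
        node = next(n for n in self.nodes if n.online)
        return node.data[key]

    def resync(self, node: ClusterNode) -> None:
        # On rejoining, a node pulls the transactions it missed from a
        # peer's log, then comes back online.
        peer = next(n for n in self.nodes if n.online and n is not node)
        for seq, key, value in peer.log:
            if seq > node.applied_seq:
                node.apply(seq, key, value)
        node.online = True

if __name__ == "__main__":
    cluster = Cluster(["a", "b", "c"])
    cluster.write("user:1", "alice")
    cluster.nodes[2].online = False      # take node "c" down for maintenance
    cluster.write("user:2", "bob")       # "c" misses this write
    cluster.resync(cluster.nodes[2])     # "c" replays the missed transaction
    print(cluster.read("user:2"))        # any node can answer: prints "bob"

The hard part, of course, is what happens to writes while a node is 
down or mid-resync; a real implementation would need something like a 
quorum rule or two-phase commit rather than the naive broadcast above.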


