DragonFly users List (threaded) for 2007-02
Re: Plans for 1.8+ (2.0?)
On 2/18/07, Michel Talon <email@example.com> wrote:
Rupert Pigott wrote:
> On Thu, 01 Feb 2007 09:39:30 -0500, Justin C. Sherrill wrote:
> True, but Matt has explained that ZFS doesn't provide the functionality
> that DragonFlyBSD needs for cluster computing.
> ZFS solves the problem of building a bigger fileserver, but it
> doesn't help you distribute that file system across hundreds or thousands
> of grid nodes. ZFS doesn't address the issue of high-latency
> comms links between nodes, and NFS just curls up and dies when you try to
> run it across the Atlantic with 100+ms of latency.
> I don't know if IBM's GridFS does any better with the latency, but it
> certainly scales a lot better; the barrier to adoption is $$$. It
> costs $$$, and a lot more $$$, to train up and hire the SAs to run
> it. There are other options like AFS too, but people tend to be put off by
> the learning curve and the fact it's an extra rather than something that
> is packaged with the OS.
Of course it is none of my business, but I have always wondered about the
real usefulness of a clustering OS in the context of free systems, and your
post allows me to explain why. People who have the money to buy machines by
the thousand, run them, pay the electricity bill, etc. should also have
the money to pay $$$ to IBM, rather than count on the generosity of unpaid
developers. Small installations are the natural target of free systems, and
in this context I remain convinced that the clustering ideas have a
utility next to null. Frankly, I doubt they have any utility even for big
systems unless you use high-speed, low-latency interconnects, which are far
more expensive than the machines themselves, and unless, even with that
expensive hardware, you have very capable programmers able to really
exploit the concurrency.
By contrast, Joe User's disks are getting bigger and bigger and his
processor is getting more and more cores, so there is a clear need for
file systems suited to big disks and sufficiently reliable (ZFS
being an example), and for operating systems able to use multiple cores.
Sorry for the dup post if the other came through - sent it from the
wrong addy ... yada yada (won't happen again, I hope ;)...
I am not sure I understand the aim of the new file system: is it to
allow each node in the SSI (I purposely avoid terms like "grid") to
hold all of its "local" data on its own disk, or is it more that each
node is aware of all data in the SSI, while the data itself may be
scattered across all of the nodes?
So, in effect, is it similar in concept to the notion of storing pieces
of files across many places, using some unified knowledge of where the
pieces are? This of course implies redundancy and creates synchronization
problems to handle (assuming no global clock), but I certainly think
it is a good goal. In practice, how redundant will the data be? I think
the principle of locality applies here: the pieces that make up large
files will all be located very close to one another (i.e., clustered
around some single location).
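To make the idea concrete, here is a purely hypothetical toy sketch (not anything DragonFly actually implements) of that "unified knowledge of where the pieces are": a chunk directory that maps each (file, chunk) pair to the set of nodes holding a replica. The chunk size, replication factor, and the first-replica-stays-local placement rule are all illustrative assumptions.

```python
# Hypothetical sketch of a cluster chunk directory. Files are split into
# fixed-size chunks; the directory records which nodes hold each chunk,
# so any node can locate data it does not store itself.

CHUNK_SIZE = 64 * 1024   # bytes per chunk (illustrative)
REPLICATION = 2          # replicas per chunk (illustrative)

def place_chunks(filename, filesize, nodes, origin):
    """Assign each chunk of `filename` to REPLICATION nodes.

    To model locality, the first replica always lands on `origin`
    (the node that wrote the file); additional replicas are spread
    over the remaining nodes round-robin.
    """
    num_chunks = (filesize + CHUNK_SIZE - 1) // CHUNK_SIZE
    others = [n for n in nodes if n != origin]
    directory = {}
    for i in range(num_chunks):
        replicas = [origin]
        for r in range(REPLICATION - 1):
            replicas.append(others[(i + r) % len(others)])
        directory[(filename, i)] = replicas
    return directory

def locate(directory, filename, offset):
    """Return the nodes holding the chunk that covers byte `offset`."""
    return directory[(filename, offset // CHUNK_SIZE)]
```

In a real SSI file system the hard part is not this lookup table but keeping it consistent when nodes fail or chunks move, with no global clock; the sketch only shows the shape of the data structure.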