DragonFly BSD
DragonFly users List (threaded) for 2006-09

Re: "The future of NetBSD" by Charles M. Hannum


From: "Steve O'Hara-Smith" <steve@xxxxxxxxxx>
Date: Sat, 2 Sep 2006 02:18:14 +0100

On Fri, 1 Sep 2006 09:45:32 -0700 (PDT)
Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx> wrote:

> :On Thu, Aug 31, 2006 at 09:58:59AM -0700, Matthew Dillon wrote:
> ::     that 75% of the interest in our project has nothing to do with my
> ::     project goals but instead are directly associated with work being done
> ::     by our relatively small community.  I truly appreciate that effort
> ::     because it allows me to focus on the part that is most near and dear
> ::     to my own heart.
> :
> :Big question: after all the work that will go into the clustering, other than
> :scientific research, what will the average user be able to use such advanced
> :capability for?
> :
> :Jonathon McKitrick
> 
>     I held off answering because I became quite interested in what others
>     thought the clustering would be used for.
> 
>     Let's take a big, big step back and look at what the clustering means
>     from a practical standpoint.
> 
>     There are really two situations involved here.  First, we certainly
>     can allow you to say 'hey, I am going to take down machine A for
>     maintenance', giving the kernel the time to migrate all
>     resources off of machine A.
> 
>     But being able to flip the power switch on machine A without warning,
>     or otherwise have a machine fail unexpectedly, is another ball of wax
>     entirely.  There are only a few ways to cope with such an event:
> 
>     (1) Processes with inaccessible data are killed.  High level programs
> 	such as 'make' would have to be made aware of this possibility,
> 	process the correct error code, and restart the killed children
> 	(e.g. compiles and such).
> 
> 	In this scenario, only a few programs would have to be made aware
> 	of this type of failure in order to reap large benefits from a
> 	big cluster, such as the ability to do massively parallel 
> 	compiles or graphics or other restartable things.


	This is also quite good enough from my point of view. I think my
post may have given the impression that I was expecting #3 to appear - I
certainly was not; I know how hard that is. In fact #1 is more than I was
hoping for: having the make fail and a few windows close, but being able
to reopen them and restart the make by hand, would be orders of magnitude
better than what I can achieve now with periodic rsync and a fair amount
of fiddling around to get environments running on a backup machine when I
have a hardware failure.
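
	To make the restart idea in Matt's option (1) concrete, here is a
minimal sketch of the loop a 'make'-like parent could run; it is not any
existing DragonFly API. NODE_LOST_STATUS and the compile command are
hypothetical placeholders for however a killed child would eventually
report that its data became inaccessible.

/*
 * Minimal sketch, not DragonFly API: a parent (think 'make') restarts a
 * child whose exit status says its cluster node went away.  The exit
 * status NODE_LOST_STATUS and the job command are assumptions for
 * illustration only.
 */
#include <err.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define NODE_LOST_STATUS 75	/* hypothetical "my node disappeared" code */

static pid_t
spawn(const char *cmd)
{
	pid_t pid = fork();

	if (pid < 0)
		err(1, "fork");
	if (pid == 0) {
		execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
		_exit(127);		/* exec failed */
	}
	return (pid);
}

int
main(void)
{
	const char *job = "cc -c part1.c";	/* placeholder compile job */
	int status;

	for (;;) {
		pid_t pid = spawn(job);

		if (waitpid(pid, &status, 0) < 0)
			err(1, "waitpid");
		if (WIFEXITED(status) &&
		    WEXITSTATUS(status) == NODE_LOST_STATUS) {
			/* Data went away with the node; just rerun the job. */
			fprintf(stderr, "node lost, restarting job\n");
			continue;
		}
		break;		/* normal completion or unrelated failure */
	}
	return (WIFEXITED(status) ? WEXITSTATUS(status) : 1);
}

	Whether the real indication ends up being an exit status, a signal,
or a new wait(2) status bit is whatever the clustering work defines; the
point is only that the retry logic in the parent stays small.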

-- 
C:>WIN                                      |   Directable Mirror Arrays
The computer obeys and wins.                | A better way to focus the sun
You lose and Bill collects.                 |    licences available see
                                            |    http://www.sohara.org/

