Re: Portable vkernel (emulator)


From: "Dmitri Nikulin" <dnikulin@xxxxxxxxx>
Date: Fri, 11 Jul 2008 11:57:26 +1000

On Fri, Jul 11, 2008 at 3:25 AM, Matthew Dillon
<dillon@apollo.backplane.com> wrote:
> :It's not even finished yet, I don't think it's fair to hype it as a
> :killer feature. There's a lot of proving and testing left before it is
> :competitive with other modern filesystems. Right now it's still
> :competing with UFSv1, which is how many decades old? Let's not get
> :ahead of ourselves.
>
>    Well, it isn't that bad :-).  All of HAMMER's major features are in
>    the tree now and neither UFS1 nor UFS2 can hold a candle to any of them.

I don't doubt the features, but if it has to compete with modern Linux
filesystems for single-node file server roles, it'll need a lot more
optimization. I'm not trying to troll, but it's fair to say that there
are still plenty of use cases that HAMMER won't suit without a lot
more work, even if most of that is because DragonFly itself still has
a fair way to go in some areas.

I don't want to trivialise your work at all though. I've been
following it through the mailing lists and it's very impressive. I
respect that you've been uncompromising in getting the best possible
on-disk format, while a lot of filesystems have obviously stopped
short and left themselves with unfixable problems.

>    But from a stability and use point of view I would agree that HAMMER
>    needs to age a bit to achieve the reputation that UFS has acquired
>    over the last 20 years.  This means people have to start using it for
>    more than testing (once we release).

It's a bit of a difficult thing to do in the traditionally
conservative BSD community. You want to test it as if it were in a
real-world production environment, without actually trusting data to
it. So you could drop files onto it and replicate them to another
non-HAMMER FS, but that's not exactly real-world usage. Or you could
dump data onto it directly, as on a file server, and trust it
completely immediately after its official release.

Although to be honest, at this point I'd *rather* use HAMMER than UFS
for a file server, because as a developer I intuitively trust new code
by experienced developers more than old code that hasn't been
maintained for years.

>    This is why I'm making a big push to get it running as smoothly as
>    possible for this release.  A lot of people outside the DragonFly
>    project are going to be judging HAMMER based on its early use cases,
>    and the clock starts ticking with the first official release.

I agree. And you've obviously done very well already. I'm just saying
that some people are getting a little too excited.

Even if HAMMER were the best file system in the world, that wouldn't do
much for DragonFly's adoption overall. In that case people would
rather port it from DragonFly than run DragonFly itself, given that
DragonFly still has severe limitations on the hardware it can run on,
and some performance problems once running.

But in the real world there are plenty of alternative file systems.
It's a commodity. Look at Linux - it has half a dozen file systems,
most of which perform at least as well as HAMMER with similar (or
better) reliability guarantees. Sure, they're messy and stale. But
they work and they work well. Most people don't care about clustering,
and those that do have found other ways to do it.

>    I think I stopped using the mailbox signals (they got replaced by the
>    co-thread I/O model), but the vkernel still needs the syscall support
>    for managing VM spaces and virtualized page tables.  I'm not sure
>    what kind of APIs Linux has to support their UVM stuff.

Ah, thanks for clearing that up. I had a feeling I'd missed a few email
threads while my mailing list activity dropped off for a while.

>    The KVM stuff is pretty cool but the performance claims, particularly
>    by companies such as VMWare, are all hype.  The bare fact of the matter
>    is that no matter what you do you still have to cross a protection
>    boundary to make a system call or do I/O or take a page fault.

That's true, but those constant costs matter less and less on modern
hardware. Even production filesystems can now run in userland through
frameworks like FUSE; that's a trade-off we can afford to make.
Virtualisation is the same kind of thing: heavyweight and hacky, but we
can afford it, and the benefits are well worth it. So hype or not, the
performance is good enough, and since that translates into costs and
feasibility, people are investing in it and using it in production.
"Hype" would imply it's not meeting expectations, and I don't think
that's fair to say.
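As a rough illustration of that boundary cost, here's a minimal sketch
of my own (not anything from the vkernel or KVM code) that times raw
syscall round trips; the same binary can be run natively and inside a
guest to compare. I go through syscall(SYS_getpid) because libc may
answer getpid() from a userland cache:

    #define _GNU_SOURCE            /* for syscall() on glibc */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/syscall.h>

    int main(void)
    {
        struct timeval t0, t1;
        long i, n = 1000000;
        double total_us;

        gettimeofday(&t0, NULL);
        for (i = 0; i < n; i++)
            syscall(SYS_getpid);   /* forced user->kernel round trip */
        gettimeofday(&t1, NULL);

        total_us = (t1.tv_sec - t0.tv_sec) * 1e6
                 + (t1.tv_usec - t0.tv_usec);
        printf("%.0f ns per syscall\n", total_us * 1000.0 / n);
        return 0;
    }

On bare metal the figure should land somewhere near the 150ns you
quote; whatever extra shows up under a hypervisor is exactly the
constant I'm arguing we can increasingly afford.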

>    Hardware-supported hypervisors don't help much in that regard, they
>    just aren't advanced enough yet.  They will get there, for sure, but
>    it will be another couple of years at least.  In many respects as cpu's
>    continue to evolve into more and more cores (or threads if you take
>    Sun's viewpoint), the overheads imposed by hypervisors and KVMs will
>    continue to drop.  Strangely enough, hyper-threading might be the best
>    solution for reducing KVM support overheads.
>
>    The issue, for a KVM, is that crossing this boundary takes about 2µs
>    versus the 150ns it takes on real hardware.  Everything else will run
>    at full speed.  Also, to run reasonably well on a KVM, kernels often have
>    to be compiled to use far lower clock interrupt rates (an example of
>    this would be the custom Linux kernels IBM runs on their mainframes).
>    For example, the scheduler interrupt might have to run at 50Hz in a KVM
>    environment instead of 1000Hz.  This is because idle overheads are
>    magnified considerably.  Multi-threading and interactive performance
>    of a kernel running under a KVM tends to suffer a lot due to the necessity
>    of reducing idle and clock interrupt overheads.

Linux solved this problem almost completely with dynamic ticks in its
kernel. That means it doesn't have a fixed scheduler interrupt per se;
or at least, the tick doesn't have anywhere near the impact it does on
other kernels.

I see what you mean, by the way: FreeBSD 7, while idle in KVM, takes up
a few percent of CPU just because of all that overhead. Modern Linux
takes up virtually nothing; they've solved this problem pretty well.
And from what I hear, pure Xen is still even better than KVM.
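Putting your numbers together (a back-of-envelope estimate only,
counting each tick as a single 2µs boundary crossing, which understates
the real handler cost):

    1000 Hz:  1000 ticks/s x 2µs = 2ms/s,   ~0.2% of one CPU, even when idle
      50 Hz:    50 ticks/s x 2µs = 0.1ms/s, ~0.01%

The real tick also does scheduler bookkeeping and disturbs the cache,
so the few percent an idle FreeBSD 7 guest burns looks plausible. A
tickless kernel (CONFIG_NO_HZ on Linux) drops the fixed term entirely
while idle, which is why the Linux guest shows virtually nothing.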

>    So the performance for something running under a KVM depends a lot on
>    what that something is doing.  Cpu-bound programs which don't make many
>    system calls (such as gcc), or I/O-bound programs which would be
>    blocked on I/O much of the time anyway (such as sendmail), will
>    perform fairly well.   System-call intensive programs, such as a web
>    server, will lose a lot in the translation.

Modern web servers don't have this problem as much as you'd think. The
fashion these days is to serve static files off a very simple, highly
optimized server or cluster, and serve dynamic (CPU-bound) content
from application servers. The application servers are the ones that
would be virtualised, and since they're mostly Python or PHP or J2EE,
virtualisation is the least of their performance problems. It's just
that the performance problem is at most 10% of their overall cost, so
they don't care if that tiny figure even doubles. These days it's much
less than double, and shrinking every few months. But the one constant
is that Linux is always at the top in terms of performance and
efficiency.
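To put illustrative numbers on that (my own assumptions, not
measurements): say a dynamic page costs 50ms of interpreter time and
makes 500 system calls. Even at an extra 2µs per boundary crossing,
that's only 1ms of virtualisation overhead, about 2% on top of a cost
the interpreter dominates anyway.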

>    A very large problem set is solved through the use of KVMs, even web
>    serving, as long as the load does not exceed available cpu power.  As
>    machines have gotten faster the cpu bar has continued to move up and
>    more and more programs can be run in such an environment.
>
>    If you are pegging your cpu to the wall and need performance, though,
>    you don't want to use a KVM.  If you need more hardware connectivity
>    than a simple network and disk interface, KVMs tend to be more trouble
>    than they are worth.  A KVM emulating more complex and varied hardware,
>    such as audio and video hardware, very quickly runs into trouble from
>    a performance standpoint.

Of course. Hardware emulation has been the domain of VMWare and
VirtualBox, which provide highly optimized drivers to the guest
operating systems, and use KVM-like features to optimize out the
inevitable overheads. VMWare even has experimental DirectX emulation.
We'll see what happens with that, but it shows they've solved enough
bottlenecks to finally step up to the task of virtualising modern
games, the biggest virtualisation holdout to date.

-- 
Dmitri Nikulin

Centre for Synchrotron Science
Monash University
Victoria 3800, Australia


