DragonFly kernel List (threaded) for 2003-07
Re: just curious
(people can tell I'm skipping back and forth in my inbox, I still
have about 80 messages from Thursday to go through!)
:> For example, DragonFly will use messaging heavily but the messaging will
:> be a light-weight design that is, by itself, incapable of transiting a
:> protection boundary. The core messaging structures will not track
:> pointers or message sizes, for example. Instead what we will do is
:> support the transiting of protection boundaries by creating port
:> abstractions which do the appropriate translation into and out of forms
:> that *can* cross a protection boundary.
:So ports are opaque handles representing permission granted to write into another
:process's address space? Does it require a matching receive on the receive end?
:If so... is the data buffered in the kernel or is it merely kept on the sender side to be
:copied into the receiver's address space later.
I think what you are implying here is a mach-like data mapping going from
one user process to another. The way this would work is that the kernel
will convert the supplied buffer range in process #1's address space into
an iovec of VM objects, offsets, and ranges. If the kernel is handling
all the work for the message (e.g. for a system call), that is all it
needs to shove the data references around between kernel threads.
If the kernel is going to hand the data references to another process,
say a user process running a VFS layer that the original process is
trying to read() from or write() to, then the kernel will take that iovec
and encapsulate it in a descriptor which will be given to the target
process.
None of this is related to the messaging system per se, but is instead
related to the particular system call that a large data reference
is being transferred through, like read() or write() or something similar.
:I suppose you could, of course, play some games with shadowing pages as copy-on-write
:in the VM across process boundaries?
:I am mostly thinking out loud but this idea has me quite intrigued.
Since the in-kernel storage format is VM objects, offsets, and ranges,
no actual shadowing or copy-on-write is really needed. The target
process would have the option of using lseek/read/write on the descriptor,
or mmap()ing it. mmap()ing it would result in the data being shared,
but with mmap() one can also make things copy-on-write (MAP_PRIVATE), and
so forth. We already have the VM object layering model in place in the
kernel to make it all possible so it would not require any special effort.