DragonFly users List (threaded) for 2009-02
Re: the 'why' of pseudofs
Steve O'Hara-Smith wrote:
On Wed, 18 Feb 2009 11:10:01 +0100
Jost Tobias Springenberg <email@example.com> wrote:
If you want to have separate partitions instead of PFS, that's perfectly
fine; nobody forces you to use PFS everywhere. In fact, it might be very
reasonable to keep data from home directories separated from other data,
or the like.
It's not made immediately clear in the documentation, but you can
use a hammer filesystem as a replication master instead of a PFS. My /home
is a hammer filesystem on its own partition, replicated to a PFS in another
hammer filesystem on a different disc.
ACK - doing much the same (my '/home' is gone, and I now live in '/wheel').
Here '/master' is an 'ordinary' subdir of '/', while '~/slave' is the
'usual' PFS, auto-created per man 5 hammer and a hacked cmd_mirror.c
compiled into a hammer binary. This one just does it (JFDI) when a new
slave must be created on the target.
I expect that Michael will have a more elegant solution coming along
that uses a command-tail switch for override.
One thing that does trouble me a little is that the nullfs mount of
the slave PFS is frozen while the nicely named symlink is not. The nullfs
mount to the slave would be much more useful if it were not frozen. As it
is, I reverted the mirror-stream operation to refer to the PFS symlink, and
I don't use the nullfs mount at all for the slave PFS.
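To illustrate the distinction being drawn: the softlink that hammer pfs-slave creates tracks the slave as it is updated, whereas a null mount presents a fixed view. The paths below are illustrative, not taken from the poster's configuration:

```shell
# Point mirror-stream at the PFS softlink created by 'hammer pfs-slave';
# the link tracks the slave as new transactions are mirrored in:
hammer mirror-stream /home /backup/pfs/home-slave

# A null mount of the slave, by contrast, stays frozen at the state it
# had when mounted, so it is only useful as a static read-only view:
mount_null /backup/pfs/home-slave /mnt/home-ro
```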
I'm still trying to work out the logistics of doing a remote
mirror; it seems to me that it should require root at both ends, and I'm
not entirely happy about opening up root ssh.
At the moment, I've altered the sshd configuration to permit it, and
done exactly that. Works like a champ.
OTOH - the test boxen are isolated from the outside world.
Depending on how badly this old bod craves sleep, I should 'soon' be
testing again with bespoke mounts on separate slices, ones that are not
(solely) owned by root.
*If needed* I'll create a special user with 'root-like' privs to do the
do, but hope not to have to.
Backing up as I type, in prep for a re-install, but with scp -r, not
mirror-copy, as the restore will be to UFS, AND part of the test is how
well *that* works 'both ways'...