DragonFly BSD Quick Start

This QuickStart is part of the NewHandbook.

This document describes the DragonFly environment one will find on a newly installed system. While you are getting started, please pay careful attention to the version or level of DragonFly that the documentation was written for. Some documentation on this site may be out of date. Watch for the marker (obsolete) on items that are out of date or need updating.

Some Unix and BSD Fundamentals

If you have used another Unix flavor, another BSD, or Linux before, you may need to spend some time learning the differences between DragonFly and the system you are experienced with. If you have never used any flavor of Unix, BSD or otherwise, and have only used Windows before, please be prepared for a lengthy period of learning.

If you already know your way around a Unix filesystem, know what the /etc directory is, how to edit a file with vi or vim, how to use and configure a shell such as tcsh or bash, how to change which shell you are using, how su and sudo work, and what the root account is, then the rest of this page may be enough to orient you to your surroundings.

You should understand everything in the Unix Basics section before you proceed with trying to use your new system.

Disk Layout of a New DragonFly BSD System Using the HAMMER Filesystem

If you chose the HAMMER file system during installation, you will end up with a system with the following disk configuration:

# df -h
Filesystem                Size   Used  Avail Capacity  Mounted on
ROOT                      288G    12G   276G     4%    /
devfs                     1.0K   1.0K     0B   100%    /dev
/dev/serno/9VMBWDM1.s1a   756M   138M   558M    20%    /boot
/pfs/@@-1:00001           288G    12G   276G     4%    /var
/pfs/@@-1:00002           288G    12G   276G     4%    /tmp
/pfs/@@-1:00003           288G    12G   276G     4%    /usr
/pfs/@@-1:00004           288G    12G   276G     4%    /home
/pfs/@@-1:00005           288G    12G   276G     4%    /usr/obj
/pfs/@@-1:00006           288G    12G   276G     4%    /var/crash
/pfs/@@-1:00007           288G    12G   276G     4%    /var/tmp
procfs                    4.0K   4.0K     0B   100%    /proc

In this example the disk label looks as follows:

# disklabel /dev/serno/9VMBWDM1.s1

# /dev/serno/9VMBWDM1.s1:
#
# Informational fields calculated from the above
# All byte equivalent offsets must be aligned
#
# boot space:    1044992 bytes
# data space:  312567643 blocks # 305241.84 MB (320069266944 bytes)
#
# NOTE: If the partition data base looks odd it may be
#       physically aligned instead of slice-aligned
#
diskid: e67030af-d2af-11df-b588-01138fad54f5
label:
boot2 data base:      0x000000001000
partitions data base: 0x000000100200
partitions data stop: 0x004a85ad7000
backup label:         0x004a85ad7000
total size:           0x004a85ad8200    # 305242.84 MB
alignment: 4096
display block size: 1024        # for partition display only

16 partitions:
#          size     offset    fstype   fsuuid
  a:     786432          0    4.2BSD    #     768.000MB
  b:    8388608     786432      swap    #    8192.000MB
  d:  303392600    9175040    HAMMER    #  296281.836MB
  a-stor_uuid: eb1c8aac-d2af-11df-b588-01138fad54f5
  b-stor_uuid: eb1c8aec-d2af-11df-b588-01138fad54f5
  d-stor_uuid: eb1c8b21-d2af-11df-b588-01138fad54f5

The slice has 3 partitions: a (768 MB, UFS, mounted on /boot), b (8 GB, swap), and d (the rest of the disk, HAMMER, mounted on /).

When you create a HAMMER file system, you must give it a label. Here, the installer labeled it "ROOT" and mounted it as

ROOT                      288G    12G   276G     4%    /

A PFS is a Pseudo File System inside a HAMMER file system. The HAMMER file system in which the PFSes are created is referred to as the root file system. You should not confuse the "root" file system with the label "ROOT": the label can be anything. The installer labeled it as ROOT because it is mounted at /.
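
When a HAMMER file system is created by hand, the label is passed to newfs_hammer(8) with the -L option. A minimal sketch, using the device name from the disklabel above (the installer has already done this for you, so there is no need to run it on an installed system):

# newfs_hammer -L ROOT /dev/serno/9VMBWDM1.s1d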

Inside the root HAMMER file system the installer created the 7 PFSes seen in the df -h output above. Let us see how they are mounted in /etc/fstab:

# cat /etc/fstab

# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/serno/9VMBWDM1.s1a         /boot           ufs     rw      1       1
/dev/serno/9VMBWDM1.s1b         none            swap    sw      0       0
/dev/serno/9VMBWDM1.s1d         /               hammer  rw      1       1
/pfs/var                /var            null    rw              0       0
/pfs/tmp                /tmp            null    rw              0       0
/pfs/usr                /usr            null    rw              0       0
/pfs/home               /home           null    rw              0       0
/pfs/usr.obj    /usr/obj                null    rw              0       0
/pfs/var.crash  /var/crash              null    rw              0       0
/pfs/var.tmp    /var/tmp                null    rw              0       0
proc                    /proc           procfs  rw              0       0

The PFSes are mounted using NULL mounts because they live inside the root HAMMER file system rather than on separate devices. You can read more about NULL mounts in the mount_null(8) manpage.
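
For example, the /usr entry in the fstab above is roughly equivalent to running the following by hand (mount_null takes the directory to be mounted first, then the mount point):

# mount_null /pfs/usr /usr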

You don't need to specify a size for the PFSes like you do for logical volumes inside a volume group for LVM. All the free space in the root HAMMER file system is available to all the PFSes; it can be seen in the df -h output above that the free space is the same for all PFSes and the root HAMMER file system.
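
Should you ever want an additional PFS, you can create one yourself and NULL mount it in the same way. A rough sketch, where /pfs/data and /data are only example names; add a matching line to /etc/fstab to make the mount permanent:

# hammer pfs-master /pfs/data
# mkdir /data
# mount_null /pfs/data /data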

If you look in /var

# cd /var/
# ls
account  backups  caps   cron  empty  log  msgs  run       spool  yp
at       cache    crash  db    games  lib  mail  preserve  rwho   tmp

you will find the directories shown above.

If you look at the status of one of the PFSes, e.g. /usr, you will see that /var/hammer is the default snapshot directory.

# hammer pfs-status /usr/
/usr/   PFS #3 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000117ac6270
    shared-uuid=f33e318e-d2af-11df-b588-01138fad54f5
    unique-uuid=f33e31cb-d2af-11df-b588-01138fad54f5
    label=""
    prune-min=00:00:00
    operating as a MASTER
    snapshots directory defaults to /var/hammer/<pfs>
}

Right after installation there is no hammer directory in /var, because no snapshots have been taken yet. You can verify this by listing the snapshots available for /usr:

# hammer snapls /usr
Snapshots on /usr       PFS #3
Transaction ID          Timestamp               Note

Snapshots will appear automatically each night as the system performs housekeeping on the HAMMER file system. For a new installation you can take an immediate snapshot by running 'hammer cleanup'; among other activities, it takes a snapshot of the file system.

# sudo hammer cleanup
cleanup /                    - HAMMER UPGRADE: Creating snapshots
        Creating snapshots in /var/hammer/root
 handle PFS #0 using /var/hammer/root
           snapshots - run
               prune - run
           rebalance - run..
             reblock - run....
              recopy - run....
cleanup /var                 - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /tmp                 - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /usr                 - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /home                - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /usr/obj             - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /var/crash           - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /var/tmp             - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /var/isos            - HAMMER UPGRADE: Creating snapshots
[...]

No snapshots were taken for /tmp, /usr/obj and /var/tmp because those PFSes are flagged as nohistory. HAMMER normally tracks history for all files in a PFS, which consumes disk space until the history is pruned, at which point the available space stabilises. To keep short-lived files on those PFSes (object files, temporary files and the like) from consuming disk space in this way, they are marked as nohistory.
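
If the flag is set as a file flag on the mount points (as the installer typically does), you can see it yourself, since ls shows file flags when given the -o option. A small sketch; the exact output will vary:

# ls -lod /tmp /usr/obj /var/tmp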

After the nightly housekeeping has run, you will find a new directory called hammer in /var with the following subdirectories:

# cd hammer/
# ls -l
total 0
drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 home
drwxr-xr-x  1 root  wheel  0 Oct 13 11:42 root
drwxr-xr-x  1 root  wheel  0 Oct 13 11:43 tmp
drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 usr
drwxr-xr-x  1 root  wheel  0 Oct 13 11:54 var

Looking inside /var/hammer/usr, one finds:

# cd usr/
# ls -l
total 0
drwxr-xr-x  1 root  wheel   0 Oct 13 11:54 obj
lrwxr-xr-x  1 root  wheel  25 Oct 13 11:43 snap-20101013-1143 -> /usr/@@0x0000000117ac6cb0

We have a symlink pointing to the snapshot transaction ID shown below.

# hammer snapls /usr
Snapshots on /usr       PFS #3
Transaction ID          Timestamp               Note
0x0000000117ac6cb0      2010-10-13 11:43:04 IST -
#
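
The snapshot can be browsed like any other directory tree, either through the symlink or directly via the transaction ID, using the IDs from this example:

# ls /var/hammer/usr/snap-20101013-1143
# ls /usr/@@0x0000000117ac6cb0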

You can read more about snapshots, prune, rebalance, reblock, recopy and so on in hammer(8). Make especially sure to look under the heading "cleanup [filesystem ...]".

You can learn more about PFS mirroring here.
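
To give a taste of what mirroring involves: a slave PFS is created with the shared-uuid of its master and then kept up to date with hammer mirror-copy. A rough sketch, assuming a second HAMMER file system is mounted at /backup (the paths are only examples; the shared-uuid is the one shown for /usr above):

# hammer pfs-slave /backup/pfs/usr shared-uuid=f33e318e-d2af-11df-b588-01138fad54f5
# hammer mirror-copy /usr /backup/pfs/usr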

To correctly map hard disk serial numbers (sernos) to device names you can use the devattr(8) command. The udevd(8) daemon must be running for devattr to work:

# udevd
# devattr -d "ad*" -p serno
Device ad4:
        serno = Z2AD9WN4
Device ad4s1:
Device ad4s1d:

Device ad5:
        serno = 9VMRFDSY
Device ad5s1:
Device ad5s1d:

Device ad3:
        serno = Z2AD9WLW
Device ad3s1:
Device ad3s1a:
Device ad3s1b:
Device ad3s1d:

If your disks are 'da' devices, change the pattern accordingly.

Configuring and Starting the SSH Server

Described in detail here.
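
In short, the SSH server in the base system is enabled in /etc/rc.conf and started with its rc script. A minimal sketch; see the page linked above for key and configuration details:

# echo 'sshd_enable="YES"' >> /etc/rc.conf
# /etc/rc.d/sshd start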

Software/Programs and Configuration Files Location

The DragonFly default installation contains the base software/programs from the DragonFly project itself and additional software from other sources.

The binaries of the base system are located in the following directories:

/bin    /sbin
/usr/bin   /usr/sbin

The configuration files for the base system can be found in /etc. Third-party programs use /usr/local/etc.

There are several different ways to install software, and which method you use depends on which DragonFly BSD version you have. You can compile things from source code, or you can use binary packages.
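
On recent DragonFly releases binary packages are handled by pkg(8). A short sketch of typical usage, where vim is only an example package name:

# pkg update
# pkg search vim
# pkg install vim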