I previously had DragonFly BSD installed in VMware Workstation 6.5 on a Windows XP host. I upgraded to Windows 7 and decided to use VirtualBox 4.0.4. I installed VirtualBox, but I needed a way of moving my DragonFly BSD guest over. Fortunately it is fairly simple to move VMware vmdk images to VirtualBox: open VirtualBox and, in the Storage section of the virtual machine's settings, browse for disk images and select the main vmdk image you want to import. But here is where my problem began, and with it the solution to moving VMware disks to VirtualBox, or for that matter moving disks between servers. I want to thank all those on the mailing lists who helped me solve this problem. You can find the emails by searching the mail archive from 20 to 24 Feb 2011.
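If you prefer the command line, the same import can be sketched with VBoxManage. This is only a sketch: the VM name "dragonfly", the OS type, the IDE controller choice, and the vmdk path are my assumptions, not details from the original setup.

```shell
# Create and register a new VM (name and OS type are assumptions)
VBoxManage createvm --name "dragonfly" --ostype FreeBSD --register

# Add an IDE controller and attach the existing VMware disk image to it
VBoxManage storagectl "dragonfly" --name "IDE" --add ide
VBoxManage storageattach "dragonfly" --storagectl "IDE" \
    --port 0 --device 0 --type hdd \
    --medium /path/to/dragonfly.vmdk
```

Attaching the vmdk directly like this is what triggers the serial-number problem described below, since the disk's serial changes between hypervisors.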
Start VirtualBox with your newly imported disk image
After making some small configuration changes I started the DragonFly BSD guest in VirtualBox. I got to the regular F1 boot menu and chose the default startup. Here is where my problems began. In a few short moments I was confronted by this error message:
Mounting root from hammer:serno/00000000000000000001.s1d
tryroot serno/00000000000000000001.s1d
no disk named 'serno/00000000000000000001.s1d'
hammer_mountroot: can't find devvp
boot mount failed: 6
and then just the mountroot prompt. At the prompt I tried hammer:ad0s1, hammer:ad0s1a, hammer:ad0s1d and plain ad0s1, but to no avail. I also tried ufs:ad0s1a, but never got further than another error. So I posted my problem to email@example.com. I received help very quickly, and it was suggested that I boot the live DragonFly BSD ISO and mount the disk from there.
Finding the problem and getting your system to boot again
After booting up with the DragonFly BSD ISO I could log in as root. Here is what I did: I ran ls -l /dev/serno/* and ls -l /dev/ad* and noted the devices that were actually available under /dev/serno/ and /dev/ad*. I then mounted the root filesystem with mount_hammer /dev/ad0s1d /mnt, mounted the boot slice with mount -t ufs /dev/ad0s1a /mnt/boot, and used cat to inspect what /etc/fstab and /boot/loader.conf on the mounted system looked like. My old /etc/fstab looked like this:
# Device Mountpoint FStype Options Dump Pass#
/dev/serno/00000000000000000001.s1a /boot ufs rw 1 1
/dev/serno/00000000000000000001.s1b none swap sw 0 0
/dev/serno/00000000000000000001.s1d / hammer rw 1 1
/pfs/var /var null rw 0 0
/pfs/tmp /tmp null rw 0 0
/pfs/usr /usr null rw 0 0
/pfs/home /home null rw 0 0
/pfs/usr.obj /usr/obj null rw 0 0
/pfs/var.crash /var/crash null rw 0 0
/pfs/var.tmp /var/tmp null rw 0 0
proc /proc procfs rw 0 0
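The inspection and mount steps above can be sketched as follows. The device names ad0s1a and ad0s1d are from my setup; check the ls output first, as yours may differ.

```shell
# From the live CD, logged in as root.

# List the serial-numbered device nodes the kernel created,
# and the plain disk nodes, then compare them against fstab:
ls -l /dev/serno/
ls -l /dev/ad*

# Mount the HAMMER root and the UFS boot slice
# (slice names assumed from the ls output above):
mount_hammer /dev/ad0s1d /mnt
mount -t ufs /dev/ad0s1a /mnt/boot

# Inspect the stale entries on the mounted system:
cat /mnt/etc/fstab
cat /mnt/boot/loader.conf
```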
but I had no disks under /dev/serno/* that matched these devices. So I matched the devices that actually existed under /dev/serno/* with their correct serial numbers and used the sed command to put the correct devices in my fstab. I also had to do the same thing with /boot/loader.conf, which is why I had mounted the boot slice under /mnt/boot. My loader.conf still pointed the root mount at the old serial-numbered slice, so I used sed there as well to change the device, e.g. to vfs.root.mountfrom="hammer:serno/VB36e5d6cd7-BBL0e84e.s1d"
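The substitution itself is a single sed expression applied to both files. Here is a minimal sketch; the old serial is the placeholder from the fstab above, and the new one is the VirtualBox serial from my setup (take both from your own /dev/serno listing). The demo pipes a sample fstab line through the expression so you can see the effect before touching the real files.

```shell
# OLD is the stale VMware-era serial, NEW is what ls -l /dev/serno/ shows
# after the move (both are placeholders from my setup)
OLD='00000000000000000001'
NEW='VB36e5d6cd7-BBL0e84e'

# On the mounted system the same expression was applied in place, e.g.:
#   sed -i '' "s|serno/${OLD}|serno/${NEW}|g" /mnt/etc/fstab /mnt/boot/loader.conf
# (note: -i '' is BSD sed syntax; GNU sed uses plain -i)

# Dry-run demo on a sample fstab line:
echo "/dev/serno/${OLD}.s1d / hammer rw 1 1" |
    sed "s|serno/${OLD}|serno/${NEW}|g"
# prints: /dev/serno/VB36e5d6cd7-BBL0e84e.s1d / hammer rw 1 1
```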
After making these changes I unmounted the filesystems with umount /mnt/boot /mnt and rebooted.
After the reboot, my DragonFly BSD setup from VMware Workstation was running on VirtualBox 4.0.4. These instructions can just as easily be applied to hard drives moved between servers or workstations.