DragonFly BSD

vkernel

The DragonFly virtual kernels

Obtained from vkernel(7), written by Sascha Wildner; added by Matthias Schmidt

The idea behind the development of the vkernel architecture was to find an elegant solution to debugging the kernel and its components. A virtual kernel runs as a userland process, so it can be debugged without affecting the real kernel itself. Because it can be started on a running system, it also removes the need for reboots between kernel compiles.

The vkernel architecture allows for running DragonFly kernels in userland.

Supported devices

A number of virtual device drivers exist to supplement the virtual kernel.

Disk device

The vkd driver allows for up to 16 vn(4)-based disk devices. The root device will be vkd0.

CD-ROM device

The vcd driver allows for up to 16 virtual CD-ROM devices. It is essentially a read-only vkd device with a block size of 2048 bytes.

Network interface

The vke driver supports up to 16 virtual network interfaces which are associated with tap(4) devices on the host. For each vke device, the per-interface read-only sysctl(3) variable hw.vkeX.tap_unit holds the unit number of the associated tap(4) device.
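For example, with a virtual kernel running and at least one vke device configured, the backing tap(4) device can be looked up on the host like this (a sketch; the reported unit number depends on your setup):

```shell
# sysctl hw.vke0.tap_unit    # prints the unit number of the tap(4) device behind vke0
# ifconfig tap0              # inspect the corresponding host-side device (if the unit was 0)
```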

Setting up a virtual kernel environment

A couple of steps are necessary in order to prepare the system to build and run a virtual kernel.

Setting up the filesystem

The vkernel architecture needs a number of files which reside in /var/vkernel. Since these files tend to get rather big and the /var partition is usually of limited size, we recommend creating the directory on the /home partition with a symlink to it in /var:

% mkdir /home/var.vkernel
% ln -s /home/var.vkernel /var/vkernel

Next, a filesystem image to be used by the virtual kernel has to be created and populated (assuming world has been built previously):

# dd if=/dev/zero of=/var/vkernel/rootimg.01 bs=1m count=2048
# vnconfig -c vn0 /var/vkernel/rootimg.01
# disklabel -r -w vn0s0 auto
# disklabel -e vn0s0      # add 'a' partition with fstype `4.2BSD' size could be '*'
# newfs /dev/vn0s0a
# mount /dev/vn0s0a /mnt

If you specify vn instead of vn0 to vnconfig, a new vn device will be created and a message will say which vnX was allocated. This effectively lifts the limit of four vn devices.
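A sketch of this auto-allocation (the actual vnX name printed will vary from system to system):

```shell
# vnconfig -c vn /var/vkernel/rootimg.01   # allocates the next free vnX device and reports its name
```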

Assuming you have built world before, you can now populate the image. If you have not built world yet, see Chapter 21.

# cd /usr/src
# make installworld DESTDIR=/mnt
# cd etc
# make distribution DESTDIR=/mnt

Create an fstab file so the virtual kernel can find its root filesystem:

# echo '/dev/vkd0s0a      /       ufs     rw      1  1' >/mnt/etc/fstab
# echo 'proc              /proc   procfs  rw      0  0' >>/mnt/etc/fstab

Edit /mnt/etc/ttys, replace the console entry with the following line, and turn off all other gettys:

console "/usr/libexec/getty Pc"         cons25  on  secure

Then, unmount the disk.

# umount /mnt
# vnconfig -u vn0

Compiling the virtual kernel

In order to compile a virtual kernel use the VKERNEL (VKERNEL64 for x86_64) kernel configuration file residing in /usr/src/sys/config (or a configuration file derived thereof):

# cd /usr/src
# make -DNO_MODULES buildkernel KERNCONF=VKERNEL
# make -DNO_MODULES installkernel KERNCONF=VKERNEL DESTDIR=/var/vkernel
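On an x86_64 host, the same steps apply with the 64-bit configuration file:

```shell
# cd /usr/src
# make -DNO_MODULES buildkernel KERNCONF=VKERNEL64
# make -DNO_MODULES installkernel KERNCONF=VKERNEL64 DESTDIR=/var/vkernel
```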

Enabling virtual kernel operation

A special sysctl(8) variable, vm.vkernel_enable, must be set to 1 to enable vkernel operation:

# sysctl vm.vkernel_enable=1

To make this change permanent, add the setting to /etc/sysctl.conf.
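For example, by appending the setting (the file will be created if it does not yet exist):

```shell
# echo 'vm.vkernel_enable=1' >> /etc/sysctl.conf
```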

Setting up networking

Configuring the network on the host system

In order to access a network interface of the host system from the vkernel, you must add the interface to a bridge(4) device which will then be passed to the -I option:

# kldload if_bridge.ko
# kldload if_tap.ko
# ifconfig bridge0 create
# ifconfig bridge0 addm re0       # assuming re0 is the host's interface
# ifconfig bridge0 up

Note: Replace re0 with the name of your host machine's network interface.
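To recreate this setup at boot, a sketch of possible /boot/loader.conf and /etc/rc.conf entries follows (variable names follow the loader.conf(5) and rc.conf(5) conventions; check the documentation for your DragonFly version):

```shell
# /boot/loader.conf: load the required modules at boot
if_bridge_load="YES"
if_tap_load="YES"

# /etc/rc.conf: create and configure the bridge
cloned_interfaces="bridge0"
ifconfig_bridge0="addm re0 up"       # replace re0 with your host's interface
```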

Running a virtual kernel

Finally, the virtual kernel can be run:

# cd /var/vkernel
# ./boot/kernel/kernel -m 64m -r /var/vkernel/rootimg.01 -I auto:bridge0

You can issue the reboot(8), halt(8), or shutdown(8) commands from inside a virtual kernel. After a clean shutdown, reboot(8) will re-exec the virtual kernel binary, while the other two will cause the virtual kernel to exit.
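For example, from a root shell inside the virtual kernel:

```shell
# reboot             # clean shutdown, then the vkernel binary re-execs itself
# halt               # clean shutdown, the vkernel process exits
# shutdown -h now    # same effect as halt, with the usual warnings to logged-in users
```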