DragonFly BSD

Virtualization: NVMM Hypervisor

NVMM is a Type-2 hypervisor and hypervisor platform that provides hardware-accelerated virtualization. Its virtualization API is shipped in libnvmm(3) and allows existing emulators (e.g., QEMU) to easily create and manage virtual machines via NVMM.

NVMM can support up to 128 virtual machines, each having a maximum of 128 vCPUs and 127TB RAM. It works with both x86 AMD CPUs (SVM/AMD-V) and x86 Intel CPUs (VMX/VT-x).

NVMM was designed and written by Maxime Villard (m00nbsd.net), first appeared in NetBSD 9, and was ported to DragonFly 6.1 by Aaron LI (aly@) with significant help from Matt Dillon (dillon@) and Maxime.


In order to achieve hardware-accelerated virtualization, two components need to interact together: a kernel driver, which drives the virtualization features of the CPU, and a userland emulator, which emulates the devices of the virtual machine.

NVMM provides the infrastructure needed for both the kernel driver and the userland emulators.

The kernel NVMM driver comes as a kernel module. It consists of a generic machine-independent frontend and several machine-dependent backends (currently only x86 AMD SVM and x86 Intel VMX). During initialization, NVMM selects the appropriate backend for the system. The frontend handles everything that is not CPU-specific: the virtual machines, the virtual CPUs, the guest physical address spaces, and so forth. The frontend also provides an IOCTL interface for userland emulators.
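
For example, once the module is loaded (see the usage guide below), it can be seen with kldstat(8), and the IOCTL interface is reached through a device node (assuming the /dev/nvmm node name used by the NetBSD implementation):

    $ kldstat | grep nvmm
    $ ls -l /dev/nvmm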

When it comes to the userland emulators, NVMM does not provide one. In other words, it does not re-implement a QEMU, a VirtualBox, a Bhyve (FreeBSD) or a VMD (OpenBSD). Rather, it provides a virtualization API via the libnvmm(3) library, which makes it easy to add NVMM support to existing emulators. This API is meant to be simple and straightforward, and is fully documented. It has some similarities with WHPX on Windows and HVF on macOS. The idea is to provide an easy way for applications to use NVMM to implement services, ranging from small sandboxing systems to advanced system emulators.

An overview of NVMM's unique design:

[NVMM design diagram: https://m00nbsd.net/NvmmDesign.png]

Read the blog post From Zero to NVMM by Maxime Villard for a detailed analysis of the design.

Usage Guide



  1. Add yourself to the nvmm group (so you can later run examples and QEMU without using root):

    # pw groupmod nvmm -m $USER
  2. Log out and log back in for the new group membership to take effect.

  3. Load the nvmm kernel module (see the note after this list for loading it automatically at boot):

    # kldload nvmm
  4. Check NVMM status:

    $ nvmmctl identify

    On my AMD Ryzen 3700X, it shows:

    nvmm: Kernel API version 3
    nvmm: State size 1008
    nvmm: Comm size 4096
    nvmm: Max machines 128
    nvmm: Max VCPUs per machine 128
    nvmm: Max RAM per machine 127T
    nvmm: Arch Mach conf 0
    nvmm: Arch VCPU conf 0x1<CPUID>
    nvmm: Guest FPU states 0x3<x87,SSE>
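
To have the nvmm module loaded automatically at every boot (see step 3 above), it can be added to the boot loader configuration. A sketch, assuming the usual /boot/loader.conf mechanism:

    # echo 'nvmm_load="YES"' >> /boot/loader.conf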



Basic setup

  1. Install QEMU:

    $ fetch https://leaf.dragonflybsd.org/~aly/nvmm/qemu-6.0.0_1.txz
    # pkg install ./qemu-6.0.0_1.txz

    NOTE: As of this writing (2021-07-24), the qemu package in DPorts has not yet been updated with NVMM support, so use my prebuilt package for the moment.

  2. Create a disk:

    $ qemu-img create -f qcow2 dfly.qcow2 50G
  3. Boot an ISO with NVMM acceleration:

    $ qemu-system-x86_64 \
      -machine type=q35,accel=nvmm \
      -smp cpus=2 -m 4G \
      -cdrom dfly.iso -boot d \
      -drive file=dfly.qcow2,if=none,id=disk0 \
      -device virtio-blk-pci,drive=disk0 \
      -netdev user,id=net0,hostfwd=tcp::6022-:22 \
      -device virtio-net-pci,netdev=net0 \
      -object rng-random,id=rng0,filename=/dev/urandom \
      -device virtio-rng-pci,rng=rng0 \
      -display curses \
      -vga qxl \
      -spice addr=127.0.0.1,port=5900,ipv4=on,disable-ticketing=on,seamless-migration=on

This setup creates a VM with the following settings: a Q35 machine accelerated by NVMM, 2 vCPUs, 4GB RAM, a VirtIO disk backed by dfly.qcow2, a VirtIO NIC on user-mode networking with host port 6022 forwarded to the guest's port 22, a VirtIO RNG, a curses display with QXL VGA, and SPICE remote access on port 5900.

To connect to the guest via SSH (host port 6022 is forwarded to the guest's port 22):

    $ ssh -p 6022 user@127.0.0.1

To connect to the guest via SPICE (install the spice-gtk package to get the spicy utility):

    $ spicy -p 5900
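
Other SPICE clients should work as well; for example, remote-viewer from the virt-viewer package (the package name here is an assumption, based on FreeBSD ports, from which DPorts derives) can connect to the same display:

    $ remote-viewer spice://127.0.0.1:5900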

The created NVMM virtual machines can be listed with:

    # nvmmctl list
    Machine ID VCPUs RAM  Owner PID Creation Time           
    ---------- ----- ---- --------- ------------------------
    0          2     4.1G 91101     Sat Jul 24 17:55:22 2021

TAP networking

The above setup uses user-mode networking, which is limited in both performance and functionality. A more capable network can be built using a TAP device.

  1. Create a bridge (bridge0) and configure it, using 10.66.6.1/24 as an example host address:

    # ifconfig bridge0 create
    # ifconfig bridge0 inet 10.66.6.1/24
    # ifconfig bridge0 up
  2. Create a TAP device (tap666) and add it to the bridge:

    # ifconfig tap666 create
    # ifconfig tap666 up
    # ifconfig bridge0 addm tap666
  3. Adjust TAP sysctls (see the note after this list for making them persistent):

    # sysctl net.link.tap.up_on_open=1
    # sysctl net.link.tap.user_open=1
  4. Make the TAP device openable by your own user:

    # chown $USER /dev/tap666

    NOTE: There should be a better way to do this; devd(8) could be used.

  5. Start QEMU with option -netdev tap,ifname=tap666,id=net0,script=no,downscript=no, i.e.,

    $ qemu-system-x86_64 \
      ... \
      -netdev tap,ifname=tap666,id=net0,script=no,downscript=no \
      -device virtio-net-pci,netdev=net0,mac=52:54:00:34:56:66

    NOTE: QEMU by default assigns the link-level address 52:54:00:12:34:56 to the guest; if left unspecified, all guests would have the same MAC address. Specify a unique MAC address with -device xxx,netdev=xxx,mac=52:54:xx:xx:xx:xx.

  6. Configure the guest's IP address (continuing the example subnet, with the host bridge at 10.66.6.1):

    guest# ifconfig vtnet0 inet 10.66.6.2/24 up
    guest# route add default 10.66.6.1

    And then the guest and the host can communicate with each other:

    guest# ping 10.66.6.1
    host$ ping 10.66.6.2
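
To make this setup persistent across reboots, the sysctls from step 3 can be added to /etc/sysctl.conf:

    net.link.tap.up_on_open=1
    net.link.tap.user_open=1

The bridge and TAP interfaces can likewise be re-created at boot. A sketch for /etc/rc.conf, assuming DragonFly supports the FreeBSD-style cloned_interfaces mechanism (untested; adjust the names and the example address to your setup):

    cloned_interfaces="bridge0 tap666"
    ifconfig_bridge0="inet 10.66.6.1/24 addm tap666 up"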


With the above setup, guests can only talk to each other and to the host; they cannot access the external network without NAT configured on the host side:

  1. Enable IP forwarding:

    # sysctl net.inet.ip.forwarding=1
  2. Configure NAT with pf(4) by adding the following snippet to /etc/pf.conf:

    ext_if = "re0"
    br_if = "bridge0"
    nat on $ext_if inet from $br_if:network to !$br_if:network -> ($ext_if:0)
  3. Enable and start PF:

    # echo 'pf_enable=YES' >> /etc/rc.conf
    # service pf start

Now, the guest can access the external network.
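
To verify that the translation rules are loaded and that states are created while the guest talks to the outside, pf can be inspected on the host:

    # pfctl -s nat
    # pfctl -s state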


A DHCP server can be run on the bridge interface to provide guests with automatic IP address configuration. Similarly, a DNS service can be provided to guests; dnsmasq can handle both (see the sketch below).

TODO: dnsmasq
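
As a starting point for the TODO above, here is an untested sketch (the rc.conf knob and the config file path are assumptions based on common DPorts conventions; interface and dhcp-range are standard dnsmasq options):

    # pkg install dnsmasq

with /usr/local/etc/dnsmasq.conf containing:

    interface=bridge0
    dhcp-range=10.66.6.10,10.66.6.200,12h

and then:

    # echo 'dnsmasq_enable=YES' >> /etc/rc.conf
    # service dnsmasq start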

PCI Passthrough