DragonFly BSD

nmatavka

Disk Slices, Partitions and local UNIX file systems

Here we describe how disks are subdivided.

Slices

A disk can be subdivided into slices.

Slices are named s0, s1 and so on.

For example the disk ad6 can contain the slice ad6s3.

DragonFly supports two slicing schemes, MBR and GPT; either one manages all slices on a disk:

Partitions

Partitions are contained in slices.

Partitions are named a, b and so on.

DragonFly supports 16 partitions per slice, that is, a through p.

For example the partition ad6s3a is contained in the slice ad6s3.

Partition layout is defined in a label on the slice where the partitions reside. DragonFly supports two types of disk labels, disklabel32 and disklabel64 (the default):

Local UNIX file systems

File systems are contained in partitions. Each partition can contain only one file system, which means that file systems are often described either by their typical mount point in the file system hierarchy or by the letter of the partition they are contained in. Because of DragonFly's UNIX® heritage, partition here does not have the same meaning as the common usage of the term (for example, an MS-DOS partition).

DragonFly supports two local UNIX file systems, UFS and HAMMER:

Typical disk layout

From the above we see the following typical disk layout scenarios:

HAMMER Note

HAMMER(5) is a rather new file system, under active development.

As of the DragonFly 2.2.1 release, HAMMER is considered production ready. At the 2.0 release it was considered to be in an early beta state.

All major features except mirroring are quite well tested as of the 2.2.1 release.

You should evaluate whether HAMMER is suitable for your needs.

Examples of ongoing development include:

HAMMER Features

HAMMER(5) has several advanced features not found in UFS:

More info on HAMMER can be found here.

DragonFly also uses disk space for swap space. Swap space provides DragonFly with virtual memory. This allows your computer to behave as though it has much more memory than it actually does. When DragonFly runs low on memory it moves some of the data that is not currently being used to the swap space, and moves it back in (moving something else out) when it needs it.
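To see how much swap space is configured and how much is in use at any moment, you can run swapinfo(8) (pstat -s reports the same information), for example:

# swapinfo     # list swap devices, their sizes and current usage
# pstat -s     # equivalent report via pstat(8)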

Adding a Disk

Adding a disk is done by installing it physically and connecting it to a disk controller that DragonFly supports. If you are in doubt whether a controller is supported, the manual pages for disk controllers can be consulted ('man -k disk' or 'man -k scsi' can be of help). The easiest approach is normally to boot DragonFly with the controller installed and check whether the boot messages mention it.

Assuming that disk ad6 is installed, we could set it up using fdisk(8) and disklabel(8), or gpt(8) and disklabel64(8).

In this example we choose gpt(8) & disklabel64(8).

# gpt -v create ad6

...

# gpt add -s1 ad6

ad6s0

# gpt add ad6

ad6s1

# gpt show ad6

...

Here we first create the GPT and then add two slices. The first slice added is ad6s0, which is made a dummy slice of one sector in size; this is done only to avoid having to refer to it again, since many users remember s0 as having a special meaning, which really isn't true for GPT slices. The second slice is ad6s1, which covers the rest of the disk.

# disklabel64 -rw ad6s1 auto

# disklabel64 -e ad6s1          # edit label to add partitions as needed
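Once the label contains the partitions you want, each data partition still needs a file system before it can be mounted. As a sketch only (the partition letter d, the label DATA and the mount point /mnt are assumptions for illustration, not output of the steps above), a HAMMER file system could be created and mounted like this:

# newfs_hammer -L DATA /dev/ad6s1d     # create a HAMMER file system labelled DATA (example label)
# mount_hammer /dev/ad6s1d /mnt        # mount it on /mnt for testing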

disklabel

For disklabel(8) labels, some partitions have certain conventions associated with them.

Partition Convention
a Normally contains the root file system
b Normally contains swap space
c Normally the same size as the enclosing slice. This allows utilities that need to work on the entire slice (for example, a bad block scanner) to work on the c partition. You would not normally create a file system on this partition. This is not necessarily true; it is possible to use the 'c' partition as a normal partition.
d Partition d used to have a special meaning associated with it, although that is now gone. To this day, some tools may operate oddly if told to work on partition d.

Each partition that contains a file system is stored in what DragonFly calls a slice. Slice is DragonFly's term for what are commonly called partitions, and again, this is because of DragonFly's UNIX background. Slices are numbered, starting at 1.

Slice numbers follow the device name, prefixed with an s, starting at 1. So da0s1 is the first slice on the first SCSI drive. There can only be four physical slices on a disk, but you can have logical slices inside physical slices of the appropriate type. These extended slices are numbered starting at 5, so ad0s5 is the first extended slice on the first IDE disk. These devices are used by file systems that expect to occupy a slice.

Dangerously dedicated physical drives are accessed as slice 0.

Slices, dangerously dedicated physical drives, and other drives contain partitions, which are represented as letters from a to p. This letter is appended to the device name, so da0s0a is the a partition on the first da drive, which is dangerously dedicated. ad1s3e is the fifth partition in the third slice of the second IDE disk drive.

Finally, each disk on the system is identified. A disk name starts with a code that indicates the type of disk, and then a number, indicating which disk it is. Disk numbering starts at 0. Common codes that you will see are listed in Table 3-1.

When referring to a partition DragonFly requires that you also name the slice and disk that contains the partition, and when referring to a slice you should also refer to the disk name. Do this by listing the disk name, s, the slice number, and then the partition letter. Examples are shown in Example 3-1.

Example 3-2 shows a conceptual model of the disk layout that should help make things clearer.

In order to install DragonFly you must first configure the disk slices, then create partitions within the slice you will use for DragonFly, and then create a file system (or swap space) in each partition, and decide where that file system will be mounted.

Table 3-1. Disk Device Codes

Code Meaning
ad ATAPI (IDE) disk
da SCSI direct access disk
acd ATAPI (IDE) CDROM
cd SCSI CDROM
vn Virtual disk
fd Floppy disk

Example 3-1. Sample Disk, Slice, and Partition Names

Name Meaning
ad0s1a The first partition (a) on the first slice (s1) on the first IDE disk (ad0).
da1s2e The fifth partition (e) on the second slice (s2) on the second SCSI disk (da1).

Example 3-2. Conceptual Model of a Disk

This example shows DragonFly's view of the first IDE disk attached to the system. Assume that the disk is 4 GB in size and contains two 2 GB slices (MS-DOS partitions). The first slice contains an MS-DOS file system, C:, and the second slice contains a DragonFly installation. This example DragonFly installation has three partitions, plus a swap partition.

The three partitions will each hold a file system. Partition a will be used for the root file system, e for the /var directory hierarchy, and f for the /usr directory hierarchy.

Mounting and Unmounting File Systems

The file system is best visualized as a tree, rooted at /. The directories in the root directory, e.g. /dev and /usr, are branches, which may have their own branches, such as /usr/local, and so on.

There are various reasons to house some of these directories on separate file systems. /var contains the directories log/ and spool/, and various types of temporary files, and as such, may get filled up. Filling up the root file system is not a good idea, so splitting /var from / is often favorable.

Another common reason to contain certain directory trees on other file systems is if they are to be housed on separate physical disks, e.g. CD-ROM, or are used as separate virtual disks, such as Network File System exports.

The fstab File

During the boot process, file systems listed in /etc/fstab are automatically mounted (unless they are listed with the noauto option).

The /etc/fstab file contains a list of lines of the following format:

device       mount-point   fstype     options      dumpfreq     passno

These parameters have the following meaning:

Consult the fstab(5) manual page for more information on the format of the /etc/fstab file and the options it contains.
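As an illustration of that format, a /boot file system on UFS, a swap partition and a HAMMER root might be listed as follows (the device names are hypothetical and simply follow the partition conventions described earlier):

# Device            Mountpoint   FStype   Options   Dumpfreq   Passno
/dev/ad6s1a         /boot        ufs      rw        1          1
/dev/ad6s1b         none         swap     sw        0          0
/dev/ad6s1d         /            hammer   rw        1          1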

The mount Command

The mount(8) command is what is ultimately used to mount file systems.

In its most basic form, you use:

# mount device mountpoint

Or, if mountpoint is specified in /etc/fstab, just:

# mount mountpoint

There are plenty of options, as mentioned in the mount(8) manual page, but the most common are:

Mount Options

The -o option takes a comma-separated list of the options, including the following:
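For instance, to mount a file system read-only and with set-user-identifier bits ignored (the device and mount point below are placeholders):

# mount -o ro,nosuid /dev/ad6s1e /mnt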

The umount Command

The umount(8) command takes, as a parameter, one of a mountpoint, a device name, or the -a or -A option.

All forms take -f to force unmounting, and -v for verbosity. Be warned that -f is not generally a good idea. Forcibly unmounting file systems might crash the computer or damage data on the file system.

-a and -A are used to unmount all mounted file systems, possibly modified by the file system types listed after -t. -A, however, does not attempt to unmount the root file system.
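A couple of typical invocations, using a hypothetical mount point:

# umount /mnt                 # unmount by mount point
# umount -A -t hammer         # unmount all HAMMER file systems except the root file system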


Chapter 1 Introduction

*Restructured, reorganized, and parts rewritten by Jim Mock.*

Synopsis

Thank you for your interest in DragonFly! The following chapter covers various aspects of the DragonFly Project, such as its history, goals, development model, and so on.

After reading this chapter, you will know:


Welcome to DragonFly!

DragonFly is a 4.4BSD-Lite-based UNIX operating system for the Intel (x86) and AMD64 (x86_64) architectures.

What Can DragonFly Do?

Work on BSD-flavored Unix systems running on PC-compatible hardware started as a fork of the 4.4BSD-Lite release from the Computer Systems Research Group (CSRG) at the University of California at Berkeley. One of the variants that became quite popular was later known as FreeBSD. DragonFly BSD started out as a fork and continuation of FreeBSD 4.8.

Like all other modern PC-compatible BSD variants, it carries on the distinguished tradition of BSD systems development. In addition to the fine work provided by CSRG, the DragonFly Project has put in many thousands of hours in fine-tuning the system for maximum performance and reliability in real-life load situations.

As many of the commercial giants struggle to field PC operating systems with such features, performance and reliability, DragonFly can offer them now!

For example the Hammer filesystem, the default in DragonFly BSD, is the most powerful and reliable filesystem available on any operating system.

The applications to which DragonFly can be put are truly limited only by your own imagination. From software development to factory automation, inventory control to azimuth correction of remote satellite antennae; if it can be done with a commercial UNIX product, it is more than likely that you can do it with DragonFly, too! DragonFly also benefits significantly from literally thousands of high-quality applications developed by research centres and universities around the world, often available at little to no cost. Commercial applications are also available and appearing in greater numbers every day.

Because the source code for DragonFly itself is generally available, the system can also be customized to an almost unheard-of degree for special applications or projects, and in ways not generally possible with operating systems from most major commercial vendors. Here is just a sampling of some of the applications in which people are currently using DragonFly:

The robust TCP/IP networking built into DragonFly renders it an ideal platform for a variety of Internet services such as:

With DragonFly, you can install on almost any PC, from older 32 bit computers running 386 or Pentium chips, to modern 64 bit Intel Core or AMD X64 desktop CPUs, and even up to and including high end Xeon CPUs. All of these CPUs share a common ancestry and instruction set, going back to the original Intel 80386 CPU, the first fully 32-bit desktop CPU for "IBM PC compatible" computers.

Here are some of the fields where people are using Dragonfly BSD, and the reasons that they find that DragonFly BSD fits their needs:

DragonFly is available via anonymous FTP or GIT. Please see Appendix A for more information about obtaining DragonFly.

For more help on installing, see the appropriate sections of this handbook.


About the DragonFly Project

The following section provides some background information on the project, including a brief history, project goals, and the development model of the project.

A Brief History of DragonFly

Matthew Dillon, one of the developers for FreeBSD, was growing increasingly frustrated with the FreeBSD Project's direction for release 5. The FreeBSD 5 release had been delayed multiple times, and had performance problems compared to earlier releases of FreeBSD. DragonFly was announced in June of 2003. The code base was taken from the 4.8 release of FreeBSD, which offered better performance and more complete features. Development has proceeded at a very quick rate since then, with Matt Dillon and a group of developers fixing longstanding BSD bugs and modernizing the new DragonFly system.

DragonFly Project Goals

DragonFly is an effort to maintain the traditional BSD format -- lean, stable code -- along with modern features such as lightweight threads, a workable packaging system, and a revised VFS. Underpinning all this work is efficient support for multiple processors, something rare among open source systems. Because DragonFly is built on an existing very stable code base, it is possible to make these radical changes as part of an incremental process.

The DragonFly Development Model

*Written by Justin Sherrill.*

DragonFly is developed by many people around the world. There is no qualification process; anyone may submit his or her code, documentation, or designs, for use in the Project. Here is a general description of the Project's organizational structure.

Source for DragonFly is kept in git, available with each DragonFly install. The primary git repository resides on a machine in California, USA. Documentation on obtaining the DragonFly source is available elsewhere in this book. The best way of getting changes made to the DragonFly source is to mail the submit mailing list. Including the desired source code changes (unified diff format is best) is the most useful approach. A certain number of developers have access to commit changes to the DragonFly source, and can do so after review on that list. The DragonFly development model is loose; changes to the code are generally peer-reviewed and added when any objections have been corrected. There is no formal entry/rejection process, though final say on all code submissions goes to Matt Dillon, as originator of this project.

The Current DragonFly Release

DragonFly is a freely available, full source 4.4BSD-Lite based release for almost all Intel and AMD based computer systems. It is based primarily on FreeBSD 4.8, and includes enhancements from U.C. Berkeley's CSRG group, NetBSD, OpenBSD, 386BSD, and the Free Software Foundation. A number of additional documents which you may find very helpful in the process of installing and using DragonFly may now also be found in the /usr/share/doc directory on any machine.

DragonFly Origin

Matthew Dillon happened to take a picture of a dragonfly in his garden while trying to come up with a name for this new branch of BSD. Taking this as inspiration, "DragonFly" became the new name.

Updating the System

Supported methods

The only supported method of upgrading DragonFly BSD is by building from source code. The supported upgrade process covers going from the previous release to the latest release.

Getting the source code

There is a Makefile in /usr which will ease the task of retrieving the source tree; it needs to be run as root:

% cd /usr
% make src-create
 [...]

This will check out (download) the source tree to /usr/src and switch to the master branch. For the stable branch, you need to check it out with the following command (remember to replace the DragonFly_RELEASE_3_0 with the appropriate branch name for the release needed).

% cd /usr/src
% git checkout DragonFly_RELEASE_3_0

To see the available remote branches:

# cd /usr/src 
# git pull
# git branch -r

The leading edge (development trunk) version of the system will be the "master".

Build and upgrade process

The build process requires some time to build all the userland programs and the DragonFly BSD kernel. Once built, the next step is to install everything and make the upgrade target. No configuration files in /etc are changed by this process. More details can be found in build(7) manpage.

% cd /usr/src
% make buildworld
% make buildkernel
% make installkernel
% make installworld
% make upgrade
(reboot)

Note: You may use a concurrent build if you have an SMP machine (one with several cores or CPUs). Pass the -j x parameter to make, where x is the number of CPUs + 1. If you run DragonFly 2.12 or higher, the kernel will auto-detect the number of CPUs your computer has and activate them all if possible. To find out how many CPUs your computer has:

% sysctl hw.ncpu
hw.ncpu: 2
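For example, on the two-CPU machine shown above you might build the world and kernel concurrently with:

% make -j 3 buildworld
% make -j 3 buildkernel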

An explanation of each step follows.

If your computer fails to boot the new kernel, you can always select 'Boot DragonFly using kernel.old' in the loader menu, so that the old kernel is loaded instead of the new one.

Additional upgrading instructions can be found in /usr/src/UPDATING in the source tree. They can also be found online here.

DragonFly BSD Quick Start

This QuickStart is part of the NewHandbook.

This document describes the DragonFly environment one will find on a newly installed system. While you are getting started, please pay careful attention to the version or level of DragonFly that the documentation was written for. Some documentation on this site may be out of date. Watch for the marker (obsolete) on items that are out of date or need updating.

Some Unix and BSD Fundamentals

If you have used another Unix flavour, another BSD, or Linux before, you may need to spend some time learning the differences between DragonFly and the system you are experienced in. If you have never used any flavor of Unix, BSD or otherwise, and have only used Windows before, please be prepared for a lengthy period of learning.

If you already know your way around a Unix filesystem, and already know what the /etc folder is, how to use vi or vim to edit a file, how to use a shell like tcsh or bash, how to configure that shell, or change what shell you're using, how su and sudo work, and what a root account is, the rest of this page may be enough to orient you to your surroundings.

You should understand everything in the Unix Basics section before you proceed with trying to use your new system.

Disk layout of a New DragonFly BSD System using the HAMMER filesystem

If you chose to install on the HAMMER file system during installation, you will be left with a system with the following disk configuration:

# df -h
Filesystem                Size   Used  Avail Capacity  Mounted on
ROOT                      288G    12G   276G     4%    /
devfs                     1.0K   1.0K     0B   100%    /dev
/dev/serno/9VMBWDM1.s1a   756M   138M   558M    20%    /boot
/pfs/@@-1:00001           288G    12G   276G     4%    /var
/pfs/@@-1:00002           288G    12G   276G     4%    /tmp
/pfs/@@-1:00003           288G    12G   276G     4%    /usr
/pfs/@@-1:00004           288G    12G   276G     4%    /home
/pfs/@@-1:00005           288G    12G   276G     4%    /usr/obj
/pfs/@@-1:00006           288G    12G   276G     4%    /var/crash
/pfs/@@-1:00007           288G    12G   276G     4%    /var/tmp
procfs                    4.0K   4.0K     0B   100%    /proc

In this example

The disk label looks as follows:

# disklabel /dev/serno/9VMBWDM1.s1

# /dev/serno/9VMBWDM1.s1:
#
# Informational fields calculated from the above
# All byte equivalent offsets must be aligned
#
# boot space:    1044992 bytes
# data space:  312567643 blocks # 305241.84 MB (320069266944 bytes)
#
# NOTE: If the partition data base looks odd it may be
#       physically aligned instead of slice-aligned
#
diskid: e67030af-d2af-11df-b588-01138fad54f5
label:
boot2 data base:      0x000000001000
partitions data base: 0x000000100200
partitions data stop: 0x004a85ad7000
backup label:         0x004a85ad7000
total size:           0x004a85ad8200    # 305242.84 MB
alignment: 4096
display block size: 1024        # for partition display only

16 partitions:
#          size     offset    fstype   fsuuid
  a:     786432          0    4.2BSD    #     768.000MB
  b:    8388608     786432      swap    #    8192.000MB
  d:  303392600    9175040    HAMMER    #  296281.836MB
  a-stor_uuid: eb1c8aac-d2af-11df-b588-01138fad54f5
  b-stor_uuid: eb1c8aec-d2af-11df-b588-01138fad54f5
  d-stor_uuid: eb1c8b21-d2af-11df-b588-01138fad54f5

The slice has 3 partitions:

When you create a HAMMER file system, you must give it a label. Here, the installer labelled it as "ROOT" and mounted it as

ROOT                      288G    12G   276G     4%    /

A PFS is a Pseudo File System inside a HAMMER file system. The HAMMER file system in which the PFSes are created is referred to as the root file system. You should not confuse the "root" file system with the label "ROOT": the label can be anything. The installer labeled it as ROOT because it is mounted at /.

From the df -h output above you can see that the installer created 7 PFSes inside the root HAMMER file system; let us see how they are mounted in /etc/fstab:

# cat /etc/fstab

# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/serno/9VMBWDM1.s1a         /boot           ufs     rw      1       1
/dev/serno/9VMBWDM1.s1b         none            swap    sw      0       0
/dev/serno/9VMBWDM1.s1d         /               hammer  rw      1       1
/pfs/var                /var            null    rw              0       0
/pfs/tmp                /tmp            null    rw              0       0
/pfs/usr                /usr            null    rw              0       0
/pfs/home               /home           null    rw              0       0
/pfs/usr.obj    /usr/obj                null    rw              0       0
/pfs/var.crash  /var/crash              null    rw              0       0
/pfs/var.tmp    /var/tmp                null    rw              0       0
proc                    /proc           procfs  rw              0       0

The PFSes are mounted using a NULL mount because they are also HAMMER file systems. You can read more on NULL mounts at the mount_null(8) manpage.

You don't need to specify a size for the PFSes like you do for logical volumes inside a volume group for LVM. All the free space in the root HAMMER file system is available to all the PFSes; it can be seen in the df -h output above that the free space is the same for all PFSes and the root HAMMER file system.

If you look in /var

# cd /var/
# ls
account   backups   caps   cron    empty   log   msgs   run   spool   yp  at        
cache     crash     db     games   lib     mail  preserve   rwho  tmp

you will find the above directories.

If you look at the status of one of the PFSes, e.g. /usr, you will see that /var/hammer is the default snapshot directory.

# hammer pfs-status /usr/
/usr/   PFS #3 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000117ac6270
    shared-uuid=f33e318e-d2af-11df-b588-01138fad54f5
    unique-uuid=f33e31cb-d2af-11df-b588-01138fad54f5
    label=""
    prune-min=00:00:00
    operating as a MASTER
    snapshots directory defaults to /var/hammer/<pfs>
}

At installation time, it will be seen that there is no "hammer" directory in /var. The reason for this is that no snapshots have yet been taken. You can verify this by checking the snapshots available for /usr

# hammer snapls /usr
Snapshots on /usr       PFS #3
Transaction ID          Timestamp               Note

Snapshots will appear automatically each night as the system performs housekeeping on the Hammer filesystem. For a new volume, an immediate snapshot can be taken by running the command 'hammer cleanup'. Among other activities, it will take a snapshot of the filesystem.

# sudo hammer cleanup
cleanup /                    - HAMMER UPGRADE: Creating snapshots
        Creating snapshots in /var/hammer/root
 handle PFS #0 using /var/hammer/root
           snapshots - run
               prune - run
           rebalance - run..
             reblock - run....
              recopy - run....
cleanup /var                 - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /tmp                 - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /usr                 - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /home                - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /usr/obj             - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /var/crash           - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /var/tmp             - HAMMER UPGRADE: Creating snapshots
[...]
cleanup /var/isos            - HAMMER UPGRADE: Creating snapshots
[...]

No snapshots were taken for /tmp, /usr/obj and /var/tmp. This is because the PFSes are flagged as nohistory. HAMMER tracks history for all files in a PFS. Naturally, this consumes disk space until history is pruned, at which point the available disk space will stabilise. To prevent temporary files on the mentioned PFSes (e.g., object files, crash dumps) from consuming disk space, the PFSes are marked as nohistory.
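If you create additional PFSes of your own and want the same behaviour, the nohistory flag can be set with the pfs-update directive of hammer(8); the path below is only a hypothetical PFS used for illustration:

# hammer pfs-update /build nohistory    # /build is an example PFS, not one created by the installer
# hammer pfs-status /build              # verify the flag took effect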

After performing nightly housekeeping, a new directory called hammer will be found in /var with the following sub directories:

# cd hammer/
# ls -l
total 0
drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 home
drwxr-xr-x  1 root  wheel  0 Oct 13 11:42 root
drwxr-xr-x  1 root  wheel  0 Oct 13 11:43 tmp
drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 usr
drwxr-xr-x  1 root  wheel  0 Oct 13 11:54 var

Looking inside /var/hammer/usr, one finds:

# cd usr/
# ls -l
total 0
drwxr-xr-x  1 root  wheel   0 Oct 13 11:54 obj
lrwxr-xr-x  1 root  wheel  25 Oct 13 11:43 snap-20101013-1143 -> /usr/@@0x0000000117ac6cb0

We have a symlink pointing to the snapshot transaction ID shown below.

# hammer snapls /usr
Snapshots on /usr       PFS #3
Transaction ID          Timestamp               Note
0x0000000117ac6cb0      2010-10-13 11:43:04 IST -
#

You can read more about snapshots, prune, rebalance, reblock, recopy, etc. in hammer(8). Make especially sure to look under the heading "cleanup [filesystem ...]".
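Besides the automatic nightly snapshots, a snapshot can be taken by hand at any time with the snap directive; the note text is optional and the one below is just an example:

# hammer snap /usr "before upgrade"
# hammer snapls /usr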

You can learn more about PFS mirroring here

In order to correctly map hard disk serial numbers (sernos) to device names, you can use the 'devattr' command.

# udevd
# devattr -d "ad*" -p serno
Device ad4:
        serno = Z2AD9WN4
Device ad4s1:
Device ad4s1d:

Device ad5:
        serno = 9VMRFDSY
Device ad5s1:
Device ad5s1d:

Device ad3:
        serno = Z2AD9WLW
Device ad3s1:
Device ad3s1a:
Device ad3s1b:
Device ad3s1d:

Or if your disks are 'da', just change the pattern as appropriate.

Configuring and Starting the SSH Server

Described in detail here

Software/Programs and Configuration Files Location

The DragonFly default installation contains the base software/programs from the DragonFly project itself and additional software from other sources.

The base system binary programs are located in the following directories:

/bin    /sbin
/usr/bin   /usr/sbin

The configuration files for the base system can be found in /etc. Third-party programs use /usr/local/etc.

There are several different ways to install software; which one you use depends on which DragonFly BSD version you have. You can compile things from source code, or you can use binary packages.

Installing Third-party Software

For an in-depth description of dealing with packaging systems, see the dports howto. Note that although DragonFly BSD has several older package managers (like pkgin), as of 2014 the most modern binary package installation system is pkg.

Using pkg

Read the dports howto first, then for some errata, read this.

You can look at the help and the man page for the pkg tool like this:

pkg help install

Example: Read man page for pkg-install

man pkg-install
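A typical first session with pkg might look like the following; the package name is only an example:

# pkg search vim      # find packages matching a name
# pkg install vim     # download and install a binary package
# pkg info            # list everything currently installed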

Installing an X.org desktop X11 environment and XFCE desktop

If it's already on your system, run X by typing startx. If it's not, be sure to check that your dports configuration is finished, then install it using pkg install xorg-7.7 xfce4-desktop. This will install the core X.org X11 server and an XFCE-based desktop environment.

(obsolete) Slightly out of date instructions on installing a GUI (X desktop) environment are in the new handbook.

UNIX Basics

*Rewritten by Chris Shumway.*

Synopsis

The following chapter will cover the basic commands and functionality of the DragonFly operating system. Much of this material is relevant for any UNIX®-like operating system. Feel free to skim over this chapter if you are familiar with the material. If you are new to DragonFly, then you will definitely want to read through this chapter carefully.

After reading this chapter, you will know:

Virtual Consoles and Terminals

DragonFly can be used in various ways. One of them is typing commands to a text terminal. A lot of the flexibility and power of a UNIX® operating system is readily available at your hands when using DragonFly this way. This section describes what terminals and consoles are, and how you can use them in DragonFly.

The Console

If you have not configured DragonFly to automatically start a graphical environment during startup, the system will present you with a login prompt after it boots, right after the startup scripts finish running. You will see something similar to:

Additional ABI support:.
Starting cron.
Local package initialization:.
Additional TCP options:.

Wed Feb 18 17:53:48 GMT 2009

DragonFly/i386 (Amnesiac) (ttyv0)

login: 

The messages might be a bit different on your system, but you will see something similar. The last two lines are what we are interested in right now. The second last line reads:

DragonFly/i386 (Amnesiac) (ttyv0)

This line contains some bits of information about the system you have just booted. You are looking at a DragonFlyBSD console, running on an Intel or compatible processor of the x86 architecture(1). The name of this machine (every UNIX machine has a name) is Amnesiac, and you are now looking at its system console--the ttyv0 terminal. Finally, the last line is always:

login:

This is the part where you are supposed to type in your username to log into DragonFly. The next section describes how you can do this.

Logging into DragonFly

DragonFly is a multiuser, multiprocessing system. This is the formal description that is usually given to a system that can be used by many different people, who simultaneously run a lot of programs on a single machine. Every multiuser system needs some way to distinguish one user from the rest. In DragonFly (and all UNIX-like operating systems), this is accomplished by requiring that every user must log into the system before being able to run programs. Every user has a unique name (the username) and a personal, secret key (the password). DragonFly will ask for these two before allowing a user to run any programs.

Right after DragonFly boots and finishes running its startup scripts(2), it will present you with a prompt and ask for a valid username:

login:

For the sake of this example, let us assume that your username is john. Type john at this prompt and press Enter . You should then be presented with a prompt to enter a password:

login: john
Password:

Type in john's password now, and press Enter . The password is not echoed! You need not worry about this right now. Suffice it to say that it is done for security reasons. If you have typed your password correctly, you should by now be logged into DragonFly and ready to try out all the available commands. You should see the MOTD or message of the day followed by a command prompt (a #, $, or % character). This indicates you have successfully logged into DragonFly.

Multiple Consoles

Running UNIX commands in one console is fine, but DragonFly can run many programs at once. Having one console where commands can be typed would be a bit of a waste when an operating system like DragonFly can run dozens of programs at the same time. This is where virtual consoles can be very helpful. DragonFly can be configured to present you with many different virtual consoles. You can switch from one of them to any other virtual console by pressing a couple of keys on your keyboard. Each console has its own different output channel, and DragonFly takes care of properly redirecting keyboard input and monitor output as you switch from one virtual console to the next.

Special key combinations have been reserved by DragonFly for switching consoles(3). You can use Alt - F1 , Alt - F2 , through Alt - F8 to switch to a different virtual console in DragonFly. As you are switching from one console to the next, DragonFly takes care of saving and restoring the screen output. The result is an illusion of having multiple virtual screens and keyboards that you can use to type commands for DragonFly to run. The programs that you launch on one virtual console do not stop running when that console is not visible. They continue running when you have switched to a different virtual console.

The /etc/ttys File

The default configuration of DragonFly will start up with eight virtual consoles. This is not a hardwired setting though, and you can easily customize your installation to boot with more or fewer virtual consoles. The number and settings of the virtual consoles are configured in the /etc/ttys file.

You can use the /etc/ttys file to configure the virtual consoles of DragonFly. Each uncommented line in this file (lines that do not start with a # character) contains settings for a single terminal or virtual console. The default version of this file that ships with DragonFly configures nine virtual consoles, and enables eight of them. They are the lines that start with ttyv:

# name  getty                           type    status          comments
#
ttyv0   "/usr/libexec/getty Pc"         cons25  on  secure
# Virtual terminals
ttyv1   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv2   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv3   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv4   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv5   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv6   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv7   "/usr/libexec/getty Pc"         cons25  on  secure
ttyv8   "/usr/pkg/xorg/bin/xdm -nodaemon"  xterm   off secure

For a detailed description of every column in this file and all the options you can use to set things up for the virtual consoles, consult the ttys(5) manual page.

Single User Mode Console

A detailed description of what single user mode is can be found in Section 7.5.2. It is worth noting that there is only one console when you are running DragonFly in single user mode. There are no virtual consoles available. The settings of the single user mode console can also be found in the /etc/ttys file. Look for the line that starts with console:

# name  getty                           type    status          comments
#
# If console is marked "insecure", then init will ask for the root password
# when going to single-user mode.
console none                            unknown off secure

Note: As the comments above the console line indicate, you can edit this line and change secure to insecure. If you do that, when DragonFly boots into single user mode, it will still ask for the root password. Be careful when changing this to insecure. If you ever forget the root password, booting into single user mode is a bit involved. It is still possible, but it might be a bit hard for someone who is not very comfortable with the DragonFly booting process and the programs involved.

Notes

(1) This is what i386 means. Note that even if you are not running DragonFly on an Intel 386 CPU, this is going to be i386. It is not the type of your processor, but the processor architecture that is shown here.
(2) Startup scripts are programs that are run automatically by DragonFly when booting. Their main function is to set things up for everything else to run, and start any services that you have configured to run in the background doing useful things.
(3) A fairly technical and accurate description of all the details of the DragonFly console and keyboard drivers can be found in the manual pages of syscons(4), atkbd(4), vidcontrol(1) and kbdcontrol(1). We will not expand on the details here, but the interested reader can always consult the manual pages for a more detailed and thorough explanation of how things work.

Permissions

DragonFly, being a direct descendant of BSD UNIX®, is based on several key UNIX concepts. The first and most pronounced is that DragonFly is a multi-user operating system. The system can handle several users all working simultaneously on completely unrelated tasks. The system is responsible for properly sharing and managing requests for hardware devices, peripherals, memory, and CPU time fairly to each user.

Because the system is capable of supporting multiple users, everything the system manages has a set of permissions governing who can read, write, and execute the resource. These permissions are stored as three octal digits, one for the owner of the file, one for the group that the file belongs to, and one for everyone else. This numerical representation works like this:

Value Permission Directory Listing
0 No read, no write, no execute ---
1 No read, no write, execute --x
2 No read, write, no execute -w-
3 No read, write, execute -wx
4 Read, no write, no execute r--
5 Read, no write, execute r-x
6 Read, write, no execute rw-
7 Read, write, execute rwx

You can use the -l command line argument to ls(1) to view a long directory listing that includes a column with information about a file's permissions for the owner, group, and everyone else. For example, an ls -l in an arbitrary directory may show:

% ls -l
total 530
-rw-r--r--  1 root  wheel     512 Sep  5 12:31 myfile
-rw-r--r--  1 root  wheel     512 Sep  5 12:31 otherfile
-rw-r--r--  1 root  wheel    7680 Sep  5 12:31 email.txt
...

Here is how the first column of ls -l is broken up:

-rw-r--r--

The first (leftmost) character tells if this file is a regular file, a directory, a special character device, a socket, or any other special pseudo-file device. In this case, the - indicates a regular file. The next three characters, rw- in this example, give the permissions for the owner of the file. The next three characters, r--, give the permissions for the group that the file belongs to. The final three characters, r--, give the permissions for the rest of the world. A dash means that the permission is turned off. In the case of this file, the permissions are set so the owner can read and write to the file, the group can read the file, and the rest of the world can only read the file. According to the table above, the permissions for this file would be 644, where each digit represents the three parts of the file's permission.

This is all well and good, but how does the system control permissions on devices? DragonFly actually treats most hardware devices as files that programs can open, read, and write data to just like any other file. These special device files are stored in the /dev directory.

Directories are also treated as files. They have read, write, and execute permissions. The executable bit for a directory has a slightly different meaning than that of files. When a directory is marked executable, it means it can be traversed into, that is, it is possible to cd (change directory) into it. This also means that within the directory it is possible to access files whose names are known (subject, of course, to the permissions on the files themselves).

In particular, in order to perform a directory listing, read permission must be set on the directory, whilst to delete a file that one knows the name of, it is necessary to have write and execute permissions to the directory containing the file. There are more permission bits, but they are primarily used in special circumstances such as setuid binaries and sticky directories. If you want more information on file permissions and how to set them, be sure to look at the chmod(1) manual page.
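Putting the table above to use, the numeric form of chmod(1) sets all three permission digits at once; the file names here are placeholders:

% chmod 644 myfile        # owner read/write; group and world read-only
% chmod 755 myscript.sh   # owner read/write/execute; group and world read/execute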

Symbolic Permissions

Contributed by Tom Rhodes.

Symbolic permissions, sometimes referred to as symbolic expressions, use characters in place of octal values to assign permissions to files or directories. Symbolic expressions use the syntax of (who) (action) (permissions), where the following values are available:

Option Letter Represents
(who) u User
(who) g Group owner
(who) o Other
(who) a All (world)
(action) + Adding permissions
(action) - Removing permissions
(action) = Explicitly set permissions
(permissions) r Read
(permissions) w Write
(permissions) x Execute
(permissions) t Sticky bit
(permissions) s Set UID or GID

These values are used with the chmod(1) command just like before, but with letters. For example, you could use the following command to block other users from accessing FILE (note the space after the = sign, which assigns an empty permission set):

% chmod go= FILE

A comma-separated list can be provided when more than one set of changes to a file must be made. For example, the following command will remove the group and world write permission on FILE, then add execute permission for everyone:

% chmod go-w,a+x FILE

DragonFly File Flags

Contributed by Tom Rhodes.

In addition to the file permissions discussed previously, DragonFly supports the use of file flags. These flags add an additional level of security and control over files (but not directories), helping to ensure that in some cases not even root can remove or alter files. File flags are altered with the chflags(1) utility, which has a simple interface. For example, to enable the system undeletable flag on the file file1, issue the following command:

# chflags sunlink file1

And to disable the system undeletable flag, simply issue the previous command with no in front of the sunlink. Observe:

# chflags nosunlink file1

To view the flags of this file, use ls(1) with the -lo flags:

# ls -lo file1

The output should look like the following:

-rw-r--r--  1 trhodes  trhodes  sunlnk 0 Mar  1 05:54 file1

Several flags may only be added to or removed from files by the root user. In other cases, the file owner may set them. It is recommended that administrators read over the chflags(1) and chflags(2) manual pages for more information.
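As an illustration, the user immutable flag (uchg) is one flag that a file's owner may set and clear without being root; the file name here is only an example:

% chflags uchg file1
% chflags nouchg file1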

Directory Structure

The DragonFly directory hierarchy is fundamental to obtaining an overall understanding of the system. The most important concept to grasp is that of the root directory, /. This directory is the first one mounted at boot time and it contains the base system necessary to prepare the operating system for multi-user operation. The root directory also contains mount points for every other file system that you may want to mount.

A mount point is a directory where additional file systems can be grafted onto the root file system. This is further described in this Section. Standard mount points include /usr, /var, /tmp, /mnt, and /cdrom. These directories are usually referenced to entries in the file /etc/fstab. /etc/fstab is a table of various file systems and mount points for reference by the system. Most of the file systems in /etc/fstab are mounted automatically at boot time from the script rc(8) unless they contain the noauto option. Details can be found in this section.

A complete description of the file system hierarchy is available in hier(7). For now, a brief overview of the most common directories will suffice.

Directory Description
/ Root directory of the file system.
/bin/ User utilities fundamental to both single-user and multi-user environments.
/boot/ Programs and configuration files used during operating system bootstrap.
/boot/defaults/ Default bootstrapping configuration files; see loader.conf(5).
/dev/ Device nodes; see intro(4).
/etc/ System configuration files and scripts.
/etc/defaults/ Default system configuration files; see rc(8).
/etc/mail/ Configuration files for mail transport agents such as sendmail(8).
/etc/namedb/ named configuration files; see named(8).
/etc/periodic/ Scripts that are run daily, weekly, and monthly, via cron(8); see periodic(8).
/etc/ppp/ ppp configuration files; see ppp(8).
/mnt/ Empty directory commonly used by system administrators as a temporary mount point.
/proc/ Process file system; see procfs(5), mount_procfs(8).
/root/ Home directory for the root account.
/sbin/ System programs and administration utilities fundamental to both single-user and multi-user environments.
/tmp/ Temporary files. The contents of /tmp are usually NOT preserved across a system reboot. A memory-based file system is often mounted at /tmp. This can be automated with an entry in /etc/fstab; see mfs(8).
/usr/ The majority of user utilities and applications.
/usr/bin/ Common utilities, programming tools, and applications.
/usr/include/ Standard C include files.
/usr/lib/ Archive libraries.
/usr/libdata/ Miscellaneous utility data files.
/usr/libexec/ System daemons & system utilities (executed by other programs).
/usr/local/ Local executables, libraries, etc. Within /usr/local, the general layout sketched out by hier(7) for /usr should be used. An exception is the man directory, which is directly under /usr/local rather than under /usr/local/share.
/usr/obj/ Architecture-specific target tree produced by building the /usr/src tree.
/usr/pkg Used as the default destination for the files installed via the pkgsrc® tree or pkgsrc packages (optional). The configuration directory is tunable, but the default location is /usr/pkg/etc.
/usr/pkg/xorg/ Xorg distribution executables, libraries, etc (optional).
/usr/pkgsrc The pkgsrc tree for installing packages (optional).
/usr/sbin/ System daemons & system utilities (executed by users).
/usr/share/ Architecture-independent files.
/usr/src/ BSD and/or local source files.
/var/ Multi-purpose log, temporary, transient, and spool files. A memory-based file system is sometimes mounted at /var. This can be automated with an entry in /etc/fstab; see mfs(8).
/var/log/ Miscellaneous system log files.
/var/mail/ User mailbox files.
/var/spool/ Miscellaneous printer and mail system spooling directories.
/var/tmp/ Temporary files. The files are usually preserved across a system reboot, unless /var is a memory-based file system.
/var/yp NIS maps.

Disk Organization

The smallest unit of organization that DragonFly uses to find files is the filename. Filenames are case-sensitive, which means that readme.txt and README.TXT are two separate files. DragonFly does not use the extension (.txt) of a file to determine whether the file is a program, or a document, or some other form of data.

Files are stored in directories. A directory may contain no files, or it may contain many hundreds of files. A directory can also contain other directories, allowing you to build up a hierarchy of directories within one another. This makes it much easier to organize your data.

Files and directories are referenced by giving the file or directory name, followed by a forward slash, /, followed by any other directory names that are necessary. If you have directory foo, which contains directory bar, which contains the file readme.txt, then the full name, or path to the file is foo/bar/readme.txt.

Directories and files are stored in a file system. Each file system contains exactly one directory at the very top level, called the root directory for that file system. This root directory can then contain other directories.

So far this is probably similar to any other operating system you may have used. There are a few differences; for example, MS-DOS® and Windows® use \ as the path separator.

DragonFly does not use drive letters, or other drive names in the path. You would not write c:/foo/bar/readme.txt on DragonFly.

Instead, one file system is designated the root file system. The root file system's root directory is referred to as /. Every other file system is then mounted under the root file system. No matter how many disks you have on your DragonFly system, every directory appears to be part of the same disk.

Suppose you have three file systems, called A, B, and C. Each file system has one root directory, which contains two other directories, called A1, A2 (and likewise B1, B2 and C1, C2).

Call A the root file system. If you used the ls command to view the contents of this directory you would see two subdirectories, A1 and A2. The directory tree looks like this:

    /
    +-- A1
    `-- A2

A file system must be mounted on to a directory in another file system. So now suppose that you mount file system B on to the directory A1. The root directory of B replaces A1, and the directories in B appear accordingly:

    /
    +-- A1
    |   +-- B1
    |   `-- B2
    `-- A2

Any files that are in the B1 or B2 directories can be reached with the path /A1/B1 or /A1/B2 as necessary. Any files that were in /A1 have been temporarily hidden. They will reappear if B is unmounted from A.

If B had been mounted on A2 then the diagram would look like this:

    /
    +-- A1
    `-- A2
        +-- B1
        `-- B2

and the paths would be /A2/B1 and /A2/B2 respectively.

File systems can be mounted on top of one another. Continuing the last example, the C file system could be mounted on top of the B1 directory in the B file system, leading to this arrangement:

    /
    +-- A1
    |   +-- B1
    |   |   +-- C1
    |   |   `-- C2
    |   `-- B2
    `-- A2

Or C could be mounted directly on to the A file system, under the A1 directory:

    /
    +-- A1
    |   +-- C1
    |   `-- C2
    `-- A2

If you are familiar with MS-DOS, this is similar, although not identical, to the join command.

Choosing File System Layout

This is not normally something you need to concern yourself with. Typically you create file systems when installing DragonFly and decide where to mount them, and then never change them unless you add a new disk.

It is entirely possible to have one large root file system, and not need to create any others. There are some drawbacks to this approach, and one advantage.

Benefits of Multiple File Systems


disklabel

For disklabel(8) labels, some partitions have certain conventions associated with them.

Partition Convention
a Normally contains the root file system
b Normally contains swap space
c Normally the same size as the enclosing slice. This allows utilities that need to work on the entire slice (for example, a bad block scanner) to work on the c partition. You would not normally create a file system on this partition, although it is possible to use the c partition as a normal partition.
d Partition d used to have a special meaning associated with it, although that is now gone. To this day, some tools may operate oddly if told to work on partition d.

Each partition-that-contains-a-file-system is stored in what DragonFly calls a slice. Slice is DragonFly's term for what are commonly called partitions, and again, this is because of DragonFly's UNIX background. Slices are numbered, starting at 1.

Slice numbers follow the device name, prefixed with an s, starting at 1. So da0s1 is the first slice on the first SCSI drive. There can only be four physical slices on a disk, but you can have logical slices inside physical slices of the appropriate type. These extended slices are numbered starting at 5, so ad0s5 is the first extended slice on the first IDE disk. These devices are used by file systems that expect to occupy a slice.

Dangerously dedicated physical drives are accessed as slice 0.

Slices, dangerously dedicated physical drives, and other drives contain partitions, which are represented as letters from a to p. This letter is appended to the device name, so da0s0a is the a partition on the first da drive, which is dangerously dedicated. ad1s3e is the fifth partition in the third slice of the second IDE disk drive.

Finally, each disk on the system is identified. A disk name starts with a code that indicates the type of disk, and then a number, indicating which disk it is. Disk numbering starts at 0. Common codes that you will see are listed in Table 3-1.

When referring to a partition DragonFly requires that you also name the slice and disk that contains the partition, and when referring to a slice you should also refer to the disk name. Do this by listing the disk name, s, the slice number, and then the partition letter. Examples are shown in Example 3-1.

Example 3-2 shows a conceptual model of the disk layout that should help make things clearer.

In order to install DragonFly you must first configure the disk slices, then create partitions within the slice you will use for DragonFly, and then create a file system (or swap space) in each partition, and decide where that file system will be mounted.

'Table 3-1. Disk Device Codes'

Code Meaning
ad ATAPI (IDE) disk
da SCSI direct access disk
acd ATAPI (IDE) CDROM
cd SCSI CDROM
vn Virtual disk
fd Floppy disk

'Example 3-1. Sample Disk, Slice, and Partition Names'

Name Meaning
ad0s1a The first partition (a) on the first slice (s1) on the first IDE disk (ad0).
da1s2e The fifth partition (e) on the second slice (s2) on the second SCSI disk (da1).

'Example 3-2. Conceptual Model of a Disk'

This diagram shows DragonFly's view of the first IDE disk attached to the system. Assume that the disk is 4 GB in size, and contains two 2 GB slices (MS-DOS partitions). The first slice contains an MS-DOS disk, C:, and the second slice contains a DragonFly installation. This example DragonFly installation has three partitions, and a swap partition.

The three partitions will each hold a file system. Partition a will be used for the root file system, e for the /var directory hierarchy, and f for the /usr directory hierarchy.

Mounting and Unmounting File Systems

The file system is best visualized as a tree, rooted at /. The directories in the root directory, e.g. /dev and /usr, are branches, which may have their own branches, such as /usr/local, and so on.

There are various reasons to house some of these directories on separate file systems. /var contains the directories log/ and spool/, and various types of temporary files, and as such, may get filled up. Filling up the root file system is not a good idea, so splitting /var from / is often favorable.

Another common reason to place certain directory trees on other file systems is if they are housed on separate physical disks, e.g. a CD-ROM, or are separate virtual disks, such as Network File System mounts.

The fstab File

During the boot process, file systems listed in /etc/fstab are automatically mounted (unless they are listed with the noauto option).

The /etc/fstab file contains a list of lines of the following format:

device       mount-point   fstype     options      dumpfreq     passno

These parameters have the following meaning: device is the device node containing the file system; mount-point is the directory on which to mount it; fstype is the file system type passed to mount(8); options is either rw for read-write file systems or ro for read-only file systems, followed by any other options that may be needed; dumpfreq is used by dump(8) to determine which file systems require dumping; and passno determines the order in which file systems are checked by fsck(8) after a reboot.

Consult the fstab(5) manual page for more information on the format of the /etc/fstab file and the options it contains.
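For illustration, a minimal /etc/fstab matching the layout described in Example 3-2 might look like the following (the device names are placeholders for your own disk):

# Device        Mountpoint      FStype  Options         Dump    Pass#
/dev/ad0s2b     none            swap    sw              0       0
/dev/ad0s2a     /               ufs     rw              1       1
/dev/ad0s2e     /var            ufs     rw              2       2
/dev/ad0s2f     /usr            ufs     rw              2       2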

The mount Command

The mount(8) command is what is ultimately used to mount file systems.

In its most basic form, you use:

# mount device mountpoint

Or, if mountpoint is specified in /etc/fstab, just:

# mount mountpoint
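For example, to mount the file system on a hypothetical partition ad1s1e at /mnt:

# mount /dev/ad1s1e /mnt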

There are plenty of options, as mentioned in the mount(8) manual page, but the most common are:

Mount Options

The -o option takes a comma-separated list of the options, including the following:
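One commonly used option is ro, which mounts the file system read-only; for example (the device name is again hypothetical):

# mount -o ro /dev/ad1s1e /mnt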

The umount Command

The umount(8) command takes, as a parameter, one of a mountpoint, a device name, or the -a or -A option.

All forms take -f to force unmounting, and -v for verbosity. Be warned that -f is not generally a good idea. Forcibly unmounting file systems might crash the computer or damage data on the file system.

-a and -A are used to unmount all mounted file systems, possibly modified by the file system types listed after -t. -A, however, does not attempt to unmount the root file system.
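For example, to unmount the file system mounted at /mnt, either the mount point or the device name can be given:

# umount /mnt
# umount /dev/ad1s1e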

Processes

DragonFly is a multi-tasking operating system. This means that it seems as though more than one program is running at once. Each program running at any one time is called a process. Every command you run will start at least one new process, and there are a number of system processes that run all the time, keeping the system functional.

Each process is uniquely identified by a number called a process ID, or PID, and, like files, each process also has one owner and group. The owner and group information is used to determine what files and devices the process can open, using the file permissions discussed earlier. Most processes also have a parent process. The parent process is the process that started them. For example, if you are typing commands to the shell then the shell is a process, and any commands you run are also processes. Each process you run in this way will have your shell as its parent process. The exception to this is a special process called init(8). init is always the first process, so its PID is always 1. init is started automatically by the kernel when DragonFly starts.

Two commands are particularly useful to see the processes on the system, ps(1) and top(1). The ps command is used to show a static list of the currently running processes, and can show their PID, how much memory they are using, the command line they were started with, and so on. The top command displays all the running processes, and updates the display every few seconds, so that you can interactively see what your computer is doing.

By default, ps only shows you the commands that are running and are owned by you. For example:

% ps

  PID  TT  STAT      TIME COMMAND
  298  p0  Ss     0:01.10 tcsh
 7078  p0  S      2:40.88 xemacs mdoc.xsl (xemacs-21.1.14)
37393  p0  I      0:03.11 xemacs freebsd.dsl (xemacs-21.1.14)
48630  p0  S      2:50.89 /usr/local/lib/netscape-linux/navigator-linux-4.77.bi
48730  p0  IW     0:00.00 (dns helper) (navigator-linux-)
72210  p0  R+     0:00.00 ps
  390  p1  Is     0:01.14 tcsh
 7059  p2  Is+    1:36.18 /usr/local/bin/mutt -y
 6688  p3  IWs    0:00.00 tcsh
10735  p4  IWs    0:00.00 tcsh
20256  p5  IWs    0:00.00 tcsh
  262  v0  IWs    0:00.00 -tcsh (tcsh)
  270  v0  IW+    0:00.00 /bin/sh /usr/X11R6/bin/startx -- -bpp 16
  280  v0  IW+    0:00.00 xinit /home/nik/.xinitrc -- -bpp 16
  284  v0  IW     0:00.00 /bin/sh /home/nik/.xinitrc
  285  v0  S      0:38.45 /usr/X11R6/bin/sawfish

As you can see in this example, the output from ps(1) is organized into a number of columns. PID is the process ID discussed earlier. PIDs are assigned starting from 1, go up to 99999, and wrap around back to the beginning when you run out. The TT column shows the tty the program is running on, and can safely be ignored for the moment. STAT shows the program's state, and again, can be safely ignored. TIME is the amount of time the program has been running on the CPU--this is usually not the elapsed time since you started the program, as most programs spend a lot of time waiting for things to happen before they need to spend time on the CPU. Finally, COMMAND is the command line that was used to run the program.

ps(1) supports a number of different options to change the information that is displayed. One of the most useful sets is auxww. a displays information about all the running processes, not just your own. u displays the username of the process' owner, as well as memory usage. x displays information about daemon processes, and ww causes ps(1) to display the full command line, rather than truncating it once it gets too long to fit on the screen.

The output from top(1) is similar. A sample session looks like this:

% top
last pid: 72257;  load averages:  0.13,  0.09,  0.03    up 0+13:38:33  22:39:10
47 processes:  1 running, 46 sleeping
CPU states: 12.6% user,  0.0% nice,  7.8% system,  0.0% interrupt, 79.7% idle
Mem: 36M Active, 5256K Inact, 13M Wired, 6312K Cache, 15M Buf, 408K Free
Swap: 256M Total, 38M Used, 217M Free, 15% Inuse


  PID USERNAME PRI NICE  SIZE    RES STATE    TIME   WCPU    CPU COMMAND
72257 nik       28   0  1960K  1044K RUN      0:00 14.86%  1.42% top
 7078 nik        2   0 15280K 10960K select   2:54  0.88%  0.88% xemacs-21.1.14
  281 nik        2   0 18636K  7112K select   5:36  0.73%  0.73% XF86_SVGA
  296 nik        2   0  3240K  1644K select   0:12  0.05%  0.05% xterm
48630 nik        2   0 29816K  9148K select   3:18  0.00%  0.00% navigator-linu
  175 root       2   0   924K   252K select   1:41  0.00%  0.00% syslogd
 7059 nik        2   0  7260K  4644K poll     1:38  0.00%  0.00% mutt
...

The output is split into two sections. The header (the first five lines) shows the PID of the last process to run, the system load averages (which are a measure of how busy the system is), the system uptime (time since the last reboot) and the current time. The other figures in the header relate to how many processes are running (47 in this case), how much memory and swap space has been taken up, and how much time the system is spending in different CPU states.

Below that are a series of columns containing similar information to the output from ps(1). As before you can see the PID, the username, the amount of CPU time taken, and the command that was run. top(1) also defaults to showing you the amount of memory space taken by the process. This is split into two columns, one for total size, and one for resident size--total size is how much memory the application has needed, and the resident size is how much it is actually using at the moment. In this example you can see that Netscape® has required almost 30 MB of RAM, but is currently only using 9 MB.

top(1) automatically updates this display every two seconds; this can be changed with the s option.
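For example, assuming the -s form of the option, the following starts top(1) with a five second delay between updates:

% top -s 5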

Daemons, Signals, and Killing Processes

When you run an editor it is easy to control the editor, tell it to load files, and so on. You can do this because the editor provides facilities to do so, and because the editor is attached to a terminal. Some programs are not designed to be run with continuous user input, and so they disconnect from the terminal at the first opportunity. For example, a web server spends all day responding to web requests, it normally does not need any input from you. Programs that transport email from site to site are another example of this class of application.

We call these programs daemons. Daemons were characters in Greek mythology; neither good or evil, they were little attendant spirits that, by and large, did useful things for mankind. Much like the web servers and mail servers of today do useful things. This is why the mascot for a number of BSD-based operating systems has, for a long time, been a cheerful looking daemon with sneakers and a pitchfork.

There is a convention to name programs that normally run as daemons with a trailing d. BIND is the Berkeley Internet Name Daemon (and the actual program that executes is called named), the Apache web server program is called httpd, the line printer spooling daemon is lpd and so on. This is a convention, not a hard and fast rule; for example, the main mail daemon for the Sendmail application is called sendmail, and not maild, as you might imagine.

Sometimes you will need to communicate with a daemon process. These communications are called signals, and you can communicate with a daemon (or with any other running process) by sending it a signal. There are a number of different signals that you can send--some of them have a specific meaning, others are interpreted by the application, and the application's documentation will tell you how that application interprets signals. You can only send a signal to a process that you own. If you send a signal to someone else's process with kill(1) or kill(2), permission will be denied. The exception to this is the root user, who can send signals to everyone's processes.

DragonFly will also send applications signals in some cases. If an application is badly written, and tries to access memory that it is not supposed to, DragonFly sends the process the Segmentation Violation signal (SIGSEGV). If an application has used the alarm(3) call to be alerted after a period of time has elapsed, then it will be sent the Alarm signal (SIGALRM), and so on.

Two signals can be used to stop a process, SIGTERM and SIGKILL. SIGTERM is the polite way to kill a process; the process can catch the signal, realize that you want it to shut down, close any log files it may have open, and generally finish whatever it is doing at the time before shutting down. In some cases a process may even ignore SIGTERM if it is in the middle of some task that can not be interrupted.

SIGKILL can not be ignored by a process. This is the I do not care what you are doing, stop right now signal. If you send SIGKILL to a process then DragonFly will stop that process there and then.

The other signals you might want to use are SIGHUP, SIGUSR1, and SIGUSR2. These are general purpose signals, and different applications will do different things when they are sent.

Suppose that you have changed your web server's configuration file--you would like to tell the web server to re-read its configuration. You could stop and restart httpd, but this would result in a brief outage period on your web server, which may be undesirable. Most daemons are written to respond to the SIGHUP signal by re-reading their configuration file. So instead of killing and restarting httpd you would send it the SIGHUP signal. Because there is no standard way to respond to these signals, different daemons will have different behavior, so be sure and read the documentation for the daemon in question.

Signals are sent using the kill(1) command, as this example shows.

Sending a Signal to a Process

This example shows how to send a signal to inetd(8). The inetd configuration file is /etc/inetd.conf, and inetd will re-read this configuration file when it is sent SIGHUP.

  1. Find the process ID of the process you want to send the signal to. Do this using ps(1) and grep(1). The grep(1) command is used to search through output, looking for the string you specify. This command is run as a normal user, and inetd(8) is run as root, so the ax options must be given to ps(1).

    % ps -ax | grep inetd
      198  ??  IWs    0:00.00 inetd -wW

    So the inetd(8) PID is 198. In some cases the grep inetd command might also occur in this output. This is because of the way ps(1) has to find the list of running processes.

  2. Use kill(1) to send the signal. Because inetd(8) is being run by root you must use su(1) to become root first.

    % su
    Password:
    # /bin/kill -s HUP 198

    In common with most UNIX® commands, kill(1) will not print any output if it is successful. If you send a signal to a process that you do not own then you will see kill: PID: Operation not permitted. If you mistype the PID you will either send the signal to the wrong process, which could be bad, or, if you are lucky, you will have sent the signal to a PID that is not currently in use, and you will see kill: PID: No such process.

Why Use /bin/kill? Many shells provide the kill command as a built in command; that is, the shell will send the signal directly, rather than running /bin/kill. This can be very useful, but different shells have a different syntax for specifying the name of the signal to send. Rather than try to learn all of them, it can be simpler just to use the /bin/kill ... command directly.

Sending other signals is very similar, just substitute TERM or KILL in the command line as necessary.
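For example, to ask the hypothetical inetd process from above to exit cleanly:

# /bin/kill -s TERM 198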

Important: Killing random processes on the system can be a bad idea. In particular, init(8), process ID 1, is very special. Running /bin/kill -s KILL 1 is a quick way to shutdown your system. Always double check the arguments you run kill(1) with before you press Return.

Shells

In DragonFly, a lot of everyday work is done in a command line interface called a shell. A shell's main job is to take commands from the input channel and execute them. A lot of shells also have built-in functions to help with everyday tasks such as file management, file globbing, command line editing, command macros, and environment variables. DragonFly comes with a set of shells, such as sh, the Bourne Shell, and tcsh, the improved C-shell. Many other shells are available from pkgsrc®, such as zsh and bash.

Which shell do you use? It is really a matter of taste. If you are a C programmer you might feel more comfortable with a C-like shell such as tcsh. If you have come from Linux or are new to a UNIX® command line interface you might try bash. The point is that each shell has unique properties that may or may not work with your preferred working environment, and that you have a choice of what shell to use.

One common feature of a shell is filename completion. After typing the first few letters of a command or filename, you can usually have the shell automatically complete the rest of the command or filename by hitting the Tab key on the keyboard. Here is an example. Suppose you have two files called foobar and foo.bar. You want to delete foo.bar. So what you would type on the keyboard is: rm fo[Tab].[Tab].

The shell would print out rm foo[BEEP].bar.

The [BEEP] is the console bell, which is the shell telling me it was unable to totally complete the filename because there is more than one match. Both foobar and foo.bar start with fo, but it was able to complete to foo. If you type in ., then hit Tab again, the shell would be able to fill in the rest of the filename for you.

Another feature of the shell is the use of environment variables. Environment variables are key/value pairs stored in the shell's environment space. This space can be read by any program invoked by the shell, and thus contains a lot of program configuration. Here is a list of common environment variables and what they mean:

Variable Description
USER Current logged in user's name.
PATH Colon separated list of directories to search for binaries.
DISPLAY Network name of the X11 display to connect to, if available.
SHELL The current shell.
TERM The name of the user's terminal. Used to determine the capabilities of the terminal.
TERMCAP Database entry of the terminal escape codes to perform various terminal functions.
OSTYPE Type of operating system. e.g., DragonFly.
MACHTYPE The CPU architecture that the system is running on.
EDITOR The user's preferred text editor.
PAGER The user's preferred text pager.
MANPATH Colon separated list of directories to search for manual pages.

Setting an environment variable differs somewhat from shell to shell. For example, in the C-Style shells such as tcsh and csh, you would use setenv to set environment variables. Under Bourne shells such as sh and bash, you would use export to set your current environment variables. For example, to set or modify the EDITOR environment variable, under csh or tcsh a command like this would set EDITOR to /usr/pkg/bin/emacs:

% setenv EDITOR /usr/pkg/bin/emacs

Under Bourne shells:

% export EDITOR="/usr/pkg/bin/emacs"

You can also make most shells expand the environment variable by placing a $ character in front of it on the command line. For example, echo $TERM would print out whatever $TERM is set to, because the shell expands $TERM and passes it on to echo.

Shells treat a number of special characters, called meta-characters, as special representations of data. The most common one is the * character, which represents any number of characters in a filename. These special meta-characters can be used to do filename globbing. For example, typing in echo * is almost the same as typing in ls because the shell takes all the files that match * and puts them on the command line for echo to see.

To prevent the shell from interpreting these special characters, they can be escaped from the shell by putting a backslash (\) character in front of them. echo $TERM prints whatever your terminal is set to. echo \$TERM prints $TERM as is.

Changing Your Shell

The easiest way to change your shell is to use the chsh command. Running chsh will place you into the editor that is in your EDITOR environment variable; if it is not set, you will be placed in vi. Change the Shell: line accordingly.

You can also give chsh the -s option; this will set your shell for you, without requiring you to enter an editor. For example, if you wanted to change your shell to bash, the following should do the trick:

% chsh -s /usr/pkg/bin/bash

Note: The shell that you wish to use must be present in the /etc/shells file. If you have installed a shell from the pkgsrc tree, then this should have been done for you already. If you installed the shell by hand, you must add it yourself.

For example, if you installed bash by hand and placed it into /usr/local/bin, you would want to:

# echo "/usr/local/bin/bash" >> /etc/shells

Then rerun chsh.

Text Editors

A lot of configuration in DragonFly is done by editing text files. Because of this, it would be a good idea to become familiar with a text editor. DragonFly comes with a few as part of the base system, and many more are available in the pkgsrc® tree.

The easiest and simplest editor to learn is an editor called ee, which stands for easy editor. To start ee, one would type at the command line ee filename, where filename is the name of the file to be edited. For example, to edit /etc/rc.conf, type in ee /etc/rc.conf. Once inside of ee, all of the commands for manipulating the editor's functions are listed at the top of the display. The caret ^ character represents the Ctrl key on the keyboard, so ^e expands to the key combination Ctrl+e. To leave ee, hit the Esc key, then choose "leave editor". The editor will prompt you to save any changes if the file has been modified.

DragonFly also comes with more powerful text editors such as vi as part of the base system, while other editors, like emacs and vim, are part of the pkgsrc tree. These editors offer much more functionality and power at the expense of being a little more complicated to learn. However, if you plan on doing a lot of text editing, learning a more powerful editor such as vim or emacs will save you much more time in the long run.

Devices and Device Nodes

A device is a term used mostly for hardware-related activities in a system, including disks, printers, graphics cards, and keyboards. When DragonFly boots, the majority of what DragonFly displays are devices being detected. You can look through the boot messages again by viewing /var/run/dmesg.boot.
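To page through the saved boot messages, a pager such as less(1) can be used:

% less /var/run/dmesg.boot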

For example, acd0 is the first IDE CDROM drive, while kbd0 represents the keyboard.

Most of these devices in a UNIX® operating system must be accessed through special files called device nodes, which are located in the /dev directory.

The device nodes in the /dev directory are created and destroyed automatically on DragonFly >= 2.4, by means of the device file system (devfs).

Binary Formats

To understand why DragonFly uses the elf(5) format, you must first know a little about the three currently dominant executable formats for UNIX®: a.out, COFF, and ELF.

So, why are there so many different formats? Back in the dim, dark past, there was simple hardware. This simple hardware supported a simple, small system. a.out was completely adequate for the job of representing binaries on this simple system (a PDP-11). As people ported UNIX from this simple system, they retained the a.out format because it was sufficient for the early ports of UNIX to architectures like the Motorola 68k, VAXen, etc.

Then some bright hardware engineer decided that if he could force software to do some sleazy tricks, then he would be able to shave a few gates off the design and allow his CPU core to run faster. While it was made to work with this new kind of hardware (known these days as RISC), a.out was ill-suited for this hardware, so many formats were developed to get better performance from this hardware than the limited, simple a.out format could offer. Things like COFF, ECOFF, and a few obscure others were invented and their limitations explored before things seemed to settle on ELF.

In addition, program sizes were getting huge and disks (and physical memory) were still relatively small so the concept of a shared library was born. The VM system also became more sophisticated. While each one of these advancements was done using the a.out format, its usefulness was stretched more and more with each new feature. In addition, people wanted to dynamically load things at run time, or to junk parts of their program after the init code had run to save in core memory and swap space. Languages became more sophisticated and people wanted code called before main automatically. Lots of hacks were done to the a.out format to allow all of these things to happen, and they basically worked for a time. In time, a.out was not up to handling all these problems without an ever increasing overhead in code and complexity. While ELF solved many of these problems, it would be painful to switch from the system that basically worked. So ELF had to wait until it was more painful to remain with a.out than it was to migrate to ELF.

ELF is more expressive than a.out and allows more extensibility in the base system. The ELF tools are better maintained, and offer cross compilation support, which is important to many people. ELF may be a little slower than a.out, but trying to measure it can be difficult. There are also numerous details that are different between the two in how they map pages, handle init code, etc. None of these are very important, but they are differences.

For More Information

Manual Pages

The most comprehensive documentation on DragonFly is in the form of manual pages. Nearly every program on the system comes with a short reference manual explaining the basic operation and various arguments. These manuals can be viewed with the man command. Use of the man command is simple:

% man command

command is the name of the command you wish to learn about. For example, to learn more about the ls command, type:

% man ls

The online manual is divided up into numbered sections:

  1. User commands.
  2. System calls and error numbers.
  3. Functions in the C libraries.
  4. Device drivers.
  5. File formats.
  6. Games and other diversions.
  7. Miscellaneous information.
  8. System maintenance and operation commands.
  9. Kernel internals.

In some cases, the same topic may appear in more than one section of the online manual. For example, there is a chmod user command and a chmod() system call. In this case, you can tell the man command which one you want by specifying the section:

% man 1 chmod

This will display the manual page for the user command chmod. References to a particular section of the online manual are traditionally placed in parentheses in written documentation, so chmod(1) refers to the chmod user command and chmod(2) refers to the system call.

This is fine if you know the name of the command and simply wish to know how to use it, but what if you cannot recall the command name? You can use man to search for keywords in the command descriptions by using the -k switch:

% man -k mail

With this command you will be presented with a list of commands that have the keyword mail in their descriptions. This is actually functionally equivalent to using the apropos command.

So, you are looking at all those fancy commands in /usr/bin but do not have the faintest idea what most of them actually do? Simply do:

% cd /usr/bin
% man -f *

or

% cd /usr/bin
% whatis *

which does the same thing.

GNU Info Files

DragonFly includes many applications and utilities produced by the Free Software Foundation (FSF). In addition to manual pages, these programs come with more extensive hypertext documents called info files, which can be viewed with the info command or, if you installed emacs, the info mode of emacs. To use the info(1) command, simply type:

% info

For a brief introduction, type h. For a quick command reference, type ?.

DPorts and pkgng

Dports is DragonFly's own third-party software build system. It is based on FreeBSD's Ports Collection. Differences between ports and DPorts are intentionally kept to a minimum, both to maintain familiarity for mutual users of both operating systems and also to leverage the tremendous amount of work the FreeBSD contributors put into ports. DPorts can and does feature ports unique to DragonFly, so it's truly a native system.

The pkgng tool called "pkg" is a modern and fast binary package manager. It was developed for FreeBSD, but PC-BSD used it in production first, followed soon after by DragonFly. In the future, it will be the only binary package manager on FreeBSD, just as DPorts is currently the only port manager.

pkgng is not a replacement for port management tools like ports-mgmt/portmaster or ports-mgmt/portupgrade. While ports-mgmt/portmaster and ports-mgmt/portupgrade can install third-party software from both binary packages and DPorts, pkgng installs only binary packages.

Getting started with pkgng

DragonFly daily snapshots and Releases (starting with 3.4) come with pkgng already installed. Upgrades from earlier releases, however, will not have it. If the "pkg" program is missing on the system for any reason, it can be quickly bootstrapped without having to build it from source.

To ensure pkgng on a DragonFly BSD 3.4 or higher system is ready for use, run the following BEFORE you try to use pkg the first time:

# cd /usr
# make dports-create
# rm -rf /usr/pkg
# pkg upgrade
# rehash

Since you may need to manually edit the configuration file /usr/local/etc/pkg.conf, here is the usual command to edit it using the vi editor:

 # vi /usr/local/etc/pkg.conf

Before using pkg, consult the man page (man pkg), and then try these examples:

# pkg search editors
# pkg install vim

To bootstrap pkgng with a download on a very old version of DragonFly that is still using pkgsrc run:

# make pkg-bootstrap
# rehash
# pkg-static install -y pkg
# rehash

Note that this step is unnecessary for any newly installed release from DragonFly 3.4 onwards.

Configuring pkgng

Older versions of pkgng saved their configuration at /usr/local/etc/pkg.conf; this file made reference to a PACKAGESITE. pkgng will still work based on this file, but will output errors:

# pkg update
pkg: PACKAGESITE in pkg.conf is deprecated. Please create a repository configuration file
Updating repository catalogue
pkg: Warning: use of http:// URL scheme with SRV records is deprecated: switch to pkg+http://

Heed the errors: comment out (hash out) the PACKAGESITE line, save the file, and move on. This can be done with vi:

# vi /usr/local/etc/pkg.conf

There will be two lines in the file like this:

# Default Dports package server (California)
PACKAGESITE: http://mirror-master.dragonflybsd.org/dports/${ABI}/LATEST

Hash out the offending line:

# Default Dports package server (California)
# PACKAGESITE: http://mirror-master.dragonflybsd.org/dports/${ABI}/LATEST

Note that, as of the time of writing, there are two working package repositories:

# Default Dports package server (California)
# PACKAGESITE: http://mirror-master.dragonflybsd.org/dports/${ABI}/LATEST

# European mirrors
[...]
#PACKAGESITE: http://dfly.schlundtech.de/dports/${ABI}/LATEST

Test their performance---we will be using the fastest one. This may, or may not, be the one closest to you (the California site for the New World, the German site for the Old World).

# ping schlundtech.de
PING schlundtech.de (85.236.36.90): 56 data bytes
64 bytes from 85.236.36.90: icmp_seq=0 ttl=49 time=101.433 ms
64 bytes from 85.236.36.90: icmp_seq=1 ttl=49 time=59.177 ms
64 bytes from 85.236.36.90: icmp_seq=2 ttl=49 time=79.550 ms
64 bytes from 85.236.36.90: icmp_seq=3 ttl=49 time=88.268 ms
64 bytes from 85.236.36.90: icmp_seq=4 ttl=49 time=120.060 ms
[...]
--- schlundtech.de ping statistics ---
20 packets transmitted, 19 packets received, 5.0% packet loss
round-trip min/avg/max/stddev = 49.555/96.064/186.662/33.559 ms
# ping mirror-master.dragonflybsd.org
PING avalon.dragonflybsd.org (199.233.90.72): 56 data bytes
64 bytes from 199.233.90.72: icmp_seq=0 ttl=47 time=208.013 ms
64 bytes from 199.233.90.72: icmp_seq=1 ttl=47 time=256.441 ms
64 bytes from 199.233.90.72: icmp_seq=2 ttl=47 time=281.436 ms
64 bytes from 199.233.90.72: icmp_seq=3 ttl=47 time=281.103 ms
64 bytes from 199.233.90.72: icmp_seq=4 ttl=47 time=285.440 ms
[...]
--- avalon.dragonflybsd.org ping statistics ---
19 packets transmitted, 19 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 208.013/264.017/334.180/31.549 ms

Now, navigate to /usr/local/etc/pkg/repos/ and copy one of the sample configuration files you find there. Edit the copy:

# cd /usr/local/etc/pkg/repos/
# ls
df-latest.conf.sample   df-releases.conf.sample
# cp -v df-latest.conf.sample df-latest.conf
df-latest.conf.sample -> df-latest.conf
# chmod -v 644 df-latest.conf
df-latest.conf
# vim df-latest.conf

Enable whichever server was faster (Avalon is American, SchlundTech is German):

Avalon: {
    url             : http://mirror-master.dragonflybsd.org/dports/${ABI}/LATEST,
    [...]
    enabled         : no
}
SchlundTech: {
    url             : http://dfly.schlundtech.de/dports/${ABI}/LATEST,
    enabled         : yes
}

Basic pkgng Operations

Usage information for pkgng is available in the pkg(8) manual page, or by running pkg without additional arguments.

Each pkgng command argument is documented in a command-specific manual page. To read the manual page for pkg install, for example, run either:

# pkg help install
# man pkg-install

Obtaining Information About Installed Packages with pkgng

Information about the packages installed on a system can be viewed by running pkg info. Similar to pkg_info(1), the package version and description for all packages will be listed. Information about a specific package is available by running:

# pkg info packagename

For example, to see which version of pkgng is installed on the system, run:

# pkg info pkg
pkg-1.0.12                   New generation package manager

Installing and Removing Packages with pkgng

In general, most DragonFly users will install binary packages by typing:

# pkg install <packagename>

For example, to install curl:

# pkg install curl

Updating repository catalogue
Repository catalogue is up-to-date, no need to fetch fresh copy
The following packages will be installed:

    Installing ca_root_nss: 3.13.5
    Installing curl: 7.24.0

The installation will require 4 MB more space

1 MB to be downloaded

Proceed with installing packages [y/N]: y
ca_root_nss-3.13.5.txz           100%    255KB   255.1KB/s  255.1KB/s   00:00
curl-7.24.0.txz                  100%   1108KB     1.1MB/s    1.1MB/s   00:00
Checking integrity... done
Installing ca_root_nss-3.13.5... done
Installing curl-7.24.0... done

The new package and any additional packages that were installed as dependencies can be seen in the installed packages list:

# pkg info
ca_root_nss-3.13.5    The root certificate bundle from the Mozilla Project
curl-7.24.0           Non-interactive tool to get files from FTP, GOPHER, HTTP(S) servers
pkg-1.0.12            New generation package manager

Packages that are no longer needed can be removed with pkg delete. For example, if it turns out that curl is not needed after all:

# pkg delete curl
The following packages will be deleted:

    curl-7.24.0_1

The deletion will free 3 MB

Proceed with deleting packages [y/N]: y
Deleting curl-7.24.0_1... done

Upgrading Installed Packages with pkgng

Packages that are outdated can be found with pkg version. If a local ports tree does not exist, pkg-version(8) will use the remote repository catalogue, otherwise the local ports tree will be used to identify package versions.
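For example, assuming the -l limit flag described in pkg-version(8), the following lists only the installed packages that are older than what the repository offers:

# pkg version -l "<"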

Packages can be upgraded to newer versions with pkgng. Suppose a new version of curl has been released. The local package can be upgraded to the new version:

# pkg upgrade
Updating repository catalogue
repo.txz            100%    297KB   296.5KB/s   296.5KB/s   00:00
The following packages will be upgraded:

Upgrading curl: 7.24.0 -> 7.24.0_1

1 MB to be downloaded

Proceed with upgrading packages [y/N]: y
curl-7.24.0_1.txz   100%    1108KB  1.1MB/s       1.1MB/s   00:00
Checking integrity... done
Upgrading curl from 7.24.0 to 7.24.0_1... done

Auditing Installed Packages with pkgng

Occasionally, vulnerabilities may be discovered in software within DPorts. pkgng includes built-in auditing. To audit the software installed on the system, type:

# pkg audit -F

Advanced pkgng Operations

Automatically Removing Leaf Dependencies with pkgng

Removing a package may leave behind unnecessary dependencies, like security/ca_root_nss in the example above. Such packages are still installed, but nothing depends on them any more. Unneeded packages that were installed as dependencies can be automatically detected and removed:

# pkg autoremove
Packages to be autoremoved:
    ca_root_nss-3.13.5

The autoremoval will free 723 kB

Proceed with autoremoval of packages [y/N]: y
Deinstalling ca_root_nss-3.13.5... done

Backing Up the pkgng Package Database

pkgng includes its own package database backup mechanism. To manually back up the package database contents, type:

# pkg backup -d <pkgng.db>

Additionally, pkgng includes a periodic(8) script to automatically back up the package database daily if daily_backup_pkgng_enable is set to YES in periodic.conf(5). To prevent the pkg_install periodic script from also backing up the package database, set daily_backup_pkgdb_enable to NO in periodic.conf(5).

To restore the contents of a previous package database backup, run:

# pkg backup -r </path/to/pkgng.db>

Removing Stale pkgng Packages

By default, pkgng stores binary packages in a cache directory as defined by PKG_CACHEDIR in pkg.conf(5). When upgrading packages with pkg upgrade, old versions of the upgraded packages are not automatically removed.

To remove the outdated binary packages, type:

# pkg clean

Modifying pkgng Package Metadata

pkgng has a built-in command to update package origins. For example, if lang/php5 was originally at version 5.3, but has been renamed to lang/php53 for the inclusion of version 5.4, the package database can be updated to deal with this. For pkgng, the syntax is:

# pkg set -o <category/oldport>:<category/newport>

For example, to change the package origin for the above example, type:

# pkg set -o lang/php5:lang/php53

As another example, to update lang/ruby18 to lang/ruby19, type:

# pkg set -o lang/ruby18:lang/ruby19

As a final example, to change the origin of the libglut shared libraries from graphics/libglut to graphics/freeglut, type:

# pkg set -o graphics/libglut:graphics/freeglut

Note: When changing package origins, in most cases it is important to reinstall packages that are dependent on the package that has had the origin changed. To force a reinstallation of dependent packages, type:

# pkg install -Rf graphics/freeglut

Building DPorts from source

The average user will probably not build packages from source. However, it's easy to do and it can be done even when packages have already been pre-installed on the system. Common reasons to build from source are:

Installing DPorts tree

DragonFly 3.4 or later is the minimum version that can build DPorts from source.

It is probable that pkgsrc binaries are already installed, because pkgsrc comes bootstrapped with new systems. It is necessary to rename the /usr/pkg directory so that the existing pkgsrc binary tools and libraries don't get accidentally used while building DPorts, causing breakage. To install the DPorts tree, type:

# cd /usr
# make dports-create-shallow

If the /usr/pkg directory has already been renamed, git won’t be in the search path any more. One option is to download a tarball of DPorts and unpack it. To do this, type:

# cd /usr
# make dports-download

For future updates, pulling delta changes via git is fastest, so it is suggested to convert the static tree to a git repository by typing:

# cd /usr/dports/devel/git
# make install
# cd /usr
# rm -rf /usr/dports
# make dports-create-shallow

The git repository is hosted on the GitHub account of John Marino.

Final thoughts

Building from source works similarly to ports and pkgsrc: cd into the appropriate program's directory and type 'make' to build, 'make install' to install the software, 'make clean' to clean up work files, and so on. Use 'make config-recursive' if you want to set all the port's options, and the options of its dependencies, immediately instead of during the build.

To take all the default build options and avoid getting the pop-up dialog box, set NO_DIALOG=yes either on the command line or in the make.conf file.

If you just want to set the options for one package, and accept the defaults for all of its dependencies, run 'make config' in the directory of the package you want non-default options for, and then 'make NO_DIALOG=yes'. Note that this is only necessary if you want to build from source with a non-default set of options, or if no pre-built binary package is available yet. An example is sketched below.
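For example, to set the options for a single hypothetical port and then build and install it and its dependencies with default options:

# cd /usr/dports/editors/vim
# make config
# make NO_DIALOG=yes install clean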

More reading

Disclaimer

DragonFly, up to and including version 3.4, used pkgsrc to manage third party software packages. DragonFly switched to dports at the 3.6 release.

This page is still useful for anyone wanting to use pkgsrc, but the recommended packaging method is dports, which is covered in a similar document here:

http://www.dragonflybsd.org/docs/howtos/HowToDPorts/


pkgsrc on DragonFly

DragonFly uses a specially crafted Makefile in /usr and a git mirror of the official pkgsrc repository to make pkgsrc distribution more user-friendly.

The basics of the pkgsrc system can be found in NetBSD's Pkgsrc Guide, and can be considered the canonical resource.

Overview

History

Pkgsrc is a packaging system that was originally created for NetBSD. It has been ported to DragonFly, along with other operating systems. Pkgsrc is very similar to FreeBSD's ports mechanism.

Overview

The pkgsrc collection supplies a set of files designed to automate the process of compiling an application from source code. Remember that there are a number of steps you would normally carry out if you compiled a program yourself (downloading, unpacking, patching, compiling, installing). The files that make up a pkgsrc source collection contain all the necessary information to allow the system to do this for you. You run a handful of simple commands and the source code for the application is automatically downloaded, extracted, patched, compiled, and installed for you. In fact, the pkgsrc source subsystem can also be used to generate packages which can later be manipulated with pkg_add and the other package management commands that will be introduced shortly.

Pkgsrc understands dependencies. Suppose you want to install an application that depends on a specific library being installed. Both the application and the library have been made available through the pkgsrc collection. If you use the pkg_add command or the pkgsrc subsystem to add the application, both will notice that the library has not been installed, and automatically install the library first. You might be wondering why pkgsrc® bothers with both. Binary packages and the source tree both have their own strengths, and which one you use will depend on your own preference.

Binary Package Benefits

Pkgsrc source Benefits

To keep track of pkgsrc releases, subscribe to the NetBSD pkgsrc users mailing list. It is also useful to watch the DragonFly users mailing list, as errors with pkgsrc on DragonFly should be reported there.

Warning: Before installing any application, you should check http://www.pkgsrc.org/ for security issues related to your application.

Audit-packages will automatically check all installed applications for known vulnerabilities; a check will also be performed before any application build. You can also run the command audit-packages -d yourself after you have installed some packages.

Note: Binary packages and source packages are effectively the same software and can be manipulated with the same pkg_* tools.

Installing pkgsrc

The basic pkgsrc tools are provided with every DragonFly system as part of installation. However, you still need to download the pkgsrc tree for building applications with these tools.

Set GITHOST in /etc/make.conf or set it as an environment variable to select a different download location, if desired. See the mirrors page for available mirrors.
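
For instance, a different mirror could be selected by adding a line like this to /etc/make.conf (the hostname is only an illustration; pick a real one from the mirrors page):

GITHOST=git.example.org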

This downloads the stable version of the pkgsrc tree from the default mirror, if you didn't set GITHOST. As root:

# cd /usr
# make pkgsrc-create

to fetch the initial pkgsrc repository from the net, or

# cd /usr
# make pkgsrc-update

to update.

Note: If your DragonFly install is not up to date, you might have ended up with an old release of the pkgsrc tree.

# cd /usr/pkgsrc
# git branch

will show what release you are on. See Tracking the stable branch for more information.

Tracking the stable branch

There are quarterly releases of pkgsrc that are specifically designed for stability. You should in general follow these, rather than the bleeding edge pkgsrc. When a new branch is out you need to set up a local branch tracking that one. 'make pkgsrc-update' will not do this for you.

To see the available remote branches:

# cd /usr/pkgsrc 
# git pull
# git branch -r

To create a local branch, tracking the remote quarterly release:

# cd /usr/pkgsrc 
# git branch pkgsrc-2010Q4 origin/pkgsrc-2010Q4

Branch naming format is 'pkgsrc-YYYYQX', where YYYY is the year and QX is quarters 1-4 of the year. Check pkgsrc.org to see the name of the latest stable branch.

After adding a new branch, it can be downloaded with:

# cd /usr/pkgsrc 
# git checkout pkgsrc-2010Q4
# git pull

Dealing with pkgsrc packages

The following section explains how to find, install and remove pkgsrc packages.

Finding Your Application

Before you can install any applications you need to know what you want, and what the application is called. DragonFly's list of available applications is growing all the time. Fortunately, there are a number of ways to find what you want:

Since DragonFly 1.11 pkg_search(1) is included in the base system. pkg_search(1) searches an already installed pkgsrc INDEX for a given package name. If pkgsrc is not installed or the INDEX file is missing, it fetches the pkg_summary(5) file.

# pkg_search fvwm
fvwm-2.4.20nb1          Newer version of X11 Virtual window manager
fvwm-2.5.24             Development version of X11 Virtual window manager
fvwm-themes-0.6.2nb8    Configuration framework for fvwm2 with samples
fvwm-wharf-1.0nb1       Copy of AfterStep's Wharf compatible with fvwm2
fvwm1-1.24rnb1          Virtual window manager for X

# pkg_search -v fvwm-2.5
Name    : fvwm-2.5.24-50
Dir     : wm/fvwm-devel                                     
Desc    : Development version of X11 Virtual window manager 
URL     : any                                               
Deps    : perl>=5.0 gettext-lib>=0.14.5 [...]

It's also possible to issue the command

# cd /usr/pkgsrc/
# bmake search key='package you are looking for'

from the /usr/pkgsrc directory.

It's also possible to browse websites that show all the available pkgsrc packages, such as http://pkgsrc.se/ .

Installing applications

Downloading a binary package is almost always faster than building from source, but not all programs in pkgsrc can be redistributed as a binary. In most cases, you will want to download a binary package if possible, and build from source if one is not available.

The bin-install target on DragonFly (with pkgsrc from 2011/02/07 and later) will do just that:

# cd /usr/pkgsrc/misc/screen
# bmake bin-install clean

This will download and install the appropriate screen binary package if it exists, and try building from source if it can't complete the download.

Installing applications, source only

Packages are built by going into the appropriate directory and issuing bmake install clean. For example, to build the screen package you need to issue the following commands.

# cd /usr/pkgsrc/misc/screen
# bmake install clean

To find out the options that can affect how a program is built:

# bmake show-options

To change options:

# bmake PKG_OPTIONS.<package_name>="-option1 option2" install clean

Listing an option enables it. Listing an option with a "-" before it disables the option.

To make these option changes permanent for every future build or upgrade of this package, put a similar line in /usr/pkg/etc/mk.conf:

PKG_OPTIONS.<package_name>=-option1 option2
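
For instance, for a hypothetical package foobar, the following mk.conf line would disable its ssl option and enable its inet6 option (package and option names are only illustrative):

PKG_OPTIONS.foobar=-ssl inet6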

Installing applications, binary only

Binary packages can be installed using pkg_radd:

# pkg_radd screen

This program works by setting the PKG_PATH environment variable to the appropriate path for the operating system and architecture to a remote repository of binary packages, and then using pkg_add to get packages. This will install most packages, but will not upgrade packages that are already installed.

You can manually set BINPKG_BASE and use pkg_add to get the same effect, using a different server.

# setenv BINPKG_BASE http://mirror-master.dragonflybsd.org/packages
# pkg_add screen

Issues with pre-built packages

List all installed applications

To obtain a list of all the packages that are installed on your system:

# pkg_info

To see if certain packages have been installed, filter for the name of the package. This example will show all xorg-related packages currently installed on the system:

# pkg_info | grep xorg

Removing packages

If a program was installed as a package:

# pkg_delete packagename

If a package was installed from the source files, you can also change to the directory they were installed from and issue the command:

# bmake deinstall

Note that these methods are effectively interchangeable. Either will work whether the package was originally installed from source or binary.

Remove associated files needed for building a package

To remove the work file from building a package, and the package's dependencies:

# bmake clean clean-depends

This can be combined with other steps:

# bmake install clean clean-depends

Upgrading packages

There are a number of ways to upgrade pkgsrc packages; some of these are built in and some are themselves packages installable with pkgsrc. This list is not necessarily comprehensive.

Update pkgsrc system packages

Note: Sometimes the basic pkgsrc tools (bmake, pkg_install and bootstrap-mk-files) need to be upgraded. However, they can't simply be deleted and replaced, since you need the tool itself to accomplish the replacement. The solution is to build a separate binary package before deletion, and then install that package.

# cd /usr/pkgsrc/devel/bmake
or
# cd /usr/pkgsrc/pkgtools/pkg_install
or 
# cd /usr/pkgsrc/pkgtools/bootstrap-mk-files

# env USE_DESTDIR=yes bmake package
# bmake clean-depends clean

And go to the packages directory and install the binary package with

# cd /usr/pkgsrc/packages/All
# pkg_add -u <pkg_name> (i.e. the name of the .tgz file).

bmake replace

Run in the /usr/pkgsrc directory that corresponds to the installed package; the software is first built and then the installed version is replaced.

# cd /usr/pkgsrc/chat/ircII
# bmake replace

pkg_rolling-replace

pkg_rolling-replace replaces packages one by one and offers a more convenient way of managing upgrades. It runs bmake replace on one package at a time, sorting the packages being replaced according to their interdependencies, which avoids most duplicate rebuilds. Once pkg_rolling-replace is installed you can update the packages with the following steps.

# cd /usr && make pkgsrc-update
# pkg_rolling-replace -u

pkgin

Downloads and installs binary packages. Check the mirrors page for sites carrying binary packages to use with pkgin. You can run the following commands to get the packages updated. This assumes that pkgin is already configured. Please consult the documentation and the man page on how to do so.

# pkgin update
# pkgin full-upgrade 

pkg_chk

pkg_chk updates packages by removing them and rebuilding them. Warning: programs are unavailable until their rebuild finishes, and if a package fails to rebuild it will simply be missing. pkg_chk requires a few steps in order to work correctly; they are listed here.

# pkg_chk -g  # make initial list of installed packages
# pkg_chk -r  # remove all packages that are not up to date and packages that depend on them
# pkg_chk -a  # install all missing packages (use binary packages, this is the default)
# pkg_chk -as # install all missing packages (build from source)

The above process removes all packages at once and installs the missing packages one by one. This can cause longer disruption of services when the removed package has to wait a long time for its turn to get installed.

pkg_add -u

Point at a local or online binary archive location to download and update packages.

rpkgmanager

This requires that you've set up rpkgmanager first. Read more about rpkgmanager here.

# yes | rpkgmanager.rb

Start pkgsrc applications on system startup

Packages often install rc.d scripts to control software running on startup. To specify where the rc.d scripts from the installed packages should go, add the following lines to your /usr/pkg/etc/mk.conf file:

RCD_SCRIPTS_DIR=/etc/rc.d
PKG_RCD_SCRIPTS=YES

These options can also be set in the environment to activate them for binary packages. The packages will still have to be enabled in /etc/rc.conf to run at boot. If these options aren't set, the rc file will be placed in /usr/pkg/share/examples/rc.d/ and will need to be copied manually to /etc/rc.d, as shown below.
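
For example, for a hypothetical package that installs a foobar rc script and was built without these options set:

# cp /usr/pkg/share/examples/rc.d/foobar /etc/rc.d/
# echo 'foobar_enable="YES"' >> /etc/rc.conf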

Many other options can be set in /usr/pkg/etc/mk.conf; see /usr/pkgsrc/mk/defaults/mk.conf for examples.

Miscellaneous topics

Post-installation Activities

After installing a new application you will normally want to read any documentation it may have included, edit any configuration files that are required, ensure that the application starts at boot time (if it is a daemon), and so on. The exact steps you need to take to configure each application will obviously be different. However, if you have just installed a new application and are wondering "What now?", these tips might help:

Use pkg_info(1) to find out which files were installed, and where. For example, if you have just installed Foo_Package version 1.0.0, then this command

# pkg_info -L foopackage-1.0.0 | less

will show all the files installed by the package. Pay special attention to files in man/ directories, which will be manual pages, etc/ directories, which will be configuration files, and doc/, which will be more comprehensive documentation. If you are not sure which version of the application was just installed, a command like this

# pkg_info | grep -i foopackage

will find all the installed packages that have foopackage in the package name. Replace foopackage in your command line as necessary.

Once you have identified where the application's manual pages have been installed, review them using man(1). Similarly, look over the sample configuration files, and any additional documentation that may have been provided. If the application has a web site, check it for additional documentation, frequently asked questions, and so forth. If you are not sure of the web site address it may be listed in the output from

# pkg_info foopackage-1.0.0

A WWW: line, if present, should provide a URL for the application's web site.

Dealing with Broken Packages

If you come across a package that does not work for you, there are a few things you can do, including:

  1. Fix it! The pkgsrc Guide includes detailed information on the pkgsrc® infrastructure so that you can fix the occasional broken package or even submit your own!

  2. Send email to the maintainer of the package first. Type bmake maintainer or read the Makefile to find the maintainer's email address. Remember to include the name and version of the package (send the $NetBSD: line from the Makefile) and the output leading up to the error when you email the maintainer. If you do not get a response from the maintainer, you can try the pkgsrc users mailing list.

  3. Grab a pre-built package from a mirror site near you.

What is WIP?

Packages that can be built within the pkgsrc framework but are not yet necessarily ready for production use can be found in http://pkgsrc-wip.sourceforge.net. These packages need to be downloaded separately; check the website for details. Packages in this collection are in development and may not build successfully.

Links

The X Window System

Updated for X.Org's X11 server by Ken Tom and Marc Fonvieille. Updated for DragonFly by Victor Balada Diaz. Updated for 2014 pkgng by Warren Postma

Synopsis

This chapter will cover the installation and some configuration of the usual way of giving your DragonFly BSD system an X-Windows style Graphical User Interface (GUI) and a modern Desktop Environment. In Unix systems, the graphical drawing system is provided by the combination of an X11R6 compliant X-Windows Server, such as the X.org server, and other software such as Window Managers and Desktop Environments. This multi-layered approach may be surprising to people coming from systems like the Mac or Windows, where these components are not as flexible and are not provided as so many separately installed and configured pieces.

For more information on the video hardware support in X.org, check the X.org web site. If you have problems configuring your X server, just search the web. There are lots of tutorials and guides on how to set up your X properly, if the information in this page is not enough for your situation.

Before reading this chapter, you should know how to install additional third-party software. Read the dports section of the documentation, for DragonFly 3.4 and later.

You may find the FreeBSD X Configuration instructions apply exactly and unchanged in DragonFly BSD. They are found here

Understanding X

What is X.Org

X.Org is the most popular free implementation of the X11 specification. The X11 specification is an open standard, and there are other implementations, some commercial, and some free.

The Window Manager and the Desktop Environment

An X Server is a very low level piece of software. It does not provide any way to move windows around or resize them. It does not provide a title bar on the top of your windows, or a dock, or any menus.

These things are the job, in the oldest style of X environment, of your window manager, or in more recent times, of a Desktop Environment.

Installing X.org by itself does not give you any window manager or any desktop environment. You will have to choose one and install it yourself. Until you select one, your system will not be usable.

There are dozens of window managers and desktop environments available for X. The most retro ones you might choose include fvwm and twm, which have that 1980s workstation look and feel. There are also window managers included inside modern desktop environments like XFCE, KDE and Gnome.

If you are brand new and don't know what to do, select the XFCE4 desktop and follow those instructions. Every desktop environment and window manager also has a different configuration mechanism. Read your chosen environment's documentation to learn more. Some are configured by text files alone, and some (like KDE and Gnome) have sophisticated graphical configuration utilities and "control panels".

Note that XFCE4 and Gnome and KDE do not require you to install any window manager as they include one automatically.

Installing X

X.org is currently available in the DragonFly dports collection.

To install:

pkg install xorg-7.7

By the time you read this, there might be a newer version of xorg than 7.7; you can also try this more general command:

pkg install xorg

Configuring X

You may need to add the following lines to /etc/rc.conf on regular PCs. On a Virtual Machine you might want to set these two lines to NO instead, as they cause problems in DragonFly BSD 3.4 through 3.6:

hald_enable="YES"
dbus_enable="YES"

Also see below about enabling moused in rc.conf, which may be required for you to see your mouse pointer in X.

As of version 7.3, Xorg can often work without any configuration file by simply typing at the prompt:

% startx

If this does not work, or if the default configuration is not acceptable, then X11 must be configured manually. For example, if X11 does not detect your mouse, you will get a desktop (either a solid color or a dotted pattern) but moving your mouse will not move a pointer around the screen. You might also get a garbled display, or no display at all. If any of these happen to you, you need to do some manual configuration of X.org, which means a configuration text file.

Configuration of X11 is a multi-step process. The first step is to build an initial configuration file. As the super user, simply run:

# Xorg -configure

This will generate an X11 configuration skeleton file in the /root directory called xorg.conf.new (whether you su(1) or do a direct login affects the inherited supervisor $HOME directory variable). The X11 program will attempt to probe the graphics hardware on the system and write a configuration file to load the proper drivers for the detected hardware on the target system.

The next step is to test the existing configuration to verify that X.org can work with the graphics hardware on the target system. To perform this task, type:

# Xorg -config xorg.conf.new -retro

The -retro option is now required or you will only get a black desktop when testing. This retro mode is an empty X desktop with a dot pattern on the background and an X cursor in the center. If the mouse is working, you should be able to move it.

If a black and grey grid and an X mouse cursor appear, the configuration was successful. To exit the test, just press Ctrl + Alt + Backspace simultaneously.

Note: If the mouse does not work, you will need to first configure it before proceeding. This can usually be achieved by just using /dev/sysmouse as the input device in the config file and enabling moused:

# rcenable moused
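
A minimal sketch of the matching InputDevice section in the configuration file (the identifier name is arbitrary):

Section "InputDevice"
        Identifier "Mouse0"
        Driver     "mouse"
        Option     "Protocol" "auto"
        Option     "Device"   "/dev/sysmouse"
EndSection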

Tune the xorg.conf.new configuration file to taste and move it to where Xorg(1) can find it. This is typically /etc/X11/xorg.conf or /usr/pkg/xorg/lib/X11/xorg.conf.

The X11 configuration process is now complete. You can start X.org with startx(1). The X11 server may also be started with the use of xdm(1).

The X Display Manager

Contributed by Seth Kingsley.

Overview

The X Display Manager ( XDM ) is an optional part of the X Window System that is used for login session management. This is useful for several types of situations, including minimal "X Terminals", desktops, and large network display servers. Since the X Window System is network and protocol independent, there are a wide variety of possible configurations for running X clients and servers on different machines connected by a network. XDM provides a graphical interface for choosing which display server to connect to, and entering authorization information such as a login and password combination.

Think of XDM as providing the same functionality to the user as the getty(8) utility (see Section 17.3.2 for details). That is, it performs system logins to the display being connected to and then runs a session manager on behalf of the user (usually an X window manager). XDM then waits for this program to exit, signaling that the user is done and should be logged out of the display. At this point, XDM can display the login and display chooser screens for the next user to login.

Using XDM

The XDM daemon program is located in /usr/pkg/bin/xdm. This program can be run at any time as root and it will start managing the X display on the local machine. If XDM is to be run every time the machine boots up, a convenient way to do this is by adding an entry to /etc/ttys. For more information about the format and usage of this file, see Section 17.3.2.1. There is a line in the default /etc/ttys file for running the XDM daemon on a virtual terminal:

ttyv8   "/usr/pkg/bin/xdm -nodaemon"  xterm   off secure

By default this entry is disabled; in order to enable it change field 5 from off to on and restart init(8) using the directions in Section 17.3.2.2. The first field, the name of the terminal this program will manage, is ttyv8. This means that XDM will start running on the 9th virtual terminal.

Configuring XDM

The XDM configuration directory is located in /var/lib/xdm. Sample configuration files are in /usr/pkg/share/examples/xdm/; in this directory there are several files used to change the behavior and appearance of XDM. Typically these files will be found:

File Description
Xaccess Client authorization ruleset.
Xresources Default X resource values.
Xservers List of remote and local displays to manage.
Xsession Default session script for logins.
Xsetup_* Script to launch applications before the login interface.
xdm-config Global configuration for all displays running on this machine.
xdm-errors Errors generated by the server program.
xdm-pid The process ID of the currently running XDM.

Also in this directory are a few scripts and programs used to set up the desktop when XDM is running. The purpose of each of these files will be briefly described. The exact syntax and usage of all of these files is described in xdm(1).

The default configuration is a simple rectangular login window with the hostname of the machine displayed at the top in a large font and "Login:" and "Password:" prompts below. This is a good starting point for changing the look and feel of XDM screens.

Xaccess

The protocol for connecting to XDM controlled displays is called the X Display Manager Connection Protocol (XDMCP). This file is a ruleset for controlling XDMCP connections from remote machines. It is ignored unless the xdm-config is changed to listen for remote connections. By default, it does not allow any clients to connect.

Xresources

This is an application-defaults file for the display chooser and the login screens. This is where the appearance of the login program can be modified. The format is identical to the app-defaults file described in the X11 documentation.

Xservers

This is a list of the remote displays the chooser should provide as choices.

Xsession

This is the default session script for XDM to run after a user has logged in. Normally each user will have a customized session script in ~/.xsession that overrides this script.

Xsetup_*

These will be run automatically before displaying the chooser or login interfaces. There is a script for each display being used, named Xsetup_ followed by the local display number (for instance Xsetup_0). Typically these scripts will run one or two programs in the background such as xconsole.

xdm-config

This contains settings in the form of app-defaults that are applicable to every display that this installation manages.

xdm-errors

This contains the output of the X servers that XDM is trying to run. If a display that XDM is trying to start hangs for some reason, this is a good place to look for error messages. These messages are also written to the user's ~/.xsession-errors file on a per-session basis.

Running a Network Display Server

In order for other clients to connect to the display server, edit the access control rules, and enable the connection listener. By default these are set to conservative values. To make XDM listen for connections, first comment out a line in the xdm-config file:

! SECURITY: do not listen for XDMCP or Chooser requests

! Comment out this line if you want to manage X terminals with xdm

DisplayManager.requestPort:     0

and then restart XDM . Remember that comments in app-defaults files begin with a "!" character, not the usual "#". More strict access controls may be desired. Look at the example entries in Xaccess, and refer to the xdm(1) manual page for further information.

Replacements for XDM

Several replacements for the default XDM program exist. One of them, kdm (bundled with KDE ) is described later in this chapter. The kdm display manager offers many visual improvements and cosmetic frills, as well as the functionality to allow users to choose their window manager of choice at login time.


Desktop Environments

*Contributed by Valentino Vaschetto.*

This section describes the different desktop environments available for X on DragonFly. A desktop environment can mean anything ranging from a simple window manager to a complete suite of desktop applications, such as KDE or GNOME.

GNOME

About GNOME

GNOME is a user-friendly desktop environment that enables users to easily use and configure their computers. GNOME includes a panel (for starting applications and displaying status), a desktop (where data and applications can be placed), a set of standard desktop tools and applications, and a set of conventions that make it easy for applications to cooperate and be consistent with each other. Users of other operating systems or environments should feel right at home using the powerful graphics-driven environment that GNOME provides.

Installing GNOME

GNOME can be easily installed from a package or from the pkgsrc framework:

To install the GNOME package from the network, simply type:

# pkg install gnome-desktop

To build GNOME from source, if you have the pkgsrc tree on your system:

# cd /usr/pkgsrc/meta-pkgs/gnome

# bmake install clean

Once GNOME is installed, the X server must be told to start GNOME instead of a default window manager.

The easiest way to start GNOME is with GDM , the GNOME Display Manager. GDM , which is installed as a part of the GNOME desktop (but is disabled by default), can be enabled by adding gdm_enable="YES" to /etc/rc.conf. Once you have rebooted, GNOME will start automatically once you log in -- no further configuration is necessary.

GNOME may also be started from the command-line by properly configuring a file named .xinitrc. If a custom .xinitrc is already in place, simply replace the line that starts the current window manager with one that starts /usr/pkg/bin/gnome-session instead. If nothing special has been done to the configuration file, then it is enough simply to type:

% echo "/usr/pkg/bin/gnome-session" > ~/.xinitrc

Next, type startx, and the GNOME desktop environment will be started.

Note: If an older display manager, like XDM , is being used, this will not work. Instead, create an executable .xsession file with the same command in it. To do this, edit the file and replace the existing window manager command with /usr/pkg/bin/gnome-session :

% echo "#!/bin/sh" > ~/.xsession

% echo "/usr/pkg/bin/gnome-session" >> ~/.xsession

% chmod +x ~/.xsession

Yet another option is to configure the display manager to allow choosing the window manager at login time; the section on KDE explains how to do this for kdm, the display manager of KDE.

Anti-aliased Fonts with GNOME

X11 supports anti-aliasing via its RENDER extension. GTK+ 2.0 and greater (the toolkit used by GNOME ) can make use of this functionality. Configuring anti-aliasing is described in Section 5.5.3.

So, with up-to-date software, anti-aliasing is possible within the GNOME desktop. Just go to Applications->Desktop Preferences->Font , and select either Best shapes, Best contrast, or Subpixel smoothing (LCDs). For a GTK+ application that is not part of the GNOME desktop, set the environment variable GDK_USE_XFT to 1 before launching the program.

KDE

About KDE

KDE is an easy to use contemporary desktop environment. Some of the things that KDE brings to the user are:

Installing KDE

Just as with GNOME or any other desktop environment, the easiest way to install KDE is through the pkgsrc framework or from a package:

To install the KDE 4.10 package from the network, simply type:

# pkg install kde-4.10

To build KDE from source, using the pkgsrc framework:

# cd /usr/pkgsrc/meta-pkgs/kde3

# bmake install clean

After KDE has been installed, the X server must be told to launch this application instead of the default window manager. This is accomplished by editing the .xinitrc file:

% echo "exec startkde" > ~/.xinitrc

Now, whenever the X Window System is invoked with startx, KDE will be the desktop.

If a display manager such as XDM is being used, the configuration is slightly different. Edit the .xsession file instead. Instructions for kdm are described later in this chapter.

More Details on KDE

Now that KDE is installed on the system, most things can be discovered through the help pages, or just by pointing and clicking at various menus. Windows® or Mac® users will feel quite at home.

The best reference for KDE is the on-line documentation. KDE comes with its own web browser, Konqueror , dozens of useful applications, and extensive documentation. The remainder of this section discusses the technical items that are difficult to learn by random exploration.

The KDE Display Manager

An administrator of a multi-user system may wish to have a graphical login screen to welcome users. XDM can be used, as described earlier. However, KDE includes an alternative, kdm , which is designed to look more attractive and include more login-time options. In particular, users can easily choose (via a menu) which desktop environment ( KDE , GNOME , or something else) to run after logging on.

To enable kdm , the ttyv8 entry in /etc/ttys has to be adapted. The line should look as follows:

ttyv8 "/usr/pkg/bin/kdm -nodaemon" xterm on secure

XFce

About XFce

XFce is a desktop environment based on the GTK+ toolkit used by GNOME , but is much more lightweight and meant for those who want a simple, efficient desktop which is nevertheless easy to use and configure. Visually, it looks very much like CDE , found on commercial UNIX systems. Some of XFce 's features are:

More information on XFce can be found on the XFce website.

Installing XFce

A binary package for XFce exists. To install, simply type:

# pkg install xfce

This should install the main xfce4 desktop package, and most of the required components.

Alternatively, to build from source, use the pkgsrc framework:

# cd /usr/pkgsrc/meta-pkgs/xfce4

# bmake install clean

Now, tell the X server to launch XFce the next time X is started. Simply type this:

% echo "/usr/pkg/bin/startxfce4" > ~/.xinitrc

The next time X is started, XFce will be the desktop. As before, if a display manager like XDM is being used, create an .xsession, as described in the section on GNOME, but with the /usr/pkg/bin/startxfce4 command; or, configure the display manager to allow choosing a desktop at login time, as explained in the section on kdm.


Configuration and Tuning

Written by Chern Lee. Based on a tutorial written by Mike Smith. Also based on tuning(7) written by Matt Dillon.

Synopsis

One of the important aspects of DragonFly is system configuration. Correct system configuration will help prevent headaches during future upgrades. This chapter will explain much of the DragonFly configuration process, including some of the parameters which can be set to tune a DragonFly system.

After reading this chapter, you will know:

Before reading this chapter, you should:

Initial Configuration

Partition Layout

Base Partitions

When laying out file systems with disklabel(8), remember that hard drives transfer data faster from the outer tracks than from the inner ones. Thus smaller and more heavily accessed file systems should be closer to the outside of the drive, while larger partitions like /usr should be placed toward the inside. It is a good idea to create partitions in an order similar to: root, swap, /var, /usr.

The size of /var reflects the intended machine usage. /var is used to hold mailboxes, log files, and printer spools. Mailboxes and log files can grow to unexpected sizes depending on how many users exist and how long log files are kept. Most users would never require a gigabyte, but remember that /var/tmp must be large enough to contain packages.

The /usr partition holds many of the files required to support the system, the pkgsrc collection (recommended) and the source code (optional). At least 2 gigabytes is recommended for this partition.

When selecting partition sizes, keep the space requirements in mind. Running out of space in one partition while barely using another can be a hassle.

Swap Partition

As a rule of thumb, the swap partition should be about double the size of system memory (RAM). For example, if the machine has 128 megabytes of memory, the swap partition should be 256 megabytes. Systems with less memory may perform better with more swap. Less than 256 megabytes of swap is not recommended and memory expansion should be considered. The kernel's VM paging algorithms are tuned to perform best when the swap partition is at least two times the size of main memory. Configuring too little swap can lead to inefficiencies in the VM page scanning code and might create issues later if more memory is added.

On larger systems with multiple SCSI disks (or multiple IDE disks operating on different controllers), it is recommended that swap be configured on each drive (up to four drives). The swap partitions should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across disks. Large swap sizes are fine, even if swap is not used much; a larger swap makes it easier to recover from a runaway program before being forced to reboot.

Why Partition?

Several users think a single large partition will be fine, but there are several reasons why this is a bad idea. First, each partition has different operational characteristics and separating them allows the file system to tune accordingly. For example, the root and /usr partitions are read-mostly, without much writing, while a lot of reading and writing could occur in /var and /var/tmp.

By properly partitioning a system, fragmentation introduced in the smaller write-heavy partitions will not bleed over into the mostly-read partitions. Keeping the write-loaded partitions closer to the disk's edge will increase I/O performance in the partitions where it occurs the most. While I/O performance in the larger partitions may be needed, shifting them more toward the edge of the disk will not lead to a significant performance improvement over moving /var to the edge. Finally, there are safety concerns: a smaller, neater root partition which is mostly read-only has a greater chance of surviving a bad crash.


Core Configuration

The principal location for system configuration information is within /etc/rc.conf. This file contains a wide range of configuration information, principally used at system startup to configure the system. Its name directly implies this; it is configuration information for the rc* files.

An administrator should make entries in the rc.conf file to override the default settings from /etc/defaults/rc.conf. The defaults file should not be copied verbatim to /etc - it contains default values, not examples. All system-specific changes should be made in the rc.conf file itself.

A number of strategies may be applied in clustered applications to separate site-wide configuration from system-specific configuration in order to keep administration overhead down. The recommended approach is to place site-wide configuration into another file, such as /etc/rc.conf.site, and then include this file into /etc/rc.conf, which will contain only system-specific information.

As rc.conf is read by sh(1) it is trivial to achieve this. For example:
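
A minimal sketch (hostnames, addresses and variable choices are only illustrative):

# /etc/rc.conf - system-specific settings
. /etc/rc.conf.site
hostname="node15.example.com"
ifconfig_dc0="inet 10.1.1.1 netmask 255.255.255.0"

# /etc/rc.conf.site - site-wide settings
defaultrouter="10.1.1.254"
dntpd_enable="YES"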

The rc.conf.site file can then be distributed to every system using rsync or a similar program, while the rc.conf file remains unique.

Upgrading the system using make world will not overwrite the rc.conf file, so system configuration information will not be lost.


Application Configuration

Typically, installed applications have their own configuration files, with their own syntax, etc. It is important that these files be kept separate from the base system, so that they may be easily located and managed by the package management tools.

Typically, these files are installed in /usr/pkg/etc. In the case where an application has a large number of configuration files, a subdirectory will be created to hold them.

Normally, when a port or package is installed, sample configuration files are also installed. These are usually identified with a .default suffix. If there are no existing configuration files for the application, they will be created by copying the .default files.

For example, consider the contents of the directory /usr/pkg/etc/httpd:

total 90

-rw-r--r--  1 root  wheel  -   34K Jan 11 12:04 httpd.conf

-rw-r--r--  1 root  wheel  -   13K Jan 11 12:02 magic

-rw-r--r--  1 root  wheel  -   28K Jan 11 12:02 mime.types

-rw-r--r--  1 root  wheel  -   11K Jan 11 12:02 ssl.conf

Starting Services

It is common for a system to host a number of services. These may be started in several different fashions, each having different advantages.

Software installed from a port or the packages collection will often place a script in /usr/pkg/share/examples/rc.d which is invoked at system startup with a start argument, and at system shutdown with a stop argument. This is the recommended way for starting system-wide services that are to be run as root, or that expect to be started as root. These scripts are registered as part of the installation of the package, and will be removed when the package is removed.

A generic startup script in /usr/pkg/share/examples/rc.d looks like:

#!/bin/sh

echo -n ' FooBar'



case "$1" in

start)

        /usr/pkg/bin/foobar

        ;;

stop)

        kill -9 `cat /var/run/foobar.pid`

        ;;

*)

        echo "Usage: `basename $0` {start|stop}" >&2

        exit 64

        ;;

esac



exit 0

The startup scripts of DragonFly will look in /usr/pkg/share/examples/rc.d for scripts that have an .sh extension and are executable by root. Those scripts that are found are called with a start option at startup, and a stop option at shutdown, to allow them to carry out their purpose. So if you wanted the above sample script to be picked up and run at the proper time during system startup, you should save it to a file called FooBar.sh in /usr/pkg/share/examples/rc.d and make sure it is executable. You can make a shell script executable with chmod(1) as shown below:

# chmod 755 "FooBar.sh"

Some services expect to be invoked by inetd(8) when a connection is received on a suitable port. This is common for mail reader servers (POP and IMAP, etc.). These services are enabled by editing the file /etc/inetd.conf. See inetd(8) for details on editing this file.

Some additional system services may not be covered by the toggles in /etc/rc.conf. These are traditionally enabled by placing the command(s) to invoke them in /etc/rc.local (which does not exist by default). Note that rc.local is generally regarded as the location of last resort; if there is a better place to start a service, do it there.

Note: Do not place any commands in /etc/rc.conf. To start daemons, or run any commands at boot time, place a script in /usr/pkg/share/examples/rc.d instead.

It is also possible to use the cron(8) daemon to start system services. This approach has a number of advantages, not least being that because cron(8) runs these processes as the owner of the crontab, services may be started and maintained by non-root users.

This takes advantage of a feature of cron(8): the time specification may be replaced by @reboot, which will cause the job to be run when cron(8) is started shortly after system boot.
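
For example, a non-root user could start a personal service at boot with a crontab entry like this (the program path is only an illustration):

@reboot /usr/pkg/bin/foobar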


Configuring the cron Utility

*Contributed by Tom Rhodes.*

One of the most useful utilities in DragonFly is cron(8). The cron utility runs in the background and constantly checks the /etc/crontab file. The cron utility also checks the /var/cron/tabs directory, in search of new crontab files. These crontab files store information about specific functions which cron is supposed to perform at certain times.

The cron utility uses two different types of configuration files, the system crontab and user crontabs. The only difference between these two formats is the sixth field. In the system crontab, the sixth field is the name of a user for the command to run as. This gives the system crontab the ability to run commands as any user. In a user crontab, the sixth field is the command to run, and all commands run as the user who created the crontab; this is an important security feature.

Note: User crontabs allow individual users to schedule tasks without the need for root privileges. Commands in a user's crontab run with the permissions of the user who owns the crontab.

The root user can have a user crontab just like any other user. This one is different from /etc/crontab (the system crontab). Because of the system crontab, there's usually no need to create a user crontab for root.

Let us take a look at the /etc/crontab file (the system crontab):

# /etc/crontab - root's crontab for DragonFly

#

#                                                                  (1)

#

SHELL=/bin/sh

PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin                            (2)

HOME=/var/log

#

#

#minute hour    mday    month   wday    who command            (3)

#

#


*/5 *   *   *   *   root    /usr/libexec/atrun (4)
  1. Like most DragonFly configuration files, the # character represents a comment. A comment can be placed in the file as a reminder of what and why a desired action is performed. Comments cannot be on the same line as a command or else they will be interpreted as part of the command; they must be on a new line. Blank lines are ignored.

  2. First, the environment must be defined. The equals (=) character is used to define any environment settings, as with this example where it is used for the SHELL, PATH, and HOME options. If the shell line is omitted, cron will use the default, which is sh. If the PATH variable is omitted, no default will be used and file locations will need to be absolute. If HOME is omitted, cron will use the invoking user's home directory.

  3. This line defines a total of seven fields. Listed here are the values minute, hour, mday, month, wday, who, and command. These are almost all self explanatory. minute is the time in minutes the command will be run. hour is similar to the minute option, just in hours. mday stands for day of the month. month is similar to hour and minute, as it designates the month. The wday option stands for day of the week. All these fields must be numeric values, and follow the twenty-four hour clock. The who field is special, and only exists in the /etc/crontab file. This field specifies which user the command should be run as. When a user installs his or her crontab file, they will not have this option. Finally, the command option is listed. This is the last field, so naturally it should designate the command to be executed.

  4. This last line will define the values discussed above. Notice here we have a */5 listing, followed by several more * characters. These * characters mean first-last, and can be interpreted as every time. So, judging by this line, it is apparent that the atrun command is to be invoked by root every five minutes regardless of what day or month it is. For more information on the atrun command, see the atrun(8) manual page. Commands can have any number of flags passed to them; however, commands which extend to multiple lines need to be broken with the backslash (\) continuation character.

This is the basic set up for every crontab file, although there is one thing different about this one. Field number six, where we specified the username, only exists in the system /etc/crontab file. This field should be omitted for individual user crontab files.

Installing a Crontab

Important: You must not use the procedure described here to edit/install the system crontab. Simply use your favorite editor: the cron utility will notice that the file has changed and immediately begin using the updated version. If you use crontab to load the /etc/crontab file you may get an error like root: not found because of the system crontab's additional user field.

To install a freshly written user crontab, first use your favorite editor to create a file in the proper format, and then use the crontab utility. The most common usage is:

% crontab crontab-file

In this example, crontab-file is the filename of a crontab that was previously created.
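
A hypothetical crontab-file could look like this (five time fields followed by the command; the script path is only an illustration):

# run a personal backup script every day at 03:00
0       3       *       *       *       /home/user/bin/backup.sh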

There is also an option to list installed crontab files: just pass the -l option to crontab and look over the output.

For users who wish to begin their own crontab file from scratch, without the use of a template, the crontab -e option is available. This will invoke the selected editor with an empty file. When the file is saved, it will be automatically installed by the crontab command.

If you later want to remove your user crontab completely, use crontab with the -r option.

Using rc under DragonFly

*Contributed by Tom Rhodes.*

DragonFly uses the NetBSD® rc.d system for system initialization. Users should notice the files listed in the /etc/rc.d directory. Many of these files are for basic services which can be controlled with the start, stop, and restart options. For instance, sshd(8) can be restarted with the following command:

# /etc/rc.d/sshd restart

This procedure is similar for other services. Of course, services are usually started automatically as specified in rc.conf(5). For example, enabling the Network Address Translation daemon at startup is as simple as adding the following line to /etc/rc.conf:

natd_enable="YES"

If a natd_enable="NO" line is already present, then simply change the NO to YES. The rc scripts will automatically load any other dependent services during the next reboot, as described below.

Another way to add services to the automatic startup/shutdown is to type, for example for natd,

 # rcenable natd

Since the rc.d system is primarily intended to start/stop services at system startup/shutdown time, the standard start, stop and restart options will only perform their action if the appropriate /etc/rc.conf variables are set. For instance the above sshd restart command will only work if sshd_enable is set to YES in /etc/rc.conf. To start, stop or restart a service regardless of the settings in /etc/rc.conf, the commands should be prefixed with force. For instance to restart sshd regardless of the current /etc/rc.conf setting, execute the following command:

# /etc/rc.d/sshd forcerestart

It is easy to check if a service is enabled in /etc/rc.conf by running the appropriate rc.d script with the option rcvar. Thus, an administrator can check that sshd is in fact enabled in /etc/rc.conf by running:

# /etc/rc.d/sshd rcvar

# sshd

$sshd_enable=YES

Note: The second line (# sshd) is the output from the rc.d script, not a root prompt.

To determine if a service is running, a status option is available. For instance to verify that sshd is actually started:

# /etc/rc.d/sshd status

sshd is running as pid 433.

It is also possible to reload a service. This will attempt to send a signal to an individual service, forcing the service to reload its configuration files. In most cases this means sending the service a SIGHUP signal.
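
For example, to ask sshd(8) to re-read its configuration files (assuming its script supports the reload command):

# /etc/rc.d/sshd reload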

The rcNG structure is used both for network services and system initialization. Some services are run only at boot, and the rcNG system is what triggers them.

Many system services depend on other services to function properly. For example, NIS and other RPC-based services may fail to start until after the rpcbind (portmapper) service has started. To resolve this issue, information about dependencies and other meta-data is included in the comments at the top of each startup script. The rcorder(8) program is then used to parse these comments during system initialization to determine the order in which system services should be invoked to satisfy the dependencies. The following words may be included at the top of each startup file:
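
These include PROVIDE, REQUIRE, BEFORE and KEYWORD. As a minimal sketch, the header of a hypothetical foobar script could look like this (the values shown are only illustrative):

# PROVIDE: foobar
# REQUIRE: NETWORKING LOGIN
# KEYWORD: shutdown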

By using this method, an administrator can easily control system services without the hassle of runlevels like some other UNIX® operating systems.

Additional information about the DragonFly rc.d system can be found in the rc(8), rc.conf(5), and rc.subr(8) manual pages.

Using DragonFly's rcrun(8)

Besides the methods described above, DragonFly supports rcrun(8) to control rc(8) scripts. rcrun(8) provides a number of commands for controlling rc(8) scripts. The start, forcestart, faststart, stop, restart, and rcvar commands are just passed to the scripts. See rc(8) for more information on these commands.

The remaining commands are:

disable Sets the corresponding _enable variable in rc.conf(5) to NO and runs the stop command.
enable Sets the corresponding _enable variable in rc.conf(5) to YES and runs the start command.
list Shows the status of the specified scripts. If no argument is specified, the status of all scripts is shown.

To enable the dntpd(8) service, you can use:

 # rcenable dntpd

To check if dntpd(8) is running you can use the following command:

# rclist dntpd

rcng_dntpd=stopped

To start dntpd(8):

# rcstart dntpd

Running /etc/rc.d/dntpd start

Starting dntpd.

Restart and stop work the same way:

# rcrestart dntpd

Stopping dntpd.

Starting dntpd.



# rcstop dntpd

Stopping dntpd.

If a service is not enabled in /etc/rc.conf, but you want it to start anyway, execute the following:

# rcforce dntpd

Running /etc/rc.d/dntpd forcestart

Starting dntpd.

Notes

(1) Previously this was used to define *BSD dependent features.

Setting Up Network Interface Cards

*Contributed by Marc Fonvieille.*

Nowadays we can not think about a computer without thinking about a network connection. Adding and configuring a network card is a common task for any DragonFly administrator.

Locating the Correct Driver

Before you begin, you should know the model of the card you have, the chip it uses, and whether it is a PCI or ISA card. DragonFly supports a wide variety of both PCI and ISA cards. Check the Hardware Compatibility List for your release to see if your card is supported.

Once you are sure your card is supported, you need to determine the proper driver for the card. The file /usr/src/sys/i386/conf/LINT will give you the list of network interface drivers with some information about the supported chipsets/cards. If you have doubts about which driver is the correct one, read the manual page of the driver. The manual page will give you more information about the supported hardware and even the possible problems that could occur.

If you own a common card, most of the time you will not have to look very hard for a driver. Drivers for common network cards are present in the GENERIC kernel, so your card should show up during boot, like so:

dc0: <82c169 PNIC 10/100BaseTX> port 0xa000-0xa0ff mem 0xd3800000-0xd38

000ff irq 15 at device 11.0 on pci0

dc0: Ethernet address: 00:a0:cc:da:da:da

miibus0: <MII bus> on dc0

ukphy0: <Generic IEEE 802.3u media interface> on miibus0

ukphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto

dc1: <82c169 PNIC 10/100BaseTX> port 0x9800-0x98ff mem 0xd3000000-0xd30

000ff irq 11 at device 12.0 on pci0

dc1: Ethernet address: 00:a0:cc:da:da:db

miibus1: <MII bus> on dc1

ukphy1: <Generic IEEE 802.3u media interface> on miibus1

ukphy1:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto

In this example, we see that two cards using the dc(4) driver are present on the system.

To use your network card, you will need to load the proper driver. This may be accomplished in one of two ways. The easiest way is to simply load a kernel module for your network card with kldload(8). A module is not available for all network card drivers (ISA cards and cards using the ed(4) driver, for example). Alternatively, you may statically compile the support for your card into your kernel. Check /usr/src/sys/i386/conf/LINT and the manual page of the driver to know what to add in your kernel configuration file. For more information about recompiling your kernel, please see [kernelconfig.html Chapter 9]. If your card was detected at boot by your kernel (GENERIC) you do not have to build a new kernel.
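
For example, to load a module for the dc(4) driver shown above (assuming the driver is available as a module; the exact module name may vary):

# kldload if_dc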

Configuring the Network Card

Once the right driver is loaded for the network card, the card needs to be configured. As with many other things, the network card may have been configured at installation time.

To display the configuration for the network interfaces on your system, enter the following command:

% ifconfig

dc0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500

        inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255

        ether 00:a0:cc:da:da:da

        media: Ethernet autoselect (100baseTX <full-duplex>)

        status: active

dc1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500

        inet 10.0.0.1 netmask 0xffffff00 broadcast 10.0.0.255

        ether 00:a0:cc:da:da:db

        media: Ethernet 10baseT/UTP

        status: no carrier

lp0: flags=8810<POINTOPOINT,SIMPLEX,MULTICAST> mtu 1500

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384

        inet 127.0.0.1 netmask 0xff000000

tun0: flags=8010<POINTOPOINT,MULTICAST> mtu 1500

Note: Entries concerning IPv6 (inet6 etc.) were omitted in this example.

In this example, the following devices were displayed: dc0 and dc1 (the two Ethernet interfaces), lp0 (the parallel port interface), lo0 (the loopback device), and tun0 (the tunnel device).

DragonFly uses the driver name followed by the order in which the card is detected at kernel boot to name the network card, starting the count at zero. For example, sis2 would be the third network card on the system using the sis(4) driver.

In this example, the dc0 device is up and running. The key indicators are:

  1. UP means that the card is configured and ready.

  2. The card has an Internet (inet) address (in this case 192.168.1.3).

  3. It has a valid subnet mask (netmask; 0xffffff00 is the same as 255.255.255.0).

  4. It has a valid broadcast address (in this case, 192.168.1.255).

  5. The MAC address of the card (ether) is 00:a0:cc:da:da:da

  6. The physical media selection is on autoselection mode (media: Ethernet autoselect (100baseTX <full-duplex>)). We see that dc1 was configured to run with 10baseT/UTP media. For more information on available media types for a driver, please refer to its manual page.

  7. The status of the link (status) is active, i.e. the carrier is detected. For dc1, we see status: no carrier. This is normal when an Ethernet cable is not plugged into the card.

If the ifconfig(8) output had shown something similar to:

dc0: flags=8843<BROADCAST,SIMPLEX,MULTICAST> mtu 1500

            ether 00:a0:cc:da:da:da

it would indicate the card has not been configured.

To configure your card, you need root privileges. The network card configuration can be done from the command line with ifconfig(8) as root.

# ifconfig dc0 inet 192.168.1.3 netmask 255.255.255.0

Manually configuring the card has the disadvantage that you would have to do it after each reboot of the system. The file /etc/rc.conf is where the network card's configuration should be added.

Open /etc/rc.conf in your favorite editor. You need to add a line for each network card present on the system, for example in our case, we added these lines:

ifconfig_dc0="inet 192.168.1.3 netmask 255.255.255.0"

ifconfig_dc1="inet 10.0.0.1 netmask 255.255.255.0 media 10baseT/UTP"

You have to replace dc0, dc1, and so on, with the correct device for your cards, and the addresses with the proper ones. You should read the card driver and ifconfig(8) manual pages for more details about the allowed options and also rc.conf(5) manual page for more information on the syntax of /etc/rc.conf.

If you configured the network during installation, some lines about the network card(s) may be already present. Double check /etc/rc.conf before adding any lines.

You will also have to edit the file /etc/hosts to add the names and the IP addresses of various machines of the LAN, if they are not already there. For more information please refer to hosts(5) and to /usr/share/examples/etc/hosts.
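
A couple of hypothetical entries for the example network above might look like this (host names are only illustrative):

192.168.1.2     bar.example.org bar
192.168.1.3     foo.example.org foo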

Testing and Troubleshooting

Once you have made the necessary changes in /etc/rc.conf, you should reboot your system. This will allow the change(s) to the interface(s) to be applied, and verify that the system restarts without any configuration errors.

Once the system has been rebooted, you should test the network interfaces.

Testing the Ethernet Card

To verify that an Ethernet card is configured correctly, you have to try two things. First, ping the interface itself, and then ping another machine on the LAN.

First test the local interface:

% ping -c5 192.168.1.3

PING 192.168.1.3 (192.168.1.3): 56 data bytes

64 bytes from 192.168.1.3: icmp_seq=0 ttl=64 time=0.082 ms

64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.074 ms

64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.076 ms

64 bytes from 192.168.1.3: icmp_seq=3 ttl=64 time=0.108 ms

64 bytes from 192.168.1.3: icmp_seq=4 ttl=64 time=0.076 ms



--- 192.168.1.3 ping statistics ---

5 packets transmitted, 5 packets received, 0% packet loss

round-trip min/avg/max/stddev = 0.074/0.083/0.108/0.013 ms

Now we have to ping another machine on the LAN:

% ping -c5 192.168.1.2

PING 192.168.1.2 (192.168.1.2): 56 data bytes

64 bytes from 192.168.1.2: icmp_seq=0 ttl=64 time=0.726 ms

64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.766 ms

64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.700 ms

64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.747 ms

64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.704 ms



--- 192.168.1.2 ping statistics ---

5 packets transmitted, 5 packets received, 0% packet loss

round-trip min/avg/max/stddev = 0.700/0.729/0.766/0.025 ms

You could also use the machine name instead of 192.168.1.2 if you have set up the /etc/hosts file.
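For instance, if /etc/hosts maps the hypothetical name fileserver to 192.168.1.2, the same test could be run as:

% ping -c5 fileserver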

Troubleshooting

Troubleshooting hardware and software configurations is always a pain, and a pain which can be alleviated by checking the simple things first. Is your network cable plugged in? Have you properly configured the network services? Did you configure the firewall correctly? Is the card you are using supported by DragonFly? Always check the hardware notes before sending off a bug report. Update your version of DragonFly to the latest PREVIEW version. Check the mailing list archives, or perhaps search the Internet.

If the card works, yet performance is poor, it would be worthwhile to read over the tuning(7) manual page. You can also check the network configuration as incorrect network settings can cause slow connections.

Some users experience one or two device timeouts, which is normal for some cards. If they continue, or are bothersome, you may wish to be sure the device is not conflicting with another device. Double check the cable connections. Perhaps you may just need to get another card.

At times, users see a few watchdog timeout errors. The first thing to do here is to check your network cable. Many cards require a PCI slot which supports Bus Mastering. On some old motherboards, only one PCI slot allows it (usually slot 0). Check the network card and the motherboard documentation to determine if that may be the problem.

No route to host messages occur if the system is unable to route a packet to the destination host. This can happen if no default route is specified, or if a cable is unplugged. Check the output of netstat -rn and make sure there is a valid route to the host you are trying to reach. If there is not, read on to [advanced-networking.html Chapter 19].
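If the default route turns out to be missing, it can be added with route(8). Assuming your gateway is 192.168.1.1 (substitute the address of your own router):

# route add default 192.168.1.1

To make the route permanent, add defaultrouter="192.168.1.1" to /etc/rc.conf.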

ping: sendto: Permission denied error messages are often caused by a misconfigured firewall. If ipfw is enabled in the kernel but no rules have been defined, then the default policy is to deny all traffic, even ping requests! Read on to [firewalls.html Section 10.7] for more information.

Sometimes performance of the card is poor, or below average. In these cases it is best to set the media selection mode from autoselect to the correct media selection. While this usually works for most hardware, it may not resolve this issue for everyone. Again, check all the network settings, and read over the tuning(7) manual page.
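For example, a sketch of forcing the dc0 card from the earlier example to 100 Mbit full-duplex in /etc/rc.conf (the media keywords shown are an assumption; consult your driver's manual page for the ones it actually accepts):

ifconfig_dc0="inet 192.168.1.3 netmask 255.255.255.0 media 100baseTX mediaopt full-duplex"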

Virtual Hosts

A very common use of DragonFly is virtual site hosting, where one server appears to the network as many servers. This is achieved by assigning multiple network addresses to a single interface.

A given network interface has one real address, and may have any number of alias addresses. These aliases are normally added by placing alias entries in /etc/rc.conf.

An alias entry for the interface fxp0 looks like:

ifconfig_fxp0_alias0="inet xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx"

Note that alias entries must start with alias0 and proceed upwards in order (for example, _alias1, _alias2, and so on). The configuration process will stop at the first missing number.

The calculation of alias netmasks is important, but fortunately quite simple. For a given interface, there must be one address which correctly represents the network's netmask. Any other addresses which fall within this network must have a netmask of all 1s (expressed as either 255.255.255.255 or 0xffffffff).

For example, consider the case where the fxp0 interface is connected to two networks, the 10.1.1.0 network with a netmask of 255.255.255.0 and the 202.0.75.16 network with a netmask of 255.255.255.240. We want the system to appear at 10.1.1.1 through 10.1.1.5 and at 202.0.75.17 through 202.0.75.20. As noted above, only the first address in a given network range (in this case, 10.1.1.1 and 202.0.75.17) should have a real netmask; all the rest (10.1.1.2 through 10.1.1.5 and 202.0.75.18 through 202.0.75.20) must be configured with a netmask of 255.255.255.255.

The following entries configure the adapter correctly for this arrangement:

 ifconfig_fxp0="inet 10.1.1.1 netmask 255.255.255.0"

 ifconfig_fxp0_alias0="inet 10.1.1.2 netmask 255.255.255.255"

 ifconfig_fxp0_alias1="inet 10.1.1.3 netmask 255.255.255.255"

 ifconfig_fxp0_alias2="inet 10.1.1.4 netmask 255.255.255.255"

 ifconfig_fxp0_alias3="inet 10.1.1.5 netmask 255.255.255.255"

 ifconfig_fxp0_alias4="inet 202.0.75.17 netmask 255.255.255.240"

 ifconfig_fxp0_alias5="inet 202.0.75.18 netmask 255.255.255.255"

 ifconfig_fxp0_alias6="inet 202.0.75.19 netmask 255.255.255.255"

 ifconfig_fxp0_alias7="inet 202.0.75.20 netmask 255.255.255.255"

CategoryHandbook

CategoryHandbook-configuration

Configuration Files

/etc Layout

There are a number of directories in which configuration information is kept. These include:

/etc Generic system configuration information; data here is system-specific.
/etc/defaults Default versions of system configuration files.
/etc/mail Extra sendmail(8) configuration, other MTA configuration files.
/etc/ppp Configuration for both user- and kernel-ppp programs.
/etc/namedb Default location for named(8) data. Normally named.conf and zone files are stored here.
/usr/local/etc Configuration files for installed applications. May contain per-application subdirectories.
/usr/local/etc/rc.d Start/stop scripts for installed applications.
/var/db Automatically generated system-specific database files, such as the package database, the locate database, and so on

Hostnames

/etc/resolv.conf

/etc/resolv.conf dictates how DragonFly's resolver accesses the Internet Domain Name System (DNS).

The most common entries to resolv.conf are:

nameserver The IP address of a name server the resolver should query. The servers are queried in the order listed with a maximum of three.
search Search list for hostname lookup. This is normally determined by the domain of the local hostname.
domain The local domain name.

A typical resolv.conf:

search example.com

nameserver 147.11.1.11

nameserver 147.11.100.30

Note: Only one of the search and domain options should be used.

If you are using DHCP, dhclient(8) usually rewrites resolv.conf with information received from the DHCP server.

/etc/hosts

/etc/hosts is a simple text database reminiscent of the old Internet. It works in conjunction with DNS and NIS providing name to IP address mappings. Local computers connected via a LAN can be placed in here for simplistic naming purposes instead of setting up a named(8) server. Additionally, /etc/hosts can be used to provide a local record of Internet names, reducing the need to query externally for commonly accessed names.

#

#

# Host Database

# This file should contain the addresses and aliases

# for local hosts that share this file.

# In the presence of the domain name service or NIS, this file may

# not be consulted at all; see /etc/nsswitch.conf for the resolution order.

#

#

::1                     localhost localhost.my.domain myname.my.domain

127.0.0.1               localhost localhost.my.domain myname.my.domain

#

# Imaginary network.

#10.0.0.2               myname.my.domain myname

#10.0.0.3               myfriend.my.domain myfriend

#

# According to RFC 1918, you can use the following IP networks for

# private nets which will never be connected to the Internet:

#

#       10.0.0.0        -   10.255.255.255

#       172.16.0.0      -   172.31.255.255

#       192.168.0.0     -   192.168.255.255

#

# In case you want to be able to connect to the Internet, you need

# real official assigned numbers.  PLEASE PLEASE PLEASE do not try

# to invent your own network numbers but instead get one from your

# network provider (if any) or from the Internet Registry (ftp to

# rs.internic.net, directory `/templates').

#

/etc/hosts takes on the simple format of:

[Internet address] [official hostname] [alias1] [alias2] ...

For example:

10.0.0.1 myRealHostname.example.com myRealHostname foobar1 foobar2

Consult hosts(5) for more information.

Log File Configuration

syslog.conf

syslog.conf is the configuration file for the syslogd(8) program. It indicates which types of syslog messages are logged to particular log files.

#

#

#       Spaces ARE valid field separators in this file. However,

#       other *nix-like systems still insist on using tabs as field

#       separators. If you are sharing this file between systems, you

#       may want to use only tabs as field separators here.

#       Consult the syslog.conf(5) manual page.

*.err;kern.debug;auth.notice;mail.crit /dev/console

*.notice;kern.debug;lpr.info;mail.crit;news.err /var/log/messages

security.*                                      /var/log/security

mail.info                                       /var/log/maillog

lpr.info                                        /var/log/lpd-errs

cron.*                                          /var/log/cron

*.err root

*.notice;news.err                               root

*.alert                                         root

*.emerg                                         *

# uncomment this to log all writes to /dev/console to /var/log/console.log

#console.info                                   /var/log/console.log

# uncomment this to enable logging of all log messages to /var/log/all.log

#*.*                                            /var/log/all.log

# uncomment this to enable logging to a remote log host named loghost

#*.*                                            @loghost

# uncomment these if you're running inn

# news.crit                                     /var/log/news/news.crit

# news.err                                      /var/log/news/news.err

# news.notice                                   /var/log/news/news.notice

!startslip

*.*                                             /var/log/slip.log

!ppp

*.*                                             /var/log/ppp.log

Consult the syslog.conf(5) manual page for more information.

newsyslog.conf

newsyslog.conf is the configuration file for newsyslog(8), a program that is normally scheduled to run by cron(8). newsyslog(8) determines when log files require archiving or rearranging. logfile is moved to logfile.0, logfile.0 is moved to logfile.1, and so on. Alternatively, the log files may be archived in gzip(1) format causing them to be named: logfile.0.gz, logfile.1.gz, and so on.

newsyslog.conf indicates which log files are to be managed, how many are to be kept, and when they are to be touched. Log files can be rearranged and/or archived when they have either reached a certain size, or at a certain periodic time/date.

# configuration file for newsyslog

#

#

# filename          [owner:group]    mode count size when [ZB] [/pid_file] [sig_num]

/var/log/cron                           600  3     100  *     Z

/var/log/amd.log                        644  7     100  *     Z

/var/log/kerberos.log                   644  7     100  *     Z

/var/log/lpd-errs                       644  7     100  *     Z

/var/log/maillog                        644  7     *    @T00  Z

/var/log/sendmail.st                    644  10    *    168   B

/var/log/messages                       644  5     100  *     Z

/var/log/all.log                        600  7     *    @T00  Z

/var/log/slip.log                       600  3     100  *     Z

/var/log/ppp.log                        600  3     100  *     Z

/var/log/security                       600  10    100  *     Z

/var/log/wtmp                           644  3     *    @01T05 B

/var/log/daily.log                      640  7     *    @T00  Z

/var/log/weekly.log                     640  5     1    $W6D0 Z

/var/log/monthly.log                    640  12    *    $M1D0 Z

/var/log/console.log                    640  5     100  *     Z

Consult the newsyslog(8) manual page for more information.

sysctl.conf

sysctl.conf looks much like rc.conf. Values are set in a variable=value form. The specified values are set after the system goes into multi-user mode. Not all variables are settable in this mode.

A sample sysctl.conf turning off logging of fatal signal exits and letting Linux programs know they are really running under DragonFly:

kern.logsigexit=0       # Do not log fatal signal exits (e.g. sig 11)

compat.linux.osname=DragonFly

compat.linux.osrelease=4.3-STABLE

Tuning with sysctl

sysctl(8) is an interface that allows you to make changes to a running DragonFly system. This includes many advanced options of the TCP/IP stack and virtual memory system that can dramatically improve performance for an experienced system administrator. Over five hundred system variables can be read and set using sysctl(8).

At its core, sysctl(8) serves two functions: to read and to modify system settings.

To view all readable variables:

% sysctl -a

To read a particular variable, for example, kern.maxproc:

% sysctl kern.maxproc

kern.maxproc: 1044

To set a particular variable, use the intuitive variable=value syntax:

# sysctl kern.maxfiles=5000

kern.maxfiles: 2088 -> 5000

Settings of sysctl variables are usually either strings, numbers, or booleans (a boolean being 1 for yes or a 0 for no).

If you want some variables to be set automatically each time the machine boots, add them to the /etc/sysctl.conf file. For more information see the sysctl.conf(5) manual page and [configtuning-configfiles.html#CONFIGTUNING-SYSCTLCONF Section 6.10.4].

sysctl(8) Read-only

*Contributed by Tom Rhodes. *

In some cases it may be desirable to modify read-only sysctl(8) values. While this is not recommended, it is also sometimes unavoidable.

For instance on some laptop models the cardbus(4) device will not probe memory ranges, and fail with errors which look similar to:

cbb0: Could not map register memory

device_probe_and_attach: cbb0 attach returned 12

Cases like the one above usually require the modification of some default sysctl(8) settings which are set read only. To overcome these situations a user can put sysctl(8) OIDs in their local /boot/loader.conf. Default settings are located in the /boot/defaults/loader.conf file.

Fixing the problem mentioned above would require a user to set hw.pci.allow_unsupported_io_range=1 in the aforementioned file. Now cardbus(4) will work properly.
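In other words, the workaround amounts to the following line in /boot/loader.conf:

hw.pci.allow_unsupported_io_range=1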

Tuning Disks

Sysctl Variables

vfs.vmiodirenable

The vfs.vmiodirenable sysctl variable may be set to either 0 (off) or 1 (on); it is 1 by default. This variable controls how directories are cached by the system. Most directories are small, using just a single fragment (typically 1 K) in the file system and less (typically 512 bytes) in the buffer cache. With this variable turned off (to 0), the buffer cache will only cache a fixed number of directories even if you have a huge amount of memory. When turned on (to 1), this sysctl allows the buffer cache to use the VM Page Cache to cache the directories, making all the memory available for caching directories. However, the minimum in-core memory used to cache a directory is the physical page size (typically 4 K) rather than 512 bytes. We recommend keeping this option on if you are running any services which manipulate large numbers of files. Such services can include web caches, large mail systems, and news systems. Keeping this option on will generally not reduce performance even with the wasted memory but you should experiment to find out.
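To check the current setting and, if necessary, turn it on at run time (add the same line to /etc/sysctl.conf to keep it across reboots):

% sysctl vfs.vmiodirenable

# sysctl vfs.vmiodirenable=1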

vfs.write_behind

The vfs.write_behind sysctl variable defaults to 1 (on). This tells the file system to issue media writes as full clusters are collected, which typically occurs when writing large sequential files. The idea is to avoid saturating the buffer cache with dirty buffers when it would not benefit I/O performance. However, this may stall processes and under certain circumstances you may wish to turn it off.

vfs.hirunningspace

The vfs.hirunningspace sysctl variable determines how much outstanding write I/O may be queued to disk controllers system-wide at any given instance. The default is usually sufficient but on machines with lots of disks you may want to bump it up to four or five megabytes. Note that setting too high a value (exceeding the buffer cache's write threshold) can lead to extremely bad clustering performance. Do not set this value arbitrarily high! Higher write values may add latency to reads occurring at the same time.

There are various other buffer-cache and VM page cache related sysctls. We do not recommend modifying these values. The VM system does an extremely good job of automatically tuning itself.

vm.swap_idle_enabled

The vm.swap_idle_enabled sysctl variable is useful in large multi-user systems where you have lots of users entering and leaving the system and lots of idle processes. Such systems tend to generate a great deal of continuous pressure on free memory reserves. Turning this feature on and tweaking the swapout hysteresis (in idle seconds) via vm.swap_idle_threshold1 and vm.swap_idle_threshold2 allows you to depress the priority of memory pages associated with idle processes more quickly than the normal pageout algorithm. This gives a helping hand to the pageout daemon. Do not turn this option on unless you need it, because the tradeoff you are making is essentially to pre-page memory sooner rather than later, thus eating more swap and disk bandwidth. In a small system this option will have a detrimental effect, but in a large system that is already doing moderate paging this option allows the VM system to stage whole processes into and out of memory easily.

hw.ata.wc

IDE drives lie about when a write completes. With IDE write caching turned on, IDE hard drives not only write data to disk out of order, but will sometimes delay writing some blocks indefinitely when under heavy disk loads. A crash or power failure may cause serious file system corruption. Turning off write caching will remove the danger of this data loss, but will also cause disk operations to proceed very slowly. Change this only if prepared to suffer with the disk slowdown.

Changing this variable must be done from the boot loader at boot time. Attempting to do it after the kernel boots will have no effect.
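A minimal sketch of turning IDE write caching off: add the following line to /boot/loader.conf so it takes effect at boot:

hw.ata.wc=0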

For more information, please see the ata(4) manual page.

Soft Updates

Note that soft updates are only available on UFS.

The tunefs(8) program can be used to fine-tune a UFS file system. This program has many different options, but for now we are only concerned with toggling Soft Updates on and off, which is done by:

# tunefs -n enable /filesystem

# tunefs -n disable /filesystem

A filesystem cannot be modified with tunefs(8) while it is mounted. A good time to enable Soft Updates is before any partitions have been mounted, in single-user mode.

Note: It is possible to enable Soft Updates at filesystem creation time, through use of the -U option to newfs(8).
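For example, a new file system could be created with Soft Updates already enabled like this (/dev/ad6s1a is only a placeholder for the partition you are actually formatting):

# newfs -U /dev/ad6s1a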

Soft Updates drastically improves meta-data performance, mainly file creation and deletion, through the use of a memory cache. We recommend using Soft Updates on all of your file systems. There are two downsides to Soft Updates that you should be aware of: First, Soft Updates guarantees filesystem consistency in the case of a crash but could very easily be several seconds (even a minute!) behind updating the physical disk. If your system crashes you may lose more work than otherwise. Secondly, Soft Updates delays the freeing of filesystem blocks. If you have a filesystem (such as the root filesystem) which is almost full, performing a major update, such as make installworld, can cause the filesystem to run out of space and the update to fail.

More Details about Soft Updates

There are two traditional approaches to writing a file system's meta-data back to disk. (Meta-data updates are updates to non-content data like inodes or directories.)

Historically, the default behavior was to write out meta-data updates synchronously. If a directory had been changed, the system waited until the change was actually written to disk. The file data buffers (file contents) were passed through the buffer cache and backed up to disk later on asynchronously. The advantage of this implementation is that it operates safely. If there is a failure during an update, the meta-data are always in a consistent state. A file is either created completely or not at all. If the data blocks of a file did not find their way out of the buffer cache onto the disk by the time of the crash, fsck(8) is able to recognize this and repair the filesystem by setting the file length to 0. Additionally, the implementation is clear and simple. The disadvantage is that meta-data changes are slow. An rm -r, for instance, touches all the files in a directory sequentially, but each directory change (deletion of a file) will be written synchronously to the disk. This includes updates to the directory itself, to the inode table, and possibly to indirect blocks allocated by the file. Similar considerations apply for unrolling large hierarchies (tar -x).

The second case is asynchronous meta-data updates. This is the default for Linux/ext2fs and mount -o async for *BSD ufs. All meta-data updates are simply being passed through the buffer cache too, that is, they will be intermixed with the updates of the file content data. The advantage of this implementation is there is no need to wait until each meta-data update has been written to disk, so all operations which cause huge amounts of meta-data updates work much faster than in the synchronous case. Also, the implementation is still clear and simple, so there is a low risk for bugs creeping into the code. The disadvantage is that there is no guarantee at all for a consistent state of the filesystem. If there is a failure during an operation that updated large amounts of meta-data (like a power failure, or someone pressing the reset button), the filesystem will be left in an unpredictable state. There is no opportunity to examine the state of the filesystem when the system comes up again; the data blocks of a file could already have been written to the disk while the updates of the inode table or the associated directory were not. It is actually impossible to implement a fsck which is able to clean up the resulting chaos (because the necessary information is not available on the disk). If the filesystem has been damaged beyond repair, the only choice is to use newfs(8) on it and restore it from backup.

The usual solution for this problem was to implement dirty region logging, which is also referred to as journaling, although that term is not used consistently and is occasionally applied to other forms of transaction logging as well. Meta-data updates are still written synchronously, but only into a small region of the disk. Later on they will be moved to their proper location. Because the logging area is a small, contiguous region on the disk, there are no long distances for the disk heads to move, even during heavy operations, so these operations are quicker than synchronous updates. Additionally the complexity of the implementation is fairly limited, so the risk of bugs being present is low. A disadvantage is that all meta-data are written twice (once into the logging region and once to the proper location) so for normal work, a performance pessimization might result. On the other hand, in case of a crash, all pending meta-data operations can be quickly either rolled-back or completed from the logging area after the system comes up again, resulting in a fast filesystem startup.

Kirk McKusick, the developer of Berkeley FFS, solved this problem with Soft Updates: all pending meta-data updates are kept in memory and written out to disk in a sorted sequence (ordered meta-data updates). This has the effect that, in case of heavy meta-data operations, later updates to an item catch the earlier ones if the earlier ones are still in memory and have not already been written to disk. So all operations on, say, a directory are generally performed in memory before the update is written to disk (the data blocks are sorted according to their position so that they will not be on the disk ahead of their meta-data). If the system crashes, this causes an implicit log rewind: all operations which did not find their way to the disk appear as if they had never happened. A consistent filesystem state is maintained that appears to be the one of 30 to 60 seconds earlier. The algorithm used guarantees that all resources in use are marked as such in their appropriate bitmaps: blocks and inodes. After a crash, the only resource allocation error that occurs is that resources are marked as used which are actually free. fsck(8) recognizes this situation, and frees the resources that are no longer used. It is safe to ignore the dirty state of the filesystem after a crash by forcibly mounting it with mount -f. In order to free resources that may be unused, fsck(8) needs to be run at a later time.

The advantage is that meta-data operations are nearly as fast as asynchronous updates (i.e. faster than with logging, which has to write the meta-data twice). The disadvantages are the complexity of the code (implying a higher risk for bugs in an area that is highly sensitive regarding loss of user data), and a higher memory consumption. Additionally there are some idiosyncrasies one has to get used to. After a crash, the state of the filesystem appears to be somewhat older. In situations where the standard synchronous approach would have caused some zero-length files to remain after the fsck, these files do not exist at all with a Soft Updates filesystem because neither the meta-data nor the file contents have ever been written to disk. Disk space is not released until the updates have been written to disk, which may take place some time after running rm. This may cause problems when installing large amounts of data on a filesystem that does not have enough free space to hold all the files twice.

Tuning Kernel Limits

File/Process Limits

kern.maxfiles

kern.maxfiles can be raised or lowered based upon your system requirements. This variable indicates the maximum number of file descriptors on your system. When the file descriptor table is full, file: table is full will show up repeatedly in the system message buffer, which can be viewed with the dmesg command.

Each open file, socket, or fifo uses one file descriptor. A large-scale production server may easily require many thousands of file descriptors, depending on the kind and number of services running concurrently.

The default value of kern.maxfiles is dictated by the MAXUSERS option in your kernel configuration file. kern.maxfiles grows proportionally to the value of MAXUSERS. When compiling a custom kernel, it is a good idea to set this kernel configuration option according to the uses of your system. From this number, the kernel is given most of its pre-defined limits. Even though a production machine may not actually have 256 users connected at once, the resources needed may be similar to a high-scale web server.

Note: Setting MAXUSERS to 0 in your kernel configuration file will choose a reasonable default value based on the amount of RAM present in your system. It is set to 0 in the default GENERIC kernel.
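If you would rather raise the limit directly, a sketch of doing so at run time and keeping it across reboots (65536 is an arbitrary example value; pick one that matches your workload):

# sysctl kern.maxfiles=65536

and in /etc/sysctl.conf:

kern.maxfiles=65536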

kern.ipc.somaxconn

The kern.ipc.somaxconn sysctl variable limits the size of the listen queue for accepting new TCP connections. The default value of 128 is typically too low for robust handling of new connections in a heavily loaded web server environment. For such environments, it is recommended to increase this value to 1024 or higher. The service daemon may itself limit the listen queue size (e.g., sendmail(8) or Apache) but will often have a directive in its configuration file to adjust the queue size. Large listen queues also do a better job of avoiding Denial of Service (DoS) attacks.
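For example, to apply the suggested value immediately and keep it after a reboot, run the first command and add the second line to /etc/sysctl.conf:

# sysctl kern.ipc.somaxconn=1024

kern.ipc.somaxconn=1024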

Network Limits

The NMBCLUSTERS kernel configuration option dictates the amount of network Mbufs available to the system. A heavily-trafficked server with a low number of Mbufs will hinder DragonFly's network performance. Each cluster represents approximately 2 K of memory, so a value of 1024 represents 2 megabytes of kernel memory reserved for network buffers. A simple calculation can be done to figure out how many are needed. If you have a web server which maxes out at 1000 simultaneous connections, and each connection eats a 16 K receive and 16 K send buffer, you need approximately 32 MB worth of network buffers to cover the web server. A good rule of thumb is to multiply by 2, so 2 x 32 MB / 2 KB = 64 MB / 2 KB = 32768 clusters. We recommend values between 4096 and 32768 for machines with greater amounts of memory. Under no circumstances should you specify an arbitrarily high value for this parameter as it could lead to a boot time crash. The -m option to netstat(1) may be used to observe network cluster use. The kern.ipc.nmbclusters loader tunable should be used to tune this at boot time.
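Continuing the calculation above, a sketch of the corresponding /boot/loader.conf entry (32768 is the example figure; adjust it to your own numbers), followed by the command to observe cluster usage afterwards:

kern.ipc.nmbclusters=32768

% netstat -m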

For busy servers that make extensive use of the sendfile(2) system call, it may be necessary to increase the number of sendfile(2) buffers via the NSFBUFS kernel configuration option or by setting its value in /boot/loader.conf (see loader(8) for details). A common indicator that this parameter needs to be adjusted is when processes are seen in the sfbufa state. The sysctl variable kern.ipc.nsfbufs is a read-only glimpse at the kernel-configured value. This parameter nominally scales with kern.maxusers; however, it may be necessary to tune it accordingly.

Important: Even though a socket has been marked as non-blocking, calling sendfile(2) on the non-blocking socket may result in the sendfile(2) call blocking until enough struct sf_buf's are made available.

net.inet.ip.portrange.*

The net.inet.ip.portrange.* sysctl variables control the port number ranges automatically bound to TCP and UDP sockets. There are three ranges: a low range, a default range, and a high range. Most network programs use the default range which is controlled by the net.inet.ip.portrange.first and net.inet.ip.portrange.last, which default to 1024 and 5000, respectively. Bound port ranges are used for outgoing connections, and it is possible to run the system out of ports under certain circumstances. This most commonly occurs when you are running a heavily loaded web proxy. The port range is not an issue when running servers which handle mainly incoming connections, such as a normal web server, or has a limited number of outgoing connections, such as a mail relay. For situations where you may run yourself out of ports, it is recommended to increase net.inet.ip.portrange.last modestly. A value of 10000, 20000 or 30000 may be reasonable. You should also consider firewall effects when changing the port range. Some firewalls may block large ranges of ports (usually low-numbered ports) and expect systems to use higher ranges of ports for outgoing connections -- for this reason it is recommended that net.inet.ip.portrange.first be lowered.
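As an illustration, raising the upper end of the default range could be done with a line like the following in /etc/sysctl.conf (the value is only an example):

net.inet.ip.portrange.last=20000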

TCP Bandwidth Delay Product

The TCP Bandwidth Delay Product Limiting is similar to TCP/Vegas in NetBSD. It can be enabled by setting the net.inet.tcp.inflight_enable sysctl variable to 1. The system will attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput.

This feature is useful if you are serving data over modems, Gigabit Ethernet, or even high speed WAN links (or any other link with a high bandwidth delay product), especially if you are also using window scaling or have configured a large send window. If you enable this option, you should also be sure to set net.inet.tcp.inflight_debug to 0 (disable debugging), and for production use setting net.inet.tcp.inflight_min to at least 6144 may be beneficial. However, note that setting high minimums may effectively disable bandwidth limiting depending on the link. The limiting feature reduces the amount of data built up in intermediate route and switch packet queues as well as reducing the amount of data built up in the local host's interface queue. With fewer packets queued up, interactive connections, especially over slow modems, will also be able to operate with lower Round Trip Times. However, note that this feature only affects data transmission (uploading / server side). It has no effect on data reception (downloading).

Adjusting net.inet.tcp.inflight_stab is not recommended. This parameter defaults to 20, representing 2 maximal packets added to the bandwidth delay product window calculation. The additional window is required to stabilize the algorithm and improve responsiveness to changing conditions, but it can also result in higher ping times over slow links (though still much lower than you would get without the inflight algorithm). In such cases, you may wish to try reducing this parameter to 15, 10, or 5; and may also have to reduce net.inet.tcp.inflight_min (for example, to 3500) to get the desired effect. Reducing these parameters should be done as a last resort only.
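Putting the suggestions above together, a possible /etc/sysctl.conf fragment for production use might look like this (treat the values as starting points, not definitive settings):

net.inet.tcp.inflight_enable=1

net.inet.tcp.inflight_debug=0

net.inet.tcp.inflight_min=6144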

Adding Swap Space

No matter how well you plan, sometimes a system does not run as you expect. If you find you need more swap space, it is simple enough to add. You have three ways to increase swap space: adding a new hard drive, enabling swap over NFS, and creating a swap file on an existing partition.

Swap on a New Hard Drive

The best way to add swap, of course, is to use this as an excuse to add another hard drive. You can always use another hard drive, after all. If you can do this, go reread the discussion about swap space in [configtuning-initial.html Section 6.2] for some suggestions on how to best arrange your swap.

Swapping over NFS

Swapping over NFS is only recommended if you do not have a local hard disk to swap to. Even though DragonFly has an excellent NFS implementation, NFS swapping will be limited by the available network bandwidth and puts an additional burden on the NFS server.

Swapfiles

You can create a file of a specified size to use as a swap file. In our example here we will use a 64MB file called /usr/swap0. You can use any name you want, of course.

Example 6-1. Creating a Swapfile

  1. Be certain that your kernel configuration includes the vnode driver. It is not in recent versions of GENERIC.

     pseudo-device   vn 1   #Vnode driver (turns a file into a device)
    
  2. Create a swapfile (/usr/swap0):

     # dd if=/dev/zero of=/usr/swap0 bs=1024k count=64
    
  3. Set proper permissions on (/usr/swap0):

     # chmod 0600 /usr/swap0
    
  4. Enable the swap file in /etc/rc.conf:

     swapfile="/usr/swap0"   # Set to name of swapfile if aux swapfile desired.
    
  5. Reboot the machine, or to enable the swap file immediately, type:

     # vnconfig -e /dev/vn0b /usr/swap0 swap
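Once the swap file has been enabled, you should be able to confirm that it is in use with swapinfo(8) or pstat(8):

# swapinfo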
    

Power and Resource Management

*Written by Hiten Pandya and Tom Rhodes. *

It is very important to utilize hardware resources in an efficient manner. Before ACPI was introduced, managing the power usage and thermal properties of a system was difficult and inflexible for operating systems. The hardware was controlled by some sort of BIOS-embedded interface, such as Plug and Play BIOS (PNPBIOS), or Advanced Power Management (APM) and so on. Power and Resource Management is one of the key components of a modern operating system. For example, you may want an operating system to monitor system limits (and possibly alert you) in case your system temperature increases unexpectedly.

In this section, we will provide comprehensive information about ACPI. References will be provided for further reading at the end. Please be aware that ACPI is available on DragonFly systems as a default kernel module.

What Is ACPI?

Advanced Configuration and Power Interface (ACPI) is a standard written by an alliance of vendors to provide a standard interface for hardware resources and power management (hence the name). It is a key element in Operating System-directed configuration and Power Management, i.e.: it provides more control and flexibility to the operating system (OS). Modern systems stretched the limits of the current Plug and Play interfaces (such as APM), prior to the introduction of ACPI. ACPI is the direct successor to APM (Advanced Power Management).

Shortcomings of Advanced Power Management (APM)

The Advanced Power Management (APM) facility controls the power usage of a system based on its activity. The APM BIOS is supplied by the (system) vendor and it is specific to the hardware platform. An APM driver in the OS mediates access to the APM Software Interface, which allows management of power levels.

There are four major problems in APM. Firstly, power management is done by the (vendor-specific) BIOS, and the OS does not have any knowledge of it. One example of this is when the user sets idle-time values for a hard drive in the APM BIOS; when these are exceeded, the BIOS spins down the hard drive without the consent of the OS. Secondly, the APM logic is embedded in the BIOS, and it operates outside the scope of the OS. This means users can only fix problems in their APM BIOS by flashing a new one into the ROM, which is a very dangerous procedure that, if it fails, could leave the system in an unrecoverable state. Thirdly, APM is a vendor-specific technology, which means that there is a lot of duplication of effort, and bugs found in one vendor's BIOS may not be solved in others. Last but not least, the APM BIOS did not have enough room to implement a sophisticated power policy, or one that can adapt very well to the purpose of the machine.

Plug and Play BIOS (PNPBIOS) was unreliable in many situations. PNPBIOS is 16-bit technology, so the OS has to use 16-bit emulation in order to interface with PNPBIOS methods.

The DragonFly APM driver is documented in the apm(4) manual page.

Configuring ACPI

The acpi.ko driver is loaded by default at start up by the loader(8) and should not be compiled into the kernel. The reasoning behind this is that modules are easier to work with, say if switching to another acpi.ko without doing a kernel rebuild. This has the advantage of making testing easier. Another reason is that starting ACPI after a system has been brought up is not too useful, and in some cases can be fatal. If in doubt, just disable ACPI altogether. This driver should not and cannot be unloaded because the system bus uses it for various hardware interactions. ACPI can be disabled with the acpiconf(8) utility. In fact most of the interaction with ACPI can be done via acpiconf(8). Basically this means that if anything about ACPI is in the dmesg(8) output, then most likely it is already running.

Note: ACPI and APM cannot coexist and should be used separately. The last one to load will terminate if the driver notices the other running.

In the simplest form, ACPI can be used to put the system into a sleep mode with acpiconf(8), the -s flag, and a 1-5 option. Most users will only need 1. Option 5 will do a soft-off which is the same action as:

# halt -p

Other options are available; check out the acpiconf(8) manual page for more information.
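For example, to suspend to the S1 state (assuming it is supported by your hardware; see the suspend/resume discussion below):

# acpiconf -s 1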

Using and Debugging DragonFly ACPI

*Written by Nate Lawson. With contributions from Peter Schultz and Tom Rhodes. *

ACPI is a fundamentally new way of discovering devices, managing power usage, and providing standardized access to various hardware previously managed by the BIOS. Progress is being made toward ACPI working on all systems, but bugs in some motherboards' ACPI Machine Language (AML) bytecode, incompleteness in DragonFly's kernel subsystems, and bugs in the Intel ACPI-CA interpreter continue to appear.

This document is intended to help you assist the DragonFly ACPI maintainers in identifying the root cause of problems you observe and debugging and developing a solution. Thanks for reading this and we hope we can solve your system's problems.

Submitting Debugging Information

Note: Before submitting a problem, be sure you are running the latest BIOS version and, if available, embedded controller firmware version.

For those of you that want to submit a problem right away, please send the following information to the bugs mailing list.

Background

ACPI is present in all modern computers that conform to the ia32 (x86), ia64 (Itanium), and amd64 (AMD) architectures. The full standard has many features including CPU performance management, power planes control, thermal zones, various battery systems, embedded controllers, and bus enumeration. Most systems implement less than the full standard. For instance, a desktop system usually only implements the bus enumeration parts while a laptop might have cooling and battery management support as well. Laptops also have suspend and resume, with their own associated complexity.

An ACPI-compliant system has various components. The BIOS and chipset vendors provide various fixed tables (e.g., FADT) in memory that specify things like the APIC map (used for SMP), config registers, and simple configuration values. Additionally, a table of bytecode (the Differentiated System Description Table DSDT) is provided that specifies a tree-like name space of devices and methods.

The ACPI driver must parse the fixed tables, implement an interpreter for the bytecode, and modify device drivers and the kernel to accept information from the ACPI subsystem. For DragonFly, Intel has provided an interpreter (ACPI-CA) that is shared with Linux and NetBSD®. The path to the ACPI-CA source code is src/sys/dev/acpica5. Finally, drivers that implement various ACPI devices are found in src/sys/dev/acpica5.

Common Problems

For ACPI to work correctly, all the parts have to work correctly. Here are some common problems, in order of frequency of appearance, and some possible workarounds or fixes.

Suspend/Resume

ACPI has three suspend to RAM (STR) states, S1-S3, and one suspend to disk state (STD), called S4. S5 is soft off and is the normal state your system is in when plugged in but not powered up. S4 can actually be implemented two separate ways. S4BIOS is a BIOS-assisted suspend to disk. S4OS is implemented entirely by the operating system.

Start by checking sysctl hw.acpi for the suspend-related items. Here are the results for my Thinkpad:

hw.acpi.supported_sleep_state: S3 S4 S5

hw.acpi.s4bios: 0

This means that I can use acpiconf -s to test S3, S4OS, and S5. If s4bios was one (1), I would have S4BIOS support instead of S4OS.

When testing suspend/resume, start with S1, if supported. This state is most likely to work since it doesn't require much driver support. No one has implemented S2 but if you have it, it's similar to S1. The next thing to try is S3. This is the deepest STR state and requires a lot of driver support to properly reinitialize your hardware. If you have problems resuming, feel free to email the bugs list but do not expect the problem to be resolved since there are a lot of drivers/hardware that need more testing and work.

To help isolate the problem, remove as many drivers from your kernel as possible. If it works, you can narrow down which driver is the problem by loading drivers until it fails again. Typically binary drivers like nvidia.ko, X11 display drivers, and USB will have the most problems while Ethernet interfaces usually work fine. If you can load/unload the drivers ok, you can automate this by putting the appropriate commands in /etc/rc.suspend and /etc/rc.resume. There is a commented-out example for unloading and loading a driver. Try setting hw.acpi.reset_video to zero (0) if your display is messed up after resume. Try setting longer or shorter values for hw.acpi.sleep_delay to see if that helps.
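These tunables can also be experimented with directly from the command line, assuming they are exposed as run-time sysctls on your system (otherwise set them as tunables in /boot/loader.conf); the values below are only starting points, not recommendations:

# sysctl hw.acpi.reset_video=0

# sysctl hw.acpi.sleep_delay=5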

Another thing to try is to load a recent Linux distribution with ACPI support and test their suspend/resume support on the same hardware. If it works on Linux, it's likely a DragonFly driver problem, and narrowing down which driver causes the problems will help us fix the problem. Note that the ACPI maintainers do not usually maintain other drivers (e.g. sound, ATA, etc.), so any work done on tracking down a driver problem should probably eventually be posted to the bugs list and mailed to the driver maintainer. If you are feeling adventurous, go ahead and start putting some debugging printf(3)s in a problematic driver to track down where in its resume function it hangs.

Finally, try disabling ACPI and enabling APM instead. If suspend/resume works with APM, you may be better off sticking with APM, especially on older hardware (pre-2000). It took vendors a while to get ACPI support correct and older hardware is more likely to have BIOS problems with ACPI.


System Hangs (temporary or permanent)

Most system hangs are a result of lost interrupts or an interrupt storm. Chipsets have a lot of problems based on how the BIOS configures interrupts before boot, correctness of the APIC (MADT) table, and routing of the System Control Interrupt (SCI).

Interrupt storms can be distinguished from lost interrupts by checking the output of vmstat -i and looking at the line that has acpi0. If the counter is increasing at more than a couple per second, you have an interrupt storm. If the system appears hung, try breaking to DDB ( CTRL + ALT + ESC on console) and type show interrupts.

Your best hope when dealing with interrupt problems is to try disabling APIC support with hint.apic.0.disabled="1" in loader.conf.

Panics

Panics are relatively rare for ACPI and are the top priority to be fixed. The first step is to isolate the steps to reproduce the panic (if possible) and get a backtrace. Follow the advice for enabling options DDB and setting up a serial console (see this section) or setting up a dump(8) partition. You can get a backtrace in DDB with tr. If you have to handwrite the backtrace, be sure to at least get the lowest five (5) and top five (5) lines in the trace.

Then, try to isolate the problem by booting with ACPI disabled. If that works, you can isolate the ACPI subsystem by using various values of debug.acpi.disable. See the acpi(4) manual page for some examples.

System Powers Up After Suspend or Shutdown

First, try setting hw.acpi.disable_on_poweroff=0 in loader.conf(5). This keeps ACPI from disabling various events during the shutdown process. Some systems need this value set to 1 (the default) for the same reason. This usually fixes the problem of a system powering up spontaneously after a suspend or poweroff.

Other Problems

If you have other problems with ACPI (working with a docking station, devices not detected, etc.), please email a description to the mailing list as well; however, some of these issues may be related to unfinished parts of the ACPI subsystem so they might take a while to be implemented. Please be patient and prepared to test patches we may send you.

ASL, acpidump, and IASL

The most common problem is the BIOS vendors providing incorrect (or outright buggy!) bytecode. This is usually manifested by kernel console messages like this:

ACPI-1287: *** Error: Method execution failed [\\_SB_.PCI0.LPC0.FIGD._STA] \\

(Node 0xc3f6d160), AE_NOT_FOUND

Often, you can resolve these problems by updating your BIOS to the latest revision. Most console messages are harmless but if you have other problems like battery status not working, they're a good place to start looking for problems in the AML. The bytecode, known as AML, is compiled from a source language called ASL. The AML is found in the table known as the DSDT. To get a copy of your ASL, use acpidump(8). You should use both the -t (show contents of the fixed tables) and -d (disassemble AML to ASL) options. See the submitting Debugging Information section for an example syntax.
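A sketch of dumping your system's ASL to a file (the output filename is arbitrary):

# acpidump -t -d > my-system.asl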

The simplest first check you can do is to recompile your ASL to check for errors. Warnings can usually be ignored but errors are bugs that will usually prevent ACPI from working correctly. To recompile your ASL, issue the following command:

# iasl your.asl

Fixing Your ASL

In the long run, our goal is for almost everyone to have ACPI work without any user intervention. At this point, however, we are still developing workarounds for common mistakes made by the BIOS vendors. The Microsoft interpreter (acpi.sys and acpiec.sys) does not strictly check for adherence to the standard, and thus many BIOS vendors who only test ACPI under Windows never fix their ASL. We hope to continue to identify and document exactly what non-standard behavior is allowed by Microsoft's interpreter and replicate it so DragonFly can work without forcing users to fix the ASL. As a workaround and to help us identify behavior, you can fix the ASL manually. If this works for you, please send a diff(1) of the old and new ASL so we can possibly work around the buggy behavior in ACPI-CA and thus make your fix unnecessary.

Here is a list of common error messages, their cause, and how to fix them:

OS dependencies

Some AML assumes the world consists of various Windows versions. You can tell DragonFly to claim it is any OS to see if this fixes problems you may have. An easy way to override this is to set hw.acpi.osname="Windows 2001" in /boot/loader.conf, or to another similar string you find in the ASL.

Missing Return statements

Some methods do not explicitly return a value as the standard requires. While ACPI-CA does not handle this, DragonFly has a workaround that allows it to return the value implicitly. You can also add explicit Return statements where required if you know what value should be returned. To force iasl to compile the ASL, use the -f flag.

Overriding the Default AML

After you have customized your.asl, compile it by running:

# iasl your.asl

You can add the -f flag to force creation of the AML, even if there are errors during compilation. Remember that some errors (e.g., missing Return statements) are automatically worked around by the interpreter.

DSDT.aml is the default output filename for iasl. You can load this instead of your BIOS's buggy copy (which is still present in flash memory) by editing /boot/loader.conf as follows:

acpi_dsdt_load="YES"

acpi_dsdt_name="/boot/DSDT.aml"

Be sure to copy your DSDT.aml to the /boot directory.

Getting Debugging Output From ACPI

The ACPI driver has a very flexible debugging facility. It allows you to specify a set of subsystems as well as the level of verbosity. The subsystems you wish to debug are specified as layers and are broken down into ACPI-CA components (ACPI_ALL_COMPONENTS) and ACPI hardware support (ACPI_ALL_DRIVERS). The verbosity of debugging output is specified as the level and ranges from ACPI_LV_ERROR (just report errors) to ACPI_LV_VERBOSE (everything). The level is a bitmask so multiple options can be set at once, separated by spaces. In practice, you will want to use a serial console to log the output if it is so long it flushes the console message buffer.

Debugging output is not enabled by default. To enable it, add options ACPI_DEBUG to your kernel config if ACPI is compiled into the kernel. You can add ACPI_DEBUG=1 to your /etc/make.conf to enable it globally. If it is a module, you can recompile just your acpi.ko module as follows:

# cd /sys/dev/acpica5 && make clean && make ACPI_DEBUG=1

Install acpi.ko in /boot/kernel and add your desired level and layer to loader.conf. This example enables debug messages for all ACPI-CA components and all ACPI hardware drivers (CPU, LID, etc.) It will only output error messages, the least verbose level.

debug.acpi.layer="ACPI_ALL_COMPONENTS ACPI_ALL_DRIVERS"

debug.acpi.level="ACPI_LV_ERROR"

If the information you want is triggered by a specific event (say, a suspend and then resume), you can leave out changes to loader.conf and instead use sysctl to specify the layer and level after booting and preparing your system for the specific event. The sysctls are named the same as the tunables in loader.conf.
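For example, to switch to the most verbose output for all layers just before triggering the event in question:

# sysctl debug.acpi.layer="ACPI_ALL_COMPONENTS ACPI_ALL_DRIVERS"

# sysctl debug.acpi.level="ACPI_LV_VERBOSE"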

References

More information about ACPI may be found in the following locations:

The DragonFly virtual kernels

Obtained from vkernel(7) written by Sascha Wildner, added by Matthias Schmidt

The idea behind the development of the vkernel architecture was to find an elegant solution to debugging of the kernel and its components. It eases debugging, as it allows a virtual kernel to be loaded and debugged in userland without affecting the real kernel itself. Because a virtual kernel can be loaded on a running system, it also removes the need for reboots between kernel compiles.

The vkernel architecture allows for running DragonFly kernels in userland.

Supported devices

A number of virtual device drivers exist to supplement the virtual kernel.

Disk device

The vkd driver allows for up to 16 vn(4) based disk devices. The root device will be vkd0.

CD-ROM device

The vcd driver allows for up to 16 virtual CD-ROM devices. Basically this is a read only vkd device with a block size of 2048.

Network interface

The vke driver supports up to 16 virtual network interfaces which are associated with tap(4) devices on the host. For each vke device, the per-interface read-only sysctl(3) variable hw.vkeX.tap_unit holds the unit number of the associated tap(4) device.

Setup a virtual kernel environment

A couple of steps are necessary in order to prepare the system to build and run a virtual kernel.

Setting up the filesystem

The vkernel architecture needs a number of files which reside in /var/vkernel. Since these files tend to get rather big and the /var partition is usually of limited size, we recommend creating the directory on the /home partition and linking to it from /var:

% mkdir /home/var.vkernel
% ln -s /home/var.vkernel /var/vkernel

Next, a filesystem image to be used by the virtual kernel has to be created and populated (assuming world has been built previously):

# dd if=/dev/zero of=/var/vkernel/rootimg.01 bs=1m count=2048
# vnconfig -c vn0 /var/vkernel/rootimg.01
# disklabel -r -w vn0s0 auto
# disklabel -e vn0s0      # add 'a' partition with fstype `4.2BSD' size could be '*'
# newfs /dev/vn0s0a
# mount /dev/vn0s0a /mnt

If instead of using vn0 you specify vn to vnconfig, a new vn device will be created and a message saying which vnX was created will appear. This effectively lifts the limit of 4 vn devices.

Assuming that you have built world before, you can now populate the image. If you have not built world, see chapter 21.

# cd /usr/src
# make installworld DESTDIR=/mnt
# cd etc
# make distribution DESTDIR=/mnt

Create a fstab file to let the vkernel find your image file.

# echo '/dev/vkd0s0a      /       ufs     rw      1  1' >/mnt/etc/fstab
# echo 'proc              /proc   procfs  rw      0  0' >>/mnt/etc/fstab

Edit /mnt/etc/ttys, replace the console entry with the following line, and turn off all other gettys.

console "/usr/libexec/getty Pc"         cons25  on  secure

Then, unmount the disk.

# umount /mnt
# vnconfig -u vn0

Compiling the virtual kernel

In order to compile a virtual kernel use the VKERNEL kernel configuration file residing in /usr/src/sys/config (or a configuration file derived thereof):

# cd /usr/src
# make -DNO_MODULES buildkernel KERNCONF=VKERNEL
# make -DNO_MODULES installkernel KERNCONF=VKERNEL DESTDIR=/var/vkernel

Enabling virtual kernel operation

A special sysctl(8), vm.vkernel_enable, must be set to enable vkernel operation:

# sysctl vm.vkernel_enable=1

To make this change permanent, edit /etc/sysctl.conf.
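In that case /etc/sysctl.conf should contain the line:

vm.vkernel_enable=1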

Setup networking

Configuring the network on the host system

In order to access a network interface of the host system from the vkernel, you must add the interface to a bridge(4) device which will then be passed to the -I option:

# kldload if_bridge.ko
# kldload if_tap.ko
# ifconfig bridge0 create
# ifconfig bridge0 addm re0       # assuming re0 is the host's interface
# ifconfig bridge0 up

Note: Change re0 to the network interface of your host machine.

Run a virtual kernel

Finally, the virtual kernel can be run:

# cd /var/vkernel
# ./boot/kernel/kernel -m 64m -r /var/vkernel/rootimg.01 -I auto:bridge0

You can issue the reboot(8), halt(8), or shutdown(8) commands from inside a virtual kernel. After doing a clean shutdown the reboot(8) command will re-exec the virtual kernel binary while the other two will cause the virtual kernel to exit.

The DragonFly Booting Process

Synopsis

The process of starting a computer and loading the operating system is referred to as the bootstrap process, or simply booting. DragonFly's boot process provides a great deal of flexibility in customizing what happens when you start the system, allowing you to select from different operating systems installed on the same computer, or even different versions of the same operating system or installed kernel.

This chapter details the configuration options you can set and how to customize the DragonFly boot process. This includes everything that happens until the DragonFly kernel has started, probed for devices, and started init(8). If you are not quite sure when this happens, it occurs when the text color changes from bright white to grey.

After reading this chapter, you will know:

The Booting Problem

Turning on a computer and starting the operating system poses an interesting dilemma. By definition, the computer does not know how to do anything until the operating system is started. This includes running programs from the disk. So if the computer can not run a program from the disk without the operating system, and the operating system programs are on the disk, how is the operating system started?

This problem parallels one in the book The Adventures of Baron Munchausen. A character had fallen partway down a manhole and pulled himself out by grabbing his bootstraps and lifting. In the early days of computing the term bootstrap was applied to the mechanism used to load the operating system, which has since been shortened to booting.

On x86 hardware the Basic Input/Output System (BIOS) is responsible for loading the operating system. To do this, the BIOS looks on the hard disk for the Master Boot Record (MBR), which must be located on a specific place on the disk. The BIOS has enough knowledge to load and run the MBR, and assumes that the MBR can then carry out the rest of the tasks involved in loading the operating system possibly with the help of the BIOS.

The code within the MBR is usually referred to as a boot manager, especially when it interacts with the user. In this case the boot manager usually has more code in the first track of the disk or within some OS's file system. (A boot manager is sometimes also called a boot loader, but DragonFly uses that term for a later stage of booting.) Popular boot managers include boot0 (a.k.a. Boot Easy, the standard DragonFly boot manager), Grub, GAG, and LILO. (Only boot0 fits within the MBR.)

If you have only one operating system installed on your disks then a standard PC MBR will suffice. This MBR searches for the first bootable (a.k.a. active) slice on the disk, and then runs the code on that slice to load the remainder of the operating system. The MBR installed by fdisk(8), by default, is such an MBR. It is based on /boot/mbr.

If you have installed multiple operating systems on your disks then you can install a different boot manager, one that can display a list of different operating systems, and allows you to choose the one to boot from. Two of these are discussed in the next subsection.

The remainder of the DragonFly bootstrap system is divided into three stages. The first stage is run by the MBR, which knows just enough to get the computer into a specific state and run the second stage. The second stage can do a little bit more, before running the third stage. The third stage finishes the task of loading the operating system. The work is split into these three stages because the PC standards put limits on the size of the programs that can be run at stages one and two. Chaining the tasks together allows DragonFly to provide a more flexible loader.

The kernel is then started and it begins to probe for devices and initialize them for use. Once the kernel boot process is finished, the kernel passes control to the user process init(8), which then makes sure the disks are in a usable state. init(8) then starts the user-level resource configuration which mounts file systems, sets up network cards to communicate on the network, and generally starts all the processes that usually are run on a DragonFly system at startup.


The Boot Manager and Boot Stages

The Boot Manager

The code in the MBR or boot manager is sometimes referred to as stage zero of the boot process. This subsection discusses two of the boot managers previously mentioned: boot0 and LILO .

The boot0 Boot Manager: The MBR installed by DragonFly's installer or boot0cfg(8), by default, is based on /boot/boot0. (The boot0 program is very simple, since the program in the MBR can only be 446 bytes long because of the slice table and 0x55AA identifier at the end of the MBR.) If you have installed boot0 and multiple operating systems on your hard disks, then you will see a display similar to this one at boot time:

Example 7-1. boot0 Screenshot

F1 DOS

F2 FreeBSD

F3 Linux

F4 ??

F5 Drive 1



Default: F2

Other operating systems, in particular Windows®, have been known to overwrite an existing MBR with their own. If this happens to you, or you want to replace your existing MBR with the DragonFly MBR then use the following command:

# fdisk -B -b /boot/boot0 device

where ***device*** is the device that you boot from, such as ad0 for the first IDE disk, ad2 for the first IDE disk on a second IDE controller, da0 for the first SCSI disk, and so on. Or, if you want a custom configuration of the MBR, use boot0cfg(8).
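
For example, a minimal boot0cfg(8) invocation that (re)installs boot0 on the first IDE disk might look like the following; see its manual page for options such as timeouts and default selections:

# boot0cfg -B ad0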

The LILO Boot Manager: To install this boot manager so it will also boot DragonFly, first start Linux and add the following to your existing /etc/lilo.conf configuration file:

other=/dev/hdXY

table=/dev/hdX

loader=/boot/chain.b

label=DragonFly

In the above, specify DragonFly's primary partition and drive using Linux specifiers, replacing ***X*** with the Linux drive letter and ***Y*** with the Linux primary partition number. If you are using a SCSI drive, you will need to change ***/dev/hd*** to read something similar to ***/dev/sd***. The loader=/boot/chain.b line can be omitted if you have both operating systems on the same drive. Now run /sbin/lilo -v to commit your new changes to the system; this should be verified by checking its screen messages.

Stage One, /boot/boot1, and Stage Two, /boot/boot2

Conceptually the first and second stages are part of the same program, on the same area of the disk. Because of space constraints they have been split into two, but you would always install them together. They are copied from the combined file /boot/boot by the installer or disklabel (see below).

They are located outside file systems, in the first track of the boot slice, starting with the first sector. This is where boot0, or any other boot manager, expects to find a program to run which will continue the boot process. The number of sectors used is easily determined from the size of /boot/boot.

The files in the /boot directory are copies of the real files, which are stored outside of the DragonFly file system.

boot1 is very simple, since it can only be 512 bytes in size, and knows just enough about the DragonFly disklabel, which stores information about the slice, to find and execute boot2.

boot2 is slightly more sophisticated, and understands the DragonFly file system enough to find files on it, and can provide a simple interface to choose the kernel or loader to run.

Since the loader is much more sophisticated, and provides a nice easy-to-use boot configuration, boot2 usually runs it, but previously it was tasked with running the kernel directly.

Example 7-2. boot2 Screenshot

>> DragonFly/i386 BOOT

Default: 0:ad(0,a)/boot/loader

boot:

If you ever need to replace the installed boot1 and boot2 use disklabel(8):

# disklabel -B diskslice

where ***diskslice*** is the disk and slice you boot from, such as ad0s1 for the first slice on the first IDE disk.

Stage Three, /boot/loader

The loader is the final stage of the three-stage bootstrap, and is located on the file system, usually as /boot/loader.

The loader is intended as a user-friendly method for configuration, using an easy-to-use built-in command set, backed up by a more powerful interpreter, with a more complex command set.

Loader Program Flow

During initialization, the loader will probe for a console and for disks, and figure out what disk it is booting from. It will set variables accordingly, and an interpreter is started where user commands can be passed from a script or interactively.

The loader will then read /boot/loader.rc, which by default reads in /boot/defaults/loader.conf which sets reasonable defaults for variables and reads /boot/loader.conf for local changes to those variables. loader.rc then acts on these variables, loading whichever modules and kernel are selected.

Finally, by default, the loader issues a 10 second wait for key presses, and boots the kernel if it is not interrupted. If interrupted, the user is presented with a prompt which understands the easy-to-use command set, where the user may adjust variables, unload all modules, load modules, and then finally boot or reboot.

Loader Built-In Commands

These are the most commonly used loader commands. For a complete discussion of all available commands, please see loader(8).

boot-conf: Goes through the same automatic configuration of modules, based on variables, as happens at boot. This only makes sense if you use unload first and change some variables, most commonly kernel.

Loader Examples

Here are some practical examples of loader usage:
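
For instance (a minimal sketch; the old-kernel path assumes it was saved to /boot/kernel.old/ as described later in this chapter), to boot your usual kernel, but in single-user mode, type the following at the loader prompt:

boot -s

Or, to unload the usual kernel and modules and boot a previously installed kernel instead:

unload
load /boot/kernel.old/kernel
boot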


Kernel Interaction During Boot

Once the kernel is loaded by either loader (as usual) or boot2 (bypassing the loader), it examines its boot flags, if any, and adjusts its behavior as necessary.

Kernel Boot Flags

Here are the more common boot flags:

-a: during kernel initialization, ask for the device to mount as the root file system.

-C: boot from CDROM.

-c: run UserConfig, the boot-time kernel configurator.

-s: boot into single-user mode.

-v: be more verbose during kernel startup.

Note: There are other boot flags; read boot(8) for more information on them.

Init: Process Control Initialization

Once the kernel has finished booting, it passes control to the user process init(8), which is located at /sbin/init, or the program path specified in the init_path variable in loader.

Automatic Reboot Sequence

The automatic reboot sequence makes sure that the file systems available on the system are consistent. If they are not, and fsck(8) cannot fix the inconsistencies, init(8) drops the system into single-user mode for the system administrator to take care of the problems directly.

Single-User Mode

This mode can be reached through the automatic reboot sequence, or by the user booting with the -s option or setting the boot_single variable in loader.

It can also be reached by calling shutdown(8) without the reboot (-r) or halt (-h) options, from multi-user mode.
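
For example, to drop from multi-user mode straight into single-user mode:

# shutdown now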

If the system console is set to insecure in /etc/ttys, then the system prompts for the root password before initiating single-user mode.

Example 7-3. An Insecure Console in /etc/ttys

# name  getty                           type    status          comments

#

# If console is marked "insecure", then init will ask for the root password

# when going to single-user mode.

console none                            unknown off insecure

Note: An insecure console means that you consider physical access to the console to be insecure, and want to make sure that only someone who knows the root password may use single-user mode; it does not mean that you want to run your console insecurely. Thus, if you want this extra security, choose insecure, not secure.

Multi-User Mode

If init(8) finds your file systems to be in order, or once the user has finished in single-user mode, the system enters multi-user mode, in which it starts the resource configuration of the system.

Resource Configuration (rc)

The resource configuration system reads in configuration defaults from /etc/defaults/rc.conf, and system-specific details from /etc/rc.conf, and then proceeds to mount the system file systems mentioned in /etc/fstab, start up networking services, start up miscellaneous system daemons, and finally runs the startup scripts of locally installed packages.
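
As an illustration, a minimal /etc/rc.conf might contain entries like the following (the hostname and the interface name re0 are placeholders; adjust them to your system):

hostname="dfly.example.org"
ifconfig_re0="DHCP"
sshd_enable="YES"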

The rc(8) manual page is a good reference to the resource configuration system, as is examining the scripts themselves.

Shutdown Sequence

Upon controlled shutdown, via shutdown(8), init(8) will attempt to run the script /etc/rc.shutdown, and then proceed to send all processes the TERM signal, and subsequently the KILL signal to any that do not terminate timely.

To power down a DragonFly machine on architectures and systems that support power management, use the command shutdown -p now to turn the power off immediately. To reboot a DragonFly system, use shutdown -r now. You need to be root or a member of the operator group to run shutdown(8). The halt(8) and reboot(8) commands can also be used; please refer to their manual pages and to shutdown(8) for more information.

Note: Power management requires acpi(4) support in the kernel or loaded as a module, or apm(4) support.

Users and Basic Account Management

Contributed by Neil Blakey-Milner.

Synopsis

Unix, including DragonFly BSD, is, as previously explained, a multi-user, multi-tasking system. It is therefore possible, and in fact very common, to have a situation where many users are logged on to one computer, and every one of these users is running many different jobs. Although only one user can physically sit at the computer and use the monitor, keyboard, and mouse connected to it, others can get their work done by logging in through the network.

After reading this chapter, you will know:

Before reading this chapter, you should:

Introduction

All access to the system is achieved via accounts, and all processes are run by users, so user and account management are of integral importance on DragonFly systems.

Every account on a DragonFly system has certain information associated with it to identify the account.

There are three main types of accounts: the Superuser, system users and user accounts. The Superuser account, usually called root, is used to manage the system with no limitations on privileges. System users run services. Finally, user accounts are used by real people, who log on, read mail, and so forth.

The Superuser Account

The superuser account, usually called root, comes preconfigured to facilitate system administration, and should not be used for day-to-day tasks like sending and receiving mail, general exploration of the system, or programming.

This is because the superuser, unlike normal user accounts, can operate without limits, and misuse of the superuser account may result in spectacular disasters. User accounts are unable to destroy the system by mistake, so it is generally best to use normal user accounts whenever possible, unless you especially need the extra privilege.

You should always double and triple-check commands you issue as the superuser, since an extra space or missing character can mean irreparable data loss.

So, the first thing you should do after reading this chapter is to create an unprivileged user account for yourself for general usage if you have not already. This applies equally whether you are running a multi-user or single-user machine. Later in this chapter, we discuss how to create additional accounts, and how to change between the normal user and superuser.

System Accounts

System users are those used to run services such as DNS, mail, web servers, and so forth. The reason for this is security; if all services ran as the superuser, they could act without restriction.

Examples of system users are daemon, operator, bind (for the Domain Name Service), and news. Often sysadmins create httpd to run web servers they install.

nobody is the generic unprivileged system user. However, it is important to keep in mind that the more services that use nobody, the more files and processes that user will become associated with, and hence the more privileged that user becomes.

User Accounts

User accounts are the primary means of access for real people to the system, and these accounts insulate the user and the environment, preventing the users from damaging the system or other users, and allowing users to customize their environment without affecting others.

Every person accessing your system should have a unique user account. This allows you to find out who is doing what, prevent people from clobbering each others' settings or reading each others' mail, and so forth.

Each user can set up their own environment to accommodate their use of the system, by using alternate shells, editors, key bindings, and language.

Modifying Accounts

There are a variety of different commands available in the UNIX® environment to manipulate user accounts. The most common commands are summarized below, followed by more detailed examples of their usage.

Command Summary
adduser(8) The recommended command-line application for adding new users.
rmuser(8) The recommended command-line application for removing users.
chpass(1) A flexible tool to change user database information.
passwd(1) The simple command-line tool to change user passwords.
pw(8) A powerful and flexible tool to modify all aspects of user accounts.

adduser

adduser(8) is a simple program for adding new users. It creates entries in the system passwd and group files. It will also create a home directory for the new user, copy in the default configuration files (dotfiles) from /usr/share/skel, and can optionally mail the new user a welcome message.

To create the initial configuration file, use adduser -s -config_create. Next, we configure adduser(8) defaults, and create our first user account, since using root for normal usage is evil and nasty.

Example 8-1. Configuring adduser and adding a user

# adduser -v

Use option -silent if you don't want to see all warnings and questions.

Check /etc/shells

Check /etc/master.passwd

Check /etc/group

Enter your default shell: csh date no sh tcsh zsh [sh]: zsh

Your default shell is: zsh -> /usr/local/bin/zsh

Enter your default HOME partition: [/home]:

Copy dotfiles from: /usr/share/skel no [/usr/share/skel]:

Send message from file: /etc/adduser.message no

[/etc/adduser.message]: no

Do not send message

Use passwords (y/n) [y]: y



Write your changes to /etc/adduser.conf? (y/n) [n]: y



Ok, let's go.

Don't worry about mistakes. I will give you the chance later to correct any input.

Enter username [a-z0-9_-]: jru

Enter full name []: J. Random User

Enter shell csh date no sh tcsh zsh [zsh]:

Enter home directory (full path) [/home/jru]:

Uid [1001]:

Enter login class: default []:

Login group jru [jru]:

Login group is ***jru***. Invite jru into other groups: guest no

[no]: wheel

Enter password []:

Enter password again []:



Name:     jru

Password: ****

Fullname: J. Random User

Uid:      1001

Gid:      1001 (jru)

Class:

Groups:   jru wheel

HOME:     /home/jru

Shell:    /usr/local/bin/zsh

OK? (y/n) [y]: y

Added user ***jru***

Copy files from /usr/share/skel to /home/jru

Add another user? (y/n) [y]: n

Goodbye!

#

In summary, we changed the default shell to zsh (an additional shell found in pkgsrc®), and turned off the sending of a welcome mail to added users. We then saved the configuration, created an account for jru, and made sure jru is in wheel group (so that she may assume the role of root with the su(1) command.)

Note: The password you type in is not echoed, nor are asterisks displayed. Make sure you do not mistype the password twice.

Note: Just use adduser(8) without arguments from now on, and you will not have to go through changing the defaults. If the program asks you to change the defaults, exit the program, and try the -s option.

rmuser

You can use rmuser(8) to completely remove a user from the system. rmuser(8) performs the following steps:

  1. Removes the user's crontab(1) entry (if any).

  2. Removes any at(1) jobs belonging to the user.

  3. Kills all processes owned by the user.

  4. Removes the user from the system's local password file.

  5. Removes the user's home directory (if it is owned by the user).

  6. Removes the incoming mail files belonging to the user from /var/mail.

  7. Removes all files owned by the user from temporary file storage areas such as /tmp.

  8. Finally, removes the username from all groups to which it belongs in /etc/group.

    Note: If a group becomes empty and the group name is the same as the username, the group is removed; this complements the per-user unique groups created by adduser(8).

rmuser(8) cannot be used to remove superuser accounts, since that is almost always an indication of massive destruction.

By default, an interactive mode is used, which attempts to make sure you know what you are doing.

Example 8-2. rmuser Interactive Account Removal

# rmuser jru

Matching password entry:

jru:*:1001:1001::0:0:J. Random User:/home/jru:/usr/local/bin/zsh

Is this the entry you wish to remove? y

Remove user's home directory (/home/jru)? y

Updating password file, updating databases, done.

Updating group file: trusted (removing group jru -- personal group is empty) done.

Removing user's incoming mail file /var/mail/jru: done.

Removing files belonging to jru from /tmp: done.

Removing files belonging to jru from /var/tmp: done.

Removing files belonging to jru from /var/tmp/vi.recover: done.

#

chpass

chpass(1) changes user database information such as passwords, shells, and personal information.

Only system administrators, as the superuser, may change other users' information and passwords with chpass(1).

When passed no options, aside from an optional username, chpass(1) displays an editor containing user information. When the user exits the editor, the user database is updated with the new information.
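
For instance, the superuser might open the record of the user jru (from the earlier examples) with:

# chpass jru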

Example 8-3. Interactive chpass by Superuser

#Changing user database information for jru.

Login: jru

Password: *

Uid [#]: 1001

Gid [# or name]: 1001

Change [month day year]:

Expire [month day year]:

Class:

Home directory: /home/jru

Shell: /usr/local/bin/zsh

Full Name: J. Random User

Office Location:

Office Phone:

Home Phone:

Other information:

The normal user can change only a small subset of this information, and only for themselves.

Example 8-4. Interactive chpass by Normal User

#Changing user database information for jru.

Shell: /usr/local/bin/zsh

Full Name: J. Random User

Office Location:

Office Phone:

Home Phone:

Other information:

Note: chfn(1) and chsh(1) are just links to chpass(1), as are ypchpass(1), ypchfn(1), and ypchsh(1). NIS support is automatic, so specifying the yp before the command is not necessary. If this is confusing to you, do not worry, NIS will be covered in [advanced-networking.html Chapter 19].

passwd

passwd(1) is the usual way to change your own password as a user, or another user's password as the superuser.

Note: To prevent accidental or unauthorized changes, the original password must be entered before a new password can be set.

Example 8-5. Changing Your Password

% passwd

Changing local password for jru.

Old password:

New password:

Retype new password:

passwd: updating the database...

passwd: done

Example 8-6. Changing Another User's Password as the Superuser

# passwd jru

Changing local password for jru.

New password:

Retype new password:

passwd: updating the database...

passwd: done

Note: As with chpass(1), yppasswd(1) is just a link to passwd(1), so NIS works with either command.

pw

pw(8) is a command line utility to create, remove, modify, and display users and groups. It functions as a front end to the system user and group files. pw(8) has a very powerful set of command line options that make it suitable for use in shell scripts, but new users may find it more complicated than the other commands presented here.
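
As a rough sketch, the jru account from the adduser example could instead be created non-interactively with pw(8) (flags as documented in its manual page), followed by passwd(1) to set the password:

# pw useradd jru -c "J. Random User" -m -s /usr/local/bin/zsh -G wheel
# passwd jru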

Notes

The -s makes adduser(8) default to quiet. We use -v later when we want to change defaults.

Limiting Users

If you have users, the ability to limit their system use may have come to mind. DragonFly provides several ways an administrator can limit the amount of system resources an individual may use. These limits are divided into two sections: disk quotas, and other resource limits.

Disk quotas limit disk usage to users, and they provide a way to quickly check that usage without calculating it every time. Quotas are discussed in [quotas.html Section 12.12].

The other resource limits include ways to limit the amount of CPU, memory, and other resources a user may consume. These are defined using login classes and are discussed here.

Login classes are defined in /etc/login.conf. The precise semantics are beyond the scope of this section, but are described in detail in the login.conf(5) manual page. It is sufficient to say that each user is assigned to a login class (default by default), and that each login class has a set of login capabilities associated with it. A login capability is a name=value pair, where name is a well-known identifier and value is an arbitrary string processed accordingly depending on the name. Setting up login classes and capabilities is rather straight-forward and is also described in login.conf(5).
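
As a sketch, a custom login class (staff is just an illustrative name) could be appended to /etc/login.conf and the capability database rebuilt with cap_mkdb(1); a user can then be assigned to the class with chpass(1) or pw(8):

staff:\
        :cputime=1h30m:\
        :datasize=256M:\
        :maxproc=64:\
        :tc=default:

# cap_mkdb /etc/login.conf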

Resource limits are different from plain vanilla login capabilities in two ways. First, for every limit, there is a soft (current) and hard limit. A soft limit may be adjusted by the user or application, but may be no higher than the hard limit. The latter may be lowered by the user, but never raised. Second, most resource limits apply per process to a specific user, not the user as a whole. Note, however, that these differences are mandated by the specific handling of the limits, not by the implementation of the login capability framework (i.e., they are not really a special case of login capabilities).

And so, without further ado, below are the most commonly used resource limits (the rest, along with all the other login capabilities, may be found in login.conf(5)).

There are a few other things to remember when setting resource limits. Following are some general tips, suggestions, and miscellaneous comments.

For further information on resource limits and login classes and capabilities in general, please consult the relevant manual pages: cap_mkdb(1), getrlimit(2), login.conf(5).

Personalizing Users

Localization is an environment set up by the system administrator or user to accommodate different languages, character sets, date and time standards, and so on. This is discussed in this chapter.

Groups

A group is simply a list of users. Groups are identified by their group name and GID (Group ID). In DragonFly (and most other UNIX®-like systems), the two factors the kernel uses to decide whether a process is allowed to do something are its user ID and the list of groups it belongs to. Unlike a user ID, a process has a list of groups associated with it. You may hear people refer to the group ID of a user or process; most of the time, this just means the first group in the list.

The group name to group ID map is in /etc/group. This is a plain text file with four colon-delimited fields. The first field is the group name, the second is the encrypted password, the third the group ID, and the fourth the comma-delimited list of members. It can safely be edited by hand (assuming, of course, that you do not make any syntax errors!). For a more complete description of the syntax, see the group(5) manual page.
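
For example, a typical line might look like this (the members shown are illustrative):

wheel:*:0:root,jru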

If you do not want to edit /etc/group manually, you can use the pw(8) command to add and edit groups. For example, to add a group called teamtwo and then confirm that it exists you can use:

Example 8-7. Adding a Group Using pw(8)

# pw groupadd teamtwo

# pw groupshow teamtwo

teamtwo:*:1100:

The number 1100 above is the group ID of the group teamtwo. Right now, teamtwo has no members, and is thus rather useless. Let's change that by inviting jru to the teamtwo group.

Example 8-8. Adding Somebody to a Group Using pw(8)

# pw groupmod teamtwo -M jru

# pw groupshow teamtwo

teamtwo:*:1100:jru

The argument to the -M option is a comma-delimited list of users who are members of the group. From the preceding sections, we know that the password file also contains a group for each user. The user is automatically added to that group by the system; the user will not show up as a member when using the groupshow command of pw(8), but will show up when the information is queried via id(1) or a similar tool. In other words, pw(8) only manipulates the /etc/group file; it will never attempt to read additional data from /etc/passwd.

Example 8-9. Using id(1) to Determine Group Membership

% id jru

uid=1001(jru) gid=1001(jru) groups=1001(jru), 1100(teamtwo)

As you can see, jru is a member of the groups jru and teamtwo.

For more information about pw(8), see its manual page, and for more information on the format of /etc/group, consult the group(5) manual page.

SSH Server on DragonFly

The best way to log in to a Unix machine across the network is with a program known as ssh.

If you try to ssh to a newly installed DragonFly system from another machine, you will get this error:

$ ssh root@172.16.50.62
ssh: connect to host 172.16.50.62 port 22: Connection refused

This is because sshd is not up and running on the DragonFly system. At this point, if you check /etc/ssh, you will only have the following files:

# ls /etc/ssh
blacklist.DSA-1024      blacklist.RSA-2048      ssh_config
blacklist.DSA-2048      blacklist.RSA-4096      sshd_config
blacklist.RSA-1024      moduli

You don't have any SSH host keys generated for the system yet!

When you start sshd for the first time, it is best to start it through the /etc/rc.d/sshd script, which will automatically generate the host keys. For this to work, you need to do the following steps (they are not essential for DragonFly 2.8.2, since sshd is already enabled in rc.conf there):

1) Enable sshd in rc.conf

# echo 'sshd_enable="YES"' >> /etc/rc.conf

2) Start the sshd server using the rc script

# /etc/rc.d/sshd start
Generating public/private rsa1 key pair.
Your identification has been saved in /etc/ssh/ssh_host_key.
Your public key has been saved in /etc/ssh/ssh_host_key.pub.
The key fingerprint is:
........
Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
........
Starting sshd.

Now if you go back and look in /etc/ssh you will find the SSH host key files too.

# ls /etc/ssh
blacklist.DSA-1024      moduli                  ssh_host_key.pub
blacklist.DSA-2048      ssh_config              ssh_host_rsa_key
blacklist.RSA-1024      ssh_host_dsa_key        ssh_host_rsa_key.pub
blacklist.RSA-2048      ssh_host_dsa_key.pub    sshd_config
blacklist.RSA-4096      ssh_host_key

At this point, if you try to ssh to the DragonFly system, you will get the following error:

$ ssh sgeorge@172.16.50.62
The authenticity of host '172.16.50.62 (172.16.50.62)' can't be established.
RSA key fingerprint is 46:77:28:c2:70:86:93:1a:23:32:5f:01:2c:80:de:de.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.50.62' (RSA) to the list of known hosts.
Permission denied (publickey).

This is because of the following configuration option in the default "/etc/ssh/sshd_config" file.

# To disable tunneled clear text passwords, change to no here!
# We disable cleartext passwords by default
PasswordAuthentication no

Change it to

PasswordAuthentication yes

and reload sshd configuration

# /etc/rc.d/sshd reload
Reloading sshd config files.

Now you can log in to the DragonFly system as a normal user.

$ ssh sgeorge@172.16.50.62
sgeorge@172.16.50.62's password:
Last login: Tue Oct 19 04:17:47 2010
Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994
        The Regents of the University of California.  All rights reserved.

DragonFly v2.7.3.1283.gfa568-DEVELOPMENT (GENERIC.MP) #3: Thu Oct 14 12:01:24 IST 2010

....

But if you try to log in via SSH as root, you will get the following error:

$ ssh root@172.16.50.62
root@172.16.50.62's password:
Permission denied, please try again.

If you investigate the DragonFly system's log /var/log/auth.log, you will find a line similar to:

Oct 19 07:29:36 dfly-vmsrv sshd[17269]: Failed password for root from 172.16.2.0 port 56447 ssh2

even if you typed the right password for root.

This is because of the following configuration option in the default /etc/ssh/sshd_config file:

# only allow root logins via public key pair
PermitRootLogin without-password

which allows only SSH key-based authentication for root.

If you change it to

PermitRootLogin yes

and reload sshd configuration

# /etc/rc.d/sshd reload
Reloading sshd config files.

you can log in as root:

$ ssh root@172.16.50.62
root@172.16.50.62's password:
Last login: Fri Oct  8 12:22:40 2010
Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994
        The Regents of the University of California.  All rights reserved.

DragonFly v2.7.3.1283.gfa568-DEVELOPMENT (GENERIC.MP) #3: Thu Oct 14 12:01:24 IST 2010

Welcome to DragonFly!
......

Now, in /var/log/auth.log, you will find a line similar to:

Oct 19 07:30:32 dfly-vmsrv sshd[17894]: Accepted password for root from 172.16.2.0 port 56468 ssh2

WARNING:

It is not advisable to allow root login with a password, especially if your system is connected to the Internet, unless you use very strong passwords. You could become the victim of SSH password-based brute-force attacks. If you are the victim of such an attack, you will find entries like the following in your /var/log/auth.log file:

Oct 18 18:54:54 cross sshd[9783]: Invalid user maryse from 218.248.26.6
Oct 18 18:54:54 cross sshd[9781]: input_userauth_request: invalid user maryse
Oct 18 18:54:54 cross sshd[9783]: Failed password for invalid user maryse from 218.248.26.6 port 34847 ssh2
Oct 18 18:54:54 cross sshd[9781]: Received disconnect from 218.248.26.6: 11: Bye Bye
Oct 18 18:54:55 cross sshd[27641]: Invalid user may from 218.248.26.6
Oct 18 18:54:55 cross sshd[3450]: input_userauth_request: invalid user may
Oct 18 18:54:55 cross sshd[27641]: Failed password for invalid user may from 218.248.26.6 port 34876 ssh2
Oct 18 18:54:55 cross sshd[3450]: Received disconnect from 218.248.26.6: 11: Bye Bye
Oct 18 18:54:56 cross sshd[8423]: Invalid user admin from 218.248.26.6
Oct 18 18:54:56 cross sshd[3131]: input_userauth_request: invalid user admin
Oct 18 18:54:56 cross sshd[8423]: Failed password for invalid user admin from 218.248.26.6 port 34905 ssh2
Oct 18 18:54:56 cross sshd[3131]: Received disconnect from 218.248.26.6: 11: Bye Bye
Oct 18 18:54:57 cross sshd[7373]: Invalid user admin from 218.248.26.6
Oct 18 18:54:57 cross sshd[28059]: input_userauth_request: invalid user admin
Oct 18 18:54:57 cross sshd[7373]: Failed password for invalid user admin from 218.248.26.6 port 34930 ssh2
Oct 18 18:54:57 cross sshd[28059]: Received disconnect from 218.248.26.6: 11: Bye Bye
Oct 18 18:54:58 cross sshd[12081]: Invalid user admin from 218.248.26.6
Oct 18 18:54:58 cross sshd[22416]: input_userauth_request: invalid user admin
Oct 18 18:54:58 cross sshd[12081]: Failed password for invalid user admin from 218.248.26.6 port 34958 ssh2
Oct 18 18:54:58 cross sshd[22416]: Received disconnect from 218.248.26.6: 11: Bye Bye

Configuring the DragonFly Kernel

Updated and restructured by Jim Mock. Originally contributed by Jake Hamby.

Synopsis

The kernel is the core of the DragonFly operating system. It is responsible for managing memory, enforcing security controls, networking, disk access, and much more. While more and more of DragonFly becomes dynamically configurable it is still occasionally necessary to reconfigure and recompile your kernel.

After reading this chapter, you will know:

Why Build a Custom Kernel?

Traditionally, DragonFly has had what is called a monolithic kernel. This means that the kernel was one large program, supported a fixed list of devices, and if you wanted to change the kernel's behavior then you had to compile a new kernel, and then reboot your computer with the new kernel.

Today, DragonFly is rapidly moving to a model where much of the kernel's functionality is contained in modules which can be dynamically loaded and unloaded from the kernel as necessary. This allows the kernel to adapt to new hardware suddenly becoming available (such as PCMCIA cards in a laptop), or for new functionality to be brought into the kernel that was not necessary when the kernel was originally compiled. This is known as a modular kernel. Colloquially these are called KLDs.

Despite this, it is still necessary to carry out some static kernel configuration. In some cases this is because the functionality is so tied to the kernel that it can not be made dynamically loadable. In others it may simply be because no one has yet taken the time to write a dynamically loadable kernel module for that functionality.

Building a custom kernel is one of the most important rites of passage nearly every UNIX® user must endure. This process, while time consuming, will provide many benefits to your DragonFly system. Unlike the GENERIC kernel, which must support a wide range of hardware, a custom kernel only contains support for your PC's hardware. This has a number of benefits, such as:

Building and Installing a Custom Kernel

First, let us take a quick tour of the kernel build directory. All directories mentioned will be relative to the main /usr/src/sys directory, which is also accessible through /sys. There are a number of subdirectories here representing different parts of the kernel, but the most important, for our purposes, are config, where you will edit your custom kernel configuration, and compile, which is the staging area where your kernel will be built. Notice the logical organization of the directory structure, with each supported device, file system, and option in its own subdirectory.

Installing the Source

If there is no /usr/src/sys directory on your system, then the kernel source has not been installed. One method to do this is via git. An alternative is to install the kernel source tree from the archive distributed on the DragonFly CD named src-sys.tar.bz2. This is especially useful when you do not have ready access to the internet. Use the Makefile in /usr to fetch the source or to unpack the archive. When installing kernel source only, use the alternate build procedure below.

The preferred way of installing the sources is:

# cd /usr
# make src-create

This will download the whole source tree via git into /usr/src. This method also allows for easy updating of the source tree by using:

# make src-update

Your Custom Config File

Next, move to the config directory and copy the GENERIC configuration file to the name you want to give your kernel. For example:

# cd /usr/src/sys/config
# cp GENERIC MYKERNEL

Traditionally, this name is in all capital letters and, if you are maintaining multiple DragonFly machines with different hardware, it is a good idea to name it after your machine's hostname. We will call it MYKERNEL for the purpose of this example.

Tip: Storing your kernel config file directly under /usr/src can be a bad idea. If you are experiencing problems it can be tempting to just delete /usr/src and start again. Five seconds after you do that you realize that you have deleted your custom kernel config file. Do not edit GENERIC directly, as it may get overwritten the next time you update your source tree, and your kernel modifications will be lost. You might want to keep your kernel config file elsewhere, and then create a symbolic link to the file in the config directory.

For example:

# cd /usr/src/sys/config
# mkdir /root/kernels
# cp GENERIC /root/kernels/MYKERNEL
# ln -s /root/kernels/MYKERNEL

Note: You must execute these and all of the following commands under the root account or you will get permission denied errors.

Now, edit MYKERNEL with your favorite text editor. If you are just starting out, the only editor available will probably be vi, which is too complex to explain here, but is covered well in many books in the bibliography. However, DragonFly does offer an easier editor called ee which, if you are a beginner, should be your editor of choice. Feel free to change the comment lines at the top to reflect your configuration or the changes you have made to differentiate it from GENERIC.

If you have built a kernel under SunOS™ or some other BSD operating system, much of this file will be very familiar to you. If you are coming from some other operating system such as DOS, on the other hand, the GENERIC configuration file might seem overwhelming to you, so follow the descriptions in the Configuration File section slowly and carefully.

Building a Kernel - Full Source Tree

Note: Be sure to always check the file /usr/src/UPDATING before you perform any update steps, in case you sync your source tree with the latest sources of the DragonFly project. In this file all important issues with updating DragonFly are described. /usr/src/UPDATING always matches your version of the DragonFly source, and is therefore more accurate for new information than the handbook.

  1. Change to the /usr/src directory.

      # cd /usr/src
    
  2. Compile the kernel.

      # make buildkernel KERNCONF=MYKERNEL
    
  3. Install the new kernel.

      # make installkernel KERNCONF=MYKERNEL
    

If you have not upgraded your source tree in any way since the last time you successfully completed a buildworld-installworld cycle (you have not run git pull), then it is safe to use the quickworld and quickkernel targets instead of buildworld and buildkernel.
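
A minimal sketch of that shorter cycle, assuming the MYKERNEL configuration used above:

# cd /usr/src
# make quickkernel KERNCONF=MYKERNEL
# make installkernel KERNCONF=MYKERNEL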

Building a Kernel - Kernel Source Only

When only the kernel source is installed, you need to change step 2, above, to this:

  # make nativekernel KERNCONF=MYKERNEL

The other steps are the same.

Running Your New Kernel

The installer copies the new kernel and modules to /boot/kernel/, the kernel being /boot/kernel/kernel and the modules being /boot/kernel/*.ko. The old kernel and modules are moved to /boot/kernel.old/. Now, shut down the system and reboot to use your new kernel. In case something goes wrong, there are some troubleshooting instructions at the end of this chapter. Be sure to read the section which explains how to recover in case your new kernel does not boot.

Note: If you have added any new devices (such as sound cards), you may have to add some device nodes to your /dev directory before you can use them. For more information, take a look at device nodes section later on in this chapter.

The Configuration File

The general format of a configuration file is quite simple. Each line contains a keyword and one or more arguments. For simplicity, most lines only contain one argument. Anything following a # is considered a comment and ignored. The following sections describe each keyword, generally in the order they are listed in GENERIC, although some related keywords have been grouped together in a single section (such as Networking) even though they are actually scattered throughout the GENERIC file. An exhaustive list of options and more detailed explanations of the device lines is present in the LINT configuration file, located in the same directory as GENERIC. If you are in doubt as to the purpose or necessity of a line, check first in LINT.

The following is an example GENERIC kernel configuration file with various additional comments where needed for clarity. This example should match your copy in /usr/src/sys/config/GENERIC fairly closely. For details of all the possible kernel options, see /usr/src/sys/config/LINT.

#

#

# GENERIC -- Generic kernel configuration file for DragonFly/i386

#

# Check the LINT configuration file in sys/config, for an

# exhaustive list of options.

#

# $DragonFly: src/sys/config/GENERIC,v 1.56 2007/12/26 14:02:36 sephe Exp $

The following are the mandatory keywords required in every kernel you build:

machine         i386

This is the machine architecture. It must be i386 at the moment. Support for amd64 will be added in the future.

cpu          I386_CPU

cpu          I486_CPU

cpu          I586_CPU

cpu          I686_CPU

The above option specifies the type of CPU you have in your system. You may have multiple instances of the CPU line (e.g., if you are not sure whether you should use I586_CPU or I686_CPU); however, for a custom kernel, it is best to specify only the CPU you have. If you are unsure of your CPU type, you can check the /var/run/dmesg.boot file to view your boot up messages.
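
For example, to see which CPU was detected at boot:

# grep -i '^cpu' /var/run/dmesg.boot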

ident          GENERIC

This is the identification of the kernel. You should change this to whatever you named your kernel, i.e. MYKERNEL if you have followed the instructions of the previous examples. The value you put in the ident string will print when you boot up the kernel, so it is useful to give the new kernel a different name if you want to keep it separate from your usual kernel (i.e. you want to build an experimental kernel).

maxusers          0

The maxusers option sets the size of a number of important system tables. This number is supposed to be roughly equal to the number of simultaneous users you expect to have on your machine.

(Recommended) The system will auto-tune this setting for you if you explicitly set it to 0. If you want to manage it yourself you will want to set maxusers to at least 4, especially if you are using the X Window System or compiling software. The reason is that the most important table set by maxusers is the maximum number of processes, which is set to 20 + 16 * maxusers, so if you set maxusers to 1, then you can only have 36 simultaneous processes, including the 18 or so that the system starts up at boot time, and the 15 or so you will probably create when you start the X Window System. Even a simple task like reading a manual page will start up nine processes to filter, decompress, and view it. Setting maxusers to 64 will allow you to have up to 1044 simultaneous processes, which should be enough for nearly all uses. If, however, you see the dreaded proc table full error when trying to start another program, or are running a server with a large number of simultaneous users, you can always increase the number and rebuild.

Note: maxusers does not limit the number of users which can log into your machine. It simply sets various table sizes to reasonable values considering the maximum number of users you will likely have on your system and how many processes each of them will be running. One keyword which does limit the number of simultaneous remote logins and X terminal windows is [kernelconfig-config.html#KERNELCONFIG-PTYS pseudo-device pty 16].

# Floating point support - do not disable.

device          npx0     at nexus? port IO_NPX irq 13

npx0 is the interface to the floating point math unit in DragonFly, which is either the hardware co-processor or the software math emulator. This is not optional.

# Pseudo devices - the number indicates how many units to allocate.

pseudo-device   loop          # Network loopback

This is the generic loopback device for TCP/IP. If you telnet or FTP to localhost (a.k.a., 127.0.0.1) it will come back at you through this device. This is mandatory.

Everything that follows is more or less optional. See the notes underneath or next to each option for more information.

#makeoptions     DEBUG=-g          #Build kernel with gdb(1) debug symbols

The normal DragonFly build process does not include debugging information when building the kernel and strips most symbols after the resulting kernel is linked, to save some space at the install location. If you are going to do tests of kernels in the DEVELOPMENT branch or develop changes of your own for the DragonFly kernel, you might want to uncomment this line. It will enable the use of the -g option, which enables debugging information when passed to gcc(1).

options          MATH_EMULATE      #Support for x87 emulation

This line allows the kernel to simulate a math co-processor if your computer does not have one (386 or 486SX). If you have a 486DX, or a 386 or 486SX (with a separate 387 or 487 chip), or higher (Pentium®, Pentium II, etc.), you can comment this line out.

Note: The normal math co-processor emulation routines that come with DragonFly are not very accurate. If you do not have a math co-processor, and you need the best accuracy, it is recommended that you change this option to GPL_MATH_EMULATE to use the GNU math support, which is not included by default for licensing reasons.

options          INET          #InterNETworking

Networking support. Leave this in, even if you do not plan to be connected to a network. Most programs require at least loopback networking (i.e., making network connections within your PC), so this is essentially mandatory.

options          INET6          #IPv6 communications protocols

This enables the IPv6 communication protocols.

options          FFS          #Berkeley Fast Filesystem

options          FFS_ROOT     #FFS usable as root device [keep this!]

This is the basic hard drive Filesystem. Leave it in if you boot from the hard disk.

options          UFS_DIRHASH  #Improve performance on big directories

This option includes functionality to speed up disk operations on large directories, at the expense of using additional memory. You would normally keep this for a large server, or interactive workstation, and remove it if you are using DragonFly on a smaller system where memory is at a premium and disk access speed is less important, such as a firewall.

options          SOFTUPDATES  #Enable FFS Soft Updates support

This option enables Soft Updates in the kernel, which helps speed up write access to the disks. Even though this functionality is provided by the kernel, it must be turned on for specific disks. Review the output from mount(8) to see whether Soft Updates is enabled on your system disks. If you do not see the soft-updates option, you will need to activate it using tunefs(8) (for existing filesystems) or newfs(8) (for new filesystems).
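For example, to enable Soft Updates on an existing filesystem you could run something like the following while the filesystem is unmounted (the device name here is only an illustration):

# tunefs -n enable /dev/ad0s1f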

options          MFS          #Memory Filesystem

options          MD_ROOT      #MD is a potential root device

This is the memory-mapped filesystem. This is basically a RAM disk for fast storage of temporary files, useful if you have a lot of swap space that you want to take advantage of. A perfect place to mount an MFS partition is on the /tmp directory, since many programs store temporary data here. To mount an MFS RAM disk on /tmp, add the following line to /etc/fstab:

/dev/ad1s2b     /tmp mfs rw 0 0

Now you simply need to either reboot, or run the command mount /tmp.

options          NFS          #Network Filesystem

options          NFS_ROOT     #NFS usable as root device, NFS required

The network Filesystem. Unless you plan to mount partitions from a UNIX® file server over TCP/IP, you can comment these out.

options          MSDOSFS      #MSDOS Filesystem

The MS-DOS® Filesystem. Unless you plan to mount a DOS formatted hard drive partition at boot time, you can safely comment this out. It will be automatically loaded the first time you mount a DOS partition, as described above. Also, the excellent mtools software (in pkgsrc®) allows you to access DOS floppies without having to mount and unmount them (and does not require MSDOSFS at all).

options          CD9660       #ISO 9660 Filesystem

options          CD9660_ROOT  #CD-ROM usable as root, CD9660 required

The ISO 9660 Filesystem for CDROMs. Comment it out if you do not have a CDROM drive or only mount data CDs occasionally (since it will be dynamically loaded the first time you mount a data CD). Audio CDs do not need this Filesystem.

options          PROCFS       #Process filesystem

The process filesystem. This is a pretend filesystem mounted on /proc which allows programs like ps(1) to give you more information on what processes are running.

options          COMPAT_43    #Compatible with BSD 4.3 [KEEP THIS!]

Compatibility with 4.3BSD. Leave this in; some programs will act strangely if you comment this out.

options          SCSI_DELAY=5000    #Delay (in ms) before probing SCSI

This causes the kernel to pause 5000 ms (five seconds) before probing each SCSI device in your system. If you only have IDE hard drives, you can ignore this; otherwise you will probably want to keep this value low to speed up booting. Of course, if you do this and DragonFly has trouble recognizing your SCSI devices, you will have to raise it back up.

options          UCONSOLE            #Allow users to grab the console

Allow users to grab the console, which is useful for X users. For example, you can create a console xterm by typing xterm -C, which will display any write(1), talk(1), and any other messages you receive, as well as any console messages sent by the kernel.

options          USERCONFIG          #boot -c editor

This option allows you to boot the configuration editor from the boot menu.

options          VISUAL_USERCONFIG   #visual boot -c editor

This option allows you to boot the visual configuration editor from the boot menu.

options          KTRACE              #ktrace(1) support

This enables kernel process tracing, which is useful in debugging.

options          SYSVSHM             #SYSV-style shared memory

This option provides for System V shared memory. The most common use of this is the XSHM extension in X, which many graphics-intensive programs will automatically take advantage of for extra speed. If you use X, you will definitely want to include this.

options          SYSVSEM             #SYSV-style semaphores

Support for System V semaphores. Less commonly used but only adds a few hundred bytes to the kernel.

options          SYSVMSG             #SYSV-style message queues

Support for System V messages. Again, only adds a few hundred bytes to the kernel.

Note: The ipcs(1) command will list any processes using each of these System V facilities.

options         P1003_1B                #Posix P1003_1B real-time extensions

options         _KPOSIX_PRIORITY_SCHEDULING

Real-time extensions added in the 1993 POSIX® standard. Certain applications in pkgsrc use these (such as StarOffice™).

options         ICMP_BANDLIM            #Rate limit bad replies

This option enables ICMP error response bandwidth limiting. You typically want this option as it will help protect the machine from denial of service packet attacks.

# To make an SMP kernel, the next two are needed

#options        SMP                     # Symmetric MultiProcessor Kernel

#options        APIC_IO                 # Symmetric (APIC) I/O

The above are both required for SMP support.

device          isa

All PCs supported by DragonFly have one of these. Do not remove, even if you have no ISA slots. If you have an IBM PS/2 (Micro Channel Architecture), DragonFly provides some limited support at this time. For more information about the MCA support, see /usr/src/sys/config/LINT.

device          eisa

Include this if you have an EISA motherboard. This enables auto-detection and configuration support for all devices on the EISA bus.

device          pci

Include this if you have a PCI motherboard. This enables auto-detection of PCI cards and gatewaying from the PCI to ISA bus.

device          agp

Include this if you have an AGP card in the system. This will enable support for AGP, and AGP GART for boards which have these features.

# Floppy drives

device          fdc0        at isa? port IO_FD1 irq 6 drq 2

device          fd0         at fdc0 drive 0

device          fd1         at fdc0 drive 1

This is the floppy drive controller. fd0 is the A: floppy drive, and fd1 is the B: drive.

device          ata

This driver supports all ATA and ATAPI devices. You only need one device ata line for the kernel to detect all PCI ATA/ATAPI devices on modern machines.

device          atadisk                 # ATA disk drives

This is needed along with device ata for ATA disk drives.

device          atapicd                 # ATAPI CDROM drives

This is needed along with device ata for ATAPI CDROM drives.

device          atapifd                 # ATAPI floppy drives

This is needed along with device ata for ATAPI floppy drives.

device          atapist                 # ATAPI tape drives

This is needed along with device ata for ATAPI tape drives.

options         ATA_STATIC_ID           #Static device numbering

This makes controller numbering static (as with the old driver); otherwise, device numbers are allocated dynamically.

# ATA and ATAPI devices

device          ata0        at isa? port IO_WD1 irq 14

device          ata1        at isa? port IO_WD2 irq 15

Use the above for older, non-PCI systems.

# SCSI Controllers

device          ahb        # EISA AHA1742 family

device          ahc        # AHA2940 and onboard AIC7xxx devices

device          amd        # AMD 53C974 (Teckram DC-390(T))

device          dpt        # DPT Smartcache - See LINT for options!

device          isp        # Qlogic family

device          ncr        # NCR/Symbios Logic

device          sym        # NCR/Symbios Logic (newer chipsets)

device          adv0       at isa?

device          adw

device          bt0        at isa?

device          aha0       at isa?

device          aic0       at isa?

SCSI controllers. Comment out any you do not have in your system. If you have an IDE only system, you can remove these altogether.

# SCSI peripherals

device          scbus      # SCSI bus (required)

device          da         # Direct Access (disks)

device          sa         # Sequential Access (tape etc)

device          cd         # CD

device          pass       # Passthrough device (direct SCSI access)

SCSI peripherals. Again, comment out any you do not have, or if you have only IDE hardware, you can remove them completely.

Note: The USB umass(4) driver (and a few other drivers) use the SCSI subsystem even though they are not real SCSI devices. Therefore make sure not to remove SCSI support, if any such drivers are included in the kernel configuration.

# RAID controllers

device          ida        # Compaq Smart RAID

device          amr        # AMI MegaRAID

device          mlx        # Mylex DAC960 family

Supported RAID controllers. If you do not have any of these, you can comment them out or remove them.

# atkbdc0 controls both the keyboard and the PS/2 mouse

device          atkbdc0    at isa? port IO_KBD

The keyboard controller (atkbdc) provides I/O services for the AT keyboard and PS/2 style pointing devices. This controller is required by the keyboard driver (atkbd) and the PS/2 pointing device driver (psm).

device          atkbd0     at atkbdc? irq 1

The atkbd driver, together with atkbdc controller, provides access to the AT 84 keyboard or the AT enhanced keyboard which is connected to the AT keyboard controller.

device          psm0       at atkbdc? irq 12

Use this device if your mouse plugs into the PS/2 mouse port.

device          vga0        at isa?

The video card driver.

# splash screen/screen saver

pseudo-device          splash

Splash screen at start up! Screen savers require this too.

# syscons is the default console driver, resembling an SCO console

device          sc0          at isa?

sc0 is the default console driver, which resembles a SCO console. Since most full-screen programs access the console through a terminal database library like termcap, it should not matter whether you use this or vt0, the VT220 compatible console driver. When you log in, set your TERM variable to scoansi if full-screen programs have trouble running under this console.

# Enable this and PCVT_FREEBSD for pcvt vt220 compatible console driver

#device          vt0     at isa?

#options         XSERVER          # support for X server on a vt console

#options         FAT_CURSOR       # start with block cursor

# If you have a ThinkPAD, uncomment this along with the rest of the PCVT lines

#options         PCVT_SCANSET=2   # IBM keyboards are non-std

This is a VT220-compatible console driver, backward compatible to VT100/102. It works well on some laptops which have hardware incompatibilities with sc0. Also set your TERM variable to vt100 or vt220 when you log in. This driver might also prove useful when connecting to a large number of different machines over the network, where termcap or terminfo entries for the sc0 device are often not available -- vt100 should be available on virtually any platform.

# Power management support (see LINT for more options)

device          apm0     at nexus? disable flags 0x20  # Advanced Power Management

Advanced Power Management support. Useful for laptops.

# PCCARD (PCMCIA) support

device          card

device          pcic0    at isa? irq 10 port 0x3e0 iomem 0xd0000

device          pcic1    at isa? irq 11 port 0x3e2 iomem 0xd4000 disable

PCMCIA support. You want this if you are using a laptop.

# Serial (COM) ports

device          sio0     at isa? port IO_COM1 flags 0x10 irq 4

device          sio1     at isa? port IO_COM2 irq 3

device          sio2     at isa? disable port IO_COM3 irq 5

device          sio3     at isa? disable port IO_COM4 irq 9

These are the four serial ports referred to as COM1 through COM4 in the MS-DOS/Windows® world.

Note: If you have an internal modem on COM4 and a serial port at COM2, you will have to change the IRQ of the modem to 2 (for obscure technical reasons, IRQ 2 = IRQ 9) in order to access it from DragonFly. If you have a multiport serial card, check the manual page for sio(4) for more information on the proper values for these lines. Some video cards (notably those based on S3 chips) use IO addresses in the form of 0x*2e8, and since many cheap serial cards do not fully decode the 16-bit IO address space, they clash with these cards making the COM4 port practically unavailable.

Each serial port is required to have a unique IRQ (unless you are using one of the multiport cards where shared interrupts are supported), so the default IRQs for COM3 and COM4 cannot be used.

# Parallel port

device          ppc0    at isa? irq 7

This is the ISA-bus parallel port interface.

device          ppbus      # Parallel port bus (required)

Provides support for the parallel port bus.

device          lpt        # Printer

Support for parallel port printers.

Note: All three of the above are required to enable parallel printer support.

device          plip       # TCP/IP over parallel

This is the driver for the parallel network interface.

device          ppi        # Parallel port interface device

The general-purpose I/O ("geek port") + IEEE1284 I/O.

#device         vpo        # Requires scbus and da

This is for an Iomega Zip drive. It requires scbus and da support. Best performance is achieved with ports in EPP 1.9 mode.

# PCI Ethernet NICs.

device          de         # DEC/Intel DC21x4x (Tulip)

device          fxp        # Intel EtherExpress PRO/100B (82557, 82558)

device          tx         # SMC 9432TX (83c170 EPIC)

device          vx         # 3Com 3c590, 3c595 (Vortex)

device          wx         # Intel Gigabit Ethernet Card (Wiseman)

Various PCI network card drivers. Comment out or remove any of these not present in your system.

# PCI Ethernet NICs that use the common MII bus controller code.

device          miibus     # MII bus support

MII bus support is required for some PCI 10/100 Ethernet NICs, namely those which use MII-compliant transceivers or implement transceiver control interfaces that operate like an MII. Adding device miibus to the kernel config pulls in support for the generic miibus API and all of the PHY drivers, including a generic one for PHYs that are not specifically handled by an individual driver.

device          dc         # DEC/Intel 21143 and various workalikes

device          rl         # RealTek 8129/8139

device          sf         # Adaptec AIC-6915 (Starfire)

device          sis        # Silicon Integrated Systems SiS 900/SiS 7016

device          ste        # Sundance ST201 (D-Link DFE-550TX)

device          tl         # Texas Instruments ThunderLAN

device          vr         # VIA Rhine, Rhine II

device          wb         # Winbond W89C840F

device          xl         # 3Com 3c90x (Boomerang, Cyclone)

Drivers that use the MII bus controller code.

# ISA Ethernet NICs.

device          ed0    at isa? port 0x280 irq 10 iomem 0xd8000

device          ex

device          ep

# WaveLAN/IEEE 802.11 wireless NICs. Note: the WaveLAN/IEEE really

# exists only as a PCMCIA device, so there is no ISA attachment needed

# and resources will always be dynamically assigned by the pccard code.

device          wi

# Aironet 4500/4800 802.11 wireless NICs. Note: the declaration below will

# work for PCMCIA and PCI cards, as well as ISA cards set to ISA PnP

# mode (the factory default). If you set the switches on your ISA

# card for a manually chosen I/O address and IRQ, you must specify

# those parameters here.

device          an

# The probe order of these is presently determined by i386/isa/isa_compat.c.

device          ie0    at isa? port 0x300 irq 10 iomem 0xd0000

device          fe0    at isa? port 0x300

device          le0    at isa? port 0x300 irq 5 iomem 0xd0000

device          lnc0   at isa? port 0x280 irq 10 drq 0

device          cs0    at isa? port 0x300

device          sn0    at isa? port 0x300 irq 10

# requires PCCARD (PCMCIA) support to be activated

#device         xe0    at isa?

ISA Ethernet drivers. See /usr/src/sys/config/LINT for which cards are supported by which driver.

pseudo-device   ether         # Ethernet support

ether is only needed if you have an Ethernet card. It includes generic Ethernet protocol code.

pseudo-device   sl      1     # Kernel SLIP

sl is for SLIP support. This has been almost entirely supplanted by PPP, which is easier to set up, better suited for modem-to-modem connection, and more powerful. The number after sl specifies how many simultaneous SLIP sessions to support.

pseudo-device   ppp     1     # Kernel PPP

This is for kernel PPP support for dial-up connections. There is also a version of PPP implemented as a userland application that uses tun and offers more flexibility and features such as demand dialing. The number after ppp specifies how many simultaneous PPP connections to support.

device   tun           # Packet tunnel.

This is used by the userland PPP software. A number after tun specifies the number of simultaneous PPP sessions to support. See the PPP section of this book for more information.

pseudo-device   pty           # Pseudo-ttys (telnet etc)

This is a pseudo-terminal or simulated login port. It is used by incoming telnet and rlogin sessions, xterm, and some other applications such as Emacs. The number after pty indicates the number of ptys to create. If you need more than the default of 16 simultaneous xterm windows and/or remote logins, be sure to increase this number accordingly, up to a maximum of 256.

pseudo-device   md            # Memory disks

Memory disk pseudo-devices.

pseudo-device   gif     # IPv6 and IPv4 tunneling

This implements IPv6 over IPv4 tunneling, IPv4 over IPv6 tunneling, IPv4 over IPv4 tunneling, and IPv6 over IPv6 tunneling.

pseudo-device   faith   # IPv6-to-IPv4 relaying (translation)

This pseudo-device captures packets that are sent to it and diverts them to the IPv4/IPv6 translation daemon.

# The `bpf' device enables the Berkeley Packet Filter.

# Be aware of the administrative consequences of enabling this!

pseudo-device   bpf           # Berkeley packet filter

This is the Berkeley Packet Filter. This pseudo-device allows network interfaces to be placed in promiscuous mode, capturing every packet on a broadcast network (e.g., an Ethernet). These packets can be captured to disk and/or examined with the tcpdump(1) program.

Note: The bpf(4) device is also used by dhclient(8) to obtain the IP address of the default router (gateway) and so on. If you use DHCP, leave this uncommented.

# USB support

#device         uhci          # UHCI PCI->USB interface

#device         ohci          # OHCI PCI->USB interface

#device         usb           # USB Bus (required)

#device         ugen          # Generic

#device         uhid          # Human Interface Devices

#device         ukbd          # Keyboard

#device         ulpt          # Printer

#device         umass         # Disks/Mass storage - Requires scbus and da

#device         ums           # Mouse

# USB Ethernet, requires mii

#device         aue           # ADMtek USB ethernet

#device         cue           # CATC USB ethernet

#device         kue           # Kawasaki LSI USB ethernet

Support for various USB devices.

For more information and additional devices supported by DragonFly, see /usr/src/sys/i386/conf/LINT.

Notes

(1) The auto-tuning algorithm sets maxusers equal to the amount of memory in the system, in megabytes, with a minimum of 32 and a maximum of 384.

Device Nodes

Almost every device in the kernel has a corresponding node entry in the /dev directory. These nodes look like regular files, but are actually special entries into the kernel which programs use to access the device.

These nodes are created automatically once devfs is mounted; for the root /dev this happens during boot, just after the root mount.

If Something Goes Wrong

Note: If you are having trouble building a kernel, make sure to keep a GENERIC kernel, or some other kernel that is known to work, on hand under a different name so that it will not be erased by the next build. You cannot rely on kernel.old because when installing a new kernel, kernel.old is overwritten with the last installed kernel, which may be non-functional. Also, as soon as possible, move the working kernel to the proper kernel location or commands such as ps(1) will not work properly. The proper command to unlock the kernel file that make installs (in order to move another kernel back permanently) is:

 % chflags noschg /boot/kernel

If you find you cannot do this, you are probably running at a securelevel(8) greater than zero. Edit kern_securelevel in /etc/rc.conf and set it to -1, then reboot. You can change it back to its previous setting when you are happy with your new kernel.
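For example, the relevant line in /etc/rc.conf would read:

kern_securelevel="-1"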

And, if you want to lock your new kernel into place, or any file for that matter, so that it cannot be moved or tampered with:

% chflags schg /boot/kernel

There are several categories of trouble that can occur when building a custom kernel.

Security

*Much of this chapter has been taken from the security(7) manual page by Matthew Dillon.*

Synopsis

This chapter will provide a basic introduction to system security concepts, some general good rules of thumb, and some advanced topics under DragonFly. A lot of the topics covered here can be applied to system and Internet security in general as well. The Internet is no longer a friendly place in which everyone wants to be your kind neighbor. Securing your system is imperative to protect your data, intellectual property, time, and much more from the hands of hackers and the like.

DragonFly provides an array of utilities and mechanisms to ensure the integrity and security of your system and network.

After reading this chapter, you will know:

Before reading this chapter, you should:


Introduction

Security is a function that begins and ends with the system administrator. While all BSD UNIX® multi-user systems have some inherent security, the job of building and maintaining additional security mechanisms to keep those users honest is probably one of the single largest undertakings of the sysadmin. Machines are only as secure as you make them, and security concerns are ever competing with the human necessity for convenience. UNIX systems, in general, are capable of running a huge number of simultaneous processes and many of these processes operate as servers -- meaning that external entities can connect and talk to them. As yesterday's mini-computers and mainframes become today's desktops, and as computers become networked and internetworked, security becomes an even bigger issue.

Security is best implemented through a layered onion approach. In a nutshell, what you want to do is to create as many layers of security as are convenient and then carefully monitor the system for intrusions. You do not want to overbuild your security or you will interfere with the detection side, and detection is one of the single most important aspects of any security mechanism. For example, it makes little sense to set the schg flags (see chflags(1)) on every system binary because while this may temporarily protect the binaries, it prevents an attacker who has broken in from making an easily detectable change, with the result that your security mechanisms may not detect the attacker at all.

System security also pertains to dealing with various forms of attack, including attacks that attempt to crash, or otherwise make a system unusable, but do not attempt to compromise the root account (break root). Security concerns can be split up into several categories:

  1. Denial of service attacks.

  2. User account compromises.

  3. Root compromise through accessible servers.

  4. Root compromise via user accounts.

  5. Backdoor creation.

A denial of service attack is an action that deprives the machine of needed resources. Typically, DoS attacks are brute-force mechanisms that attempt to crash or otherwise make a machine unusable by overwhelming its servers or network stack. Some DoS attacks try to take advantage of bugs in the networking stack to crash a machine with a single packet. The latter can only be fixed by applying a bug fix to the kernel. Attacks on servers can often be fixed by properly specifying options to limit the load the servers incur on the system under adverse conditions. Brute-force network attacks are harder to deal with. A spoofed-packet attack, for example, is nearly impossible to stop, short of cutting your system off from the Internet. It may not be able to take your machine down, but it can saturate your Internet connection.

A user account compromise is even more common than a DoS attack. Many sysadmins still run standard telnetd, rlogind, rshd, and ftpd servers on their machines. These servers, by default, do not operate over encrypted connections. The result is that if you have any moderate-sized user base, one or more of your users logging into your system from a remote location (which is the most common and convenient way to login to a system) will have his or her password sniffed. The attentive system admin will analyze his remote access logs looking for suspicious source addresses even for successful logins.

One must always assume that once an attacker has access to a user account, the attacker can break root. However, the reality is that in a well secured and maintained system, access to a user account does not necessarily give the attacker access to root. The distinction is important because without access to root the attacker cannot generally hide his tracks and may, at best, be able to do nothing more than mess with the user's files, or crash the machine. User account compromises are very common because users tend not to take the precautions that sysadmins take.

System administrators must keep in mind that there are potentially many ways to break root on a machine. The attacker may know the root password, the attacker may find a bug in a root-run server and be able to break root over a network connection to that server, or the attacker may know of a bug in a suid-root program that allows the attacker to break root once he has broken into a user's account. If an attacker has found a way to break root on a machine, the attacker may not have a need to install a backdoor. Many of the root holes found and closed to date involve a considerable amount of work by the attacker to cleanup after himself, so most attackers install backdoors. A backdoor provides the attacker with a way to easily regain root access to the system, but it also gives the smart system administrator a convenient way to detect the intrusion. Making it impossible for an attacker to install a backdoor may actually be detrimental to your security, because it will not close off the hole the attacker found to break in the first place.

Security remedies should always be implemented with a multi-layered onion peel approach and can be categorized as follows:

  1. Securing root and staff accounts.

  2. Securing root -- root-run servers and suid/sgid binaries.

  3. Securing user accounts.

  4. Securing the password file.

  5. Securing the kernel core, raw devices, and filesystems.

  6. Quick detection of inappropriate changes made to the system.

  7. Paranoia.

The next section of this chapter will cover the above bullet items in greater depth.


Securing DragonFly

Command vs. Protocol: Throughout this document, we will use bold text to refer to a command or application. This is used for instances such as ssh, since it is a protocol as well as a command.

The sections that follow will cover the methods of securing your DragonFly system that were mentioned in the last section of this chapter.

Securing the root Account and Staff Accounts

First off, do not bother securing staff accounts if you have not secured the root account. Most systems have a password assigned to the root account. The first thing you do is assume that the password is always compromised. This does not mean that you should remove the password. The password is almost always necessary for console access to the machine. What it does mean is that you should not make it possible to use the password outside of the console or possibly even with the su(1) command. For example, make sure that your pty's are specified as being insecure in the /etc/ttys file so that direct root logins via telnet or rlogin are disallowed. If using other login services such as sshd, make sure that direct root logins are disabled there as well. You can do this by editing your /etc/ssh/sshd_config file, and making sure that PermitRootLogin is set to NO. Consider every access method -- services such as FTP often fall through the cracks. Direct root logins should only be allowed via the system console.
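For example, the relevant line in /etc/ssh/sshd_config should read:

PermitRootLogin no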

Of course, as a sysadmin you have to be able to get to root, so we open up a few holes. But we make sure these holes require additional password verification to operate. One way to make root accessible is to add appropriate staff accounts to the wheel group (in /etc/group). The staff members placed in the wheel group are allowed to su to root. You should never give staff members native wheel access by putting them in the wheel group in their password entry. Staff accounts should be placed in a staff group, and then added to the wheel group via the /etc/group file. Only those staff members who actually need to have root access should be placed in the wheel group. While having the wheel mechanism is better than having nothing at all, it is not necessarily the safest option.
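For example, an /etc/group entry that allows two hypothetical staff accounts, alice and bob, to su to root might look like this:

wheel:*:0:root,alice,bob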

An indirect way to secure staff accounts, and ultimately root access, is to use an alternative login access method and do what is known as starring out the encrypted password for the staff accounts. Using the vipw(8) command, one can replace each instance of an encrypted password with a single * character. This command will update the /etc/master.passwd file and user/password database to disable password-authenticated logins.

A staff account entry such as:

foobar:R9DT/Fa1/LV9U:1000:1000::0:0:Foo Bar:/home/foobar:/usr/local/bin/tcsh

Should be changed to this:

foobar:*:1000:1000::0:0:Foo Bar:/home/foobar:/usr/local/bin/tcsh

This change will prevent normal logins from occurring, since the encrypted password will never match *. With this done, staff members must use another mechanism to authenticate themselves such as ssh(1) using a public/private key pair. When using a public/private key pair with ssh, one must generally secure the machine used to login from (typically one's workstation). An additional layer of protection can be added to the key pair by password protecting the key pair when creating it with ssh-keygen(1). Being able to star out the passwords for staff accounts also guarantees that staff members can only login through secure access methods that you have set up. This forces all staff members to use secure, encrypted connections for all of their sessions, which closes an important hole used by many intruders: sniffing the network from an unrelated, less secure machine.
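For example, to generate such a key pair on your workstation (the key type shown is just one possibility), run:

% ssh-keygen -t rsa

and enter a passphrase when prompted, rather than leaving it empty.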

The more indirect security mechanisms also assume that you are logging in from a more restrictive server to a less restrictive server. For example, if your main box is running all sorts of servers, your workstation should not be running any. In order for your workstation to be reasonably secure you should run as few servers as possible, up to and including no servers at all, and you should run a password-protected screen blanker. Of course, given physical access to a workstation an attacker can break any sort of security you put on it. This is definitely a problem that you should consider, but you should also consider the fact that the vast majority of break-ins occur remotely, over a network, from people who do not have physical access to your workstation or servers.

Securing Root-run Servers and SUID/SGID Binaries

The prudent sysadmin only runs the servers he needs to, no more, no less. Be aware that third party servers are often the most bug-prone. For example, running an old version of imapd or popper is like giving a universal root ticket out to the entire world. Never run a server that you have not checked out carefully. Many servers do not need to be run as root. For example, the ntalk, comsat, and finger daemons can be run in special user sandboxes. A sandbox is not perfect, unless you go through a large amount of trouble, but the onion approach to security still stands: If someone is able to break in through a server running in a sandbox, they still have to break out of the sandbox. The more layers the attacker must break through, the lower the likelihood of his success. Root holes have historically been found in virtually every server ever run as root, including basic system servers. If you are running a machine through which people only login via sshd and never login via telnetd or rshd or rlogind, then turn off those services!

DragonFly now defaults to running ntalkd, comsat, and finger in a sandbox. Another program which may be a candidate for running in a sandbox is named(8). /etc/defaults/rc.conf includes the arguments necessary to run named in a sandbox in a commented-out form. Depending on whether you are installing a new system or upgrading an existing system, the special user accounts used by these sandboxes may not be installed. The prudent sysadmin would research and implement sandboxes for servers whenever possible.

There are a number of other servers that typically do not run in sandboxes: sendmail, popper, imapd, ftpd, and others. There are alternatives to some of these, but installing them may require more work than you are willing to perform (the convenience factor strikes again). You may have to run these servers as root and rely on other mechanisms to detect break-ins that might occur through them.

The other big potential root holes in a system are the suid-root and sgid binaries installed on the system. Most of these binaries, such as rlogin, reside in /bin, /sbin, /usr/bin, or /usr/sbin. While nothing is 100% safe, the system-default suid and sgid binaries can be considered reasonably safe. Still, root holes are occasionally found in these binaries. A root hole was found in Xlib in 1998 that made xterm (which is typically suid) vulnerable. It is better to be safe than sorry and the prudent sysadmin will restrict suid binaries, that only staff should run, to a special group that only staff can access, and get rid of (chmod 000) any suid binaries that nobody uses. A server with no display generally does not need an xterm binary. Sgid binaries can be almost as dangerous. If an intruder can break an sgid-kmem binary, the intruder might be able to read /dev/kmem and thus read the encrypted password file, potentially compromising any passworded account. Alternatively an intruder who breaks group kmem can monitor keystrokes sent through pty's, including pty's used by users who login through secure methods. An intruder that breaks the tty group can write to almost any user's tty. If a user is running a terminal program or emulator with a keyboard-simulation feature, the intruder can potentially generate a data stream that causes the user's terminal to echo a command, which is then run as that user.

Securing User Accounts

User accounts are usually the most difficult to secure. While you can impose Draconian access restrictions on your staff and star out their passwords, you may not be able to do so with any general user accounts you might have. If you do have sufficient control, then you may win out and be able to secure the user accounts properly. If not, you simply have to be more vigilant in your monitoring of those accounts. Use of ssh for user accounts is more problematic, due to the extra administration and technical support required, but still a very good solution compared to a crypted password file.

Securing the Password File

The only sure fire way is to * out as many passwords as you can and use ssh for access to those accounts. Even though the encrypted password file (/etc/spwd.db) can only be read by root, it may be possible for an intruder to obtain read access to that file even if the attacker cannot obtain root-write access.

Your security scripts should always check for and report changes to the password file (see the Checking file integrity section below).

Securing the Kernel Core, Raw Devices, and Filesystems

If an attacker breaks root he can do just about anything, but there are certain conveniences. For example, most modern kernels have a packet sniffing device driver built in. Under DragonFly it is called the bpf device. An intruder will commonly attempt to run a packet sniffer on a compromised machine. You do not need to give the intruder the capability and most systems do not have the need for the bpf device compiled in.

But even if you turn off the bpf device, you still have /dev/mem and /dev/kmem to worry about. For that matter, the intruder can still write to raw disk devices. Also, there is another kernel feature called the module loader, kldload(8). An enterprising intruder can use a KLD module to install his own bpf device, or other sniffing device, on a running kernel. To avoid these problems you have to run the kernel at a higher secure level, at least securelevel 1. The securelevel can be set with a sysctl on the kern.securelevel variable. Once you have set the securelevel to 1, write access to raw devices will be denied and special chflags flags, such as schg, will be enforced. You must also ensure that the schg flag is set on critical startup binaries, directories, and script files -- everything that gets run up to the point where the securelevel is set. This might be overdoing it, and upgrading the system is much more difficult when you operate at a higher secure level. You may compromise and run the system at a higher secure level but not set the schg flag for every system file and directory under the sun. Another possibility is to simply mount / and /usr read-only. It should be noted that being too Draconian in what you attempt to protect may prevent the all-important detection of an intrusion.
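For example, to raise the securelevel on a running system:

# sysctl kern.securelevel=1

or, to have it raised automatically at boot, set the following in /etc/rc.conf (these knobs are assumed to be present as in the stock rc.conf):

kern_securelevel_enable="YES"
kern_securelevel="1"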

Checking File Integrity: Binaries, Configuration Files, Etc.

When it comes right down to it, you can only protect your core system configuration and control files so much before the convenience factor rears its ugly head. For example, using chflags to set the schg bit on most of the files in / and /usr is probably counterproductive, because while it may protect the files, it also closes a detection window. The last layer of your security onion is perhaps the most important -- detection. The rest of your security is pretty much useless (or, worse, presents you with a false sense of safety) if you cannot detect potential incursions. Half the job of the onion is to slow down the attacker, rather than stop him, in order to give the detection side of the equation a chance to catch him in the act.

The best way to detect an incursion is to look for modified, missing, or unexpected files. The best way to look for modified files is from another (often centralized) limited-access system. Writing your security scripts on the extra-secure limited-access system makes them mostly invisible to potential attackers, and this is important. In order to take maximum advantage you generally have to give the limited-access box significant access to the other machines in the business, usually either by doing a read-only NFS export of the other machines to the limited-access box, or by setting up ssh key-pairs to allow the limited-access box to ssh to the other machines. Except for its network traffic, NFS is the least visible method -- allowing you to monitor the filesystems on each client box virtually undetected. If your limited-access server is connected to the client boxes through a switch, the NFS method is often the better choice. If your limited-access server is connected to the client boxes through a hub, or through several layers of routing, the NFS method may be too insecure (network-wise) and using ssh may be the better choice even with the audit-trail tracks that ssh lays.

Once you give a limited-access box, at least read access to the client systems it is supposed to monitor, you must write scripts to do the actual monitoring. Given an NFS mount, you can write scripts out of simple system utilities such as find(1) and md5(1). It is best to physically md5 the client-box files at least once a day, and to test control files such as those found in /etc and /usr/local/etc even more often. When mismatches are found, relative to the base md5 information the limited-access machine knows is valid, it should scream at a sysadmin to go check it out. A good security script will also check for inappropriate suid binaries and for new or deleted files on system partitions such as / and /usr.
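A minimal sketch of such a script, assuming the client's filesystems are NFS-mounted read-only under /clients/boxname (a hypothetical path) and that a baseline checksum file was previously generated with the same find and md5 run:

#!/bin/sh
# Compare today's MD5 checksums of critical client files against a known-good baseline.
BASE=/var/db/secure/boxname.md5
NEW=/tmp/boxname.md5
find /clients/boxname/etc /clients/boxname/usr/local/etc -type f -exec md5 {} + | sort > $NEW
diff -u $BASE $NEW | mail -s "boxname: md5 mismatches" sysadmin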

When using ssh rather than NFS, writing the security script is much more difficult. You essentially have to scp the scripts to the client box in order to run them, making them visible, and for safety you also need to scp the binaries (such as find) that those scripts use. The ssh client on the client box may already be compromised. All in all, using ssh may be necessary when running over insecure links, but it is also a lot harder to deal with.

A good security script will also check for changes to users' and staff members' access configuration files: .rhosts, .shosts, .ssh/authorized_keys and so forth -- files that might fall outside the purview of the MD5 check.

If you have a huge amount of user disk space, it may take too long to run through every file on those partitions. In this case, setting mount flags to disallow suid binaries and devices on those partitions is a good idea. The nodev and nosuid options (see mount(8)) are what you want to look into. You should probably scan them anyway, at least once a week, since the object of this layer is to detect a break-in whether or not the break-in is effective.
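For example, an /etc/fstab entry for a user partition (device and mount point are hypothetical) might look like:

/dev/ad0s1g   /home   ufs   rw,nodev,nosuid   2   2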

Process accounting (see accton(8)) is a relatively low-overhead feature of the operating system which might help as a post-break-in evaluation mechanism. It is especially useful in tracking down how an intruder has actually broken into a system, assuming the file is still intact after the break-in occurs.

Finally, security scripts should process the log files, and the logs themselves should be generated in as secure a manner as possible -- remote syslog can be very useful. An intruder tries to cover his tracks, and log files are critical to the sysadmin trying to track down the time and method of the initial break-in. One way to keep a permanent record of the log files is to run the system console to a serial port and collect the information on a continuing basis through a secure machine monitoring the consoles.

Paranoia

A little paranoia never hurts. As a rule, a sysadmin can add any number of security features, as long as they do not affect convenience, and can add security features that do affect convenience with some added thought. Even more importantly, a security administrator should mix it up a bit -- if you use recommendations such as those given by this document verbatim, you give away your methodologies to the prospective attacker who also has access to this document.

Denial of Service Attacks

This section covers Denial of Service attacks. A DoS attack is typically a packet attack. While there is not much you can do about modern spoofed packet attacks that saturate your network, you can generally limit the damage by ensuring that the attacks cannot take down your servers.

  1. Limiting server forks.

  2. Limiting springboard attacks (ICMP response attacks, ping broadcast, etc.).

  3. Kernel Route Cache.

A common DoS attack against a forking server attempts to cause the server to eat processes, file descriptors, and memory until the machine dies. inetd (see inetd(8)) has several options to limit this sort of attack. It should be noted that while it is possible to prevent a machine from going down, it is not generally possible to prevent a service from being disrupted by the attack. Read the inetd manual page carefully and pay specific attention to the -c, -C, and -R options. Note that spoofed-IP attacks will circumvent the -C option to inetd, so typically a combination of options must be used. Some standalone servers have self-fork-limitation parameters.
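For example, inetd can be limited from /etc/rc.conf along these lines (the numbers are only illustrative; see inetd(8) for the meaning of each flag):

inetd_enable="YES"
inetd_flags="-wW -C 60 -R 1024"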

Sendmail has its -OMaxDaemonChildren option, which tends to work much better than trying to use sendmail's load limiting options due to the load lag. You should specify a MaxDaemonChildren parameter, when you start sendmail, high enough to handle your expected load, but not so high that the computer cannot handle that number of sendmails without falling on its face. It is also prudent to run sendmail in queued mode (-ODeliveryMode=queued) and to run the daemon (sendmail -bd) separate from the queue-runs (sendmail -q15m). If you still want real-time delivery you can run the queue at a much lower interval, such as -q1m, but be sure to specify a reasonable MaxDaemonChildren option for that sendmail to prevent cascade failures.
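For example, one way to start sendmail along these lines (the values are only illustrative) is:

# /usr/sbin/sendmail -bd -ODeliveryMode=queued -OMaxDaemonChildren=32
# /usr/sbin/sendmail -q15m -OMaxDaemonChildren=16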

Syslogd can be attacked directly and it is strongly recommended that you use the -s option whenever possible, and the -a option otherwise.
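For example, to pass the -s flag at boot, set the following in /etc/rc.conf:

syslogd_flags="-s"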

You should also be fairly careful with connect-back services such as the tcpwrappers reverse-identd, which can be attacked directly. You generally do not want to use the reverse-ident feature of tcpwrappers for this reason.

It is a very good idea to protect internal services from external access by firewalling them off at your border routers. The idea here is to prevent saturation attacks from outside your LAN, not so much to protect internal services from network-based root compromise. Always configure an exclusive firewall, i.e., firewall everything except ports A, B, C, D, and M-Z. This way you can firewall off all of your low ports except for certain specific services such as named (if you are primary for a zone), ntalkd , sendmail , and other Internet-accessible services. If you try to configure the firewall the other way -- as an inclusive or permissive firewall, there is a good chance that you will forget to close a couple of services, or that you will add a new internal service and forget to update the firewall. You can still open up the high-numbered port range on the firewall, to allow permissive-like operation, without compromising your low ports. Also take note that DragonFly allows you to control the range of port numbers used for dynamic binding, via the various net.inet.ip.portrange sysctl's (sysctl -a | fgrep portrange), which can also ease the complexity of your firewall's configuration. For example, you might use a normal first/last range of 4000 to 5000, and a hiport range of 49152 to 65535, then block off everything under 4000 in your firewall (except for certain specific Internet-accessible ports, of course).
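For example, the dynamic port ranges suggested above could be set like this (the exact sysctl names can be confirmed with sysctl -a | fgrep portrange):

# sysctl net.inet.ip.portrange.first=4000
# sysctl net.inet.ip.portrange.last=5000
# sysctl net.inet.ip.portrange.hifirst=49152
# sysctl net.inet.ip.portrange.hilast=65535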

Another common DoS attack is called a springboard attack -- to attack a server in a manner that causes the server to generate responses which overloads the server, the local network, or some other machine. The most common attack of this nature is the ICMP ping broadcast attack. The attacker spoofs ping packets sent to your LAN's broadcast address with the source IP address set to the actual machine they wish to attack. If your border routers are not configured to stomp on ping's to broadcast addresses, your LAN winds up generating sufficient responses to the spoofed source address to saturate the victim, especially when the attacker uses the same trick on several dozen broadcast addresses over several dozen different networks at once. Broadcast attacks of over a hundred and twenty megabits have been measured. A second common springboard attack is against the ICMP error reporting system. By constructing packets that generate ICMP error responses, an attacker can saturate a server's incoming network and cause the server to saturate its outgoing network with ICMP responses. This type of attack can also crash the server by running it out of mbuf's, especially if the server cannot drain the ICMP responses it generates fast enough. The DragonFly kernel has a new kernel compile option called ICMP_BANDLIM which limits the effectiveness of these sorts of attacks. The last major class of springboard attacks is related to certain internal inetd services such as the udp echo service. An attacker simply spoofs a UDP packet with the source address being server A's echo port, and the destination address being server B's echo port, where server A and B are both on your LAN. The two servers then bounce this one packet back and forth between each other. The attacker can overload both servers and their LANs simply by injecting a few packets in this manner. Similar problems exist with the internal chargen port. A competent sysadmin will turn off all of these inetd-internal test services.

Spoofed packet attacks may also be used to overload the kernel route cache. Refer to the net.inet.ip.rtexpire, rtminexpire, and rtmaxcache sysctl parameters. A spoofed packet attack that uses a random source IP will cause the kernel to generate a temporary cached route in the route table, viewable with netstat -rna | fgrep W3. These routes typically time out after 1600 seconds or so. If the kernel detects that the cached route table has gotten too big it will dynamically reduce the rtexpire but will never decrease it to less than rtminexpire. There are two problems:

  1. The kernel does not react quickly enough when a lightly loaded server is suddenly attacked.

  2. The rtminexpire is not low enough for the kernel to survive a sustained attack.

If your servers are connected to the Internet via a T3 or better, it may be prudent to manually override both rtexpire and rtminexpire via sysctl(8). Never set either parameter to zero (unless you want to crash the machine). Setting both parameters to two seconds should be sufficient to protect the route table from attack.
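For example:

# sysctl net.inet.ip.rtexpire=2
# sysctl net.inet.ip.rtminexpire=2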

DES, MD5, and Crypt

*Parts rewritten and updated by Bill Swingle.*

Every user on a UNIX® system has a password associated with their account. It seems obvious that these passwords need to be known only to the user and the actual operating system. In order to keep these passwords secret, they are encrypted with what is known as a one-way hash, that is, they can easily be encrypted but not decrypted. In other words, what we told you a moment ago was obvious is not even true: the operating system itself does not really know the password. It only knows the encrypted form of the password. The only way to get the plain-text password is by a brute force search of the space of possible passwords.

Unfortunately the only secure way to encrypt passwords when UNIX came into being was based on DES, the Data Encryption Standard. This was not such a problem for users resident in the US, but since the source code for DES could not be exported outside the US, DragonFly had to find a way to both comply with US law and retain compatibility with all the other UNIX variants that still used DES.

The solution was to divide up the encryption libraries so that US users could install the DES libraries and use DES but international users still had an encryption method that could be exported abroad. This is how DragonFly came to use MD5 as its default encryption method. MD5 is believed to be more secure than DES, so installing DES is offered primarily for compatibility reasons.

Recognizing Your Crypt Mechanism

libcrypt.a provides a configurable password authentication hash library. Currently the library supports DES, MD5, Blowfish, SHA256, and SHA512 hash functions. By default DragonFly uses SHA256 to encrypt passwords.

It is pretty easy to identify which encryption method DragonFly is set up to use. Examining the encrypted passwords in the /etc/master.passwd file is one way. Passwords encrypted with the MD5 hash are longer than those encrypted with the DES hash and also begin with the characters $1$. Passwords starting with $2a$ are encrypted with the Blowfish hash function. DES password strings do not have any particular identifying characteristics, but they are shorter than MD5 passwords, and are coded in a 64-character alphabet which does not include the $ character, so a relatively short string which does not begin with a dollar sign is very likely a DES password.

The password format used for new passwords is controlled by the passwd_format login capability in /etc/login.conf, which takes values such as des, md5, blf, sha256, or sha512. See the login.conf(5) manual page for more information about login capabilities.
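For example, to have new passwords hashed with Blowfish, the default class in /etc/login.conf would carry a line like the following (after editing, rebuild the login capability database with cap_mkdb /etc/login.conf):

        :passwd_format=blf:\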

One-time Passwords

S/Key is a one-time password scheme based on a one-way hash function. DragonFly uses the MD4 hash for compatibility but other systems have used MD5 and DES-MAC. S/Key is part of the DragonFly base system, and is also used on a growing number of other operating systems. S/Key is a registered trademark of Bell Communications Research, Inc.

There are three different sorts of passwords which we will discuss below. The first is your usual UNIX® style password; we will call this a UNIX password. The second sort is the one-time password which is generated by the S/Key key program or the OPIE opiekey(1) program and accepted by the keyinit or opiepasswd(1) programs and the login prompt; we will call this a one-time password. The final sort of password is the secret password which you give to the key/opiekey programs (and sometimes the keyinit/opiepasswd programs) which it uses to generate one-time passwords; we will call it a secret password or just unqualified password.

The secret password does not have anything to do with your UNIX password; they can be the same but this is not recommended. S/Key and OPIE secret passwords are not limited to eight characters like old UNIX passwords; they can be as long as you like. Passphrases of six or seven words are fairly common. For the most part, the S/Key or OPIE system operates completely independently of the UNIX password system.

Besides the password, there are two other pieces of data that are important to S/Key and OPIE. One is what is known as the seed or key, consisting of two letters and five digits. The other is what is called the iteration count, a number between 1 and 100. S/Key creates the one-time password by concatenating the seed and the secret password, then applying the MD4/MD5 hash as many times as specified by the iteration count and turning the result into six short English words. These six English words are your one-time password. The authentication system (primarily PAM) keeps track of the last one-time password used, and the user is authenticated if the hash of the user-provided password is equal to the previous password. Because a one-way hash is used it is impossible to generate future one-time passwords if a successfully used password is captured; the iteration count is decremented after each successful login to keep the user and the login program in sync. When the iteration count gets down to 1, S/Key and OPIE must be reinitialized.

There are three programs involved in each system which we will discuss below. The key and opiekey programs accept an iteration count, a seed, and a secret password, and generate a one-time password or a consecutive list of one-time passwords. The keyinit and opiepasswd programs are used to initialize S/Key and OPIE respectively, and to change passwords, iteration counts, or seeds; they take either a secret passphrase, or an iteration count, seed, and one-time password. The keyinfo and opieinfo programs examine the relevant credentials files (/etc/skeykeys or /etc/opiekeys) and print out the invoking user's current iteration count and seed.

There are four different sorts of operations we will cover. The first is using keyinit or opiepasswd over a secure connection to set up one-time-passwords for the first time, or to change your password or seed. The second operation is using keyinit or opiepasswd over an insecure connection, in conjunction with key or opiekey over a secure connection, to do the same. The third is using key/opiekey to log in over an insecure connection. The fourth is using key or opiekey to generate a number of keys which can be written down or printed out to carry with you when going to some location without secure connections to anywhere.

Secure Connection Initialization

To initialize S/Key for the first time, change your password, or change your seed while logged in over a secure connection (e.g., on the console of a machine or via ssh ), use the keyinit command without any parameters while logged in as yourself:

% keyinit

Adding unfurl:

Reminder - Only use this method if you are directly connected.

If you are using telnet or rlogin exit with no password and use keyinit -s.

Enter secret password:

Again secret password:



ID unfurl s/key is 99 to17757

DEFY CLUB PRO NASH LACE SOFT

For OPIE, opiepasswd is used instead:

% opiepasswd -c

[grimreaper] ~ $ opiepasswd -f -c

Adding unfurl:

Only use this method from the console; NEVER from remote. If you are using

telnet, xterm, or a dial-in, type ^C now or exit with no password.

Then run opiepasswd without the -c parameter.

Using MD5 to compute responses.

Enter new secret pass phrase:

Again new secret pass phrase:

ID unfurl OTP key is 499 to4268

MOS MALL GOAT ARM AVID COED

At the Enter new secret pass phrase: or Enter secret password: prompts, you should enter a password or phrase. Remember, this is not the password that you will use to login with, this is used to generate your one-time login keys. The ID line gives the parameters of your particular instance: your login name, the iteration count, and seed. When logging in the system will remember these parameters and present them back to you so you do not have to remember them. The last line gives the particular one-time password which corresponds to those parameters and your secret password; if you were to re-login immediately, this one-time password is the one you would use.

Insecure Connection Initialization

To initialize or change your secret password over an insecure connection, you will need to already have a secure connection to some place where you can run key or opiekey; this might be in the form of a desk accessory on a Macintosh®, or a shell prompt on a machine you trust. You will also need to make up an iteration count (100 is probably a good value), and you may make up your own seed or use a randomly-generated one. Over on the insecure connection (to the machine you are initializing), use the keyinit -s command:

% keyinit -s

Updating unfurl:

Old key: to17758

Reminder you need the 6 English words from the key command.

Enter sequence count from 1 to 9999: 100

Enter new key [default to17759]:

s/key 100 to17759

s/key access password:

s/key access password:CURE MIKE BANE HIM RACY GORE

For OPIE, you need to use opiepasswd:

% opiepasswd



Updating unfurl:

You need the response from an OTP generator.

Old secret pass phrase:

        otp-md5 498 to4268 ext

        Response: GAME GAG WELT OUT DOWN CHAT

New secret pass phrase:

        otp-md5 499 to4269

        Response: LINE PAP MILK NELL BUOY TROY



ID unfurl OTP key is 499 to4269

LINE PAP MILK NELL BUOY TROY

To accept the default seed (which the keyinit program confusingly calls a key), press Return . Then before entering an access password, move over to your secure connection or S/Key desk accessory, and give it the same parameters:

% key 100 to17759

Reminder - Do not use this program while logged in via telnet or rlogin.

Enter secret password: <secret password>

CURE MIKE BANE HIM RACY GORE

Or for OPIE:

% opiekey 498 to4268

Using the MD5 algorithm to compute response.

Reminder: Don't use opiekey from telnet or dial-in sessions.

Enter secret pass phrase:

GAME GAG WELT OUT DOWN CHAT

Now switch back over to the insecure connection, and copy the one-time password generated over to the relevant program.

Generating a Single One-time Password

Once you have initialized S/Key, when you login you will be presented with a prompt like this:

% telnet example.com

Trying 10.0.0.1...

Connected to example.com

Escape character is '^]'.



DragonFly/i386 (example.com) (ttypa)



login: <username>

s/key 97 fw13894

Password:

Or for OPIE:

% telnet example.com

Trying 10.0.0.1...

Connected to example.com

Escape character is '^]'.



DragonFly/i386 (example.com) (ttypa)



login: <username>

otp-md5 498 to4268 ext

Password:

As a side note, the S/Key and OPIE prompts have a useful feature (not shown here): if you press Return at the password prompt, the prompter will turn echo on, so you can see what you are typing. This can be extremely useful if you are attempting to type in a password by hand, such as from a printout.

At this point you need to generate your one-time password to answer this login prompt. This must be done on a trusted system that you can run key or opiekey on. (There are versions of these for DOS, Windows® and Mac OS® as well.) They need both the iteration count and the seed as command line options. You can cut-and-paste these right from the login prompt on the machine that you are logging in to.

On the trusted system:

% key 97 fw13894

Reminder - Do not use this program while logged in via telnet or rlogin.

Enter secret password:

WELD LIP ACTS ENDS ME HAAG

For OPIE:

% opiekey 498 to4268

Using the MD5 algorithm to compute response.

Reminder: Don't use opiekey from telnet or dial-in sessions.

Enter secret pass phrase:

GAME GAG WELT OUT DOWN CHAT

Now that you have your one-time password you can continue logging in:

login: <username>

s/key 97 fw13894

Password: <return to enable echo>

s/key 97 fw13894

Password [echo on]: WELD LIP ACTS ENDS ME HAAG

Last login: Tue Mar 21 11:56:41 from 10.0.0.2 ...

Generating Multiple One-time Passwords

Sometimes you have to go places where you do not have access to a trusted machine or secure connection. In this case, it is possible to use the key and opiekey commands to generate a number of one-time passwords beforehand to be printed out and taken with you. For example:

% key -n 5 30 zz99999

Reminder - Do not use this program while logged in via telnet or rlogin.

Enter secret password: <secret password>

26: SODA RUDE LEA LIND BUDD SILT

27: JILT SPY DUTY GLOW COWL ROT

28: THEM OW COLA RUNT BONG SCOT

29: COT MASH BARR BRIM NAN FLAG

30: CAN KNEE CAST NAME FOLK BILK

Or for OPIE:

% opiekey -n 5 30 zz99999

Using the MD5 algorithm to compute response.

Reminder: Don't use opiekey from telnet or dial-in sessions.

Enter secret pass phrase: <secret password>

26: JOAN BORE FOSS DES NAY QUIT

27: LATE BIAS SLAY FOLK MUCH TRIG

28: SALT TIN ANTI LOON NEAL USE

29: RIO ODIN GO BYE FURY TIC

30: GREW JIVE SAN GIRD BOIL PHI

The -n 5 requests five keys in sequence, the 30 specifies what the last iteration number should be. Note that these are printed out in reverse order of eventual use. If you are really paranoid, you might want to write the results down by hand; otherwise you can cut-and-paste into lpr. Note that each line shows both the iteration count and the one-time password; you may still find it handy to scratch off passwords as you use them.

Restricting Use of UNIX® Passwords

S/Key can place restrictions on the use of UNIX passwords based on the host name, user name, terminal port, or IP address of a login session. These restrictions can be found in the configuration file /etc/skey.access. The skey.access(5) manual page has more information on the complete format of the file and also details some security cautions to be aware of before depending on this file for security.

If there is no /etc/skey.access file (this is the default), then all users will be allowed to use UNIX passwords. If the file exists, however, then all users will be required to use S/Key unless explicitly permitted to do otherwise by configuration statements in the skey.access file. In all cases, UNIX passwords are permitted on the console.

Here is a sample skey.access configuration file which illustrates the three most common sorts of configuration statements:

permit internet 192.168.0.0 255.255.0.0

permit user fnord

permit port ttyd0

The first line (permit internet) allows users whose IP source address (which is vulnerable to spoofing) matches the specified value and mask, to use UNIX passwords. This should not be considered a security mechanism, but rather, a means to remind authorized users that they are using an insecure network and need to use S/Key for authentication.

The second line (permit user) allows the specified username, in this case fnord, to use UNIX passwords at any time. Generally speaking, this should only be used for people who are either unable to use the key program, like those with dumb terminals, or those who are uneducable.

The third line (permit port) allows all users logging in on the specified terminal line to use UNIX passwords; this would be used for dial-ups.

Here is a sample /etc/opieaccess file:

permit 192.168.0.0 255.255.0.0

This line allows users whose IP source address (which is vulnerable to spoofing) matches the specified value and mask, to use UNIX passwords at any time.

If no rules in opieaccess are matched, the default is to deny non-OPIE logins.


CategoryHandbook

CategoryHandbook-security

Firewalls

*Contributed by Gary Palmer and Alex Nash.*

Firewalls are an area of increasing interest for people who are connected to the Internet, and are even finding applications on private networks to provide enhanced security. This section will hopefully explain what firewalls are, how to use them, and how to use the facilities provided in the DragonFly kernel to implement them.

Note: People often think that having a firewall between your internal network and the Big Bad Internet will solve all your security problems. It may help, but a poorly set up firewall system is more of a security risk than not having one at all. A firewall can add another layer of security to your systems, but it cannot stop a really determined cracker from penetrating your internal network. If you let internal security lapse because you believe your firewall to be impenetrable, you have just made the cracker's job that much easier.

What Is a Firewall?

There are currently two distinct types of firewalls in common use on the Internet today. The first type is more properly called a packet filtering router. This type of firewall utilizes a multi-homed machine and a set of rules to determine whether to forward or block individual packets. A multi-homed machine is simply a device with multiple network interfaces. The second type, known as a proxy server, relies on daemons to provide authentication and to forward packets, possibly on a multi-homed machine which has kernel packet forwarding disabled.

Sometimes sites combine the two types of firewalls, so that only a certain machine (known as a bastion host) is allowed to send packets through a packet filtering router onto an internal network. Proxy services are run on the bastion host, which are generally more secure than normal authentication mechanisms.

DragonFly comes with a kernel packet filter (known as IPFW), which is what the rest of this section will concentrate on. Proxy servers can be built on DragonFly from third party software, but there is such a variety of proxy servers available that it would be impossible to cover them in this section.

Packet Filtering Routers

A router is a machine which forwards packets between two or more networks. A packet filtering router is programmed to compare each packet to a list of rules before deciding if it should be forwarded or not. Most modern IP routing software includes packet filtering functionality that defaults to forwarding all packets. To enable the filters, you need to define a set of rules.

To decide whether a packet should be passed on, the firewall looks through its set of rules for a rule which matches the contents of the packet's headers. Once a match is found, the rule action is obeyed. The rule action could be to drop the packet, to forward the packet, or even to send an ICMP message back to the originator. Only the first match counts, as the rules are searched in order. Hence, the list of rules can be referred to as a rule chain.

The packet-matching criteria varies depending on the software used, but typically you can specify rules which depend on the source IP address of the packet, the destination IP address, the source port number, the destination port number (for protocols which support ports), or even the packet type (UDP, TCP, ICMP, etc).

Proxy Servers

Proxy servers are machines which have had the normal system daemons ( telnetd , ftpd , etc) replaced with special servers. These servers are called proxy servers, as they normally only allow onward connections to be made. This enables you to run (for example) a proxy telnet server on your firewall host, and people can telnet in to your firewall from the outside, go through some authentication mechanism, and then gain access to the internal network (alternatively, proxy servers can be used for signals coming from the internal network and heading out).

Proxy servers are normally more secure than normal servers, and often have a wider variety of authentication mechanisms available, including one-shot password systems so that even if someone manages to discover what password you used, they will not be able to use it to gain access to your systems as the password expires immediately after the first use. As they do not actually give users access to the host machine, it becomes a lot more difficult for someone to install backdoors around your security system.

Proxy servers often have ways of restricting access further, so that only certain hosts can gain access to the servers. Most will also allow the administrator to specify which users can talk to which destination machines. Again, what facilities are available depends largely on what proxy software you choose.

Firewall options in DragonFlyBSD

DragonFlyBSD inherited the IPFW firewall (versions 1 and 2) when it forked from FreeBSD. Soon afterwards, we imported the pf packet filter that the OpenBSD developers created from scratch. It is a cleaner code base and is now the recommended solution for firewalling DragonFly. Keep in mind that the PF version in DragonFly is not in sync with OpenBSD's PF code: we have not yet incorporated the improvements made in PF over the last few years, but we have some improvements of our own. IPFW is still supported and will remain so for the foreseeable future; it has some features not yet available in PF.

A copy of the OpenBSD PF user's guide corresponding to the version of PF in DragonFly can be found in ?PFUsersGuide.

What Does IPFW Allow Me to Do?

IPFW, the software supplied with DragonFly, is a packet filtering and accounting system which resides in the kernel, and has a user-land control utility, ipfw(8). Together, they allow you to define and query the rules used by the kernel in its routing decisions.

There are two related parts to IPFW. The firewall section performs packet filtering. There is also an IP accounting section which tracks usage of the router, based on rules similar to those used in the firewall section. This allows the administrator to monitor how much traffic the router is getting from a certain machine, or how much WWW traffic it is forwarding, for example.

As a result of the way that IPFW is designed, you can use IPFW on non-router machines to perform packet filtering on incoming and outgoing connections. This is a special case of the more general use of IPFW, and the same commands and techniques should be used in this situation.

Enabling IPFW on DragonFly

As the main part of the IPFW system lives in the kernel, you will need to add one or more options to your kernel configuration file, depending on what facilities you want, and recompile your kernel. See "Reconfiguring your Kernel" ([kernelconfig.html Chapter 9]) for more details on how to recompile your kernel.

Warning: IPFW defaults to a policy of deny ip from any to any. If you do not add other rules during startup to allow access, you will lock yourself out of the server upon rebooting into a firewall-enabled kernel. We suggest that you set firewall_type=open in your /etc/rc.conf file when first enabling this feature, then refining the firewall rules in /etc/rc.firewall after you have tested that the new kernel feature works properly. To be on the safe side, you may wish to consider performing the initial firewall configuration from the local console rather than via ssh . Another option is to build a kernel using both the IPFIREWALL and IPFIREWALL_DEFAULT_TO_ACCEPT options. This will change the default rule of IPFW to allow ip from any to any and avoid the possibility of a lockout.
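For example, a minimal sketch of the rc.conf settings described above, assuming the standard firewall knobs; switch away from the open type once the rules in /etc/rc.firewall have been tested:

firewall_enable="YES"    # load the firewall rules at boot

firewall_type="open"     # allow everything while testing; tighten later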

There are currently four kernel configuration options relevant to IPFW:

options IPFIREWALL:: Compiles into the kernel the code for packet filtering.

options IPFIREWALL_VERBOSE:: Enables code to allow logging of packets through syslogd(8). Without this option, even if you specify that packets should be logged in the filter rules, nothing will happen.

options IPFIREWALL_VERBOSE_LIMIT=10:: Limits the number of packets logged through syslogd(8) on a per entry basis. You may wish to use this option in hostile environments in which you want to log firewall activity, but do not want to be open to a denial of service attack via syslog flooding.

When a chain entry reaches the packet limit specified, logging is turned off for that particular entry. To resume logging, you will need to reset the associated counter using the ipfw(8) utility:

# ipfw zero 4500

Where 4500 is the chain entry you wish to continue logging.

options IPFIREWALL_DEFAULT_TO_ACCEPT:: This changes the default rule action from deny to allow. This avoids the possibility of locking yourself out if you happen to boot a kernel with IPFIREWALL support but have not configured your firewall yet. It is also very useful if you often use ipfw(8) as a filter for specific problems as they arise. Use with care though, as this opens up the firewall and changes the way it works.
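Taken together, the firewall portion of a kernel configuration file might look like the following sketch; the logging options are optional, and the last line is shown commented out because it changes the default policy to accept:

options IPFIREWALL                      # enable IPFW packet filtering

options IPFIREWALL_VERBOSE              # log matched packets via syslogd(8)

options IPFIREWALL_VERBOSE_LIMIT=10     # stop logging an entry after 10 packets

#options IPFIREWALL_DEFAULT_TO_ACCEPT   # uncomment to default to allow instead of deny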

Configuring IPFW

The configuration of the IPFW software is done through the ipfw(8) utility. The syntax for this command looks quite complicated, but it is relatively simple once you understand its structure.

There are currently four different command categories used by the utility: addition/deletion, listing, flushing, and clearing. Addition/deletion is used to build the rules that control how packets are accepted, rejected, and logged. Listing is used to examine the contents of your rule set (otherwise known as the chain) and packet counters (accounting). Flushing is used to remove all entries from the chain. Clearing is used to zero out one or more accounting entries.

Altering the IPFW Rules

The syntax for this form of the command is:

ipfw [-N] command [index] action [log] protocol addresses [options]

There is one valid flag when using this form of the command:

-N:: Resolve addresses and service names in output.

The command given can be shortened to the shortest unique form. The valid commands are:

add:: Add an entry to the firewall/accounting rule list.

delete:: Delete an entry from the firewall/accounting rule list.

Previous versions of IPFW used separate firewall and accounting entries. The present version provides packet accounting with each firewall entry.

If an index value is supplied, it is used to place the entry at a specific point in the chain. Otherwise, the entry is placed at the end of the chain at an index 100 greater than the last chain entry (this does not include the default policy, rule 65535, deny).

The log option causes matching rules to be output to the system console if the kernel was compiled with IPFIREWALL_VERBOSE.

Valid actions are:

reject:: Drop the packet, and send an ICMP host or port unreachable (as appropriate) packet to the source.

allow:: Pass the packet on as normal. (aliases: pass, permit, and accept)

deny:: Drop the packet. The source is not notified via an ICMP message (thus it appears that the packet never arrived at the destination).

count:: Update packet counters but do not allow/deny the packet based on this rule. The search continues with the next chain entry.

Each action will be recognized by the shortest unambiguous prefix.

The protocols which can be specified are:

all:: Matches any IP packet.

icmp:: Matches ICMP packets.

tcp:: Matches TCP packets.

udp:: Matches UDP packets.

The address specification is:

from ***address/mask*** [***port***] to ***address/mask*** [***port***] [via ***interface***]

You can only specify ***port*** in conjunction with protocols which support ports (UDP and TCP).

The via is optional and may specify the IP address or domain name of a local IP interface, or an interface name (e.g. ed0) to match only packets coming through this interface. Interface unit numbers can be specified with an optional wildcard. For example, ppp* would match all kernel PPP interfaces.

The syntax used to specify an ***address/mask*** is:

`***address***`

or

`***address***`/`***mask-bits***`

or

`***address***`:`***mask-pattern***`

A valid hostname may be specified in place of the IP address. ' mask-bits ' is a decimal number representing how many bits in the address mask should be set. e.g. specifying 192.216.222.1/24 will create a mask which will allow any address in a class C subnet (in this case, 192.216.222) to be matched. ' mask-pattern ' is an IP address which will be logically AND'ed with the address given. The keyword any may be used to specify any IP address.

The port numbers to be blocked are specified as:

***port*** [,***port*** [,***port*** [...]]]

to specify either a single port or a list of ports, or

***port***-***port***

to specify a range of ports. You may also combine a single range with a list, but the range must always be specified first.

The options available are:

frag:: Matches if the packet is not the first fragment of the datagram.

in:: Matches if the packet is on the way in.

out:: Matches if the packet is on the way out.

ipoptions ***spec***:: Matches if the IP header contains the comma separated list of options specified in ***spec***. The supported IP options are: ssrr (strict source route), lsrr (loose source route), rr (record packet route), and ts (time stamp). The absence of a particular option may be specified with a leading !.

established:: Matches if the packet is part of an already established TCP connection (i.e. it has the RST or ACK bits set). You can optimize the performance of the firewall by placing established rules early in the chain.

setup:: Matches if the packet is an attempt to establish a TCP connection (the SYN bit is set but the ACK bit is not).

tcpflags ***flags***:: Matches if the TCP header contains the comma separated list of ***flags***. The supported flags are fin, syn, rst, psh, ack, and urg. The absence of a particular flag may be indicated by a leading !.

icmptypes ***types***:: Matches if the ICMP type is present in the list ***types***. The list may be specified as any combination of ranges and/or individual types separated by commas. Commonly used ICMP types are: 0 echo reply (ping reply), 3 destination unreachable, 5 redirect, 8 echo request (ping request), and 11 time exceeded (used to indicate TTL expiration as with traceroute(8)).
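As a hedged illustration of how the established and setup options work together, the following hypothetical rules pass packets belonging to existing TCP connections early in the chain and then permit new connections only to a web server at 192.168.1.10 (an address made up for this example; adapt it to your own network):

# ipfw add 1000 allow tcp from any to any established

# ipfw add 1100 allow tcp from any to 192.168.1.10 80 setup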

Listing the IPFW Rules

The syntax for this form of the command is:

ipfw [-a] [-c] [-d] [-e] [-t] [-N] [-S] list

There are seven valid flags when using this form of the command:

-a:: While listing, show counter values. This option is the only way to see accounting counters.

-c:: List rules in compact form.

-d:: Show dynamic rules in addition to static rules.

-e:: If -d was specified, also show expired dynamic rules.

-t:: Display the last match times for each chain entry. The time listing is incompatible with the input syntax used by the ipfw(8) utility.

-N:: Attempt to resolve given addresses and service names.

-S:: Show the set each rule belongs to. If this flag is not specified, disabled rules will not be listed.

Flushing the IPFW Rules

The syntax for flushing the chain is:

ipfw flush

This causes all entries in the firewall chain to be removed except the fixed default policy enforced by the kernel (index 65535). Use caution when flushing rules; the default deny policy will leave your system cut off from the network until allow entries are added to the chain.

Clearing the IPFW Packet Counters

The syntax for clearing one or more packet counters is:

ipfw zero [***index***]

When used without an ***index*** argument, all packet counters are cleared. If an ***index*** is supplied, the clearing operation only affects a specific chain entry.

Example Commands for ipfw

This command will deny all packets from the host evil.crackers.org to the telnet port of the host nice.people.org:

# ipfw add deny tcp from evil.crackers.org to nice.people.org 23

The next example denies and logs any TCP traffic from the entire crackers.org network (a class C) to the nice.people.org machine (any port).

# ipfw add deny log tcp from evil.crackers.org/24 to nice.people.org

If you do not want people sending X sessions to your internal network (a subnet of a class C), the following command will do the necessary filtering:

# ipfw add deny tcp from any to my.org/28 6000 setup

To see the accounting records:

# ipfw -a list

or in the short form

# ipfw -a l

You can also see the last time a chain entry was matched with:

# ipfw -at l

Building a Packet Filtering Firewall

Note: The following suggestions are just that: suggestions. The requirements of each firewall are different and we cannot tell you how to build a firewall to meet your particular requirements.

When initially setting up your firewall, unless you have a test bench where you can configure your firewall host in a controlled environment, it is strongly recommended that you use the logging version of the commands and enable logging in the kernel. This will allow you to quickly identify problem areas and cure them without too much disruption. Even after the initial setup phase is complete, I recommend using logging for `deny' rules, as it allows tracing of possible attacks and also modification of the firewall rules if your requirements change.

Note: If you use the logging versions of the accept command, be aware that it can generate large amounts of log data. One log entry will be generated for every packet that passes through the firewall, so large FTP/http transfers, etc, will really slow the system down. It also increases the latencies on those packets as it requires more work to be done by the kernel before the packet can be passed on. syslogd will also start using up a lot more processor time as it logs all the extra data to disk, and it could quite easily fill the partition /var/log is located on.

You should enable your firewall from /etc/rc.conf.local or /etc/rc.conf. The associated manual page explains which knobs to fiddle and lists some preset firewall configurations. If you do not use a preset configuration, ipfw list will output the current ruleset into a file that you can pass to rc.conf. If you do not use /etc/rc.conf.local or /etc/rc.conf to enable your firewall, it is important to make sure your firewall is enabled before any IP interfaces are configured.
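A minimal sketch of this approach, assuming a hypothetical rules file /etc/ipfw.rules containing plain ipfw(8) commands and an rc.firewall that (like FreeBSD's) loads an unrecognized firewall_type as such a file; verify the behaviour of your own rc.firewall before relying on it. In /etc/rc.conf:

firewall_enable="YES"               # enable IPFW at boot

firewall_type="/etc/ipfw.rules"     # hypothetical file of ipfw commands

The rules file itself might contain, for example:

add allow ip from any to any via lo0

add allow tcp from any to any established

add deny log ip from any to any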

The next problem is what your firewall should actually do! This is largely dependent on what access to your network you want to allow from the outside, and how much access to the outside world you want to allow from the inside. Some general rules are:

Another checklist for firewall configuration is available from CERT at http://www.cert.org/tech_tips/packet_filtering.html

As stated above, these are only guidelines. You will have to decide what filter rules you want to use on your firewall yourself. We cannot accept ANY responsibility if someone breaks into your network, even if you follow the advice given above.

IPFW Overhead and Optimization

Many people want to know how much overhead IPFW adds to a system. The answer to this depends mostly on your rule set and processor speed. For most applications dealing with Ethernet and small rule sets, the answer is negligible. For those of you that need actual measurements to satisfy your curiosity, read on.

The following measurements were made using FreeBSD 2.2.5-STABLE on a 486-66. (While IPFW has changed slightly in later releases of DragonFly, it still performs with similar speed.) IPFW was modified to measure the time spent within the ip_fw_chk routine, displaying the results to the console every 1000 packets.

Two rule sets, each with 1000 rules, were tested. The first set was designed to demonstrate a worst case scenario by repeating the rule:

# ipfw add deny tcp from any to any 55555

This demonstrates a worst case scenario by causing most of IPFW's packet check routine to be executed before finally deciding that the packet does not match the rule (by virtue of the port number). Following the 999th iteration of this rule was an allow ip from any to any.

The second set of rules were designed to abort the rule check quickly:

# ipfw add deny ip from 1.2.3.4 to 1.2.3.4

The non-matching source IP address for the above rule causes these rules to be skipped very quickly. As before, the 1000th rule was an allow ip from any to any.

The per-packet processing overhead in the former case was approximately 2.703 ms/packet, or roughly 2.7 microseconds per rule. Thus the theoretical packet processing limit with these rules is around 370 packets per second. Assuming 10 Mbps Ethernet and a ~1500 byte packet size, 370 packets per second amounts to roughly 4.4 Mbps, so we would only be able to achieve about 44% bandwidth utilization.

For the latter case each packet was processed in approximately 1.172 ms, or roughly 1.2 microseconds per rule. The theoretical packet processing limit here would be about 853 packets per second, which could consume 10 Mbps Ethernet bandwidth.

The excessive number of rules tested and the nature of those rules do not provide a real-world scenario -- they were used only to generate the timing information presented here. Here are a few things to keep in mind when building an efficient rule set:

CategoryHandbook

CategoryHandbook-security

OpenSSL

OpenSSL provides a general-purpose cryptography library, as well as the Secure Sockets Layer v2/v3 (SSLv2/SSLv3) and Transport Layer Security v1 (TLSv1) network security protocols.

However, one of the algorithms (specifically IDEA) included in OpenSSL is protected by patents in the USA and elsewhere, and is not available for unrestricted use. IDEA is included in the OpenSSL sources in DragonFly, but it is not built by default. If you wish to use it, and you comply with the license terms, enable the MAKE_IDEA switch in /etc/make.conf and rebuild your sources using make world.
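If you have reviewed the license terms and decide to build it, the switch is a single line in /etc/make.conf; a sketch (check the exact spelling against make.conf(5) on your system):

# build the patented IDEA cipher into OpenSSL (only if the license permits)

MAKE_IDEA=YES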

Today, the RSA algorithm is free for use in the USA and other countries; in the past it was protected by a patent.

VPN over IPsec

*Written by Nik Clayton.*

Creating a VPN between two networks, separated by the Internet, using DragonFly gateways.

Understanding IPsec

*Written by Hiten M. Pandya.*

This section will guide you through the process of setting up IPsec and using it in an environment consisting of DragonFly and Microsoft® Windows® 2000/XP machines, to make them communicate securely. In order to set up IPsec, it is necessary that you are familiar with the concepts of building a custom kernel (see [kernelconfig.html Chapter 9]).

IPsec is a protocol which sits on top of the Internet Protocol (IP) layer. It allows two or more hosts to communicate in a secure manner (hence the name). The DragonFly IPsec network stack is based on the KAME implementation, which has support for both protocol families, IPv4 and IPv6.

IPsec consists of two sub-protocols:

Encapsulated Security Payload (ESP):: protects the IP packet data from third party interference by encrypting its contents with a symmetric cryptography algorithm.

Authentication Header (AH):: protects the IP packet header from third party interference and spoofing by computing a cryptographic checksum over the header fields.

ESP and AH can either be used together or separately, depending on the environment.

IPsec can either be used to directly encrypt the traffic between two hosts (known as Transport Mode); or to build virtual tunnels between two subnets, which could be used for secure communication between two corporate networks (known as Tunnel Mode). The latter is more commonly known as a Virtual Private Network (VPN). The ipsec(4) manual page should be consulted for detailed information on the IPsec subsystem in DragonFly.

To add IPsec support to your kernel, add the following options to your kernel configuration file:

options   IPSEC        #IP security

options   IPSEC_ESP    #IP security (crypto; define w/ IPSEC)

If IPsec debugging support is desired, the following kernel option should also be added:

options   IPSEC_DEBUG  #debug for IP security

The Problem

There's no standard for what constitutes a VPN. VPNs can be implemented using a number of different technologies, each of which have their own strengths and weaknesses. This article presents a number of scenarios, and strategies for implementing a VPN for each scenario.

Scenario #1: Two networks, connected to the Internet, to behave as one

This is the scenario that first caused me to investigate VPNs. The premise is as follows:

If you find that you are trying to connect two networks, both of which, internally, use the same private IP address range (e.g., both of them use 192.168.1.x), then one of the networks will have to be renumbered.

The network topology might look something like this:

security/ipsec-network.png

Notice the two public IP addresses. I'll use the letters to refer to them in the rest of this article. Anywhere you see those letters in this article, replace them with your own public IP addresses. Note also that internally, the two gateway machines have .1 IP addresses, and that the two networks have different private IP addresses (192.168.1.x and 192.168.2.x respectively). All the machines on the private networks have been configured to use the .1 machine as their default gateway.

The intention is that, from a network point of view, each network should view the machines on the other network as though they were directly attached to the same router -- albeit a slightly slow router with an occasional tendency to drop packets.

This means that (for example), machine 192.168.1.20 should be able to run

ping 192.168.2.34

and have it work, transparently. Windows machines should be able to see the machines on the other network, browse file shares, and so on, in exactly the same way that they can browse machines on the local network.

And the whole thing has to be secure. This means that traffic between the two networks has to be encrypted.

Creating a VPN between these two networks is a multi-step process. The stages are as follows:

  1. Create a virtual network link between the two networks, across the Internet. Test it, using tools like ping(8), to make sure it works.

  2. Apply security policies to ensure that traffic between the two networks is transparently encrypted and decrypted as necessary. Test this, using tools like tcpdump(1), to ensure that traffic is encrypted.

  3. Configure additional software on the DragonFly gateways, to allow Windows machines to see one another across the VPN.

Step 1: Creating and testing a virtual network link

Suppose that you were logged in to the gateway machine on network #1 (with public IP address A.B.C.D, private IP address 192.168.1.1), and you ran ping 192.168.2.1, which is the private address of the machine with IP address W.X.Y.Z. What needs to happen in order for this to work?

  1. The gateway machine needs to know how to reach 192.168.2.1. In other words, it needs to have a route to 192.168.2.1.

  2. Private IP addresses, such as those in the 192.168.x range, are not supposed to appear on the Internet at large. Instead, each packet you send to 192.168.2.1 will need to be wrapped up inside another packet. This packet will need to appear to be from A.B.C.D, and it will have to be sent to W.X.Y.Z. This process is called encapsulation.

  3. Once this packet arrives at W.X.Y.Z it will need to be unencapsulated, and delivered to 192.168.2.1.

You can think of this as requiring a tunnel between the two networks. The two tunnel mouths are the IP addresses A.B.C.D and W.X.Y.Z, and the tunnel must be told the addresses of the private IP addresses that will be allowed to pass through it. The tunnel is used to transfer traffic with private IP addresses across the public Internet.

This tunnel is created by using the generic interface, or gif devices on DragonFly. As you can imagine, the gif interface on each gateway host must be configured with four IP addresses; two for the public IP addresses, and two for the private IP addresses.

Support for the gif device must be compiled in to the DragonFly kernel on both machines. You can do this by adding the line:

pseudo-device gif

to the kernel configuration files on both machines, and then compile, install, and reboot as normal.

Configuring the tunnel is a two step process. First the tunnel must be told what the outside (or public) IP addresses are, using gifconfig(8). Then the private IP addresses must be configured using ifconfig(8).

On the gateway machine on network #1 you would run the following two commands to configure the tunnel.

gifconfig gif0 A.B.C.D W.X.Y.Z

ifconfig gif0 inet 192.168.1.1 192.168.2.1 netmask 0xffffffff

On the other gateway machine you run the same commands, but with the order of the IP addresses reversed.

gifconfig gif0 W.X.Y.Z A.B.C.D

ifconfig gif0 inet 192.168.2.1 192.168.1.1 netmask 0xffffffff

You can then run:

gifconfig gif0

to see the configuration. For example, on the network #1 gateway, you would see this:

# gifconfig gif0

gif0: flags=8011<UP,POINTTOPOINT,MULTICAST> mtu 1280

inet 192.168.1.1 --> 192.168.2.1 netmask 0xffffffff

physical address inet A.B.C.D --> W.X.Y.Z

As you can see, a tunnel has been created between the physical addresses A.B.C.D and W.X.Y.Z, and the traffic allowed through the tunnel is that between 192.168.1.1 and 192.168.2.1.

This will also have added an entry to the routing table on both machines, which you can examine with the command netstat -rn. This output is from the gateway host on network #1.

# netstat -rn

Routing tables



Internet:

Destination      Gateway       Flags    Refs    Use    Netif  Expire

...

192.168.2.1      192.168.1.1   UH        0        0    gif0

...

As the Flags value indicates, this is a host route, which means that each gateway knows how to reach the other gateway, but they do not know how to reach the rest of their respective networks. That problem will be fixed shortly.

It is likely that you are running a firewall on both machines. This will need to be circumvented for your VPN traffic. You might want to allow all traffic between both networks, or you might want to include firewall rules that protect both ends of the VPN from one another.

It greatly simplifies testing if you configure the firewall to allow all traffic through the VPN. You can always tighten things up later. If you are using ipfw(8) on the gateway machines then a command like

ipfw add 1 allow ip from any to any via gif0

will allow all traffic between the two end points of the VPN, without affecting your other firewall rules. Obviously you will need to run this command on both gateway hosts.

This is sufficient to allow each gateway machine to ping the other. On 192.168.1.1, you should be able to run

ping 192.168.2.1

and get a response, and you should be able to do the same thing on the other gateway machine.

However, you will not be able to reach internal machines on either network yet. This is because of the routing -- although the gateway machines know how to reach one another, they do not know how to reach the network behind each one.

To solve this problem you must add a static route on each gateway machine. The command to do this on the first gateway would be:

route add 192.168.2.0 192.168.2.1 netmask 0xffffff00

This says "In order to reach the hosts on the network 192.168.2.0, send the packets to the host 192.168.2.1." You will need to run a similar command on the other gateway, but with the 192.168.1.x addresses instead.
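The corresponding command on the network #2 gateway would therefore look like this:

route add 192.168.1.0 192.168.1.1 netmask 0xffffff00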

IP traffic from hosts on one network will now be able to reach hosts on the other network.

That has now created two thirds of a VPN between the two networks, in as much as it is virtual and it is a network. It is not private yet. You can test this using ping(8) and tcpdump(1). Log in to the gateway host and run

tcpdump dst host 192.168.2.1

In another log in session on the same host run

ping 192.168.2.1

You will see output that looks something like this:

16:10:24.018080 192.168.1.1 > 192.168.2.1: icmp: echo request

16:10:24.018109 192.168.1.1 > 192.168.2.1: icmp: echo reply

16:10:25.018814 192.168.1.1 > 192.168.2.1: icmp: echo request

16:10:25.018847 192.168.1.1 > 192.168.2.1: icmp: echo reply

16:10:26.028896 192.168.1.1 > 192.168.2.1: icmp: echo request

16:10:26.029112 192.168.1.1 > 192.168.2.1: icmp: echo reply

As you can see, the ICMP messages are going back and forth unencrypted. If you had used the -s parameter to tcpdump(1) to grab more bytes of data from the packets you would see more information.

Obviously this is unacceptable. The next section will discuss securing the link between the two networks so that all traffic is automatically encrypted.

Summary:

Step 2: Securing the link

To secure the link we will be using IPsec. IPsec provides a mechanism for two hosts to agree on an encryption key, and to then use this key in order to encrypt data between the two hosts.

There are two areas of configuration to be considered here.

  1. There must be a mechanism for two hosts to agree on the encryption mechanism to use. Once two hosts have agreed on this mechanism there is said to be a security association between them.

  2. There must be a mechanism for specifying which traffic should be encrypted. Obviously, you don't want to encrypt all your outgoing traffic -- you only want to encrypt the traffic that is part of the VPN. The rules that you put in place to determine what traffic will be encrypted are called security policies.

Security associations and security policies are both maintained by the kernel, and can be modified by userland programs. However, before you can do this you must configure the kernel to support IPsec and the Encapsulated Security Payload (ESP) protocol. This is done by configuring a kernel with:

options IPSEC

options IPSEC_ESP

and recompiling, reinstalling, and rebooting. As before you will need to do this to the kernels on both of the gateway hosts.

You have two choices when it comes to setting up security associations. You can configure them by hand between two hosts, which entails choosing the encryption algorithm, encryption keys, and so forth, or you can use daemons that implement the Internet Key Exchange protocol (IKE) to do this for you.

I recommend the latter. Apart from anything else, it is easier to set up.

Editing and displaying security policies is carried out using setkey(8). By analogy, setkey is to the kernel's security policy tables as route(8) is to the kernel's routing tables. setkey can also display the current security associations, and to continue the analogy further, is akin to netstat -r in that respect.

There are a number of choices for daemons to manage security associations with DragonFly. This article will describe how to use one of these, racoon. racoon is in the FreeBSD ports collection, in the security/ category, and is installed in the usual way.

racoon must be run on both gateway hosts. On each host it is configured with the IP address of the other end of the VPN, and a secret key (which you choose, and must be the same on both gateways).

The two daemons then contact one another, confirm that they are who they say they are (by using the secret key that you configured). The daemons then generate a new secret key, and use this to encrypt the traffic over the VPN. They periodically change this secret, so that even if an attacker were to crack one of the keys (which is as theoretically close to unfeasible as it gets) it won't do them much good -- by the time they've cracked the key the two daemons have chosen another one.

racoon's configuration is stored in ${PREFIX}/etc/racoon. You should find a configuration file there, which should not need to be changed too much. The other component of racoon's configuration, which you will need to change, is the pre-shared key.

The default racoon configuration expects to find this in the file ${PREFIX}/etc/racoon/psk.txt. It is important to note that the pre-shared key is not the key that will be used to encrypt your traffic across the VPN link, it is simply a token that allows the key management daemons to trust one another.

psk.txt contains a line for each remote site you are dealing with. In this example, where there are two sites, each psk.txt file will contain one line (because each end of the VPN is only dealing with one other end).

On gateway host #1 this line should look like this:

W.X.Y.Z            secret

That is, the public IP address of the remote end, whitespace, and a text string that provides the secret. Obviously, you shouldn't use secret as your key -- the normal rules for choosing a password apply.

On gateway host #2 the line would look like this

A.B.C.D            secret

That is, the public IP address of the remote end, and the same secret key. psk.txt must be mode 0600 (i.e., only read/write to root) before racoon will run.
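For example, substituting your actual ${PREFIX}:

# chown root ${PREFIX}/etc/racoon/psk.txt

# chmod 0600 ${PREFIX}/etc/racoon/psk.txt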

You must run racoon on both gateway machines. You will also need to add some firewall rules to allow the IKE traffic, which is carried over UDP to the ISAKMP (Internet Security Association Key Management Protocol) port. Again, this should be fairly early in your firewall ruleset.

ipfw add 1 allow udp from A.B.C.D to W.X.Y.Z isakmp

ipfw add 1 allow udp from W.X.Y.Z to A.B.C.D isakmp

Once racoon is running you can try pinging one gateway host from the other. The connection is still not encrypted, but racoon will then set up the security associations between the two hosts -- this might take a moment, and you may see this as a short delay before the ping commands start responding.

Once the security association has been set up you can view it using setkey(8). Run

setkey -D

on either host to view the security association information.

That's one half of the problem. The other half is setting your security policies.

To create a sensible security policy, let's review what's been set up so far. This discussion holds for both ends of the link.

Each IP packet that you send out has a header that contains data about the packet. The header includes the IP addresses of both the source and destination. As we already know, private IP addresses, such as the 192.168.x.y range are not supposed to appear on the public Internet. Instead, they must first be encapsulated inside another packet. This packet must have the public source and destination IP addresses substituted for the private addresses.

So if your outgoing packet started looking like this:

security/ipsec-out-pkt.png

Then it will be encapsulated inside another packet, looking something like this:

security/ipsec-encap-pkt.png

This encapsulation is carried out by the gif device. As you can see, the packet now has real IP addresses on the outside, and our original packet has been wrapped up as data inside the packet that will be put out on the Internet.

Obviously, we want all traffic between the VPNs to be encrypted. You might try putting this into words, as:

If a packet leaves from A.B.C.D, and it is destined for W.X.Y.Z, then encrypt it, using the necessary security associations.

If a packet arrives from W.X.Y.Z, and it is destined for A.B.C.D, then decrypt it, using the necessary security associations.

That's close, but not quite right. If you did this, all traffic to and from W.X.Y.Z, even traffic that was not part of the VPN, would be encrypted. That's not quite what you want. The correct policy is as follows

If a packet leaves from A.B.C.D, and that packet is encapsulating another packet, and it is destined for W.X.Y.Z, then encrypt it, using the necessary security associations.

If a packet arrives from W.X.Y.Z, and that packet is encapsulating another packet, and it is destined for A.B.C.D, then decrypt it, using the necessary security associations.

A subtle change, but a necessary one.

Security policies are also set using setkey(8). setkey(8) features a configuration language for defining the policy. You can either enter configuration instructions via stdin, or you can use the -f option to specify a filename that contains configuration instructions.

The configuration on gateway host #1 (which has the public IP address A.B.C.D) to force all outbound traffic to W.X.Y.Z to be encrypted is:

spdadd A.B.C.D/32 W.X.Y.Z/32 ipencap -P out ipsec esp/tunnel/A.B.C.D-W.X.Y.Z/require;

Put these commands in a file (e.g., /etc/ipsec.conf) and then run

# setkey -f /etc/ipsec.conf

spdadd tells setkey(8) that we want to add a rule to the secure policy database. The rest of this line specifies which packets will match this policy. A.B.C.D/32 and W.X.Y.Z/32 are the IP addresses and netmasks that identify the network or hosts that this policy will apply to. In this case, we want it to apply to traffic between these two hosts. ipencap tells the kernel that this policy should only apply to packets that encapsulate other packets. -P out says that this policy applies to outgoing packets, and ipsec says that the packet will be secured.

The second line specifies how this packet will be encrypted. esp is the protocol that will be used, while tunnel indicates that the packet will be further encapsulated in an IPsec packet. The repeated use of A.B.C.D and W.X.Y.Z is used to select the security association to use, and the final require mandates that packets must be encrypted if they match this rule.

This rule only matches outgoing packets. You will need a similar rule to match incoming packets.

spdadd W.X.Y.Z/32 A.B.C.D/32 ipencap -P in ipsec esp/tunnel/W.X.Y.Z-A.B.C.D/require;

Note the in instead of out in this case, and the necessary reversal of the IP addresses.
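Putting the outgoing and incoming rules together, the /etc/ipsec.conf file on gateway host #1 would contain both lines, loaded as before with setkey -f /etc/ipsec.conf:

spdadd A.B.C.D/32 W.X.Y.Z/32 ipencap -P out ipsec esp/tunnel/A.B.C.D-W.X.Y.Z/require;

spdadd W.X.Y.Z/32 A.B.C.D/32 ipencap -P in ipsec esp/tunnel/W.X.Y.Z-A.B.C.D/require;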

The other gateway host (which has the public IP address W.X.Y.Z) will need similar rules.

spdadd W.X.Y.Z/32 A.B.C.D/32 ipencap -P out ipsec esp/tunnel/W.X.Y.Z-A.B.C.D/require;

spdadd A.B.C.D/32 W.X.Y.Z/32 ipencap -P in ipsec esp/tunnel/A.B.C.D-W.X.Y.Z/require;

Finally, you need to add firewall rules to allow ESP and IPENCAP packets back and forth. These rules will need to be added to both hosts.

ipfw add 1 allow esp from A.B.C.D to W.X.Y.Z

ipfw add 1 allow esp from W.X.Y.Z to A.B.C.D

ipfw add 1 allow ipencap from A.B.C.D to W.X.Y.Z

ipfw add 1 allow ipencap from W.X.Y.Z to A.B.C.D

Because the rules are symmetric you can use the same rules on each gateway host.

Outgoing packets will now look something like this:

security/ipsec-crypt-pkt.png

When they are received by the far end of the VPN they will first be decrypted (using the security associations that have been negotiated by racoon). Then they will enter the gif interface, which will unwrap the second layer, until you are left with the innermost packet, which can then travel in to the inner network.

You can check the security using the same ping(8) test from earlier. First, log in to the A.B.C.D gateway machine, and run:

tcpdump dst host 192.168.2.1

In another log in session on the same host run

ping 192.168.2.1

This time you should see output like the following:

XXX tcpdump output

Now, as you can see, tcpdump(1) shows the ESP packets. If you try to examine them with the -s option you will see (apparently) gibberish, because of the encryption.

Congratulations. You have just set up a VPN between two remote sites.

Summary

The previous two steps should suffice to get the VPN up and running. Machines on each network will be able to refer to one another using IP addresses, and all traffic across the link will be automatically and securely encrypted.


OpenSSH

*Contributed by Chern Lee.*

OpenSSH is a set of network connectivity tools used to access remote machines securely. It can be used as a direct replacement for rlogin, rsh, rcp, and telnet. Additionally, any other TCP/IP connections can be tunneled/forwarded securely through SSH. OpenSSH encrypts all traffic to effectively eliminate eavesdropping, connection hijacking, and other network-level attacks.

OpenSSH is maintained by the OpenBSD project, and is based upon SSH v1.2.12 with all the recent bug fixes and updates. It is compatible with both SSH protocols 1 and 2.

Advantages of Using OpenSSH

Normally, when using telnet(1) or rlogin(1), data is sent over the network in a clear, unencrypted form. Network sniffers anywhere in between the client and server can steal your user/password information or data transferred in your session. OpenSSH offers a variety of authentication and encryption methods to prevent this from happening.

Enabling sshd

Be sure to make the following addition to your rc.conf file:

sshd_enable="YES"

This will load sshd(8), the daemon program for OpenSSH, the next time your system initializes. Alternatively, you can simply start the sshd daemon directly by typing rcstart sshd on the command line.

SSH Client

The ssh(1) utility works similarly to rlogin(1).

# ssh user@example.com

Host key not found from the list of known hosts.

Are you sure you want to continue connecting (yes/no)? yes

Host 'example.com' added to the list of known hosts.

user@example.com's password: *******

The login will continue just as it would have if a session was created using rlogin or telnet. SSH utilizes a key fingerprint system for verifying the authenticity of the server when the client connects. The user is prompted to enter yes only when connecting for the first time. Future attempts to login are all verified against the saved fingerprint key. The SSH client will alert you if the saved fingerprint differs from the received fingerprint on future login attempts. The fingerprints are saved in ~/.ssh/known_hosts, or ~/.ssh/known_hosts2 for SSH v2 fingerprints.

By default, OpenSSH servers are configured to accept both SSH v1 and SSH v2 connections. The client, however, can choose between the two. Version 2 is known to be more robust and secure than its predecessor.

The ssh(1) command can be forced to use either protocol by passing it the -1 or -2 argument for v1 and v2, respectively.

Secure Copy

The scp(1) command works similarly to rcp(1); it copies a file to or from a remote machine, except in a secure fashion.

#  scp user@example.com:/COPYRIGHT COPYRIGHT

user@example.com's password: *******

COPYRIGHT            100% |*****************************|  4735

00:00

#

Since the fingerprint was already saved for this host in the previous example, it is verified when using scp(1) here.

The arguments passed to scp(1) are similar to cp(1), with the file or files in the first argument, and the destination in the second. Since the file is fetched over the network, through SSH, one or more of the file arguments takes on the form user@host:<path_to_remote_file>. The user@ part is optional. If omitted, it will default to the same username as you are currently logged in as, unless configured otherwise.
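Copying in the other direction simply reverses the arguments; for example, to push a local file to a hypothetical path on the remote machine:

% scp COPYRIGHT user@example.com:/tmp/COPYRIGHT

user@example.com's password: *******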

Configuration

The system-wide configuration files for both the OpenSSH daemon and client reside within the /etc/ssh directory.

ssh_config configures the client settings, while sshd_config configures the daemon.

Additionally, the sshd_program (/usr/sbin/sshd by default), and sshd_flags rc.conf options can provide more levels of configuration.

Each user can have a personal configuration file in ~/.ssh/config. The file can configure various client options, and can include host-specific options. With the following configuration file, a user could type ssh shell which would be equivalent to ssh -X user@shell.example.com.

Host shell

 Hostname shell.example.com

 User user

 Protocol 2

 ForwardX11 yes

ssh-keygen

Instead of using passwords, ssh-keygen(1) can be used to generate RSA keys to authenticate a user:

% ssh-keygen -t rsa1

Initializing random number generator...

Generating p:  .++ (distance 66)

Generating q:  ..............................++ (distance 498)

Computing the keys...

Key generation complete.

Enter file in which to save the key (/home/user/.ssh/identity):

Enter passphrase:

Enter the same passphrase again:

Your identification has been saved in /home/user/.ssh/identity.

...

ssh-keygen(1) will create a public and private key pair for use in authentication. The private key is stored in ~/.ssh/identity, whereas the public key is stored in ~/.ssh/identity.pub. The public key must be placed in ~/.ssh/authorized_keys of the remote machine in order for the setup to work.

This will allow connection to the remote machine based upon RSA authentication instead of passwords.
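One way to install the public key, sketched here with a hypothetical remote host and paths, is to copy it over and append it to the authorized_keys file on the remote machine:

% scp ~/.ssh/identity.pub user@example.com:identity.pub

% ssh user@example.com

Then, on the remote machine:

% mkdir -p ~/.ssh

% cat ~/identity.pub >> ~/.ssh/authorized_keys

% rm ~/identity.pub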

Note: The -t rsa1 option will create RSA keys for use by SSH protocol version 1. If you want to use RSA keys with the SSH protocol version 2, you have to use the command ssh-keygen -t rsa.

If a passphrase is used in ssh-keygen(1), the user will be prompted for a password each time in order to use the private key.

A SSH protocol version 2 DSA key can be created for the same purpose by using the ssh-keygen -t dsa command. This will create a public/private DSA key for use in SSH protocol version 2 sessions only. The public key is stored in ~/.ssh/id_dsa.pub, while the private key is in ~/.ssh/id_dsa.

DSA public keys are also placed in ~/.ssh/authorized_keys on the remote machine.

ssh-agent(1) and ssh-add(1) are utilities used in managing multiple passworded private keys.
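
A minimal sketch of their use, assuming the C shell (hence ssh-agent -c) and keys in the default locations:

% eval `ssh-agent -c`          # start the agent and set its environment variables
% ssh-add                      # add the default private keys; you are prompted for their passphrases
% ssh user@example.com         # later logins use the keys cached by the agent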

Warning: The various options and files can differ depending on the OpenSSH version installed on your system; to avoid problems, consult the ssh-keygen(1) manual page.

SSH Tunneling

OpenSSH has the ability to create a tunnel to encapsulate another protocol in an encrypted session.

The following command tells ssh(1) to create a tunnel for telnet:

% ssh -2 -N -f -L 5023:localhost:23 user@foo.example.com

%

The ssh command is used with the following options:

-2

:: Forces ssh to use version 2 of the protocol. (Do not use if you are working with older SSH servers)

-N

:: Indicates no command, or tunnel only. If omitted, ssh would initiate a normal session.

-f

:: Forces ssh to run in the background.

-L

:: Indicates a local tunnel in ***localport:remotehost:remoteport*** fashion.

user@foo.example.com

:: The remote SSH server.

An SSH tunnel works by creating a listen socket on localhost on the specified port. It then forwards any connection received on the local host/port via the SSH connection to the specified remote host and port.

In the example, port ***5023*** on localhost is being forwarded to port ***23*** on localhost of the remote machine. Since ***23*** is telnet, this would create a secure telnet session through an SSH tunnel.

This can be used to wrap any number of insecure TCP protocols such as SMTP, POP3, FTP, etc.

Example 10-1. Using SSH to Create a Secure Tunnel for SMTP

% ssh -2 -N -f -L 5025:localhost:25 user@mailserver.example.com

user@mailserver.example.com's password: *****

% telnet localhost 5025

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

220 mailserver.example.com ESMTP

This can be used in conjunction with an ssh-keygen(1) and additional user accounts to create a more seamless/hassle-free SSH tunneling environment. Keys can be used in place of typing a password, and the tunnels can be run as a separate user.
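
As a sketch, a dedicated tunnel account (the account name and key path below are only illustrative) could start the SMTP tunnel non-interactively with a passphrase-less key selected via -i:

% ssh -2 -N -f -i /home/tunnel/.ssh/id_rsa -L 5025:localhost:25 tunnel@mailserver.example.com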

Practical SSH Tunneling Examples

Secure Access of a POP3 Server

At work, there is an SSH server that accepts connections from the outside. On the same office network resides a mail server running a POP3 server. The network, or the network path between your home and office, may not be completely trustworthy. Because of this, you need to check your e-mail in a secure manner. The solution is to create an SSH connection to your office's SSH server, and tunnel through to the mail server.

% ssh -2 -N -f -L 2110:mail.example.com:110 user@ssh-server.example.com

user@ssh-server.example.com's password: ******

When the tunnel is up and running, you can point your mail client to send POP3 requests to localhost port 2110. A connection here will be forwarded securely across the tunnel to mail.example.com.
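
You can verify that the forwarding works by connecting to the local end of the tunnel; the POP3 server's greeting should appear:

% telnet localhost 2110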

Bypassing a Draconian Firewall

Some network administrators impose extremely draconian firewall rules, filtering not only incoming connections, but outgoing connections as well. You may only be given access to remote machines on ports 22 and 80, for SSH and web surfing.

You may wish to access another (perhaps non-work related) service, such as an Ogg Vorbis server to stream music. If this Ogg Vorbis server streams on a port other than 22 or 80, you will not be able to access it.

The solution is to create an SSH connection to a machine outside of your network's firewall, and use it to tunnel to the Ogg Vorbis server.

% ssh -2 -N -f -L 8888:music.example.com:8000 user@unfirewalled-system.example.org

user@unfirewalled-system.example.org's password: *******

Your streaming client can now be pointed to localhost port 8888, which will be forwarded over to music.example.com port 8000, successfully evading the firewall.

Contributed by Matteo Riondato. Updated for DragonFly by Dario Banno.

Synopsis

This chapter will provide an explanation of what DragonFly jails are and how to use them. Jails, sometimes referred to as an enhanced replacement of chroot environments, are a very powerful tool for system administrators, but their basic usage can also be useful for advanced users.

After reading this chapter, you will know:


Other sources of useful information about jails are:



For information on how to setup a jail, see: Setting up a jail


Terms Related to Jails

To facilitate better understanding of parts of the DragonFly system related to jails, their internals and the way they interact with the rest of DragonFly, the following terms are used further in this chapter:

chroot(2) (system call)

A system call of DragonFly, which changes the root directory of a process and all its descendants.


chroot(2) (environment)

The environment of processes running in a “chroot”. This includes resources such as the part of the file system which is visible, user and group IDs which are available, network interfaces and other IPC mechanisms, etc.


jail(8) (command)

The system administration utility which allows launching of processes within a jail environment.


host (system, process, user, etc.)

The controlling system of a jail environment. The host system has access to all the hardware resources available, and can control processes both outside of and inside a jail environment. One of the important differences of the host system from a jail is that the limitations which apply to superuser processes inside a jail are not enforced for processes of the host system.


hosted (system, process, user, etc.)

A process, user or other entity, whose access to resources is restricted by a DragonFly jail.


Introduction

Since system administration is a difficult and perplexing task, many powerful tools were developed to make life easier for the administrator. These tools mostly provide enhancements of some sort to the way systems are installed, configured and maintained. Part of the tasks which an administrator is expected to do is to properly configure the security of a system, so that it can continue serving its real purpose, without allowing security violations.


One of the tools which can be used to enhance the security of a DragonFly system is jails. The jail feature was written by Poul-Henning Kamp phk@freebsd.org for R&D Associates http://www.rndassociates.com/, who contributed it to FreeBSD 4.X. Support for multiple IPs and IPv6 was introduced in DragonFly 1.7. Development of jails still goes on, enhancing their usefulness, performance, reliability, and security.

What is a Jail

BSD-like operating systems have had chroot(2) since the time of 4.2BSD. The chroot(8) utility can be used to change the root directory of a set of processes, creating a safe environment, separate from the rest of the system. Processes created in the chrooted environment can not access files or resources outside of it. For that reason, compromising a service running in a chrooted environment should not allow the attacker to compromise the entire system. The chroot(8) utility is good for easy tasks, which do not require a lot of flexibility or complex and advanced features. Since the inception of the chroot concept, however, many ways have been found to escape from a chrooted environment and, although they have been fixed in modern versions of the DragonFly kernel, it was clear that chroot(2) was not the ideal solution for securing services. A new subsystem had to be implemented.

This is one of the main reasons why jails were developed.

Jails improve on the concept of the traditional chroot(2) environment, in several ways. In a traditional chroot(2) environment, processes are only limited in the part of the file system they can access. The rest of the system resources (like the set of system users, the running processes, or the networking subsystem) are shared by the chrooted processes and the processes of the host system. Jails expand this model by virtualizing not only access to the file system, but also the set of users, the networking subsystem of the DragonFly kernel and a few other things. A more complete set of fine-grained controls available for tuning the access of a jailed environment is described in Section 12.5.


A jail is characterized by four elements:


Apart from these, jails can have their own set of users and their own root user. Naturally, the powers of the root user are limited within the jail environment and, from the point of view of the host system, the jail root user is not an omnipotent user. In addition, the root user of a jail is not allowed to perform critical operations to the system outside of the associated jail(8) environment. More information about capabilities and restrictions of the root user will be discussed in Section 12.5 below.


Creating and Controlling Jails

Some administrators divide jails into the following two types: complete jails, which resemble a real DragonFly system, and service jails, dedicated to one application or service, possibly running with privileges. This is only a conceptual division and the process of building a jail is not affected by it. The jail(8) manual page is quite clear about the procedure for building a jail:

# setenv D /here/is/the/jail
# mkdir -p $D                                     (1)
# cd /usr/src
# make installworld DESTDIR=$D                    (2)
# cd etc 
# make distribution DESTDIR=$D -DNO_MAKEDEV_RUN   (3)
# cd $D
# ln -sf dev/null kernel
# mount_devfs -o jail $D/dev
#

(1)

Selecting a location for a jail is the best starting point. This is where the jail will physically reside within the file system of the jail's host. A good choice can be /usr/jail/jailname, where jailname is the hostname identifying the jail. The /usr/ file system usually has enough space for the jail file system, which for complete jails is, essentially, a replication of every file present in a default installation of the DragonFly base system.


(2)

This command will populate the directory subtree chosen as the jail's physical location on the file system with the necessary binaries, libraries, manual pages and so on. Everything is done in the typical DragonFly style -- first everything is built/compiled, then installed to the destination path.


(3)

The distribution target for make installs every needed configuration file. In simple words, it installs every installable file of /usr/src/etc/ to the /etc directory of the jail environment: $D/etc/.


Once a jail is installed, it can be started by using the jail(8) utility. The jail(8) utility takes four mandatory arguments, which are described in Section 12.3.1. Other arguments may be specified too, e.g., to run the jailed process with the credentials of a specific user. The command argument depends on the type of the jail; for a virtual system, /etc/rc is a good choice, since it will replicate the startup sequence of a real DragonFly system. For a service jail, it depends on the service or application that will run within the jail.
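
As an illustration, a complete virtual system jail could be started by hand with the path, hostname, IP address and command used later in this section:

# jail /usr/jail/www www.example.org 192.168.0.10 /bin/sh /etc/rc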

Jails are often started at boot time and the DragonFly rc mechanism provides an easy way to do this.


A list of the jails which are enabled to start at boot time should be added to the rc.conf(5) file:

jail_enable="YES"   # Set to NO to disable starting of any jails
jail_list="www"     # Space separated list of names of jails

For each jail listed in jail_list, a group of rc.conf(5) settings, which describe the particular jail, should be added:

jail_www_rootdir="/usr/jail/www"     # jail's root directory
jail_www_hostname="www.example.org"  # jail's hostname
jail_www_ip="192.168.0.10"           # jail's IP address

The default startup of jails configured in rc.conf(5) will run the /etc/rc script of the jail, which assumes the jail is a complete virtual system. For service jails, the default startup command of the jail should be changed by setting the jail_jailname_exec_start option appropriately.
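
For example, a service jail that should only run an SSH daemon might use something like the following; the command shown is just an illustration:

jail_www_exec_start="/usr/sbin/sshd"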

Note: For a full list of available options, please see the rc.conf(5) manual page.

The /etc/rc.d/jail script can be used to start or stop a jail by hand, if an entry for it exists in rc.conf:

# /etc/rc.d/jail start www
# /etc/rc.d/jail stop www

A clean way to shut down a jail(8) is not available at the moment. This is because commands normally used to accomplish a clean system shutdown cannot be used inside a jail. The best way to shut down a jail is to run the following command from within the jail itself or using the jexec(8) utility from outside the jail:

# sh /etc/rc.shutdown

More information about this can be found in the jail(8) manual page.
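
From the host, the same script can be run inside the jail with jexec(8), using the jail ID reported by jls(8); the ID 1 below is just an example:

# jexec 1 sh /etc/rc.shutdown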


Fine Tuning and Administration

There are several options which can be set for any jail, and various ways of combining a host DragonFly system with jails, to produce higher level applications. This section presents some of the options available for tuning the behavior and security restrictions implemented by a jail installation.

System tools for jail tuning in DragonFly

Fine tuning of a jail's configuration is mostly done by setting sysctl(8) variables. A special subtree of sysctl exists as a basis for organizing all the relevant options: the security.jail.* hierarchy of DragonFly kernel options. Here is a list of the main jail-related sysctls, complete with their default value. Names should be self-explanatory, but for more information about them, please refer to the jail(8) and sysctl(8) manual pages.

These variables can be used by the system administrator of the host system to add or remove some of the limitations imposed by default on the root user. Note that there are some limitations which cannot be removed. The root user is not allowed to mount or unmount file systems from within a jail(8). The root inside a jail may not set firewall rules or do many other administrative tasks which require modifications of in-kernel data, such as setting the securelevel of the kernel.
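
As a sketch of how such a tunable is listed and changed from the host (the exact variable names can differ between DragonFly releases, so check your own system first):

# sysctl -a | grep jail                              # list the jail-related tunables and their current values
# sysctl security.jail.set_hostname_allowed=0        # example: forbid jailed root from changing the hostname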


The base system of DragonFly contains a basic set of tools for viewing information about the active jails, and attaching to a jail to run administrative commands. The jls(8) and jexec(8) commands are part of the base DragonFly system, and can be used to perform the following simple tasks:
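
For example, a short session from the host system might look like this:

# jls                          # list the active jails with their JID, IP address, hostname and path
# jexec 1 ps -ax               # run a command inside the jail with JID 1
# jexec 1 /bin/sh              # attach an interactive shell inside that jail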

This page is under construction. New software is being added regularly. Follow the links below to see how to make the corresponding software work on DragonFly.

Servers

SSH Server

FTP Server

Installing Flash Player in Firefox

The Flash plugin is distributed by Adobe in binary form only. Adobe does not provide a native DragonFly BSD plugin, but there is a Linux plugin which you can use under Linux emulation. The Linux emulation software is installed from pkgsrc and has been tested to work on the x86 platform. The steps are:

Configure Linux Support

# echo "linux_load=yes" >> /boot/loader.conf
# echo "linux_enable=yes" >> /etc/rc.conf
# echo "proc  /compat/linux/proc  linprocfs  rw  0  0" >> /etc/fstab

Reboot DragonFly (not necessary, but easier for new users) so that Linux emulation is loaded into the kernel and configured correctly.

Install multimedia/libflashsupport from pkgsrc.

# cd /usr/pkgsrc/multimedia/libflashsupport
# bmake package

This will pull in all packages needed for Linux emulation. Currently the Linux emulation package installs software from the SUSE 10 distribution. You can see the list of installed packages using pkg_info:

# pkg_info |grep suse
suse_openssl-10.0nb5 Linux compatibility package for OpenSSL
suse_gtk2-10.0nb4   Linux compatibility package for GTK+-2.x
suse_gtk-10.0nb2    Linux compatibility package for GTK+-1.x
suse_libjpeg-10.0nb2 Linux compatibility package for JPEG
suse_base-10.0nb5   Linux compatibility package
suse_slang-10.0nb3  Linux compatibility package for S-Lang
suse_locale-10.0nb2 Linux compatibility package with locale files
suse_fontconfig-10.0nb6 Linux compatibility package for fontconfig
suse_libtiff-10.0nb4 Linux compatibility package for TIFF
suse_openmotif-10.0nb2 Linux compatibility package for OpenMotif
suse_libpng-10.0nb4 Linux compatibility package for PNG
suse_libcups-10.0nb4 Linux compatibility package for CUPS
suse_gdk-pixbuf-10.0nb3 Linux compatibility package for gdk-pixbuf
suse_expat-10.0nb2  Linux compatibility package for expat
suse_vmware-10.0nb2 Linux compatibility package to help run VMware
suse_libxml2-10.0nb2 Linux compatibility package for libxml2
suse_compat-10.0nb3 Linux compatibility package with old shared libraries
suse_x11-10.0nb4    Linux compatibility package for X11
suse_glx-10.0nb4    Linux compatibility package for OpenGL/Mesa
suse_freetype2-10.0nb5 Linux compatibility package for freetype-2.x
suse_aspell-10.0nb2 Linux compatibility package for aspell
suse-10.0nb4        SUSE-based Linux binary emulation environment

Install www/nspluginwrapper

This will allow DragonFly to use the Linux binary Flash plugin.

# cd /usr/pkgsrc/www/nspluginwrapper
# bmake package

Install multimedia/ns-flash

This is the Linux Flash Plugin itself.

# cd /usr/pkgsrc/multimedia/ns-flash
# bmake NO_CHECKSUM=yes package

You can check whether the Flash plugin is installed correctly with:

# /usr/pkg/bin/nspluginwrapper --list
/usr/pkg/lib/netscape/plugins/npwrapper.libflashplayer.so
  Original plugin: /usr/pkg/lib/netscape/plugins/libflashplayer.so
  Wrapper version string: 1.2.2
/usr/pkg/lib/netscape/plugins/npwrapper.libflashplayer.so
  Original plugin: /usr/pkg/lib/netscape/plugins/libflashplayer.so
  Wrapper version string: 1.2.2

Don't worry if it is listed twice as above.

Now start Firefox and type "about:plugins" in the address bar; you should find the Flash plugin listed.

You can now watch streaming Flash content.

Chapter 18 Serial Communications

*Reorganized, and parts rewritten, by Ivailo Mladenov.*

Synopsis

UNIX® has always had support for serial communications. In fact, the very first UNIX machines relied on serial lines for user input and output. Things have changed a lot from the days when the average terminal consisted of a 10-character-per-second serial printer and a keyboard. This chapter will cover some of the ways in which DragonFly uses serial communications.

After reading this chapter, you will know:

Before reading this chapter, you should:


18.1 Introduction

18.1.1 Terminology

bps:: Bits per Second -- the rate at which data is transmitted;

DTE:: Data Terminal Equipment -- for example, your computer;

DCE:: Data Communications Equipment -- your modem;

RS-232:: EIA standard for hardware serial communications.

When talking about communications data rates, this section does not use the term baud. Baud refers to the number of electrical state transitions that may be made in a period of time, while bps (bits per second) is the correct term to use (at least it does not seem to bother the curmudgeons quite as much).

18.1.2 Cables and Ports

To connect a modem or terminal to your DragonFly system, you will need a serial port on your computer and the proper cable to connect to your serial device. If you are already familiar with your hardware and the cable it requires, you can safely skip this section.

18.1.2.1 Cables

There are several different kinds of serial cables. The two most common types for our purposes are null-modem cables and standard (straight) RS-232 cables. The documentation for your hardware should describe the type of cable required.

18.1.2.1.1 Null-modem Cables

A null-modem cable passes some signals, such as signal ground, straight through, but switches other signals. For example, the send data pin on one end goes to the receive data pin on the other end.

If you like making your own cables, you can construct a null-modem cable for use with terminals. This table shows the RS-232C signal names and the pin numbers on a DB-25 connector.

Signal Pin # Pin # Signal
SG 7 connects to 7 SG
TxD 2 connects to 3 RxD
RxD 3 connects to 2 TxD
RTS 4 connects to 5 CTS
CTS 5 connects to 4 RTS
DTR 20 connects to 6 DSR
DCD 8 connects to 6 DSR
DSR 6 connects to 20 DTR

Note: Connect Data Set Ready (DSR) and Data Carrier Detect (DCD) internally in the connector hood, and then to Data Terminal Ready (DTR) in the remote hood.

18.1.2.1.2 Standard RS-232C Cables

A standard serial cable passes all the RS-232C signals straight-through. That is, the send data pin on one end of the cable goes to the send data pin on the other end. This is the type of cable to use to connect a modem to your DragonFly system, and is also appropriate for some terminals.

18.1.2.2 Ports

Serial ports are the devices through which data is transferred between the DragonFly host computer and the terminal. This section describes the kinds of ports that exist and how they are addressed in DragonFly.

18.1.2.2.1 Kinds of Ports

Several kinds of serial ports exist. Before you purchase or construct a cable, you need to make sure it will fit the ports on your terminal and on the DragonFly system.

Most terminals will have DB25 ports. Personal computers, including PCs running DragonFly, will have DB25 or DB9 ports. If you have a multiport serial card for your PC, you may have RJ-12 or RJ-45 ports.

See the documentation that accompanied the hardware for specifications on the kind of port in use. A visual inspection of the port often works too.

18.1.2.2.2 Port Names

In DragonFly, you access each serial port through an entry in the /dev directory. There are two different kinds of entries:

If you have connected a terminal to the first serial port (COM1 in MS-DOS®), then you will use /dev/ttyd0 to refer to the terminal. If the terminal is on the second serial port (also known as COM2), use /dev/ttyd1, and so forth.

18.1.3 Kernel Configuration

DragonFly supports four serial ports by default. In the MS-DOS world, these are known as COM1, COM2, COM3, and COM4. DragonFly currently supports dumb multiport serial interface cards, such as the BocaBoard 1008 and 2016, as well as more intelligent multi-port cards such as those made by Digiboard and Stallion Technologies. However, the default kernel only looks for the standard COM ports.

To see if your kernel recognizes any of your serial ports, watch for messages while the kernel is booting, or use the /sbin/dmesg command to replay the kernel's boot messages. In particular, look for messages that start with the characters sio.

Tip: To view just the messages that have the word sio, use the command:

# /sbin/dmesg | grep 'sio'

For example, on a system with four serial ports, these are the serial-port specific kernel boot messages:

sio0 at 0x3f8-0x3ff irq 4 on isa

sio0: type 16550A

sio1 at 0x2f8-0x2ff irq 3 on isa

sio1: type 16550A

sio2 at 0x3e8-0x3ef irq 5 on isa

sio2: type 16550A

sio3 at 0x2e8-0x2ef irq 9 on isa

sio3: type 16550A

If your kernel does not recognize all of your serial ports, you will probably need to configure a custom DragonFly kernel for your system. For detailed information on configuring your kernel, please see [kernelconfig.html Chapter 12].

The relevant device lines for your kernel configuration file would look like this:

device      sio0    at isa? port IO_COM1 irq 4

device      sio1    at isa? port IO_COM2 irq 3

device      sio2    at isa? port IO_COM3 irq 5

device      sio3    at isa? port IO_COM4 irq 9

Note: port IO_COM1 is a substitution for port 0x3f8, IO_COM2 is 0x2f8, IO_COM3 is 0x3e8, and IO_COM4 is 0x2e8, which are fairly common port addresses for their respective serial ports; interrupts 4, 3, 5, and 9 are fairly common interrupt request lines. Also note that regular serial ports cannot share interrupts on ISA-bus PCs (multiport boards have on-board electronics that allow all the 16550A's on the board to share one or two interrupt request lines).

18.1.4 Device Special Files

Most devices in the kernel are accessed through device special files, which are located in the /dev directory. The sio devices are accessed through the /dev/ttydN (dial-in) and /dev/cuaaN (call-out) devices. DragonFly also provides initialization devices (/dev/ttyidN and /dev/cuaiaN) and locking devices (/dev/ttyldN and /dev/cualaN). The initialization devices are used to initialize communications port parameters each time a port is opened, such as crtscts for modems which use RTS/CTS signaling for flow control. The locking devices are used to lock flags on ports to prevent users or programs changing certain parameters; see the manual pages termios(4), sio(4), and stty(1) for information on the terminal settings, locking and initializing devices, and setting terminal options, respectively.

18.1.5 Serial Port Configuration

The ttydN (or cuaaN) device is the regular device you will want to open for your applications. When a process opens the device, it will have a default set of terminal I/O settings. You can see these settings with the command

# stty -a -f /dev/ttyd1

When you change the settings to this device, the settings are in effect until the device is closed. When it is reopened, it goes back to the default set. To make changes to the default set, you can open and adjust the settings of the initial state device. For example, to turn on CLOCAL mode, 8 bit communication, and XON/XOFF flow control by default for ttyd5, type:

# stty -f /dev/ttyid5 clocal cs8 ixon ixoff

System-wide initialization of the serial devices is controlled in /etc/rc.serial. This file affects the default settings of serial devices.

To prevent certain settings from being changed by an application, make adjustments to the lock state device. For example, to lock the speed of ttyd5 to 57600 bps, type:

# stty -f /dev/ttyld5 57600

Now, an application that opens ttyd5 and tries to change the speed of the port will be stuck with 57600 bps.

Naturally, you should make the initial state and lock state devices writable only by the root account.


18.2 Terminals

Terminals provide a convenient and low-cost way to access your DragonFly system when you are not at the computer's console or on a connected network. This section describes how to use terminals with DragonFly.

18.2.1 Uses and Types of Terminals

The original UNIX® systems did not have consoles. Instead, people logged in and ran programs through terminals that were connected to the computer's serial ports. It is quite similar to using a modem and terminal software to dial into a remote system to do text-only work.

Today's PCs have consoles capable of high quality graphics, but the ability to establish a login session on a serial port still exists in nearly every UNIX style operating system today; DragonFly is no exception. By using a terminal attached to an unused serial port, you can log in and run any text program that you would normally run on the console or in an xterm window in the X Window System.

For the business user, you can attach many terminals to a DragonFly system and place them on your employees' desktops. For a home user, a spare computer such as an older IBM PC or a Macintosh® can be a terminal wired into a more powerful computer running DragonFly. You can turn what might otherwise be a single-user computer into a powerful multiple user system.

For DragonFly, there are three kinds of terminals:

18.2.1.1 Dumb Terminals

Dumb terminals are specialized pieces of hardware that let you connect to computers over serial lines. They are called dumb because they have only enough computational power to display, send, and receive text. You cannot run any programs on them. It is the computer to which you connect them that has all the power to run text editors, compilers, email, games, and so forth.

There are hundreds of kinds of dumb terminals made by many manufacturers, including Digital Equipment Corporation's VT-100 and Wyse's WY-75. Just about any kind will work with DragonFly. Some high-end terminals can even display graphics, but only certain software packages can take advantage of these advanced features.

Dumb terminals are popular in work environments where workers do not need access to graphical applications such as those provided by the X Window System.

18.2.1.2 PCs Acting as Terminals

If a dumb terminal has just enough ability to display, send, and receive text, then certainly any spare personal computer can be a dumb terminal. All you need is the proper cable and some terminal emulation software to run on the computer.

Such a configuration is popular in homes. For example, if your spouse is busy working on your DragonFly system's console, you can do some text-only work at the same time from a less powerful personal computer hooked up as a terminal to the DragonFly system.

18.2.1.3 X Terminals

X terminals are the most sophisticated kind of terminal available. Instead of connecting to a serial port, they usually connect to a network like Ethernet. Instead of being relegated to text-only applications, they can display any X application.

We introduce X terminals just for the sake of completeness. However, this chapter does not cover setup, configuration, or use of X terminals.

18.2.2 Configuration

This section describes what you need to configure on your DragonFly system to enable a login session on a terminal. It assumes you have already configured your kernel to support the serial port to which the terminal is connected--and that you have connected it.

Recall from [boot.html Chapter 10] that the init process is responsible for all process control and initialization at system startup. One of the tasks performed by init is to read the /etc/ttys file and start a getty process on the available terminals. The getty process is responsible for reading a login name and starting the login program.

Thus, to configure terminals for your DragonFly system the following steps should be taken as root:

  1. Add a line to /etc/ttys for the entry in the /dev directory for the serial port if it is not already there.

  2. Specify that /usr/libexec/getty be run on the port, and specify the appropriate ***getty*** type from the /etc/gettytab file.

  3. Specify the default terminal type.

  4. Set the port to on.

  5. Specify whether the port should be secure.

  6. Force init to reread the /etc/ttys file.

As an optional step, you may wish to create a custom ***getty*** type for use in step 2 by making an entry in /etc/gettytab. This chapter does not explain how to do so; you are encouraged to see the gettytab(5) and the getty(8) manual pages for more information.

18.2.2.1 Adding an Entry to /etc/ttys

The /etc/ttys file lists all of the ports on your DragonFly system where you want to allow logins. For example, the first virtual console ttyv0 has an entry in this file. You can log in on the console using this entry. This file also contains entries for the other virtual consoles, serial ports, and pseudo-ttys. For a hardwired terminal, just list the serial port's /dev entry without the /dev part (for example, /dev/ttyv0 would be listed as ttyv0).

A default DragonFly install includes an /etc/ttys file with support for the first four serial ports: ttyd0 through ttyd3. If you are attaching a terminal to one of those ports, you do not need to add another entry.

Example 18-1. Adding Terminal Entries to /etc/ttys

Suppose we would like to connect two terminals to the system: a Wyse-50 and an old 286 IBM PC running Procomm terminal software emulating a VT-100 terminal. We connect the Wyse to the second serial port and the 286 to the sixth serial port (a port on a multiport serial card). The corresponding entries in the /etc/ttys file would look like this:

ttyd1   "/usr/libexec/getty std.38400"   wy50   on  insecure

ttyd5   "/usr/libexec/getty std.19200"   vt100  on  insecure

  1. The first field normally specifies the name of the terminal special file as it is found in /dev.

  2. The second field is the command to execute for this line, which is usually getty(8). getty initializes and opens the line, sets the speed, prompts for a user name and then executes the login(1) program. The getty program accepts one (optional) parameter on its command line, the ***getty*** type. A ***getty*** type configures characteristics on the terminal line, like bps rate and parity. The getty program reads these characteristics from the file /etc/gettytab. The file /etc/gettytab contains lots of entries for terminal lines both old and new. In almost all cases, the entries that start with the text std will work for hardwired terminals; these entries ignore parity. There is a std entry for each bps rate from 110 to 115200. Of course, you can add your own entries to this file. The gettytab(5) manual page provides more information. When setting the ***getty*** type in the /etc/ttys file, make sure that the communications settings on the terminal match. For our example, the Wyse-50 uses no parity and connects at 38400 bps; the 286 PC uses no parity and connects at 19200 bps.

  3. The third field is the type of terminal usually connected to that tty line. For dial-up ports, unknown or dialup is typically used in this field since users may dial up with practically any type of terminal or software. For hardwired terminals, the terminal type does not change, so you can put a real terminal type from the termcap(5) database file in this field. For our example, the Wyse-50 uses the real terminal type while the 286 PC running Procomm will be set to emulate a VT-100.

  4. The fourth field specifies if the port should be enabled. Putting on here will have the init process start the program in the second field, getty. If you put off in this field, there will be no getty, and hence no logins on the port.

  5. The final field is used to specify whether the port is secure. Marking a port as secure means that you trust it enough to allow the root account (or any account with a user ID of 0) to login from that port. Insecure ports do not allow root logins. On an insecure port, users must login from unprivileged accounts and then use su(1) or a similar mechanism to gain superuser privileges. It is highly recommended that you use insecure even for terminals that are behind locked doors; it is quite easy to login and use su if you need superuser privileges.

18.2.2.2 Force init to Reread /etc/ttys

After making the necessary changes to the /etc/ttys file you should send a SIGHUP (hangup) signal to the init process to force it to re-read its configuration file. For example:

# kill -HUP 1

Note: init is always the first process run on a system, therefore it will always have PID 1.

If everything is set up correctly, all cables are in place, and the terminals are powered up, then a getty process should be running on each terminal and you should see login prompts on your terminals at this point.

18.2.3 Troubleshooting Your Connection

Even with the most meticulous attention to detail, something could still go wrong while setting up a terminal. Here is a list of symptoms and some suggested fixes.

18.2.3.1 No Login Prompt Appears

Make sure the terminal is plugged in and powered up. If it is a personal computer acting as a terminal, make sure it is running terminal emulation software on the correct serial port.

Make sure the cable is connected firmly to both the terminal and the DragonFly computer. Make sure it is the right kind of cable.

Make sure the terminal and DragonFly agree on the bps rate and parity settings. If you have a video display terminal, make sure the contrast and brightness controls are turned up. If it is a printing terminal, make sure paper and ink are in good supply.

Make sure that a getty process is running and serving the terminal. For example, to get a list of running getty processes with ps, type:

# ps -axww|grep getty

You should see an entry for the terminal. For example, the following display shows that a getty is running on the second serial port ttyd1 and is using the std.38400 entry in /etc/gettytab:

22189  d1  Is+    0:00.03 /usr/libexec/getty std.38400 ttyd1

If no getty process is running, make sure you have enabled the port in /etc/ttys. Also remember to run kill -HUP 1 after modifying the ttys file.

If the getty process is running but the terminal still does not display a login prompt, or if it displays a prompt but will not allow you to type, your terminal or cable may not support hardware handshaking. Try changing the entry in /etc/ttys from std.38400 to 3wire.38400 (remember to run kill -HUP 1 after modifying /etc/ttys). The 3wire entry is similar to std, but ignores hardware handshaking. You may need to reduce the baud rate or enable software flow control when using 3wire to prevent buffer overflows.
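
For the Wyse-50 example above, the modified /etc/ttys entry would then read as follows (assuming the terminal is still on ttyd1):

ttyd1   "/usr/libexec/getty 3wire.38400"   wy50  on  insecure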

18.2.3.2 If Garbage Appears Instead of a Login Prompt

Make sure the terminal and DragonFly agree on the bps rate and parity settings. Check the getty processes to make sure the correct ***getty*** type is in use. If not, edit /etc/ttys and run kill -HUP 1.

18.2.3.3 Characters Appear Doubled, the Password Appears When Typed

Switch the terminal (or the terminal emulation software) from half duplex or local echo to full duplex.


18.3 Dial-in Service

Configuring your DragonFly system for dial-in service is very similar to connecting terminals except that you are dealing with modems instead of terminals.

18.3.1 External vs. Internal Modems

External modems seem to be more convenient for dial-up, because external modems often can be semi-permanently configured via parameters stored in non-volatile RAM and they usually provide lighted indicators that display the state of important RS-232 signals. Blinking lights impress visitors, but lights are also very useful to see whether a modem is operating properly.

Internal modems usually lack non-volatile RAM, so their configuration may be limited only to setting DIP switches. If your internal modem has any signal indicator lights, it is probably difficult to view the lights when the system's cover is in place.

18.3.1.1 Modems and Cables

If you are using an external modem, then you will of course need the proper cable. A standard RS-232C serial cable should suffice as long as all of the normal signals are wired:

DragonFly needs the RTS and CTS signals for flow-control at speeds above 2400 bps, the CD signal to detect when a call has been answered or the line has been hung up, and the DTR signal to reset the modem after a session is complete. Some cables are wired without all of the needed signals, so if you have problems, such as a login session not going away when the line hangs up, you may have a problem with your cable.

Like other UNIX® like operating systems, DragonFly uses the hardware signals to find out when a call has been answered or a line has been hung up and to hangup and reset the modem after a call. DragonFly avoids sending commands to the modem or watching for status reports from the modem. If you are familiar with connecting modems to PC-based bulletin board systems, this may seem awkward.

18.3.2 Serial Interface Considerations

DragonFly supports NS8250-, NS16450-, NS16550-, and NS16550A-based EIA RS-232C (CCITT V.24) communications interfaces. The 8250 and 16450 devices have single-character buffers. The 16550 device provides a 16-character buffer, which allows for better system performance. (Bugs in plain 16550's prevent the use of the 16-character buffer, so use 16550A's if possible). Because single-character-buffer devices require more work by the operating system than the 16-character-buffer devices, 16550A-based serial interface cards are much preferred. If the system has many active serial ports or will have a heavy load, 16550A-based cards are better for low-error-rate communications.

18.3.3 Quick Overview

As with terminals, init spawns a getty process for each configured serial port for dial-in connections. For example, if a modem is attached to /dev/ttyd0, the command ps ax might show this:

 4850 ??  I      0:00.09 /usr/libexec/getty V19200 ttyd0

When a user dials the modem's line and the modems connect, the CD (Carrier Detect) line is reported by the modem. The kernel notices that carrier has been detected and completes getty's open of the port. getty sends a login: prompt at the specified initial line speed. getty watches to see if legitimate characters are received, and, in a typical configuration, if it finds junk (probably due to the modem's connection speed being different than getty's speed), getty tries adjusting the line speeds until it receives reasonable characters.

After the user enters his/her login name, getty executes /usr/bin/login, which completes the login by asking for the user's password and then starting the user's shell.

18.3.4 Configuration Files

There are three system configuration files in the /etc directory that you will probably need to edit to allow dial-up access to your DragonFly system. The first, /etc/gettytab, contains configuration information for the /usr/libexec/getty daemon. Second, /etc/ttys holds information that tells /sbin/init what tty devices should have getty processes running on them. Lastly, you can place port initialization commands in the /etc/rc.serial script.

There are two schools of thought regarding dial-up modems on UNIX. One group likes to configure their modems and systems so that no matter at what speed a remote user dials in, the local computer-to-modem RS-232 interface runs at a locked speed. The benefit of this configuration is that the remote user always sees a system login prompt immediately. The downside is that the system does not know what a user's true data rate is, so full-screen programs like Emacs will not adjust their screen-painting methods to make their response better for slower connections.

The other school configures their modems' RS-232 interface to vary its speed based on the remote user's connection speed. For example, V.32bis (14.4 Kbps) connections to the modem might make the modem run its RS-232 interface at 19.2 Kbps, while 2400 bps connections make the modem's RS-232 interface run at 2400 bps. Because getty does not understand any particular modem's connection speed reporting, getty gives a login: message at an initial speed and watches the characters that come back in response. If the user sees junk, it is assumed that they know they should press the Enter key until they see a recognizable prompt. If the data rates do not match, getty sees anything the user types as junk, tries going to the next speed and gives the login: prompt again. This procedure can continue ad nauseam, but normally only takes a keystroke or two before the user sees a good prompt. Obviously, this login sequence does not look as clean as the former locked-speed method, but a user on a low-speed connection should receive better interactive response from full-screen programs.

This section will try to give balanced configuration information, but is biased towards having the modem's data rate follow the connection rate.

18.3.4.1 /etc/gettytab

/etc/gettytab is a termcap(5)-style file of configuration information for getty(8). Please see the gettytab(5) manual page for complete information on the format of the file and the list of capabilities.

18.3.4.1.1 Locked-speed Config

If you are locking your modem's data communications rate at a particular speed, you probably will not need to make any changes to /etc/gettytab.

18.3.4.1.2 Matching-speed Config

You will need to set up an entry in /etc/gettytab to give getty information about the speeds you wish to use for your modem. If you have a 2400 bps modem, you can probably use the existing D2400 entry.

#

# Fast dialup terminals, 2400/1200/300 rotary (can start either way)

#

D2400|d2400|Fast-Dial-2400:\

        :nx=D1200:tc=2400-baud:

3|D1200|Fast-Dial-1200:\

        :nx=D300:tc=1200-baud:

5|D300|Fast-Dial-300:\

        :nx=D2400:tc=300-baud:

If you have a higher speed modem, you will probably need to add an entry in /etc/gettytab; here is an entry you could use for a 14.4 Kbps modem with a top interface speed of 19.2 Kbps:

#

# Additions for a V.32bis Modem

#

um|V300|High Speed Modem at 300,8-bit:\

        :nx=V19200:tc=std.300:

un|V1200|High Speed Modem at 1200,8-bit:\

        :nx=V300:tc=std.1200:

uo|V2400|High Speed Modem at 2400,8-bit:\

        :nx=V1200:tc=std.2400:

up|V9600|High Speed Modem at 9600,8-bit:\

        :nx=V2400:tc=std.9600:

uq|V19200|High Speed Modem at 19200,8-bit:\

        :nx=V9600:tc=std.19200:

This will result in 8-bit, no parity connections.

The example above starts the communications rate at 19.2 Kbps (for a V.32bis connection), then cycles through 9600 bps (for V.32), 2400 bps, 1200 bps, 300 bps, and back to 19.2 Kbps. Communications rate cycling is implemented with the nx= (next table) capability. Each of the lines uses a tc= (table continuation) entry to pick up the rest of the standard settings for a particular data rate.

If you have a 28.8 Kbps modem and/or you want to take advantage of compression on a 14.4 Kbps modem, you need to use a higher communications rate than 19.2 Kbps. Here is an example of a gettytab entry starting at 57.6 Kbps:

#

# Additions for a V.32bis or V.34 Modem

# Starting at 57.6 Kbps

#

vm|VH300|Very High Speed Modem at 300,8-bit:\

        :nx=VH57600:tc=std.300:

vn|VH1200|Very High Speed Modem at 1200,8-bit:\

        :nx=VH300:tc=std.1200:

vo|VH2400|Very High Speed Modem at 2400,8-bit:\

        :nx=VH1200:tc=std.2400:

vp|VH9600|Very High Speed Modem at 9600,8-bit:\

        :nx=VH2400:tc=std.9600:

vq|VH57600|Very High Speed Modem at 57600,8-bit:\

        :nx=VH9600:tc=std.57600:

If you have a slow CPU or a heavily loaded system and do not have 16550A-based serial ports, you may receive sio silo errors at 57.6 Kbps.

18.3.4.2 /etc/ttys

Configuration of the /etc/ttys file was covered in Example 18-1. Configuration for modems is similar but we must pass a different argument to getty and specify a different terminal type. The general format for both locked-speed and matching-speed configurations is:

ttyd0   "/usr/libexec/getty ***xxx***"   dialup on

The first item in the above line is the device special file for this entry -- ttyd0 means /dev/ttyd0 is the file that this getty will be watching. The second item, "/usr/libexec/getty ***xxx***" (***xxx*** will be replaced by the initial gettytab capability), is the process init will run on the device. The third item, dialup, is the default terminal type. The fourth parameter, on, indicates to init that the line is operational. There can be a fifth parameter, secure, but it should only be used for terminals which are physically secure (such as the system console).

The default terminal type (dialup in the example above) may depend on local preferences. dialup is the traditional default terminal type on dial-up lines so that users may customize their login scripts to notice when the terminal is dialup and automatically adjust their terminal type. However, the author finds it easier at his site to specify vt102 as the default terminal type, since the users just use VT102 emulation on their remote systems.
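For example, a line using vt102 as the default terminal type might look like this (a sketch only; the speed and terminal type are illustrative):

ttyd0   "/usr/libexec/getty std.19200"   vt102 on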

After you have made changes to /etc/ttys, you may send the init process a HUP signal to re-read the file. You can use the command

# kill -HUP 1

to send the signal. If this is your first time setting up the system, you may want to wait until your modem(s) are properly configured and connected before signaling init.

18.3.4.2.1 Locked-speed Config

For a locked-speed configuration, your ttys entry needs to have a fixed-speed entry provided to getty. For a modem whose port speed is locked at 19.2 Kbps, the ttys entry might look like this:

ttyd0   "/usr/libexec/getty std.19200"   dialup on

If your modem is locked at a different data rate, substitute the appropriate std.speed entry for std.19200. Make sure that you use a valid entry listed in /etc/gettytab.
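For instance, for a port locked at 57.6 Kbps (assuming your /etc/gettytab contains the stock std.57600 entry):

ttyd0   "/usr/libexec/getty std.57600"   dialup on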

18.3.4.2.2 Matching-speed Config

In a matching-speed configuration, your ttys entry needs to reference the appropriate beginning auto-baud (sic) entry in /etc/gettytab. For example, if you added the above suggested entry for a matching-speed modem that starts at 19.2 Kbps (the gettytab entry containing the V19200 starting point), your ttys entry might look like this:

ttyd0   "/usr/libexec/getty V19200"   dialup on

18.3.4.3 /etc/rc.serial

High-speed modems, like V.32, V.32bis, and V.34 modems, need to use hardware (RTS/CTS) flow control. You can add stty commands to /etc/rc.serial to set the hardware flow control flag in the DragonFly kernel for the modem ports.

For example to set the termios flag crtscts on serial port #1's (COM2) dial-in and dial-out initialization devices, the following lines could be added to /etc/rc.serial:

# Serial port initial configuration

stty -f /dev/ttyid1 crtscts

stty -f /dev/cuaia1 crtscts

18.3.5 Modem Settings

If you have a modem whose parameters may be permanently set in non-volatile RAM, you will need to use a terminal program (such as Telix under MS-DOS® or tip under DragonFly) to set the parameters. Connect to the modem using the same communications speed as the initial speed getty will use, and configure the modem's non-volatile RAM to match the requirements of that configuration.

Please read the documentation for your modem to find out what commands and/or DIP switch settings you need to give it.

For example, to set the above parameters on a U.S. Robotics® Sportster® 14,400 external modem, one could give these commands to the modem:

ATZ

AT&amp;C1&amp;D2&amp;H1&amp;I0&amp;R2&amp;W

You might also want to take this opportunity to adjust other settings in the modem, such as whether it will use V.42bis and/or MNP5 compression.

The U.S. Robotics Sportster 14,400 external modem also has some DIP switches that need to be set; for other modems, perhaps you can use these settings as an example. Consult your modem's documentation for the correct switch positions.

Result codes should be disabled/suppressed for dial-up modems to avoid problems that can occur if getty mistakenly gives a login: prompt to a modem that is in command mode and the modem echoes the command or returns a result code. This sequence can result in an extended, silly conversation between getty and the modem.

18.3.5.1 Locked-speed Config

For a locked-speed configuration, you will need to configure the modem to maintain a constant modem-to-computer data rate independent of the communications rate. On a U.S. Robotics Sportster 14,400 external modem, these commands will lock the modem-to-computer data rate at the speed used to issue the commands:

ATZ

AT&amp;B1&amp;W

18.3.5.2 Matching-speed Config

For a variable-speed configuration, you will need to configure your modem to adjust its serial port data rate to match the incoming call rate. On a U.S. Robotics Sportster 14,400 external modem, these commands will lock the modem's error-corrected data rate to the speed used to issue the commands, but allow the serial port rate to vary for non-error-corrected connections:

ATZ

AT&amp;B2&amp;W

18.3.5.3 Checking the Modem's Configuration

Most high-speed modems provide commands to view the modem's current operating parameters in a somewhat human-readable fashion. On the U.S. Robotics Sportster 14,400 external modems, the command ATI5 displays the settings that are stored in the non-volatile RAM. To see the true operating parameters of the modem (as influenced by the modem's DIP switch settings), use the commands ATZ and then ATI4.

If you have a different brand of modem, check your modem's manual to see how to double-check your modem's configuration parameters.

18.3.6 Troubleshooting

Here are a few steps you can follow to check out the dial-up modem on your system.

18.3.6.1 Checking Out the DragonFly System

Hook up your modem to your DragonFly system, boot the system, and, if your modem has status indication lights, watch to see whether the modem's DTR indicator lights when the login: prompt appears on the system's console -- if it lights up, that should mean that DragonFly has started a getty process on the appropriate communications port and is waiting for the modem to accept a call.

If the DTR indicator does not light, log in to the DragonFly system through the console and issue a ps ax to see if DragonFly is trying to run a getty process on the correct port. You should see lines like these among the processes displayed:

  114 ??  I      0:00.10 /usr/libexec/getty V19200 ttyd0

  115 ??  I      0:00.10 /usr/libexec/getty V19200 ttyd1

If you see something different, like this:

  114 d0  I      0:00.10 /usr/libexec/getty V19200 ttyd0

and the modem has not accepted a call yet, this means that getty has completed its open on the communications port. This could indicate a problem with the cabling or a mis-configured modem, because getty should not be able to open the communications port until CD (carrier detect) has been asserted by the modem.

If you do not see any getty processes waiting to open the desired ttydN port, double-check your entries in /etc/ttys to see if there are any mistakes there. Also, check the log file /var/log/messages to see if there are any log messages from init or getty regarding any problems. If there are any messages, triple-check the configuration files /etc/ttys and /etc/gettytab, as well as the appropriate device special files /dev/ttydN, for any mistakes, missing entries, or missing device special files.
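One quick way to scan the log for such messages (a sketch; adjust the pattern to taste):

# grep -e getty -e init /var/log/messages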

18.3.6.2 Try Dialing In

Try dialing into the system; be sure to use 8 bits, no parity, and 1 stop bit on the remote system. If you do not get a prompt right away, or get garbage, try pressing Enter about once per second. If you still do not see a login: prompt after a while, try sending a BREAK. If you are using a high-speed modem to do the dialing, try dialing again after locking the dialing modem's interface speed (via AT&B1 on a U.S. Robotics Sportster modem, for example).

If you dial but the modem on the DragonFly system will not answer, make sure that the modem is configured to answer the phone when DTR is asserted. If the modem seems to be configured correctly, verify that the DTR line is asserted by checking the modem's indicator lights (if it has any).

If you have gone over everything several times and it still does not work, take a break and come back to it later. If it still does not work, perhaps you can send an electronic mail message to the DragonFly users mailing list describing your modem and your problem, and the good folks on the list will try to help.


18.4 Dial-out Service

The following are tips for getting your host to be able to connect over the modem to another computer. This is appropriate for establishing a terminal session with a remote host.

This is useful to log onto a BBS.

This kind of connection can be extremely helpful to get a file on the Internet if you have problems with PPP. If you need to FTP something and PPP is broken, use the terminal session to FTP it. Then use zmodem to transfer it to your machine.

18.4.1 My Stock Hayes Modem Is Not Supported, What Can I Do?

Actually, the manual page for tip is out of date. There is a generic Hayes dialer already built in. Just use at=hayes in your /etc/remote file.

The Hayes driver is not smart enough to recognize some of the advanced features of newer modems--messages like BUSY, NO DIALTONE, or CONNECT 115200 will just confuse it. You should turn those messages off when you use tip (using ATX0&W).

Also, the dial timeout for tip is 60 seconds. Your modem should use something less, or else tip will think there is a communication problem. Try ATS7=45&W.

Note: As shipped, tip does not yet support Hayes modems fully. The solution is to edit the file tipconf.h in the directory /usr/src/usr.bin/tip/tip. Obviously you need the source distribution to do this.

Edit the line #define HAYES 0 to #define HAYES 1. Then make and make install. Everything works nicely after that.
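A sketch of those rebuild steps, assuming the source distribution is installed in the directory mentioned above:

# cd /usr/src/usr.bin/tip/tip

# make

# make install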

18.4.2 How Am I Expected to Enter These AT Commands?

Make what is called a direct entry in your /etc/remote file. For example, if your modem is hooked up to the first serial port, /dev/cuaa0, then put in the following line:

cuaa0:dv=/dev/cuaa0:br#19200:pa=none:

Use the highest bps rate your modem supports in the br capability. Then, type tip cuaa0 and you will be connected to your modem.

Or use cu as root with the following command:

# cu -l line -s speed

line is the serial port (e.g. /dev/cuaa0) and speed is the speed (e.g. 57600). When you are done entering the AT commands, type ~. to exit.
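Using the example values above, the command would be:

# cu -l /dev/cuaa0 -s 57600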

18.4.3 The @ Sign for the pn Capability Does Not Work!

The @ sign in the phone number capability tells tip to look in /etc/phones for a phone number. But the @ sign is also a special character in capability files like /etc/remote. Escape it with a backslash:

pn=\@

18.4.4 How Can I Dial a Phone Number on the Command Line?

Put what is called a generic entry in your /etc/remote file. For example:

tip115200|Dial any phone number at 115200 bps:\

        :dv=/dev/cuaa0:br#115200:at=hayes:pa=none:du:

tip57600|Dial any phone number at 57600 bps:\

        :dv=/dev/cuaa0:br#57600:at=hayes:pa=none:du:

Then you can do things like:

# tip -115200 5551234

If you prefer cu over tip, use a generic cu entry:

cu115200|Use cu to dial any number at 115200bps:\

        :dv=/dev/cuaa1:br#115200:at=hayes:pa=none:du:

and type:

# cu 5551234 -s 115200

18.4.5 Do I Have to Type in the bps Rate Every Time I Do That?

Put in an entry for tip1200 or cu1200, but go ahead and use whatever bps rate is appropriate with the br capability. tip thinks a good default is 1200 bps which is why it looks for a tip1200 entry. You do not have to use 1200 bps, though.

18.4.6 I Access a Number of Hosts Through a Terminal Server

Rather than waiting until you are connected and typing CONNECT <host> each time, use tip's cm capability. For example, these entries in /etc/remote:

pain|pain.deep13.com|Forrester's machine:\

        :cm=CONNECT pain\n:tc=deep13:

muffin|muffin.deep13.com|Frank's machine:\

        :cm=CONNECT muffin\n:tc=deep13:

deep13:Gizmonics Institute terminal server:\

        :dv=/dev/cuaa2:br#38400:at=hayes:du:pa=none:pn=5551234:

will let you type tip pain or tip muffin to connect to the hosts pain or muffin, and tip deep13 to get to the terminal server.

18.4.7 Can Tip Try More Than One Line for Each Site?

This is often a problem where a university has several modem lines and several thousand students trying to use them.

Make an entry for your university in /etc/remote and use @ for the pn capability:

big-university:\

        :pn=\@:tc=dialout:

dialout:\

        :dv=/dev/cuaa3:br#9600:at=courier:du:pa=none:

Then, list the phone numbers for the university in /etc/phones:

big-university 5551111

big-university 5551112

big-university 5551113

big-university 5551114

tip will try each one in the listed order, then give up. If you want to keep retrying, run tip in a while loop.
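A minimal retry sketch, assuming a Bourne-style shell and that tip exits with a non-zero status when it cannot connect:

# while ! tip big-university; do sleep 30; done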

18.4.8 Why Do I Have to Hit Ctrl + P Twice to Send Ctrl + P Once?

Ctrl + P is the default force character, used to tell tip that the next character is literal data. You can set the force character to any other character with the ~s escape, which means set a variable.

Type ~sforce=single-char followed by a newline. single-char is any single character. If you leave out single-char, then the force character is the nul character, which you can get by typing Ctrl + 2 or Ctrl + Space . A pretty good value for single-char is Shift + Ctrl + 6 , which is only used on some terminal servers.

You can have the force character be whatever you want by specifying the following in your $HOME/.tiprc file:

force=&lt;single-char&gt;

18.4.9 Suddenly Everything I Type Is in Upper Case??

You must have pressed Ctrl + A , tip's raise character, specially designed for people with broken caps-lock keys. Use ~s as above and set the variable raisechar to something reasonable. In fact, you can set it to the same as the force character, if you never expect to use either of these features.

Here is a sample .tiprc file perfect for Emacs users who need to type Ctrl + 2 and Ctrl + A a lot:

force=^^

raisechar=^^

The ^^ is Shift + Ctrl + 6 .

18.4.10 How Can I Do File Transfers with tip?

If you are talking to another UNIX® system, you can send and receive files with ~p (put) and ~t (take). These commands run cat and echo on the remote system to accept and send files. The syntax is:

~p local-file [remote-file]

~t remote-file [local-file]

There is no error checking, so you probably should use another protocol, like zmodem.
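For example, typed at the tip prompt (the file names here are purely illustrative):

~p /etc/motd /tmp/motd

~t /etc/motd /tmp/motd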

18.4.11 How Can I Run zmodem with tip?

To receive files, start the sending program on the remote end. Then, type ~C rz to begin receiving them locally.

To send files, start the receiving program on the remote end. Then, type ~C sz files to send them to the remote system.


18.5 Setting Up the Serial Console

18.5.1 Introduction

DragonFly has the ability to boot on a system with only a dumb terminal on a serial port as a console. Such a configuration should be useful for two classes of people: system administrators who wish to install DragonFly on machines that have no keyboard or monitor attached, and developers who want to debug the kernel or device drivers.

As described in [boot.html Chapter 10], DragonFly employs a three stage bootstrap. The first two stages are in the boot block code which is stored at the beginning of the DragonFly slice on the boot disk. The boot block will then load and run the boot loader (/boot/loader) as the third stage code.

In order to set up the serial console you must configure the boot block code, the boot loader code and the kernel.

18.5.2 Serial Console Configuration, Terse Version

This section assumes that you are using the default setup, know how to connect serial ports and just want a fast overview of a serial console. If you encounter difficulty with these steps, please see the more extensive explanation of all the options and advanced settings in [serialconsole-setup.html#SERIALCONSOLE-HOWTO Section 18.5.3].

  1. Connect the serial port. The serial console will be on COM1.

  2. echo -h &gt; /boot.config to enable the serial console for the boot loader and kernel.

  3. Edit /etc/ttys and change off to on for the ttyd0 entry. This enables a login prompt on the serial console, which mirrors how video consoles are typically setup.

  4. shutdown -r now will reboot the system with the serial console.

18.5.3 Serial Console Configuration

  1. Prepare a serial cable.

    You will need either a null-modem cable or a standard serial cable and a null-modem adapter. See Section 18.1.2 for a discussion on serial cables.

  2. Unplug your keyboard.

    Most PC systems probe for the keyboard during the Power-On Self-Test (POST) and will generate an error if the keyboard is not detected. Some machines complain loudly about the lack of a keyboard and will not continue to boot until it is plugged in.

    If your computer complains about the error, but boots anyway, then you do not have to do anything special. (Some machines with Phoenix BIOS installed merely say Keyboard failed and continue to boot normally.)

    If your computer refuses to boot without a keyboard attached then you will have to configure the BIOS so that it ignores this error (if it can). Consult your motherboard's manual for details on how to do this.

    Tip: Setting the keyboard to Not installed in the BIOS setup does not mean that you will not be able to use your keyboard. All this does is tell the BIOS not to probe for a keyboard at power-on, so it will not complain if the keyboard is not plugged in. You can leave the keyboard plugged in even with this flag set to Not installed and the keyboard will still work.

    Note: If your system has a PS/2® mouse, chances are very good that you may have to unplug your mouse as well as your keyboard. This is because PS/2 mice share some hardware with the keyboard and leaving the mouse plugged in can fool the keyboard probe into thinking the keyboard is still there. In general, this is not a problem since the mouse is not much good without the keyboard anyway.

  3. Plug a dumb terminal into COM1 (sio0).

    If you do not have a dumb terminal, you can use an old PC/XT with a modem program, or the serial port on another UNIX® box. If you do not have a COM1 (sio0), get one. At this time, there is no way to select a port other than COM1 for the boot blocks without recompiling the boot blocks. If you are already using COM1 for another device, you will have to temporarily remove that device and install a new boot block and kernel once you get DragonFly up and running. (It is assumed that COM1 will be available on a file/compute/terminal server anyway; if you really need COM1 for something else (and you cannot switch that something else to COM2 (sio1)), then you probably should not even be bothering with all this in the first place.)

  4. Make sure the configuration file of your kernel has appropriate flags set for COM1 (sio0).

    Relevant flags are:

    0x10 -- Enables console support for this unit. The other console flags are ignored unless this is set. Currently, at most one unit can have console support; the first one (in config file order) with this flag set is preferred. This option alone will not make the serial port the console. Set the following flag or use the -h option described below, together with this flag.

    0x20 -- Forces this unit to be the console (unless there is another higher priority console), regardless of the -h option discussed below. This flag replaces the COMCONSOLE option in DragonFly versions 2.X. The flag 0x20 must be used together with the 0x10 flag.

    0x40 -- Reserves this unit (in conjunction with 0x10) and makes the unit unavailable for normal access. You should not set this flag on the serial port unit which you want to use as the serial console. This reserves the port for "low-level IO", i.e. kernel debugging.

    0x80 -- This port will be used for remote kernel debugging.

    Example:

    device sio0 at isa? port IO_COM1 flags 0x10 irq 4

    See the sio(4) manual page for more details.

    If the flags were not set, you need to run UserConfig (on a different console) or recompile the kernel.

  5. Create boot.config in the root directory of the a partition on the boot drive.

    This file will instruct the boot block code how you would like to boot the system. In order to activate the serial console, you need one or more of the following options--if you want multiple options, include them all on the same line:

    -h -- Toggles internal and serial consoles. You can use this to switch console devices. For instance, if you boot from the internal (video) console, you can use -h to direct the boot loader and the kernel to use the serial port as its console device. Alternatively, if you boot from the serial port, you can use -h to tell the boot loader and the kernel to use the video display as the console instead.

    -D -- Toggles single and dual console configurations. In the single configuration the console will be either the internal console (video display) or the serial port, depending on the state of the -h option above. In the dual console configuration, both the video display and the serial port will become the console at the same time, regardless of the state of the -h option. However, note that the dual console configuration takes effect only while the boot block is running. Once the boot loader gets control, the console specified by the -h option becomes the only console.

    -P -- Makes the boot block probe the keyboard. If no keyboard is found, the -D and -h options are automatically set.

    Note: Due to space constraints in the current version of the boot blocks, the -P option is capable of detecting extended keyboards only. Keyboards with less than 101 keys (and without F11 and F12 keys) may not be detected. Keyboards on some laptop computers may not be properly found because of this limitation. If this is the case with your system, you have to abandon using the -P option. Unfortunately there is no workaround for this problem.

    Use either the -P option to select the console automatically, or the -h option to activate the serial console.

    You may include other options described in boot(8) as well.

    The options, except for -P, will be passed to the boot loader (/boot/loader). The boot loader will determine which of the internal video or the serial port should become the console by examining the state of the -h option alone. This means that if you specify the -D option but not the -h option in /boot.config, you can use the serial port as the console only during the boot block; the boot loader will use the internal video display as the console.

  6. Boot the machine.

    When you start your DragonFly box, the boot blocks will echo the contents of /boot.config to the console. For example:

    /boot.config: -P

    Keyboard: no

    The second line appears only if you put -P in /boot.config and indicates presence/absence of the keyboard. These messages go to either serial or internal console, or both, depending on the option in /boot.config.

    || Options || Message goes to ||

    || none || internal console ||

    || -h || serial console ||

    || -D || serial and internal consoles ||

    || -Dh || serial and internal consoles ||

    || -P, keyboard present || internal console ||

    || -P, keyboard absent || serial console ||

    After the above messages, there will be a small pause before the boot blocks continue loading the boot loader and before any further messages printed to the console. Under normal circumstances, you do not need to interrupt the boot blocks, but you may want to do so in order to make sure things are set up correctly.

    Hit any key, other than Enter, at the console to interrupt the boot process. The boot blocks will then prompt you for further action. You should now see something like:

    >> DragonFly/i386 BOOT

    Default: 0:ad(0,a)/boot/loader

    boot:

    Verify the above message appears on either the serial or internal console or both, according to the options you put in /boot.config. If the message appears in the correct console, hit Enter to continue the boot process.

    If you want the serial console but you do not see the prompt on the serial terminal, something is wrong with your settings. In the meantime, enter -h and press Enter/Return (if possible) to tell the boot block (and then the boot loader and the kernel) to choose the serial port for the console. Once the system is up, go back and check what went wrong.

After the boot loader is loaded and you are in the third stage of the boot process you can still switch between the internal console and the serial console by setting appropriate environment variables in the boot loader. See [serialconsole-setup.html#SERIALCONSOLE-LOADER Section 18.5.6].

18.5.4 Summary

Here is the summary of various settings discussed in this section and the console eventually selected.

18.5.4.1 Case 1: You Set the Flags to 0x10 for sio0

device sio0 at isa? port IO_COM1 flags 0x10 irq 4

|| Options in /boot.config || Console during boot blocks || Console during boot loader || Console in kernel ||

|| nothing || internal || internal || internal ||

|| -h || serial || serial || serial ||

|| -D || serial and internal || internal || internal ||

|| -Dh || serial and internal || serial || serial ||

|| -P, keyboard present || internal || internal || internal ||

|| -P, keyboard absent || serial and internal || serial || serial ||

18.5.4.2 Case 2: You Set the Flags to 0x30 for sio0

device sio0 at isa? port IO_COM1 flags 0x30 irq 4

|| Options in /boot.config || Console during boot blocks || Console during boot loader || Console in kernel ||

|| nothing || internal || internal || serial ||

|| -h || serial || serial || serial ||

|| -D || serial and internal || internal || serial ||

|| -Dh || serial and internal || serial || serial ||

|| -P, keyboard present || internal || internal || serial ||

|| -P, keyboard absent || serial and internal || serial || serial ||

18.5.5 Tips for the Serial Console

18.5.5.1 Setting a Faster Serial Port Speed

By default, the serial port settings are: 9600 baud, 8 bits, no parity, and 1 stop bit. If you wish to change the speed, you need to recompile at least the boot blocks. Add the following line to /etc/make.conf and compile new boot blocks:

BOOT_COMCONSOLE_SPEED=19200

If the serial console is configured in some other way than by booting with -h, or if the serial console used by the kernel is different from the one used by the boot blocks, then you must also add the following option to the kernel configuration file and compile a new kernel:

options CONSPEED=19200

18.5.5.2 Using a Serial Port Other Than sio0 for the Console

Using a port other than sio0 as the console requires some recompiling. If you want to use another serial port for whatever reason, recompile the boot blocks, the boot loader and the kernel as follows.

  1. Get the kernel source.

  2. Edit /etc/make.conf and set BOOT_COMCONSOLE_PORT to the address of the port you want to use (0x3F8, 0x2F8, 0x3E8 or 0x2E8). Only sio0 through sio3 (COM1 through COM4) can be used; multiport serial cards will not work. No interrupt setting is needed.
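    For example, to put the console on COM2, the /etc/make.conf line might look like this (a sketch; use the address that matches your port):

    BOOT_COMCONSOLE_PORT=0x2F8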

  3. Create a custom kernel configuration file and add appropriate flags for the serial port you want to use. For example, if you want to make sio1 (COM2) the console:

    device sio1 at isa? port IO_COM2 flags 0x10 irq 3

    or

    device sio1 at isa? port IO_COM2 flags 0x30 irq 3

    The console flags for the other serial ports should not be set.

  4. Recompile and install the boot blocks and the boot loader:

    # cd /sys/boot

    # make

    # make install

  5. Rebuild and install the kernel.

  6. Write the boot blocks to the boot disk with disklabel(8) and boot from the new kernel.

18.5.5.3 Entering the DDB Debugger from the Serial Line

If you wish to drop into the kernel debugger from the serial console (useful for remote diagnostics, but also dangerous if you generate a spurious BREAK on the serial port!) then you should compile your kernel with the following options:

options BREAK_TO_DEBUGGER

options DDB

18.5.5.4 Getting a Login Prompt on the Serial Console

While this is not required, you may wish to get a login prompt over the serial line, now that you can see boot messages and can enter the kernel debugging session through the serial console. Here is how to do it.

Open the file /etc/ttys with an editor and locate the lines:

ttyd0 "/usr/libexec/getty std.9600" unknown off secure

ttyd1 "/usr/libexec/getty std.9600" unknown off secure

ttyd2 "/usr/libexec/getty std.9600" unknown off secure

ttyd3 "/usr/libexec/getty std.9600" unknown off secure

ttyd0 through ttyd3 correspond to COM1 through COM4. Change off to on for the desired port. If you have changed the speed of the serial port, you need to change std.9600 to match the current setting, e.g. std.19200.

You may also want to change the terminal type from unknown to the actual type of your serial terminal.
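For example, with the port speed raised to 19200 and a VT100-compatible terminal on ttyd0, the edited line might look like this (the terminal type here is only an illustration):

ttyd0 "/usr/libexec/getty std.19200" vt100 on secure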

After editing the file, you must kill -HUP 1 to make this change take effect.

18.5.6 Changing Console from the Boot Loader

Previous sections described how to set up the serial console by tweaking the boot block. This section shows that you can specify the console by entering some commands and environment variables in the boot loader. As the boot loader is invoked at the third stage of the boot process, after the boot block, the settings in the boot loader will override the settings in the boot block.

18.5.6.1 Setting Up the Serial Console

You can direct the boot loader and the kernel to use the serial console by writing just one line in /boot/loader.rc:

set console=comconsole

This will take effect regardless of the settings in the boot block discussed in the previous section.

Put the above line as the first line of /boot/loader.rc so that boot messages appear on the serial console as early as possible.

Likewise, you can specify the internal console as:

set console=vidconsole

If you do not set the boot loader environment variable console, the boot loader, and subsequently the kernel, will use whichever console is indicated by the -h option in the boot block.

You may also specify the console in /boot/loader.conf.local or /boot/loader.conf, rather than in /boot/loader.rc. With this method, your /boot/loader.rc should look like:

include /boot/loader.4th

start

Then, create /boot/loader.conf.local and put the following line there.

console=comconsole

or

console=vidconsole

Note: At the moment, the boot loader has no option equivalent to the -P option in the boot block, and there is no provision to automatically select the internal console and the serial console based on the presence of the keyboard.

18.5.6.2 Using a Serial Port Other Than sio0 for the Console

You need to recompile the boot loader to use a serial port other than sio0 for the serial console. Follow the procedure described in [serialconsole-setup.html#SERIALCONSOLE-COM2 Section 18.5.5.2].

18.5.7 Caveats

The idea here is to allow people to set up dedicated servers that require no graphics hardware or attached keyboards. Unfortunately, while most systems will let you boot without a keyboard, there are quite a few that will not let you boot without a graphics adapter. Machines with AMI BIOSes can be configured to boot with no graphics adapter installed simply by changing the graphics adapter setting in the CMOS configuration to Not installed.