SSHGuard on FreeBSD 11.0

SSHGuard is a service that automatically creates firewall rules to block the IP address of anyone trying to brute-force SSH on your server. IMO it is essential for any internet-facing server.

Unfortunately there is a bug in the current version (1.7.1) in ports, which will prevent the service from starting on FreeBSD 11.0:

ipfw: failed to request table info: No such process

SSHGuard is trying to reference an ipfw lookup table which has not yet been created. To fix this you need to create the table manually with:

/sbin/ipfw -q table 22 create

You should now be able to start the service.
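To make the workaround survive a reboot, you can create the table at boot before SSHGuard starts – for example from /etc/rc.local. A minimal sketch, assuming table 22 is the one your SSHGuard ipfw backend references (as above):

```shell
# /etc/rc.local fragment: create SSHGuard's ipfw lookup table at boot
# if it does not already exist (table number 22 is an assumption taken
# from the workaround above)
/sbin/ipfw table 22 info > /dev/null 2>&1 || /sbin/ipfw -q table 22 create
```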

My New FreeBSD Server Checklist

Below are the steps I take to personalise new FreeBSD servers which I run on my home network. The steps could easily be automated; I just don't deploy new FreeBSD servers at home often enough to justify it.

# Update the base system:

$ freebsd-update fetch
$ freebsd-update install
# If kernel was patched don't forget to:
$ shutdown -r now

# Schedule future security updates to be applied daily:

$ printf '@daily                                  root    freebsd-update cron\n' >> /etc/crontab

# Map the root account to your email address and send a test mail:

$ printf 'root: your@email.com\n' >> /etc/aliases
$ newaliases
$ service sendmail restart
$ printf 'test\n' | mail -s "test message" root

# Set the timezone:

$ tzsetup

# Now the timezone is set, we need to enable the NTP daemon so that our server's time stays in sync. I use the default FreeBSD servers in /etc/ntp.conf:

$ printf 'ntpd_enable="YES"\nntpd_sync_on_start="YES"\n' >> /etc/rc.conf

# Now start the NTP daemon:

$ service ntpd start

# Configure the firewall to only allow SSH:

$ printf 'firewall_enable="YES"\nfirewall_quiet="YES"\nfirewall_type="workstation"\nfirewall_myservices="22/tcp"\nfirewall_allowservices="any"\nfirewall_logdeny="YES"\n' >> /etc/rc.conf

# Limit the number of logs per IP address, to prevent the logs filling up with traffic from a single persistent user:

$ printf 'net.inet.ip.fw.verbose_limit=5\n' >> /etc/sysctl.conf
$ sysctl net.inet.ip.fw.verbose_limit=5

# Start the firewall:

$ service ipfw start

# Install subversion using pkg and then pull down the ports tree:

$ pkg install subversion
$ svn checkout https://svn.FreeBSD.org/ports/head /usr/ports

# Install some tools I can’t live without:

$ cd /usr/ports/shells/zsh && make install clean
$ zsh
$ cd /usr/ports/*/vim-lite && make install clean
$ cd /usr/ports/*/git && make install clean
$ cd /usr/ports/*/screen && make install clean

# Change the default shell for your user to zsh – note, you need to be careful here: if /usr/local ever becomes unavailable, a shell installed from ports can lock you out. If that concerns you, compile zsh statically and copy it into /bin/. Otherwise, do this and don't forget to change the username:

$ chsh -s /usr/local/bin/zsh YOUR_USER

# Alias vi -> vim because old habits die hard. We also want to set the variable WITHOUT_X11 to try to stop X11/graphical components finding their way onto our server:

$ printf 'alias vi=vim\nexport WITHOUT_X11=YES\n' >> ~/.zshrc

# Create a .vimrc file with mouse support disabled (who uses a mouse in vim?!) and with the background set to dark, so we don’t get dark blue comments:

$ printf 'set background=dark\nset mouse-=a\n' >> ~/.vimrc

# Install oh-my-zsh, a handy tool for enhancing your zsh command-line experience.

$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

# Change the theme and plugins for oh-my-zsh:

$ sed -i '.bak' 's/ZSH_THEME=\".*\"/ZSH_THEME=\"pygmalion\"/;s/plugins=\(.*\)/plugins=\(git screen nyan vi-mode\)/' "$HOME/.zshrc"

That’s about it to get what I consider my ‘baseline’.

Varnish Cache Replication with VCL

If you're using Varnish then you've probably looked into replication for it at some point. The official solution is to buy Varnish Plus and use the HA replication agent; however, you can do basic replication with normal Varnish and some VCL magic.

First you need to define an upstream server for all of your Varnish caches except the one you're configuring. In this case let's assume we have 4 nodes – ServerA, ServerB, ServerC and ServerD – and we are configuring ServerA:

# Cache Servers:
backend cache_serverB {
    .host = "serverB";
    .port = "6081";
}
backend cache_serverC {
    .host = "serverC";
    .port = "6081";
}
backend cache_serverD {
    .host = "serverD";
    .port = "6081";
}

Next you need to define an upstream server for each of your actual backends. In our case we have 4 Tomcat instances running on the same servers; this time we will include the server we are configuring, as we want to be able to reverse proxy to the local Tomcat server:

# Tomcat Servers:
backend tomcat_serverA {
    .host = "serverA";
    .port = "7080";
}
backend tomcat_serverB {
    .host = "serverB";
    .port = "7080";
}
backend tomcat_serverC {
    .host = "serverC";
    .port = "7080";
}
backend tomcat_serverD {
    .host = "serverD";
    .port = "7080";
}

Now we need to create our director groups for all the backends we just created:

# Director Groups (requires "import directors;" at the top of your VCL)
sub vcl_init {

# Cache director group:
    new cache = directors.random();
    cache.add_backend(cache_serverB, 1);
    cache.add_backend(cache_serverC, 1);
    cache.add_backend(cache_serverD, 1);

# Tomcat director group:
    new tomcat = directors.random();
    tomcat.add_backend(tomcat_serverA, 90000000);
    tomcat.add_backend(tomcat_serverB, 1);
    tomcat.add_backend(tomcat_serverC, 1);
    tomcat.add_backend(tomcat_serverD, 1);
}

You should set the weight of the server you are configuring – serverA in this case – to be super high, because when you do need to go to the backend, the local server should be preferred.
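The effect of that huge weight is easy to see with a quick simulation of weighted random selection (a sketch of the director's behaviour, not Varnish's actual code):

```python
import random

# Weights as passed to tomcat.add_backend() above
weights = {"tomcat_serverA": 90000000,
           "tomcat_serverB": 1,
           "tomcat_serverC": 1,
           "tomcat_serverD": 1}

backends = list(weights)
counts = {b: 0 for b in backends}
rng = random.Random(42)  # fixed seed so the run is repeatable
for _ in range(10000):
    pick = rng.choices(backends, weights=[weights[b] for b in backends])[0]
    counts[pick] += 1

# Virtually every pick lands on the local server
print(counts["tomcat_serverA"])
```

With those weights, the odds of any single backend fetch going to a remote Tomcat are roughly 3 in 90 million.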

Next we need to create an ACL containing all the cache nodes except the one you are configuring:

acl cache_nodes {
        "serverB";
        "serverC";
        "serverD";
}

And that's it: set the default backend to the tomcat director group, then add a rule to your VCL that directs any traffic which has not come from a server in our cache ACL to the cache director group:

        if (!client.ip ~ cache_nodes) {
                set req.backend_hint = cache.backend();
        }

The result is that when a request comes to a cache which does not have the object stored locally, it will use one of your other caches as a backend. The cache receiving that request will either serve a cached copy, saving you a backend request, or, if it doesn't have one, match the client as being in our cache_nodes ACL and fetch the object from the tomcat director group – populating both caches with the object.
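That flow can be sketched as a toy model (hostnames are the example servers from this post; this is illustrative only, not VCL):

```python
# The ACL configured on serverA: every cache node except itself
CACHE_NODES = {"serverB", "serverC", "serverD"}

def backend_hint(client, have_object_locally):
    """Where serverA sends a request, per the VCL rule above."""
    if have_object_locally:
        return "local cache hit"
    if client not in CACHE_NODES:
        return "cache director"   # miss from a normal client: try a peer cache
    return "tomcat director"      # miss from a peer cache: go to the origin

print(backend_hint("laptop", False))   # a client miss goes to a peer cache
print(backend_hint("serverB", False))  # a peer's miss goes to Tomcat
```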

It's not perfect, but it's a nice way of reducing backend requests: you populate 2 caches with 1 backend request, and if you need to restart a cache, most of the content should be served from another cache instead of your slow backend.

JSON Logging With NGINX

As we are rolling out Elasticsearch at work, it has become necessary to adjust the logging of each of our applications – normally this involves reviewing each possible line of output and then grokking out the information you need. However, some applications are more flexible than others: with NGINX you can write your logs exactly as you want them, and in JSON format too. With our NGINX setup we are also able to insert HTTP headers at both the Varnish and Tomcat layers and print all the information we need to know about our stack in a single log file – ready to be pushed by rsyslog on each server into our logstash parser and then on to our ES cluster.

Here's a sample NGINX log format called access_json:

log_format access_json '{"timestamp_date": "$time_iso8601", '
                         '"@tenant": "some-tennant", '
                         '"@type": "nginx-access-logs", '
                         '"@level": "global_daily", '
                         '"remote_addr_ip": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent_l": "$body_bytes_sent", '
                         '"request_time_d": "$request_time", '
                         '"status": "$status", '
                         '"request": "$request_uri", '
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"request_body": "$request_body", '
                         '"cache_status": "$upstream_http_x_cache_status", '
                         '"request_valid": "$upstream_http_x_request_valid", '
                         '"http_user_agent": "$http_user_agent", '
                         '"message": "$time_iso8601 $remote_addr $remote_user $body_bytes_sent $request_time $status $request $request_method $http_referer $request_body $upstream_http_x_cache_status $upstream_http_x_request_valid $http_user_agent" }';

The upstream HTTP header x_cache_status is set by Varnish depending on whether the request was a hit/miss/pass; x_request_valid is set based on whether the client requested a valid ISIN format.
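A quick way to sanity-check the format is to run a rendered line through a JSON parser. The values below are invented for illustration; note that fields like $http_user_agent can contain double quotes and break the JSON unless they are escaped upstream:

```python
import json

# A hypothetical line rendered by the access_json format above
line = ('{"timestamp_date": "2016-01-01T00:00:00+00:00", '
        '"@tenant": "some-tennant", '
        '"@type": "nginx-access-logs", '
        '"remote_addr_ip": "203.0.113.7", '
        '"status": "200", '
        '"request": "/some_isin.xml", '
        '"request_time_d": "0.004"}')

record = json.loads(line)  # raises ValueError if the format is broken
print(record["status"], record["request"])
```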

Next we use rsyslogd to push the log file to our logstash agent; use something like this in rsyslog.d:

module(load="imfile" PollingInterval="10")
template(name="nginxAccessTemplate" type="string" string="%msg%\n")
input(type="imfile"
File="/some/log/dir/nginx_json.log"
Tag="nginx.access"
StateFile="stat-file-api-nginx-access"
Severity="info"
Facility="user"
ruleset="nginx-access")

ruleset(name="nginx-access")
{
action(type="omfwd"
template="nginxAccessTemplate"
Target="some-server"
Protocol="udp")
}

As we have already written our log in JSON with all the fields we need, only minimal config is required in logstash:

input
{
udp
  {
  port => 5544
  codec => json
  }
}

That's it! Instead of reading in the standard logs and then writing grok filters for each field, you can do the formatting with NGINX – it's easier, performs better and gives you more flexibility.

The ntpdate(8) tool


A useful tool for syncing the time with an NTP server. I have ntpd running on all my servers – it's bundled with the base FreeBSD system, just enable it in /etc/rc.conf:

ntpd_enable="YES"

And start the service:

# service ntpd start

However, with my FreeBSD 11 VM on my Surface Pro, the constant sleeping and hibernating throws the clock out to the point where network services stop working. It's easy to update it when you need to be sure:

# ntpdate -b 1.freebsd.pool.ntp.org

Benchmarking Varnish on FreeBSD 10.2 and RHEL 7.2 using goBench

Varnish is an HTTP caching reverse proxy – and a very good one at that. The main problem I have encountered is that its sheer speed makes it very hard to benchmark: where I can cripple a Tomcat backend directly with a few instances of ApacheBench, Varnish is a different matter, because you will almost certainly be bound by the performance of your client and not Varnish.

On that note, I wanted to share a very nice benchmarking tool that I have used to replace ab, called goBench. It's a simple single script written in Go with the sole purpose of high-throughput benchmarking – well suited to Varnish, then!

As I have been playing with the newly released RHEL 7.2 and the also newly released FreeBSD 10.2 in my home lab I figured it would be a good opportunity to compare the two and play with goBench.

Both systems are clean installs on VMs with identical specs (8x vCPU, 2GB RAM, 32GB HDD), and the default Varnish configuration is used on both, with the exception of the backend host, which points at an NGINX instance hosting a single 1MB XML file. Firewalls are disabled on both systems.

Installing Varnish on RHEL 7.2:

I had to enable these two repos through the subscription manager:

# subscription-manager repos --enable=rhel-7-server-optional-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms

As well as the EPEL repo (otherwise the install will fail with a missing jemalloc dependency):

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -Uvh epel-release-latest-7.noarch.rpm

Then you’re ready to install varnish:

# rpm --nosignature -i https://repo.varnish-cache.org/redhat/varnish-4.1.el7.rpm 
# yum install varnish

Start it up with:

# systemctl start varnish

Installing Varnish on FreeBSD 10.2 (it’s much easier..):

# cd /usr/ports/www/varnish4
# make install clean

Optionally, if you want it to start automatically:

# echo 'varnishd_enable="YES"' >> /etc/rc.conf

Start it up with:

# service varnishd start

Benchmarking:

The script doesn't have many parameters to worry about. I ran my tests from two clients using these parameters:

# go run gobench.go -u http://some_host:6081/some_isin.xml -k=true -c 250 -t 100

That will create 250 clients and poll the specified address continuously for 100 seconds – simple, right? Combined with a short shell script to run goBench from both my clients at the same time and sum the results, we're ready to go:

FreeBSD:

Dispatching 500 clients
Waiting for results...
Requests:                           301348 hits
Successful requests:                300876 hits
Network failed:                          3 hits
Bad requests failed (!2xx):              0 hits
Successful requests rate:             3008 hits/sec
Read throughput:                  17934995 bytes/sec
Write throughput:                  412882 bytes/sec
Test time:                             100 sec

RHEL:

Dispatching 500 clients
Waiting for results...
Requests:                           287123 hits
Successful requests:                274800 hits
Network failed:                      11934 hits
Bad requests failed (!2xx):              0 hits
Successful requests rate:             2747 hits/sec
Read throughput:                  16379739 bytes/sec
Write throughput:                   374629 bytes/sec
Test time:                             100 sec

Verdict:

Well, 'out of the box' FreeBSD can serve 261 more requests per second than RHEL on identical hardware with an identical Varnish config – more alarming is the additional 11931 failed requests on RHEL over FreeBSD during this 100-second test.
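For the record, those deltas come straight from the two result blocks above:

```python
# Figures from the goBench output above
freebsd = {"rate_hits_per_sec": 3008, "network_failed": 3}
rhel    = {"rate_hits_per_sec": 2747, "network_failed": 11934}

print(freebsd["rate_hits_per_sec"] - rhel["rate_hits_per_sec"])  # requests/sec in FreeBSD's favour
print(rhel["network_failed"] - freebsd["network_failed"])        # extra failed requests on RHEL
```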

Ultimately, the test is pretty irrelevant – the intention really was to play with goBench. The test time-frame was far too short (although I did run it multiple times and the results were consistent), and more importantly, in a prod environment you're going to spend some time tuning sysctl.conf and your Varnish configuration. I suspect the outcome would ultimately favour FreeBSD regardless, but I'm biased :)

shellinabox

A very nice web-based SSH shell, and ready to go from the FreeBSD ports repo:

# cd /usr/ports/www/shellinabox
# make install clean

Then enable the service and set the port in your rc.conf:

shellinaboxd_enable="YES"
shellinaboxd_port="8022"

And kick off the service:

# service shellinaboxd start

It's particularly useful if you are using a Wi-Fi hotspot or are in an environment where only HTTP(S) traffic is allowed.


“You must upgrade the ports-mgmt/pkg port first.”

After a fresh install of FreeBSD 10.2 I did my usual first steps:

# freebsd-update fetch install
# shutdown -r now
# portsnap fetch extract
# portsnap fetch update
# cd /usr/ports/*/zsh
# make install clean

At this point I got the error:

===>  zsh-5.2_1 pkg(8) must be version 1.6.0 or greater, but you have 1.5.4.
You must upgrade the ports-mgmt/pkg port first.

Well, the fix is simple:

# cd /usr/ports/ports-mgmt/pkg
# make deinstall reinstall


Building a Squid HTTP(S) proxy to route traffic through a VPN from inside a FreeBSD jail

In this post I'm going to cover how to build a Squid proxy server that routes traffic out securely through your VPN using the OpenVPN client. Best of all, we're going to run the whole thing inside a FreeBSD jail using the VIMAGE virtualised networking stack. I'm using CBSD instead of ezjail for this post – more on that to come. Anyway, let's get started:

Prep

First we need to compile our kernel with support for VIMAGE – FreeBSD's virtualised network stack:

# cd /usr/src/sys/amd64/conf/

Now you're going to want to edit your kernel config, probably called GENERIC. Add these two devices:

...
device          epair
device          if_bridge

and enable VIMAGE:

...
options         VIMAGE

Now navigate back to the sources dir, build and then install your new kernel:

# cd /usr/src
# make buildkernel KERNCONF=GENERIC
# make installkernel KERNCONF=GENERIC


Reboot into your new kernel:

# shutdown -r now

As we're going to use the virtualised network stack, don't worry about creating a loopback device or IP alias; all we need is a devfs rule to allow our jail access to tunnel devices, net devices and the Berkeley packet filter. Create a new section in your CBSD devfs.rules:


[devfsrules_jail_with_bpf=7]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path 'bpf*' unhide
add path 'net*' unhide
add path 'tun*' unhide

Building the jail with cbsd

Now use the tui to create the jail:

# cbsd jconstruct-tui

Most of the options are pretty self explanatory so I’m not going to run through it all. Make sure you set these two as a minimum:

devfs_ruleset    7
vnet             [X]

That’s it – start the jail up:

# cbsd jstart jailname

Inside the jail

Do your usual prep for /etc/resolv.conf and /etc/hosts, then add an IP address to your new virtual interface in /etc/rc.conf:

ifconfig_eth0="inet 192.168.0.107 netmask 255.255.255.0"
defaultrouter="192.168.0.1"

Next up, install OpenVPN:

# cd /usr/ports/*/openvpn
# make install

Enable OpenVPN and set the interface in your /etc/rc.conf:

# echo 'openvpn_if="tun"' >> /etc/rc.conf
# echo 'openvpn_enable="YES"' >> /etc/rc.conf

add your tunnel:

# kldload if_tun

OpenVPN

Now we can start configuring OpenVPN. Grab your VPN provider's cert – I use VyprVPN by Golden Frog:

# cd /usr/local/etc/openvpn
# fetch http://www.goldenfrog.com/downloads/ca.vyprvpn.com.crt

Now for the OpenVPN config. Here's mine – make sure you call it openvpn.conf:

port 1194
client
dev tunN
proto udp
remote uk1.vyprvpn.com
resolv-retry infinite
nobind
persist-key
persist-tun
persist-remote-ip
ca ca.vyprvpn.com.crt
tls-remote uk1.vyprvpn.com
auth-user-pass vyprvpn.pas
comp-lzo
verb 4
keepalive 10 60
route-noexec
script-security 2
up /usr/local/etc/openvpn/set_routes.sh

You will need to tweak these to fit. Make sure you create a file containing your username and password in this dir; I've called mine vyprvpn.pas.

Now, you'll note that for the last 3 options I have disabled route creation and enabled a script at OpenVPN startup. I had problems getting OpenVPN to create the routes for me, so I ended up writing a shell script to read /var/log/messages, pull out the route info and set it after OpenVPN has connected:

#!/bin/sh
# Grab the VPN server's address from the most recent connection line:
option1=$(tail -r /var/log/messages | grep "Peer Connection Initiated with" | head -n 1 | grep -o "[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*")
# Grab the pushed route-gateway from the most recent PUSH reply:
option2=$(tail -r /var/log/messages | grep "PUSH: Received control message:" | head -n 1 | grep -o "route-gateway [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" | awk '{print $2}')
# Route the VPN server via the LAN gateway, then send everything else
# (two /1 routes covering 0.0.0.0/0) down the tunnel:
/sbin/route add -net "$option1" 192.168.0.1 255.255.255.255
/sbin/route add -net 0.0.0.0 "$option2" 128.0.0.0
/sbin/route add -net 128.0.0.0 "$option2" 128.0.0.0

I'm going to re-write this when I get time, as it was just a quick fix – the info needed to set the routes actually exists in some variables set by OpenVPN.
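As a sketch of that rewrite – an assumption of how I'd do it, not the script I'm running – OpenVPN exports trusted_ip (the VPN server's address) and route_vpn_gateway (the pushed gateway) to "up" scripts when script-security 2 is set, so the log scraping can go away entirely. The echoes make it dry-runnable; drop them to set the routes for real:

```shell
#!/bin/sh
# Placeholder defaults so the sketch can be dry-run outside OpenVPN;
# inside an "up" script these variables arrive from the environment
trusted_ip="${trusted_ip:-203.0.113.1}"
route_vpn_gateway="${route_vpn_gateway:-10.8.0.1}"

# Route the VPN server via the LAN gateway, then cover 0.0.0.0/0
# with two /1 routes through the tunnel
echo /sbin/route add -host "$trusted_ip" 192.168.0.1
echo /sbin/route add -net 0.0.0.0/1 "$route_vpn_gateway"
echo /sbin/route add -net 128.0.0.0/1 "$route_vpn_gateway"
```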

Now, you should be connected! As I am using VyprVPN I’ll use their page to check my IP address:

# fetch --no-verify-peer -o - https://www.goldenfrog.com/whatismyipaddress | grep -i "public ip" | grep -o "[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*"

Squid

Install it – review all the options carefully when you compile:

# cd /usr/ports/*/squid
# make install

Here is my config. I don't have any interest in caching anything for now – I just want to route all my HTTP traffic through my VPN, but in a later post we'll set up caching for the FreeBSD and various Linux sites. The config is located in /usr/local/etc/squid/squid.conf:

acl local_net src 192.168.0.0/24
http_access allow local_net
http_access allow localhost
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
http_port 8081
visible_hostname Hermes.core.net
cache_effective_user root
cache_effective_group wheel
cache_dir null /tmp
cache deny all

Sharing the config

You can have clients on your network automatically pick up your proxy server by using a PAC file. You'll need to host it on a web server, and set it using DHCP option 252:

dhcp-option=252,http://artemis.core.net/proxy.pac

The contents of my proxy.pac are:

function FindProxyForURL(url, host)
{
    if (isInNet(host, "192.168.0.0", "255.255.0.0")) {
        return "DIRECT";
    } else {
        if (shExpMatch(url, "http:*"))
            return "PROXY Hermes:8081";
        if (shExpMatch(url, "https:*"))
            return "PROXY Hermes:8081";
        return "DIRECT";
    }
}

You should now have a proxy server set up, automatically configured on your clients by your DHCP server and routing all of your traffic through your VPN.

If you have any issues grab me on the FreeBSD forums – username tabs.

Setting up a DNS and DHCP server in a FreeBSD jail

The software

BIND for DNS – the most widely used DNS software on the internet, developed at Berkeley.

ICS-DHCP for DHCP – A DHCP server implementation by the Internet Systems Consortium.

The Plan

Configure two highly available BIND DNS servers in jails running on separate physical hosts, and two highly available DHCP servers that will dynamically update our DNS servers. I'm going to split this over two posts; in this one we're going to get our DNS and DHCP servers working inside our jails.

The Jails

For each jail you want to use (I'm going to use 2, each with a DNS and DHCP server) you're going to need to do some prep.

Create a loopback interface:

# service netif cloneup lo1

Add the loopback interface to your rc.conf:

# echo 'cloned_interfaces="${cloned_interfaces} lo1"' >> /etc/rc.conf

Now create an IP alias to use:

# ifconfig bge1 alias 192.168.0.111 netmask 255.255.255.255 broadcast 192.168.0.255

Add your IP alias to your rc.conf:

# echo 'ifconfig_bge1_alias0="inet 192.168.0.111 netmask 255.255.255.255 broadcast 192.168.0.255"' >> /etc/rc.conf

Next up, let's create our jail using ezjail, specifying the IP alias and loopback device we created earlier:

# ezjail-admin create Ares 'bge1|192.168.0.111,lo1|127.0.2.1'

Don't start it up just yet! We have more config to do first. Copy your resolv.conf into the jail so it can use your current DNS server to resolve the BIND and DHCPD software downloads:

# cp /etc/resolv.conf /usr/jails/Ares/etc/

Now let's edit our jail's hosts file to reflect our different loopback address – keep your localhost entry but add the FQDN in:

# vi /usr/jails/Ares/etc/hosts
...
::1 localhost Ares.core.net
127.0.2.1 localhost Ares.core.net

Next up we need to allow access to the BPF (Berkeley Packet Filter) device. This allows the DHCP server to avoid copying uninteresting packets from the kernel to the software running in user mode. Let's create a devfs rule:

# vi /etc/devfs.rules
...
[devfsrules_jail_with_bpf=6]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path 'bpf*' unhide

Now we can add the rule to our jail's config file:

# echo 'export jail_Ares_devfs_ruleset="6"' >> /usr/local/etc/ezjail/Ares

Next up, we're going to need to allow our jail access to raw sockets so the DHCP server can capture broadcast requests. Add the rule to your jail config file:

# echo 'export jail_Ares_parameters="allow.raw_sockets allow.sysvipc"' >> /usr/local/etc/ezjail/Ares

Now bring up the loopback device you created earlier:

# service netif cloneup lo1

Start up your Jail and log in:

# ezjail-admin start Ares
# ezjail-admin console Ares

BIND

Now you're in your prepped-up jail; install BIND (remove -DBATCH if you want to set any specific compile options):

# cd /usr/ports/dns/bind99/
# make install -DBATCH

Now let's edit the config file at /usr/local/etc/namedb/named.conf:

Set an ACL right at the top of the config file so we only accept requests from hosts on our subnet:

acl "trusted" {
     192.168.0.0/24;
     localhost;
     localnets;
};

The next section to edit is the forwarders – I just go with Google's DNS servers. Make sure you limit recursion using the ACL we set up earlier:

forwarders {
       8.8.8.8;
       8.8.4.4;
};
allow-query       { any; };
allow-recursion   { trusted; };
allow-query-cache { trusted; };

That's it for a very basic usable config – enable BIND in the rc.conf file:

# echo 'named_enable="YES"' >> /etc/rc.conf

Start BIND and test it by resolving your favourite web server (point dig at your new DNS server):

# service named start
# dig @192.168.0.111 guytabrar.co.uk

The line you are looking for is:

;; Got answer:

ICS-DHCP

So you have a working DNS server. Now we're going to get a working DHCP server, and in the next post we'll make them play together and get failover working. Let's get compiling:

# cd /usr/ports/net/isc-dhcp43-server
# make install -DBATCH

Luckily there's not much config needed for a simple setup. The file is /usr/local/etc/dhcpd.conf; here is mine, and I think it's all pretty self-explanatory:

default-lease-time 600;
max-lease-time 7200;
# Enable this option only if it's your only DHCP server:
authoritative;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.0.255;
option routers 192.168.0.1;
option domain-name-servers 192.168.0.110, 192.168.0.111;
option domain-name "core.net";
subnet 192.168.0.0 netmask 255.255.255.0 {
range 192.168.0.2 192.168.0.99;
}
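As a quick sanity check on the pool size, that range line gives DHCP 98 addresses to hand out:

```python
import ipaddress

# The "range 192.168.0.2 192.168.0.99;" line from dhcpd.conf above
start = ipaddress.ip_address("192.168.0.2")
end = ipaddress.ip_address("192.168.0.99")

print(int(end) - int(start) + 1)  # 98 leases available in the pool
```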


Now let's enable DHCPD and set some options in our /etc/rc.conf file:

...
dhcpd_enable="YES"
dhcpd_ifaces="bge1"
dhcpd_withumask="022"
dhcpd_flags="-q"

Now fire it up with:

# service isc-dhcpd start

Now we need to try it out. Before you do, let's get tcpdump running:

# tcpdump -envvvi bge1 port 67 or port 68

That should show you all the DHCP traffic on the network. At this point I grabbed my iPhone and renewed the DHCP lease; you should see the request and the response in your tcpdump. Here's my iPhone asking for a DHCP lease:

20:57:17.896014 9c:f3:87:37:ec:bd > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: (tos 0x0, ttl 255, id 3969, offset 0, flags [none], proto UDP (17), length 328)
0.0.0.0.68 > 255.255.255.255.67: [udp sum ok] BOOTP/DHCP, Request from 9c:f3:87:37:ec:bd, length 300, xid 0x564e0c0e, Flags [none] (0x0000)
Client-Ethernet-Address 9c:f3:87:37:ec:bd
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message Option 53, length 1: Request
Parameter-Request Option 55, length 6:
Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
Option 119, Option 252
MSZ Option 57, length 2: 1500
Client-ID Option 61, length 7: ether 9c:f3:87:37:ec:bd
Requested-IP Option 50, length 4: 192.168.0.3
Lease-Time Option 51, length 4: 7776000
Hostname Option 12, length 15: "GuyTabrrsiPhone"
END Option 255, length 0
PAD Option 0, length 0, occurs 6

Here’s my DHCP server responding:

20:57:17.896215 3c:07:54:03:57:f0 > 9c:f3:87:37:ec:bd, ethertype IPv4 (0x0800), length 342: (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 328)
192.168.0.110.67 > 192.168.0.3.68: [udp sum ok] BOOTP/DHCP, Reply, length 300, xid 0x564e0c0e, Flags [none] (0x0000)
Your-IP 192.168.0.3
Client-Ethernet-Address 9c:f3:87:37:ec:bd
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message Option 53, length 1: ACK
Server-ID Option 54, length 4: 192.168.0.110
Lease-Time Option 51, length 4: 6688
Subnet-Mask Option 1, length 4: 255.255.255.0
Default-Gateway Option 3, length 4: 192.168.0.1
Domain-Name-Server Option 6, length 8: 192.168.0.110,192.168.0.111
Domain-Name Option 15, length 8: "core.net"
END Option 255, length 0
PAD Option 0, length 0, occurs 12