Varnish is an HTTP caching reverse proxy – and a very good one at that. The main problem I have encountered is that its sheer speed makes it very hard to benchmark: where I can cripple a Tomcat backend directly with a few instances of ApacheBench, Varnish is a different matter, because you will almost certainly be bound by the performance of your client rather than Varnish itself.
On that note, I wanted to share a very nice benchmarking tool I have used to replace ab, called goBench. It's a simple single script written in Go with the sole purpose of high-throughput benchmarking – well suited to Varnish, then!
Both systems are clean installs on VMs with identical specs (8x vCPU, 2GB RAM, 32GB HDD). The default Varnish configuration is used on both, with the exception of the backend host, which points at an NGINX instance hosting a single 1MB XML file. Firewalls are disabled on both systems.
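For reference, that backend change is the only edit to the stock default.vcl (under /etc/varnish/ on RHEL, and typically /usr/local/etc/varnish/ on FreeBSD). It would look something like this – the address is a placeholder for my NGINX box:

```vcl
vcl 4.0;

# Point the default backend at the NGINX instance serving the test file.
# The host address below is a placeholder.
backend default {
    .host = "192.0.2.10";
    .port = "80";
}
```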
Installing Varnish on RHEL 7.2:
I had to enable these two repos through the subscription manager:
# subscription-manager repos --enable=rhel-7-server-optional-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
As well as the EPEL repo from the Fedora project (otherwise the install will fail with a missing jemalloc dependency):
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -Uvh epel-release-latest-7.noarch.rpm
Then you’re ready to install varnish:
# rpm --nosignature -i https://repo.varnish-cache.org/redhat/varnish-4.1.el7.rpm
# yum install varnish
Start it up with:
# systemctl start varnish
Installing Varnish on FreeBSD 10.2 (it's much easier...):
# cd /usr/ports/www/varnish4
# make install clean
Optionally, if you want it to start automatically:
# echo 'varnishd_enable="YES"' >> /etc/rc.conf
Start it up with:
# service varnishd start
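Before benchmarking either box, it's worth a quick sanity check that Varnish is actually the thing answering on its default port (6081) – every response Varnish delivers carries an X-Varnish header. A small sketch of that check (the host and path in the usage comment are placeholders):

```shell
#!/bin/sh
# Read HTTP response headers on stdin and report whether Varnish
# stamped them (X-Varnish is present on every Varnish response).
has_varnish_header() {
    grep -qi '^X-Varnish:' && echo OK || echo MISSING
}

# Typical use against a live instance (host/path are placeholders):
#   curl -sI http://some_host:6081/some_isin.xml | has_varnish_header
```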
The script doesn't have many parameters to worry about. I ran my tests from two clients using these parameters:
# go run gobench.go -u http://some_host:6081/some_isin.xml -k=true -c 250 -t 100
That will create 250 clients and poll the specified address continuously for 100 seconds – simple, right? Combined with a simple shell script to run goBench from both my clients at the same time and sum the results, we're ready to go:
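(For the curious, that wrapper is nothing fancy – roughly the sketch below. The client hostnames, target URL, and output paths are placeholders for my setup, and the summing half just pulls the "Successful requests" count out of each saved result.)

```shell
#!/bin/sh
# Sketch: fire gobench from each client machine over ssh in parallel,
# capture each client's output, then add up the successful requests.
# "client1"/"client2", the URL, and /tmp paths are placeholders.

run_clients() {
    ssh client1 'go run gobench.go -u http://some_host:6081/some_isin.xml -k=true -c 250 -t 100' > /tmp/client1.out &
    ssh client2 'go run gobench.go -u http://some_host:6081/some_isin.xml -k=true -c 250 -t 100' > /tmp/client2.out &
    wait
}

# Sum the "Successful requests" counts across the saved result files.
# (The "Successful requests rate" line does not match this pattern.)
sum_successful() {
    awk '/^Successful requests:/ { total += $3 } END { print total }' "$@"
}
```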
FreeBSD:

Dispatching 500 clients
Waiting for results...
Requests: 301348 hits
Successful requests: 300876 hits
Network failed: 3 hits
Bad requests failed (!2xx): 0 hits
Successful requests rate: 3008 hits/sec
Read throughput: 17934995 bytes/sec
Write throughput: 412882 bytes/sec
Test time: 100 sec
RHEL:

Dispatching 500 clients
Waiting for results...
Requests: 287123 hits
Successful requests: 274800 hits
Network failed: 11934 hits
Bad requests failed (!2xx): 0 hits
Successful requests rate: 2747 hits/sec
Read throughput: 16379739 bytes/sec
Write throughput: 374629 bytes/sec
Test time: 100 sec
Well, 'out of the box' FreeBSD can serve 261 more requests per second than RHEL on identical hardware with an identical Varnish config. More alarming is the extra 11931 failed requests on RHEL over FreeBSD during this 100-second test.
Ultimately, the test is pretty irrelevant – the real intention was to play with goBench. The test time-frame was far too short (although I did run it multiple times and the results were consistent), and more importantly, in a prod environment you're going to be spending some time tuning sysctl.conf and your Varnish configuration. I suspect the outcome would ultimately favour FreeBSD regardless – but then, I'm biased.