Varnish Cache Replication with VCL

If you’re using Varnish then you’ve probably looked into replication for it at some point. The official solution is to buy Varnish Plus and use the HA replication agent; however, you can do basic replication with plain Varnish and some VCL magic.

First you need to define an upstream server for each of your Varnish caches except the one you’re configuring. For this example, assume we have four nodes (serverA, serverB, serverC and serverD) and we are configuring serverA:

# Cache Servers:
backend cache_serverB {
    .host = "serverB";
    .port = "6081";
}
backend cache_serverC {
    .host = "serverC";
    .port = "6081";
}
backend cache_serverD {
    .host = "serverD";
    .port = "6081";
}

Next you need to define an upstream server for each of your actual backends. In our case we have four Tomcat instances running on the same servers. This time we include the server we are configuring, since we want to be able to reverse proxy to the local Tomcat instance:

# Tomcat Servers:
backend tomcat_serverA {
    .host = "serverA";
    .port = "7080";
}
backend tomcat_serverB {
    .host = "serverB";
    .port = "7080";
}
backend tomcat_serverC {
    .host = "serverC";
    .port = "7080";
}
backend tomcat_serverD {
    .host = "serverD";
    .port = "7080";
}

Now we need to create our director groups for all the backends we just created (this requires the directors VMOD, imported at the top of the VCL):

# Director Groups
import directors;

sub vcl_init {

# Cache director group:
    new cache = directors.random();
    cache.add_backend(cache_serverB,1);
    cache.add_backend(cache_serverC,1);
    cache.add_backend(cache_serverD,1);

# Tomcat director group:
    new tomcat = directors.random();
    tomcat.add_backend(tomcat_serverA,90000000);
    tomcat.add_backend(tomcat_serverB,1);
    tomcat.add_backend(tomcat_serverC,1);
    tomcat.add_backend(tomcat_serverD,1);
}

You should set the weight of the server you are configuring (serverA in this case) to be very high, because when you do need to go to the backend, the local server should be preferred.

Next we need to create an ACL containing all the cache nodes except the one you are configuring:

acl cache_nodes {
        "serverB";
        "serverC";
        "serverD";
}

And that’s it: set the default backend to the tomcat director group, then in vcl_recv add a rule that directs any traffic that has not come from a server in our cache ACL to the cache director group:

        if (!client.ip ~ cache_nodes) {
                set req.backend_hint = cache.backend();
        }
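
Putting it together, here is a minimal vcl_recv sketch, assuming the directors and ACL defined above (Varnish 4.x syntax):

sub vcl_recv {
    # Default: use the (locally weighted) Tomcat pool.
    set req.backend_hint = tomcat.backend();

    # Requests that did not come from a peer cache go via
    # another cache first.
    if (!client.ip ~ cache_nodes) {
        set req.backend_hint = cache.backend();
    }
}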

The result is that when a request arrives at a cache that does not have the object stored locally, it uses one of your other caches as a backend. The cache receiving that request will either serve a cached copy, saving you a backend request, or, if it does not have one either, it will match the client against the cache_nodes ACL and fetch the object from the tomcat director group, populating both caches with it.

It’s not perfect, but it’s a nice way of reducing backend requests: you populate two caches with one backend request, and if you need to restart a cache, most of the content should be served from another cache instead of your slow backend.

JSON Logging With NGINX

As we are rolling out Elasticsearch at work, it has become necessary to adjust the logging of each of our applications. Normally this involves reviewing each possible line of output and then grokking out the information you need. Some applications are more flexible than others, though: with NGINX you can write your logs exactly as you want them, and in JSON format too. With our NGINX setup we are also able to insert HTTP headers at both the Varnish and Tomcat layers and print everything we need to know about our stack in a single log file, ready to be pushed by rsyslog on each server into our logstash parser and then on to our ES cluster.

Here’s a sample NGINX log format called access_json:

log_format access_json '{"timestamp_date": "$time_iso8601", '
                         '"@tenant": "some-tennant", '
                         '"@type": "nginx-access-logs", '
                         '"@level": "global_daily", '
                         '"remote_addr_ip": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent_l": "$body_bytes_sent", '
                         '"request_time_d": "$request_time", '
                         '"status": "$status", '
                         '"request": "$request_uri", '
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"request_body": "$request_body", '
                         '"cache_status": "$upstream_http_x_cache_status", '
                         '"request_valid": "$upstream_http_x_request_valid", '
                         '"http_user_agent": "$http_user_agent", '
                         '"message": "$time_iso8601 $remote_addr $remote_user $body_bytes_sent $request_time $status $request $request_method $http_referer $request_body $upstream_http_x_cache_status $upstream_http_x_request_valid $http_user_agent" }';

The upstream HTTP header x_cache_status is set by Varnish depending on whether the request was a hit, miss or pass; x_request_valid is set based on whether the client requested a valid ISIN format.
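
For reference, here is a minimal sketch of how such a hit/miss header can be set in VCL. This is one common pattern, not necessarily exactly how ours does it (obj.hits counts how often the object has been served from cache):

sub vcl_deliver {
    # obj.hits > 0 means the object came from the cache.
    if (obj.hits > 0) {
        set resp.http.X-Cache-Status = "HIT";
    } else {
        set resp.http.X-Cache-Status = "MISS";
    }
}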

Next we use rsyslog to push the log file to our logstash agent. Use something like this in /etc/rsyslog.d/:

module(load="imfile" PollingInterval="10")

template(name="nginxAccessTemplate" type="string" string="%msg%\n")

input(type="imfile"
      File="/some/log/dir/nginx_json.log"
      Tag="nginx.access"
      StateFile="stat-file-api-nginx-access"
      Severity="info"
      Facility="user"
      ruleset="nginx-access")

ruleset(name="nginx-access") {
    action(type="omfwd"
           template="nginxAccessTemplate"
           Target="some-server"
           Port="5544"
           Protocol="udp")
}
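
rsyslog then needs a restart to pick up the new config; on a systemd-based system that would be:

# systemctl restart rsyslog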

As we have already written our log in JSON, with all the fields we need, only minimal config is needed in logstash:

input {
  udp {
    port => 5544
    codec => json
  }
}
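
For completeness, the matching output side is just as small; a minimal sketch, assuming an Elasticsearch node reachable at es-host (the hostname and index name are illustrative):

output {
  elasticsearch {
    hosts => ["es-host:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}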

That’s it! Instead of reading in the standard logs and then writing grok filters for each field, you do the formatting in NGINX: it’s easier, performs better and gives you more flexibility.

The ntpdate(8) tool

ntpdate is a useful tool for syncing the time with an NTP server. I have ntpd running on all my servers; it’s bundled with the base FreeBSD system. Just enable it in /etc/rc.conf:

ntpd_enable="YES"

And start the service:

# service ntpd start

However, with my FreeBSD 11 VM on my Surface Pro, the constant sleeping and hibernating throws the clock out to the point where network services stop working. It’s easy to update it when you need to be sure:

# ntpdate -b 1.freebsd.pool.ntp.org
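
Note that ntpdate cannot bind the NTP port while ntpd is running, so on a box with ntpd enabled you would stop it around the update:

# service ntpd stop
# ntpdate -b 1.freebsd.pool.ntp.org
# service ntpd start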