
Two of the most important considerations for any website owner are security and speed, and historically these goals have often been at odds. One of the most effective techniques for ensuring a consistent experience for end users is a caching layer. Varnish, the best-known caching layer, does not natively support SSL/TLS.

Luckily, by combining Varnish with a reverse proxy like nginx, we can take advantage of this powerful caching tool while still getting the SEO boost from serving only HTTPS content to the internet at large. Varnish works by examining the traffic passing through it and, based on a rules engine provided by the administrator, deciding what can be returned directly from RAM and what requires going back to the web application. For static sites this rules engine is very simple – if you have enough RAM, Varnish becomes roughly analogous to hosting your files on a big RAM disk. If you have a dynamic application, however, you can write Varnish rules to give it “hints” about what is okay to serve out of date and what isn’t. This guide will walk you through configuring nginx as a reverse proxy in front of Varnish on Ubuntu. For the purposes of this guide, Varnish will look to static content hosted on Apache for its content.
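At a high level, the request flow we will end up with looks like this (the ports shown are the ones used later in this guide):

Client --HTTPS (443)--> nginx (TLS termination) --HTTP (6081)--> Varnish (cache) --HTTP (8080)--> Apache (static content)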

Getting started

Install required software

apt-get update
apt-get install varnish nginx openssl
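This guide assumes the static content Varnish will cache is served by Apache on 127.0.0.1:8080, which is where the Varnish backend defined below points. If Apache is not already installed and listening on that port, a minimal setup on stock Ubuntu looks roughly like this (the paths are the Ubuntu defaults; adjust them to match your own layout):

apt-get install apache2

Change the listen port in /etc/apache2/ports.conf:

Listen 8080

Match the virtual host to the new port in /etc/apache2/sites-enabled/000-default.conf:

<VirtualHost *:8080>
    DocumentRoot /var/www/html
</VirtualHost>

Then restart Apache:

systemctl restart apache2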



Configuring Varnish

Let’s examine the Varnish configuration file at /etc/varnish/default.vcl

One of the most relevant portions of this configuration is where the backend is defined:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

This means Varnish will look to localhost on port 8080 for content, intelligently caching the pages it returns to requesting clients. If you’re serving static content, all that’s left is to set up nginx between the client and the Varnish caching proxy. If, however, you have some dynamic content you’d like to exclude, there is a rich VCL syntax that will allow you to customise the behaviour of Varnish. For large applications, you will want to make sure Varnish has an abundance of RAM – the more RAM it has, the more it can cache.

In order to exclude content, we can write rules inside the vcl_recv function in default.vcl. Let’s pretend you serve your static site at somesite.com, but that you have a business-to-business portal located at somesite.com/webapp. You might never want to cache anything from your web app, but always return your main site as fast as possible. This can be accomplished with the following VCL rule:

sub vcl_recv {
    if (req.http.host == "somesite.com" && req.url ~ "^/webapp") {
        # Never cache the portal; pass these requests straight to the backend
        return (pass);
    }
}

When you’re done making changes, keep a copy of your customised VCL and reload Varnish so that the new configuration is compiled and applied:

cp /etc/varnish/default.vcl /etc/varnish/user.vcl
systemctl reload varnish

Varnish listens on port 6081 by default, but this can be changed by modifying DAEMON_OPTS inside of /etc/default/varnish. Because we will be terminating the connection behind nginx anyway, port 6081 is fine for our purposes.
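For reference, the stock DAEMON_OPTS on Ubuntu looks roughly like the following. The -a flag sets the listen address and port, and -s controls how much RAM Varnish may use for its cache, which is the knob to turn if you want it to cache more. On newer, systemd-based Ubuntu releases these flags may instead live on the ExecStart line of the varnish service unit:

DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"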

By default, Varnish will cache objects for 2 minutes and serve the cached copy to the next client that requests them instead of going back to the web application. This can be overridden by setting beresp.ttl inside the vcl_backend_response block:

sub vcl_backend_response {
    set beresp.ttl = 5m;
}
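Before moving on to nginx, you can confirm that Varnish is answering on its port and fetching from the backend by requesting a page from it directly (assuming curl is installed):

curl -I http://localhost:6081/

The response headers should include Age and X-Varnish; an Age greater than zero on a repeated request means the object was served from the cache rather than from Apache.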



Terminating with nginx

Next we want to configure nginx to proxy client connections over to Varnish. For the purposes of this guide, we will generate a self-signed certificate, but on an internet-facing server this is where you would generate a CSR and get it signed by a trusted certificate provider.

Issue:

mkdir /etc/nginx/ssl
chown -R www-data:www-data /etc/nginx/ssl
chmod 700 /etc/nginx/ssl
cd /etc/nginx/ssl

openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/nginx/ssl/server.key -out /etc/nginx/ssl/server.crt
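If you are requesting a certificate from a trusted provider instead, the equivalent step is to generate a private key and a CSR rather than a self-signed certificate, for example (the filenames here are just placeholders):

openssl req -new -newkey rsa:2048 -nodes -keyout /etc/nginx/ssl/server.key -out /etc/nginx/ssl/server.csr

You would then submit server.csr to the certificate provider and point ssl_certificate at the certificate (and any intermediate chain) they issue.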



Create a file in /etc/nginx/sites-available named varnish.conf and populate it with the following, replacing the domain name with your own:

server {

    listen 443 ssl;
    server_name yourdomain.com;
    ssl_certificate           /etc/nginx/ssl/server.crt; ## Your Certificate
    ssl_certificate_key       /etc/nginx/ssl/server.key; ## Your Certificate Private Key

    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log            /var/log/nginx/access.log;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      proxy_pass          http://localhost:6081;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:6081 https://yourdomain.com;
    }
}
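Since the goal is to serve only HTTPS content to the internet at large, you will likely also want plain HTTP requests on port 80 redirected to the secure site. A minimal additional server block for this (it can live in the same varnish.conf, using the same placeholder domain) looks like:

server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$host$request_uri;
}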

Create a symlink from sites-available to sites-enabled in order to activate your configuration:

ln -s /etc/nginx/sites-available/varnish.conf /etc/nginx/sites-enabled/varnish.conf

Restart Nginx and test

sudo systemctl restart nginx
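A quick way to test the whole chain from the server itself is to request the site over HTTPS with curl, skipping certificate verification because the certificate is self-signed (replace yourdomain.com with the name you configured, or use localhost):

curl -kI https://yourdomain.com/

You should see the content Apache is serving, delivered via Varnish and nginx; the Age and X-Varnish response headers indicate whether the object came from the Varnish cache.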


Author: Paul Baka