Sooner or later, as it gains users, every project runs into the problem of slow server response times. One of the most effective ways to improve website performance (this article focuses on high-load Symfony2 sites) is to put the Varnish caching proxy server in front of the application and use Edge Side Includes (ESI).
Wait. What is Varnish? It is an HTTP accelerator (a caching reverse proxy) designed for high-load dynamic websites. What is Symfony2? It is a PHP web framework that works well behind such a proxy.
The documentation describes how to install Varnish on popular Linux distributions or build it from source: Varnish is free software, available under the BSD license. At the time of writing, the stable release of Varnish is 4.0.3. Here is an example installation on Ubuntu 14.04 (Trusty Tahr):
$ apt-get install apt-transport-https
$ curl https://repo.varnish-cache.org/GPG-key.txt | apt-key add -
$ echo "deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0" >> /etc/apt/sources.list.d/varnish-cache.list
$ apt-get update
$ apt-get install varnish
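As a quick sanity check (not part of the original setup), you can ask the daemon for its version:

$ varnishd -V    # should report varnish-4.0.x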
After installation the following command-line utilities are available (a short usage example follows the list):
- varnishadm is used to administer the varnishd daemon.
- varnishhist displays a histogram of requests.
- varnishlog displays the detailed request log.
- varnishncsa outputs logs in Apache/NCSA format.
- varnishstat shows cache usage statistics.
- varnishtest is a utility for testing.
- varnishtop shows the most frequently occurring log entries.
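For example, to get a quick feel for how the cache is behaving you might run something like this (the counter names are from Varnish 4 and may differ in other versions):

# One-shot dump of the hit/miss counters
$ varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss

# Watch requests and responses as they pass through Varnish
$ varnishlog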
The Varnish reverse proxy is configured with a domain-specific language called VCL (Varnish Configuration Language). Here is the configuration file /etc/varnish/default.vcl:
vcl 4.0;

backend default_server {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    if (req.http.Cookie) {
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(PHPSESSID)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
    }

    // Add a Surrogate-Capability header to announce ESI support.
    set req.http.Surrogate-Capability = "abc=ESI/1.0";

    return (hash);
}

sub vcl_backend_response {
    // Check for ESI acknowledgement and remove the Surrogate-Control header.
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }
}
- default_server describes a backend server. In this case there is a single backend: the Symfony application on port 8080. Varnish itself listens on port 6081 by default; that can be changed in the daemon configuration file /etc/default/varnish (see the sketch after this list).
- vcl_recv is called at the beginning of each request.
- vcl_backend_response is called when a response has been received from the backend.
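For reference, the relevant part of /etc/default/varnish on Ubuntu looks roughly like the sketch below; the admin port and storage size are assumptions, adjust them to your setup.

# /etc/default/varnish
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

Here -a is the address and port Varnish listens on, -T the management interface, -f the VCL file to load and -s the cache storage backend.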
It is worth noting that HTTP caching only works with "safe" methods such as GET and HEAD. "Safe" here means that the method does not change state (no editing or deleting). You should never cache state-changing methods (POST, PUT, DELETE, PATCH): if such a request were answered from the cache, it would never reach the application on the backend and the change would simply not happen.
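If you want to make this rule explicit in VCL, one way (a small sketch, not part of the configuration above) is to bypass the cache for anything that is not GET or HEAD:

sub vcl_recv {
    // Never cache state-changing methods; hand them straight to the backend.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    // ... the rest of vcl_recv ...
}

Varnish's built-in VCL behaves this way on its own, but because the configuration above ends vcl_recv with return (hash), the built-in checks are skipped, so an explicit check like this can be useful.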
At this stage Varnish configuration is complete.
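Before moving on, it is worth checking that the VCL compiles and then reloading the daemon (standard commands on Ubuntu; paths as above):

# Compile the VCL to C to catch syntax errors without touching the running daemon
$ varnishd -C -f /etc/varnish/default.vcl > /dev/null

# Restart Varnish so it picks up the new configuration
$ sudo service varnish restart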
If you go to http://localhost:6081/, you should see something like this:
Since Symfony2 is not configured yet, we get a 503 error. At the time of writing, the stable Symfony release is 2.7, so the configuration below corresponds to that version. You can read about the installation process here. To start with, override a few parameters in app/config/config.yml:
framework:
    esi: { enabled: true }
    trusted_proxies: ['127.0.0.1']
    fragments: { path: /_fragment }
The purpose of these parameters is easy enough to guess from their names: esi: { enabled: true } activates ESI support, trusted_proxies lists the proxy servers whose forwarding headers Symfony should trust, and fragments defines the path used to generate content fragments. After that we need to set cache parameters for the individual actions.
<?php

namespace AppBundle\Controller;

use Sensio\Bundle\FrameworkExtraBundle\Configuration\Cache;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class DefaultController extends Controller
{
    /**
     * @Route("/app/example", name="homepage")
     * @Cache(smaxage=30)
     */
    public function indexAction()
    {
        return $this->render('default/index.html.twig');
    }
}
For more convenient, centralized configuration you can use FOSHttpCacheBundle. Now, if you go to http://localhost:6081/, you should see the content of the test controller with headers like these in the response:
Age: 7
Cache-Control: public, s-maxage=30
Age is the age of the cached copy, in seconds. Cache-Control: public indicates that the content is publicly cacheable (shared proxy servers may store it, and it is the same for all users), while s-maxage defines its lifetime in shared caches, in seconds. An alternative is to replace @Cache(smaxage=30) with @Cache(maxage=30); the response then becomes private, meaning shared proxies may not cache it and only private caches (browsers) may.
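As a quick illustration of that alternative (the route and template are simply the ones used above):

/**
 * @Route("/app/example", name="homepage")
 * @Cache(maxage=30)
 */
public function indexAction()
{
    // The response is now marked private (roughly "Cache-Control: max-age=30, private"),
    // so Varnish will not store it, but browsers may.
    return $this->render('default/index.html.twig');
}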
For dynamic fragments of content you should use ESI. Here is the general workflow:
First the user requests a resource; if the body of the response contains special tags of the form <esi:include src="/..." />, the caching server makes additional requests to the backend to fetch those fragments of content. Symfony2 provides a Twig function, render_esi, for outputting ESI tags. As a parameter you can pass a ControllerReference (e.g. {{ render_esi(controller('AppBundle:News:latest', { 'maxPerPage': 5 })) }}) or a URL ({{ render_esi(url('latest_news', { 'maxPerPage': 5 })) }}). After adding the ESI fragment, the template app/Resources/views/default/index.html.twig looks like this:
{% extends 'base.html.twig' %}

{% block body %}
    Homepage.
    {{ render_esi(controller('AppBundle:Default:esiFragment')) }}
{% endblock %}
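When Varnish announces ESI support through the Surrogate-Capability header, render_esi does not render the fragment inline; instead the backend returns an include tag pointing at the /_fragment path configured earlier, roughly like this:

<!-- what Varnish receives in place of the fragment (simplified;
     the real URL also carries a signed reference to the controller) -->
<esi:include src="/_fragment?_path=..." />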
You also need to add the esiFragment action to the controller:
/**
 * @Cache(smaxage=15)
 */
public function esiFragmentAction()
{
    return new Response('esi fragment');
}
In this case the main page generated by indexAction is cached for 30 seconds, while the esiFragmentAction fragment is cached for 15 seconds. To compare the results, let's run a quick benchmark:
With Varnish: $ ab -c 10 -n 1000 http://localhost:6081/app/example
Without Varnish: $ ab -c 10 -n 1000 http://localhost:8080/app/example
The results show 9197 requests per second with Varnish against 143 without it, roughly a 64-fold difference. As you can see, the Varnish web accelerator is a very effective way to take load off the backend and speed up responses.