Optimizing WordPress performance with Amazon EFS


Many organizations run content management systems (CMS) like WordPress as single-node installations, but could benefit from a multi-node deployment, a best practice that improves both performance and availability. The reliability pillar of the AWS Well-Architected Framework recommends the following design principle: “Scale horizontally to increase aggregate system availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall system. Distribute requests across multiple, smaller resources to ensure that they don’t share a common point of failure.” This blog post discusses how Amazon Elastic File System (Amazon EFS) can be used as a shared content store for highly available WordPress deployments, and presents optimization tips to improve your site’s performance.

One approach for running a multi-node WordPress site is to store files in a central location and download this data during the bootstrap process. While this can work, it makes it more difficult to ensure that content stays synchronized as your website evolves. Using a shared file system like Amazon EFS allows multiple nodes to access WordPress files at the same time. This can significantly simplify the processes of scaling horizontally and updating your website.

Understanding page load time

Amazon EFS provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is a regional service, storing data within and across multiple Availability Zones for high availability and durability.

As with any networked file system, there is an overhead associated with the network communication between the client and the server. This overhead is proportionally larger when operating on small files from single-threaded applications. Multi-threaded I/O and I/O on larger files can often be pipelined, allowing network latencies to be amortized over a larger number of operations.

Say, for example, that a PHP website needs access to 100 small files to generate a home page; these could be PHP files, includes, modules, and so on. If each file read adds a small, low single-digit millisecond latency before the content reaches the PHP parser, your users will experience an additional delay of a few hundred milliseconds when accessing your website (100 files × a few milliseconds each). This matters because users have little tolerance for slow-loading pages.
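The arithmetic above can be sketched as a quick back-of-the-envelope calculation; the 3 ms per-file figure below is an assumed round-trip latency, not a measured one:

```shell
# Back-of-the-envelope: serial reads of many small files accumulate latency.
files=100     # files read to render the page
per_ms=3      # assumed per-file network latency, in milliseconds
total_ms=$((files * per_ms))
echo "${total_ms} ms added before the first byte"   # -> 300 ms added before the first byte
```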

This diagram shows the different steps that must be taken for a page to load. Each step introduces additional latency. We are concentrating on how to optimize your WordPress website to run on Amazon EFS and serve the ‘first byte’ as soon as possible.

[Diagram: steps for a page to load]

Latency impact

PHP is an interpreted language. The interpreter must read, interpret, and compile the code of your application for each request made to it. For a simple <?php echo "Hello world" ?> the interpreter needs access to a single file, but if you are running a CMS like WordPress it may need to read hundreds of files before it can generate a page. To give you an idea, the PHP interpreter reads 227 files in sequence before generating the “Welcome” page of a newly installed WordPress.

To demonstrate the performance a web server experiences when retrieving files serially over the network, I created a fresh out-of-the-box “Welcome” page from a newly installed WordPress (v5.4) website. The website ran on a t2.medium Amazon EC2 instance. The WordPress directory was stored in an Amazon EFS file system I created using all the defaults (General Purpose performance mode and Bursting throughput mode). In addition, the directory was mounted using the Amazon EFS mount helper, which by default uses the recommended mount options. Once the setup was complete, I ran several tests. The first test loads the default “Welcome” page, and tests #2 to #5 load static files of various sizes.
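For reference, a mount like the one described can be expressed as an /etc/fstab entry using the mount helper's efs file system type; the file system ID below is a placeholder:

```
# /etc/fstab — mount the EFS file system with the recommended defaults
# (fs-12345678 is a placeholder; the 'efs' type is provided by amazon-efs-utils)
fs-12345678:/ /var/www/html/wordpress efs _netdev,tls 0 0
```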

The time-to-first-byte (TTFB) metric is useful for measuring the results of each test. When someone opens a website, the browser asks the server for information; this is known as a ‘GET’ request. TTFB is the time it takes for a browser to receive the first byte from the web server. Ideally, we would like the TTFB to be as small as possible. I ran each test on the same Amazon EC2 instance that was hosting the WordPress install so that there wouldn’t be any network latency interfering with the results.

I used the Linux curl command to run these tests. For each test, I recorded 250 samples of curl’s ‘time_starttransfer’ value. This is a sample of the bash script I used to capture the TTFB under different conditions:

#!/bin/bash
# Record 250 time-to-first-byte samples for the WordPress home page
for i in {1..250}
do
    curl -o /dev/null \
         -s \
         -w "%{time_starttransfer}\n" \
         http://127.0.0.1/wordpress/ >> /tmp/wordpress-efs-ttfb.txt
done
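The recorded samples can then be reduced to a mean TTFB with awk; in the sketch below, a few demo values stand in for the real /tmp/wordpress-efs-ttfb.txt produced by the script above:

```shell
# Average the time_starttransfer samples (seconds) into a mean TTFB in ms.
# Demo values stand in for the 250 samples captured by the script above.
SAMPLES="$(mktemp)"
printf '0.003\n0.005\n0.004\n' > "$SAMPLES"
avg=$(awk '{ sum += $1; n++ } END { printf "%.1f", sum / n * 1000 }' "$SAMPLES")
echo "average TTFB: ${avg} ms"    # -> average TTFB: 4.0 ms
```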

These are the test results:

| Test | GET operation | Bytes received | Files read | Average TTFB |
|------|---------------|----------------|------------|--------------|
| 1    | wordpress/    | 3 KB           | 227        | 759 ms       |
| 2    | hello.txt     | 12 B           | 1          | 3 ms         |
| 3    | small-file    | 1 MB           | 1          | 5 ms         |
| 4    | medium-file   | 10 MB          | 1          | 5 ms         |
| 5    | large-file    | 100 MB         | 1          | 6 ms         |

[Chart: latency impact]

The latency is relatively consistent for the static files, regardless of file size. The dynamically generated page takes much longer to load because PHP fetches files sequentially, accumulating latency before the page is served to the browser.



OPcache to the rescue

Zend OPcache improves PHP performance by storing precompiled script bytecode in shared memory, removing the need for PHP to load and parse scripts on each request. This extension is bundled with PHP 5.5.0 and later, and is available in PECL for PHP versions 5.2, 5.3 and 5.4.

I ran the same “GET wordpress/” test, but this time with OPcache enabled, to determine how much latency can be eliminated by not having to read from disk and compile the code on every request.

You can use the opcache.revalidate_freq setting together with opcache.validate_timestamps=1 in the OPcache configuration to control how often cached PHP bytecode expires, forcing a new round trip to Amazon EFS. In this example, I set a revalidation frequency of 15 minutes (900 seconds). Remember that with OPcache, as with any caching system, you are trading instant visibility of changed files on the shared file system for speed.

; Enable Zend OPcache extension module
zend_extension=opcache
; Determines if Zend OPCache is enabled
opcache.enable=1
; The OPcache shared memory storage size.
opcache.memory_consumption=128
; The amount of memory for interned strings in Mbytes.
opcache.interned_strings_buffer=8
; The maximum number of keys (scripts) in the OPcache hash table.
opcache.max_accelerated_files=4000
; The location of the OPcache blacklist file (wildcards allowed).
opcache.blacklist_filename=/etc/php.d/opcache*.blacklist
; When disabled, you must reset the OPcache manually or restart the
; webserver for changes to the filesystem to take effect.
opcache.validate_timestamps=1
; How often (in seconds) to check file timestamps for changes to the shared
; memory storage allocation. ("1" means validate once per second, but only
; once per request. "0" means always validate)
opcache.revalidate_freq=900

These are the test results:

| Test | GET operation                | Bytes received | Files read | Average TTFB |
|------|------------------------------|----------------|------------|--------------|
| 1    | wordpress/ (without OPcache) | 3 KB           | 227        | 759 ms       |
| 2    | wordpress/ (with OPcache)    | 3 KB           | 227        | 22 ms        |

Improvement: 35X

This is the phpinfo() function showing OPcache metrics. Note how it has now cached in memory all 227 scripts that WordPress requires to display the homepage.

[Screenshot: OPcache metrics from phpinfo()]
[Chart: decrease in latency]

There is a 35X improvement when using OPcache with files stored on Amazon EFS (from 759 ms to 22 ms).

Static content caching

PHP renders dynamic pages, while the web server serves them over HTTP(S). A web server also provides access to static files like images, CSS, and JavaScript. These files can be served directly from Amazon EFS, and the user will experience low single-digit millisecond latency. However, for websites that load hundreds of static files sequentially, the sum of these latencies can increase page load times.

A solution to this problem is to cache static files somewhere, such as a content delivery network (CDN) like Amazon CloudFront, or local storage. If you are using Apache, you can look at the mod_cache_disk module, which implements a disk-based storage manager for mod_cache. The headers and bodies of cached responses are stored separately in a location that you specify. This configuration avoids a network round trip to the shared file system when requests are served from the cache.

For demonstration purposes, I prepared a configuration file that tells Apache to cache non-PHP files and revalidate them 15 minutes after the first access. The cached files are stored in /var/cache/httpd/proxy, a location backed by local disks. When a new request arrives, Apache first checks whether the file has been cached. If the cached copy hasn’t expired, Apache serves it from there; otherwise, it fetches the file from the wwwroot folder that sits on top of Amazon EFS. The Apache cache can easily be flushed or kept within a size limit using the htcacheclean utility.

# Directory on the disk to contain cached files 
CacheRoot "/var/cache/httpd/proxy"
# Cache all
CacheEnable disk "/"
# Enable cache and set 15-minute caching as the default
ExpiresActive On
ExpiresDefault "access plus 15 minutes"
# Force no caching for PHP files
<FilesMatch "\.(php)$">
    ExpiresActive Off
</FilesMatch>
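The htcacheclean utility mentioned earlier can be run periodically to keep the cache within bounds; a sketch as a cron entry follows (the cache path and size limit are assumptions to adjust for your layout):

```
# /etc/cron.d/htcacheclean — prune the Apache disk cache hourly to 512 MB
# (-n runs at lower priority, -t removes empty directories)
0 * * * * root htcacheclean -n -t -p /var/cache/httpd/proxy -l 512M
```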

Once the server was configured, I ran the ‘hello.txt’ test again, comparing results every 15 minutes.

| Test | GET operation                          | Bytes received | Files read | Average TTFB |
|------|----------------------------------------|----------------|------------|--------------|
| 1    | GET hello.txt (without mod_cache_disk) | 12 B           | 1          | 3 ms         |
| 2    | GET hello.txt (with mod_cache_disk)    | 12 B           | 1          | 0.6 ms       |

Improvement: 5X
[Chart: decrease in latency]

After caching is enabled, there is a 5X improvement when serving static assets stored on Amazon EFS (from 3 ms to 0.6 ms).

One last note: these settings may be enough for many WordPress installations, but in some cases you have to optimize further. For example, some plugins can be configured to write logs. Ideally, you want these log files written to local disk to avoid a network round trip to the shared file system, potentially every time a line is added to the log.
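One way to do this, assuming a plugin writes to a file such as wp-content/debug.log, is to move the log to local storage and leave a symlink behind. The demonstration below uses temporary directories as stand-ins for the EFS-backed wp-content directory and a local log directory:

```shell
# Stand-ins: on a real host, WP_CONTENT would be the EFS-backed
# wp-content directory and LOG_DIR a directory on local disk.
WP_CONTENT="$(mktemp -d)"
LOG_DIR="$(mktemp -d)"
echo "plugin log line" > "$WP_CONTENT/debug.log"

# Relocate the log to local disk, leaving a symlink so the plugin
# keeps writing to the same path without any network round trip.
mv "$WP_CONTENT/debug.log" "$LOG_DIR/debug.log"
ln -s "$LOG_DIR/debug.log" "$WP_CONTENT/debug.log"

readlink "$WP_CONTENT/debug.log"   # prints the local-disk path
```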

Conclusion

In this blog post, I discussed how the AWS Well-Architected Framework recommends scaling workloads out horizontally, and how Amazon EFS is a good choice as the shared network file system for WordPress workloads, because it allows multiple nodes to access WordPress files at the same time, simplifying scaling and deployment. I hope that my demonstration of using caching techniques to improve performance by 5X for static files and 35X for dynamic files proves valuable.

Thanks for reading, please leave any comments or questions you may have in the comments section.

