WordPress Super Cache Benchmark – Locust IO Load Testing


WordPress Super Cache benchmark and load testing continues. I came across Locust, an open source Python-based load testing tool that supports distributed master and slave load testing against a target web site (this blog). I set up a master and slave Locust load tester cluster with two dedicated servers I have – a Las Vegas, Nevada Intel Xeon E3-1240v3 32GB, 250GB SSD server as the master Locust server and a Montreal, Canada Intel Xeon E3-1245v2 32GB, 3x120GB SSD server as the slave.

  • Las Vegas, Nevada Intel Xeon E3-1240v3 32GB, 250GB SSD server running CentOS 6.6 64bit as the master Locust server – ping times to this blog = ~18ms
  • Montreal, Canada Intel Xeon E3-1245v2 32GB, 3x120GB SSD server running CentOS 6.6 64bit as the slave – ping times to this blog = ~70ms
  • This blog is served from a 2GB KVM-based DigitalOcean VPS with 2 CPU cores on CentOS 7.0 64bit, located in San Francisco

It’s my first time using Locust, so I’m still playing with settings and understanding how it all works. I set the load test at 10,000 concurrent users with a 250 user hatch rate (users spawned per second). Once it reaches 10,000 concurrent users, it resets the stats and continuously load tests the target site (this blog) at the 10,000 concurrent user level.
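At that hatch rate, the ramp-up to full concurrency is quick; a back-of-envelope check:

```python
# Ramp-up time to full swarm size at the hatch rate used in this test
total_users = 10000   # target concurrent users
hatch_rate = 250      # users spawned per second

ramp_seconds = total_users / hatch_rate
print(ramp_seconds)   # 40.0 seconds to reach full concurrency
```

So the swarm hits its 10,000 user ceiling after about 40 seconds, and everything after that is steady-state load.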

The target URL for the test is the WordPress November archive URL: http://wordpress7.centminmod.com/date/2014/11/
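A minimal locustfile for a single-URL test like this might look as follows. This is a sketch using the Locust 0.x-era API current at the time (HttpLocust/TaskSet were renamed in later Locust releases); the class names and wait times are my own assumptions, not from the actual test config:

```python
# locustfile.py - sketch of a single-URL Locust test (0.x-era API)
from locust import HttpLocust, TaskSet, task

class BlogTasks(TaskSet):
    @task
    def november_archive(self):
        # the archive page targeted in this benchmark
        self.client.get("/date/2014/11/")

class BlogUser(HttpLocust):
    task_set = BlogTasks
    host = "http://wordpress7.centminmod.com"
    min_wait = 1000  # ms between tasks - assumed values
    max_wait = 3000
```

You would then start Locust with `locust -f locustfile.py` and enter the user count and hatch rate in the web UI.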

Start New Locust swarm at 10,000 concurrent users with a 250 user Hatch rate

While Locust load testing at 10,000 concurrent users, serving a total of 100,000+ requests at 458.6 requests/second

 

Stopped Locust load testing

 

Newrelic stats for this blog peaking at 20Mb/s network I/O with only a 0.0383 to 0.171 CPU load average peak across 2 CPU cores

 

 

Nginx server usage stats

 

Pushing 20.1Mb/sec bandwidth and 3.13k packets/sec
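Those figures imply an average response size in the single-digit kilobytes, consistent with cached, gzipped HTML. A rough sanity check, assuming Mb/s here means megabits per second and pairing the bandwidth with the 458.6 requests/sec rate from the earlier screenshot (the two snapshots aren't from the exact same instant, so this is approximate):

```python
# Back-of-envelope average response size from the nginx bandwidth stats
bandwidth_bits = 20.1e6      # 20.1 Mb/s, assumed to be megabits per second
requests_per_sec = 458.6     # request rate from the earlier Locust run

avg_response_bytes = bandwidth_bits / 8 / requests_per_sec
print(round(avg_response_bytes))  # roughly 5.4 KB per response
```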

 

Disk I/O numbers

 

Update:

Re-reading the Locust manual and documentation, it seems you should start one slave instance per CPU core on slave servers, and by default the master doesn’t do any of the load testing at all. So in the config above, the slave server’s single instance was doing all the load testing. I redid the setup, reversing the roles with the Montreal server as master and the Las Vegas server as slave, and started the master with 3 slave instances plus the slave with 5 slave instances = a total of 8 slave instances to launch the 10,000 concurrent users from.
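With the 0.x-era Locust CLI, a distributed setup along those lines would be started roughly like this (a sketch: old releases used --master/--slave flags, which newer Locust versions renamed to --worker; MASTER_IP and the file name are placeholders, not values from the actual setup):

```shell
# On the Montreal master server: start the coordinating master process
locust -f locustfile.py --master &

# Still on the master box: 3 slave instances generating load locally
for i in 1 2 3; do
    locust -f locustfile.py --slave --master-host=127.0.0.1 &
done

# On the Las Vegas slave server: 5 slave instances pointing at the master
for i in 1 2 3 4 5; do
    locust -f locustfile.py --slave --master-host=MASTER_IP &
done
```

The master aggregates stats from all 8 slave instances and serves the web UI where the swarm is started.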

Remember, this blog is served from a 2GB KVM-based DigitalOcean VPS with 2 CPU cores on CentOS 7.0 64bit located in San Francisco, so what you’re seeing is the benefit of the WordPress Super Cache plugin + Nginx (Centmin Mod LEMP blend) being able to handle 10,000 concurrent users.

Same 10,000 concurrent users with a 250 hatch rate, but this time testing 3 target URLs: this blog’s index page, and the archive pages for November and December 2014.
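In a locustfile, spreading load across those three URLs could be expressed as three tasks. This is a sketch using the 0.x-era Locust API; the equal task weights are my assumption, since the post doesn’t say how traffic was split between the pages:

```python
# Sketch: Locust TaskSet hitting the three target URLs (0.x-era API)
from locust import HttpLocust, TaskSet, task

class BlogTasks(TaskSet):
    @task(1)
    def index(self):
        self.client.get("/")

    @task(1)
    def november_archive(self):
        self.client.get("/date/2014/11/")

    @task(1)
    def december_archive(self):
        self.client.get("/date/2014/12/")

class BlogUser(HttpLocust):
    task_set = BlogTasks
    host = "http://wordpress7.centminmod.com"
```

Each simulated user picks its next request from the weighted task list, so the 10,000 users end up distributed across all three pages.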

Locust http load testing across 3 target URLs of this blog at 10,000 user concurrency, with 582.5 requests/sec and response times averaging 144ms

 

Locust http load testing stopped after 125k total requests at 10,000 user concurrency

 

Newrelic stats for this blog while under Locust http load testing with peak of 24.7Mb/sec network I/O

 

The Centmin Mod LEMP web stack consists of Nginx, PHP-FPM, MariaDB MySQL and CSF Firewall on CentOS.

For more information on the Centmin Mod LEMP web stack install, check out the How to install Centmin Mod LEMP web stack on DigitalOcean Droplet Guide.


George

Owner / Creator at CentminMod.com
Centminmod.com LEMP web stack creator - auto installs Nginx, PHP-FPM, MariaDB MySQL + CSF Firewall on CentOS
