http://loadimpact.com/ offers stress testing for websites, with graphical output so that you can see how your site handles simultaneous connections from a number of visitors.
While how a site holds up under stress has a lot to do with the server, it isn't based only on the server resources available – it also depends on scripting, coding, whether a CDN is used, caching, and a whole host of other factors. Poor coding can double or triple the stress a massive rush of visitors puts on the server, compared with the same number of visitors hitting a well-coded, streamlined site.
http://loadimpact.com/ has serious testing of up to 50,000 users for $9.00 a day (yes, we said per day), but they have a taster stress testing offering called “Load Test Light” that will graph the load of 50 simultaneous users at no charge to you. (50,000 simultaneous users, by the way, will likely crash the server – if you’re getting that many simultaneous users on a shared hosting account, you shouldn’t be on a shared hosting account. Seriously.)
Let’s stress test my personal site, jenlepp.com, which is actually housed on one of our shared servers, Espeon (the box that we are currently doing new installs on). It’s a WordPress site with caching, though it’s fairly static, without a blog.
And this is what we come up with:
The result is a nice, even line with barely any fluctuation between 10 clients (1.5-second load time) and 50 clients (1.6-second load time), and for us, this is what we want to see. From LoadImpact’s FAQ, “How to interpret graphs?”:
What if my graph is completely flat?
This usually means you are nowhere near being able to stress the target system. If you try to run a load test on google.com you will get a flat curve. Their site is powerful enough that any change in response times as a result of the load we generate is all but impossible to measure. If your site runs on powerful servers, with lots of Internet bandwidth, or if your system is very efficient you can also get a fairly flat curve.
So, we want to see a nice, fairly flat curve when testing a shared hosting site – it means that with 50 people slamming the site at the same time, the page still loads relatively quickly for everyone, the system isn’t stressed, and our code isn’t causing any bottlenecks, though we could likely shave a few fractions of a second off that response time with a CDN.
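If you want a rough feel for what a tool like this measures, the idea can be sketched in a few lines of Python: fire N simultaneous requests at a URL with a thread pool and average the response times, then repeat at increasing client counts to see whether the curve stays flat. This is only a minimal sketch, not LoadImpact’s actual methodology, and you should only ever point it at a site you own.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_fetch(url):
    """Fetch a URL once and return the response time in seconds."""
    start = time.monotonic()
    with urlopen(url, timeout=30) as resp:
        resp.read()
    return time.monotonic() - start

def load_test(url, clients):
    """Hit `url` with `clients` simultaneous requests; return the average response time."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        times = list(pool.map(timed_fetch, [url] * clients))
    return sum(times) / len(times)

# Example usage (against your OWN site only):
# for n in (10, 20, 30, 40, 50):
#     print(f"{n} clients: avg {load_test('http://example.com/', n):.2f}s")
```

A flat set of averages from 10 through 50 clients corresponds to the flat curve described above; averages that climb with each step are the “creeping up” pattern we see later in the post.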
We’ll try one more time on an older box that’s been around a while – Espeon is a behemoth of a server, and our newest, while Blastoid is a bit older. We’ll test my husband’s site, mrlepp.com, which lives over on Blastoid.
This is also a great way to demonstrate the difference caching can make on a WordPress blog: while my nearly identical single-page WordPress site is running W3 Total Cache, my husband’s site is generated from PHP and database calls with no caching in use whatsoever, so each visitor kicks off a number of requests and processes.
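The mechanics behind that difference can be sketched generically: with page caching, the first visitor pays the full cost of PHP and database work, and everyone after that gets a stored copy until it expires. The sketch below is illustrative Python, not W3 Total Cache’s actual implementation – the render function, cache structure, and TTL value are all hypothetical stand-ins.

```python
import time

page_cache = {}   # path -> (rendered_html, timestamp); hypothetical in-memory store
CACHE_TTL = 300   # seconds before a cached page goes stale (illustrative value)

def render_page(path):
    """Stand-in for WordPress's PHP and database work (hypothetical)."""
    time.sleep(0.05)  # simulate the cost of queries and template rendering
    return f"<html><body>content for {path}</body></html>"

def get_page(path):
    """Serve a fresh cached copy when one exists; regenerate otherwise."""
    cached = page_cache.get(path)
    if cached and time.monotonic() - cached[1] < CACHE_TTL:
        return cached[0]            # cheap: no PHP, no database
    html = render_page(path)        # expensive: full page generation
    page_cache[path] = (html, time.monotonic())
    return html
```

Under load, the cached site does the expensive work roughly once per page, while the uncached site does it once per visitor – which is why the uncached graph creeps upward as clients pile on.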
And, as I suspected:
At 20 visitors it starts to climb a smidge, and between 40 and 50 visitors you see a slowdown of a full half second, creeping up at a fairly steady rate. Initially the response is about the same as on the larger server with caching, but as more people pile on, the site has to work a little harder and slows down by almost a full second. This is still a reasonably good result – an extra second isn’t likely to be that noticeable – but I’ve seen shared test results where sites slowed to 20 seconds at 20 visitors, so there’s a yikes for you.
Generally, most shared sites don’t see 50 people at once, but it’s always good to get an idea of what your site can tolerate, and to make the changes that are in your control (like caching) to prepare for unexpected sudden popularity. It’s also a great way to gauge the server you’re on and how much it can handle – if that line rockets up, there’s a problem somewhere (whether with the server or the code), and it’s a good idea to address it before it takes you by surprise.