Don’t Get Overrun When You Update
4.2.2 New Features: REFRESH Cache, Stale Cache Mechanism
1. How to REFRESH/PURGE cache manually
Synopsis: php purge_cache_byurl.php -(r|p) domain url [server_ip] [server_port]
-r: refresh cache (use stale cache while updating cache)
-p: purge cache (delete cache entry)
domain: domain name (required)
url: url (required)
server_ip: optional parameter, default is 127.0.0.1
server_port: optional parameter, default is 80
The script /usr/local/lsws/admin/misc/purge_cache_byurl.php can either purge or refresh the cache.
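For example, refreshing a site's homepage on the local server might look like this (example.com and the URL / are placeholder values; substitute your own domain and URL):

```shell
# Refresh the homepage cache entry for example.com on the local server
# (stale content is served while the new cache is generated).
php /usr/local/lsws/admin/misc/purge_cache_byurl.php -r example.com /

# Purge (delete) the same cache entry instead, with the optional
# server_ip and server_port arguments spelled out:
php /usr/local/lsws/admin/misc/purge_cache_byurl.php -p example.com / 127.0.0.1 80
```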
For a more detailed exploration of cache purging in LSWS, please refer to the LiteSpeed wiki entry on cache purging. (That entry only addresses purging the cache, but almost all of the information applies to refreshing the cache as well.)
2. Stale cache
Version 4.2.2 introduces stale cache. The stale cache age can be configured in the LSWS admin console under Server->Cache->Cache Stale Age. Here you specify for how many seconds stale content may be returned after the cache expires and before the new cache is available. The default is 10 seconds.
Why do we need stale cache?
We got an email from a customer named Marcos suggesting:
When we need to refresh the home page of our site, we do the purge … but we have 2,000 concurrent users on the home page. After the purge, the load grows from 0.15 to 200!
We need to refresh the cache of the home page like this:
1. We send the order to refresh instead of purge.
2. LiteSpeed replaces the old cache with a new cache generated from a single user's request (or generates it itself); meanwhile, while that request is being processed, LiteSpeed serves the old (cached) page to the 2,000 users.
This would be a very, very useful feature.
Thanks for the suggestion. We’re on it.
How does stale cache work in 4.2.2?
Stale cache addresses the issue brought up above by continuing to route extra requests (i.e. requests after the first request on an expired cache) to the expired cache for a certain number of seconds after the cache expires. This allows site users to see content (albeit very slightly outdated content) instead of waiting for the backend to process their requests.
Assume your homepage gets 200 requests/second and the backend (php + mysql) needs 5 seconds to generate a new page. The cache expiration time is set to 60 seconds and the stale cache age to 10 seconds.
1. In the first 60 seconds, 12,000 (200 x 60) requests are served by the cache. But then the cache expires…
2. The first request of the 61st second goes to the backend php + mysql to generate a response. This user will have to wait 5 seconds (plus a network latency of, let's say, 1 second) to see the new homepage.
3. During those 5 seconds, though, stale cache goes into effect and 999 (200 x 5 - 1) requests are served by the stale cache (instead of waiting around).
4. From the 66th second on, now that the new cache has been generated, all 200 requests/second are again served by the new cache.
Step 3 is the crucial step here. Without stale cache, LiteSpeed would have to start a PHP process for every request received before the new cache is generated (an extra 999 PHP processes in this example), which is why some customers see their load spike after a purge.
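The arithmetic in the walkthrough above can be checked with a quick shell calculation, using the example's numbers (200 requests/second, a 60-second cache, 5 seconds to regenerate):

```shell
RPS=200   # requests per second in the example
TTL=60    # cache expiration time in seconds
GEN=5     # seconds for php + mysql to generate a new page

# Requests served from cache before it expires:
echo $(( RPS * TTL ))       # 12000

# Requests served by the stale cache while the new page is generated
# (everyone except the one request that went to the backend):
echo $(( RPS * GEN - 1 ))   # 999
```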
In addition, setting up a cron job to refresh the cache before it expires ensures that none of your users has to wait. Because the refresh is triggered by the cron job rather than by a visitor request, all users get an immediate response from the stale cache while the new cache is being generated, and nobody gets stuck waiting (like that poor sap in step 2).
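As a sketch, such a cron job could invoke the same purge_cache_byurl.php script with the -r flag (the domain, URL, and schedule here are assumptions; adjust them to your own site and cache expiration time):

```shell
# Hypothetical crontab entry: refresh the homepage cache once a minute,
# matching the 60-second cache expiration in the example above.
# "example.com" and "/" are placeholders.
* * * * * php /usr/local/lsws/admin/misc/purge_cache_byurl.php -r example.com /
```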