mod_gzip, zlib.output_compression, or whatever method you use to compress your web pages is a great way of reducing your network traffic costs, but it comes at the price of increased CPU usage. Despite what you might think, it can be more expensive to send data uncompressed over the network, especially to slow clients, than to compress it first and send a smaller burst.
Unfortunately this little server may not be up to the task of gzipping content quickly enough to make it worthwhile. I’ll leave it running for another few hours and check the stats tomorrow.
It depends on how many sites you have on it and whether you’re doing any caching of the content on your end as well. I’ve found WordPress to be a real resource hog 🙁
I’ve found that you gain the most incremental compression at the lowest levels of compression. The more you compress, the harder you work for smaller and smaller compression gains. So I always set it to compress at the lowest level. That’s the best compression:CPU ratio.
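For anyone wondering where that knob lives: with PHP’s zlib.output_compression it’s a php.ini setting, and with Apache’s mod_deflate it’s a directive. A sketch of the “lowest level” setting described above (values illustrative):

```
; php.ini: enable output compression at the cheapest level (1 of 9)
zlib.output_compression = On
zlib.output_compression_level = 1
```

With mod_deflate the equivalent is `DeflateCompressionLevel 1` in the server config.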
Recycled from my High Performance WordPress session at WordCamp:
The important thing is to make sure you’re looking at the right measurement. File size reduction or CPU cycle reduction shouldn’t be your goals… your goal should be pages that load faster. Perceived load time. If your 80% size reduction is causing so much CPU strain that the time from click to load is actually higher, what’s the point? Bandwidth is cheap, but if a reader is frustrated by the time your pages take to load, you may lose them.
So I usually go with the lowest level of compression, because in my experience that makes the pages smaller without much (if any) CPU hit. But you have to test it… if your server is underpowered, it may be that even the lowest level of compression causes a CPU hit that impacts perceived load time.
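One quick way to test the size half of that trade-off is to run gzip at different levels over a saved copy of one of your pages. A rough sketch — the sample file here is made up; substitute any saved HTML page:

```shell
# Build a throwaway sample "page" -- substitute a real saved page here.
yes "Lorem ipsum dolor sit amet, consectetur adipiscing elit." | head -n 2000 > page.html

# Compress at the lowest, default, and highest levels and compare sizes.
for level in 1 6 9; do
  gzip -c -$level page.html > page.$level.gz
  echo "level $level: $(wc -c < page.$level.gz) bytes"
done
```

Wrap the loop in `time` to watch the CPU cost climb as the level rises while the size gains shrink.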
Thanks for the comments guys. I had to disable it again because this little server just couldn’t keep up.
Mark, you’re right. It’s the user experience that’s important. I found that browsing my own blog was slightly slower because the compressed output came down in one spurt. The uncompressed version appears as it’s generated by WordPress and the user sees a gradually loading page instead of “Waiting for http://ocaoimh.ie…” in their status bar. On a fast server it pays to compress, but here it feels slower.
CPU usage looks about the same on the graph provided by my linode, but when I’m logged in through ssh my session feels more responsive when compression is disabled.
If I have to buy more bandwidth, that’s a good problem to have!
Donncha,
Is the server yours or are you leasing? If you’re leasing would you mind tossing out the name of the company? I have a need for a fairly small dedicated server.
It’s a virtual private server from http://linode.com/, so everything is virtual. Works well enough but a little underpowered perhaps for my needs now.
Thanks 🙂
You wouldn’t by any chance also be able to recommend a decent dedicated server company? I’m still toying with the idea I emailed you about and am trying to figure out how much it’s going to cost to get started.
Sorry Andrew, haven’t a clue really, although Blacknight in Ireland are quite good. I get the feeling you’re looking for US-centric hosting though!
Oh, turned on compression again, but tuned MySQL so it uses less memory and therefore swap. May make a difference today.
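For the record, memory tuning on a small VPS mostly comes down to shrinking MySQL’s buffers. A hypothetical my.cnf fragment — the values here are illustrative for a ~150MB VPS, not a recommendation:

```
# my.cnf sketch for a low-memory VPS -- illustrative values only
[mysqld]
key_buffer_size  = 8M    # MyISAM index cache; defaults can be far larger
query_cache_size = 4M    # small query cache; 0 disables it entirely
max_connections  = 20    # fewer threads means less per-connection memory
sort_buffer_size = 512K  # allocated per sort, so keep it small
```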
Actually, wpmudev and all my other sites are on a server in a Hong Kong datacenter. So if I find a provider with a decent price and a fast connection I’d definitely give it a shot, no matter what country the server is in. Thanks for the help 🙂
I’m testing out mod_gzip on my experimental site. The experimental server is bandwidth limited but has a fast processor, so this seems to work well. I also found WP to be a resource hog and quite slow. I guess that’s the price you pay for a full feature set.
One other thing: are you setting your ‘tmp’ directory to point to a ramdisk? If not, you really should. This will help performance a lot.
I’m testing out the LiteSpeed web server now, which has built-in gzip compression, and it’s set to the lowest compression level. My VPS has 150MB of memory so I don’t think I can afford a RAM disk right now 🙂
Depending on your OS, you may be able to turn /tmp into a small ramdisk (say 4MB – most web pages will be less anyway); anything larger would be pushed into swap, and so you get back to normal HDD speed. If you limit mod_gzip to only compress pages of 2MB or less then it should all work out fine 🙂
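On Linux that trick is a one-line fstab entry, since tmpfs is RAM-backed but can spill into swap exactly as described above. A sketch (size and mode are illustrative, and remounting needs root):

```
# /etc/fstab -- mount /tmp as a small RAM-backed tmpfs
tmpfs  /tmp  tmpfs  size=4m,mode=1777  0  0
```

Then `mount /tmp` (or reboot) to activate it.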
Look at it this way…
90%+ of all browsers ASK for compression. Yes, that WILL add to your server’s CPU load… if you have EVERY item being compressed on the fly.
That is just stupid! Like a Windows thing!
What do you compress…
Put all your CSS files in one folder. Pre-compress that folder’s contents, and only add pre-compressed data to that folder. Route all requests for CSS files through a PHP script that checks whether the browser is ASKing for compressed data. If not, then kindly DECOMPRESS the file and return the decompressed data as the file: {echo $file_uncompressed}.
It is FASTER to uncompress than to compress…
If only 10% of your visitors are ASKing for UNCOMPRESSED data, you have just cut your compression CPU use down by 90% – and saved even more on top of that, because you are now UNCOMPRESSING on those few calls instead of COMPRESSING on every call.
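A common variant of this idea keeps the plain and pre-compressed copies side by side and lets Apache pick, instead of decompressing in PHP. A hypothetical .htaccess sketch using mod_rewrite and mod_headers (the file pattern and paths are illustrative):

```
# Pre-compress once, e.g.:  gzip -c style.css > style.css.gz
RewriteEngine On
# If the browser ASKs for gzip and a .gz twin exists, serve that instead.
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule ^(.+)\.css$ $1.css.gz [L]
<FilesMatch "\.css\.gz$">
    ForceType text/css
    Header set Content-Encoding gzip
</FilesMatch>
```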
Files NOT to compress…
JPG, BMP, PDF, GIF, PNG, ZIP, EXE, etc… (These are already compressed, or are just too random in content to gain a viable compression ratio.)
Files to compress…
txt, htm, html, php (non-code output), MySQL BLOB data, CSS, JS.
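With mod_deflate (Apache 2.x) that split is a couple of directives – compress the text-like types, and explicitly skip formats that are already compressed:

```
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript
SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png|zip|exe|pdf)$" no-gzip
```

mod_gzip has equivalent include/exclude directives (`mod_gzip_item_include` / `mod_gzip_item_exclude`).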
Using .htaccess and the proper php.ini files for your directories, you should be able to come up with a viable setup.
Personally, all my critical PHP code is in xxx.P files, which my .htaccess rewrites if they are accessed by anything other than my server. They are all below ROOT, in a non-standard folder. My html files are not processed as PHP where I am POSITIVE there will be no need for code (ToS, disclaimers, rules – stuff no one even reads). My htm files are ALL processed as PHP, since they MAY contain code, and should ALWAYS be sent compressed.
But that’s just me…
I wish Apache had a set folder that was ALWAYS compressed, and could only be added to with a special command, like MOVE(‘./compressme/*.*’, ‘./compressed/’)…
Simply fired as a cron job, or internally on a special C-FTP access. (So all files there would work in reverse, without special setups.)
If they ASK for GZIP, in that folder, it just transfers. Otherwise it UNGZIPs before sending. Essentially what I setup in the above paragraphs.
Same with the opposite end. They should have a special folder for all BINARY files… still seen in ROOT, but the physical separation tells Apache not to even think about altering the data in there for any reason related to compression or code… since those files would essentially be ALL-READ-ONLY. (They can only be removed or updated by file transfer, not internally edited in any way.)
Using gzip can save you HUGE amounts of bandwidth, and increase the overall performance of your webserver. It also makes a world of difference for people viewing your site on a dial-up modem.