If you’re wondering why trackbacks and pings aren’t working on your blog then you might want to do what I did earlier today: allow your blog to talk to other servers.
WordPress needs either allow_url_fopen set to On or the cURL extension loaded. If you’re having problems receiving pings from other blogs then both of these are probably turned off or missing. Wouldn’t it be nice if Options->Discussion warned that pings won’t work?
Look in your php.ini, or in the output of phpinfo(), to check for both. If you want to enable remote fopen, the entry in php.ini should look like this:
; Fopen wrappers ;
; Whether to allow the treatment of URLs (like http:// or ftp://) as files.
allow_url_fopen = On
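To check quickly from a shell, grep for both settings. A minimal sketch, using a sample file in /tmp as a stand-in for your real php.ini (the path and contents here are illustrative):

```shell
# Stand-in php.ini written to /tmp for illustration; grep your real
# php.ini instead.
cat > /tmp/php.ini.sample <<'EOF'
; Fopen wrappers ;
allow_url_fopen = On
extension=curl.so
EOF

# Either result is enough for WordPress: remote fopens allowed, or curl loaded.
grep -Ei 'allow_url_fopen|curl' /tmp/php.ini.sample
```

Against a real install, `php -i | grep -Ei 'allow_url_fopen|curl'` does the same job without hunting for the ini file.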
I switched to the Litespeed web server a while back, and by default allow_url_fopen is set to Off and the cURL library isn’t included. Check /opt/lsws/php/php.ini and make sure remote fopens are allowed!
Thanks Barry for helping me fix that.
PS. If you linked to this blog recently, feel free to save your post again. WordPress will ping my site again and this time the ping will get through.
Now, that’s why you can’t believe benchmarks. Sure, this server was able to serve 100,000 page views in 282 seconds but:
- Requests were made from a VPS in the same datacenter. No need to worry about slow clients, or maintaining network connections to many remote clients.
- I used Litespeed Web Server instead of Apache.
So is Litespeed’s web server the one to go for? Maybe not. I can’t for the life of me get compression of the static cache working: when I try, the browser displays the gzipped data directly instead of decompressing it. I can enable the web server’s own gzip function, but from my tests I don’t think it caches the resulting gzipped file. (By the way, mod_deflate, the Apache 2 module that does the same thing, suffers from this problem too!) Later: testing this again, I found that Litespeed allows you to set a gzip cache directory. For normal traffic it’s worth doing, so pages load faster.
The mod_gzip site is a great resource if you want to find out more about compressing HTTP content.
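For what it’s worth, the “browser displays the gzipped data” symptom usually means a compressed file is being sent without a Content-Encoding: gzip header. On Apache, a fragment along these lines tells the server that pre-compressed cache files are gzip-encoded HTML; the file pattern is illustrative (not from any plugin) and mod_headers must be enabled:

```apache
# Illustrative .htaccess fragment: serve pre-gzipped cache files as
# gzip-encoded HTML so the browser decompresses them instead of
# displaying or downloading the raw .gz data. Requires mod_headers.
<FilesMatch "\.html\.gz$">
    ForceType text/html
    Header set Content-Encoding gzip
</FilesMatch>
```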
How did Apache cope? With 100 concurrent requests, not too well. It did serve all the file requests eventually, but the load average jumped to just over 50 and the site was unavailable to anyone else. It will serve 1,000 requests for a static file fine, even 10,000, but under constant load the server starts to wilt: unless you have the RAM to keep enough Apache child processes running all the time, you’re going to start swapping.
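A load test like the one above can be reproduced with ApacheBench: the real run was roughly `ab -n 100000 -c 100 http://yoursite/` fired from a VPS in the same datacenter. As a stand-in that runs anywhere, here a throwaway local Python web server takes a handful of concurrent curl requests (the host, port, and request counts are illustrative):

```shell
# Throwaway local server standing in for the real site.
python3 -m http.server 8123 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

: > /tmp/load_test_codes.txt          # truncate the results file
PIDS=""
for i in 1 2 3 4 5 6 7 8 9 10; do
  # fire requests in parallel, recording only the HTTP status code
  curl -s -o /dev/null -w '%{http_code}\n' \
    http://127.0.0.1:8123/ >> /tmp/load_test_codes.txt &
  PIDS="$PIDS $!"
done
wait $PIDS                            # wait for the curls, not the server

kill "$SERVER_PID"
sort /tmp/load_test_codes.txt | uniq -c   # every request should be a 200
```

Swap the curl loop for `ab -n 100000 -c 100` against the local URL to get the sustained-concurrency behaviour described above, then watch the load average.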
Meanwhile, Litespeed hardly caused a blip in the server’s load average. I’m quite impressed and I’m running it now. It’s also what powers WordPress.com. Even if you’re not using WordPress, you should look at alternatives to Apache.
This leads me nicely on to announce WP Super Cache 0.4! Download it here!
Major new features include:
- A “lock down” button. I like to think of this as my “Digg Proof” button. This basically prepares your site for a heavy digging or slashdotting. It locks down the static cache files and doesn’t delete them when a new comment is made.
- Automatic updating of your .htaccess file. (Backup your .htaccess before installing the plugin!)
- Don’t super cache any request with GET parameters. You really need to use fancy permalinks now.
- WordPress search works again.
- Better version checking of wp-cache-config.php and advanced-cache.php in case you’re using an old one.
- Better support for Microsoft Windows.
- Properly serve cached static files on Red Hat/CentOS systems, or others that have an entry for gzip in /etc/mime.types.
- The Reject URI function works again and now uses regular expressions!
Support queries should go to the forum. Make sure your posts are tagged “wp-super-cache”; if you post from that link they will be tagged automatically.