The real way to improve server performance

If you want to improve server performance, the best way is to move as much of the processing as possible off it and onto the client machine. All those visitors of yours are running souped-up AMD and Intel CPUs with their big screens and fat hard drives. No wonder your little hosting plan can’t keep up. Here are some very good ideas from a Slashdot comment I read this morning.

  • Databases can get pretty slow with complicated queries, so upload your database to the client when they load the page and then your database queries are distributed.
  • PHP isn’t very fast, and neither are Perl or Python, so you don’t want to be running them on the server either. Write an interpreter for the language of your choice in Javascript and move your business logic to the client. This will also interface better with the client side database copy.
  • SSL is a performance killer; don’t use it. If you need to send something securely, just prefix it with a predetermined number of random letters and numbers; no one will think to look beyond them.
  • Writing to databases can be pretty bad too. Try discarding all your changes, your users might not notice the difference, but they will appreciate the performance gain.

Check out the original post for a few more invaluable nuggets. If you follow all these tips you’ll be well on your way to becoming a respected and l33t hacker.

And now the big news. I’m really excited about this. The next version of WordPressMU will have a special Javascript client-side db (JCSDB) library built in. JCSDB will enable distributed and parallel access to your WPMU db without the danger of harming your servers. The best thing about it? If your site is dugg or slashdotted, your visitors’ machines will handle the load transparently. Instead of using slow and ungainly TCP/IP, the library will use super-quick UDP to communicate. It really is the best way of sending data over the Internet.
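In case anyone is taking notes: the UDP part is where the wheels come off. UDP skips TCP’s delivery guarantees, ordering and congestion control, so a datagram can simply vanish in transit. A minimal Python sketch of a loopback round trip (the payload and names here are my own invention):

```python
import socket

# "Super-quick" UDP: no handshake, no delivery guarantee, no ordering.
# A datagram can be silently dropped, which is exactly the joke.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"post_id=42", ("127.0.0.1", port))

# Works reliably on loopback; on a real network this recvfrom()
# could block forever if the datagram is lost.
data, addr = receiver.recvfrom(1024)
print(data)  # b'post_id=42'
```

On localhost the datagram arrives; across the Internet your distributed database writes would be fire-and-forget.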

I expect Matt will roll out JCSDB on just as soon as a few final bugs are ironed out. It might be a bit of a headache for Barry and Demitrious to administer, but at least we can get rid of half our servers and use them to power a massive game of Counter Strike at the next WordCamp.

Update on May 31st! You all thought this was a joke, didn’t you? Well, Google Gears has just been released and “is an open source browser extension that enables web applications to provide offline functionality”.

  • Store and serve application resources locally.
  • Store data locally in a fully-searchable relational database.
  • Run asynchronous Javascript to improve application responsiveness.

It’s still in beta but Google Reader already supports it by allowing you to download up to 2000 items to read offline. This could be useful when I’m flying to SF next July!
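For the curious: Gears’ local store is SQLite under the hood. As a rough illustration of the idea (a local, fully-searchable relational database of downloaded items), here is a sketch using Python’s sqlite3 module; the table and data are invented for the example, not Gears’ real schema:

```python
import sqlite3

# Hypothetical sketch of Gears-style offline storage: cache feed items
# in a local SQLite database so they can be searched and read offline.
db = sqlite3.connect(":memory:")  # a real client would use a file on disk
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
db.executemany(
    "INSERT INTO items (title, body) VALUES (?, ?)",
    [("Gears released", "an open source browser extension"),
     ("Offline reading", "download up to 2000 items")],
)

# Fully searchable without a network connection:
rows = db.execute(
    "SELECT title FROM items WHERE body LIKE ?", ("%2000%",)
).fetchall()
print(rows)  # [('Offline reading',)]
```

The same query works at 35,000 feet, which is the whole point of the offline pitch.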

25 thoughts on “The real way to improve server performance”

  1. Some good points there.

    There is, however, the issue of JS being disabled for about 5% of users, although this is getting better (until recently, it was 10%). Still, this shows some exciting possibilities for improving performance.

  2. If you say it’s not all true… then you are lying.

    OK, databases are fast; they can work with millions of rows. Optimize your tables, use good indexes, optimize your queries!

    EXPLAIN SELECT something FROM table WHERE conditions.

    You can use a cache, but remember a cache can be good and can be bad.

    OK, PHP is slower than C++, but you can make applications very fast. Load objects only when you really need them; load data only when it is needed.

    OK, SSL makes performance worse… but you need it for security.

    A bad programmer can produce bad performance in any language.

    For example, here is how fast PHP and MySQL can be with millions of rows: (it’s in German… but you can try navigating the search)

  3. @Donncha: I got a good laugh out of the comments on Slashdot. It’s pretty obvious most people didn’t read the article, or have an idea what the series was about, or have any practical experience beyond their personal home page. In the first article, which unfortunately didn’t carry over to the second article, I wrote that these articles focus only on server configuration and not application configuration or architecture. In the third article, which should be posted soon, I reiterate this. I also said that I focus on LAMP because it’s popular, and if you want to use Lighttpd or whatever database/scripting language, go nuts.

    I also find it funny how many commenters here don’t get that the comment you referenced is a joke!



  4. My favorite from the list was this one:

    * Make your site less interesting, or less reliable. Changing your DNS entry to point to an unrelated site for one week a month can really help reduce load.

    @Vaidas: My WP site is as fast as, if not faster than, the site you suggested.

  5. I honestly don’t know how anyone could miss the humourous side of this article.

    I would say that the only serious way to have lightning-fast internet is for every computer to come preinstalled with the internet: no waiting for slow connections, no more ISP wars over who has the fastest connection, no more webhost wars over who has the biggest and best accounts at the lowest prices.

    The only problem is, how often would we be provided with updates: constantly, daily, weekly, or only with the newest and greatest computers when we upgrade?

  6. Looking forward to finally being able to get some Counterstrike running 😉

    I just changed my DNS to ocaoimh when the friends got together.

    On a more serious note: people could try working with HIB, lxadmin and lighttpd – that’s not at all a dumb way to go. I believe some of the big social sites use lighttpd.

  7. I guess reducing the scripts might reduce the load time of a web page.
    Like changing a page from tables to CSS surely cuts some seconds off the load.
    As for databases… I’m still learning…

  8. Pardon my ignorance, but is the bit about JavaScript true? If it is, how does this impact on regular ole users like me – does it mean we’ll now be able to use JS-based widgets and such?

    Sorry for dumb q’s, I haven’t the first clue about this stuff.

  9. Well, another possible improvement could be this:
    1. make a Java client, taking eMule/eDonkey/shareaza as a basis
    2. make that Java client somehow run in the background on each client’s machine
    3. the web-site must be available as a set of smaller portions (pages or several bundled pages)
    4. each time some user requests our highly-optimized web-site, one of two things happens:
    4a: if the user has no Java client, he gets it, and the request is re-sent
    4b: if the user has our Java client, that client intercepts the request to our web-site, fetches from our single Pentium-133 Unix server a list of about 20 recent users, and fetches the whole web-site from those users.

    This way, “our highly optimized website” will be just giving out the addresses of recent users to new users in a short while…. 🙂 🙂 🙂

    There are some technical problems here, obviously, but they can be overcome. The biggest problem is in getting feedback from the users in such a system design. I presume this could really be done using UDP for comments (because even if you lose that UDP packet, it’s just a comment), and using some IRC channel for other feedback, placing an IRC-to-PHP (or IRC-to-ASP, whatever you like more) bot at your end of the IRC channel. Clearly, each bundle of pages being distributed needs to be versioned, so that our smart Java clients can automatically (in the background, of course) check for newer versions from other users. Hmm, I think there needs to be some internet-wide broadcast from our Pentium-133 server, to let at least some of the Java clients know about the new page versions….

    Still a lot to do here, but that’s the way to go!

    🙂 🙂 🙂

  10. If you want a really really fast website, write the whole thing in Assembler or C!!

    And if you think that’s mad, have a look here. kHTTP is basically an HTTP server that runs as a kernel module in Linux!

  11. Implement the whole thing on a programmable logic device. Obviously. Expect VHDL snippets for dedicated functional units for sending user validation emails shortly!

  12. The possibilities of extensions to this idea are simply limitless – I’m thinking of taking my current website down completely and replacing the entire site with a single HTML file that can only be downloaded once to each IP address.

    Once downloaded it will sit in the client’s browser cache asking them to produce the content of my site – the ultimate in distributed computing experience 🙂

  13. They should just start using Gopher again. Use a stripped-down protocol to put an end to the rat race of having the nicest-looking page with the most Javascript popup menus.

    Have the client download any large images and such automatically via BitTorrent.

  14. So your servers have been slowed by lots of requests.

    Then do what SAP did in the 90s. The network connection with lots of queries takes up a lot of CPU power.

    So have two servers: all the network queries go to the first server, which sends them in a single stream to the second server containing the db. That server then has the majority of its CPU free to search for the info, and all the info flows faster.

  15. Absolutely awful. No grounding in real information here. Real databases can rarely be offloaded to the client; with JS, loss of application predictability results. Also, few sites or companies want to share their proprietary database with an “unknown” client in a way that they cannot protect.
