Speed up your Rails app’s page load

Back when I started developing for the web, in its earlyish days, we didn’t think much about response time, and if we did, we thought only about the actual server response time. Heck, we barely used any JS or CSS at all.

Now networks and computers are faster, expectations of page response time are higher, and ‘web pages’ use large amounts of JS and CSS in external files. On top of this, paradoxically, the rise of mobile and cellular networks means some people are on slower (and especially much higher-latency) connections.

Dealing with ‘page load time’ apart from server response time (mainly, minimizing delays from loading CSS and JS in external files) is something people are talking a lot about, and if you, like me, hadn’t thought much about it, it’s probably time to. Fortunately, especially if you’re using Rails, there are a few relatively straightforward things you can do with huge impact, probably a lot easier than profiling and optimizing your server response time, in fact.

The Chrome Dev Tools “Network” tab is a great way to look at your page load time and find the bottlenecks. Other browsers probably have similar tools these days (and if you wanted to go whole hog, you’d use such a tool in multiple browsers, since different browsers sometimes behave differently in how they load and process JS/CSS). But I just stick with Chrome; one step at a time.

You probably won’t see what’s really going on if you’re sitting at a workstation on the same local/internal network as your server, though. From my ordinary desktop everything looked quite speedy before I started, but from home, on a fairly slow shared consumer broadband line (in the same city as my server, but still over the internet), things were much, much slower. Fortunately, at my place of work we also have a DSL line, with a single computer on it we can remote into, to test things “from the internet”.

1. Make sure Apache is taking advantage of the Rails asset pipeline

The Rails 3.1+ Asset Pipeline should be great for speeding up page load time.

To begin with, it combines all (or most) of your CSS into one file, and likewise your JS. You might think this is enough: the browser can cache each as one file, and even if it later does a conditional GET, that’s just one request for the CSS and one for the JS. A great improvement, right? It might be, but in my limited observation, a conditional GET resulting in a 304 Not Modified could still take 200-400ms. Times two, one for JS and one for CSS. That ain’t peanuts.
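(For illustration, that round trip looks roughly like this; the fingerprint and ETag values here are made up:)

    GET /assets/application-1a2b3c4d.js HTTP/1.1
    If-None-Match: "1a2b3c4d"

    HTTP/1.1 304 Not Modified
    ETag: "1a2b3c4d"

No JS bytes get transferred, but you still pay for the full round trip, including connection setup if the connection can’t be re-used (see the KeepAlive section below).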

It really does pay to make sure you are sending far-future Expires headers for fingerprinted (aka “digested”) asset URLs, so browsers will use the copy from their cache without even doing a conditional GET check.
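Roughly, that means an Apache rule along these lines (a sketch only, assuming the standard public/assets location and mod_headers enabled; the regexp is deliberately narrow so only digest-named files match):

    # Only far-future-cache URLs whose filename contains an
    # asset-pipeline fingerprint (32 hex characters), e.g.
    # /assets/application-0123456789abcdef0123456789abcdef.css
    <LocationMatch "^/assets/.+-[0-9a-f]{32}\.(css|js)$">
      Header set Cache-Control "public, max-age=31536000"
    </LocationMatch>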

Of course, that only applies if the browser has a cached copy. You want things as fast as possible without that, too (first access; the user has deleted their cache, or the copies have expired out of it; you deploy a new version of the app with new fingerprints).

So you also want to make sure Apache is delivering the gzipped versions of CSS and JS that the asset pipeline creates for you; if you don’t do something to make this happen, those gzipped versions aren’t being used for anything. In my limited observation again, yes, this really matters. I’ve got a really big application.js, 300K unzipped (mostly jquery and jquery-ui). It was taking 800-1500ms (!) to fetch. Serving the gzipped version cut the file size to a third, and the overall transfer time in half.

I’ve got instructions for how to set up Apache for both in my last blog post (the gzip part is a bit experimental and tricky, but I think worth it).
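The gzip part, very roughly, looks something like this (a sketch only, not the tested version from that post; it needs mod_rewrite and mod_headers, and the Vary header matters so intermediate HTTP caches don’t hand gzipped bytes to clients that can’t handle them):

    # If the client accepts gzip and a precompiled .gz sibling exists
    # on disk, serve that file instead:
    <LocationMatch "^/assets/.+\.(css|js)$">
      RewriteEngine On
      RewriteCond %{HTTP:Accept-Encoding} gzip
      RewriteCond %{REQUEST_FILENAME}.gz -f
      RewriteRule ^(.+)$ $1.gz [L]
    </LocationMatch>

    # The rewritten .gz responses need the right Content-Type and
    # Content-Encoding, or browsers will ignore or mangle them:
    <FilesMatch "\.css\.gz$">
      ForceType text/css
      Header set Content-Encoding gzip
      Header append Vary Accept-Encoding
    </FilesMatch>
    <FilesMatch "\.js\.gz$">
      ForceType application/javascript
      Header set Content-Encoding gzip
      Header append Vary Accept-Encoding
    </FilesMatch>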

Downsides?

If things are set up right, essentially none: the asset pipeline is already setting you up for both of these things, so you might as well take advantage of it.

The only downsides come if things aren’t set up quite right, if the setup has a bug of some kind. The gzip stuff could cause your CSS or JS to not be properly interpreted by the user-agent and so be ignored, possibly only in edge cases involving HTTP caches and such (easy to fix once you discover it, but it might go undiscovered).

If the far-future Expires stuff is misconfigured, though, and a browser ends up far-future caching a file that isn’t fingerprinted, your users could have a broken experience until they manually delete their browser cache (even a hard refresh might not fix it), and there’s nothing you can do to fix it server-side. That’s kind of disastrous, which is why my Apache config suggestion above is fairly conservative, trying to make sure it only far-future-expires the right stuff.

2. Apache KeepAlive on

HTTP 1.1 (not new anymore!) allows a user-agent to keep an HTTP connection open and re-use it for multiple requests, if the server allows that. In Apache, the setting that controls whether the server will allow it is `KeepAlive`.

Many distros ship Apache with config files defaulting to `KeepAlive Off`.

Turn it on. This especially matters if your application is https/SSL-only, like mine. The SSL handshake is actually a pretty expensive part of the connection (200-400ms(!) on my own home connection yesterday), so making sure the browser can re-use a connection (at least to get the CSS and JS for the page they belong to) really does make a difference. (Browsers may also re-use certain SSL session parameters to make a second SSL connection faster even without keepalive; I don’t fully understand it!)
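Concretely, in your main Apache config (the exact file varies by distro; the values here are common defaults, tune to taste):

    KeepAlive On
    # How many requests one connection may serve (0 = unlimited):
    MaxKeepAliveRequests 100
    # How long, in seconds, an idle connection is held open. Keep it
    # short: every held connection ties up a slot and some RAM.
    KeepAliveTimeout 5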

With Passenger, at least, KeepAlive in Apache has the intended effect, and clients can re-use the connection they talk to your Rails app over. With other solutions that use Apache as a reverse proxy, I’m not sure.

Downsides?

Your Apache instance will have more connections open in its connection pool (so pay attention to the max-connections settings in Apache), and your Apache instance will use more RAM. That’s about it. It ought not to use more CPU or anything like that.

Passenger’s `passenger-memory-stats` command is a great way to see how much RAM your Apache is really using at present. (You need to run it as root and make sure it knows where your Apache httpd binary is; it’ll guide you through what’s needed when you run it.)
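That is, something like:

    # As root, so it can inspect all the processes:
    sudo passenger-memory-stats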

I’m not sure of the best way to check how many connections Apache currently has in its connection pool, and compare that to the configured maximum. That would be good to know, to keep an eye on things. (Apache’s mod_status “server-status” page looks like one candidate, though I haven’t tried it for this.)

But overall, KeepAlive ought not to cause any problems. Probably. And can speed things up a lot.

3. <script> async

So you’ve got a web page with a <script> tag in its <head>. Normally, when the browser gets to the <script src> tag, it will stop processing the page and go fetch the external script, and not until the entire script is fetched (and executed) will it continue, downloading other referenced assets, etc.

This is kind of unfortunate. Making the external script load faster, as in the sections above, will help, but really, we wonder, why can’t the browser download the script in the background while it continues doing its thing?

Well, it can, if you tell it to, with ‘async’: `<script src="…" async="async"></script>` (old style) or `<script src="…" async></script>` (HTML5). In Rails: `<%= javascript_include_tag 'application', :async => true %>`.

Why don’t browsers do this by default? Backwards compatibility, mainly: if your script includes a `document.write`, async can’t work. Yours probably doesn’t do that.
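(A hypothetical illustration of why:

    <!-- Run synchronously during parsing, this inserts the <img> right
         here in the document. Run from an async script after parsing
         has finished, document.write() implies document.open(), which
         historically blows away the entire page (some modern browsers
         instead just ignore the call with a warning). -->
    <script>
      document.write('<img src="/tracking-pixel.gif">');
    </script>

This pattern was common in old ad and analytics snippets, which is why browsers can’t just make every script async.)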

But also because it might increase the “flash of unstyled content” effect. With async, there can be a longer gap between when the page is visible and when the JS actually runs. This matters if, for instance, you have some JS that adds an “expand/collapse” control to the page, hiding some content that starts out visible in the DOM: it might show up visible for a second before the JS hides it. I do this.

Here’s someone describing this problem; his description doesn’t even talk about async, but it describes the basic problem well. Async may make it more visible than it was before, by making the gap longer. That page describes a clever workaround, which is hacky enough that you’ll be thinking “no way do I want to introduce that into my codebase.”

You may not have any JS that does such things (re-styling or hiding content that was visible or different pre-JS), in which case it’s not an issue. Or you can try a hacky workaround. Or you can fix your DOM so things are initially hidden or initially proper (possibly breaking your non-JS degradation/progressive enhancement; there’s a sketch of this direction below). Or you can just not use async. Or you can think of some other hacky workaround, like the one linked above or something different (divide your JS into separate files, one loaded async and one not? Ugh, no).
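For what it’s worth, here’s a sketch of the “initially hidden only when JS is present” direction (the `collapse-body` class name is made up; the trick is the tiny synchronous script that runs before first paint, so non-JS visitors still see everything expanded):

    <head>
      <script>
        // Synchronous and tiny: marks the document as JS-capable
        // before anything is painted.
        document.documentElement.className += " js";
      </script>
      <style>
        /* Pre-hide collapsible content, but only when JS will be
           around later to provide the expand/collapse control. */
        .js .collapse-body { display: none; }
      </style>
      <%= javascript_include_tag 'application', :async => true %>
    </head>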

The async attribute definitely speeds things up non-trivially for me on a first view, without a cached copy of the JS, even with keepalive and gzip in place. It saves a few hundred ms, which again ain’t peanuts. But it also makes some ‘flash of unstyled content’ problems worse for me; I haven’t yet decided what approach I’m going to take.

Here’s a great page talking about more tricks, gotchas, and workarounds with async JavaScript.

Downsides?

Depending on the nature of your JS, it can make “flash of unstyled content” problems visible that weren’t before, or worse than they were before. But only on page loads where the JS file wasn’t already in the browser cache, especially if you’ve got far-future Expires headers going on.
