Making sure that an application’s performance satisfies the needs of its users isn’t a “one and done” kind of task. You cannot expect that one pull request, commit, or configuration entry will solve all of your problems permanently. As the project changes and grows, you will need to follow the DMAIC cycle (Define, Measure, Analyze, Improve, Control) to continually improve your product. In this article we’ll look at some of the very first steps you need to take to set up a web application to perform well on Google’s Pagespeed Insights evaluation.

Define Your Goal

The first step we’re going to talk about in this article is the definition of our goal: we’re going to address as many of the concerns raised by Google’s Pagespeed Insights rules for high-performing web sites as we can. Pagespeed Insights, like Yahoo’s YSlow rules before it and many other web site optimization tools, collects the accumulated knowledge of what has allowed these companies to build web sites that deliver content as quickly as possible. The suggestions in these rule sets range from how you include your JavaScript and CSS in a site to improving the cache control headers delivered in your HTTP responses.

Your Initial Measurement

Before changing anything, run your site through Pagespeed Insights and record a baseline score that you can compare against after each improvement:

https://developers.google.com/speed/pagespeed/insights/

Cache Control Headers

The first thing we want to do is make sure our application sets cache control headers for its static assets. This configuration tells browsers visiting the site to hang on to assets like CSS, JavaScript, and images for one year.

In the following file:

config/environments/production.rb

Add:

# Cache static assets for one year (31,536,000 seconds).
# Note: config.static_cache_control was removed in Rails 5.1 in favor of
# config.public_file_server.headers.
config.public_file_server.headers = {
  'Cache-Control' => 'public, max-age=31536000'
}

One year seems like a very long time to hang on to files. What if those files are updated before the year is out? That’s where the Rails asset pipeline comes to the rescue. When your assets are compiled, a fingerprint — a long hexadecimal digest of the file’s contents — is appended to each file’s name. Whenever you update a file, a new digest is generated and the application serves the file under its new name. A browser re-visiting a site with updated assets will therefore download the new file with the new digest instead of reusing the old one from its cache. This strategy is sometimes referred to as cache-busting, and it’s very useful to understand how it works.
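For illustration, here is roughly what that looks like in practice. The digest below is invented; yours will differ, and it will change whenever the file’s contents change:

<%# In a layout such as app/views/layouts/application.html.erb %>
<%= stylesheet_link_tag 'application' %>

<%# The helper renders a link tag pointing at the fingerprinted file, something like: %>
<link rel="stylesheet" href="/assets/application-0149f4e01531604ea5f1818ae24e2a24.css" />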

The purpose of cache control headers is very simple when you think about it: they keep clients from spending time downloading material they already have on hand. That saves time rendering the site, and it saves on data transfer costs if your host charges you based on the amount of data transferred.
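Once the change is deployed, it’s worth confirming that the header is actually being returned. Here is a minimal Ruby sketch using Net::HTTP; the host and asset path are placeholders you would swap for your own:

require 'net/http'
require 'uri'

# Replace with your own domain and a real compiled asset path from public/assets.
uri = URI('https://example.com/assets/application-0149f4e01531604ea5f1818ae24e2a24.css')

response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  # A HEAD request is enough; we only care about the response headers.
  http.head(uri.request_uri)
end

# Expect something like: public, max-age=31536000
puts response['Cache-Control']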

Compression

Enabling compression is the other low-hanging fruit that leads to considerable performance gains. Just as packing a large file into a zip archive shrinks what has to be sent, enabling compression in the web server or application compresses each response before it goes over the wire, and the browser unpacks it for the user.

In the file:

config/application.rb

Add the line:

config.middleware.use Rack::Deflater

So that your application configuration looks something like this:

module ApplicationName
  class Application < Rails::Application
    # Initialize configuration defaults for originally generated Rails version.
    config.load_defaults 5.1
    config.middleware.use Rack::Deflater

    # Settings in config/environments/* take precedence over those specified
    # here. Application configuration should go into files in
    # config/initializers -- all .rb files in that directory are automatically
    # loaded.
  end
end
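Rack::Deflater only compresses responses for clients that advertise gzip support in their Accept-Encoding header, so it is safe to enable globally. As a quick check that it’s working, here is a minimal integration test sketch; the file path and the assumption that “/” is a routed page are mine, not part of the original setup:

# test/integration/compression_test.rb (hypothetical path)
require 'test_helper'

class CompressionTest < ActionDispatch::IntegrationTest
  test 'responses are gzipped for clients that accept gzip' do
    # Assumes the root path is routed; use any HTML-rendering route you have.
    get '/', headers: { 'Accept-Encoding' => 'gzip' }

    assert_equal 'gzip', response.headers['Content-Encoding']
  end
end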

Other Ways to Accomplish These Goals

A certain share of the readers of this article are probably ready to jump in and say, “You could also do this with NGINX, Varnish, or Apache, and it would be better…”

True. I have set up all of those tools to accomplish the same goals many times before.

But here’s what I want you, and many other people, to consider: a lot of the Ruby developers I have worked with haven’t done a deep dive into NGINX yet. Some projects will never have someone around to dive into Apache and come up with the ideal caching strategy. What we want to do is give people the ability to do something good, if not perfect.

90/100 vs 100/100