You Don't Have to Be Superman to Achieve a Super PageSpeed Score
Many articles about improving page speed focus on somewhat contrived or idealized examples. That is not what this article will do. Instead, the focus will be on a real-world example in which a poorly performing website was vastly improved to the point that it frequently achieved PageSpeed Insights (PSI) scores of 100 or nearly 100 on mobile. Here is a screenshot taken today of one of the pages showing a score of 99 on mobile:
Main Challenges
Since this was a real website, there were a number of challenges common to many websites, though each of them had a solution. These were the main ones:
Angular This website was built with Angular, a JavaScript framework. While Angular is amazing for creating very sophisticated websites (single-page navigation, advanced animations, interconnected interactive components, etc.), it comes with a hefty price tag. In particular, the JavaScript payload is extremely large, even with tree shaking, lazy loading, and server-side rendering.
Umbraco This website was built using Umbraco, a CMS great for allowing content editors to easily construct new pages with no developer involvement. The downside is that all the dynamic capabilities of Umbraco can take some time to run, potentially slowing down your pages.
Dynamic Data This website has pages that display data that changes frequently. That makes caching more of a challenge.
Image Slideshow The most important pages across the site include a slideshow of images above the fold. This impacts a number of metrics (e.g., largest contentful paint).
Bulky Markup Many of the pages included large sections of data that bloated the DOM so that there were over 4,000 DOM elements per page. The more DOM elements you have, the longer it takes to paint the page.
Dynamic Query Strings When people land on the site, they often come from ads and other links that incorporate query strings. This makes caching the entire page a challenge (especially with Cloudflare).
GTM Kitchen Sink As is common with many sites, GTM was being used to inject everything and the kitchen sink. This was impacting a number of metrics, especially the time to interactive.
Constant Changes Websites are never really “complete”. New changes are introduced every week via various vectors, such as new content, code changes, and scripts injected via GTM. This made it harder to maintain good page speed over time.
Solutions Covered Elsewhere
Rather than rehash what I’ve already written about, you may want to take a look at this other article that covers a few relevant performance optimizations: Replacing Angular with Plain JavaScript Modules
In particular, this article delves into the following topics (among others):
Replacing Angular with Plain JavaScript This is the number one thing you can do to improve page speed.
Deferring JavaScript You can avoid slowing the page load by only running the JavaScript when it’s needed.
Lazy Load Images and other assets can be loaded some time after the initial page load.
Other Best Practices A few other best practices are mentioned, such as componentizing CSS, inlining critical CSS, removing third party JavaScript, locally hosting assets normally hosted on external servers, and so on.
It’s also worth noting that one recommendation has changed since that article was written: I no longer recommend server pushing assets. In essence, it’s a feature you will not likely be able to use in the future: HTTP/2 Push is Dead
Summary of Solutions
For those of you looking for quick answers, here is a brief summary of solutions to save you the time of having to sift through all the details (we’ll get to those later):
Convert Images to WEBP This one is a no-brainer since WEBP is well supported (except on IE11). Maybe someday we’ll be able to use AVIF, but for now WEBP is your best bet for serving compressed images.
Embed Lower Quality Images If the largest contentful paint element is an image, you can lower the quality initially (making sure to embed the data directly into the markup), then swap it with a better one later.
Embed Essential Images in Head Going a step further than a normal embed, you can store image data in a JavaScript variable in the <head> of the page, then make use of it with a web component. This can help encourage the browser to hold off on its first render until the main image has been downloaded.
Lazy Load Header Just because it’s above the fold doesn’t mean it can’t be lazy loaded (if you take care to lazy load the items that are not initially visible).
Lazy Load Bulky Sections Instead of rendering everything in the initial HTML payload, you can load some portions of the page after the first user interaction.
Preload Late Preloading too early can actually do more harm than good. If you preload at the right time (once most of the essentials have loaded), you get the benefits of preloading without slowing down other resources.
Optimize Cloudflare Cache You can configure Cloudflare so that it serves a cached version of a page even when a dynamic query string is present. Even better, you can still serve different versions of a page for particular query strings. You can also serve a cached version of the page while the cache is refreshed in the background.
Load Dynamic Data with AJAX Loading the dynamic portions of the page using AJAX can greatly improve how quickly the rest of the page can be displayed (e.g., because you can cache it for longer durations).
Conditionally Inject GTM Dependencies Just because GTM is being used to inject everything and the kitchen sink doesn’t mean it has to do so all the time. Based on various conditions, you can inject only the scripts you need.
Page Speed Monitoring We can use tools like DebugBear to monitor changes to page speed over time.
WEBP Images
This one is pretty low-hanging fruit with Umbraco websites since Umbraco 7/8 come with ImageProcessor, and ImageProcessor supports WEBP via the WEBP ImageProcessor Plugin. Umbraco 9, slated to be released shortly after this article is published, incorporates ImageSharp, and ImageSharp will likely support WEBP very soon (see issue and pull request).
Assuming you are using ImageProcessor (and the WEBP plugin) and ImageProcessor.Web, using WEBP is as simple as adding “format=webp” to the query string. You will likely also want to make use of the quality parameter (“quality=60” is a reasonable compression level for many images); for example, /media/hero.jpg?format=webp&quality=60 serves a compressed WEBP version of the image. You could also go a step further and allow the content editor to choose the image quality.
Using WEBP will produce smaller files than JPEG can, and you can still use transparency as you would with PNG images. As a bonus, you can even use lossy compression (i.e., the type of compression that produces the smallest files) in combination with transparency.
Embed Lower Quality Images
For the most critical images, such as those above the fold (most especially the one that is the largest contentful paint element), you can embed them directly into the HTML rather than force the browser to make a second request to download the image separately. Here is a short C# code snippet that shows you how you can convert the image to base64:
using (var client = new WebClient())
{

    // Download the image file.
    var bytes = client.DownloadData(fullUrl);

    // Convert the image to a base64 representation.
    var encoded = Convert.ToBase64String(bytes);
    var imageDataUri = "data:image/" + imageType + ";base64," + encoded;

}
You will want to add some caching, but this gives you the basic code you need to base64 encode the images. Note that the imageType in this case would be the image format, such as “jpeg” or “webp”. Instead of setting the src attribute to the URL, you can set it to the data URI you’ve just generated.
Once everything else has loaded, you can then replace the lower quality image with a higher quality image.
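Here is a minimal sketch of that swap, assuming a hypothetical hero-image element id and a placeholder URL for the full-quality file; it simply replaces the embedded data URI once the window load event fires:

// Minimal sketch: once everything else has loaded, swap the embedded
// low-quality image for the full-quality file. The id and URL below are
// placeholders for illustration.
window.addEventListener('load', function () {
    var heroImage = document.getElementById('hero-image');
    if (heroImage) {
        heroImage.src = '/media/hero-full-quality.webp';
    }
});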
Embed Image Data in Head
This one is a bit tricky. In essence, I noticed the browser was taking a break just prior to painting the largest contentful paint element (i.e., the embedded image), and that was slowing down the LCP (largest contentful paint) metric. I’m not actually sure if my workaround helped since I made several other changes at the same time, but I’ll explain it here just in case you want to try it.
Before I get into the code, I’ll dig into the concepts. The browser handles tasks in phases, and it can repeatedly take breaks from some of these phases to handle others. For example, it can take a break from parsing the HTML to parse JavaScript, or it can take a break from both of those to render the page as best it can given the assets it has downloaded and parsed so far. It would not be strange for it to render the page three times, for example. The key to this workaround is to trick the browser into rendering the page for the first time only after it has already downloaded the main image.
To accomplish this, we move the image data into the <head> of the page. Since it is in the head of the page, you can be sure the browser will not attempt to paint the page yet (since there is not yet a <body> for it to paint). Also in the head, you would include an inline JavaScript snippet that is responsible for creating a web component that will inject the image into the page at exactly the right time. If you are unfamiliar with web components, they essentially allow you to create custom HTML elements that can run JavaScript (among other things).
Here’s the JavaScript snippet I use for the web component that swaps out the image (note that I actually embed the image data into this snippet):
class EmbeddedImageSource extends HTMLElement {
    constructor() {
        super();
    }
    connectedCallback() {
        let el = document.createElement('source');
        el.media = '(max-width: 479px)';
        // Long inline image data removed for brevity.
        el.srcset = 'data:image/webp;base64,...';
        this.parentElement.insertBefore(el, this);
        this.parentElement.removeChild(this);
    }
}
customElements.define('embedded-image-source', EmbeddedImageSource);
Then, later on in the page, I have some markup that looks like this:
<picture>
    <embedded-image-source></embedded-image-source>
    <source media="(max-width: 767px)" srcset="/mobile-image.jpg">
    <source media="(min-width: 768px)" srcset="/desktop-image.jpg">
    <img src="/original-image.jpg">
</picture>
Note that for IE11, the JavaScript will not work (IE11 doesn’t support web components natively). However, that should not be a problem, because it just means IE11 will ignore the <embedded-image-source> element and a larger image will display for that browser (which is fine, since IE11 is not a mobile browser).
For modern browsers, the image data and the web component are defined in the <head> of the page, so they will (hopefully) not attempt to render the page until after the <embedded-image-source> element has been parsed. This ensures the main image gets rendered sooner, thus lowering your LCP (largest contentful paint) and improving your page speed score.
Lazy Load Header
If your website is like most websites, it has a header and in that header is a main navigation section. It is also common that the main navigation has menu items that can be expanded to show further menu items. In my case, these further menu items were quite extensive (hundreds of items). To help improve the FCP (first contentful paint) and LCP, I opted to exclude these hidden menu items from the initial page load. In fact, you don’t even need to load these menu items until the menu is expanded.
There is no real magic to this one: essentially, you just load the contents of the header with AJAX, then construct the header with JavaScript.
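Here is a minimal sketch of that idea, assuming a hypothetical /api/menu endpoint that returns the hidden menu items as an HTML fragment, fetched the first time somebody interacts with the navigation:

// Minimal sketch: fetch the hidden menu items the first time the navigation
// is interacted with. The selector and endpoint are hypothetical.
var nav = document.querySelector('.main-navigation');
nav.addEventListener('click', function () {
    fetch('/api/menu')
        .then(function (response) { return response.text(); })
        .then(function (html) {
            // Inject the menu items that were excluded from the initial payload.
            nav.insertAdjacentHTML('beforeend', html);
        });
}, { once: true });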
In my case, this reduced the size of the initial HTML by about 10KB, which means 10KB more of the page above the fold could be loaded in the same amount of time.
Lazy Load Bulky Sections
Much like lazy loading the header, you can also lazy load areas of the page that contain a large amount of HTML. The benefit of doing this is that the browser can then use the saved bandwidth to download other assets sooner. In my case, this helped to improve the TTI (time to interactive) because the JavaScript (mostly things injected by GTM) was able to run sooner.
Another trick I used to avoid using too much bandwidth was to add a pager (e.g., a “View More” button) to some of the content, so that the additional content is only loaded when a user clicks the button to reveal it.
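Here is a minimal sketch of loading a bulky section after the first user interaction; the endpoint, container id, and the particular events listened for are all placeholders:

// Minimal sketch: fetch a bulky section of markup only after the first user
// interaction. The endpoint and container id are placeholders.
var bulkySectionLoaded = false;
function loadBulkySection() {
    if (bulkySectionLoaded) {
        return;
    }
    bulkySectionLoaded = true;
    fetch('/api/bulky-section')
        .then(function (response) { return response.text(); })
        .then(function (html) {
            document.getElementById('bulky-section').innerHTML = html;
        });
}
['scroll', 'touchstart', 'keydown', 'mousemove'].forEach(function (eventName) {
    window.addEventListener(eventName, loadBulkySection, { once: true, passive: true });
});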
Preload Late
The urge to preload assets makes a lot of sense, as by preloading you can ensure they all load as fast as possible (i.e., before they would naturally be requested). However, it is important to keep in mind that the order in which assets load is just as important as loading them all quickly.
When Google PageSpeed Insights measures page speed, it doesn’t just look at “when the page has finished loading”. Instead, it looks at a number of factors, such as FCP, LCP, TTI, and so on. Depending on the order in which assets are loaded, you can improve some of these without negatively impacting the others.
For example, since I don’t make immediate use of my website’s main JavaScript, I don’t start to preload that until around the time that GTM (and the dependencies that it injects) finishes downloading. That way, GTM will run sooner, and that will improve the TTI.
I also know that visitors on certain pages are likely to visit a specific subsequent page, and so I preload the resources for that subsequent page. However, I do that last so as to ensure the resources for the current page are prioritized.
There is no real magic to this, though you can experiment with what works best. Here are some opportunities you can use to initiate a preload (a short sketch follows the list):
End of Body I put GTM and related dependencies here so they start loading as other items above the fold are finishing downloading.
Window Load Event You can start preloading when all the other resources have downloaded.
Timeout You can set a timer with setTimeout to start preloading a couple seconds after page load.
GTM You can have GTM initiate a preload.
Lazy Loaded JavaScript In my case, I don’t run the JavaScript until a user interacts with the page. It is this JavaScript that starts the preload for the assets required on subsequent pages.
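Here is a minimal sketch showing two of those opportunities (the window load event and a timer); the asset URLs and the two-second delay are placeholders:

// Minimal sketch: inject a <link rel="preload"> element for a given asset.
// The URLs and delay below are placeholders for illustration.
function preload(href, asType) {
    var link = document.createElement('link');
    link.rel = 'preload';
    link.href = href;
    link.as = asType;
    document.head.appendChild(link);
}

window.addEventListener('load', function () {

    // Preload the main JavaScript once everything else has downloaded.
    preload('/scripts/main.js', 'script');

    // A couple of seconds later, preload an asset needed by a likely next page.
    setTimeout(function () {
        preload('/images/next-page-hero.webp', 'image');
    }, 2000);

});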
Optimize Cloudflare Cache
Setting up Cloudflare to cache pages is easy, but setting it up to do it right is harder than it seems. The main stumbling blocks I encountered were:
Query Strings Serving a cached page regardless of a query string, unless specific key query strings are present.
Always Cache Always serving from the cache, even when it has expired.
Bypass Cache Refreshing the cache when changes have been made.
I had a very hard time figuring out the first one until I happened across a feature called “transform rules”: Transform Rules
Using this feature, I was able to transform the request so that the server does not see the unimportant query strings (unimportant in the sense that they should not impact the cached response). It works similarly to a rewrite in IIS, except these rewrites are applied before any of the other Cloudflare infrastructure. That means you can treat /page-one as exactly the same as /page-one?query=string. Here’s essentially how you set this up in Cloudflare:
To ensure you always serve from the cache, you can use the “stale-while-revalidate” cache control header, as shown in this C# example:
Response.AddHeader("cache-control", "public, max-age=1, s-maxage=86400, stale-while-revalidate=604800");
This tells Cloudflare that it can serve stale pages from the cache while it refreshes the cache in the background: max-age=1 keeps browsers from holding onto the page, s-maxage=86400 lets Cloudflare cache it for a day, and stale-while-revalidate=604800 lets it keep serving the stale copy for up to a week while it fetches a fresh one.
For the final bit, I wanted to clear the Cloudflare cache whenever somebody published a page in Umbraco. To achieve that, I hooked into the publish content event, got the page URL, then called this function:
private static void PurgeCacheForUrl(params string[] urls)
{

    // Variables.
    var apiBase = "https://api.cloudflare.com/client/v4/zones/";
    var apiEndpoint = "/purge_cache";
    var apiKey = ConfigurationManager.AppSettings["Cloudflare:ApiKey"];
    var cloudflareEmail = ConfigurationManager.AppSettings["Cloudflare:Email"];
    var id = ConfigurationManager.AppSettings["Cloudflare:ZoneId"];
    var url = apiBase + id + apiEndpoint;

    // Construct request.
    var request = WebRequest.CreateHttp(url);
    request.UserAgent = ".Net Server-Side Client";
    request.Headers.Add("X-Auth-Key", apiKey);
    request.Headers.Add("X-Auth-Email", cloudflareEmail);
    var bodyObject = new
    {
        files = urls
    };
    var body = JsonConvert.SerializeObject(bodyObject);
    var encoding = Encoding.ASCII;
    var bytes = encoding.GetBytes(body);
    request.Method = "POST";
    request.ContentType = "application/json";
    request.ContentLength = bytes.Length;

    // Send request.
    var stream = request.GetRequestStream();
    stream.Write(bytes, 0, bytes.Length);

    // Read in response.
    var response = request.GetResponse() as HttpWebResponse;
    var responseStream = response.GetResponseStream();
    var reader = new StreamReader(responseStream);
    var result = reader.ReadToEnd();

}
With these three approaches, I was able to achieve a nearly 100% cache hit rate.
Load Dynamic Data with AJAX
One downside to caching pages with Cloudflare is that it can lead to showing outdated data. To get around this, I loaded the parts of the page that need fresh data by making AJAX calls to APIs for the newest data, then constructing that portion of the DOM using the data returned.
There are a bunch of ways to construct the DOM once you have the data. I’ll just mention three approaches that are of interest to me (a short sketch of the first follows the list):
Template Literals This is a built-in JavaScript feature that allows you to construct small chunks of markup that have dynamic portions. See: Template Literals
Dompiler This builds on template literals to make them a bit easier to use: Dompiler
Svelte This is a bit like a lightweight version of Angular / React / Vue (you’d only want to use something like this when the DOM you are constructing is fairly complicated): Svelte
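Here is a minimal sketch of the first approach, assuming a hypothetical /api/latest-items endpoint that returns JSON and a placeholder container on the page:

// Minimal sketch: fetch fresh data and render it with a template literal.
// The endpoint, data shape, and selector are placeholders.
fetch('/api/latest-items')
    .then(function (response) { return response.json(); })
    .then(function (items) {
        var markup = items
            .map(function (item) { return `<li>${item.name}: ${item.price}</li>`; })
            .join('');
        document.querySelector('.latest-items').innerHTML = markup;
    });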
By loading in the dynamic portions of the page using JavaScript, the rest of the page can essentially be cached forever.
Conditionally Inject GTM Dependencies
Even with all of the above enhancements, TTI was still suffering. I was able to improve this by conditionally injecting GTM dependencies so that only the necessary assets are loaded for a given page.
One common pattern that I noticed is that we were injecting tracking JavaScript for various ad platforms even if users didn’t arrive on the website via one of those platforms. The idea is that if somebody lands on the site via one of these ad platforms, the corresponding JavaScript for that platform can both attribute that website visit to that ad and also enable that platform to dynamically serve up different ads depending on the ones that turn out to be the most successful (i.e., those that result in the most clicks or conversions).
The problem with this is that you then have a handful of ad platforms each having their JavaScript running for every user on your site. What you can do instead is detect which ad platform a user came from, store that in a cookie, then only inject the JavaScript appropriate to that ad platform. All of this can be done within GTM. How exactly to do this depends on the platform, but here is some pseudocode that demonstrates how it might work for something like Genius Monkey (other examples being Bing and El Toro), with a sketch of the helper functions after it:
if (hasQueryString("utm_source=geniusmonkey") || hasCookie("is-genius-monkey")) {
    rememberGeniusMonkeyInCookie();
    injectGeniusMonkeyJavaScript();
}
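Here is a minimal sketch of what those helper functions might look like; the cookie name matches the pseudocode above, while the script URL is a placeholder:

// Minimal sketch of the helper functions referenced above. The script URL is
// a placeholder; the cookie name matches the pseudocode.
function hasQueryString(pair) {
    return window.location.search.indexOf(pair) !== -1;
}
function hasCookie(name) {
    return document.cookie.split('; ').some(function (cookie) {
        return cookie.indexOf(name + '=') === 0;
    });
}
function rememberGeniusMonkeyInCookie() {
    // Remember the ad platform for thirty days.
    document.cookie = 'is-genius-monkey=true; path=/; max-age=' + (60 * 60 * 24 * 30);
}
function injectGeniusMonkeyJavaScript() {
    var script = document.createElement('script');
    script.async = true;
    script.src = 'https://example.com/genius-monkey.js';
    document.head.appendChild(script);
}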
The only real way to know which scripts injected by GTM are necessary is to understand each of them and research their exact purpose. Once you understand that, you can write some JavaScript to only include them when they’re needed.
Page Speed Monitoring
That might seem like the last piece of the puzzle, but that is only true if the puzzle is ever finished. In my case, I tend to work on websites that go through a vast amount of change over time. If you aren’t careful, your page speed could tank and you would be none the wiser.
To avoid this situation, you could rely on Google’s tools, such as Core Web Vitals in Google Search Console. They will tell you when new issues impact your page speed. The downside to these tools is that the information can be outdated (e.g., it takes about a month for Google to calculate the average page speed).
The way I recommend handling this is to use a service like DebugBear. What DebugBear essentially does is run the same tools as PageSpeed Insights on a frequent basis (the more you pay, the more frequently it runs), and you can set up alerts so that you will be notified if anything changes drastically. You also get graphs of the page speed performance over time (among some other metrics), like this:
These graphs allow you to visualize changes over time. I have found this useful to figure out when problematic changes were introduced, and also to set a baseline for performance so we know what to target.
Once you have improved your website’s page speed and you’ve set up monitoring, you can be confident your website will perform at its best.