Performance Improvements in an eCommerce Storefront: The Power of Lazy Loading

Alvin Chan

June 10th, 2020

Elastic Path's React PWA Reference Storefront is meant to be generic: any customer can take it and fit it to their specific needs. With that at the forefront of our minds, we wanted to make as many feature-rich experiences and integrations available to our wide range of customers as possible.

However, we soon realized that with so much focus on building features, we had quietly compromised on performance. More specifically, we noticed a large increase in initial load times.

I will outline below our (ongoing) journey to improve performance in our Reference Storefront.

 

PART 1:  The Diagnosis

To help diagnose our initial load times, we periodically run Lighthouse performance tests within Chrome's Developer Tools. Here's what we found:

To our dismay, our Lighthouse scores had dropped significantly over multiple releases. To dive deeper, we took a look at our bundles and realized a huge issue (pun intended).

We had bundles that were over 1 MB in size when compressed! Unzipped, that was a whopping 5.4 MB! Users first hitting our website were forcing their browsers to download over a megabyte of JavaScript and then, once unpacked, to parse and execute 5.4 MB of it before the first screen finished loading.

Here's a visualization of our network waterfall:

 

From this point, it was clear we had to dig deeper and investigate what was taking up so much space. We dove in with the webpack-bundle-analyzer tool to get insight into our two hugely bloated bundles, 2.8ceed606.chunk.js and main.2fd00650.chunk.js.

Here's what we found:

 

From the visual above, we discerned that most of our bloat was coming from third-party integrations, as well as unnecessary dark code (shipped but never executed), all being bundled together and sent to the browser on a user's first uncached visit.

2.8ceed606.chunk.js had become ginormous, as it contained every third-party library we needed. main.2fd00650.chunk.js was equally bad, as it contained ALL of our custom component JavaScript from every edge of our application.

We obviously needed these third-party libraries in order to deliver the kinds of features that would appeal to our customers. So what now?

 

PART 2: The Solution - Code Splitting

After some research, it was clear that code splitting was the de facto method for breaking up heavy web application bundles. At its inception, our application was a simple commerce application with a light product-viewing and checkout flow, and there was no real performance bottleneck: the JavaScript sent to the browser still allowed respectable initial load times, and nothing warranted the added complexity of code splitting. However, through various integrations and feature add-ons, we had grown our initial bundle extensively and reached a point of maturity where the performance benefits of code splitting far outweighed the complexity it would bring.

Luckily for us, the React team had introduced a new and intuitive method for implementing this in late 2018: React.lazy and Suspense.
Here is a blog post written about those code-splitting features shortly after their release.

These feature updates to React made it easier than ever to dynamically import the necessary JavaScript at opportune moments and render it on the client as soon as it was fetched and parsed. Bringing these features in meant we could improve performance while maintaining our feature set, by spreading the task of downloading JavaScript across different points in a user's navigation.
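Under the hood, these features rest on the dynamic import() syntax: webpack emits a separate chunk for each such call, and React.lazy plus Suspense render the component once the returned promise resolves. Here is a minimal, runnable sketch of that contract (Node's built-in "path" module stands in for a component file, purely so the snippet executes anywhere):

```typescript
// Dynamic import() returns a promise; the module's exports are only
// usable after it resolves, i.e. after the chunk has been fetched and
// evaluated. Node's built-in "path" module stands in for a component
// file so this snippet runs anywhere.
export async function loadLazily(): Promise<boolean> {
  const mod = await import("path");
  return typeof mod.join === "function"; // exports exist only post-await
}

// React.lazy wraps the same kind of promise for a component module:
//   const HomePage = React.lazy(() => import("./HomePage"));
//   <Suspense fallback={<Loader />}><HomePage /></Suspense>
// Suspense renders the fallback until the chunk arrives.
```

The key point is that nothing in the deferred module is downloaded, parsed, or executed until the import() call actually runs.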

As a result, we could relieve the amount of upfront processing that took place on a shopper's first visit. We also noticed that code splitting would be even more beneficial for our application in particular, as we had features that could be turned off and on, giving our application the opportunity to not load those segments of JavaScript at all!

Another performance benefit: shoppers who had already visited our Reference Storefront would have cached smaller, more granular bundles. When an inevitable update to a bundle occurred, a shopper's browser would only have to re-fetch the isolated bundle where changes were made, as opposed to a bundle containing the entire application.

I will dive deeper into two areas where we started splitting our bundles.

 

Part 2A: Code Splitting Routes

We began by first thinking about the different types of user journeys that would take place on our site. More specifically the pages that were likely to be first hit.

Our Reference Storefront, being only a reference at this point, did not have any reliable metrics to indicate which pages would really be hit first. We also understood that our customers would have varying repurposed storefronts, with shoppers whose behaviours differ and who ultimately start their buying experience at different points. For example, a storefront repurposed for a department store might see more first visits on a category page, as opposed to a car manufacturer, who might see more first visits on a product details page.

Thus we set out to simply provide an intuitive breakdown of routes that could easily be re-customized, while offering our own preliminary recommendation of which pages a shopper would need on an initial visit.

We reasoned that all initially accessed pages would be ones that could be reached as an unauthenticated public/guest shopper, since authenticated users would have already gone through the site registration process, caching chunks in their browsers along the way. With that in mind, we decided to include the HomePage, the Product Details Page and the Category Page in our initial routes bundle, while dividing the other routes into two sections:

  1. B2C Routes
  2. B2B Routes

For some context, our storefront codebase provides functionality for two sets of shoppers -- B2C and B2B. So we had pages that overlapped and pages that were proprietary to one type of shopper. Once we were able to section out the B2C and B2B routes, we could lazy load only the pages necessary for a given type of shopper: the B2C routes would be dynamically imported if B2C were set in the configuration, and likewise for the B2B routes.

Here is what it looked like in code:

 

This code block runs in App.tsx, a top-level component executed early in app initialization. We check whether our configuration is in B2B or B2C mode; depending on this condition, we dynamically import a React component containing all of the respective routes (either AdditionalB2bRouterContainer or AdditionalB2cRoutesContainer in the above excerpt). The dynamically imported route component is then wrapped in React's lazy function to be suspended within React Router's <Switch>.

In the snippet above, the variable routes contains our base routes with HomePage, ProductDisplayPage, and CategoryPage (imported within our main chunk without any dynamic importing). On the other hand, <AdditionalRoutes /> is populated from an asynchronous fetch of either the B2B or B2C routes, and those routes only become available once the fetch returns.

 

Part 2B: Code Splitting Non-Essential Components and Third Party Libraries

Next up, we needed to slim down the third-party dependencies that were initially being bundled and downloaded, and defer them so they would not block the initial rendering of our webpage; or, in cases where a feature is turned off entirely, not bundle and send that JavaScript to the browser at all.

As an example of this strategy, I will go through how we lazy loaded our B2B barcode scanner. The fundamental patterns and techniques for lazy loading the barcode scanner apply similarly to our other third-party integrations and to sections of dark (unused) or rarely used components.

First, we pinpointed where in the customer journey our website should be loading our barcode scanner component and its related third party dependencies. To figure that out, let's dive into how the barcode scanner is used.

1. The user hits the home screen, and the navigation bar provides a lightning bolt option for quick B2B functionality.

 

 

2. The user clicks the lightning bolt, and a modal appears with all of our options.

 

 

3. The user then clicks on 'Scan Barcode'; the Quagga library takes over, and the device is ready to scan a barcode.

 

Hm... so where should we start downloading the Barcode Scanner component and Quagga?  Obviously, we don't need to download the barcode scanner dependency right when the homepage loads.

We first thought we could start the fetch once the homepage rendered, but decided not to, as the barcode scanner is not a piece of functionality that is used all the time. There was a high chance we would incur the cost of downloading the bundle without the user ever using it.

Our next thought was to kick off the download right when the barcode scanner button is clicked. That would ensure we only downloaded the barcode scanner bundle when it was needed. We rejected that too, realizing it would slow down the barcode scanner's start-up time, as the application would first need to fetch Quagga before it could render.

These were some of the options considered, but we found that perceived performance would be greatest if we started downloading the dependency as soon as the B2B quick modal is displayed (step 2). A fetch at this point in the shopper journey would not inhibit the user's navigation, yet would prepare the necessary JavaScript for the user's (likely) next click onto the BarcodeScanner.

Here's what it looked like in code:

In bulkorder.main.tsx, the parent container of the BarcodeScanner component, we have:

 

We check whether or not to dynamically load the BarcodeScanner based on whether the right-hand modal has been opened, via the state value isBulkModalOpened. If it has, BarcodeScanner is dynamically imported and wrapped with React's lazy function to be rendered with Suspense.

 

The BarcodeScanner component can now be rendered as soon as the dynamic import returns. That's all we needed to change in order to have our BarcodeScanner component dynamically imported right before our user is likely to click.

We repeated this process for every feature and third party integration that wasn't absolutely needed upfront.

 

PART 3: The Results

So where did all this get us?  Here are the scores:

We were able to cut initial load times by more than half! We went from a first contentful paint of 8.9s to 3.9s. To be more specific, that means a shopper who hits our website for the first time will now see their first piece of content in less than half the time.

Here is our updated bundle breakdown. You can see that the singular compressed 1 MB bundle we saw in Part 1 has been split up into various smaller bundles.

 

The split of custom components on the right-hand side of this visual (components in shades of blue) was a direct result of our changes to route initialization in Part 2A. The remainder of the splits came from the continued efforts outlined in Part 2B. Each of these bundles will now be fetched by the browser on an as-needed basis.

Here is a more detailed breakdown of our bundles:

Note: We also did some work around using named exports to enable tree shaking, which slimmed down our bundles further.
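As a minimal illustration of that note (the module and function names here are hypothetical): with named exports, webpack can see statically which bindings are actually imported and drop the rest in a production build.

```typescript
// utils/format.ts (hypothetical module). Because each helper is a named
// export, webpack's tree shaking can drop any helper no file imports.
export const formatPrice = (cents: number): string =>
  `$${(cents / 100).toFixed(2)}`;

export const formatSku = (sku: string): string => sku.trim().toUpperCase();

// A consumer that writes:
//   import { formatPrice } from "./utils/format";
// retains only formatPrice, whereas `import * as utils` (or a default-
// exported object of helpers) keeps the whole module alive in the bundle.
```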

 

You can see from our network waterfall that we no longer send all of our components to the browser at once. Instead, the necessary JavaScript is loaded first (HomePage, ProductDisplayPage, CategoryPage) and then, based on configuration, additional routes with their third-party dependencies are lazy loaded.

 

This has been our first pass at tackling performance. We still have work to do to push these numbers even lower and to find better practices for continuously integrating features while maintaining performance. In upcoming posts I will go into more of our continued efforts to improve performance, with a primary focus on optimizing assets.

