Client-Side Rendering – Promises / Pitfalls / Performance

In this piece I look at client-side rendering as an architectural choice, rather than comparing this or that framework. I touch briefly on its purpose, walk through a familiar example of how it can go badly wrong, consider architectural alternatives, and ask whether CSR is likely to be a good bet for your next project.

The single page app (SPA) was first described and demonstrated all the way back in 2002. There was even a US patent that year for an SPA implementation. Early big popularisers were Google Maps in 2004 and Twitter’s 2010 redesign. (Remember Twitter? Whatever happened to them?)

In 2011 Facebook deployed React, and two years later they open-sourced it. React is of course perfect for Facebook, with everyone’s infinite-scroll homepage built of multiple data feeds.

From the very beginning, the promise of SPAs, and of CSR in general, was a faster, more ‘live’ page transition. No need to re-load and re-render all the page furniture when only part of it changes.

Today, this is one of a few core appeals of CSR for architects, stakeholders and dev teams:

  1. The promise of faster loading between pages
  2. A single framework for initial rendering and for page elements that respond to feed updates
  3. A unified architecture for web pages and mobile apps, served by the same APIs

On top of these, it’s fashionable. So much so that architects and teams may well ask “which FE framework should we use?” before asking “would our site, our product, our users benefit from an FE framework at all?”

Let’s look at how CSR can serve the user poorly.

FE rendering – the numbers

Here are the memory weights of some familiar pages, all measured using Firefox developer tools on a MacBook Air. These aren’t lab-perfect benchmarks, but the numbers are stark.

  • Wikipedia’s page on LinkedIn: 29.4MB
  • LinkedIn’s page on X (before scrolling): 152MB
  • LinkedIn’s Facebook page (before scrolling): 156MB
  • LinkedIn’s YouTube Channel (before scrolling): 223MB
  • A job page on LinkedIn: 226MB

A page advertising a job, consisting mainly of static text, is heavier than a graphics-heavy infinite-scroll page for browsing videos.

Some other key data for the page on LinkedIn:

  • 13 separate JS requests
  • Dozens of API calls
  • 29 MB of strings (just 13% of the page weight) vs 119MB of uncollected JS objects (53%)
  • 2366 rendered words – that's just 10.5 words per MB

That last point is a shocker! Compare it to the Wikipedia page: 13773 rendered words (6 times as long) in 29.4MB (7½ times less), coming in at 468 words / MB.
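The ratios above can be checked in a couple of lines, using the page weights and word counts quoted in this article:

```javascript
// Words-per-MB efficiency metric, computed from the measurements above.
const pages = [
  { name: "LinkedIn job page", words: 2366, mb: 226 },
  { name: "Wikipedia page on LinkedIn", words: 13773, mb: 29.4 },
];

for (const { name, words, mb } of pages) {
  console.log(`${name}: ${(words / mb).toFixed(1)} words/MB`);
}
```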

At core, these are surprisingly comparable pages. Each represents a static resource, each with personalised feeds for the logged-in user. The LinkedIn page legitimately has more and richer feeds than the Wikipedia page, but not enough to account for a 45-fold difference in efficiency.

The LinkedIn page has a loading spinner. Wikipedia doesn’t need one.

Why does the LinkedIn page perform so poorly? And how could we improve it?

  1. The served HTML, including embedded scripts, iframes and whatever else they have in there, is 1.6MB.
  2. The XHRs for initial page load come to another 2.7 MB
  3. The fully rendered page after processing XHRs is still around 1.6MB – the actual content that we’re coming to the page for doesn’t much change the markup weight

Beyond that, everything else is secondary. The bottom line is that the underlying markup is far too large, and the initial XHR payload is 1.7 times larger again.

There is no way client-side rendering can be quicker than serving markup when the data feeds for the initial load are larger than the markup itself.
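The arithmetic behind that claim is simple. A minimal sketch, using the sizes measured above in kB and treating both approaches as shipping the same HTML shell:

```javascript
// Back-of-envelope first-paint transfer cost (kB, rounded). CSR must ship
// the shell *and* the initial API data before it can render anything.
const markupOnly = 1600;     // kB: a fully server-rendered page
const csrShell = 1600;       // kB: served HTML shell
const csrInitialXhrs = 2700; // kB: XHRs needed for first render

const csrTotal = csrShell + csrInitialXhrs; // before any JS parse/render work

console.log(`SSR first paint: ${markupOnly} kB, CSR first paint: ${csrTotal} kB`);
```

And that comparison flatters CSR: it ignores the JavaScript parse, execution and client-side rendering cost that comes on top of the transfer.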

(An aside: I started this article to discuss the general issue, not to bash LinkedIn. You can find plenty of other examples out there easily enough. But it turns out that the LinkedIn job pages are a special case of awfulness.)

Other observations:

  • Many 0-length XHR responses (in addition to the enormous API responses)
  • A heavily structured, highly architected, extremely verbose XHR data format
  • Some tiny, inefficient response bodies, eg:

    {"value":"7"}

[Image: the title card of a job page on LinkedIn, showing the employer's name and logo, a title and some summary information, with user links for 'Apply' and 'Save', a share icon and a burger menu.]
This card is 9kB of HTML generated from a 147kB JSON data source.
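To illustrate why bodies like {"value":"7"} are so wasteful: every HTTP round trip carries header overhead that dwarfs a 13-byte payload. The 500-byte header figure below is a rough typical estimate, not a measured value, and the batching arithmetic is an illustration only:

```javascript
// Illustration: per-request HTTP overhead vs tiny payloads like {"value":"7"}.
const payloadBytes = JSON.stringify({ value: "7" }).length; // 13 bytes
const headerOverhead = 500; // rough request + response header cost per call
const calls = 40;           // "dozens of API calls"

const separate = calls * (payloadBytes + headerOverhead); // 40 round trips
const batched = headerOverhead + calls * payloadBytes;    // one consolidated response

console.log(`separate: ${separate} bytes, batched: ${batched} bytes`);
```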

In all, it looks like LinkedIn is trying to provide the data for any conceivable variant of the resource and leaning on the user agent (and the user’s unlimited internet connection) to do all the heavy lifting.

It’s not hard to improve on this:

  1. Serve the core page content as rendered markup. Or at least serve the individual cards as rendered markup.
  2. Use JavaScript for what it was designed for: adding behaviour. This includes feeds that will change while the page is open, eg my notifications.
  3. If necessary or desirable, use JS to orchestrate loading cards, personal content (my profile dropdown) etc.
  4. Cache what you can at appropriate levels.
  5. And for goodness sake clean up that 1.6MB initial page load!
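Points 1–3 amount to classic progressive enhancement. A minimal sketch, assuming hypothetical fragment endpoints that return server-rendered HTML partials rather than raw JSON:

```javascript
// Core content arrives as served markup; JavaScript only refreshes the
// regions that genuinely change while the page is open (notifications,
// feeds, the profile dropdown). Endpoint URLs in data-live-fragment are
// hypothetical.
async function hydrateLiveRegions(doc) {
  const regions = doc.querySelectorAll("[data-live-fragment]");
  for (const region of regions) {
    const res = await fetch(region.dataset.liveFragment); // e.g. /fragments/notifications
    if (res.ok) {
      // The server returns a rendered partial, not JSON to template client-side.
      region.innerHTML = await res.text();
    }
  }
}

// Browser-only entry point; safe to import elsewhere.
if (typeof document !== "undefined") {
  document.addEventListener("DOMContentLoaded", () => hydrateLiveRegions(document));
}
```

The key design choice is that the network carries finished HTML fragments, so the client never pays to download, parse and re-template a 147kB JSON blob to produce a 9kB card.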

What is the best use of client-side rendering?

It’s not a question of whether CSR is a valuable tool. Of course it is. In some applications it’s indispensable:

  • Infinite scroll
  • Pages that represent a live data feed
  • Rendering third-party data sources

And it’s important to recognise that CSR can improve responsiveness when users follow links linearly, but the value breaks down in settings where cold page loads become frequent.

 Likely to work well:

  • Environments that don’t support open-in-new-tab (mobile apps, TVs) ✅
  • Mobile browsers (long press required for new tab) ✅
  • Complex workflows or industrial applications ✅

Likely to work less well as cold-load becomes more likely:

  • Pages likely to be bookmarked ❌
  • Pages likely to be shared in emails or social platforms ❌
  • Pages likely to be compared (eg shopping sites) ❌
  • User groups likely to open multiple tabs (I’m talking about us, Devs!) ❌

And of course it’s not necessarily an all-or-nothing decision. Some pages may benefit from CSR enormously, others not. Some situations may justify ongoing use of CSR – turning round historic architecture decisions is rarely straightforward.

 Ideally then, CSR decisions should come down to some key questions:

  • Where in our product will CSR materially benefit the user?
  • Will CSR support our own multi-platform efforts, or might another approach work better?
  • How do we implement it to realise more benefit than cost?

Thank you

#ClientSideRendering #SinglePageApps #Architecture #Performance
