CLIENT SIDE VS SERVER SIDE RENDERING
Anyone responsible for creating and deploying a website that serves the general public should know that the work is not done merely at the completion of the website/web app (unless they are unconcerned with how the page shows up in searches or responds to users’ requests).
This article serves as an introduction to two popular methods of rendering web pages/apps for the end user. We’ll take a look at their rendering processes, use case scenarios for each method, and their impacts on:
A.) User Experience.
B.) Search Engine Rankings.
We’ll also cover topics such as:
1.) Web crawlers.
2.) Compromises and workarounds to optimize for SEO and rendering speeds.
Finally, we’ll round off with recommendations on how to take advantage of the benefits of each rendering method, based on the type of content and the number of users on each website.
Terminology
SEO – Search Engine Optimization, the steps taken towards maximizing the number of visitors to a particular website. These steps are taken to ensure that the site appears high on the list of results returned by a search engine.
DOM – Document Object Model, a tree-like structure of nodes that the browser creates (using the HTML elements from the HTML file it receives) to help it efficiently render and manage the web page throughout its life cycle.
API – Application Programming Interface, the contract through which one program (here, the browser) requests data or services from another (here, a server).
First Page Load Time – Also known as time to First Paint, this is the time between navigation and when the browser renders the first pixels to the screen on a new page.
Second Page Load Time – This is the average load time when navigating from one page to another.
Static Website – A web page that is delivered to the end user exactly as it is stored.
Authoritative – A web page that is seen as important by web crawlers because of the number of links pointing to it, and because it is deemed to have high quality content.
Crawl Budget – The number of pages a web crawler crawls and indexes on a website within a given time frame.
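As a practical aside, browsers expose these paint timings directly; the snippet below is a minimal sketch using the standard Performance API to inspect them from the developer console.

```js
// Logs the browser's paint timings for the current page.
// "first-paint" corresponds to the First Page Load Time defined above.
for (const entry of performance.getEntriesByType("paint")) {
  console.log(entry.name, `${entry.startTime.toFixed(0)} ms`);
}
```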
Prior to JavaScript front-end frameworks (React, Vue.js, Angular, etc.), most websites followed the same formula for rendering: prepare the HTML content on the server, then send that content to the browser upon request. With the advent of these frameworks, a different paradigm for web-page generation emerged, one which involved generating the pages in the browser (the birth of client-side rendering). This approach allowed for an improved user experience, with websites starting to feel like mobile apps in the sense that, once the pages are loaded, navigation between them takes virtually no load time at all. This is not without its drawbacks though, which will be explored when we take an in-depth look at the two rendering methods below.
Client Side Rendering
With front end frameworks came the ability to modularize code into components, use third party packages to extend functionality, and, in some frameworks, write JavaScript directly in the HTML. This made front end web development substantially easier, and as such these frameworks were largely adopted as the industry standard. Client side rendering works as follows:
When the user sends a request to a website:
1.) The browser downloads the HTML, CSS and JavaScript needed to display content
2.) The downloaded JavaScript contains the instructions on which data to fetch, and as such makes an API request to a server for this data.
3.) After this information is received, the browser processes the JavaScript for the whole app and, when it is done, renders a new Document Object Model (DOM) that it uses to present content to end users.
Fig 1: Process flow for client side rendering
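To make this concrete, here is a minimal sketch of a client-side rendered React app. The /api/posts endpoint and the Posts component are hypothetical names used purely for illustration; note how the HTML shell the server ships contains no content at all.

```jsx
// index.html shell served to the browser -- note there is no content yet:
// <html><body><div id="root"></div><script src="bundle.js"></script></body></html>

import React, { useEffect, useState } from "react";
import { createRoot } from "react-dom/client";

// Hypothetical component: it fetches its own data after the bundle loads (step 2),
// then React builds the DOM from the response (step 3).
function Posts() {
  const [posts, setPosts] = useState(null);

  useEffect(() => {
    fetch("/api/posts")            // API request made from the browser
      .then((res) => res.json())
      .then(setPosts);
  }, []);

  if (!posts) return <p>Loading…</p>; // what the user (and a crawler) sees first
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}

createRoot(document.getElementById("root")).render(<Posts />);
```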
Impact on User Experience
It was mentioned earlier that user experience greatly improves because all of the website’s pages are downloaded in the first render, so navigation between pages is smooth, and there’s virtually no load time moving from one page to another (second page load time). There is, however, a caveat to this…
If you take a look at the diagram above, you may notice that most of the activity is occurring on the right hand side. The user makes a request, then the browser downloads the HTML, then it downloads the JavaScript. If it is a static website, where information is the same regardless of user, the loading may come to an end here; otherwise the JavaScript proceeds to send API requests to the server for specific data, and then the browser has to create the DOM before finally rendering to the user’s screen. What is the end user doing during all this time, you may ask? Staring at a blank white screen (or a loading icon)! This cannot be great for user experience, particularly on a slow internet connection. Studies show that if first page load time goes from 1 second to 3 seconds, the probability of users leaving your site goes up by 32%, and if it goes up by a further 2 seconds (to 5 seconds), the probability shoots up to 90% (research by Hobo Consultancy). With regular client side rendering, first page load times tend to be long, while second page load times are extremely short.
Impact on Search Engine Rankings.
“95% of web traffic goes to sites on Page 1 of Google” – brafton.com. It thus follows that our website should be as accessible and “crawlable” as possible to search engines. Search Engine Optimization (SEO) takes into account factors such as:
1.) Your URL’s accessibility and how recently it was updated
2.) Quality of content
3.) Inbound links (links coming from another website to your own)
4.) Page load time
5.) Meta tags and appropriate HTML semantics (which give the web page content meaning to web crawlers and allow for better interpretation and indexing)
Our focus here will be on how client side rendering affects website design factors (and not the actual web content) such as URL accessibility and page load time, and how that ultimately contributes to search engine ranking. Before diving in, however, you might have noticed terms like “crawlable” and “indexing” seemingly pop up out of nowhere. Let’s take a quick aside on web crawlers and indexers to clarify these terms and understand their role in how search engines rank websites.
Web Crawlers — Guardians of the Search Engine.
A web crawler is an internet bot (a software application designed to automate tasks) that systematically browses the internet based on a list of URLs it receives. It then parses these sites to find other URLs (both internal and external). When it reaches your site, it attempts to parse it line by line, prioritizing pages that seem more “authoritative” (pages that have lots of inbound links, or heavily searched keywords in their meta tags). Web crawlers also have a crawl budget, after which the inspection of your site ends. They may not parse your site entirely because of this budget, and will rely primarily on the meta tags on each page to glean information.
To get the most out of web crawlers, it is in a website’s best interest to load as quickly as possible, to make the most of the crawl budget. In addition, the more links that point to a particular page, the higher it is likely to rank. As inbound links are mostly out of our control (other content creators must find the content informative before they link to it), it is crucial for pages on the website to link to one another to improve rankings.
Search Engine Issue No 1 – One URL to rule them all.
Now, with apps rendered on the client side, the entirety of the app is represented by one URL, as the app is rendered as one page, with different components representing different page views. When a different view is loaded, what may feel like a page change does not cause the URL to change for web crawlers, and so what people deem as important sections/pages in your website are indistinguishable from less important ones, leading to an inaccurate search engine ranking (not to mention the fact that you can’t share a particular section of the website with a friend, which may detract from the user experience). Meta tags cannot be specified for each page view either, and so SEO rankings suffer from this inaccuracy as well.
Search Engine Issue No 2 – Difficulty in Processing.
With regular HTML files, the web crawler is able to go through all the links on a page, assess the relevance of its information, and pass it on to an indexer, ready to be called up when someone searches for it. With client side rendering, it is the JavaScript written in these front end frameworks that ultimately compiles the HTML for the browser to process. So when a web crawler arrives at our site before the JavaScript has compiled the HTML, the crawler meets an empty web page and moves on to the next URL in its list, leaving out our website entirely, meaning we don’t get ranked! Well, it’s not quite that black and white: Google, which has the majority share of the search engine market, has a web crawler that can indeed process JavaScript, while its competitors’ web crawlers struggle (see the diagram below). This is still less than ideal, as the website needs to show up on as many search engines as possible.
Fig 2: Various web crawlers’ ability to index websites made with front end frameworks. (www.moz.com/blog)
Even with Google’s ability to crawl and index JavaScript content, Google still advises using some sort of server side rendering for Search Engine Optimization, due to the more resource intensive nature of processing a JavaScript website. When their web crawlers come across a client rendered site, they initially crawl any static HTML they can find in the first encounter, hand out a preliminary ranking (which will likely be lower than if the website had been fully rendered), and then place the website’s URL in a rendering queue, moving on to other websites. They return to this particular website when there are resources available to process it (the time of availability being unknown), then analyze and re-rank the website. This may not sit well with regularly updated websites like blogs.
Fig 3: A web crawler’s process flow for crawling and indexing a page for Google. (screamingfrog.co.uk)
When to use Client Side Rendering
Client side rendering lends itself to more resource intensive web apps, and is thus the preference when:
1.) The website requires a lot of user interaction, e.g. data entry or report generation, or the website’s writes outnumber its reads.
2.) The website has a complex user interface spanning many pages.
3.) The website/web app serves dynamic data to numerous users.
Server Side Rendering
When a user sends a request to a website, the following process is kicked off:
1.) The server checks the requested route and any parameters on it, and proceeds to compile the HTML page (based on the server side logic for that route).
2.) The compiled page is then sent to the client’s browser for display.
3.) The browser downloads the HTML and displays the page to the user, then fetches the resources it links to (CSS files, images and JavaScript files) and makes the page interactive.
Fig 4: Process flow for server side rendering
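A minimal sketch of this flow, using Express and React’s renderToString; the Posts component and the in-memory data source are hypothetical stand-ins for real application code.

```jsx
import express from "express";
import React from "react";
import { renderToString } from "react-dom/server";

// Hypothetical in-memory data source standing in for a real database.
const fetchPosts = async () => [{ id: 1, title: "Hello, SSR" }];

// On the server the component receives its data as props,
// so the HTML it produces is already fully populated.
function Posts({ posts }) {
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}

const app = express();

app.get("/posts", async (req, res) => {
  // Step 1: the server gathers the data and compiles the HTML for this route.
  const posts = await fetchPosts();
  const markup = renderToString(<Posts posts={posts} />);

  // Step 2: a complete, crawlable HTML document is sent to the browser.
  res.send(`<!doctype html>
    <html>
      <body>
        <div id="root">${markup}</div>
        <script src="/bundle.js"></script> <!-- step 3: script for interactivity -->
      </body>
    </html>`);
});

app.listen(3000);
```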
Now, with server side rendering, an HTML file is rendered to the screens of end users before the rest of the files are requested, and so the user sees content on the screen in a far shorter time than with client side rendering (its first page load time is far shorter). So how does this affect user experience and search engine ranking?
User Experience
As mentioned earlier, an HTML file is loaded on the user’s screen even before all data is fully received from servers. End users can already begin to read information on the screen, and this partial interaction can be crucial for some websites, such as landing pages, where relaying information before users leave is paramount to the web page’s success. Subsequent page loads take the same load time as the first page, as each page is requested anew from the server, and, unlike with client side rendering, they depend on a good network connection to load (its first page load time is equivalent to its second page load time).
Search Engine Rankings.
Optimizing for search engines is rather straightforward for websites rendered on the server. As the server pushes only completely rendered HTML pages to browsers, web crawlers have no issues or waiting time when it comes to getting keywords and hyperlinks to crawl. Page metadata is always accurate and the links on each page point to the appropriate pages, so it is easy to get an accurate ranking courtesy of this rendering method.
When to use Server Side Rendering.
Server side rendering tends to be more intensive on the server’s resources, so the following use cases apply:
1.) If the app has a simple user interface with a few pages
2.) If an app has more static pages than dynamic pages
3.) If the read preference is more than the write preference
4.) If your app’s first page needs to load up as soon as possible, e.g. advertisement websites.
Satisfying All Requirements?
We’ve seen the benefits and drawbacks of both server side and client side rendering, but is there a way to, say, get our website’s first page load time to be as fast as if it were rendered on the server, and then have subsequent pages load as if they were rendered client side? Can we get the best of both worlds? Well, it turns out that there are some workarounds that allow us to eke out some extra speed or improved search engine optimization.
Appending For SEO
Using additional code libraries such as React Router, Ember Router, etc. (routing libraries for the respective front end frameworks), it is possible to present a different component for each route selected. The router links associated with each component can be referred to from other components of the website, and serve as links for web crawlers to collect and move to once they are done parsing their current page. This, in essence, allows crawlers to register the different components of a client-side rendered app as separate web pages with different URLs and links.
Coupled with libraries such as React Helmet (a React library for managing meta tags), the web crawler will be able to see meta information and URL links for each page of our website, accounting for its SEO shortcomings as compared to server side rendering. This does not solve our problem of long initial load times, however… enter dynamic rendering.
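A sketch of this combination, using React Router (v6) and React Helmet; the route paths, titles and component names are illustrative.

```jsx
import React from "react";
import { BrowserRouter, Routes, Route, Link } from "react-router-dom";
import { Helmet } from "react-helmet";

// Each view now has its own URL and its own meta tags for crawlers to index.
function About() {
  return (
    <>
      <Helmet>
        <title>About Us | Example Site</title>
        <meta name="description" content="Who we are and what we do." />
      </Helmet>
      <h1>About Us</h1>
    </>
  );
}

function Home() {
  return (
    <>
      <Helmet>
        <title>Home | Example Site</title>
      </Helmet>
      {/* Internal links like this one are exactly what crawlers follow. */}
      <Link to="/about">About us</Link>
    </>
  );
}

export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/about" element={<About />} />
      </Routes>
    </BrowserRouter>
  );
}
```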
Dynamic Rendering/ Pre-Rendering
Dynamic rendering is a method of creating a static HTML version of your existing client-side rendered app specifically for web crawlers. This solves the issue of long wait times for web crawlers, because they are served the static HTML version of the website, which allows them to easily parse the information they need for ranking. This technique is particularly pertinent for solving the SEO problems of JavaScript heavy websites that are constantly changing.
Fig 5: Dynamic Rendering Illustrated. (botify.com)
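One common way to implement this is a small server-side check on the user agent: suspected crawlers get a pre-rendered snapshot (produced here with Puppeteer, though a hosted rendering service works too), while ordinary users get the normal client-side bundle. This is a minimal sketch; the bot list, port numbers and build directory are illustrative.

```js
import express from "express";
import puppeteer from "puppeteer";

const BOT_PATTERN = /googlebot|bingbot|yandex|duckduckbot|baiduspider/i;

const app = express();

// Render the requested page in a headless browser and return the final HTML,
// i.e. the DOM after the client-side JavaScript has run.
async function renderToStaticHtml(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" }); // wait for JS/data fetches
  const html = await page.content();
  await browser.close();
  return html;
}

app.get("*", async (req, res) => {
  if (BOT_PATTERN.test(req.headers["user-agent"] || "")) {
    // Crawler: serve the fully rendered snapshot, so there is nothing to wait for.
    const html = await renderToStaticHtml(`http://localhost:3000${req.originalUrl}`);
    return res.send(html);
  }
  // Regular user: serve the normal client-side rendered app shell.
  res.sendFile("index.html", { root: "build" });
});

app.listen(8080);
```

Note that the snapshot has the same content as what users eventually see, which is what keeps this on the right side of the cloaking rules discussed below.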
NOTE: There is a technique called cloaking, in which the content served to users is entirely different from what is served to web crawlers, with the aim of artificially inflating rankings or pushing information disguised under another (usually more popular) topic. Search engines do not take kindly to this, and it can result in your website being banned from that search engine permanently. You might be thinking at this point, “that sounds awfully similar to dynamic rendering; after all, we are serving the web crawlers different HTML pages to what the clients will see.” Now, in the process of dynamic rendering, you CAN cloak (for instance, by stuffing a mess of unreadable keywords on the page targeted at web crawlers), but you can rest assured that as long as you are not intentionally doing this, and the HTML pages served to the web crawler have the same content as what is being served to end users, your website will not be flagged for cloaking.
Pre-rendering is very similar to dynamic rendering, with the slight difference that the static HTML version of the page is served not just to web crawlers, but also to end users (the pages to be served are chosen beforehand and are merely a template of the fully loaded page, without user-specific data).
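A build-time sketch of pre-rendering: snapshot a list of routes chosen ahead of time and write them out as static files. The route list, local dev server URL and output directory are illustrative.

```js
import { mkdir, writeFile } from "node:fs/promises";
import path from "node:path";
import puppeteer from "puppeteer";

// Routes chosen beforehand -- these become plain HTML files served to everyone.
const ROUTES = ["/", "/about", "/pricing"];

const browser = await puppeteer.launch();
const page = await browser.newPage();

for (const route of ROUTES) {
  // Load the client-rendered app and capture the DOM once the JS settles.
  await page.goto(`http://localhost:3000${route}`, { waitUntil: "networkidle0" });
  const html = await page.content();

  const outDir = path.join("dist", route);
  await mkdir(outDir, { recursive: true });
  await writeFile(path.join(outDir, "index.html"), html); // template, no user data
}

await browser.close();
```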
Looking to the Future
Although server side rendering was pushed to the background in favour of client side rendering, it is being brought to the limelight once more, because the industry is trending towards providing end users a great user experience from start to finish. Technologies such as React’s renderToString method (which renders components to ready-made HTML on the server, so the browser receives content immediately) are on the rise. There are also newer technologies yet to be fully adopted as industry standard, like React Server Components, which render on the server on a component by component basis, granting more control over resource use.
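The client half of that hand-off is hydration: the browser receives the server-rendered HTML and React attaches event handlers to it rather than rebuilding it from scratch. A minimal sketch, assuming the server rendered markup for this same App component into #root:

```jsx
import React from "react";
import { hydrateRoot } from "react-dom/client";
import App from "./App"; // the same component tree the server rendered

// Instead of createRoot().render(), hydrateRoot reuses the existing
// server-sent HTML inside #root and only wires up interactivity.
hydrateRoot(document.getElementById("root"), <App />);
```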
Though it is trickier to render on the server (because you are responsible for code transpiling, bundling and minifying), it is well worth considering because of the performance increase, which may ultimately lead to improved customer retention. There are also frameworks like Next.js (which allows React.js pages to be rendered from the server) that help to abstract away some of these complexities. We continue to wait for these technologies to mature and see the definite direction that server side rendering will take in the future.
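As an illustration of how such frameworks hide the plumbing, a Next.js page can opt into per-request server rendering with a single exported function. This uses the classic pages-router API; the page name and API URL are hypothetical.

```jsx
// pages/posts.js -- a hypothetical Next.js page rendered on the server per request.
export async function getServerSideProps() {
  // Runs on the server only; Next.js handles bundling, routing and hydration.
  const res = await fetch("https://api.example.com/posts"); // hypothetical API
  const posts = await res.json();
  return { props: { posts } };
}

export default function Posts({ posts }) {
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```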