How often have you come across web applications and websites that run so slowly it’s almost painful? You either get yourself a cup of coffee to sweeten the wait, or you lose interest in the content altogether. I’d bet most of us fall into the second group. The problem is becoming more and more common, and it doesn’t take a software developer’s skills to recognize it.
This small detail results in the loss of a potential customer and, with that, a loss for the business. It’s obvious that the faster the loading time, the better the conversion rate. Web application performance has a concrete business value. Dynatrace showed that when page load time decreases from 8 to 2 seconds, the conversion rate increases by 74%. Another study, conducted by the Aberdeen Group, demonstrated that a 1-second delay resulted in 11% fewer page views and a 7% loss in conversion rate. Amazon saw a 1% revenue increase for every 100 ms improvement. Worst case scenario: the bad website experience gets discussed on social media, or the site ends up on pages that list the worst of the worst (see Web Pages That Suck).
What, then, can be done to improve web application performance? There are several tools and methods available. In the end, your choice of tool doesn’t matter; the result should just work!
Want to know how Espeo tackles the problem?
Here are some tips from our dev, Filip.
First, we need to find the bottleneck. To determine what is running slowly, we can use a tool that everybody has: DevTools in Google Chrome. The most important part for us is the Network panel, which records every network operation made on a page, including detailed timing data, HTTP request and response headers, cookies, and more.
Here are some guidelines that define an efficient page:
- < 200 ms time to first byte,
- < 500 ms to render above-the-fold content,
- < 2000 ms for a complete page load.
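The budgets above can be expressed as a simple check against the numbers you read off the DevTools Network panel. This is an illustrative sketch; the function and budget names are my own, not a standard API.

```javascript
// Performance budgets from the guidelines above, in milliseconds.
const BUDGETS = {
  ttfb: 200,      // time to first byte
  aboveFold: 500, // render above-the-fold content
  fullLoad: 2000, // complete page load
};

// Return the names of the budgets that the measured timings exceed.
function checkBudgets(timings) {
  return Object.keys(BUDGETS).filter(
    (name) => timings[name] > BUDGETS[name]
  );
}
```

For example, a page measured at 350 ms TTFB and a 2.5 s full load would come back flagged on both counts, telling you whether to start on the server side or the client side.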
The next step depends on what is too slow in our particular case. In general, there are two key areas for improvement. The first is Time To First Byte (TTFB) optimization, done on the server side. The second is client-side processing: loading and rendering of content.
TTFB reflects how much work the server needs to do. Retrieving a lot of data from the database or generating content dynamically can increase response time. If possible, we should use caching and minimize the amount of work done on the server. We need to check that the database server isn’t overloaded, and review the SQL queries and indexes in the database. A good option is to use an additional NoSQL store for key-value data, since key-value lookups are typically faster than SQL queries. If possible, we should also use a cache to store static results for some time. Hardware is important too. We need to be sure that our web server has enough resources (CPU, RAM) to handle the traffic. Sometimes there’s a need for a load balancer and additional machines.
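The caching idea above can be sketched as a tiny in-memory cache with a time-to-live. The class name and API here are my own invention for illustration; a production setup would more likely use a dedicated cache such as Redis or Memcached.

```javascript
// Minimal in-memory TTL cache: stores each value with an expiry timestamp
// and only recomputes it once the entry has gone stale.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  // Return the cached value for `key`, or call `compute()` and cache the result.
  get(key, compute) {
    const entry = this.entries.get(key);
    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // cache hit: no expensive work done
    }
    const value = compute(); // cache miss: e.g. a slow SQL query runs here
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

Wrapping a slow database query in `cache.get()` means a second request within the TTL is answered from memory without touching the database, which directly shaves the TTFB.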
Client-side performance is about minimizing the payload (image sizes, stylesheet and JS minification, etc.) and the number of requests. It’s important to keep JavaScript from blocking rendering by moving scripts to the end of the page source (or loading them with defer or async).
Tools
There are several tools that help investigate the health of our web app or site, such as Google PageSpeed Insights or Yahoo! YSlow, which point out where there is room for improvement. When we have a lot of images or other static resources, we can use a CDN (Content Delivery Network) to improve load times. Images should be optimized to minimize the number of bytes that need to be downloaded.
It’s all about response time, but what if the website or application is down? To prevent disaster, consider using a monitoring service that periodically checks website availability. Some helpful tools are Google Analytics, which measures website response times and charts them; Hyperspin or UptimeRobot, which monitor your app and alert you if it becomes unreachable; and New Relic, which analyzes your hardware resources.
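At its core, a monitoring check just probes the site and classifies the result. The sketch below assumes a simple three-way classification and a 2-second slowness threshold, both of which are my own choices; real services like UptimeRobot add scheduling, retries, and alerting on top.

```javascript
// Classify a single availability probe. A monitor would call this
// periodically with the HTTP status and response time of each check.
function classifyProbe(statusCode, elapsedMs, slowThresholdMs = 2000) {
  if (statusCode === null || statusCode >= 500) return 'down';
  if (elapsedMs > slowThresholdMs) return 'slow';
  return 'up';
}

// Illustrative probe using Node's built-in fetch (Node 18+);
// network errors are mapped to a null status code.
async function probe(url) {
  const start = Date.now();
  try {
    const res = await fetch(url);
    return classifyProbe(res.status, Date.now() - start);
  } catch {
    return classifyProbe(null, Date.now() - start);
  }
}
```

Running `probe()` from a cron job or interval timer, and alerting whenever it returns anything other than 'up', gives a bare-bones version of what the hosted services above provide.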
In business, measuring is key. It’s about picking the metrics that count, like customer satisfaction and performance. “It is a capital mistake to theorize before one has data,” as Sherlock Holmes said. Indubitably, this also applies to web application performance. It’s essential to roll up your sleeves, dig into the data collected from the website, and make adjustments from there. Again, it should just work!