As Product Managers, we are obsessed with what we build; after all, we all want to build the best darn product ever. We immerse ourselves in understanding:
- how our users are finding the product and whether our campaigns are working (Reach, Activation),
- how many users we have and whether they are engaging well with the product (Active Users, Engagement), and
- last but not least, whether our users come back (Retention).
The priority of these metrics changes depending on the nature of your app.
Give your product managers complete autonomy and authority to drive campaigns and onboard users; they are the ones best acquainted with the product they are building.
Automation is on the rise. With CI/CD and similar processes, we are able to ship code at a faster rate than ever before. Delivering value to customers at this rate is great, but it is important to focus on quality over quantity.
When we talk about quality, performance is right at the top. It is common practice to check performance before your application goes live, and most organizations do this.
Google’s Lighthouse is widely used for this purpose for web-based consumer applications. Google’s Lighthouse CI integrates with your CI/CD tool and passes or fails builds based on performance rules.
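As a minimal sketch, a `lighthouserc.json` at the repo root tells Lighthouse CI which pages to audit and which budgets to enforce (the URL and the budget numbers here are illustrative; pick your own):

```json
{
  "ci": {
    "collect": {
      "url": ["https://example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```

With this in place, `lhci autorun` in your pipeline fails the build when the performance score drops below 0.9 or LCP exceeds 2.5 seconds.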
Note: You can ignore the SEO numbers/suggestions for your SaaS application.
These tools help ensure quality: they check SEO, accessibility, and best practices, and they measure performance metrics. Performance metrics are important because they describe how your page loads, which directly impacts user experience.
Important Metrics to Track for User Experience
Start Render or First Paint & Largest Contentful Paint (LCP)
I suggest a choice between these two metrics because one of them is easier to capture than the other. You are the best judge of whether the metric's accuracy is sufficient and whether it will work for you.
Start Render is measured by capturing a video of the page load and inspecting each frame for the first time the browser displays something other than a blank page. This is typically done in a lab or with a synthetic monitoring tool like Catchpoint, and it is the most accurate way to measure this moment.
First Paint (FP) is a measurement reported by the browser itself: the time at which it believes it painted the first content. It is fairly accurate, but it sometimes reports the time at which the browser painted nothing but a blank screen.
FP should typically happen under 2 seconds. Imagine the application you are trying to use shows a blank screen for a few seconds before it starts rendering content; this is NOT a good user experience. Showing a blank screen for more than 2 seconds after the user enters the URL can cause page abandonment. You want to tell the user as soon as possible that some activity is happening. This can be as simple as changing the background color, which signals that the application is loading.
By definition, LCP is the time it takes for the largest above-the-fold content to load: for example, the breaking story on a news website. This is an important metric because users typically expect to see something relevant quickly.
Together, FP (or Start Render) and LCP measure the Loading Experience for a user.
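A minimal sketch of capturing both in the browser with the standard PerformanceObserver API. The `rate` helper is my own illustration: the LCP bands follow web.vitals (good up to 2.5 s, poor beyond 4 s), while the FP band simply reuses the 2-second guidance above.

```javascript
// Illustrative thresholds in milliseconds.
const THRESHOLDS = {
  'first-paint': { good: 2000, poor: 4000 },
  'largest-contentful-paint': { good: 2500, poor: 4000 },
};

// Classify a measured value against the bands above.
function rate(metric, ms) {
  const t = THRESHOLDS[metric];
  if (ms <= t.good) return 'good';
  if (ms <= t.poor) return 'needs improvement';
  return 'poor';
}

// In a browser, observe the real entries (this part is a no-op elsewhere).
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  // FP arrives as a 'paint' entry named 'first-paint'.
  new PerformanceObserver((list) => {
    for (const e of list.getEntries()) {
      if (e.name === 'first-paint') {
        console.log('FP:', e.startTime, rate('first-paint', e.startTime));
      }
    }
  }).observe({ type: 'paint', buffered: true });

  // LCP candidates keep arriving; the last one before user input wins.
  new PerformanceObserver((list) => {
    const last = list.getEntries().pop();
    console.log('LCP:', last.startTime,
      rate('largest-contentful-paint', last.startTime));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

Paste this into a page (or DevTools console) to see where your own loading experience lands.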
Time To Interactive (TTI)
According to web.vitals:
The TTI metric measures the time from when the page starts loading to when its main sub-resources have loaded and it is capable of reliably responding to user input quickly.
TTI follows FP. A big gap between the two means your users are waiting for the entire page to finish rendering before they can do anything. An application that paints extremely fast but has a horrible TTI can therefore feel worse than a slower-loading one.
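As a rough sketch, you could watch the FP-to-TTI gap from your own measurements; the one-second budget below is an assumption for illustration, not an official threshold.

```javascript
// Flag a large gap between First Paint and Time To Interactive (both in ms).
// maxGapMs is an illustrative budget; derive your own from your benchmarks.
function interactivityGap(firstPaintMs, ttiMs, maxGapMs = 1000) {
  const gap = ttiMs - firstPaintMs;
  return { gap, ok: gap <= maxGapMs };
}

// A page that paints at 1.2 s but is interactive only at 5 s has a 3.8 s
// gap: users see content long before they can actually click anything.
```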
We tend to build muscle memory for things we use constantly, and this is true for SaaS applications. For most of the tools I use at work, I have built muscle memory: I position my mouse where the link or button will render, and the moment it does, I click it.
Speed Index is one of the metrics tracked by Google’s Lighthouse (or web.vitals) as part of the performance report:
Speed Index measures how quickly content is visually displayed during page load. Lighthouse first captures a video of the page loading in the browser and computes the visual progression between frames. Lighthouse then uses the Speedline Node.js module to generate the Speed Index score.
In simple words, the Speed Index metric tells you the rate at which visible (above-the-fold) content loads. The lower the score, the better the user experience.
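In spirit, the calculation can be sketched like this: given each frame's visual completeness (0 to 1) over time, Speed Index is the area above the completeness curve. This is a simplified step-function version of what the Speedline module computes, not its exact algorithm.

```javascript
// Simplified Speed Index. `frames` is an array of
// { time: ms, completeness: 0..1 }, sorted by time,
// starting at completeness 0 and ending at 1.
// Speed Index = sum over intervals of (1 - completeness so far) * interval.
function speedIndex(frames) {
  let si = 0;
  for (let i = 1; i < frames.length; i++) {
    const interval = frames[i].time - frames[i - 1].time;
    si += (1 - frames[i - 1].completeness) * interval;
  }
  return si;
}

// A page fully painted at 1000 ms in a single jump scores 1000, while one
// that is already 50% painted by 500 ms scores 750 — lower is better,
// because content became visible sooner on average.
```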
Typically, all of these metrics are tracked by your engineering or performance teams; however, it is good practice to keep an eye on them yourself, as they are benchmarked against historical or competitive data. Breaching a benchmark, or any shift in the benchmarks themselves, can have a direct impact on your application's user experience.
If you are curious to know more, here is a great article: Understanding Speed Index | Performance Metrics.
How to track metrics?
You can use a synthetic monitoring tool like Catchpoint. If you are the adventurous kind, you can use Google Puppeteer to run a Lighthouse test that captures the above metrics, and Grafana to show a historical time series of these performance metrics.
As a product manager I track a lot more metrics and have built my entire dashboard on Grafana (more on this in a later post). I have a setup using the Google Puppeteer and Lighthouse libraries that pushes these and other performance metrics provided by Google Lighthouse into my dashboard every 24 hours. This allows me to see my performance numbers alongside other KPIs.
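A minimal sketch of such a setup, assuming the `puppeteer` and a CommonJS-compatible `lighthouse` npm package are installed. The audit IDs in `extractMetrics` are the ones Lighthouse itself reports; the URL and the Grafana push (stubbed as a log) are illustrative.

```javascript
// Pull the headline metrics out of a Lighthouse result (lhr) object.
function extractMetrics(lhr) {
  const pick = (id) => lhr.audits[id].numericValue; // milliseconds
  return {
    firstContentfulPaint: pick('first-contentful-paint'),
    largestContentfulPaint: pick('largest-contentful-paint'),
    timeToInteractive: pick('interactive'),
    speedIndex: pick('speed-index'),
  };
}

// Run Lighthouse against a Puppeteer-launched Chrome.
// Requires: npm install puppeteer lighthouse
async function runAudit(url) {
  const puppeteer = require('puppeteer');
  const lighthouse = require('lighthouse');
  const browser = await puppeteer.launch();
  try {
    // Lighthouse attaches to the same Chrome via its debugging port.
    const { port } = new URL(browser.wsEndpoint());
    const { lhr } = await lighthouse(url, { port, output: 'json' });
    const metrics = extractMetrics(lhr);
    // From here, push `metrics` into the time-series store behind your
    // Grafana dashboard (e.g. Graphite or Prometheus) on a daily schedule.
    console.log(metrics);
    return metrics;
  } finally {
    await browser.close();
  }
}

// Guarded so the sketch only runs when you opt in:
if (process.env.RUN_AUDIT) runAudit('https://example.com/');
```

Scheduling this with cron (or your CI) every 24 hours gives you the same historical time series alongside your other KPIs.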