Applying a ‘Time-To-Market’ KPI in product

Gabriel Dan on Mind the Product:

It’s a KPI, used mostly by the business, to measure the time required to move a product or service from conception to market (until it is available to be purchased). The process is the combined effort of all stakeholders: product management, marketing, and so on. It includes the workflow steps and strategies involved throughout the process. It’s usually calculated in days/weeks/months/years, but it can be measured in other forms too, depending on how different organizations want to implement it.
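
To make the calculation concrete, here is a minimal sketch (in Python, with hypothetical dates) of TTM expressed in days, following the conception-to-availability definition above:

    from datetime import date

    def time_to_market_days(conceived_on: date, available_on: date) -> int:
        """Days from conception to the product being available for purchase."""
        return (available_on - conceived_on).days

    # Hypothetical initiative: conceived mid-January, generally available in June.
    print(time_to_market_days(date(2021, 1, 15), date(2021, 6, 1)))  # 137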

This is simple yet important, especially when you are constantly racing your competition to market to capture your audience.

The shorter the time to market, the quicker the return on investment (ROI) can be realized, so you can imagine why it’s important to businesses.

The quicker the product gets to market, the bigger the market share the company can capture, especially in an unaddressed segment with less competition and therefore better profit margins. Getting fresh, relevant products to market quickly attracts customers.

Exactly. It is very common to fall into the trap of doing more before releasing to the market. The TTM metric forces you to be frugal about your MVP.

Gabriel Dan does a great job setting the premise and goes on to explain how TTM should be calculated. Highly recommended.

Avoid feature bloat and deliver your product strategy

Even the best products, including your favorite one, can suffer from feature bloat. In my last article, Measuring Feature Adoption, I talked about measuring feature adoption; tracking it will also help you avoid feature bloat.

A product filled with non-performing features causes

1. accumulation of technical debt,
2. increased maintenance costs,

leading to lower customer satisfaction (a lower NPS score) and a lack of market differentiation.

Features, too, have an iceberg problem: they may seem small but turn out to have huge costs. This can happen when you decide to ship a feature for a specific customer or use case instead of shipping new products.

Avoiding feature bloat

One way to avoid feature bloat is to take a strategic approach to the bigger picture. Focus on outcomes instead of output.

Avoid focusing on shipping features. Product teams should focus on the number of problems solved and the positive impact on their customers – this directly results in a better NPS score.

Focusing on outcomes allows you to gather feedback from customers and talk to them more often to test your hypothesis for a new feature. Tracking that feature then allows you to decide whether to iterate on it or pull it out.

Finally, doing a feature audit will also help you understand how features are being used. Looking at your feature adoption and usage metrics will allow you to decide whether to kill an underperforming feature or work on increasing its adoption.

Delivering a Product Strategy

As your product evolves and matures, typically in the growth phase, you attempt to serve everyone. In this phase, products can be disrupted by one big customer’s specific needs or by smaller niche markets.

Segmenting your product based on customer needs, jobs, and personas will allow you to bundle your product into different tiers at different price points and avoid feature bloat.

Measuring Feature Adoption

When it comes to SaaS products, product managers typically have product adoption as one of their top KPIs to track, especially when launching a new product. The logic is pretty simple: improving product adoption means higher retention, lower churn, and more revenue.

However, once a product is launched, we continue to release new features to keep that product adoption going, but we often fail to realize that we also need to track feature adoption.

Both of these key metrics indicate how well your product is received by your customers. The product adoption metric tells you the percentage of active users, while the feature adoption metric tells you why people continue to engage with your product.

So when you launch a new feature, looking at this individual metric will tell you whether it’s driving your overall product adoption up or not.

Feature adoption is measured in percentage (%) as:
(# of users of a feature / total # of active users) x 100 = feature adoption %
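
As a minimal sketch, here is the same formula in Python; the counts would come from your analytics tool, and the names here are hypothetical:

    def feature_adoption_pct(feature_users: int, active_users: int) -> float:
        """(# of users of a feature / total # of active users) x 100"""
        if active_users == 0:
            return 0.0
        return feature_users / active_users * 100

    print(feature_adoption_pct(feature_users=340, active_users=2000))  # 17.0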

Getting insight into which features users find the most valuable will also inform your team on how to position your product, as well as help you with any product decisions you make.

New User Onboarding & Time to Value

Customers (or users) generally hire software or a service to solve a problem, expecting immediate gratification (or value).

When customers first sign up for your product, they will either get what they are looking for; or they won’t…

SaaS applications are a bit like IKEA furniture: unless you assemble the pieces, you won’t experience the value. SaaS customers must wait to experience the value of the product, and it’s this delay that makes churn a common theme among SaaS products.

This delay is also referred to as Time to Value (TtV), i.e. the amount of time it takes a NEW user to realize the product’s value.

User onboarding is important because you want these users to realize that the product they hired is solving their problems. Product managers should focus on reducing TtV and driving new users to become active users.

The longer your time to value, the higher your customer churn; users have very little patience. The key is to focus on optimizing your new user onboarding experience and on the key actions that correlate with activation, typically an action that provides value.
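
Here is a rough sketch of how TtV could be measured, assuming you can export each new user's signup time and the timestamp of their first key (value-delivering) action; the data below is made up:

    from datetime import datetime
    from statistics import median

    def time_to_value_hours(signed_up_at: datetime, first_key_action_at: datetime) -> float:
        return (first_key_action_at - signed_up_at).total_seconds() / 3600

    # Hypothetical cohort: (signup, first key action) pairs.
    cohort = [
        (datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 10, 30)),
        (datetime(2021, 3, 1, 11, 0), datetime(2021, 3, 2, 11, 0)),
        (datetime(2021, 3, 2, 8, 0), datetime(2021, 3, 2, 8, 45)),
    ]
    print(f"median TtV: {median(time_to_value_hours(s, a) for s, a in cohort):.1f} hours")  # 1.5 hours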

It is important to have continuous onboarding for existing users as you introduce new features and products. TtV can also help you move already active users to engaged users, where the cadence of valuable tasks performed is higher. This will help drive adoption of your SaaS product.

User Experience (UX) Metrics for Product Managers

As Product Managers we are obsessed with what we build. Well, we all want to build the best darn product ever. We immerse ourselves in understanding

  1. how our users are finding the product and whether our campaigns are working (Reach, Activation),
  2. how many users we have and whether they are engaging well with the product (Active Users, Engagement), and
  3. last but not least, whether your users come back (Retention).

The priority of these metrics changes depending on the nature of your app.

Give your product managers complete autonomy and authority to drive campaigns and onboard users. They know the product they are building better than anyone.

Automation is on the rise. With CI/CD and similar processes, we are able to ship code at a faster rate than ever before. Delivering value to customers at this rate is great, but it is important to focus on quality over quantity.

When we talk about quality, performance is right up top. It is common practice to check performance before your application goes live, and most organizations do this.
Google’s Lighthouse is widely used for this purpose for web-based consumer applications. Google’s Lighthouse CI integrates with your CI/CD tool and passes or fails a build based on performance rules.

Note: You can ignore the SEO numbers/suggestions for your SaaS application.

These tools help ensure quality: they check SEO, accessibility, and best practices, and they measure performance metrics. Performance metrics are important for understanding how your page loads, as this impacts user experience.

Important Metrics to track User Experience

Start Render or First Paint & Largest Contentful Paint (LCP)

The reason I suggest choosing between two metrics here is that one of them is easier to capture than the other. You are the best judge of how accurate the metric needs to be and whether it will work for you.

Start Render is measured by capturing a video of the page load and looking at each frame for the first time the browser displays something other than a blank page. It is typically measured in a lab or with a synthetic monitoring tool like Catchpoint, and it is the most accurate measurement of the two.

First Paint (FP) is a measurement reported by the browser itself; that is, when it thinks it painted the first content. It is fairly accurate but sometimes it reports the time when the browser painted nothing but a blank screen.

Largest Contentful Paint (LCP) is yet another metric from Google as they continue their foray into web performance metrics; it is part of their Web Vitals offering.

FP should typically happen under 2 seconds. Imagine the application you are trying to use shows a blank screen for a few seconds before it starts rendering content. This is NOT a good user experience. Showing a blank screen for more than 2 seconds after the user enters the URL can cause page abandonment. You want to tell the user as soon as possible that some activity is happening. This could be as simple as changing the background color to signal that the application is loading.

By definition, LCP is the time it takes for the largest above-the-fold content to load, for example the breaking story on a news website. This is an important metric because users typically expect to see something relevant quickly.

Together, FP (or Start Render) and LCP measure the loading experience for a user.

Time To Interactive (TTI)

According to Google’s Web Vitals documentation:

TTI metric measures the time from when the page starts loading to when its main sub-resources have loaded and it is capable of reliably responding to user input quickly.

TTI follows FP. A big gap between the two means your users are waiting until the entire page finishes rendering before they can interact. In other words, an extremely fast-loading web application with a horrible TTI can feel worse than a slower application.

We tend to build muscle memory for things we use constantly, and this is true for SaaS applications. For most of the tools I use at work, I have built up muscle memory: I position my mouse where the link or button will render, and the moment it does, I click it. If the page looks ready but isn’t interactive yet, that click does nothing, which is exactly the frustration a poor TTI creates.

Speed Index

Speed Index is one of the metrics tracked by Google’s Lighthouse (and Web Vitals) as part of the performance report:

Speed Index measures how quickly content is visually displayed during page load. Lighthouse first captures a video of the page loading in the browser and computes the visual progression between frames. Lighthouse then uses the Speedline Node.js module to generate the Speed Index score.

In simple words, the Speed Index metric tells you the rate at which visible (above-the-fold) content loads. The lower the score, the better the user experience.

Typically, all of these metrics should be tracked by your engineering or performance teams; however, it is good practice to keep an eye on them yourself, as they are usually benchmarked against historical or competitive data. Breaching a benchmark, or any change in those benchmarks, can have a direct impact on the user experience of your application.

If you are curious to know more, here is a great article on Understanding Speed Index | Performance Metrics.


How to track metrics?

You can use a synthetic monitoring tool like Catchpoint. If you are the adventurous kind, you can use Google Puppeteer to run a Lighthouse test that captures the above metrics, and Grafana to show a historical time series of these performance metrics.

As a product manager I track a lot more metrics and have built my entire dashboard on Grafana (more on this in a later post). I have a setup using the Google Puppeteer and Lighthouse libraries that pushes these and other performance metrics provided by Google Lighthouse into my dashboard every 24 hours. This allows me to see my performance numbers along with my other KPIs.
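
For reference, here is a stripped-down sketch of that kind of collector: it shells out to the Lighthouse CLI (installed separately, e.g. via npm) and pulls a few of the metrics discussed above from the JSON report. The audit names reflect recent Lighthouse versions as I understand them, so verify them against your own report; pushing the values into Grafana is left out.

    import json
    import subprocess

    def lighthouse_metrics(url: str, report_path: str = "report.json") -> dict:
        # Run Lighthouse headlessly and write a JSON report to disk.
        subprocess.run(
            ["lighthouse", url, "--output=json", f"--output-path={report_path}",
             "--quiet", "--chrome-flags=--headless"],
            check=True,
        )
        with open(report_path) as f:
            audits = json.load(f)["audits"]
        # numericValue is reported in milliseconds.
        return {
            "first_contentful_paint_ms": audits["first-contentful-paint"]["numericValue"],
            "largest_contentful_paint_ms": audits["largest-contentful-paint"]["numericValue"],
            "time_to_interactive_ms": audits["interactive"]["numericValue"],
            "speed_index_ms": audits["speed-index"]["numericValue"],
        }

    # Run this on a schedule (e.g. every 24 hours) and push the dict to your dashboard.
    print(lighthouse_metrics("https://example.com"))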

When (& Why) to adopt Kanban

I started my career as a developer in a Waterfall project management environment. This was also the framework I was introduced to and taught in college; heck, I’m also a Certified IT Project Manager (2002) from the Singapore Computer Society, and that certification dealt purely in Waterfall.

Fortunately, unlike all my colleagues, and for some odd reason, my first introduction to agile project management was not Scrum but Kanban. Unfortunately this did not last long, as I changed jobs, and just like for all of you, it’s been a Scrum world for me since. For the past decade I have been using the Scrum model, and for the past few years I have been constantly telling myself: “Kanban does this so much better”.

One of the biggest frustrations with the Scrum model is the last few days of a sprint, when everyone is rushing to deliver what was committed at the beginning of the sprint, because splitting user stories or carrying them over is considered bad practice. Why? Because it affects the velocity and burn-down charts.

When it comes to Scrum, the three well-known metrics are (a quick calculation sketch follows the list):

  1. Velocity – the number of story points the team delivers in each sprint.
  2. Commitment vs. Done – typically a percentage showing how many of the committed stories were actually delivered.
  3. Burn-down chart – a graph showing how the sprint has progressed.
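
A quick sketch of how the first two are typically calculated, assuming you can export committed and completed story points per sprint (the figures below are made up):

    def velocity(completed_points):
        """Average story points delivered per sprint."""
        return sum(completed_points) / len(completed_points)

    def commitment_vs_done_pct(committed, done):
        return done / committed * 100 if committed else 0.0

    sprints = [(30, 24), (28, 28), (32, 21)]  # (committed, done) per sprint
    print(f"velocity: {velocity([done for _, done in sprints]):.1f} points/sprint")        # 24.3
    print(f"last sprint commitment vs. done: {commitment_vs_done_pct(*sprints[-1]):.0f}%")  # 66%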

In the past decade, I have seen velocity shift like my internet connection at home, and in my entire career in Scrum the only burn-down chart I have ever seen is this:

And thanks to the Commitment vs. Done metric, I end up playing referee between developers and testers, because developers are motivated to close a user story. It’s a very fine line you walk as a PM when you have to choose delivery over quality.

The other issue I run into consistently is having to pull a resource out of a squad/team to fix a critical bug. Ideally you fight and argue to push it into the next sprint, but some bugs have to be fixed immediately. This again eats into your time and makes you fail the commitment you made at the beginning of the sprint. Unfortunately, commitment vs. done is the most used metric to evaluate the success of a sprint/release.

At the end of every sprint/release, teams are supposed to hold retrospectives. In the past decade I have attended thousands of these retro meetings, where the goal is to discuss what went well, what didn’t go well, and what we should do going forward. After the first few meetings, you start noticing a pattern, which ultimately led me to believe that these meetings don’t work or aren’t effective.

Scrum by its nature is very limiting and requires discipline. Without it, you start noticing frustrated developers and teams, which leads to lower quality work. At some point, teams and organizations tend to focus on delivering on time rather than delivering high-quality products.

Kanban

Despite Scrum being the current #1 agile framework, Kanban has been gaining adoption over the years. I tend to work with Kanban (if given the opportunity) and my experience so far has been nothing but positive. Given the fast-changing landscape of technology and the fact that there is something new every day, adopting Kanban has helped me and my team tremendously to deliver a quality product (almost) on time.

Kanban was first introduced by Toyota back in the 1940s. They had a classic, supplier-driven assembly line designed for MAXIMUM efficiency; to take a more customer-driven approach, they optimized their engineering process.

When Toyota brought that idea to its factory floors, teams (such as the team that attaches the doors to the car’s frame) would deliver a card, or “kanban”, to each other (say, to the team that assembles the doors) to signal that they have excess capacity and are ready to pull more materials. Although the signaling technology has evolved, this system is still at the core of “just in time” manufacturing today.

Kanban does the same for software teams. By matching the amount of work in progress to the team’s capacity, kanban gives teams more flexible planning options, faster output, clear focus, and transparency throughout the development cycle.

I like this image from Cuelogic that shows the differences between Scrum and Kanban.


Kanban offers flexibility; however, one has to be disciplined with this methodology as well. Since Kanban does not force you into a commitment every two weeks, it has its own set of metrics that can be used to evaluate a team’s performance (a quick calculation sketch follows the list).

  1. Throughput is the number of user stories delivered in a given time range. For example, if your organization does a release every 6 weeks, you would look at the # of user stories that made it into the release.
  2. Cycle time is the number of days it takes for a user story to go from started to delivered. Most teams use percentiles here, and the most common measure is the 85th percentile.
  3. Cumulative flow diagram is used to visualize the flow of user stories for a given team.
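
A minimal sketch of the first two, assuming you can export the cycle time (in days) of each story delivered in a release window; the numbers are made up:

    import math

    def percentile(values, pct):
        """Nearest-rank percentile, e.g. pct=85 for the 85th percentile."""
        ordered = sorted(values)
        rank = math.ceil(pct / 100 * len(ordered))
        return ordered[rank - 1]

    cycle_times_days = [2, 3, 3, 4, 5, 5, 6, 8, 9, 14]  # one entry per delivered story

    print("throughput:", len(cycle_times_days), "stories this release")
    print("85th percentile cycle time:", percentile(cycle_times_days, 85), "days")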

Kanban is simple and, most importantly, flexible. You can mould the model to fit a process that works for you and your team. Unlike Scrum, Kanban has no mandatory rituals or activities. However, there are a couple of rituals you can borrow from Scrum:

  1. Story estimation – this can be helpful because it allows all team members to discuss a story, which in turn lets you determine its scope and size. Do it with your PM.
  2. Retrospective meetings are among the most important meetings for a team. They encourage you to improve constantly, and with Kanban they let you tweak your workflows to make sure quality stays high.

Finally, here is a perfect image that sums up the differences, and one you can use to decide whether to use the Scrum or Kanban model for your next product.

Product Metrics: How & What

When you work as a product manager, you ask yourself questions all the time. The most common among them are: Is my product working? Or, if the product is already established, is my feature working? How many users are using it? When do they use it the most?
That list is never-ending.

Software products today are much more complex (in a nice way). Some of this complexity comes with a treasure trove of data. We integrate with other software and SDKs that help us understand user behavior and experience.
This data is important and can help you understand product/feature performance: what is working well and what is not, and whether something needs a little nudge or a complete makeover.

Unfortunately, this data arrives in raw form. It is up to us to give it shape and understand it. The numbers go by different names; the ones I use the most are KPIs, metrics, and user adoption.

How many metrics?

There is no single set of metrics that works for everyone. Businesses and products (and features within a product) are all unique and have different goals based on their state and ambition. For example, a startup will want to acquire new customers quickly, whereas a 50-year-old business will be more focused on retention and upselling to an already established customer base.
Most organizations have a handful of metrics that sum up their product’s (or service’s) overall performance. From this handful there are always one or two metrics that are slightly more important than the rest. These are typically termed focus metrics.
These focus metrics are supported by more granular metrics that teams and individuals spend their time on to drive the momentum of the focus metric. Once you identify and set them up, they form a hierarchy with an upward flow of impact: any granular, lower-level metric that performs well will advance the focus metric.

What should be my Focus Metric?

Let’s talk about not one but several metrics. A “North Star” metric does not exist. Experts have mulled over this concept and concluded that a business should have a constellation of metrics rather than one single metric that matters. A North Star metric can be very limiting. For example, Facebook no longer has a single metric in Monthly Active Users; they follow a group of focus metrics that allows them to innovate and continue to grow their product.

Trying to maximize your focus metric can hurt your product too. If YouTube cared only about videos watched, they might autoplay videos when you visit their website. Take it from me, that is an extremely frustrating user experience. You are basically trying to force your users to do something at random, which has an impact on retention, which should be one of your focus metrics.

Focus metrics should be your top priority, not your only priority, and improving a focus metric should happen organically rather than at the expense of harming other KPIs.

Focus Metrics Recommendations

Choose a focus metric tied to active usage, such as Weekly or Monthly Active Users. These do a good job of summing up trends in other metrics, typically acquisition and retention. This number will show you whether your customers are using the product over time.

Once you have established your focus metric, it’s time to start adding metrics that complement it. An easy way to validate each one is to ask yourself: “if we improve this number, will the product’s long-term performance improve?”

At this point you start layering your metrics (like a layered cake), and you can go as many layers deep as the hierarchy and complexity of your product demand.
The first layer of metrics typically has checks to make sure the product is growing in a healthy direction. For example, if your focus metric is WAU, a good layer-1 metric is 7-day retention, to ensure you aren’t spending precious funds acquiring new users who leave after a day or two.

From the second layer onwards, these metrics are more customized to your specific needs. Continuing the above example, a layer-2 metric would be retention on your various platforms, like desktop, mobile, and web. You can go one more layer down and break it out by region.

You can keep going as many layers deep as you want, but keep clarity in mind, because the more layers you add, the greater the tendency to add confusion. A good rule of thumb is to focus on what matters. Too many goals can be just as ineffective as having none.

Key Metric Types

There is no secret sauce here. The industry has a fairly standard set of metric types/categories today.


Reach
This is the total # of people who have used the product in a recent time period. For consumer products, it could be the # of paid accounts, or users who have made a purchase in the last 3 months. For B2B, this key metric is often product install base or number of paid licenses within the past quarter or year.
Reach is important because it represents the maximum number of users who could reasonably become active, whether organically or through re-engagement campaigns.

Activation
This is the foundational step that primes a new user to become an active user. It was made famous by Facebook, who identified adding 7 friends in the first 10 days as their activation metric when they were a startup. This metric drove their Active Users key metric, and they made adding friends the central part of their onboarding experience.

It is recommended to view this metric as a % of new users rather than a count, in order to isolate it from natural user growth. That way you can tell whether you’re getting more successful at activating users over time.
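
As a small sketch, activation rate for a signup cohort could be computed like this (the activation milestone and the figures are hypothetical):

    def activation_rate_pct(new_users: int, activated_users: int) -> float:
        """Share of a signup cohort that completed the activation milestone."""
        return activated_users / new_users * 100 if new_users else 0.0

    # Hypothetical week: 1,200 signups, 420 of them hit the activation milestone.
    print(f"{activation_rate_pct(1200, 420):.1f}%")  # 35.0%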

Active Users
Active users are users who take a key action and receive value from your product within a recent time period. Here, value can be defined as one action, like uploading a picture, or a set of actions, such as uploading 3 pictures and tagging one.

A lot of consumer-focused products that promote habitual usage (Twitter, Instagram) look at Daily Active Users. B2B products are better viewed through the lens of Weekly Active Users, since they aren’t always used every day (weekends are off).

Monthly Active Users is also a good fit if your product has a monthly billing cycle, since bills are usually due monthly.


This is also one of the most common focus metrics.

Engagement
This is all about how committed your customers are. It accounts for both the frequency and the cadence of completing key actions.
This metric can be defined as the number of key actions taken, minutes of video watched, or number of transactions completed. It’s important to divide this by your active user count to measure the depth of engagement per user with your product. Otherwise, user growth might mislead you into thinking your product is stickier than it actually is.
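
A small sketch of that normalization, with made-up figures, showing how raw action counts can grow while engagement depth actually drops:

    def engagement_per_active_user(key_actions: int, active_users: int) -> float:
        """Depth of engagement: key actions taken per active user."""
        return key_actions / active_users if active_users else 0.0

    # 54,000 key actions from 9,000 active users one week, 60,000 from 15,000 the next:
    # total actions grew, but engagement depth fell.
    print(engagement_per_active_user(54_000, 9_000))   # 6.0
    print(engagement_per_active_user(60_000, 15_000))  # 4.0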

Retention
This is all about checking whether your product has staying power. Retention can be driven in two ways:

  1. Are you bringing in the kind of people who stick around?
  2. Are you giving those who have already come through the door enough reason to come back?

When deciding on the time frame for retention goals, pick a range that is long enough to capture the reasonable repeat visit cycle of your customers, yet short enough that teams can get feedback to iterate quickly.

Business Specific
It is possible that your product has specific needs or a unique characteristic where the above metrics won’t capture the right number for you. In that case it is absolutely fine to define your own metric.


Further Reading:

Product School – These are the metrics great product managers track

perf top for debugging and checking if your app is hogging your CPU

A lot of products today live in the cloud, and when it comes to their performance, developers typically have the knowledge and tools they need to ensure it is acceptable.
However, as a PM I like looking at this too, because it tells me whether performance is degrading because of a feature that was added.

You can look at the performance of an application in multiple ways. You can use synthetic tools like Google Puppeteer and Lighthouse to see the performance of a web application. But what about the server-side code that sits in the background to process and serve data to your application?

There are some handy local Linux tools that you can use to get a quick idea on how your application is impacting your CPU.

Your applications/programs spend a lot of time on the CPU, billions of cycles (a billion cycles is normal), and you want to know what your application is really doing and which process is using or impacting your CPU the most.

perf is a well-known Linux utility that helps you get some of these answers quickly. Keep in mind that perf is a quirky tool that gives you useful information in different ways, so give yourself some time to use it and understand the output.

Try running $ sudo perf record python
After running it for some time, quit by hitting Ctrl + C
The results are saved in a file called perf.data

To view the results run $ sudo perf report

This will show you the C functions from the CPython interpreter (not the Python functions) and their % usage of the CPU.

perf works on any Linux machine. The exact features will vary depending on your kernel version.

When you notice short-lived CPU spikes on your server, there are two things you can do.

First, running $ top will show you the list of all programs and their % usage of the CPU.
You can then run $ perf top
This is just like top, but for functions instead of programs. It will help you determine which function in the program is causing the CPU to spike.

perf top doesn’t always help, but it’s easy to try, and sometimes you are surprised by the results.

Also check out Flamegraphs by Brendan Gregg if you want to visualize your CPU performance. Follow the instructions on his GitHub to generate a report.
The graph is built from collections (usually thousands) of stack traces sampled from a single program.

Cache

Your CPU has a small cache on it called the L1 cache that it can access in about ~0.5 nanoseconds. That’s roughly 200x faster than accessing RAM. If you are doing operations where every 0.001 seconds matters, CPU cache usage matters. But you don’t want anyone abusing it.

Run $ sudo perf stat -a to collect system-wide counters (add -d, or -e with specific events, to include cache counters)
Let it run for a few seconds and then quit by hitting Ctrl + C
This will show you whether and how much those caches are being used.

You can also run $ perf stat ls, which simply runs the ls command and prints a report at the end.
You can also pass -e to count specific events, for example $ perf stat -e cache-references,cache-misses ls.

Your CPU tracks all kinds of counters about what it’s doing. $ perf stat asks it to count things (like L1 cache misses) and reports the results.

It’s Roadmap season

The (fiscal) year is coming to an end and most of us are recharged coming back from the holiday season. It’s the time of the year when product managers are the busiest, emotions run high and frustration sets in. It’s time to make the roadmap for the next year.

Within the organization, roadmaps are a fiercely debated topic. It’s not the principle of a product roadmap that’s debated; it’s the misunderstandings it brings.

Google “What is a product roadmap” and you will be treated to a variety of definitions (classic):

1. A high-level visual summary that maps out vision and direction: What is a Product Roadmap?

2. A high-level, visual representation of the direction your product offering will take over time: What is a Product Roadmap? | Customer Success Software | Gainsight

3. Plan for how your product is going to meet a set of business objectives: Product Roadmap Examples and Definition | Aha!

I like to call a product roadmap a strategic document or, in Gibson Biddle’s words, “The roadmap is an artifact — an expression — of your product strategy”.

Reading Gibson Biddle’s article on The Product Roadmap:

When I share a roadmap, I express confidence about the next quarter’s projects but highlight that the content and timing of the subsequent quarters are highly speculative. There’s lots of near-term learning that will cause plans to change.

This is extremely important. The classic mistake we all make is trying to set the roadmap in stone at the beginning of the year or, if we are flexible enough, trying to put delivery dates on each item.

Having such a roadmap is important. Keep in mind, it’s the one thing you can share with your customers to let them know whether the direction your product is headed works for them, and by how much.

Remember, a product roadmap is NOT a project plan, it’s not permanent, and most importantly it’s not a list of features or requirements.

For your product roadmap to express your product strategy: focus on outcomes, understand your audience (your stakeholders), keep it simple and stupid (KISS), state your product vision, and most importantly keep it updated.

I personally follow Gibson Biddle’s quarterly format for my roadmap. Focusing on themes has always helped me align teams behind a common goal. It allows for healthy discussion and alignment, and it helps keep the teams focused.