Product vs Feature teams

Marty Cagan writing about Product vs Feature teams:

In an empowered product team, the product manager is explicitly responsible for ensuring value and viability; the designer is responsible for ensuring usability; and the tech lead is responsible for ensuring feasibility. The team does this by truly collaborating in an intense, give and take, in order to discover a solution that works for all of us.

When I talk and write about how tough it is to be a true product manager of an empowered product team, it’s precisely because it is so hard to ensure value and viability. If you think it’s easy to do this, please read this.

Reluctance to give up control and delegate to your product team is probably the biggest reason I see so few empowered product teams.

Feature teams are set up similarly, but with a stark difference:

However, in a feature team, you still (hopefully) have a designer to ensure usability, and you have engineers to ensure feasibility, but, and this is critical to understand: the value and business viability are the responsibility of the stakeholder or executive that requested the feature on the roadmap.

If they say they need you to build feature x, then they believe feature x will deliver some amount of value, and they believe that feature x is something that is viable for the business.

Whichever way you see it, both are squads, but the differences run deep. I'll leave you with what I think is the most important one:

Let’s start with the role of the product manager. In an empowered product team, where the product manager needs to ensure value and viability, deep knowledge of the customer, the data, the industry and especially your business (sales, marketing, finance, support, legal, etc.) is absolutely non-negotiable and essential.

Yet in a feature team, that knowledge is (at best) dispersed among the stakeholders.

Controversial topic but an essential read.

Tight & Loose Cultures and their impact

From the Freakonomics Podcast episode – The U.S. is just different – so let’s stop pretending we’re not:

So, culture is about values, beliefs, absorbed ideas and behaviors. But here’s the thing about culture: it can be really hard to measure. Which is probably why we don’t hear all that much about the science of culture. When something is not easily measured, it often gets talked about in mushy or ideological terms. Michele Gelfand wasn’t interested in that. She did want to measure culture, and how it differs from place to place. She decided that the key difference, the right place to start measuring, was whether the culture in a given country is tight or loose.

I had no idea there was a whole field of study dedicated to cross-cultural psychology. Having worked in both tight and loose cultures, I could relate to this podcast and the impact these dynamics have had on me and my work.

All cultures have social norms, these unwritten rules that guide our behavior on a daily basis. But some cultures strictly abide by their norms. They’re what we call tight cultures. And other cultures are more loose. They’re more permissive. 

You have to listen to the podcast to understand why a country or its culture is shaped the way it is. It's not just entire countries; even states within the US have tight or loose cultures.

Michele Gelfand and several co-authors recently published a study in The Lancet about how Covid played out in loose versus tight cultures. Controlling for a variety of other factors, they found that looser countries — the U.S., Brazil, Italy, and Spain — have had roughly five times the number of Covid cases and nearly nine times as many deaths as tighter countries. But, let’s look at the pandemic from a different angle: which country produced the most effective Covid-19 vaccines? Tightness may create compliance; but looseness can drive innovation and creativity.

This blew me away.

Listen to the podcast or read the transcript.

Applying a ‘Time-To-Market’ KPI in product

Gabriel Dan on Mind the Product:

It’s a KPI—used mostly by the business—to measure the time required to move a product or service from conception to market (until it is available to be purchased). The process is the combined efforts of all stakeholders, product management, marketing, and so on. It includes workflow steps and strategies involved throughout the process. It’s usually calculated in days/weeks/months/years but it can be met in other forms too depending on how the different organizations will want to implement this.

This is simply amazing and important, especially when you are constantly trying to beat your competition to market and capture your audience.

The shorter the time to market is, the quicker the return on investment (ROI) can be realized, therefore you can imagine why it’s important for businesses.

The quicker the product gets on the market, the bigger market share the company will get especially in an unaddressed segment facing less competition and thus enjoys better profit margins. Getting fresh and relevant products to market quickly attracts customers.

Exactly. It is very common to fall into the trap of doing more and more before releasing to the market. The TTM metric forces you to be frugal about your MVP.

Gabriel Dan does a great job setting the premise and goes on to explain how TTM should be calculated. Highly recommended.

Avoid feature bloat and deliver your product strategy

Even the best products, including your favorite one, can suffer from feature bloat. In my last article, Measuring Feature Adoption, I talked about measuring feature adoption. Tracking it will help you avoid feature bloat.

A product filled with non-performing features causes

1. accumulation of technical debt,
2. increased maintenance costs,

leading to lower customer satisfaction (NPS score) and a lack of market differentiation.

Features, too, have an iceberg problem: they may seem small but turn out to have huge costs. This can happen when you decide to ship a feature for a specific customer or use case instead of shipping new products.

Avoiding feature bloat

One way to avoid feature bloat is to take a strategic approach to the bigger picture: focus on outcomes instead of output.

Avoid focusing on shipping features. Product teams should focus on the number of problems solved and the positive impact on their customers – this directly results in a better NPS score.

When you focus on outcomes, you can gather feedback from customers and talk to them more often to test your hypothesis for a new feature. Tracking the feature then lets you decide whether to iterate on it or pull it out.

Finally, doing a feature audit will also help you understand how features are being used. Looking at your feature adoption and usage metrics will allow you to decide whether to kill an underperforming feature or work on increasing its adoption.

Delivering a Product Strategy

As your product evolves and matures, typically in the growth phase, you attempt to serve everyone. In this phase, products can be disrupted by one big customer's specific needs or by smaller niche markets.

Segmenting your product based on customer needs, jobs, and personas will allow you to bundle it into different tiers at different pricing options and avoid feature bloat.

Measuring Feature Adoption

When it comes to SaaS products, product managers typically have product adoption as one of their top KPIs to track, especially when launching a new product. The logic is pretty simple: improving product adoption means higher retention, lower churn, and more revenue.

However, once a product is launched we continue to release new features to keep that product adoption going, but we fail to realize that we also need to track feature adoption.

Both of these key metrics indicate how well your product is received by your customers. The product adoption metric tells you the percentage of active users, while the feature adoption metric tells you why people continue to engage with your product.

So when you launch a new feature, looking at this individual metric will tell you whether it is driving your overall product adoption up or not.

Feature adoption is measured in percentage (%) as:
(# of users of a feature / total # of active users) x 100 = feature adoption %
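For example, with hypothetical numbers: if 150 of your 1,000 active users have used a feature, its adoption is (150 / 1,000) x 100 = 15%.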

Getting insight into which features users find the most valuable will also inform your team on how to position your product, as well as help you with any product decisions you make.

New User Onboarding & Time to Value

Customers (or users) generally hire software or a service to solve a problem, expecting immediate gratification (or value).

When customers first sign up for your product, they will either get what they are looking for; or they won’t…

SaaS applications are a bit like IKEA furniture: unless you assemble the pieces, you won't experience the value. SaaS customers must wait to experience the value of the product, and it's this delay that makes churn a common theme among SaaS products.

This delay is also referred to as Time to Value (TtV), i.e. the amount of time it takes a NEW user to realize the product's value.

User onboarding is important because you want these users to realize that the product they hired is solving their problems. Product managers should focus on reducing TtV and driving new users to become active users.

The longer your time to value, the higher your customer churn; users have very little patience. The key is to optimize your new user onboarding experience. Focus on the key actions that correlate with activation, typically an action that provides value.

It is important to have continuous onboarding for existing users as you introduce new features and products. TtV can also help move your already active users to engaged users, where the cadence of valuable tasks performed is higher. This will help drive adoption of your SaaS product.

Measuring your WiFi Quality

I have been using the router provided by my ISP (out of laziness) for the past few months. These routers aren't bad, but they do not work for every home environment. I live in a townhome and the router sits on the first floor. With all of us working from home, the connection has been spotty in some of our rooms. The download speed is great when you get a good connection, but the stability is poor.

When it comes to networking, I know the basics, and I know enough to understand the issue with my WiFi connection and that I need a stronger router. Before I got my Eero, I wanted to check my WiFi quality throughout the house. Even if you have a strong signal everywhere, one has to consider the noise. The number of devices connected to WiFi these days is mind-boggling: I counted 27 devices connected to my network, including 9 WeMo switches, a Ring doorbell, and a Nest thermostat.

macOS has a utility to check your WiFi connectivity: airport. Running it shows the key metrics you need to understand the quality of your WiFi network:

$>/System/Library/PrivateFrameworks/Apple*.framework/Versions/Current/Resources/airport -I
     agrCtlRSSI: -40
     agrExtRSSI: 0
    agrCtlNoise: -93
    agrExtNoise: 0
          state: running
        op mode: station
     lastTxRate: 234
        maxRate: 867
lastAssocStatus: 0
    802.11 auth: open
      link auth: wpa2-psk
          BSSID: 1x:1x:1:1x:11:1
           SSID: XXXXXXXXXX
            MCS: 5
        channel: 48,80

Two numbers are most important here. agrCtlRSSI (Received Signal Strength Indicator) is the power of the received signal in the wireless network. It uses a logarithmic scale expressed in decibels (dB) and typically ranges from 0 to -100. The closer this number is to 0, the better the signal quality.

The second is noise, or agrCtlNoise: the impact of unwanted interfering signal sources, such as distortion and radio frequency interference. This is also measured in decibels (dB), from 0 to -120. The lower the value, i.e. the closer to -120, the less noise in the wireless network.

Once you have these two values, you can compute the Signal to Noise Margin (SNR Margin) with the simple formula agrCtlRSSI - agrCtlNoise.

A higher value means a better WiFi signal.
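Using the sample output above: an RSSI of -40 and a noise of -93 give an SNR margin of -40 - (-93) = 53 dB, a healthy margin.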

To truly monitor this, I wanted something that could track WiFi quality continuously. Not many tools could, so I had two choices. The first: write a simple shell script that runs continuously, captures the two metrics, and reports the SNR margin (see the sketch below). This would mean keeping the terminal open and letting it run.
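
Here is a minimal sketch of what that script could look like (assuming airport lives at its usual path under Apple80211.framework; the 15-second interval is arbitrary and matches what I later used in the app):

#!/bin/bash
# Minimal sketch: poll the airport utility, extract agrCtlRSSI and
# agrCtlNoise, and print the SNR margin every 15 seconds.
AIRPORT="/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport"

while true; do
    out=$("$AIRPORT" -I)
    rssi=$(echo "$out" | awk '/agrCtlRSSI/ {print $2}')
    noise=$(echo "$out" | awk '/agrCtlNoise/ {print $2}')
    echo "$(date '+%H:%M:%S')  RSSI: ${rssi}  Noise: ${noise}  SNR margin: $((rssi - noise)) dB"
    sleep 15
done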

Since I have been tinkering with golang, this was a good way to learn something new. I found a golang library, getlantern/systray, that allows you to run a golang app in the system tray. I then used mholt/macapp, which allows me to build a macOS application. Now I have a continuously running application that updates the WiFi signal every 15 seconds. You can download the code or the WiFiQuality.app from my GitHub.

User Experience (UX) Metrics for Product Managers

As Product Managers we are obsessed with what we build. Well, we all want to build the best darn product ever. We immerse ourselves in understanding

  1. how our users are finding the product and whether our campaigns are working (Reach, Activation),
  2. how many users we have and whether they are engaging well with the product (Active Users, Engagement), and
  3. last but not least, whether our users come back (Retention).

The priority of these metrics changes depending on the nature of your app.

Give your product managers complete autonomy and authority to drive campaigns and onboard users. They know the product they are building better than anyone.

Automation is on the rise. With CI/CD and other similar processes, we are able to ship code at a faster rate than ever before. Delivering value to customers at this rate is great, but it is important to focus on quality over quantity.

When we talk about quality, performance is right up top. It is a common practice to check performance before your application goes live, and most organizations do this.
Google's Lighthouse is widely used for this purpose for web-based consumer applications. Lighthouse CI integrates with your CI/CD tool and passes or fails a build based on performance rules.

Note: You can ignore the SEO numbers/suggestions for your SaaS application.

These tools help ensure quality: they check SEO, accessibility, and best practices, and they measure performance metrics. Performance metrics are important for understanding how your page loads, as this impacts user experience.

Important Metrics to track User Experience

Start Render or First Paint & Largest Contentful Paint (LCP)

The reason I suggest choosing between two metrics is that one of them is easier to capture than the other. You can be the best judge of each metric's accuracy and whether it will work for you.

Start Render is measured by capturing a video of the page load and looking at each frame for the first time the browser displays something other than a blank page. This is typically measured in a lab or with a synthetic monitoring tool like Catchpoint, and it is the most accurate measurement.

First Paint (FP) is a measurement reported by the browser itself; that is, when it thinks it painted the first content. It is fairly accurate, but it sometimes reports the time when the browser painted nothing but a blank screen.

Largest Contentful Paint (LCP) is yet another metric from Google as they continue their foray into web performance measurement; it is part of their Web Vitals offering.

FP should typically happen in under 2 seconds. Imagine the application you are trying to use shows a blank screen for a few seconds before it starts rendering content; this is NOT a good user experience. Showing a blank screen for more than 2 seconds after entering the URL can cause page abandonment. You want to tell the user as soon as possible that some activity is happening. This could be as simple as changing the background color, which signals to the user that the application is loading.

According to LCP's definition, it is the time it takes for the largest above-the-fold content to load; for example, the breaking story on a news website. This is an important metric because users typically expect to see something relevant quickly.

Together, FP (or Start Render) and LCP measure the loading experience for a user.

Time To Interactive (TTI)

According to web.dev:

The TTI metric measures the time from when the page starts loading to when its main sub-resources have loaded and it is capable of reliably responding to user input quickly.

TTI follows FP. A big gap between the two means your users are waiting until the entire page finishes rendering. In other words, if you have an extremely fast-loading web application but a horrible TTI, the perceived performance is worse than that of a slower application.

We tend to build muscle memory for things we use constantly, and this is true for SaaS applications. For most of the tools I use at work, I have built up muscle memory: I position my mouse where the link or button will render, and the moment it does, I click it.

Speed Index

Speed Index is one of the metrics tracked by Google's Lighthouse (or Web Vitals) as part of the performance report:

Speed Index measures how quickly content is visually displayed during page load. Lighthouse first captures a video of the page loading in the browser and computes the visual progression between frames. Lighthouse then uses the Speedline Node.js module to generate the Speed Index score.

In simple words, the Speed Index tells you the rate at which visible content (above the page fold) loads. The lower the score, the better the user experience.

Typically, all of these metrics should be tracked by your engineering or performance teams; however, it is good practice to keep an eye on them, as they will be benchmarked against historical or competitive data. Breaching a benchmark, or any change in these benchmarks, can have a direct impact on the user experience of your application.

If you are curious to know more, here is a great article on Understanding Speed Index | Performance Metrics.


How to track metrics?

You can use a synthetic monitoring tool like Catchpoint. If you are the adventurous kind, you can use Google Puppeteer to run a Lighthouse test to capture the above metrics, and Grafana to show a historical time series of these performance metrics.
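
If a full Puppeteer setup feels heavy, here is a minimal sketch using the Lighthouse CLI and jq instead (https://example.com is a placeholder, and both tools are assumed to be installed):

# Run a headless Lighthouse performance audit and save the JSON report
lighthouse https://example.com \
  --only-categories=performance \
  --output=json --output-path=./report.json \
  --chrome-flags="--headless"

# Pull out the UX metrics discussed above (values are in milliseconds)
jq '{
  firstContentfulPaint:   .audits["first-contentful-paint"].numericValue,
  largestContentfulPaint: .audits["largest-contentful-paint"].numericValue,
  timeToInteractive:      .audits["interactive"].numericValue,
  speedIndex:             .audits["speed-index"].numericValue
}' ./report.json

Running this on a schedule (cron, or a CI job) gives you the same historical time series without a browser-automation layer.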

As a product manager I track a lot more metrics and have built my entire dashboard in Grafana (more on this in a later post). I have a setup using the Google Puppeteer and Lighthouse libraries that pushes these and other performance metrics provided by Google Lighthouse to my dashboard every 24 hours. This allows me to see my performance numbers alongside other KPIs.

Product Roadmaps

I won't be surprised if every year around this time I'm talking about roadmaps.

Simply put, a product roadmap is a high-level plan that organizations use to communicate how they intend to achieve their product vision. The product vision is typically driven by the company's overall vision.

There is no one right way of building roadmaps. I have seen extremely creative roadmaps using images and colors, and also black & white flat lists. Use the best (or right) tool that helps you communicate your roadmap to your audience, which is not one team but several.

At the very highest level, the product vision speaks to the business objective: how are we going to make money? There is a common theme to all roadmaps, and they tend to follow one of two main formats:

  1. Outcome-based, which is nothing but an overarching theme or an ability
  2. Feature-based, which gets into specifics.

The output of both approaches is a Product Backlog with Items (PBIs) clearly structured and defined.

For example:
Business Objective: Reduce access to item by 30%
Theme: Improve Search Experience
Features: Incorporate Global Searching using keywords

For the most part, I like taking the outcome-based approach: it is better suited to a dynamic market, and themes are less likely to change.

A feature-based approach is well suited to a mature and stable market.

Where do things start to break?

  1. The industry today is extremely dynamic, and there are 10+ vendors to solve any single problem. The feature-based model is fragile: it breaks almost instantly when feature definitions change and new risks or dependencies are uncovered. This tends to erode trust in the product team and its vision.

The best approach is for the management/executives to set the business objectives and empower their product team to determine the themes and features. Delegating builds better leaders.

  2. A good product vision and strategy also tend to fail in the race to meet deadlines. Product teams commonly fail to commit enough time and resources to validating their product strategy.

Basecamp follows a process called Shaping. It's creative, integrative, and a lot of strategic work. Setting the appetite and coming up with a solution requires you to be critical about the problem: What are we trying to solve? Why does it matter? What counts as success? Which customers are affected? What is the cost of doing this instead of something else?
This allows you to define the scope, experience, and value you are going to deliver to your customers.

  3. Roadmaps are typically a journey you undertake over the next 4 quarters; they're linear. But building that journey digitally is an iterative process, and roadmaps generally do not include outcomes or features to improve your existing features, because you have to keep pace with the delivery train.

Review your roadmaps constantly. They are not set in stone and are bound to change. Product demos, customer feedback, support tickets, and KPIs are good indicators of when you need to take a step back and revisit things.

Wrapping up: empower your product teams to drive the product strategy and roadmap, and make them accountable for achieving business objectives.