Learnings From Our Usage-Based Pricing Rollout

A version of this post was originally published on OpenView’s blog.

If you have been paying attention to the SaaS world, you might have noticed that usage-based pricing (UBP) is now everywhere.

A 2021 report by OpenView shows that 45% of SaaS companies have a usage-based component in their pricing – double the adoption rate of just four years earlier – and another 34% plan to follow suit by 2023.

Clearly, this trend is not going anywhere, but the question is: does usage-based pricing actually work? 

In my experience, yes.

During my time at Landbot, I had the opportunity to lead our UBP rollout, and it ended up increasing our net revenue retention by 26%!

While the result was great, getting usage-based pricing right was far from easy. Because there aren’t many stories out there documenting how companies make the transition, I decided to share how we did it and what I learned along the way.

Before UBP, there were tiers and seats

To give you a little bit of context:

Landbot is a no-code chatbot builder that helps businesses automate their customer interactions – lead generation, customer support, surveys, quizzes, user onboarding, and more.

Our previous pricing model was based entirely on feature tiers and seats, which meant that customers generating 100K chats a month and customers generating 10 chats a month would pay us exactly the same amount. 

We saw an opportunity to monetize that usage, so we set out to experiment with usage-based pricing as soon as we closed our Series A round.

Research

Pricing is a highly sensitive topic that can have irreversible impacts. Considering that no one on our team had prior experience, we decided to hire an external consultant to help us. 

We broke down the research into three sprints. Each sprint took 2-3 weeks to complete, covering research design, data collection, analysis, and discussions.

Sprint 1: Buyer persona

We already had a fairly good idea of who our ideal customers were, but we wanted to further validate our qualitative understanding by looking at:

  • Internal data (enriched by Clearbit) – conversion rate, retention rate, long-term retained customers, and newly acquired customers.
  • Price sensitivity survey results from an external research panel.

What the price sensitivity survey looked like

The resulting output was a set of firmographic and demographic attributes of our ideal and secondary customers that we could use to segment our data in the following sprints.

Sprint 2: Feature packaging & value metrics

The goal of this second sprint was to find out:

  • How highly is each feature valued? -> We wanted to see if we could refine the features that go into each tier (or decide whether to keep feature tiers at all).
  • Aside from features, how do customers believe our pricing should scale? -> So we could find the value metric for our usage component. 

To do so, we used a MaxDiff survey to identify each respondent’s most-valued and least-valued options in a list of features (relative preference). We also measured their willingness to pay (WTP) using the price sensitivity survey.

What the MaxDiff survey looked like

Analyzing the relative preference among features
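
If you’re curious how MaxDiff answers turn into a feature ranking, here is a minimal sketch using simple best-minus-worst counting (real analyses use dedicated MaxDiff estimation tools; the feature names and responses below are hypothetical):

```python
from collections import Counter

# Each MaxDiff task shows a subset of features and asks the respondent
# to pick the most and least valued one. A simple score per feature is
# (# times picked best - # times picked worst) / (# times shown).
responses = [
    # (features shown, picked best, picked worst) -- hypothetical data
    (["integrations", "logic", "branding", "analytics"], "logic", "branding"),
    (["integrations", "branding", "analytics", "teams"], "integrations", "teams"),
    (["logic", "analytics", "teams", "branding"], "logic", "teams"),
]

shown, best, worst = Counter(), Counter(), Counter()
for features, b, w in responses:
    shown.update(features)
    best[b] += 1
    worst[w] += 1

scores = {f: (best[f] - worst[f]) / shown[f] for f in shown}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature:>12}: {score:+.2f}")
```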

Once we had the relative preference and WTP data for each feature, we plotted them onto a 2×2 matrix where the Y-axis represented the relative preference and the X-axis represented the deviation from median WTP. 

Each feature fell into one of four quadrants:

  • Differentiator (high RP x high WTP) = Customers are willing to pay more for it -> High tiers only.
  • Table Stakes (high RP x low WTP) = Customers see it as a “must-have” but don’t want to pay more for it -> All tiers.
  • Add-on (low RP x high WTP) = Some customers find it valuable and are willing to pay for it -> Horizontal add-on.
  • Trash (low RP x low WTP) = Customers don’t really care about it. 

Feature value matrix
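
To make the quadrant logic concrete, here is a rough sketch of how features could be classified programmatically (the feature names, scores, and cutoffs are all made up for illustration):

```python
from statistics import median

# Hypothetical (relative_preference, willingness_to_pay) per feature.
features = {
    "conditional_logic": (0.8, 35.0),
    "integrations":      (0.7, 12.0),
    "custom_branding":   (0.2, 30.0),
    "legacy_widget":     (0.1,  8.0),
}

rp_cutoff = median(rp for rp, _ in features.values())
wtp_median = median(wtp for _, wtp in features.values())

def quadrant(rp: float, wtp: float) -> str:
    high_rp, high_wtp = rp >= rp_cutoff, wtp >= wtp_median
    if high_rp and high_wtp:
        return "Differentiator -> high tiers only"
    if high_rp:
        return "Table stakes -> all tiers"
    if high_wtp:
        return "Add-on -> horizontal add-on"
    return "Trash"

for name, (rp, wtp) in features.items():
    print(f"{name:>18}: {quadrant(rp, wtp)}")
```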

Surprisingly, the result was almost identical to our existing packaging, so we kept our feature tiers untouched.

We also went through the same process for value metrics, but instead of using the matrix, we looked at whether the WTP for each candidate metric scaled with its potential volume. Eventually, we decided on “number of chats” as the new usage metric, while also keeping seats and feature tiers.

Sprint 3: Price points

In the final sprint, we were interested in nailing down the price points for our new usage-based pricing by again running the price sensitivity survey. However, this time:

  • We presented the research panel with detailed feature tiers.
  • We asked them about their potential usage volume (based on proxies like website traffic, number of leads, etc.).

We then performed the Van Westendorp Price Sensitivity Analysis to find the optimal price range for each tier. 

Calculating the optimal price range using Van Westendorp analysis
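
For the curious, here is a minimal sketch of the Van Westendorp mechanics: build cumulative “too cheap” / “cheap” / “expensive” / “too expensive” curves and find where they intersect (the survey answers below are simulated, not our real data):

```python
import numpy as np

# Simulated Van Westendorp answers (USD, per respondent): at what price
# is the plan "too cheap", "a bargain", "getting expensive", "too expensive"?
rng = np.random.default_rng(0)
n = 200
too_cheap = rng.normal(20, 5, n)
cheap = too_cheap + rng.normal(10, 3, n)
expensive = cheap + rng.normal(10, 3, n)
too_expensive = expensive + rng.normal(10, 3, n)

prices = np.linspace(0, 100, 1001)
pct_too_cheap = [(too_cheap >= p).mean() for p in prices]      # falls as price rises
pct_cheap = [(cheap >= p).mean() for p in prices]
pct_expensive = [(expensive <= p).mean() for p in prices]      # rises with price
pct_too_expensive = [(too_expensive <= p).mean() for p in prices]

def crossing(falling, rising):
    """First price where the falling curve dips below the rising one."""
    for p, f, r in zip(prices, falling, rising):
        if f <= r:
            return p
    return None

pmc = crossing(pct_too_cheap, pct_expensive)       # lower bound of acceptable range
pme = crossing(pct_cheap, pct_too_expensive)       # upper bound of acceptable range
opp = crossing(pct_too_cheap, pct_too_expensive)   # "optimal price point"
print(f"Acceptable price range: ${pmc:.0f}-${pme:.0f} (OPP ~ ${opp:.0f})")
```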

Once the ranges were established, the only remaining question was whether the usage part should be banded (fixed allowance per tier) or continuous (charge for every X amount of usage). In the end, we decided to combine the two: customers get a free allowance but have to pay extra for overage. 
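
As an illustration of the combined model, here is a minimal sketch (the plan names, allowances, and overage rates are invented, not Landbot’s actual numbers):

```python
# Hybrid usage billing: each tier includes a free chat allowance;
# usage beyond it is billed in blocks. All numbers are hypothetical.
PLANS = {
    # plan: (base_price, included_chats, overage_block, block_price)
    "starter": (40.0,   500, 100,  5.0),
    "pro":     (100.0, 2500, 500, 10.0),
}

def monthly_bill(plan: str, chats_used: int) -> float:
    base, included, block, block_price = PLANS[plan]
    overage = max(0, chats_used - included)
    blocks = -(-overage // block)  # ceiling division: partial blocks are billed
    return base + blocks * block_price

print(monthly_bill("starter", 480))  # 40.0 (within allowance)
print(monthly_bill("starter", 730))  # 55.0 (3 overage blocks)
```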

NOTE: We conducted our research mainly with an external panel (one that fit our buyer persona), because answers from existing customers are always going to be biased – they have an incentive not to be honest.

Of course, data from non-customers has its own set of biases, but it’s the lesser of two evils.

Experimentation

To ensure the usage component wouldn’t have any negative impact, we started it off as an experiment.

The original idea was to run the new pricing as an A/B test. However, upon realizing that it would take at least a year to reach statistical significance and that there was virtually no way to prevent users from seeing the other variant, we decided to launch it to all new users.

After three months, we reviewed how it performed against the old pricing in terms of:

  • Pricing page visit-to-signup %
  • Signup-to-paid %
  • Average selling price
  • M2 logo retention
  • M2 net revenue retention

(Month 2 = proxy for long-term retention)

We found that the new pricing did not hurt conversion rate, logo retention, or the average selling price, but it did increase NRR by 26%!

This gave us the confidence to fully commit to UBP and continue testing different iterations. 

After three iterations, we chose the best-performing version and have kept it to this day.

NOTE: I later talked to several late-stage PLG companies that had also gone through similar pricing changes. None of them did A/B tests for the same reasons I shared above. If you need more convincing, here is an article by Patrick Campbell detailing why it isn’t a good idea.

Learnings 

What boosts your monetization might hurt your acquisition and retention

Usage-based pricing often introduces friction in the buying process. Users need to understand how usage is calculated, estimate their required volume, and work out the potential cost. Even if they upgrade, there’s always a chance they’ll churn after getting surprised by their bills.

Basically, the effect isn’t always net positive. So it’s worth thinking about how UBP fits with your growth model.

For example, if your revenue is mostly sales-driven and has an underlying cost to serve that scales with usage, then usage-based pricing could be a great fit. However, if your product is mainly self-serve and relies on virality to drive acquisition, sacrificing conversions in exchange for a higher average revenue per account might not be worth it.

Another way to look at it is by picturing your monthly recurring revenue (MRR) as a two-dimensional graph. To grow it, you can improve either:

  • The number of paying customers
  • The average revenue per customer

A pricing change can accelerate your revenue growth in one direction but hurt it in the other. Your goal is to find a formula that will yield the most MRR growth in the long run.

Visualizing revenue growth as a two-dimensional graph
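
A toy comparison of that trade-off (all numbers invented): a variant that converts slightly worse can still win on total MRR if it lifts revenue per customer enough.

```python
# MRR = number of paying customers x average revenue per customer.
# Toy scenario (hypothetical numbers): the new pricing converts slightly
# worse but monetizes each customer better.
signups = 1_000

variants = {
    "old pricing": {"signup_to_paid": 0.050, "arpc": 60.0},
    "new pricing": {"signup_to_paid": 0.045, "arpc": 75.0},
}

for name, v in variants.items():
    customers = signups * v["signup_to_paid"]
    mrr = customers * v["arpc"]
    print(f"{name}: {customers:.0f} customers x ${v['arpc']:.0f} = ${mrr:,.0f} MRR")
# old pricing: 50 customers x $60 = $3,000 MRR
# new pricing: 45 customers x $75 = $3,375 MRR
```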

Get your value metric right (outcome vs usage, cumulative vs one-off)

Although it’s called “usage-based” pricing, users usually prefer getting billed for successful outcomes. Unfortunately, outcome-based metrics aren’t always realistic. This is why many SaaS companies stick to usage-based metrics as the best available option. This was also true in our case.

When the result of our value metric research came back, “number of leads” was rated as the most preferred option. 

However, we went against the data and picked “number of chats” because:

  1. Everyone has a different definition of “leads.”
  2. Although lead capturing was our top use case, we did not want to be seen as a vertical solution.

The good news is that SaaS users today have become accustomed to paying for usage, so it is still an acceptable option.

Another aspect to consider when choosing your value metric is whether it should be cumulative or one-off.

Cumulative metrics, such as “total number of user records,” are more predictable and tend to only go up. One-off metrics, such as “number of emails sent,” can fluctuate up and down more unpredictably.

As a business, you probably see cumulative metrics as the better option, but your users might not always agree. Make sure to consider their view, the nature of your product, and the norms of your competitive landscape.

Generally speaking, cumulative metrics work better with products that act as a system of record (e.g. CRM, analytics), whereas one-off metrics work better with workflow automation tools (e.g. business process automation, email marketing).

Make sure you have the technical and operational capacity

Usage-based pricing adds complexities to your billing system. You have to consider things like:

  • Should the usage be billed in arrears or upfront as credits?
  • How precise should your billing increment be?
  • How do you handle refunds, plan changes, and bulk discounts?
  • How will you inform a customer when they’re about to reach their usage limit?
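
On that last point, here is a minimal sketch of threshold-based usage alerts (the thresholds and the notification logic are placeholders, not a real billing integration):

```python
ALERT_THRESHOLDS = (0.8, 1.0)  # notify at 80% and 100% of allowance

def check_usage_alerts(used: int, allowance: int, already_sent: set[float]) -> list[float]:
    """Return the thresholds that should trigger a notification now."""
    ratio = used / allowance
    due = [t for t in ALERT_THRESHOLDS if ratio >= t and t not in already_sent]
    already_sent.update(due)
    return due

sent: set[float] = set()
for usage in (300, 420, 510):  # e.g. daily snapshots against a 500-chat allowance
    for threshold in check_usage_alerts(usage, 500, sent):
        print(f"notify customer: {threshold:.0%} of chat allowance used ({usage}/500)")
```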

You also have to prepare your sales and customer success teams to handle communications around UBP. This requires an entirely new playbook on processes, contract negotiation, and even compensation.

Last but not least, don’t forget about your finance team. Their involvement is crucial if you want to correctly report your usage revenue.

To be frank, most SaaS companies that have adopted UBP are figuring things out as they go, and that’s perfectly fine. You just have to be aware of the resources involved before committing. 

You won’t monetize every customer the same way

When analyzing how much usage to give away for free, we realized that the average volume our competitors were offering was far higher than what the majority of Landbot customers would ever need.

It put us in a tough spot. If we matched the competition, we would monetize only a small fraction of customers through usage. If we didn’t, our conversion rate might have suffered. 

After a few rounds of testing, we decided to go with the first option and monetize the remaining customers via features and seats.

This hybrid model offers a hidden benefit to products that serve a wide range of personas. It can push users to upgrade through any of the value metrics that are relevant to them.

For example, most B2B companies didn’t have enough top-funnel traffic to require the high chat allowance in our higher plans, but many still chose to upgrade for advanced integrations and conditional logic features.

However, this doesn’t mean you should get greedy and add a bunch of value metrics. Every value metric creates friction, and these frictions don’t simply add up – they compound. 

Don’t try UBP if you don’t have a stomach for risk

There were months when we saw large MRR contractions from usage. It made some of my teammates question whether UBP should stay. 

I had to calm them down by pointing out the following: “The contracted MRR came from usage in the first place. We can work on improving usage, but if we remove UBP, this entire growth lever is gone. Do we care about overall MRR growth, or about keeping a secondary metric looking pretty?”

Pricing changes are always risky, but the risk-to-reward ratio of usage-based pricing is asymmetric in your favor – which is likely what makes monetization the most effective growth pillar.

Make forward-looking decisions

Introducing usage-based pricing is considered a major pricing change. It’s rare for a SaaS company to make more than one major pricing change per year without confusing their customers or creating operational nightmares.

This means your UBP should be designed for not only the current state of your company but also its upcoming phase (1-3 years).

Bear in mind that your product will evolve, and so will your team, market, and competition. No amount of data will be enough to predict these changes. Most of the decisions will come down to qualitative judgment and convictions.

In our case, many decisions were not so clear-cut, but we ultimately went with what we believed would best fit our new strategy at the time.

UBP is one of many weapons in your company’s arsenal

Remember, usage-based pricing is not a silver bullet.

It’s only one potential lever in your overall growth model, but it can be a powerful one when executed well in the right context.

If you believe UBP could be a great fit for your SaaS, why not give it a shot?
