In an ocean of what, where, when, and by whom – the elusive search should be for Why

 

You’re staring at the expensive custom-configured dashboard of the software system you bought last quarter – the ultimate SaaS Data Analytics tool, or so your people told you.

 

You can readily see a real-time calculation of your monthly recurring revenue (MRR) and how it compares to last month’s and last quarter’s averages.  You can see your running churn rate of lost customers.  Over in the bottom corner of the screen you can even see expansion revenue on a bar graph next to upsell/cross-sell revenue, and a cumulative graph for both.  In the opposite corner there’s your conversion rate of demos to trials, and of trials to paying accounts.  This is amazing!

 

Only one problem: Your dashboard tells you NOTHING about WHY any of these numbers you’re seeing are what they are.

 

Indeed, in the era of Big Data there are a lot of data available to gather, sort, crunch, analyze, correlate and report.  In the Software as a Service (SaaS) world, most of the input data points are broken down into four major categories (NOTE: this isn’t meant to be a 100% comprehensive list!):

 

  • Business Metrics
    • MRR/ARR
    • MRR Churn ($ & %)
    • CAC – Customer Acquisition Cost
    • ACV/ARPA – Annual Contract Value, Average Revenue per Account
    • LTV – Lifetime Value
    • CRC – Customer Retention Cost
    • RRR – Revenue Retention Rate
    • Renewal Rate
    • Contraction Rate – Downgrades
    • Expansion Rate – Upsells
    • Reactivation Rate
  • Product Metrics
    • DAU/MAU – Daily/Monthly Active Users
    • Product Usage Rate
    • Sessions per User
    • Sessions per Day
    • Total Time Spent
    • Session Duration
    • Onboarding Completion Rate
    • Number of Key Actions per Session
    • Adoption Rate
    • Engagement Rate
    • D1, D7/W1, D30 Retention Rate
    • Time to Value – Signup to Revenue window
    • New Feature Adoption Rate
    • General Features Usage Rate
  • Customer Metrics
    • Customer Count
    • Churn (# & %)
    • Trial to Paid Conversion Rate
    • Average Onboarding Costs
    • NPS – Net Promoter Score
    • Advocacy Percentage
    • CES – Customer Effort Score
    • NES – Net Easy Score
    • CHS – Customer Health Score
    • Customer Success Engagement Rate
    • Customer Adoption Rate
  • Customer Support Metrics
    • Time to First Response – on 1st Customer Call
    • Average Reply Time
    • MTTR – Mean Time to Resolution/Repair
    • CSAT – Customer Satisfaction
    • Trouble Ticket Volume Total
    • Trouble Ticket Volume per User
    • MCI – Most Common Issue
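
 

To make a few of the business metrics above concrete, here is a minimal sketch of how MRR churn, contraction, expansion, and net revenue retention fall out of two monthly per-account snapshots.  The account names and dollar figures are entirely hypothetical, and real billing systems track far more nuance (proration, pauses, currency), so treat this as an illustration of the arithmetic, not an implementation.

```python
# Illustrative only: toy per-account MRR snapshots (all names/numbers hypothetical).
last_month = {"acme": 500, "globex": 300, "initech": 200}   # $/month
this_month = {"acme": 550, "globex": 150, "hooli": 400}     # initech cancelled

mrr_last = sum(last_month.values())
mrr_now = sum(this_month.values())

# MRR lost to accounts that cancelled entirely (churn)
churned_mrr = sum(v for k, v in last_month.items() if k not in this_month)

# Downgrades (contraction) among accounts that stayed
contraction = sum(max(last_month[k] - this_month[k], 0)
                  for k in last_month if k in this_month)

# Upsells (expansion) among accounts that stayed
expansion = sum(max(this_month[k] - last_month[k], 0)
                for k in last_month if k in this_month)

churn_pct = 100 * churned_mrr / mrr_last
# Net revenue retention conventionally excludes brand-new accounts (hooli)
nrr_pct = 100 * (mrr_last - churned_mrr - contraction + expansion) / mrr_last

print(f"MRR churn: ${churned_mrr} ({churn_pct:.0f}%), NRR: {nrr_pct:.0f}%")
```

Notice the detective angle already: the same headline MRR movement can hide very different mixes of churn, contraction, and expansion, and each mix points to a different “why.”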

 

Yes, that’s a long list (and a partial one at that).  But what does it all mean?  Does it tell you why your MRR isn’t increasing faster, or is stagnant, or even decreasing? Or why your churn rate is what it is?  How do any of these metrics affect the other metrics?  What are the interdependencies?  Most importantly, if some factors are interdependent, can any of those factors be influenced such that the effect on other interdependent metrics is positive (and desirable)?
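
 

One way to start probing those interdependencies is to correlate two metric series over time.  The sketch below uses hypothetical monthly numbers for onboarding completion and 30-day retention and a hand-rolled Pearson coefficient; a strong correlation doesn’t prove causation, but it tells you where to point the investigation.

```python
import math

# Hypothetical monthly series: does onboarding completion track retention?
onboarding_rate = [0.55, 0.60, 0.62, 0.70, 0.75, 0.80]   # completion rate
d30_retention   = [0.40, 0.44, 0.43, 0.52, 0.55, 0.61]   # 30-day retention

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(onboarding_rate, d30_retention)
print(f"correlation r = {r:.2f}")
# A strong positive r flags a relationship worth investigating; it does
# not, by itself, tell you WHY the two move together.
```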

 

Thus begins the detective story.

 

Think of most, if not all, of the metrics in the list above as “clues.”  Clues in and of themselves are just data points, some of which have important correlations to others, whereas some of them are just nice-to-know trivia with little practical value that you can actually affect.  But isn’t that the ultimate goal of knowing all these fun facts?  To be able to proactively change business behavior such that the bottom-line output metrics improve?  You have to believe that’s true; otherwise, they’re just a scoreboard, like watching a football game on TV.  You know the score, and maybe root for one team or the other, but you personally really don’t have any impact on the game.

 

That’s where analytical investigation comes in.

 

Investigative Analytics is of direct benefit in three key areas:

 

 

  • CUSTOMER ACQUISITION
    • What is the Customer Profile of the prospects with the most demo requests vs. the least? Can we steer our marketing efforts toward more receptive prospects?
    • Of prospects who watch a demo, what are the common factors of the ones who most readily move forward to a product trial? Can our Ideal Customer Profile (ICP) or Prospect Persona be even better refined?
    • What are the common denominators of trial participants who convert to new customers? Is there a particular feature set or specific functions that could be emphasized more, knowing it will gain a stronger reception?
    • What are the common denominators of new customers who continue to use the service past the first week or the first 30 days? Is it an operational commonality?  Or does it have anything to do with the onboarding experience?
    • What specific correlation does the Onboarding experience have to customers using the product more in their first few months as a new customer? Can it be improved?
  • CUSTOMER EXPANSION OR CONTRACTION
    • What factors contribute to accounts increasing their use of the system?
    • If a customer is using the system less and less over time, are they being analyzed to discover why? Is it a training issue or a perception of ease-of-use?
    • What’s the Customer Support history of an account with decreasing usage or contraction of services? Have they had bad service/support experiences?  Do they have a persistent issue that isn’t getting resolved?
    • Of the customers who are expanding, are there specific feature sets that are significant to the customer type or market segment that can be touted and scaled to other clients similar to them?
  • CUSTOMER RETENTION OR CHURN
    • Why do customers like you? Or do you just take that for granted?
    • What’s the number-one reason customers cancel their service?
    • How quickly is a customer “at risk” identified? What are the telltale signs?
    • Could customer retention or churn be related to Account Management, i.e. a people issue and not a product/service issue at all?
    • Is an increase in the volume of trouble tickets an indication something is amiss in development or quality control?
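
 

The “at risk” question above lends itself to a simple illustration: combine several churn signals and flag accounts where more than one lines up.  Everything here is hypothetical – the field names, thresholds, and the two-signal rule are stand-ins for whatever your own data supports, not a recommended model.

```python
# Hypothetical account snapshots; thresholds are illustrative, not best practice.
accounts = [
    {"name": "acme",   "logins_trend": +0.15, "open_tickets": 1, "nps": 9},
    {"name": "globex", "logins_trend": -0.40, "open_tickets": 5, "nps": 4},
    {"name": "hooli",  "logins_trend": -0.05, "open_tickets": 0, "nps": 8},
]

def at_risk(acct):
    """Flag an account when multiple churn signals line up."""
    signals = [
        acct["logins_trend"] < -0.25,   # usage dropping sharply
        acct["open_tickets"] >= 3,      # persistent unresolved issues
        acct["nps"] <= 6,               # detractor on the last survey
    ]
    return sum(signals) >= 2            # two or more signals => investigate why

flagged = [a["name"] for a in accounts if at_risk(a)]
print(flagged)
```

The flag is only the start of the detective work: it tells you which account to investigate, not why that account is slipping.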

 

There is an endless list of questions that could be asked in each of these three areas, far beyond these examples; but the point is clear: Data Analytics can give you a snapshot of what’s going on.  However, it’s up to you to connect the dots, figure out why the numbers are what they are, and decide what you can do about it.

 

For example, your marketing department can use A/B or multivariate testing on a marketing campaign’s landing page and then review analytics to improve click-through rates on a Call to Action.  But the analytics that tell the marketing team which version of a landing page works best happen after the fact: some creative person already decided what the choices were to test.  The analytics can’t do that for you.  They can only tell you which version visitors preferred.  Think about it: your speedometer tells you how fast you’re going – but it’s your foot on the accelerator.
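
 

When the results do come back, the one question analytics can answer crisply is whether the difference between two variants is likely real or just noise.  A minimal two-proportion z-test sketch, with hypothetical visitor and conversion counts:

```python
import math

# Hypothetical landing-page results: visitors and conversions per variant.
a_visitors, a_conv = 2400, 192    # variant A
b_visitors, b_conv = 2500, 250    # variant B

p_a, p_b = a_conv / a_visitors, b_conv / b_visitors

# Pooled conversion rate under the "no difference" assumption
pooled = (a_conv + b_conv) / (a_visitors + b_visitors)
se = math.sqrt(pooled * (1 - pooled) * (1 / a_visitors + 1 / b_visitors))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 suggests the gap is unlikely to be chance at the 95% level.
```

Even then, the test only says variant B outperformed; why visitors preferred it remains a human judgment call.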

 

  • What are the numbers telling you that you need to do to convert more prospects into customers?

 

  • What are the numbers telling you to do to grow your existing customer base, and grow it faster, instead of watching it remain flat or shrink over time?

 

  • What are the numbers telling you to do to maintain your clients and keep them happy long-term instead of losing them to a competitor?

 

The cliché about wanting our businesses to be “data driven” is thrown about all the time.  But who is doing the driving?  Or are our businesses just becoming data “overwhelmed,” without much intelligent interpretation of the facts to produce practical changes in organizational structure and operational behavior, so that we can best maximize our investments of time, people, and money?

 

What do all the clues say about whodunit, and why?  It’s time to stop staring reactively at dashboards and instead use them to ask our teams, “Why?” and “What are we going to do about it?”