Metrics are critical for Product Managers, and there is no shortage of information on the web about which metrics to track. Critical Metrics Every Product Manager Should Track from Product Coalition is a good example. Designing and implementing a Product Management metrics program, however, is much harder. We'll explore the process for developing and implementing a product management metrics program.
The process starts by identifying metrics that are directly tied to the goals in your company's business plan. The metrics should be SMART: Specific, Measurable, Attainable, Relevant, and Timely. The Objectives and Key Results (OKR) movement has significant traction these days. Google is a long-time advocate of OKRs, having adopted them in 1999, shortly after the company's founding.
A key aspect of OKR is the linkage of corporate, team, and individual goals. This ensures alignment throughout the organization. Product Managers should strive to identify metrics that align closely with their entire organization.
This is where most analytics programs fall short. All stakeholders impacted by a metric should have a consensus on its definition. For example, Lifetime Value (LTV) is a common metric that most software companies use. LTV answers the question "what is a typical customer worth to us?" It looks at revenue, cost of sales, gross margin, and the typical duration of a customer's relationship with the company. A typical formula is:

LTV = (Average Monthly Revenue per Customer × Gross Margin %) ÷ Monthly Revenue Churn Rate
A more simplified version is:

LTV = Average Monthly Revenue per Customer ÷ Monthly Churn Rate
There are many ways to calculate LTV. If a company is fairly new, it may not yet know the revenue churn percentage of its customers. In such cases it needs a proxy to estimate the typical lifetime duration of a customer; often the average monthly churn rate is used. Established companies usually have definitive data describing the average lifetime of a customer. Metrics must fit each company's unique situation.
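As a minimal sketch of the two calculation approaches described above (the input values are hypothetical, chosen only for illustration):

```python
def ltv(avg_monthly_revenue, gross_margin, monthly_revenue_churn):
    """LTV using gross margin and revenue churn rate.

    Equivalent to: (avg monthly revenue per customer * gross margin %)
    divided by the monthly revenue churn rate.
    """
    return avg_monthly_revenue * gross_margin / monthly_revenue_churn


def ltv_simplified(avg_monthly_revenue, monthly_churn):
    """Simplified LTV: average monthly revenue divided by churn rate.

    1 / monthly_churn acts as a proxy for customer lifetime in months.
    """
    return avg_monthly_revenue / monthly_churn


# Hypothetical example: $100/month per customer, 80% gross margin,
# 2% monthly revenue churn.
print(ltv(100, 0.80, 0.02))         # ≈ 4000
print(ltv_simplified(100, 0.02))    # ≈ 5000
```

Note how the simplified version overstates LTV because it ignores gross margin; which version is "right" is exactly the kind of definitional question stakeholders must settle together.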
All parts of the organization that are impacted by a metric must share a common definition of what the metric is, how it will be calculated, and how the results will be interpreted. Marketing, Sales, and Finance may have different views on how LTV could be calculated, but these views have to be reconciled and one definition agreed to. Otherwise the organization whose views were not addressed will ignore or dismiss the metric, and the consequences of failing to act on it could be serious.
Most good metrics flow directly from data produced by operational systems like CRM, Sales Force Automation, ERP, or Web Analytics. The base data and metrics can be calculated by standard data analytics applications like SAS, Tableau, Business Objects, etc. Metrics data that requires additional attribution, cleansing, or formatting before calculation present a different challenge. Product Managers should assess whether the feasibility, costs, and benefits are worth the value a metric can bring.
One example is Marketing Influence Metrics, which assess how much revenue was influenced by specific marketing campaigns. This usually involves a manual, subjective process of identifying the sales transactions that a campaign influenced; no fact-based standard exists for making that attribution. As a result there is low confidence that the metric is actually valuable, or that the time, cost, and effort to produce it are worth the benefit it provides. Metrics like these are often called "vanity metrics" because they make one part of the organization look good while adding no value to the enterprise as a whole.
The critical part of an effective metrics program is a commitment to take action when a metric exceeds agreed-upon limits. Metrics turn data into actionable intelligence, but only if the organization acts on it. If the Sales organization refuses to take action when Customer Acquisition Costs exceed target values because of higher-than-normal commission costs, then the rationale for implementing a metrics program needs to be reassessed.
One technique that has been used for decades is to establish upper and lower control limits for specific metrics. Almost all metrics will have some normal variability, but control limits can be used to determine when a metric exceeds or falls short of an acceptable level.
W. Edwards Deming popularized the use of Walter Shewhart's control limits in the 1950s, when Total Quality Management became a standard tool in Japanese manufacturing. Shewhart's control limits sit three standard deviations (three sigma) above and below the mean of samples taken from a process. For a process that is in control, 99.73% of all points will fall between the control limits, so any point outside them is cause to investigate. It is important to investigate not only when a metric falls below the lower control limit because of some issue or problem, but also when the upper limit is breached. Understanding overperformance allows an organization to reinforce and extend the conditions that led to the better-than-expected results.
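A minimal sketch of how three-sigma control limits could be computed for a metric (the sample data below is made up for illustration):

```python
import statistics


def control_limits(samples, sigmas=3):
    """Shewhart-style control limits: mean ± sigmas * standard deviation."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)  # sample standard deviation
    return mean - sigmas * sd, mean + sigmas * sd


def out_of_control(samples, value):
    """True when a new observation breaches either control limit,
    signaling that the under- or over-performance warrants investigation."""
    lcl, ucl = control_limits(samples)
    return value < lcl or value > ucl


# Hypothetical weekly values of a metric that is normally stable
history = [10, 12, 11, 9, 10, 11, 10, 12, 9, 11]
print(control_limits(history))     # lower and upper control limits
print(out_of_control(history, 15)) # breaches the upper limit
print(out_of_control(history, 10)) # within normal variability
```

In practice, real control charts are built on rational subgroups and per-sample means rather than raw observations, but the principle of flagging only points outside mean ± 3σ is the same.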
All organizations impacted by a metric must commit to take action based on the metric’s results. Control limits provide a convenient and statistically valid approach to deciding when to take corrective action based on over/underperformance.
Metrics information should be published on a predictable schedule and shared across all the organizations impacted by it. A simple test for publishing and sharing: is the metrics information available on employees' smartphones? Contemporary analytics technology provides a wide variety of mechanisms to publish and share data.
Metrics should be widely available and not restricted to a small select group of people. David Siegel was the CEO of Investopedia for three years. As CEO, he realized that smart decisions are made based on data, and that the greatest reason for disagreement in organizations is the asymmetrical access to reports. So he instituted three steps to make sure employees had access to data detailing progress across the company: he distributed reports to everyone, held weekly metrics meetings, and let any employee join any email list.
For public companies, there is always a concern about the potential leakage of material non-public information. If an employee cannot be trusted to maintain necessary confidentiality, then there are larger issues than the metrics program that need to be addressed.
A good practice is to conduct periodic retrospectives on the effectiveness of the analytics program. Has the business or market changed such that the current metrics suite is no longer valid? Should some metrics be dropped and others added? Is the cost of metric data collection and analysis still delivering the benefits that were originally anticipated? Quarterly or annual program retrospectives will help ensure the long-term health and success of the program, and of the organization.
If you would like to examine how an effective analytics program could benefit your organization, check out Market-Driven Business’ Market-Driven Analytics workshop.
Also published on Medium.