Annual product management industry surveys are always fascinating. They provide a snapshot of the state of product management on a global basis. It is important, however, to put these studies in the proper context. While they offer many interesting factoids (“the average annual salary for senior product managers in the USA is $134,000”), the results they report can carry biases (“70% of product managers have one to three professional certifications”). It is important to keep these biases in perspective when evaluating the results of product management industry surveys. We’ll explore seven popular product management industry surveys from 2018 and 2019 for statistical significance and potential bias.
We examined seven surveys that have been published by product management organizations in either late 2018 or 2019. The studies include:
You can click on any study and a link will take you to the website where you can download the entire report for free.
I admit that I am a bit of a data junkie. I find that numbers and statistical analysis can bring clarity to otherwise fuzzy topics. One of the best articles I have read recently is How To Write A Good Blog Post Title. And why most of the things you think matter actually don’t. Kris Gage, a professional writer, explored the statistical validity of popular headline-writing guidelines with a detailed statistical analysis of 470+ articles she had written. She used the Pearson correlation coefficient to prove or disprove the accuracy of 14 different headline-writing guidelines (“The average title word count of the top posts is 6–7 words!” or “Use POWER words”). It is a fascinating article and definitely worth a read.
When I looked at the seven product management surveys, the first question I asked was whether they are truly a statistically valid sample of the global product management community. To my surprise, the answer is yes.
According to LinkedIn there are 801,238 product managers, managers of product managers, directors of product management, vice presidents of product management, and chief product officers on a global basis. While LinkedIn may not be the ultimate source of truth for product management population analysis, it is a reasonable proxy. Using a standard margin-of-error calculation I was able to determine the statistical significance of most of the product management surveys. A sample is considered statistically significant if the confidence level is 95% or higher and the margin of error is less than 5%. I cheated and used an online calculator to determine the margin of error for each survey based on the number of participants versus the global population of product management professionals:
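For readers who prefer to skip the online calculator, the calculation it performs can be sketched directly. This assumes the standard normal-approximation formula for a proportion with a finite-population correction; the sample sizes below are illustrative, not the actual respondent counts from the seven reports:

```python
import math

def margin_of_error(sample_size, population, z=1.96, p=0.5):
    """Margin of error for a proportion at ~95% confidence (z = 1.96),
    using the worst-case p = 0.5 and a finite-population correction."""
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population - sample_size) / (population - 1))
    return z * standard_error * fpc

POPULATION = 801_238  # LinkedIn product management headcount cited above

# Hypothetical sample sizes to show where the 5% threshold falls.
for n in (300, 1000, 2500):
    print(f"n={n}: \u00b1{margin_of_error(n, POPULATION) * 100:.1f}%")
```

Against a population this large the correction barely matters: a survey needs roughly 385 respondents or more to get under a 5% margin of error at 95% confidence.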
While the survey respondents may be statistically significant, it is important to remember that all of these surveys were sponsored and conducted by for-profit firms. They are in the business of selling product management training, certifications, or conferences:
Each sponsor makes a significant investment in planning, executing, analyzing, and publishing survey results. Some sponsors, like the Pragmatic Institute, have been doing surveys for 18 years. It is not unreasonable for these sponsors to expect some type of return on their investment. As a result some surveys may be slanted to emphasize particular points or reinforce the value of past and future purchases of their products and services. Some surveys are stridently neutral, others are clearly commercially focused.
It is important to note that all of these studies provide very valuable information. But, just as we teach our kids, readers need to understand the biases that any one survey might have. Readers who properly understand the context of a survey can make informed decisions.
There are a lot of great nuggets of information in these surveys. We’ll excerpt a few that stand out.
According to the Pragmatic Institute study, most Product Managers do not do win-loss analysis to learn about customer/prospect perceptions about messaging, positioning, differentiation, packaging, and pricing.
The Pragmatic survey is one of the most bias-free studies. There are times when they pitch the value of their training and certification but it is not over the top.
According to the Product Focus survey, the majority of product managers are responsible for multiple products:
The Product Focus survey is the most internationally diverse study: only 9% of the respondents came from the USA, while 82% were from Europe. The survey’s analysis was almost devoid of sponsor bias.
According to the Product Management Festival study there is relatively low satisfaction with current product management processes and methodologies. Methodologies from training and certification vendors received surprisingly bad marks, especially Blackblot:
This analysis is an example of how response bias can influence a survey’s results. The 13 items in this list are not all product management frameworks; several of them are R&D development methods (Scrum/Agile/Kanban, Design Sprints, SAFe, etc.). It appears that the question was designed to highlight problems with commercial product management methodologies (Pragmatic, Blackblot). It is also an example of sponsorship bias: Product Management Festival sells conferences rather than training and certification.
According to the Indian Institute Product Leadership Association the highest career priority for Indian product managers is to resolve internal conflicts better. Re-skilling/Upskilling was a very low priority:
The IPLA study was almost exclusively focused on Indian product managers. It provides a lot of insights into the state of product management in India.
According to the 280 Group study, End-of-Life management is one of the top skill gaps:
The 280 Group study focused on product management skills. In the introduction to the survey they note:
“In late 2018, 280 Group conducted its most ambitious and comprehensive survey to date to learn more about the skill levels of Product Managers across the globe. This survey set out to better understand the skill levels of Product Managers across 15 dimensions (called skill sets) and how they correlate to experience, job title, training, product process, industry, region, and other factors.” (280 Group study)
The 280 Group survey was the most ‘commercial’ of all of the studies. The analysis and recommendations were clearly defined to support the sale of 280 Group services.
According to the Alpha study:
The Alpha study was one of the smaller studies and is focused on a niche part of the product management universe: user testing and research. The survey questions and analysis reflect this focus. Alpha also publishes an excellent product management journal on Medium, the Product Management Insider.
According to the Revulytics study product managers typically do not use actual customer data when making product decisions:
Like the Alpha study, the Revulytics study has a small respondent base and was narrowly focused on software usage analytics, Revulytics’ core offering.
Product management industry surveys provide very valuable information on the state of the product management industry. Their conclusions can help enterprises identify emerging trends and best practices. Since all of these surveys are sponsored by commercial enterprises, it is reasonable to expect some biases that support the sponsor’s commercial objectives. This does not negate the value of the surveys; it just forces the reader to evaluate the results in the context of these inherent biases.
Also published on Medium.