We were recently joined by well-known product leader C Todd Lombardo for an episode of Product Excellence AMA Hour. Todd is VP of Product at MachineMetrics, the company behind the manufacturing industry’s first industrial IoT platform for machines. He’s also an established author, with his third book, Product Research Rules, set to come out later this year.
In a lively discussion, Todd shared why product research doesn’t have to be difficult, expensive, or drawn-out, and why connecting the dots between quantitative product analytics, marketing data, and qualitative user research leads to excellent products.
Here’s a lightly edited version of the conversation. Enjoy!
Yes, thanks for asking! So you’ve got market research, which looks at the market. You’ve got user research, which asks what your customers are doing. What do they think about your product? What’s their journey? What are their personas? And then you’ve got your product analytics—all the data behind how people are using your product. The book tries to thread the needle through all three of these areas.
As a product person, you may have access to a user research persona. You may have a marketing team to work with. You may even have data analysts on your team, or maybe you are a data analyst. But because product is the intersection of everything, there’s sometimes a missing thread there. People might have strengths in one area, but then they might lean on that strength too much and not realize that there are other areas to focus on.
And another part is framing research in the right way. You can go out and do excellent customer research and get these wonderful insights, but if you’re not framing it in terms of the market and what it needs, you won’t get a viable product. I experienced this years ago. I remember going all over Europe and flying across the US, talking to different customers in the agro-bio sector. We had a ton of insights, but we never framed the research in terms of the market and our competitors. In the end, the product failed.
“You can go out and do excellent customer research and get these wonderful insights, but if you’re not framing it in terms of the market and what it needs, you won’t get a viable product.”
So the book tries to bring these three parts of product management together and helps the reader to not only frame research better but also to speed up the process of turning insights into product features.
As product people, we still ship products and features that fail. There’s an entire product graveyard out there. Google has launched a whole bunch of products that they’ve had to pull off the market. Everyone has done it! In some sense, there’s an element of putting stuff out there and failing fast. But at the same time, we don’t want to continue wasting time and money.
It kills me when I read stories about this. Over ten years ago, color.com burned through something like $41 million. Yikes! That’s a lot of money. Sometimes, an influx of VC money allows you to be sloppy with your product decisions. Companies are quick to put out a product, but often they’re not building something that solves real problems. What they’re doing is just making things cheaper, subsidized by VC money.
“Sometimes, an influx of VC money allows you to be sloppy with your product decisions. Companies are quick to put out a product, but often they’re not building something that solves real problems.”
In the conversations I’ve had around roadmaps, I started hearing similar themes. It was obvious something was missing – a gap to fill.
Product discovery is so important. Teresa Torres and Marty Cagan are big advocates of it, and so am I. So you need to have that in your tool kit. But also, you can’t ignore what’s going on in the market and the competitive landscape. If you don’t understand that, you might release a product that just isn’t going to work.
The same goes for analytics. You need to understand the data behind the research. We’ve done a lot of qualitative research in putting together a set of customer personas. But we’re also backing that up with analytics, looking at how the data matches up to those personas to ensure they are truly data-driven. Qualitative research helps tell us why customers are using a product, and quantitative research helps tell us what they’re doing. The combination of those two things gives you a lot of power as a product manager.
“Qualitative research helps tell us why customers are using a product, and quantitative research helps tell us what they’re doing. The combination of those two things gives you a lot of power as a product manager.”
You mention in your new book that the root cause of a lot of product failures is failing to understand customer needs. As product people, why do we do such a bad job of this, even though we know how important it is?
The thing is, we don’t necessarily fail miserably, and that’s part of the problem. We end up being mediocre. We’re not a smashing success, nor are we a smashing failure.
Also, there’s plenty of psychology research telling us that what we think of as our own smart choices is often actually just luck. We attribute success to our decision-making ability, not necessarily to the fact that we were just lucky. And so it ends up driving what I call ego-driven development.
We sometimes struggle because we think we have the right answer when we actually don’t. We think our decisions continue to be right when we’re really just going on hunches or gut feelings, which are often the result of an internal synthesis of past conversations or experiences. That’s fine, but you still have to back it up with data in some way and validate it.
Part of it is looking at whatever data you have right now, some of which you might not even realize you have. When I first got to MachineMetrics, I looked at all of the different customer tickets. I wanted to know what customers were complaining about, what problems we were solving well, and what problems we had failed to solve.
So if you have some kind of customer ticketing system, that’s a great place to look. It allows you to see how your current customers are using your product and whether anything is changing. If you look at last year, what were the common themes and trends you saw? How do they compare with this year?
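As a concrete sketch of that year-over-year comparison: assuming you can export tickets as (theme tag, created date) pairs (the tags and data shapes below are hypothetical), a few lines of Python are enough to surface the top themes per year.

```python
from collections import Counter
from datetime import date

# Hypothetical ticket export: (theme tag, created date). In practice this
# would come from your ticketing system's CSV export or API.
tickets = [
    ("billing", date(2019, 3, 2)),
    ("onboarding", date(2019, 7, 14)),
    ("onboarding", date(2020, 1, 5)),
    ("integrations", date(2020, 2, 20)),
    ("onboarding", date(2020, 3, 1)),
]

# Tally theme frequency per year so trends stand out.
themes_by_year = {}
for tag, created in tickets:
    themes_by_year.setdefault(created.year, Counter())[tag] += 1

# Compare this year's top themes against last year's.
for year in sorted(themes_by_year):
    print(year, themes_by_year[year].most_common(3))
```

At any real scale the work is in tagging tickets consistently (manually or with simple keyword rules); the comparison step itself stays this simple.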
If you have some kind of analytics package, you can start to spot some common patterns. What are people doing when they first log into your product? Is there a particular landing page they often go to? Is your default landing page skewing the results in some way? There’s so much data at your disposal that you may not even be looking at.
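The first-login question lends itself to the same kind of quick sketch. Assuming your analytics tool can export raw events as (session, timestamp, page) rows (a hypothetical format), you can count each session's first landing page like this:

```python
from collections import Counter

# Hypothetical raw event export: (session_id, timestamp, page viewed).
events = [
    ("s1", 10, "/dashboard"),
    ("s1", 12, "/reports"),
    ("s2", 11, "/dashboard"),
    ("s3", 9, "/settings"),
    ("s3", 15, "/dashboard"),
]

# The earliest event per session tells you where people actually land first.
first_page = {}
for session, ts, page in sorted(events, key=lambda e: e[1]):
    first_page.setdefault(session, page)

landing_counts = Counter(first_page.values())
print(landing_counts.most_common())
```

If one page dominates, it is worth checking whether that reflects genuine behavior or just your default redirect skewing the numbers.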
Then there’s going out and talking to customers. When I first started at MachineMetrics, I said to my designer, “Hey, when was the last time you spoke to a customer?” He replied that he hadn’t visited a customer yet. So I said, “Great, pack up whatever designs you have into a prototype; we’re going to a customer tomorrow.”
It’s harder to travel to a customer site right now. Still, a lot of it is about being adaptable, looking at the data you’ve already got, and attacking everything with a curiosity mindset. Data isn’t just spreadsheets and analytics. It can also be a summary of a transcript from a phone call.
“It’s about being adaptable, looking at the data you’ve already got, and attacking everything with a curiosity mindset.”
Great question. I don’t know anybody who is a true expert in all these areas. But when you notice you have a deficiency in one area, you should try to do something about it.
One of my friends at Mass Challenge, a major startup accelerator in Boston, said that startups tend to approach customers in three different ways. The first is a transactional approach, which is like, “Hey, I’ve got this thing, and I’m going to sell it to you.” The second is a confirmatory approach. They ask very leading questions like, “Wouldn’t it be cool if you had this or that?” They try to get customers to confirm their biases and hypotheses. The third is a diagnostic approach, which is like, “Tell me more about how you use that thing.” It’s a very curious mindset.
You have to think about the kind of questions you are asking. If you’re a very quant-heavy person and you’re really into analyzing numbers, you may need to practice that curious, diagnostic mindset. On the flip side, if you’re naturally more qual-heavy, you need to get better at understanding how numbers and data work.
It all comes back to curiosity. If you don’t understand the quantitative side, grab a data scientist. If you don’t understand the qualitative side, grab a UX researcher. If you happen to be a product team of one, attend seminars or find other people who have those skill sets and talk to them about how they do their job.
In my new book, there’s a great story about a well-known auction platform that did an annual survey of their user base, which highlighted a couple of key issues. One of the top themes was that the sellers in this particular marketplace didn’t know exactly how buyers found their products and services within the platform. This set off a series of smaller research projects that ultimately resulted in a product feature. This feature educates sellers on how the search algorithm works and how they can increase their visibility on the platform.
So it starts with that key question. From there, you can figure out what data you need to answer this question. See if you already have that data somewhere in your Zendesk tickets or analytics, or go and acquire some data to answer this question. But it really starts with understanding what you have now and then going from there.
As I mentioned earlier, the recent work we did around personas has been successful. One of the questions we wanted to ask was, “What’s the use case?” Or rather, “Who is the use case?” This is a complicated question to answer, as there are so many different roles in a factory.
For example, you have an engineer on the factory floor who is looking at lean processes and making their factory more efficient. Then you have a machinist or machine operator who might be like, “Hey, I need to make this many parts per day. As long as I do that, I’m good.” Then you have executives, who have entirely different goals.
There are different attitudes and behaviors, as well. You may have a super-engaged operator who wants to know all the details about their performance. You may also have another one who doesn’t care as long as they get the job done.
We’ve started pulling all that together. We haven’t finished it yet, but we’re close to matching all the quantitative data to each persona. It’s super cool for us to see things starting to make sense, like “Oh, yeah, that’s why these operators do that thing!”
Sometimes this actually works to your advantage. One of the reasons we’re not publishing the book until a little later this year is that the COVID-19 pandemic has made us wonder how to do this kind of research on a more remote basis. We wanted to make sure we thought about this question from our own experience, talked to others to see what they’d learned, and then incorporated that into the text.
I think there’s going to be a new normal. Right now, we’re not allowed to visit customers. So my installation team can’t go into a factory right now. We’ve used Zoom, which has been helpful for us as a product team. We managed to talk to the customer this way, and they showed us different parts of the machine as we were working with it. I think we’re going to see a lot more things like that.
Two weeks ago, I interviewed Mona Patel, who wrote the book Reframe. She told us about some research she did for a mortgage company that involved remote interviews with couples. A number of these couples ended up getting into fights while she was on the call with them. She learned about affairs, intentions to hide money from people, all sorts of things. Her insight was that if she had been in the room with these couples, that probably wouldn’t have happened. So when it comes to research, remote can sometimes be good for you. In some cases, your presence in a room may taint some level of authenticity.
“When it comes to research, remote can sometimes be good for you. In some cases, your presence in a room may taint some level of authenticity.”
It’s worth thinking about ways you can collect some of the information you need remotely. If you’re doing a more generative type of research project, you could try asking your customers to take pictures of their desk or work environment, or screenshots of the tools they use in their work. There could be other ways to collect information remotely that provide content just as rich, if not richer.
I think the first one is to be OK with being wrong – because you probably will be. I was on a call with one of my product managers, and I remember him saying, “Look, you’re going to be wrong all the time. I’m wrong all the time, and it’s great!” And I was so delighted that he was OK with it. He knew that being wrong was going to make him, the company, and the product better.
“Be OK with being wrong – because you probably will be.”
The second step would be to get into that curious, diagnostic mindset and away from those transactional or confirmatory ones. When it comes to receiving feedback from either your internal stakeholders or customers, get inquisitive. Think about how to bring out those answers you need.
“Get into that curious, diagnostic mindset and away from those transactional or confirmatory ones.”
The last step would be to look at the data you already have. You probably have more data at your disposal than you realize. Spend some time there, and you’ll be surprised. It might tell you where you need to go out and do more external research.