Usable vs. useful: Adam Hagerman on UX and the future of research in product development

Description

In this episode of The Curiosity Current, hosts Stephanie and Elana Marmorstein talk with Adam Hagerman, a researcher and strategist whose career has spanned Apple, GLG, and Indeed. With a background in music, history, and public affairs, Adam brings a systems-thinking approach to research, blending qualitative depth and quantitative rigor to turn complexity into strategy. Adam explains how his career journey, from retail inefficiencies to shaping UX research functions at Indeed, revealed a truth: research is most valuable when it controls the narrative and informs what gets built, not just validates what already exists. He shares why the difference between usability and usefulness matters, how Jobs-to-Be-Done uncovers unmet needs, and why metrics like in-situ usefulness outperform NPS for guiding product teams. The discussion also explores how AI and automation can take on the dull and repetitive work, freeing researchers to focus on high-stakes, human-centered decisions. Adam emphasizes that influence in research is social as much as it is analytical; next-generation professionals must combine storytelling, contextual awareness, and foresight. His mantra is simple but powerful: when research is involved, good things happen.

Adam - 00:00:00:  

Research helps us figure out how the thing works. And how we can modify how the thing works towards a desired end. And that's why science exists, right? It's to tame nature. Earning more revenue from the market comes from being useful. Another way to think about this is, like, a want explains an action, but a need explains why they want it in the first place. I find the jobs to be done framework really helpful in, kind of, posturing this difference because it helps us monitor outcomes rather than activity. What we bring to the table is the ability to think through a problem synthetically and through a systems lens. It's not just about the collection of facts and figures. It is about the interpretation of them.

Stephanie - 00:00:43:  

Welcome to The Curiosity Current, the podcast where we dive deep into what's shaping today's trends and tomorrow's consumers. I'm your host, Stephanie, and I'm so glad you're joining me. Each episode, we tap into the minds of researchers, innovators, and insights professionals to explore how curiosity drives discovery and how discovery drives better decisions in an ever-changing market landscape. Whether you're a data enthusiast, a strategy pro, or like me, just endlessly fascinated by human behavior, this is the place for you. So get ready to challenge your assumptions, spark some fresh thinking, and have some fun along the way. Let's see where curiosity takes us next with this brand new episode.

Stephanie - 00:01:28:  

Welcome back to the Curiosity Current. Today, I'm joined on the hosting side by Elana Marmorstein, Growth Operations Analyst on AYTM's marketing team and one of the hosts of the conference series, Waves Thinking. Elana, thanks for joining.

Elana - 00:01:44:  

Thanks for having me, Steph. I'm really excited to be here.

Stephanie - 00:01:48:  

Me too. And then today's guest is Adam Hagerman. Adam is a researcher and strategist who has spent his career shaping product and user experiences at companies like Indeed and Apple. Adam has led large research teams, built functions from the ground up, and leveraged systems to connect insights to business decisions. His work blends qualitative depth, quantitative rigor, and a systems-thinking approach to translate complex challenges into human-centered strategy. Before his time at Indeed, Adam worked in investment research at GLG and has a background in history, music, and public affairs, a true Renaissance man. His unique perspective allows him to uncover meaningful truths about people and systems and activate those insights to create meaningful impact. In our episode today, we are going to be exploring the evolving role of research in product development, what the next generation of insights professionals need to succeed, and how organizations can better connect strategy to real user needs. Adam, welcome to The Curiosity Current.

Adam - 00:02:56:  

Thank you for having me, Stephanie and Elana. I'm excited to talk about these things.

Elana - 00:02:59: 

So were we. So just to jump right in, Adam, you have built this career that's, kind of, at the intersection of research, product development, behavioral science, strategy. I'd love it if you could just kinda take us back through your career journey and talk about how your education and your experience have shaped your approach to research.

Adam - 00:03:20:  

Sure. Why not start at the beginning? My first job was actually in retail. I was that person that was checking you out at the store going, beep beep beep beep! And anybody who's worked in those roles understands that there's oftentimes limited flexibility to change the way things work. You, kind of, do what you have to do. There's a process, there's a procedure, so on and so forth. But I realized early on that I'm one of those people who likes to say, I don't think that's very efficient. I think there's a better way of doing that. And I have found that ethos has followed me through my career and my education. I studied music. I sometimes joke that I'm a classically trained musician and that's true, but I'm not very good. I was more interested in music as a cultural artifact than performing it. And the reason that that distinction is important for this conversation is at the end of the day, the way music is done is an intensely social thing. It's also like a literature practice. It's written down, it's done over time, there's in fashion, out of fashion, your style, so on and so forth. And I was really captivated with, like, how did that system work? And because I was interested in music as a cultural artifact, I was also drawn to music in its place in history, as a thing people did. And that led me into more canonical studies of history, wars, empires, so on and so forth. But I was interested in the anthropological scope of how music played into that. There aren't many jobs for that.

Stephanie - 00:04:53:  

Right.

Adam - 00:04:54:  

Right. When I graduated from school, I was like, well, who's gonna pay me money to sit and talk about the role of opera in the Habsburg Empire? Probably nobody. But what I'd realized through my own curiosities was that what I can do is I can help figure out how the thing works, so I can explain it to other people so that the people who are more motivated and able and technically savvy, as became the case, are able to use those insights to modify the way their product operates in that environment. So I mentioned I had to find work. My first job was at Apple. The way I stumbled into it was a chaotic storm of events that had to do with me speaking German at the time. But what I got to do was I got to work with software developers as they were trying to get their intellectual property onto the App Store. And while Apple is known for their incredible user experience, that doesn't just happen. It doesn't just manifest from the sea foam like Venus. There are people who study that thing. I didn't get to be one of those people, but I got to work alongside them. So the role I had was helping these software developers navigate the Apple bureaucracy. How to get their app on the store, how to deal with the business once it's running, once people are downloading, once they start having consumer complaints, all these things. And there was also a system that could be improved, and my expectation that it could be improved led me to work with those people who could make the thing happen. And I realized that that is a career opportunity. That is something that you can do. For various reasons, I moved on to GLG, which is basically, at the end of the day, an expert network, but the group that I worked under was a special projects team. They had their survey research team. They had their more comprehensive portfolio of consultant-like services under that team as well. So, I found myself in another situation where I was doing research to help people make things work better. So, investment research is where I really got to dig into my interest in economics, and economics, at the end of the day, is a study of how people make decisions, how they choose to allocate whatever resources they have among the group to achieve certain goals. And investment research really helped me lean into that way of systems thinking, using numbers to tell the story. And then I moved to Duluth. I was at GLG for about five years, did a lot of interesting stuff for a lot of interesting clients, but Indeed came along. I worked in their marketing research team. I was part of the group that was doing advertising research, who is our market, and marketing research. And I just got to deploy those systems-thinking skills: there are multiple things happening at once, all in tension with each other, and here's how the thing works so that you can create a good message, product, whatever. And eventually I made my way over into the product team because I had worked cross-functionally, collaboratively with a lot of different types of teams at Indeed. The move just kind of made sense.

Stephanie - 00:08:08:

I loved hearing that journey of yours and how all of your interests really ended up moving you to where you are today.

Adam - 00:08:16:  

Yeah. Sometimes it's wild to think how many things I've touched, how many different industries, how many different projects, how many different products within those industries and projects, and all of that came from somebody who was just kinda frustrated with how the retail system worked.

Elana - 00:08:31:  

Well, and it sounds like, certainly, a theme across all of that is the systems thinking but another one that I feel like hearkens back to your education is the role of context in creating meaning. And I wonder, do you still use that framework even within research?

Adam - 00:08:48:  

Yeah. Earlier on in your introduction, you mentioned my blend of qualitative and quantitative research. And at the end of the day, I put that in my LinkedIn profile because those are the words that people need to hear and need to see. But at the end of the day, they're just different strategies to understand context. Statistics is, pejoratively, lying with numbers, but what it is, it's an argument. You're making an argument that something is or isn't, using a different rhetorical strategy. Much like Tchaikovsky would use the orchestra as a rhetorical strategy to convey something, and there would be all this context around those decisions and how that was done. The same is true for product design. There are tools that we can use to help people interact with their environment, and we can measure it in numbers, we can talk about it with nouns, verbs, adjectives, and the blending of all of those things is a triangulation at the end of the day. It's a picture of what the thing is, how the thing works, and what we can do to change the outcomes.

Stephanie - 00:09:57:  

So at Indeed, you led research programs that touched really every part of the labor market ecosystem. In today's product innovation landscape, how would you say the role of research has evolved from being a feedback mechanism to a strategic driver that really shapes the what and the why of products?

Adam - 00:10:20: 

Good question. It's one thing to state what has happened, and it's another thing to try to change the outcomes on the other end. Research helps us figure out how the thing works and how we can modify how the thing works towards a desired end, and that's why science exists. Right? It's to tame nature. The transition for me was moving from validation of, like, yes, this was the right thing to do. I did something that sometimes gets researchers into trouble. I showed them that we can have a theoretical framework for how the thing works. And that allows iterative thought, like infinite thought experiments. And what it allowed people to do was think more in the realm of possibility rather than, did I get it correct? And by changing that mental framework, just the framing even, I demonstrated that research could help set the future instead of just recording what had happened. Because we were able to use all of those fast facts and figures that we dredged up in the primary data collection. And we were able to frame it into a story, a narrative that my stakeholders could engage with much like they do their favorite movie characters, their characters in books, plot lines in the news. I was able to give them the same bits of information and spin them in a way to work through that narrative so that research was informing what they eventually build rather than validating that what they built was a good idea.

Stephanie - 00:11:49:  

Yeah. That proactiveness in research rather than reactiveness.

Adam - 00:11:54:  

Something I try to tell my team and I emphasize when I'm coaching them is we get the opportunity to control the narrative. When we're working with product managers, engineers, etcetera, go-to-market people, senior leadership, just generally interested parties around the company, everybody's showing up with an opinion and a bit of information that they want to use to influence the way the project goes, and their motivations are many, oftentimes well intentioned. What researchers have the opportunity to do is bring an objective lens, but they're not supplanting everything else. They're not saying all of your information is false. Rather, what they're doing is they're using that external information to help meld together all the different perspectives and shape that narrative of what is going to happen. That's the coaching I give my team when I see that we're stuck in a tactical space, and we need to move it into a more strategic space. It becomes about controlling the narrative rather than giving information.

Elana - 00:12:58:  

Very interesting. I'm gonna kinda lean into some of your UX research in this next question. I think some of your answers will be the same, but I'll be curious to hear if this uncovers anything else for you. So first, I want to acknowledge that, you know, you really got to shape what UX research looks like at Indeed, which is rare. Right? That's a rare opportunity to be able to just build a function from the bottom up, and I'm sure it was both challenging and rewarding if you wanna talk a little bit about that. I do wonder, though, for organizations, other organizations who are looking to root strategy in those user needs more effectively, what frameworks or processes have you seen work best for turning complex research into actionable business decisions? And I can see that, like, part of your answer is certainly using these storytelling techniques and making sure that from a contextual perspective, that insight has a role that's not to supplant. But are there other things that you sort of practice in this realm?

Adam - 00:13:59:  

I remember that a decision needs to be made. We don't just do research because we find it interesting. Like, sometimes I do, we all go down internet rabbit holes, we're like, the practical utility of this information we're gathering is low, but we find it interesting. In the business context, there's a decision that needs to be made. It's, oftentimes, an investment decision. The team is deciding, do we put people to work on this? Do we reallocate from this other thing? This is a decision that needs to be made, and the research should inform the outcome of that decision. And yes, it's about storytelling and so on and so forth, but the decision is the golden thread that kinda brings all these things together as we're trying to collate all of the different perspectives and things that could happen with our new information and our ability to synthesize. Reiterating that there's a decision, what is that decision, how is this research going to inform that decision, taking that approach, I find, makes the right decision inevitable, if that makes sense.

Stephanie - 00:15:01: 

 It does. I like that.

Adam - 00:15:03:  

The way I do that is a mix of logos, a mix of ethos, and a bit of pathos. But at the end of the day, the way you do it has to be responsive to how the organization is working. Who has power? Who gets to make decisions? How do they make decisions? You were very kind in saying that I helped to shape the UX culture at Indeed, but I did it in concert with all of these other people. And had that cast of characters been different, the culture of UX research at Indeed would have been different. Because all along the way, there were decisions to be made, and there were ways we could choose to approach the research, there were ways we could choose to approach the stakeholders, but it's all dependent upon that cosmic alignment of the cast of characters. So you may have felt that my answer was not prescriptive, and it's because there isn't one framework that will work all the time. But what has helped me is grounding it in that there's a decision that needs to be made, and the decision needs to be in the interest of users because that's who we're paid to represent. But also, we need to make the thing move forward. We gotta do something.

Stephanie - 00:16:15: 

Yeah. For sure. And it's interesting too because, you know, before when I was asking you about the role of context, I'm thinking about, like, contextualizing insights, but you were really also raising this point that, like, you as a UX researcher, as an insights professional, you sit in a unique context as well, and your success is only going to be as great as your ability to understand that context and know your role within it.

Adam - 00:16:41:  

And I'll expand upon that. It's not just knowing your role, it's also understanding what things you can influence, what things you can control. And, hot take, I find researchers oftentimes have a deluded sense of importance in the decision-making process. And that's not to say research isn't important, it's that we think sometimes that we can show up and, because we have the facts and figures, we've earned the right. But the reality is this is a social thing and we have to work with the others, and the context is much more than just the user context.

Stephanie - 00:17:20:  

Sure. It's the business context. Yeah.

Adam - 00:17:22: 

Because all those things blend together to make the market work the way it does. And I don't think we're doing ourselves any favors by ignoring that.

Stephanie - 00:17:31:  

That makes so much sense. If I can have a small follow-up here, I'm curious, is there any particular advice, given all of this sort of complexity, that you would give a UX research team that is really seeking to build more influence and impact with cross-functional partners?

Adam - 00:17:48:  

With researchers involved, good things happen. We're collaborative. We're not just naysayers, and it's important that we think about the simplicity in our delivery, maybe don't show all the stuff along the way, but they do need to be invested from the start. And bringing it back to when research is involved, good things happen. There also has to be an understanding that because research was involved, better things happened and that goes to understanding your context. How can I be helpful in this space? What things can I control? What can I contribute to the conversation that shapes it into something that advocates for users? And it is not dead on arrival. That would be my advice. Or I don't know if that's really even advice, it's more of a desirable outcome because the way you achieve that is going to change. The desirable outcome is when research is involved, good things happen.

Stephanie - 00:18:44: 

So rooting your behavior in that understanding, or that sort of mantra.

Elana - 00:18:49: 

Yeah. I like that.

Adam - 00:18:51:  

And then as you go about making your decisions about what to say, when to say it, to whom to say it, remember that the end goal is when research is involved, good things happen.

Stephanie - 00:19:01:  

It's confidence boosting too, which is never bad. Absolutely.

Elana - 00:19:06:  

Yeah. I remember listening in another interview, Adam, you actually mentioned how research oftentimes illuminates the consequences of decisions ultimately. And so in that same vein, when you think about organizations that might struggle to connect the strategy to what users actually need, you know, you push the lever, does something happen? Or if it doesn't happen, why or why not? So from your experience, what would you say are the most common gaps that you've observed in that connecting strategy to what users need? And how can research help bridge those gaps, especially in early stage product ideation and then later on in mature product optimization?

Adam - 00:19:49:  

I find that there's sometimes confusion between usability and usefulness of a product. Many product teams are incentivized by things like engagement metrics, but I think those can be deceiving because those measure usability, were people able to do the thing? Rather than usefulness, was the thing worth doing and did they have a good time doing it?

Elana - 00:20:12:  

Did they keep coming back to use it again and again?

Adam - 00:20:15:  

Yeah. Research helps explain the why behind certain observable phenomena, the thing we see, and I think that the difference between usefulness and usable is the gap between a want and a need. I think a want is demonstrated by somebody clicking the thing. They were lured into the behavior. We created an environment where the behavior we wanted them to do manifested. Great. Wonderful. That's usable. But that's not a strategic decision. That's not solving problems for people. That's not earning more revenue from the market. Earning more revenue from the market comes from being useful. Another way I, maybe, think about this is a want explains an action, but a need explains why they want it in the first place. I find the jobs to be done framework really helpful in, kind of, posturing this difference because it helps us monitor outcomes rather than activity. I think that shift, thinking about, let's see if I can make this parallelism work. Usable products demonstrate a want because people did the thing, and that's measured by activity. Number of clicks, sign-ups, so on and so forth. Things that happened. But usefulness leads to an outcome that is more desirable. And... that parallelism fell apart, maybe we don't put that in. But the difference between usefulness and usability is that gap, and jobs-to-be-done, focusing on the outcome we want to affect, changes the style of thinking. So products satisfy needs first, that's the strategy. We're satisfying a need, and then the wants come later through the engagement metrics.

Elana - 00:22:01:  

Right. And research is what illuminates, reveals, and helps us understand that usefulness for folks.

Adam - 00:22:08:  

Yeah. It explains the things we see. 

Elana - 00:22:11: 

Wonderful. 

Adam - 00:22:12:  

What's the meaning behind it.

Elana - 00:22:13:  

So, I'm gonna switch gears a little bit, Adam, to everyone's favorite topic, AI. Hope you're not sick of talking about it. But thinking through the rise of AI and automation, they're really transforming how teams gather and interpret user insights. So, how can research leaders integrate these tools while ensuring that human empathy and contextual understanding remain central to product strategy?

Adam - 00:22:43:  

Oh, wow. Yeah. Okay. So, there are a few ways I could think about this. AI is a tool to help us interact with information. And what it's good at is spotting patterns, not defining truth. In engaging with AI, I hear a lot of fear of inaccuracy, and I think that comes from a fear of being wrong and this belief that AI is giving you the answers, so it must be true, etcetera. I think the conversation here is the difference between facts and interpretation. Research is here to help us interpret the facts to help us change reality, I think that's the answer. Research is here to help us understand the difference. This is a really deeply philosophical question, and I thought I had thought about this really, really well, but I think my answer is changing by the day, I suppose. And I suppose that that's what many of your listeners are going to feel too. It's really hard to come up with a single answer about how humans need to interact in this because we're still figuring it out. Things are changing around us day by day.

Elana - 00:23:56: 

Yeah. 

Adam - 00:23:57: 

And I think we need to have an honest conversation about what AI is good for. What does it do well that helps us automate some of our dirty, dull, and dangerous tasks? At the end of the day, that's what it is. It's a tool to help us automate some things that maybe take time because, you know, there's a cost to that time. And it's interesting that this is happening because a lot of the work my team did was about how to automate things in recruiter workflows. That's what we did at Indeed. And I brought up the framework of dull, dirty, dangerous, and there's a fourth one, dear. Dull tasks, those are the, like, no, I'm quite happy for AI to help me manage those transcripts. I don't wanna keep tedious notes. I'd rather be present and engaged with the person I'm talking with and use AI to say, hey, so-and-so said this thing, I don't remember which person it was, but it's stuck in my head and I feel it's really helpful, help me find that. That's, kind of, a Ctrl + F situation, but it also helps me contextualize this among this other thing. I'm grappling with this thing that I heard that doesn't quite make sense, and I'd really appreciate some extra context and feedback. That's time, that's effort. There's an argument to be made that we should be having that conversation with another human being, and I think that that's a conversation to be had, but sometimes the reality of our work environment is that we can't schedule time with the other person to have this thoughtful conversation. And I found AI tools really helpful in unblocking those little moments of, this just isn't working, help me resolve through this. And it helps me move through my thought process faster. It helps me organize things, and it helps me deal with what I've come to call, but I'm stealing it from somebody else, cognitive overhead. Just that cloudy funk, like, the cobwebs are there, you just need to swat them out so you can move on. I think that's the value that I found. It helps us move through tasks faster, more efficiently. Sometimes it even calls out things that I have overlooked. Now, LLMs have a significant tendency to want to please their human counterpart, but you can also accept their output as something that may not have been colored by your own confirmation bias. So I found it useful in those ways. And I'm actually not sure that answers your question, Elana, but that -

Elana - 00:26:37:  

Yeah. From what I heard you say, it sounds like establishing what you want out of your AI and automation, and understanding what AI and automation are good for in the context of whatever you're doing, helps keep the human side central. I know what AI is going to be doing, and I know what my role as a human is, and what human empathy and contextual understanding are doing to really get me to the finish line and achieve what I'm trying to accomplish. So I believe you answered the question.

Adam - 00:27:14: 

Because at the end of the day, there's a decision to be made.

Stephanie - 00:27:17:  

Got it. I have to say, I did want to know what the dangerous tasks were. I was waiting on that one, and we didn't get to it.

Elana - 00:27:24:  

Yeah. 

Stephanie - 00:27:27:  

I mean, dirty is one thing, but what are the dangerous tasks? 

Elana - 00:27:30: 

Yeah.

Adam - 00:27:31:  

There are things that could endanger life and limb and emotional stability.

Stephanie - 00:27:37: 

 Okay. So, not so much in market research per se.

Adam - 00:27:40:  

I would disagree.

Elana - 00:27:41: 

I need more.

Adam - 00:27:42:  

What are the dangerous tasks in market research? We're interacting with human beings every day. The fact that I'm talking to people and somebody will listen to it, and what I have to say will maybe change their perspective on the way something goes. That's a very dangerous thing. And as that decision about what Adam chooses to say comes out, do we want that to be automated? Because I could say things that could endanger my life and limb, and I could also say things that endanger the person on the other side's life and limb. And as we think about what decisions would bring that out, it's what kind of research Adam does, it's how Adam chooses to interpret the information around him, it's how he chooses to read the room between Elana, Stephanie, and the masses that I don't even know who they are. Those are all decisions I make, and the consequences of those can be quite dangerous. So, do you want something automating those decisions that are made along the way? Because they are dangerous, just maybe not in the sense that like your arm will get caught.

Stephanie - 00:28:47:  

Got it. They require care. Right? There's a, yeah, gotcha. Understood. I'm sorry, I think I'm just like a seeker of a dramatic thing. So, when you said that, I was like, oh, what's it gonna be?

Adam - 00:29:02:  

It is dramatic. Right?

Stephanie - 00:29:03:  

It is. It is dramatic. I think that's totally fair. I love that. I want to follow up on Elana's question just a little bit, only to keep us in the context of, you know, whether it's AI or automation. Either way, I really wanna talk about the rapid pace of insights and what that sort of forces and requires those of us who work in the field, you know, to do to accommodate. And in particular, I'm wondering if you ever think about or have addressed how UX researchers can deploy a bit of foresight and anticipate user needs before they manifest, and if there are particular tools, you know, like systems thinking, scenario planning, that you use to sort of focus more on foresight?

Adam - 00:29:50: 

Foresight. It really is the systems thinking in context. That's the whole point of the enlightenment project. Right? It's being able to develop a theory in your head about how something works, test it in a rigorous way, the scientific method, and then use that to update how you view the world, how you choose to operate through that world. Like, okay, I thought this was making the system change in a certain way, I tested it, oh, it worked, or oh, it didn't work, or I'm not sure it worked. Those are the skills of foresight. And that's in contrast to maybe reading tarot or the tea leaves at the bottom of your cup, but in the current philosophy of science, that's how things work. So to answer your question, just keep using the skills you've already had, which is the ability to find information, discern the meaning of that information. What's the provenance? Is that an unreliable narrator on the other side? How do I resolve their inputs with what I already know? And you're constantly reevaluating your understanding of how the system works with all those new inputs. And I think with all of this stuff that is changing day by day, staying in touch with it, but maybe not obsessing over it, the obsession, I think, just keeps us in a state of perpetual anxiety because the reality is it will change tomorrow, so why invest in the state of now when you know that tomorrow is gonna be a bit different. And going back to what you can control, what you can't control, keeping up with what is changing and using that to adapt how you choose to navigate the world. What are the dull, dangerous, dirty, and dear tasks that you're ready to offload to these other tools or incorporate into your workflows to be more efficient? As you were starting your question, I thought, well, yeah, the days of months-long research projects are coming to an end. The reason we were able to get so much time for the months-long research process is because we had to manually move through all of that information: note cards, affinity maps, data analysis. We had to do that more manually. Now, the opportunity is that we can move through that information much faster, which means we need to be able to update our understanding of how the whole system works faster. And I suppose that's my answer. Keep up with it, keep adapting your understanding of how the world works, how the system works, how the tools can help you maneuver through that system better, faster, cheaper, easier, I suppose.

Stephanie - 00:32:31:  

So, you keep coming back to that, and I really like it because I think this is a unique perspective in our field and I think it probably comes from your background to a large degree. Then, is the idea that the more that I can be aware of how the system is changing, the more that I can understand, again, my role in it and where I can make an impact?

Adam - 00:32:52:  

Yeah. At the end of the day, there's a new technological frontier in front of us. And the collective internet has talked about how that's going to change the nature of knowledge work. And it is changing the nature of knowledge work, just as the computer changed the nature of a typist or the plow changed the nature of somebody tending a field. And there are several ways this can go, and we're seeing that debate happen in front of us right now. Some would even say it's a negotiation. And I don't know, I guess it is a moment of crisis for insights professionals where, as the landscape is evolving, we get to participate in how that narrative shakes out, how that negotiation shakes out, how we choose to engage with these tools, what we choose to do with them, with the understanding that we don't get to control how other people use them. Earlier in this conversation, I talked about controlling the narrative. Now, we have another stakeholder we need to talk about, which is AI. And the reality of that stakeholder is, it's by nature stochastic, it will be different every time. And I think that that raises an interesting philosophical question about the nature of truth and beauty in a world where everything is different and hyper-customized. And I think there's other literature outside of this podcast that would agree. And I don't think it behooves us to sit by and just say, well, that's interesting. I think we need to engage in how this whole thing shakes out, and we as an insights craft need to be present and aware of how this new stakeholder has entered our profession, and how we manage our stakeholders with that as an option. And I'm doing my best not to go into labor economics theory here, but what it's going to do is it's going to reconfigure what our craft looks like. And we're responsible for defining how that goes. I guess that's it.

Stephanie - 00:34:55:  

Yeah. No. That makes so much sense. And I love that, because you said the word crisis and then immediately turned it into opportunity, which I think is important because that is, I mean, that's all we have. Right? Or we can, you know, wring our hands and ride off into the sunset. But we've had a couple of other guests come on the show too and talk a little bit about, like, what are the opportunities for the insights professionals of the future? And I'm curious if either of these kinda strike you. And one is, of course, that expert role. Right? It's still providing the expertise, the lens, the guidance, the strategy. And then another role I've heard talked about quite a lot is the data steward, which again requires a whole lot of expertise to ensure that the data that you're compiling from other researchers, from AI, from generative AI-produced results, from synthetic data, that you are the steward of that data to say, you know, I'm looking across all of this and seeing this. And that's one role where insights can have an impact. And then, again, that other sort of strategic implications, connecting-dots kind of role. Do you see those as places in the future where some of these dull, dirty tasks and, maybe not the dangerous ones, but the administrative work, right, gets sort of offloaded in the analytics, and what does that leave for humans? Really the stuff that only humans can do, right, for now?

Adam - 00:36:37:  

Yeah. That "for now" was an interesting addition. More and more of the rote work that we do will be automated. I made a big deal about dangerous tasks. The reality is, like, Zoom has also automated the dangerous task, which is going to a physical location where we might get hit by a car, so on and so forth. So the automation in those processes is sometimes a good thing. And sometimes that automation results in more efficient projects, contributing to that, like, the months-long research project work streams coming to an end. I think researchers will need to focus on more of their high value skills, and we will need to do better than a prompt to an LLM. What we bring to the table is the ability to think through a problem synthetically and through a systems lens. It's not just about the collection of facts and figures, it is about the interpretation of them. And as a craft, we'll need to reckon with the value we provide as information gatherers and interpreters. I think this will shrink the pool of people who get to do insights work. I think that's a reality. It will shrink the pool of people who get to do that. And each of those people will need to be much more productive than they are today. That's the pressure of searching for surplus value like that, that's what it is. That's what technology is going to do. So, those who will get to participate in the craft moving forward will be the ones who have those skills: what is the right way to gather that information? How do I know this information is reliable? What do I do with this information when I know it's unreliable? And how do I interpret the sum of that? Those who get to continue participating in the insights craft will have to do that in a very compelling way with their stakeholders who are making decisions. Which means it has to be a lot less about, I'm the trained professional and you should trust what I have to say, and will have to be more of, this is how we work collaboratively to do what is in the user's best interest or the customer's best interest. And we as those critical thinkers have to earn the right to sit in that room. That's when good things happen, or when research is involved, good things happen, the axiom that I said earlier, and we have to do it in a way that is better than had we not been there. And that's our critical thinking. We have to be able to process all of that information coming at us, make sense of it in a way that aligns with what the consortium of stakeholders value, and then also what the market values. Bringing all of that information together, it's much like an LLM, it is also stochastic. We kind of have this belief that if we just measure it, we get to control it. But the reality is, we're always readapting and shifting and the realities are changing day by day. And if you just ask an LLM, yeah, you get the collective sum of the internet, but it's also stuff that already happened. We, the critical thinkers, are going through the process of changing a product, which will change the way people interact with their environment, which will change the nature of the environment they're interacting with. It's just constantly playing whack-a-mole with a system that's constantly changing. And we as insights professionals, those who get to continue doing that, will be able to adapt to those changing times at a much more rapid pace.

Stephanie - 00:40:12:  

Makes a lot of sense. It does. 

Elana - 00:40:15: 

Absolutely. And I think now more than ever, there's this additional prominence of research, not just discovering truths, but it's activating them, working with your stakeholders to, you know, actually go beyond uncovering user insights, but using those insights to reshape organizational priorities or product strategies. And I was wondering, Adam, if you could share an example where your research went beyond the research truths and insights to reshape organizational priorities or product strategy in a measurable way.

Adam - 00:40:53:  

Yeah. I was just talking about how the era of months-long research projects is coming to an end, and here I'm gonna talk about one of those months-long research projects. We did a rather extensive jobs-to-be-done study at Indeed. And the point of doing that was to uncover new opportunities as the space is growing, as there becomes an opportunity to organically grow your user base through new features and functionality that solve a real user need, that usefulness. When we did our jobs-to-be-done study, we were looking for that white space. We were looking for those outcomes that we would try to manage and influence, and how that fits within the larger story of what people were trying to do when they come to Indeed. It was a big study that had many applications in several different product spaces, so I'll use a more specific example. One of them was our virtual interview product. With COVID, a lot of things went virtual, and there was an opportunity for Indeed to take up more space in the HR ecosystem; rather than just being the receptacle for job applications, it could actually be the next point of contact of, okay, let's have a screening interview, so on and so forth. The product was initially mired in indecision and complexity. There were competing visions about what it was even meant to do, and what that meant is a lot of product teams were working on usability things. So yes, people were clicking, yes, people were doing the interview, yes, people were doing whatever activity was being monitored, but there was a lack of purpose and a lack of connection to how that thing fit into the HR ecosystem and Indeed's right to play and so on and so forth. With the jobs-to-be-done work, we were able to talk about these are the jobs that people hire an interview platform for, and that interview platform isn't just Zoom, it isn't just Indeed's interview platform, it isn't just a phone call, it's a panoply of things, but they're hired to do those jobs for different reasons. And if we understand what those reasons are, and we innovate on that with a virtual product that is connected to the user's resume, the candidate pipeline, so on and so forth, here's the additional value that it unlocks. And, I mean, the product had to struggle for a while, but eventually, we were able to start participating in the narrative loop, start shaping the way the product team was thinking about iterating and expanding. And once they did that, the engagement metrics went up, because we unlocked the usefulness aspect so that the usability just followed. So, instead of just constantly iterating on, like, the size of the button, or do we put the button at the top or the bottom, we were able to get at, well, what are people actually doing here and how do we intentionally create a workflow that helps them do that better, faster, cheaper, easier? The insights we collected were not just those facts and figures; the team already knew the facts and figures. What they needed was the cohesion with the story and the strategy, and how the interview process fits within that larger context. And that's what unlocked the growth in user base, retention, repeat users, so on and so forth.

Stephanie - 00:44:11:  

Makes sense. Adam, I've read that at Indeed you, kind of, spearheaded a move away from NPS to more actionable measures of success, and I love this. I think a lot of us have similar experiences with NPS both conceptually, but also just not having it pan out over time as a meaningful driver of any particular outcome, particularly in the B2B context. What kinds of metrics have you found are more appropriate for understanding the impact of your work?

Adam - 00:44:45:  

Yeah. I think NPS is useful. It's just not useful for everything. What we did at Indeed by deemphasizing NPS was we asked the team to consider: what are you actually measuring, and how do you think your inputs to the product experience are going to change that measurement? And that's a hard question to answer, especially with NPS, because NPS is asked far away from the experience, oftentimes days, sometimes weeks later, sometimes months later. The person who responds to it isn't always the person who had the experience, we found that to be true, and as a result, what we were measuring was not telling us what we thought it was telling us. What we were measuring was essentially brand loyalty, which, like, that's great. It's nice to know your brand loyalty, but how does changing the way an inbox is organized, how does that change brand loyalty? It's a much more complex thing than what the product itself can affect. And when I asked that question to my stakeholders, what I got generally was a nihilistic answer of, like, oh, well, I'm held accountable to NPS, but I can't do anything about it, so, whatever. But that was oftentimes the only user experience metric that people used that wasn't transactional. Like, did they click it? Did they return? So on and so forth. I had a boss at the time who was really supportive of big swings. I worked in an environment where I had, kind of, the institutional support to challenge a very dear metric, something that had been measured over time, that was discussed, that was well understood and well socialized, even though there was kind of this nihilistic bent on it. So we engaged with the spirit of NPS, which is, we want people to give us a rating from this side to this side that we can track over time, and that when we change something in the product, that metric will move. We had a hypothesis that if we take that measurement in situ, while the thing is happening, we will get a more sensitive measurement. So, when something changes, it changes, and that it will parallel other desirable business outcomes like revenue, engagement, and so on and so forth. We were still solving for this usefulness thing, we were measuring usefulness. So, as I mentioned earlier, NPS is measuring brand loyalty, which is a complex thing. Usefulness is also complex, but usefulness is very narrow in its scope. So, we were measuring it in situ, and we were able to prove, we had to prove this. Well, it wasn't just me. I had a brilliant researcher, a lead researcher, working on this at the time, and I was just the guy kinda making sure that all the edges were sanded down. But what we did is we came up with a system to demonstrate that yes, user sentiment measurement can be very helpful, but we need to do it in situ. We need to be very closely tied to the usefulness, why did somebody come here, their job to be done? And there needs to be some intentional sampling plan over how you're going to do it so it is a reliable metric. We did it through trial and tribulation. We didn't always get things right, but when we didn't get it right, we used that as an input to, like, okay, maybe we're thinking about this the wrong way, how do we change this so that the measurement becomes more reliable over time. And once we were able to demonstrate that the measurement is reliable and that when product teams listen to it, when they listen to the signal that's coming and they do something about it, something good happens. And that's where we were.

Elana - 00:48:37: 

Right. By narrowing the scope, you're making it more meaningful, the outcome and the results are more meaningful, rather than -

Stephanie - 00:48:43: 

Do you like us, maybe? 

Adam - 00:48:45: 

Yep. We were intentional about their job to be done, where they're doing it, and we had to play around with the methodology so that we were getting a reliable metric, because that reliability is important to credibility. And once people realize that when they respond to the feedback that this measurement system is giving them, good things happen, then that's self-reinforcing.

Elana - 00:49:10:  

Absolutely. Thank you for that, Adam. I'd like to wrap things up with one final question, which is: what is one piece of advice you would offer to insights professionals or organizational leaders? We won't exclude them. But what's a piece of advice you would offer to those who want to make their work truly transformational?

Adam - 00:49:34:  

Remember that this is not a standardized test. We're working in the real world that is messy, and it hasn't happened yet. So we can't really control the outcomes. That's our desire. Our desire is to do good in the world, but sometimes we need to give ourselves grace. This is not a standardized test where there's an objectively correct answer. We're all just trying to figure it out along the way, and giving yourself grace to be incorrect and also the ability to fail gracefully and the ability to be wrong, I guess. Fail gracefully and be wrong. I guess that would be my advice.

Elana - 00:50:15:  

I think that's great. I'm a big fan of failing and using that as a learning experience, to be explicit. And I feel like that's a takeaway for anybody.

Stephanie - 00:50:29: 

Failure fan.

Adam - 00:50:30:  

I don't think it's really about failing to learn. I think our objective should not be to fail, and I don't think you were implying that. However, when it does happen, it's because there was something we misunderstood, and that's okay. We misunderstood something about our theory of how the system works. We tried something out, it didn't work, but now we know that's not a lever we should use. And learning that should also help preclude other things we could do, which saves us time ultimately. So the intentionality behind failing, for me, is, yes, about learning, but also what does that learning tell us about what paths are available to us so we can be more efficient in our decision making process moving forward. So I'm not disagreeing with you. I'm just clarifying.

Stephanie - 00:51:16:  

And we do talk about that a lot here too, Elana. I mean, I say Elana because we work together. But just being willing to experiment is how we talk about it. Right? And if you are going to, sort of, engender and foster that culture at your company, you have to give people permission to fail. Otherwise, how are they incentivized to experiment? So, I think those really go hand in hand.

Elana - 00:51:40:  

Yeah. The curiosity that we love to talk about.

Adam - 00:51:44: 

Yeah. One of the things I use when coaching my team on this very topic is: don't ask if it's correct, rather, how would you know it's correct? And that shifts from that standardized test mindset of, like, I've got to get the answer correct, to the methodological rigor.

Elana - 00:52:00:  

Show your work.

Adam - 00:52:01:  

Yeah, show your work a little bit, but how would you know you've gotten there? What are the signals that would need to improve and how can we be sensitive to those things as opposed to thinking we have to get the answer correct?

Stephanie - 00:52:16:  

Makes great sense. Well, Adam, this has been an absolutely illuminating conversation with you today. We really appreciate it. I think one of the takeaways for me is just how much of how you think about research feels a bit more conceptual than we often do. You know, we sit in a very, it's strategic, of course, but, like, it's so easy to get caught up in the things that you're doing without kinda stepping back and saying, to that very point you made, how would you know? How would you know if it was successful? And keeping our minds on the higher order conceptual things that we're actually trying to accomplish.

Adam - 00:52:54:  

The decision to be made.

Stephanie - 00:52:56:  

Yes. So, it's been a real treat to have you, kind of, ground us in all of that in this conversation. So I really appreciate it. Thanks so much for your time today.

Adam - 00:53:05:  

Thank you for your time.

Elana - 00:53:06:  

Thank you, Adam.

Stephanie - 00:53:11:  

The Curiosity Current is brought to you by AYTM.

Adam - 00:53:13:  

To find out how AYTM helps brands connect with consumers and bring insights to life, visit aytm.com.

Stephanie - 00:53:19:  

And to make sure you never miss an episode, subscribe to The Curiosity Current on Apple Podcasts, Spotify, or wherever you get your podcasts.

Adam - 00:53:28:  

Thanks for joining us, and we'll see you next time.

Episode Resources