The real cost of bad data: Survey fraud, AI agents, and data integrity with Roddy Knowles

Description

What if you could protect your research data from increasingly sophisticated fraud? In this episode of The Curiosity Current, hosts Matt and Stephanie sit down with Roddy Knowles, COO of dtect, to explore the evolving landscape of survey fraud and data quality management. From organized fraud networks to AI-powered threats, discover why data quality has become an existential challenge for market researchers, how to implement effective fraud prevention strategies, and why questionnaire design remains crucial in maintaining data integrity. Whether you're a research provider, agency, or brand insights professional, this conversation offers practical solutions for safeguarding your data quality and maintaining research credibility in an era of unprecedented threats.

  • Uncover the latest trends in survey fraud
  • Learn proactive strategies for fraud prevention
  • Explore the role of AI in both creating and combating fraud
  • Understand best practices for questionnaire design
  • Gain insights into the future of data quality management

Transcript

Roddy Knowles:

We've already seen people start to use AI agents to complete surveys now. We've got some good ways to detect that based on the signals that we have, but that's going to continue to evolve and that's going to continue to get better for sure. If I could predict the future, I'd tell you exactly what that's going to look like. But what I can tell you with a high degree of certainty is it's going to get better and harder to detect for sure.

Stephanie Vance:

Hello, fellow insight seekers. Welcome to The Curiosity Current, a podcast that's all about navigating the exciting world of market research. I'm Stephanie Vance.

Matt Mahan:

And I'm Matt Mahan. Join us as we explore the ever-shifting landscape of consumer behavior and what it means for brands like yours.

Stephanie:

Each episode, we'll get swept up in the trends and challenges facing researchers today, riding the current of curiosity towards new discoveries and deeper understanding.

Matt:

Along the way, we'll tap into the brains of industry leaders, decode real-world data, and explore the tech that's shaping the future of research.

Stephanie:

So whether you're a seasoned pro or just getting your feet wet, we're excited to have you on board.

Matt:

So with that, let's jump right in.

Stephanie:

Today, we're so excited to welcome Roddy Knowles, COO of dtect, a pioneering platform that's redefining how we think about data quality in online survey research.

Matt:

Roddy is a well-known leader in our industry with more than a decade of experience spanning research methodology, product innovation, and in particular, data quality management. He is also the host of the This Is Product Management podcast and is an active voice in shaping where our industry goes next.

Stephanie:

In this episode, we're diving into one of the most pressing challenges in modern market research: how organizations can safeguard the integrity of their data in an era of increasingly sophisticated survey fraud. Roddy, welcome to the show.

Roddy:

Thanks. I'm super happy to be here and talk data quality and anything else you have on your mind.

Stephanie:

Love it.

Matt:

Roddy, your career in research spans product leadership and now data quality at dtect. What sparked your passion for tackling data quality head on in the first place? You know, was there a specific moment? Was there a problem that you encountered? Why this path?

Roddy:

Honestly, it's because I feel like it's the most existential threat that we have in the research space. So as you mentioned, I've been doing this for a while now, going on a couple of decades actually; I sort of stopped counting. Just thinking back to what data quality looked like 20 years ago, 10 years ago, three years ago, it was a whole different ball game. And if we don't get the quality problem solved, I think it is an existential threat for most of us in the research business. And so when I was thinking about what my next role was going to be, I really wanted to solve a problem that's core to the industry, and data quality is that problem. Frankly, I wish we didn't have to deal with survey fraud on the level that we do. It's unfortunate that we do, but it's reality, and it's an important thing for us to tackle. And so I wanted to tackle something that could benefit not only my company, but other companies in the entire ecosystem. That's how I landed here at dtect.

Matt:

That makes a lot of sense. A very future-proof sort of position, especially considering that whenever there's a new innovation or change in technology, it really reignites everyone's interest in data quality and tackling that problem. I really feel like we're in a moment like that right now. So very prescient of you.

Roddy:

Yeah, for sure. For sure. And especially as the landscape of fraud changes really rapidly. It changes, you know, every day; every month, every week, we're constantly seeing new threats. So keeping on top of those really is a full-time job. And that's what I'm focused on all day, every day.

Stephanie:

Makes sense. So Roddy, you've held leadership roles at DISQO, Research Now, and Dynata, and now at dtect, you're fighting fraud at the front lines. Taking a brief trip down memory lane, because I had the luxury of having this conversation this morning: I was chatting with a colleague who worked with you years ago, and what she said was, quote, too good to pass up. She said Roddy was hands down the best delivery manager I worked with, and he is an amazing researcher. So I just wanted to give you your flowers.

Roddy:

I appreciate that. I should give some credit, though I don't know whose it is. I don't know if it's true, but I'll take it.

Stephanie:

Yeah, for sure. But I'm curious, like, how have these different lenses that you've had into the data world shaped the way that you approach data integrity? And are there lessons that have kind of accumulated over the years and carried across these roles?

Roddy:

For sure. My career path has been interesting. I started as a researcher, a qualitative researcher. My background on the academic side is as a social scientist. And so I had a number of different positions throughout my career. I started doing fieldwork, started with actually doing the research, doing the analysis, questionnaire design, all these core things that we need to do as researchers, and made my way into product and made my way into tech. So I think my lens is a bit different than someone coming from outside the industry. I do think I have a unique position of understanding how the problem really trickles down across a number of roles, because I've been in a lot of those roles myself. And so one of the biggest things for me, as I think about tackling the data quality problem, is how it impacts everybody. It's easy to ask, who does data quality impact? Okay, well, it impacts the tech team because they're trying to prevent survey fraud. It impacts research managers because they're the ones trying to manage a project. It impacts the analysts because they're the ones spending time cleaning data. It impacts the sales team because they're the ones that have to deal with the collateral damage and the fallout when there are quality issues. So I think the biggest lesson that I've taken from my career and brought into this new role is how it really impacts everybody. So if I'm telling a story about why data quality is important, why fraud prevention is important, it's not because it impacts just one team or one role. It's really how it makes its impact across all teams and all roles, really anything that has to do with the research process. And it's everybody's job to figure that out. At dtect, we try to solve one problem, which is preventing fraud. But if you write a bad questionnaire, for example, that's on you. So it really takes everybody to solve this problem.

Stephanie:

For sure. So survey fraud has clearly evolved in recent years, you made this point earlier, especially in the context of increasing automation and AI that fraudsters can leverage to their advantage. Based on what you're seeing in the field, and I think I know the answer to this, how serious is the data quality problem for insights professionals right now? And more importantly, what's the risk if we don't act quickly to address it?

Roddy:

I said before it's existential, and I think that is not me inflating things. This is a massive problem. If you want to quantify it, 30 to 40% of data being bad on a study is not out of the norm these days. I remember when it went over 5%, if you had 8% bad data, we had to have a serious call, because something had gone wrong with that project. Now, if you have 8% bad data, if you don't get a gold medal, you get a silver one. And so that's really, really changed. So I think that, one, the scope of fraud has dramatically increased just on a project basis, but it's also the thing that everybody's talking about as a research supplier. There was a time when you could sweep data quality issues under the rug. You'd just clean the data before it got to the client and maybe not talk about the issues you were having. We can't do that anymore. And this is correct; I think it's the right thing that whoever you are, whether you're a sample buyer, an agency, or an end client at a brand, you're familiar with what's going on in terms of data quality. So the fact that it's part of the conversation that everybody is having has really, really changed.

Matt:

I know one thing: we have a lot of questions. Stephanie and I both spent a lot of time on the front lines with clients, shepherding projects through the process, so we're really interested to know what this war against fraud looks like. For example, I know you and your team at dtect have a lot of unique approaches to tackling this problem. You literally go undercover to really understand how fraudsters operate. I've got to ask, what have you seen? Has there been anything particularly shocking? What war stories do you have for us there?

Roddy:

Is it shocking? I don't know, because I'm numb to it to some degree, because we're seeing new things all the time. But I'll try to put on an external hat and give a perspective on some of the things that we're seeing. So the landscape of fraud used to be, oh, there are individual participants who come in and maybe they really just want to game their way into a study. Maybe they're angry because they bounced around a router for 15 minutes and haven't qualified. So, yeah, "I could be an HR decision maker, because I know someone who is." You had those sorts of issues. Then we started to have more organized fraud happening in survey farms, which, to be frank, is still really a problem, but they were typically co-located. So usually it was a group of individuals manning different machines, trying to get into studies at scale, either completing those studies or, the whole ghost-complete issue, just trying to qualify for the incentive. And so that was what we faced next. What we see now, and we still see that, is a bit different. We've seen distributed networks of people who are using similar technologies and similar approaches, and they're sharing best practices on committing survey fraud with each other. So it's not just a matter of whether you can squash these survey farms; you actually have to tackle these distributed networks and understand what they're doing in order to game their way into surveys. And so the pattern where you might see all this traffic coming from one location, maybe people trying to spoof their location and pretend they're in other places while coming from one place, we still see that, but we also see people coming from all over the place and taking different approaches. So you have people say, you know, "my survey got hit by bots." One, that's usually not true. Usually it's a distributed network of people sharing something on Discord, sharing something on Telegram: here's a study, here's how to get into it, this is high value, let's go. And all of a sudden they come in and they hit it fast, and it's really too late to react. If you don't have anything proactive that's preventing those people from coming in, then it really is too late. So I think the way that fraudsters have been able to congregate has changed, and the speed with which they can impact studies has changed dramatically in the last couple of years. And we continue to see that evolve.

Matt:

It sounds almost like an organized crime drama, except for the modern age and supercharged by technology and social media platforms.

Roddy:

That's pretty much accurate. And what's interesting is some of this is public. You can see some of what's going on; people are sharing this. Just go to YouTube, for example, and see some videos. If you want to go a level deeper, join some of these Discord groups and other groups, and you can see what's actually going on behind the scenes. And in order to attack fraud, you really need to understand, one, where it's coming from; two, what the motivations are; and three, most importantly, the techniques that people are actually using, so you can stay one step ahead.

Matt:

So can you talk to us a little bit about how you combat it? I know you've compared fraud prevention to a scarecrow, I heard that quote: fine for casual threats, but ineffective against these organized attacks. How then would dtect use technology, you know, something like behavioral monitoring, which I know you've mentioned, or trusted browser detection, et cetera? How do you get ahead of these sophisticated attacks using the tools at your disposal?

Roddy:

It's really, I think, a multifaceted, combined approach. So what we do, and I feel strongly about this, is we have passive checks, things that we can observe about people's behavior, and that's super important. So let me talk about that part first. There are different things that you can observe when someone is taking a survey or engaging with your website or whatever they're doing: things about their browser, how they're coming in, where they're coming from in terms of their location, and what signals you get about their actual behaviors and how they're interacting with the screen, things like that. And so we're incorporating a number of different signals, which on their own are powerful. The most crude, and I don't mean that necessarily in a bad way, would be, you know, VPN detection or something like that. You can see that signal. Someone using a VPN doesn't mean fraud; maybe, but it's not definitive. But if they're using a VPN and doing other things, then I'm going to have more confidence in them actually being fraudulent. If they exhibit certain behaviors, and I won't give away all the secret sauce here in case any fraudsters are listening, there's something I can see that tips me off to the fact that they are fraudulent. I'm not going to be able to see those things just by asking survey questions. But we do allow our customers, since we have a pre-screener within the platform too, to ask questions and actually validate what someone's doing. We can see how they are responding to those questions, not just the answers they're giving but how they're actually giving them. Are they doing it in ways that look sketchy and tip me off? Is there some automation happening, or auto-translate, or are they doing something to manipulate the responses so they don't actually look good? So we're looking at both the responses themselves and how they actually entered the data, how they responded to questions. So for us, it's a combination of that sort of passive behavioral monitoring and, you know, active engagement with a survey, or in this case, essentially a pre-screener. And we combine all those things to get better signals. We also try to be honest about that. I can tell you with a very, very high degree of certainty that this group is fraudulent. Then there are some people who are going to be suspicious, and I might say, yeah, you know, they're doing something, maybe they have a VPN or maybe there's some sort of mismatch in their behavior that looks a little bit sketchy, but I'm not certain about that. So I can flag them as suspicious and say this is something your team might want to look into. Typically that's in the 5 to 8% range. So we try to be honest there about what we can actually observe and what we're confident in, rather than being overly confident, because you also have the issue of false positives.

Depending on where you're sitting in the ecosystem, false positives are more or less of an issue. If you're a sample provider or an exchange, false positives are a real problem, because they constrict supply and constrict revenue. So we're really trying to strike a balance.
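
To make that signal-combination idea concrete, here is a minimal sketch of what layering weak signals into a verdict might look like. The signal names, weights, and thresholds below are invented for illustration; this is not dtect's actual scoring logic.

```python
# Hypothetical sketch of layering passive fraud signals into a verdict.
# Signal names, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    using_vpn: bool              # network-level signal; weak on its own
    location_mismatch: bool      # stated vs. observed geography
    automation_detected: bool    # e.g., scripted input patterns
    paste_heavy_open_ends: bool  # answers pasted rather than typed
    implausible_speed: bool      # finished far faster than humans do

# No single signal is definitive; several weak ones together build confidence.
WEIGHTS = {
    "using_vpn": 0.15,
    "location_mismatch": 0.20,
    "automation_detected": 0.40,
    "paste_heavy_open_ends": 0.25,
    "implausible_speed": 0.30,
}

def score_session(s: SessionSignals) -> tuple[float, str]:
    """Return a fraud score and a verdict bucket."""
    score = sum(w for name, w in WEIGHTS.items() if getattr(s, name))
    if score >= 0.6:
        return score, "fraudulent"  # block with high confidence
    if score >= 0.3:
        return score, "suspicious"  # flag for human review
    return score, "clean"

# A VPN alone stays "clean"; a VPN plus automation and speed does not.
print(score_session(SessionSignals(True, False, False, False, False)))
print(score_session(SessionSignals(True, False, True, False, True)))
```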

Stephanie:

Makes a ton of sense.

Matt:

It sounds like there's a little bit of calculus there, right? It's identifying the vectors, but also knowing how to combine those vectors, establishing a degree of certainty of fraud or not. There's some art and science there, it sounds like.

Roddy:

For sure. And I'd also argue that there are things we can do at dtect, and, you know, other companies are doing similar things, to keep people out of a survey, but it really does start at the very top. Think of where these people are coming from: what are the survey sources that you're using? Which sample providers are you going to? So you want to really think about it from the onset. Also, are you targeting the right people? Sometimes fraud, you know, air quotes, might not even be fraud. It may just be because you targeted the wrong people, people who weren't the ones you actually needed to talk to for your study. Maybe they don't end up getting screened properly either. So it really is a multifaceted approach, and we occupy one place in the ecosystem; we're not trying to do everything. Again, it falls on other people to find the right sources, make sure you're targeting the right people, and then ultimately make sure you're asking the right questions as well.

Stephanie:

Definitely. And, you know, you've talked a couple of times about how this tends to be a reactive process, but we have to be proactive about these solutions. I'm just curious how proactive these solutions can ultimately be. Certainly within the context of a project, you can be proactive. But what I mean is, when fraudsters are constantly evolving their methods, how do we stay in front of that? Or is this just an area where constant iteration, monitoring, and assessment will be critical?

Roddy:

I think it's really the latter. You can't take your eyes off of what's going on. If you're taking the same approaches you were taking a year ago, chances are you've got a lot of fraud coming in now that's coming at you in a different way. So it really is a matter of keeping on top of those. As you mentioned, Matt, we're keeping our ears and eyes to the ground and trying to understand what people are actually doing based on what they're talking about and what they're sharing. But we're also seeing a lot ourselves. We may be seeing different types of suspicious behavior that we weren't seeing before as things evolve. I hadn't gotten here yet, I was waiting for AI; we didn't get there in the first few minutes, so it's about time, right? People are using AI in a number of different ways. We have the rise of AI agents, and the ability to use AI agents to complete surveys. That is not something that we had six, nine months ago. So the signals that we use to detect behavior like that are different from what they would have been before. The other thing that's important is how you calibrate based on what those signals are. There are different things that we can observe: how you observe the behavior of someone using an AI agent is different from, you know, someone coming in manually. You may have multiple people coming in from a Discord group trying the same sort of attack vector, or you may have individuals coming in. So it really just depends on where people are coming from. It's not like in one study there is one specific type of fraud or one specific path someone is taking to try to get in. You may have some people trying to get to the end for the complete, so you might have a ghost complete; you might have people who are legitimately trying to qualify. There are a number of different approaches, so you really need to think about how you can tackle each of those scenarios. And again, it's really hard to keep up if you're on your own trying to do that. So having some tech-first protection, whatever that is, whether you work with someone like us or build something internally, is just really critical, because when you leave it to the humans on your team to try to keep ahead of it, it's a losing battle.
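
As one hedged illustration of the kind of behavioral signal this calibration might use: naive automation often produces suspiciously uniform input timing, whereas humans are ragged. The function and thresholds below are an invented example, not a description of dtect's detectors.

```python
# Hypothetical illustration of one behavioral signal: input timing regularity.
# Humans type and click with irregular rhythm; naive scripts and agents are
# often metronomic. Example data and interpretation are invented.
import statistics

def timing_regularity(event_timestamps_ms: list[float]) -> float:
    """Coefficient of variation of gaps between input events.
    Lower values mean more machine-like regularity."""
    gaps = [b - a for a, b in zip(event_timestamps_ms, event_timestamps_ms[1:])]
    if len(gaps) < 2 or statistics.mean(gaps) == 0:
        return float("inf")  # not enough data to judge
    return statistics.stdev(gaps) / statistics.mean(gaps)

human = [0, 180, 420, 510, 930, 1010, 1530]  # ragged, human-like rhythm
bot = [0, 200, 400, 600, 800, 1000, 1200]    # metronomic, script-like

print(timing_regularity(human))  # relatively high, roughly 0.7
print(timing_regularity(bot))    # 0.0: a candidate flag for review
```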

Stephanie:

Yeah, absolutely.

Matt:

I was going to ask what role human judgment plays in all of this, but I think you've already kind of hit on that from a couple of different perspectives. It really is core to everything; the tech is just a tool. At the end of the day, there's a critical judgment piece that really determines success or not. I'm curious if you can just talk us through what good judgment in this space looks like. If you were hiring someone to come in and do what you do, what are those skills? What does that skill set look like?

Roddy:

It's a really good question. The human judgment part is important, and it always plays a role in what we do, but observing only the things you see in a survey data set is only going to take you so far. Should you look at an open end and say this response looks fishy, because I asked a question and they gave a bulleted, you know, four-sentence response that seems really polished? Maybe they're a great participant, but they probably used ChatGPT to put that together. Are you seeing similar responses across participants in a data set? You can use human judgment to catch some of those things, and you should continue to do that. You should put checks in your surveys to see if people are answering different questions in an inconsistent way. If you have quality-check questions, they should absolutely be embedded in your survey. So human judgment should absolutely be involved there. But to your question, it's judgment plus certain other signals that you should be looking at. I've had this experience a few times, where a customer will come to us and say, you flagged these people, but these are good responses, why did you flag them? The open ends look good; these are some of the best open ends that we have. And I can say they may look good, but I can tell you that how someone completed that open end is really sketchy, because of what I can see about their behaviors and what they did when they were on the screen: how long they took, the way they responded, all those things. So then as a researcher, whoever's grappling with the data, you can make the decision, seeing not only what the responses are but how they were actually crafted. If you have these passive signals to use in addition to traditional researcher judgment, that's how that judgment can be more powerful.
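
As a small aid to the "similar responses across participants" check Roddy mentions, here is a rough sketch of near-duplicate detection on open ends. difflib is a crude stand-in (a production system would more likely use embeddings or fuzzy hashing), and the 0.9 threshold is an assumption for illustration.

```python
# Sketch of flagging suspiciously similar open-ended responses.
# difflib and the 0.9 threshold are illustrative choices only.
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar_open_ends(responses: dict[str, str], threshold: float = 0.9):
    """Yield pairs of respondent IDs whose open ends are near-duplicates."""
    normalized = {rid: " ".join(text.lower().split())
                  for rid, text in responses.items()}
    for (id_a, text_a), (id_b, text_b) in combinations(normalized.items(), 2):
        if SequenceMatcher(None, text_a, text_b).ratio() >= threshold:
            yield id_a, id_b

answers = {
    "r001": "I usually choose the brand my family has always trusted.",
    "r002": "I usually choose the brand my family has always trusted!",
    "r003": "Price matters most to me, then availability at my local store.",
}
print(list(flag_similar_open_ends(answers)))  # [('r001', 'r002')]
```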

Matt:

Makes a lot of sense.

Stephanie:

I feel like one thing that we do a lot is talk about data quality from a defensive standpoint, defending against the bad actors that are out there. I heard you mention this earlier, and I'm really glad, because I would love to know where you see researchers in all of this. Specifically, I'm thinking about research design, and questionnaire design in particular. How important is respondent-centered design when it comes to data quality? Certainly design is important to data quality overall, but I'm curious: are there any design best practices specifically for deterring fraud?

Roddy:

For sure. And it starts with a screener. First of all, questionnaire design is something that I'm really passionate about. I don't get to design too many questionnaires anymore, but I do love to when I get the chance. But it starts at the very beginning, which is screening people properly. I cannot tell you how many bad screeners I still see when there are issues with a study.

Stephanie:

Oh, we could share war stories, I'm sure.

Roddy:

You want to do a separate podcast on that? We can do it because I've got some really good ones. But every time I see a screener that has a yes, no question, I throw up in my mouth.

Stephanie:

Sure, sure. A little bit.

Roddy:

That continues to happen, sometimes a lot. So making sure the screener is right, for sure. Also, and I feel like I've been on this soapbox for 15 years, make sure that you're designing a survey you would actually want to take, one an actual human would want to participate in. Because a real human may come in as a good participant with good intentions, wanting to answer your questions and provide guidance for brands. But 15 minutes into that study, when they've gone through six grids, and then they hit another one with eight columns and 20 rows, they decide to straight-line through it. You can blame the participant, but I'm going to blame the researcher there. So the onus really is continually on the researcher to create participant-friendly questionnaires. And this is the advice I've been giving forever: have someone take your survey before you field it. And that doesn't mean you, the person who designed it; preferably it's someone who's not used to taking surveys. Because if you want a representative audience, you need to think not just about the people who have been conditioned to endure the torture of making it through a terrible 20-minute survey, but about the other humans you want. So if you're thinking about design, use best practices, use judgment, but pre-testing surveys, especially with people who aren't used to taking them, is really powerful and eye-opening. The next time you design a survey, give it to your boyfriend, your roommate, your friend, and say, hey, take a few minutes, what do you think about this? You'll also realize when things don't make sense. There's an easy trap to get caught in, and researchers get caught all the time: this is the data I want to get, without thinking about how to actually ask the right questions of the right people to get to that data. And so you tend to use jargony terms, or maybe you just don't think about how the survey actually flows from one question to another and the cognitive switching someone has to do when you change topics. All of those things are super, super important. And I'll get off my soapbox in a second, but I do feel like questionnaire design is a bit of a lost art. I'd say not completely lost, but an ignored art. There was a lot more attention put on questionnaire design 10 years ago than there is now. And with the rise of a lot of DIY platforms, and I've worked with companies focused on DIY platforms before, there are a lot of positives there, but when someone comes in with no guardrails and is able to design poorly structured questionnaires, that puts everything in jeopardy. So the educational part is really super, super important. And luckily there are a lot of tools available now. I won't say use ChatGPT to evaluate your questionnaires as the be-all and end-all, but as sort of a great unpaid intern with a little bit of experience in questionnaire design? Absolutely. So using some of the AI tools out there to evaluate your questionnaires is another great thing to do. There's no excuse, from my perspective, for fielding a poorly structured questionnaire.
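
For anyone who wants to try that "unpaid intern" approach, here is a minimal sketch of asking an LLM to critique a questionnaire draft before fielding. It assumes the OpenAI Python SDK with an API key in the environment; the model name and review criteria are illustrative, and the output is a first pass, not a substitute for researcher judgment or a human pilot test.

```python
# Sketch of an LLM-based questionnaire review. Model name, prompt, and
# criteria are assumptions; treat the output as a first-pass critique only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

REVIEW_PROMPT = """You are an experienced survey methodologist. Review the
questionnaire below for: leading or double-barreled questions, jargon,
grid fatigue, inconsistent scales, confusing skip logic, and overall length.
Return a numbered list of specific issues with suggested rewrites."""

def review_questionnaire(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

draft = """Q1. Do you agree our innovative product is better and cheaper? (Yes/No)
Q2. Rate 20 attributes on a 9-point scale for each of 8 brands."""
print(review_questionnaire(draft))
```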

Stephanie:

No, that's such a good point. Another thing that I have only experienced at AYTM, because in previous roles I didn't work at a company that had its own panel, is being able to see the kinds of support outreach that panelists will send to say, hey, look, I'm trying to take this survey and I can't, because I don't understand it or this doesn't make any sense. It's so illuminating to think about those people taking their time to say, I'm trying hard here in good faith, but this is not possible.

Roddy:

And we all need to do better. This is pointing a finger at almost everyone in the research ecosystem: making sure that people can qualify for studies. This comes back to the sample source; it comes back to targeting. I don't know if the two of you are members of any panels or take any surveys, but I would encourage any listeners or viewers in the industry who don't to do that, if you want to know why you get some angry people in your studies. Bounce around a router for a little bit. Spend 20 minutes trying to qualify for a study, think you've gotten into one, and then five minutes later get screened out. So we really need everyone to jump on board and not do that, because by the time someone actually gets into your study, you don't want them to be bitter about it and just fighting their way through to finish. We want to make it as positive an experience as possible, because when they can't get into a study, or when they get in and it sucks, what are the chances they want to take a survey again? And we do not have infinite human capital to support what we're doing. It's an easy thing to say, but not everyone is thinking about that, whether they're sitting on the panel or supply side, or the agency or design side. We all should be.

Matt:

100%. It is a precious touch point. And if you're on the supplier side, not only that, it's an extension of your brand. I mean, it's just completely critical.

Roddy:

It's an excellent point, Matt, and one that doesn't get highlighted enough. If a study is not blinded, and many times it doesn't make sense for it to be blinded, so you go out there and you're not hiding behind some company X, Y, Z, and you have a poorly designed questionnaire, or whatever it is, a concept test that doesn't make any sense, that shines negatively directly on your brand.

Stephanie:

On the client.

Roddy:

These are either the customers that you have or the customers that you hope to market to and win. So this is another touch point, and one that oftentimes gets overlooked. I think it's a great point, Matt; I'm glad you brought it up.

Matt:

We've talked a lot about, you know, the state of things leading up to now. Staying with the theme of battle, because I like that: what is the next frontier? What is the next battle on the horizon that you have your eyes on? What's the next big challenge headed our way? Where is this whole thing going?

Roddy:

It's AI agents. That's what it is. We've already seen people start to use AI agents to complete surveys. Now we've got some good ways to detect that based on the signals that we have, but that's going to continue to evolve and that's going to continue to get better for sure. If I could predict the future, I would tell you exactly what that's going to look like. But what I can tell you with a high degree of certainty is it's going to get better and harder to detect for sure.

Stephanie:

I know. Well, on that bright note... Yeah. Yeah.

Roddy:

But, yes, AI is an even playing field. Well, sometimes maybe it's a little bit uneven: companies that are highly invested in using AI have the resources to actually combat fraud. So it's not like fraudsters have AI that they can use and we don't; we actually do too. It's just a different playing field that we're playing on. So yes, disheartening in some ways, but also encouraging in others.

Stephanie:

Well, Roddy, if we wanted to leave listeners today with, you know, some advice about what they can do, and let's assume that most of them are researchers or insights professionals, what's one immediate step that they can take to start improving data quality in their own work?

Roddy:

Because you mentioned immediate, I would say look at your questionnaires, and start with your screeners. See if you're actually getting the right people into your studies, because that's an immediate thing you can do to ensure you're getting the right participants. I'll give you two answers. The second thing would be to evaluate the sample sources that you're using, if you're not currently doing that. Look at sample sources over time. Look at the number of reconciliations that you have. Don't just look at the average; don't get stuck in that trap, because it's that one study out of 20 where 80% fraud comes through that makes everybody look bad and really jeopardizes your integrity as a sample buyer or as an agency. I would say do that too. The obvious thing, and this is not a sales pitch, is to incorporate some technology as a gate to keep people out. Since you said immediate, yes, it's an easy thing to implement, but if you're just an individual researcher who may not have the authority to make that decision, start with the things that you can handle: assess what the overall situation you're grappling with looks like, and then figure out what preventative steps you could take as well. But I wouldn't wait to do that. Think about the things you can actually do today, tomorrow, easily.
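
To illustrate that source-evaluation advice, here is a rough sketch of tracking reconciliation rates per sample source across studies and flagging worst cases rather than averages. The data and the 0.5 tail threshold are invented for illustration.

```python
# Sketch of per-source reconciliation tracking: look at the worst study,
# not just the average. All numbers here are made up for illustration.
from collections import defaultdict

# (source, study_id, fraction_of_completes_reconciled)
history = [
    ("source_a", "s1", 0.04), ("source_a", "s2", 0.06), ("source_a", "s3", 0.05),
    ("source_b", "s1", 0.03), ("source_b", "s2", 0.02), ("source_b", "s3", 0.80),
]

by_source = defaultdict(list)
for source, _, rate in history:
    by_source[source].append(rate)

for source, rates in by_source.items():
    avg, worst = sum(rates) / len(rates), max(rates)
    status = "REVIEW" if worst >= 0.5 else "ok"
    # source_b averages fine but has one catastrophic study: that's the trap.
    print(f"{source}: avg {avg:.0%}, worst {worst:.0%} -> {status}")
```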

Stephanie:

I love that.

Matt:

That's great. My last closing question is a little bit of a curveball, just tangentially related. I'm really curious to get your thoughts on a related topic that's ruffling feathers in our industry, given your experience tackling fraudsters: what do you think about synthetic data?

Roddy:

I think synthetic data is here to stay. I think it desperately needs a rebrand, because it sounds terrible, but that will happen soon enough. I think synthetic data will continue to play a role in the industry for sure. The most important thing you can do is make sure the synthetic data is accurate, though, because if you're building models off of data that's not reliable, we're in big trouble.

Stephanie:

Fraudulent, yeah.

Roddy:

And the other thing about synthetic data: I think there's a lot of promise there. Right now there's a lot of smoke and mirrors around what people are doing, but there are also some companies out there, a few in particular, that I think are really doing new and interesting things. And we don't need to ask humans every single question. There is a strong case in many scenarios that, for questions you can actually model, you get the answer from a model and you don't have to ask humans. And that's cool. The other point I'd like to make is, let's assume some of us, or some of your listeners, actually care, because you have a panel or you think it's important to engage with humans. If we don't tackle this fraud problem, why would anyone want to do that? If I'm going to field a study and have 60% bad data, and I'm not even sure about the rest of the data that came through, why wouldn't I just field that same study for a third of the cost with synthetic data, where I have a comparable level of trust, and maybe even more transparency? So I think anyone who's on the panel side or is an audience provider needs to really take that seriously and think about it. And the last point I'll make is that I think it does help us with that 20-minute-survey problem. What questions do you actually need to ask humans? What questions can you answer with the data that you've already collected? Then go deeper with participants where you need to. Everyone should be thinking about their synthetic data strategy. So think about that combined approach, rather than just saying the solution to everything is to field a questionnaire. I'd like to say that as a researcher, and hey, it'd be good for my business, but it's just not reality. If we think that way, it's shortsighted.

Matt:

That's a great perspective. Well, Roddy, thank you so much again for your time today. It's been a great conversation, and I've really enjoyed digging into this hairy topic of data quality and fraud.

Roddy:

I appreciate the time, Matt and Stephanie. Always happy to have this conversation.

Stephanie:

Awesome. Thanks for joining us. The Curiosity Current is brought to you by AYTM.

Matt:

To find out how AYTM helps brands connect with consumers and bring insights to life, visit aytm.com.

Stephanie:

And to make sure you never miss an episode, subscribe to The Curiosity Current on Apple, Spotify, or wherever you get your podcasts.

Matt:

Thanks for joining us and we'll see you next time.

Resources