The ICG 00:00 Good morning everyone and welcome to the ICG's Digital Academy webinar series. I'm Sally Allsop, the committee member responsible for our webinar programme. This morning we're delighted to welcome back ICG member Mike Stevens. Mike is the founder of Insight Platforms, the leading online directory, learning and event site for modern research and insights. He is also a leading consultant, advisor and writer at the intersection of technology, research and analytics. He has over 25 years' experience in insight software and consulting firms, including Vision Critical, where he led the EMEA region, and Kantar, where he managed regional business units and global accounts. Mike's consultancy, What Next Strategy and Planning, helps client teams, agencies and tech firms to develop skills, adopt technology and implement process change. This morning Mike will be talking to us about developments in AI that will have a big impact on research and researchers going forwards: in particular synthetic data, AI agents, multimodal assistants, empathetic language models and asynchronous video. As usual we'll have a Q&A session at the end, which will be your opportunity to get involved, but if you do have a question as we go through, Mike is happy to take questions along the way; otherwise write your questions in the chat box and we'll run through anything that's in there at the end. This webinar is generously sponsored by Qualzy. The platform supports all your asynchronous project needs, including diary tasks, UX research, video blogging, international projects, long-term communities and much more. Qualzy are passionate about quality insight and love working with quality researchers; they look forward to making friends with more ICG members and helping you to deliver more for your clients. Before we begin, you can see we've got a packed agenda for June and the beginning of July, including our summer party, so please do sign up for these webinars and events. And don't forget that we're able to offer great-value sponsorship opportunities; just get in touch with Lucy if you have any questions about that. So now I'm going to stop sharing and hand over to Mike.

Henrique Savelsberg 02:39 Thanks, Sally. Let me just find the Share button... Share Screen... okay, I'm sharing the wrong thing, hang on. Right — nobody needs to see what I'm listening to on Spotify this morning. Okay, can everybody see that properly?

The ICG 03:02 Yes, that looks great.

Henrique Savelsberg 03:06 Ben's AI note taker is going to transcribe everything I'm saying, which is a little bit intimidating. Okay. I can see some familiar faces in the audience, so some of this might be covering old ground for you, but there's some new stuff in here too, and hopefully it will be useful for most people. This is adapted from a presentation I've been giving to client teams, so there's maybe a bit of a client angle to some of the implications at the end, but really it's trying to draw together some of the big things that are happening — and there's just a ton more stuff beyond it. I run Insight Platforms and the Academy, where there's a load of free resources you can go and check out, and I also consult with clients under the What Next brand. If you've not seen Insight Platforms, do check it out.
There's a huge directory of tools if you're looking for new things, and all of that is free to use. You can watch all of our old events — there are about 250 to 300 videos, demos and previous webinars you can go back through — and then we've got courses. The paid courses probably aren't what you're here for, but there are free courses on things like social listening, text analytics and online qual.

Right. I'm going to talk about five big trends — buckets, really, and they overlap a little. People call me a veteran these days; I remember the days before we were doing online research, and there have been broadly three waves, with the third underway now. The first was getting things from offline to online, from the late 90s into the early 2010s. Then over the following decade or so there was a ton of integration: you had all these panels, you had everything connecting the data together, you had agile tools like Zapier that integrated all of it. That's been massive — so many new tools have made quant research in particular quicker and cheaper. Now we're into this 'AI-powered insights' wave, whatever that means. To be honest, in a couple of years' time I don't think we'll even be using the language of AI, because it will just be ubiquitous. I'm putting together a market landscape — a visual guide to all of the AI tools out there — and I'm already up to about 150 companies in about 10 different categories. It has just exploded; there's so much going on.

The things I want to talk about are: video; empathy; one I struggle to name — 'generative innovation', which isn't quite the right language, but you'll see what I mean when we get there; synthetic and augmented data, which is massive; and then the rather banal business of automating workflows, which is actually where a huge amount of the value is going to come from.

Video first. There's just so much more use of it. You probably know Voxpopme and a bunch of the other video platforms; their use has exploded because people can now pull video into one place, search it, clip it, create outputs, and in some respects combine qual and quant. The way a lot of vendors talk about it — 'qual at scale' — is overdone and sticks in the throat a little, but there is a lot of adoption of video tech to get quantitative feedback: hundreds or thousands of bits of video feedback, with the ability to work with them really quickly. We talk about that in qualitative terms, but really it's just a different unit of quantitative data — new types of quant data. Microsoft has done this a lot — there are a couple of case studies you can watch on the Insight Platforms site — with B2B audiences and developers. They've done things like collect 600 video interviews over a weekend and then distil and report on them within about 36 hours; they've got the ability to condense that material really quickly using AI. Mars Petcare is another one:
there's an example with a platform called Nit where, again, they've collected hundreds and hundreds of bits of video feedback and been able to synthesise and analyse it quickly. There are loads of new applications for this as well, and some are really smart. I can't tell you who this is, but one of the big soft-drinks companies had something like 8,000 bits of video of people consuming the product as they were drinking it. They wanted to identify a physical response people had when they drank this particular drink, because they want to be able to say 'this drink does more for you' — I can't give too much away because it's an active campaign — 'you drink it, it does this to you, and it's better than anything else in the category.' What they did was build a custom machine-learning model to identify that specific physiological response in the video. So there's a ton of different types of data you can use to get insights from video.

The AI components really driving this are, first, natural language processing: quickly analysing the content of language and text, transcribing what people are saying, translating it so multi-market studies can be done quickly, and then the text analytics that gets at themes, sentiment and topics so you can understand what's going on. That's really big, but it's not just language analytics; there's the computer-vision side as well. Models can now detect the content of video. If you've worked with the Indeemo platform, they've got some really smart tools that recognise every tiny bit of text in an image — so if somebody is shopping in a supermarket you can say 'show me all of the video where the word chicken was on screen' while people shop for dinner. If you're doing UX research you can say 'show me all the points at which the Buy Now button was visible on camera', then zoom in and figure out what people's behaviours were in a user test. And it's not just text: recognising the content of images means things like 'show me every clip where there's a drink in the frame', or where there's a group of people rather than an individual. You can even have gesture detection — show me where someone is moving quickly, or throwing something. There was a company, sadly no more, called Big Sofa — I can't remember what they became — that did some really smart work with big consumer-goods companies, using fixed cameras in kitchens to capture video every time somebody unloaded the dishwasher; once you've got that data you can say 'show me the people who load it with these characteristics' and really dig deeper. So video is going to be huge.
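As a rough illustration of that 'show me every frame where the word chicken is on screen' idea, here is a minimal sketch in Python that samples frames from a video, runs OCR on each, and returns the timestamps where a keyword appears. It assumes OpenCV and pytesseract are installed; it's a generic illustration, not how Indeemo or any platform mentioned here actually works.

```python
import cv2          # pip install opencv-python
import pytesseract  # pip install pytesseract (also needs the Tesseract binary)

def frames_mentioning(video_path: str, keyword: str, every_n_frames: int = 30):
    """Return timestamps (seconds) of sampled frames whose on-screen text contains `keyword`."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    hits, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            # OCR on a greyscale copy of the sampled frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray)
            if keyword.lower() in text.lower():
                hits.append(frame_idx / fps)
        frame_idx += 1
    cap.release()
    return hits

# e.g. frames_mentioning("shop_along.mp4", "chicken")  # hypothetical file
```

A production system would use a proper video-OCR or object-detection pipeline, but the loop shows the basic mechanic of turning video into searchable data.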
This is the emotion AI stuff. It's been around for a while and a lot of people slag it off, but there are physical responses in the face that reflect things which correlate with engagement. There are things like smiles that signal active engagement while you're watching something. You can challenge the Ekman universal emotions model if you like, but people are definitely finding a lot of value in recognising changes in facial expression, even if those universal models have been a bit debunked.

We tend to think of surveys as this immutable thing, but they're a relatively recent technology, and the way we do surveys is often constrained by previous technologies. Embedding video in surveys makes them more human and more natural, and there's going to be much more adoption of that because it's getting cheaper, it's getting easier, and everybody has a smartphone. There's going to be a lot more of this video research at scale — I don't really like calling it 'qual at scale'; it's more quant with new types of data. So I think there'll be much more defaulting to video in research designs, for quant as well as for qual.

Okay — and having just said I don't like the 'qual at scale' framing, I'm now going to talk about empathy at scale, so you can point out the inconsistency. This is an old slide, and everybody's been banging on about it recently. If you don't recognise it, it's part of the promo for Her, the Spike Jonze film: Joaquin Phoenix's character gets a new operating system and falls in love with it — the operating system, Samantha, is voiced by Scarlett Johansson. Keep that thought for a minute, because if you've not seen the film, think about Scarlett Johansson and OpenAI; I'll come back to it. The film is actually brilliant, and devastating. You wouldn't think a film about a guy falling in love with a computer could be so meaningful, but it has a lot of big themes that we're now seeing play out in the wider world, and it's about ten years old.

This, though, is from a couple of months ago: the whole — it's horrible language — 'AI girlfriend' industry. People are forming relationships with AI companions; lonely people are connecting with them in a very, very dependent way. In fact, one of the very first AI experiments was a psychotherapy advice bot in the 1960s called ELIZA, and what the academics behind it found straight away was that people formed emotional bonds with it — sorry, am I making images pop up on screen? I'm always doing this with my hands. This 'Eliza effect' is that we anthropomorphise: we give these AI tools human-like characteristics, we talk about them in human-like terms, and we grant them the capacity to have these sorts of human attributes.

That's all a bit highfalutin, but look at how it's playing out in some of the newer research tools. This is one example, from User Insight, and there are so many of these things — I'm literally up to about 25 different AI moderator chatbot solutions documented for this market map.
There really are so many of them, but this is another one, for UX research. You probably know inca, which is more of the conversational AI approach — embedding it into surveys — which is cool; there are lots of different flavours of this. I didn't catch the session Tellet did for the ICG last week or the week before, but we do have a video of a session they did at our Qualitative Insights Summit a couple of weeks ago, and it's brilliant, because Greg is very down to earth about how this works — the limitations as well as the opportunities — and it's well worth a look. Tellet is another of these chatbot interviewing tools where you can use voice or text, and they're getting better all the time: they interact, they follow up with probes that respond to what you're saying, and there's a lot of development coming in this area.

That's the chatbot interface. Now imagine turning that into a two-way voice dialogue. The way most of these work now is that the question is posed in a messaging app and you can speak your response, which is natural enough, but there's a two-way-dialogue upgrade to this coming. This is a platform called Hume — more of an emotion AI toolbox than a usable research tool at the moment — but just have a look at this, probably from a couple of months ago: 'Hey there, can we use you for qualitative research interviewing?'

Unknown Speaker 16:36 Well, look who's interested in little old me. I may be just a voice, but I've got personality to spare. As for qualitative research, I'd be happy to lend a hand — or rather a voice. I can pick up on all those nuanced expressions and really dive deep into what makes people tick. What did you have in mind?

Henrique Savelsberg 16:55 Okay, so what's interesting about this — and it predates the OpenAI voice model launched a few weeks ago — is a few different things. One is two-way voice: the AI is picking up on the content of the question and responding to it. There's tonality and emotion embedded in the response, so as a participant in research, if this were a virtual moderator, I'd respond much better. Then you've got this emotional analysis. Here it's applied to the bot, which is a little silly because the bot is generating it, but it's also real-time analysis of the respondent: if I were the one responding, my tone of voice and the content of my language would be mapped for emotional responses at the same time. So what you're building is a much more sophisticated real-time interaction. Think about a real moderator, who picks up on tone of voice, hesitation and the content of the interview to make emotional judgments and responses — that's what's starting to emerge in these tools. Now, I guess quite a few of you will have seen this next one. I'm not going to play the whole thing, but I'll explain in a moment why it moves things up a gear. 'Hey, how's it going?'

Unknown Speaker 18:17 Hey there,

Unknown Speaker 18:19 it's going great. How about you? I see you're rocking an OpenAI hoodie.

Unknown Speaker 18:23 Nice choice.

Unknown Speaker 18:24 What's up with that ceiling though? Are you in a cool industrial-style office or something?
Unknown Speaker 18:29 Well, can you take a guess at what I might be doing based on what I'm showing you here?

Unknown Speaker 18:36 From what I can see, it looks like you're in some kind of recording or production setup. With those lights, tripods and possibly a mic, it seems like you might be gearing up to shoot a video.

Henrique Savelsberg 18:48 Okay, I'm not going to play the whole thing because I'm sure quite a few of you will have seen it. This is the launch of the voice mode — not public yet, but apparently on its way — with OpenAI's latest ChatGPT model. The difference is phenomenal, because it condenses what was previously about a five-second delay — if you've got the ChatGPT app and ask it a question, it sits there 'thinking' for five seconds before responding — down to something like a quarter to a third of a second of latency. So you've got real-time dialogue happening, which is one thing. Another is the emotion and intonation in the response. They totally ripped off Scarlett Johansson's voice for this, and she was absolutely right to go after them — some people think it doesn't sound like her; it absolutely does — but anyway. The other thing that is massive, when you think about it as a qualitative research application, is that this is what's known as a multimodal model. Multimodal basically means it can see and interpret images as well as generate text and respond to voice and text inputs. On the right there, the front-facing camera is feeding the model with visual stimulus in real time. So in a research context, imagine a model that's honed for, say, a virtual accompanied shop: you take it into a store and it says, 'Hey, what about this? What about that? At that fixture, did you notice this?' Or maybe it's more of an emotion-recognition model that combines some of that facial expression analysis — putting image analysis together with everything else.

I've called all this empathy at scale. The language around it is a bit of a nonsense, but you can see we're starting to close the gap between surveys that are quant and dull — lots of grids, terribly inhuman — and the real humanity of a one-to-one depth interview that genuinely connects researcher and participant. You're starting to replicate some components of that with these tools. You may feel a whole spectrum of responses to that, and so do I, but the technology is developing to the point where we'll be able to simulate, much more effectively and at scale, the nuance and the skills of real researchers. By the way, I've got the chat open, so if people want to post comments or questions as I go through, feel free.
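To make the multimodal point concrete, here is a minimal sketch of sending a single frame from an accompanied shop to a vision-capable model and asking it to behave like a moderator. It uses the OpenAI Python SDK; the model name and the prompt are illustrative assumptions, not a description of the demo above or of any research platform.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_shopper_frame(image_path: str) -> str:
    """Ask a multimodal model what it can see in one shop-along frame and what to probe next."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "You are moderating an accompanied shop. Describe what the shopper "
                         "appears to be looking at and suggest one follow-up question."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# e.g. print(describe_shopper_frame("fixture_frame.jpg"))  # hypothetical image
```

A real tool would run this continuously over a video and voice stream, but the single-frame call shows the basic mechanism.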
This empathy-at-scale idea combines that multimodal vision, text and voice with the large language models we now know and love — the GPT-type models — plus emotion AI that can detect changes in tone of voice or even facial expression, plus generative video. Generative video is going to be huge as well: from a string of text you can currently generate 10-15 second clips of compelling, realistic-looking video, and when that's driving avatars they are going to look much more natural. You'll get real-time interaction with something that looks like a real human on screen, and again, we're prone to give these things the attributes of human interviewers because we're predisposed to anthropomorphise them — there's a lot of human psychology travelling alongside the technology.

To bring it down to something horribly prosaic: this is going to put cost pressure on generating insights with qualitative depth. There are no two ways about it. People are going to say, 'The models are really good; we can get levels of depth that are good enough for our use case', and that will put pressure on even good-quality qualitative research. So it's something to bear in mind: where are we adding value as research experts? Right — sorry, that's not meant to be depressing.

Next one. I don't really like the language around this either, but there's something here about powering innovation much better with predictive models. Actually, let me pause and take a question — thanks, Victor: which is the bigger disruption or threat to traditional qual fees, automated moderation or synthetic participants? Definitely automated moderation, in my view. I don't think synthetic participants are particularly threatening to really good qual, because really good qual explores for nuance; it picks up on unexpected areas and drives into them. I've yet to see anything come out of a synthetic model that is a surprise, a spark of insight, a 'wow, I hadn't thought about that' — it's all confirmatory, makes-sense stuff. I'm sure they'll get better, but to me the risk with the synthetic stuff, which we'll come on to shortly, is that it produces central-tendency results, so you can miss the really interesting material at the margins. The reason I think automated moderation is more of a threat is that you're still getting to human participants — reaching people at much lower cost, across categories and geographies — and you're getting much deeper richness from them. So, for my money: be less fearful of the synthetic stuff, and think about how to harness all of these conversational models for qual research rather than feeling threatened by them.
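Since automated moderation comes up repeatedly here, this is a minimal sketch of the core loop behind a chatbot interviewer: a language model reads the participant's last answer and generates one follow-up probe. It's a generic illustration using the OpenAI Python SDK — not how Tellet, inca or any named tool works — and the model name and prompt are assumptions.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Running conversation state: system prompt plus alternating participant/moderator turns.
history = [{"role": "system",
            "content": "You are a qualitative research moderator. Ask one short, open, "
                       "non-leading follow-up probe based on the participant's last answer."}]

def next_probe(participant_answer: str) -> str:
    """Append the participant's answer and return the model's follow-up probe."""
    history.append({"role": "user", "content": participant_answer})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=history,
    )
    probe = reply.choices[0].message.content
    history.append({"role": "assistant", "content": probe})
    return probe

# e.g. print(next_probe("I usually buy whichever energy drink is on promotion."))
```

The commercial tools layer voice, tonality and guide-following on top, but this request/response loop is the underlying pattern.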
What we're talking about here is a whole load of different applications of predictive insights: things like predicting which flavours are going to work well with particular demographics and groups. This is AI Palette — there's a lot of this in consumer goods, for new food and beverage products. They work with lots of public data, social data and news stories, all collated into one place, to say, 'okay, this seems to be on the up.' You might know Black Swan Data with Trendscope and the work they've done with Pepsi — a similar sort of thing: predicting where a category is going, and then using that to generate concepts that can be tested. Gastrograph is another one, with predictive sensory insights: being able to say that this particular flavour combination is likely to work well for this particular target audience in Japan, or wherever. Then there's using generative tools — image generation, language generation — as part of the innovation process; this is where a lot of in-house innovation teams are starting to play. This company, PreLaunch, is basically a concept generator: you give it a little bit of input, a prompt, and it generates both the image and the copy that can be tested as a concept with participants. You can build these things for yourself at relatively low cost, and there are more automated ways of doing it too.

There's a bigger point here. A whole lot of predictive analytics opportunity is coming, and if you can make these predictions, what does it mean for quant research, where historically we've done a lot of concept testing? The data you get from these predictive tools is currently broadly consistent with the benchmarks from concept tests. So what does that mean for quant concept testing? What I think is likely to happen is that a lot of the best practice — the 'don't do stupid stuff' — will get embedded into these models, so you don't need to keep researching the same questions over and over again. Do we think people will like these flavour combinations, when that has effectively been tested a million times already? You just don't need to do it. So it will have an eroding effect on certain types of primary quant, but I think primary research will become more about where the real opportunity lies in genuinely new things — the hidden stuff a model can't see, where you can't just look in the rear-view mirror and predict. I also think it will put a lot more pressure on innovation cycles to get things out quicker. There was a piece today about Klarna — that's comms rather than innovation, but they've cut something like 10-20% out of their marketing budgets by using AI designers to create images, AI copywriters and so on. There's going to be the same rush to this stuff in innovation.
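As a hedged sketch of the concept-generator idea — give it a short brief, get back testable copy plus a visual — here is one way it could be wired up with the OpenAI Python SDK. The model names and prompts are assumptions; this is not how PreLaunch or any vendor mentioned above is built.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

def draft_concept(brief: str) -> dict:
    """Generate concept copy and a matching visual from a short innovation brief."""
    copy = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "user",
                   "content": "Write a 60-word new-product concept (name, benefit, reason to "
                              f"believe) for this brief: {brief}"}],
    ).choices[0].message.content

    image = client.images.generate(
        model="dall-e-3",  # assumption: any image-generation model
        prompt=f"Packshot-style product visual for this concept: {copy}",
        size="1024x1024",
    )
    return {"copy": copy, "image_url": image.data[0].url}

# e.g. draft_concept("A low-sugar sparkling drink for gamers who dislike energy-drink flavours")
```

The output would still need a researcher's edit and a proper test, which is rather the point of the section above.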
And then, after the rush, people will realise that a lot of it is actually pretty poor and not that interesting, and there'll be a bit of a pendulum swing back as we find out where the limitations really are. We don't know where those limitations are right now; that will come out over the next couple of years, and people are probably going to rush too quickly into synthetic data.

Linda says: 'super exciting, also super daunting — worry about digital exclusion.' Yes, absolutely: not hearing from people who aren't engaged, people not being represented. I'm guilty of being part of the 'look at all this cool new stuff' crowd, but I'm also really aware that there's a whole load of reasons why the brakes get put on things, and then — oh look, it's not actually as cool as you thought. I think of it like self-driving cars. In 2015 Elon Musk said self-driving cars would be here by 2018 and would take care of everything. Goldman Sachs and McKinsey did all this modelling — something like 25,000 professional drivers losing their jobs every month; this was 2016, 2017 — big headline stuff, and everybody panics. And yet there are more people employed in driving now than there have ever been. Yes, these things are assisting and helping, but we haven't got fleets and armies of robotaxis. In fact, in the last week there's been far more whistleblowing about the risks and dangers of Teslas in that self-driving mode driving into trains and so on. So that's ten years down the line from the hype, and we're at this peak of inflated expectations — I'll show the whole curve in a bit. So: look at all this stuff, see the opportunities, but don't freak out. That's the underlying message for me.

Synthetic data, then. There are lots of different takes on what this is; it's not a monolithic thing. It's really about using existing data to create digital twins, or predictive models of how people behave. In some cases I don't really understand why you need a synthetic participant at all, because the AI can just give you the prediction — you don't need to go through the old paradigm of having virtual respondents. The way I think about it is a bit like predictive eye tracking. Think about eye tracking with a webcam or with glasses: if you do enough eye-tracking projects, you can build a predictive model — we know from all our experience that people look at faces, people look at the baby, people read the headline, people look at the red button on the website. So if I upload an image, the model can predict where most people are going to look in that ad. It's a predictive model based on lots and lots of previous primary research. Dragonfly AI, aimpower — there's a bunch of companies that do this — and a bit like the predictive innovation stuff I was talking about earlier, it means you don't always have to do the primary research.
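The general pattern behind this kind of prediction — learn from lots of past primary research, then score a new stimulus without fieldwork — can be sketched very simply. The file name, features and target below are made-up placeholders purely for illustration; this is not how Dragonfly AI, aimpower or any named tool works.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical training set: one row per past concept test, with coded creative
# attributes and the purchase-intent score that concept actually achieved.
past_tests = pd.read_csv("past_concept_tests.csv")  # placeholder file name
features = ["has_face", "has_baby", "headline_length", "price_shown"]  # illustrative only
X, y = past_tests[features], past_tests["purchase_intent"]

# Fit a simple model on the historical results and check how well it generalises.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("Cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())

# Score a new, untested concept from its coded attributes instead of fielding a study.
new_concept = pd.DataFrame([{"has_face": 1, "has_baby": 0,
                             "headline_length": 7, "price_shown": 1}])
print("Predicted purchase intent:", model.predict(new_concept)[0])
```

The value, as argued above, depends entirely on how much relevant past research sits behind the model and how novel the new stimulus really is.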
With a lot of this, there are cases where primary research is just going to give you a really boring result anyway, and it's more about safety and not getting it wrong than about finding an opportunity. For me, synthetic data is really artificially generated data. It's used a lot in banking and in software testing, where you want privacy-compliant data that looks like real humans, and we're starting to see it much more in market research. This, I think, is a public GPT — if you've got the paid ChatGPT version, have a play with it. It basically generates an individual user persona that you can interact with: you give it virtual attributes, you say 'you're a buyer in this category, you have these features', and then you have a conversation with it. Again, though, I defy anybody to be really surprised by anything that comes out of these.

Then there's the more scaled version. This is a company called Yabble, and they have this 'augmented audiences' thing. What they do is train on data they go out and grab fresh for a project — it might come from search data, social media, other datasets, proprietary datasets — so it isn't just made up out of the large language model. The language model, something like ChatGPT, is really just the way you interact with it; underneath, it's drawing on data that's more robust and more sensible. There are a bunch of articles on Insight Platforms you can go and check out — an explainer on synthetic data, some top tools — and last week or the week before we did a webinar with Yabble, which is quite good if you want to watch the video and see what Doug is talking about.

All of this is evolving, so there is no definitive 'this is synthetic data, this is what you can use it for and this is what you can't' — it's all in progress. The risk, obviously, is that you've got marketing people saying 'why do I need to pay for primary research, why do I need to wait for these slow researchers, I'll just go and use the synthetic stuff.' People will over-rely on it, they'll make stupid decisions, there'll be disastrous cock-ups — somebody will launch an ad campaign that falls flat. But to be honest, you can do that with real research; Pepsi and Apple have both managed it in the last couple of years anyway. So yes, we'll see some of these things create cock-ups, but there is good application for synthetic data in some areas: if it helps support a decision, if it means you've got data you can more or less rely on as part of a decision-making process, rather than shooting in the dark. There's a lot coming over the next couple of years on synthetic and augmented data.
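A minimal sketch of the persona-GPT idea described above — a persona defined entirely in a prompt, which you then interview — assuming the OpenAI Python SDK. The persona attributes and model name are invented for illustration, and, as noted in the talk, answers generated this way come from the language model rather than from real respondents.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Hypothetical persona definition: entirely made up, purely to show the pattern.
PERSONA = ("You are 'Priya', a hypothetical persona: a 34-year-old category buyer for a UK "
           "grocery retailer, price-sensitive, sceptical of sustainability claims. Stay in "
           "character and answer as she plausibly would, in one or two sentences.")

def ask_persona(question: str, history: list) -> str:
    """Ask the synthetic persona a question, keeping the conversation history."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "system", "content": PERSONA}] + history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# history = []
# print(ask_persona("What would make you list a new soft drink?", history))
```

Grounding the persona in real project data, as the augmented-audience approach above does, is what separates a plausible-sounding answer from one you might actually lean on.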
Right, I'm probably going on a bit long, so: workflow automation. Sounds boring, doesn't it? If you think about what's happened in research — I'll rush through this bit — there are broadly three stages to a project, and what we've done quite effectively over the years is squeeze the middle bit, the data collection: you can do a thousand surveys overnight. Less so in qual, but particularly in quant. What's coming now is that we'll squeeze the front and back ends of the process as well, because a lot of these tools help with that. For example, if you want to generate ideas for a new project, you can work with creative whiteboarding tools — 'give me some ideas for survey questions' — and instead of going into a meeting and trying to knock it out together, you can start with a language model and it will give you a mind map or a list of questions. You can do this in a load of survey tools now. This is Pollfish: 'give me some concept test questions' for something I made up — and I have to say, what came out was really awful. That's not meant as a dig at Pollfish specifically, but the output here was just dreadful. In principle, though, especially if you're not a very experienced researcher, as these things get better you'll be able to speed things up a lot. And of course there's Qualzy — I didn't know they were sponsoring this webinar; this example was already in the deck anyway — if you work with the Qualzy platform you can do some decent things here. This was 'give me some ideas for projective techniques for exploring a new concept', and it comes back with a sensible one, then another sensible one — decent ideas you can go and run with. It's like having another colleague you can bounce things off. So there are good ways in which that planning, setup and design phase will condense and become easier.

Then there's the back end. Most of you probably aren't going to be familiar with Kolu — it's embedding into a bunch of different AI tools, and they seem to be doing great. And this one I did myself a few months ago as a proof of concept, just to figure out: can you upload a survey dataset into ChatGPT, with its data analyser, and get sensible results out of it? The answer is yes, absolutely you can, with some limitations. This is a custom GPT I created. There's a bunch of different talk-to-your-data tools being developed and coming out; this one isn't research-specific, but it gives you a conversational interface for analysing data — if you're not an analyst but you know what questions you want to ask, it's pretty good. There's also one called Experience, which I don't think I've put on here: another survey analysis tool, built specifically for research data.
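A rough, DIY approximation of the 'talk to your data' idea: summarise a survey file and let a language model answer plain-English questions about that summary. This is not how ChatGPT's data analyser or any named tool works internally; the file name, model and prompt are assumptions, and answers should be checked back against the actual data.

```python
import pandas as pd
from openai import OpenAI  # pip install openai pandas

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask_survey(csv_path: str, question: str) -> str:
    """Answer a plain-English question using only a statistical summary of the survey file."""
    df = pd.read_csv(csv_path)  # hypothetical file: one row per respondent
    summary = df.describe(include="all").to_string()
    prompt = (f"Here is a statistical summary of a survey dataset:\n{summary}\n\n"
              f"Question from a researcher: {question}\n"
              "Answer only from the summary, and say clearly if it cannot be answered from it.")
    return client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

# e.g. print(ask_survey("brand_tracker.csv", "Which age group rates the brand highest?"))
```

Passing a summary rather than raw microdata keeps the prompt small, but it also limits what the model can legitimately answer — which is exactly the kind of limitation mentioned above.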
And then on the reporting side — this is just a thing I'm playing around with. It doesn't always work very well, but I quite like it because I can just talk into it: here's my idea for a blog post, or for a presentation, or whatever. What it does is take the transcript and turn it into a structured output automatically. It's not always great — sometimes it's a bit hit and miss — but it helps to structure your thoughts, and I've used it as a basis for presentation planning and for articles. You can see this one: that's the transcript of me just talking into it, and this is how it tidied it up. I then literally took that output and put it into Gamma, which is an online presentation tool, and it created a vaguely plausible online presentation. The way these things are going, you'll be able to string together much better automation of the reporting process. Imagine that hadn't been me drafting a blog post but me downloading my top-of-mind thoughts having just watched some groups: I spend five minutes talking into the app, the app structures it, then feeds it into a presentation generator that I tidy up, and within 20 minutes I've got something I can share with a client. That's where I think the compression of the reporting stage will happen. In theory it means you can run more projects concurrently, and it should also mean that people who are less skilled in research can do more of this work. So those are the trends, and I'll come to some of the impacts in a moment.
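A minimal sketch of that dictation-to-outline step: hand a rambling voice-note transcript to a language model and ask for a structured outline you could drop into a presentation tool. The app used in the talk isn't named; this is a generic illustration using the OpenAI Python SDK, with the model and prompt as assumptions.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

def transcript_to_outline(raw_transcript: str) -> str:
    """Turn rambling spoken notes into a structured report/presentation outline."""
    return client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[
            {"role": "system",
             "content": "Rewrite the user's spoken notes as a structured outline: a title, "
                        "3-5 sections with headings, and bullet points under each. Keep only "
                        "what was actually said; do not invent findings."},
            {"role": "user", "content": raw_transcript},
        ],
    ).choices[0].message.content

# e.g. print(transcript_to_outline("So the groups last night... three things really stood out..."))
```

The instruction not to invent findings matters: the compression is only useful if what comes out is still traceable back to what the researcher actually said.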
Let me just take Victor's question first: is anything currently capable of synthesising data by scraping publicly available sources to build buyer personas, rather than relying on a dataset? Yes, absolutely. I don't know if you're familiar with John Lombardo, who was the LinkedIn B2B Institute guy — he and his partner (Peter somebody, I forget) have built exactly this, and if you drop me a line afterwards, Victor, I can connect you. It's a B2B synthetic data product, basically generating personas and data for different categories. One big caveat for me: it isn't scraping public information on the fly; it's generated entirely out of the language model. But in theory you could generate a lot of this yourself, because the context window — the amount of material you can upload into models like Claude, Gemini or ChatGPT — is now huge. You could upload a whole load of published data, PDFs, interviews, whatever, and then say 'use this data to simulate five different personas that I can interact with for this particular niche category.' So there are ways to do it for yourself — but yes, I'll follow up afterwards.

Oh look, bonus content. If you know Scott Galloway, this is something you can go and play with — it's literally just prachi.ai. What he's done here — and I think this is going to be big for expert content; a better example is probably the LinkedIn founder Reid Hoffman, who created a digital video twin of himself — is exactly the same principle: 'I've trained it on all my books, I've trained it on all my blog posts, it adopts my persona and tone of voice, and you can ask it questions as if it were me.' This is the one Scott Galloway did. Now imagine you're working in-house in an insights team, or at an agency that has someone who's a brand guru: they're an expert, they've got all this content, they've created all these presentations. Feed that in and you can have proliferating versions of your brand guru. The same goes for insights teams — and I don't want to do a disservice to anybody who is one or has good friends who are — but some people get retained in insights teams because they're the corporate memory: not because they're fantastic at their job, but because they've been there so long they're the only ones who know what's going on. If you can distil a lot of that knowledge into a virtual team member that other parts of the business can talk to and interact with, that's potentially quite a valuable proposition.

There's a company here called Glimpse — I've just lifted this from a couple of their videos, so don't worry too much about the detail — and there are two principles in how they're using AI. It's a surveys tool, but in the reporting there's an AI interaction where you can have a conversation with a virtual version of a respondent. You can basically ask them questions as if you were doing a follow-up study — imagine you wanted to do a follow-up deep-dive qual with a particular respondent from the quant study — about whatever has been enriched around the core survey feedback using a language model. (There are a bunch of questions about this; you can tell I'm not that close to it.) The same principle applies to chatting with a segment: you've done a big study, you've got a thousand respondents, you've segmented them into six groups, and you can interact and chat with each segment of data — how might they respond to a concept, tell me more about your preferences in this area, that sort of thing. So that's Glimpse: talk to your data, talk to your virtual personas.

AI agents. If you go looking for a definition of an agent you'll find five or six different ones — nobody really agrees on what these things are — so I'll give you my version; we've got an explainer on the website, 'what are agents for research'. The way I think of agents is putting together a bunch of different expert AIs. Imagine you've got one AI that's really, really good at asking questions for qual research, another that's really good at critiquing data, another that's good at generating images — a set of highly specialised, separately trained AIs. This is an approach to generating concepts, pitched at big CPG companies like Pepsi: different AI tools that connect together, work as a team and act semi-autonomously. You give it a brief — 'I want to generate some concepts for this target audience' — and it gathers all the research it knows about from internal systems and public sources and comes back with the big three needs; that's the research AI. It hands that over to a concept-drafting AI, and the drafted concepts then go to a kind of editor or critique AI — 'I don't like that; fix that.' That's what people call an agentic workflow.
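A hedged sketch of that agentic workflow — three 'specialist' model calls chained together: a research step, a drafting step and a critique step. It's a generic illustration of the pattern, not how Zappi or any vendor implements it; the model name and role prompts are assumptions.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

def run_agent(role_prompt: str, task: str) -> str:
    """One 'specialist' step: a system prompt defining the role plus the task to perform."""
    return client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "system", "content": role_prompt},
                  {"role": "user", "content": task}],
    ).choices[0].message.content

def concept_pipeline(brief: str) -> str:
    """Chain research -> drafting -> critique, passing each output to the next step."""
    needs = run_agent("You are a consumer researcher. List the three biggest unmet needs "
                      "relevant to the brief, with one sentence of reasoning each.", brief)
    draft = run_agent("You are a concept writer. Draft one product concept (headline, benefit, "
                      "reason to believe) addressing these needs.",
                      f"Brief: {brief}\nNeeds: {needs}")
    final = run_agent("You are a critical editor. Point out weaknesses in the concept, then "
                      "return an improved version.", draft)
    return final

# e.g. print(concept_pipeline("Affordable high-protein snacks for commuting office workers"))
```

Real agent frameworks add tool use, memory and the ability to loop until a quality bar is met, but the hand-off between specialised roles is the core idea.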
There's a video on the Zappi website that brings this kind of workflow to life — it's got the world's most horrible soundtrack, but you can watch it on mute. And we've got a webinar next week on Insight Platforms — they're all free if you want to sign up — with a company called Varuvai, which again has built this sort of agent-based approach to qual research. So that's agents; they're going to be big.

Right — we can either stop for questions or I can crack on with some of the implications. I know this has gone on longer than I thought, so I'll crack on unless people post questions. I'll go through these quickly: more work; more people doing research; jobs are going to have to change; agencies are going to be disrupted; and 'don't freak out' is number five. Did you want to pause, Sally?

The ICG 47:32 No, that's fine — I was just anticipating coming in. Carry on, and then we'll have five to ten minutes at the end to answer any questions.

Henrique Savelsberg 47:42 This is a bit longer than I thought it was going to be; I'm probably ranting a bit. Right: increasing volumes. ESOMAR runs this Users and Buyers survey — this is the buyer side of it. I forget the exact wording of the question, but it was basically: did your workload increase last year, and do you think it's going to increase or decrease next year? A bunch of people last year said it was going to increase, and now an even higher share are saying it's going to increase next year. You can tell I'm a bad researcher — I can't even describe a chart — but the principle is that clients are feeling overloaded and their workloads are increasing. We did a survey as well — I did this with Market Logic Software last year — asking a similar question through the lens of AI: do you think AI is going to increase your organisation's demand for research and insights? A bunch of marketing and innovation stakeholders said hell yes — they basically see their need for insights as constrained by current capacity, and AI is going to open that up. The research and insights teams said yes too, but less so. So AI, in some ways, is like the M25: there's an economic principle of induced demand — if you make capacity available, the capacity gets filled up. Think about the volume of photos taken in the world now that we've grown the capacity.
People take photos all the time. Cloud storage: you just keep everything now, whereas when it was a floppy disk you were very precious about what you saved and what you threw away. It's the same as the motorway: if you increase access to research and insights, you reduce the friction and make it easier for people using these AI tools. The optimist in me says more decisions will be made using customer data, using insights and research. A lot can go wrong with that, but in principle you're going to see more demand for, and more use of, research and insights in organisations alongside these AI tools.

Part of that same story is growing democratisation: more people doing research. If you work in UX research or product research it's already happening — again, a mixed blessing — with designers, product managers and other non-researchers going off, to a certain extent, to do their own user tests and their own interviews. There are a lot of people who are not specialist researchers doing research work. This is from that same survey we did with Market Logic — feedback from stakeholders at big companies, I think more than a billion dollars in revenue or more than a thousand employees — and all of these people say they either sometimes or often do research and insights work themselves: people in marketing and innovation teams doing surveys, doing interviews, that sort of thing. AI is going to turbocharge this democratisation. Hopefully that means more people making more decisions that are grounded in customers and users, but obviously a lot could go wrong: biased designs, biased interpretation of data, straightforward mistakes.

Research and insights roles are going to have to change. This horrified me: the WFA did a survey of about 75 people who hold big insights budgets, and there's one finding in particular — a quarter agree that the insights function has an impact on business performance. That means 75% of them don't think the insights function drives business performance, which is just horrific. So there's a real need for research and insights to evolve away from this traffic-police, vendor-management type of role. The big shifts — and I don't think this is teaching anybody anything new — are: a bigger, more specialised skill set in research and insights around tech and data, because you're managing streams of data and systems in a way you haven't previously; a need for much more strategic impact and clout — commercial, consultative output; and, I think, much more embedded models, where people add value close to the point of demand, with skills embedded in different teams, rather than a choke-point insights function that everything has to go through.

Agencies: there's opportunity here as well as a bunch of threats, particularly if you've got a legacy operating cost model.
Also, the project-based buying framework is one of the worst things agencies have to live with. But there are a bunch of opportunities here too. There's some really smart stuff going on — I've got a draft blog post with case studies of agencies using AI to do very cool things. There's a load of proprietary data out there, and there are new offers that combine professional services and skills with AI tools. The thing is, though, AI is going to level the playing field: you can be a one-person agency and have the potential to punch at a much higher level using these tools, because they're accessible to everybody now. I think the winning agencies will build some proprietary intellectual property around the tools, they'll have real depth of expertise in categories or in particular types of audiences, and they'll be able to connect lots of different sources of data. It won't be enough to be a great surveys agency; it will be 'we can see across all of these different sources of data — surveys, social media — and pull it all together.'

Right, the last one: don't freak out. I mean it — seriously, don't freak out, because people like me will say 'look at all this cool stuff', but the reality is that organisations take a long time to properly adapt to it, and the stuff is never as good as you think it is; you've got to try it to figure out where the holes are. If you look at the latest Gartner Hype Cycle, generative AI is right there at the peak of inflated expectations. Everyone goes 'oh my god' — remember the 25,000 drivers supposedly going out of work every month for the next ten years? That's the kind of nonsense that gets talked in this context. There's a really good framework from Scott Brinker, who runs thought leadership for HubSpot, called Martec's Law, and it applies to all sorts of things: organisations, particularly large companies, change very slowly; technology changes very, very quickly; and organisations can only absorb it on a kind of logarithmic basis. So even with all this exponential change, and everybody saying 'we're going to be an AI business', there is going to be time to adapt. So yes, do stay on top of it. We don't cover everything, but we try to catch a bunch of it — if you're not a subscriber, we send a newsletter out every week with new tools, new product launches and what's going on; it's a very quick read and covers maybe five new tools each week. And try stuff — a lot of it is quite easy to get your hands on — and enjoy it. I know it's easy to worry about all of this. Right, that's me done. Sorry — we've only got five minutes for questions.

The ICG 55:42 Does anybody want to ask a question? Otherwise, I have one. We had one earlier — I think Victor asked it. There was a lively discussion by ICG email about the project pipeline being dry this year, with plenty of members lamenting low workloads. How much of this do you think is down to clients trying to use AI in lieu of real researchers?
Henrique Savelsberg 56:11 I don't know — it feels a little bit brutal on the client side at the moment, to be honest. I don't know how much of it is pre-emptively right-sizing organisations, but it seems like death by a thousand paper cuts: insights people on the client side seem to be losing jobs, headcount is shrinking. There's a lot of that corporate, enterprise-level stuff happening, and that might be because the finance and marketing people are saying 'you'll be able to use AI for this' — pre-emptively baking in savings from AI before they've actually realised them. I see a lot of people experimenting with this stuff more than embedding it; I don't really think it's having enough of a disruptive effect yet. I think it's probably more of a psychological spillover — 'we're not sure whether to commit because we think it's all going to change' — and people feeling a bit overwhelmed by all this new stuff. So I couldn't say for certain; some companies are much earlier adopters than others, so there could be a specific effect there. But I'm not really seeing it from clients directly. I've given versions of this talk to some quite big brands' insights teams and the reaction was 'oh my god, we're nowhere on this, we need to get ahead of it.' So there's quite a wide distribution across research teams.

The ICG 57:45 Okay, any other questions? We've got about two minutes left. Nothing?

Henrique Savelsberg 57:51 Sorry, I've kind of spent the whole thing talking — but obviously email me, or drop me a line on LinkedIn, if there's anything you want to revisit or if you want links to go into any of this in more depth. There's a load of material on the website.

The ICG 58:08 Just one last question from me then, Mike, before we go. Clearly there's an awful lot of technology out there and it's all quite overwhelming. If you were to suggest one or two tools that a typical qual market researcher should be adopting or trying out — without signing up for too many different tools that obviously have subscription costs — what would you suggest people start with if they're just beginning their AI journey?

Henrique Savelsberg 58:42 I mean this in the nicest possible way, but it's kind of the wrong question, or the wrong place to start, because the question should always be: what am I doing now, where are my choke points, where do I feel I've got the opportunity to add value — and come at it from there. Going at it from the tools end will feel really overwhelming. Start with 'how am I going to add value to my clients going forward?' and then ask what you can find in that area. So I genuinely don't know — there are, what, 30 people on the call, and I know there's a range of people using qual tools and quant things. Obviously Qualzy, who are supporting this webinar, are a very accessible tool with a great team and good pricing if you want to play around with something for a while, but there are a bunch of other tools out there.
The thing is, don't be intimidated if something needs an annual licence commitment. If somebody says 'we'll only deal with you on an annual licence commitment', then fine, move on, because there are loads of choices in this area. So my recommendation would be: ground it in your needs and in how you think you add value, and if you get knocked back by a company that looks like a good fit, don't assume that's the only option — these things are exploding all over the place and there are lots of people willing to be creative with pricing.

The ICG 1:00:28 Okay, thanks very much indeed, Mike, for your amazing whistle-stop tour of these new trends. Thank you to everybody for logging in today. Do join us for a future webinar — there's plenty in our diary coming up. Thanks, everybody; see you again next time. Have a great afternoon. Bye. Thank you. Thank you. Thanks very much.

Transcribed by https://otter.ai