I asked ChatGPT to write a poem for my son’s birthday. What does the result tell us about the potential impact of generative AI?

13 Jun 2023 | Research & Business Knowledge

Daily predictions of imminent apocalypse are only part of the heady cocktail that is the forecasting of generative AI’s potential impact on work, society and our very existence.  Market researchers, like so many in all fields of public and business life, have spent recent weeks frantically exploring the implications of generative AI’s capabilities, debating its potential impact on our work and whether it represents a threat or an opportunity.

I am only one amongst many who have been experimenting with ChatGPT and other platforms to explore their effectiveness in performing various tasks, with a particular focus on their potential to replace the human element in our practice.  However, I took perhaps a slightly unusual path in this exploration, by asking ChatGPT to write a poem for my son’s 20th birthday card – it may be worth mentioning that I have been creating individual bespoke cards for friends and family for many decades now, as in the examples here:

[Image: examples of previous handmade birthday cards]


Having become highly aware that the ‘rubbish in, rubbish out’ principle is fundamental to the effective performance of natural language AI, I took great care to hold ChatGPT’s hand and give it careful guidance regarding what I sought.  Here was the brief:

Write a 12 line poem for my son Ned’s 20th birthday. He is at university in Manchester, loves to party, has a great group of friends and is very charming.

And the result wasn’t half bad.  I don’t think Simon Armitage has much to worry about, and the poem would have benefitted from a consistent rhyme scheme, but it was clearly on brief and the language used was unexpectedly literate.

[Image: the poem ChatGPT produced]

My intuitive response was instructive.  I was impressed with the overall quality, amused by the language and surprised that, given its general literacy, it mucked up the rhyming scheme.  But beyond that, I realised that it really did feel machine-made.  For all its quality, it lacked the human dimension: the emotion and fallibility that are perhaps uniquely human, traits demonstrated in the poem I then wrote to appear on the inside of my son’s card.

[Image: the poem I wrote for the inside of the card]

So, I was broadly heartened, choosing to see this experience as supportive of my emerging perspective on generative AI and its implications for my practice.

I think there are probably three key aspects of the best qualitative research that make it a process intrinsically dependent on a uniquely human set of qualities.

Empathy and human interaction skills

Effective interviewing, whether one-on-one or in groups, whether face-to-face or remotely over web video, requires an essential ability to connect with people in a way that engenders trust, honesty and productive interaction, something that is incredibly difficult to imagine any machine ever achieving as effectively as a skilled human.

Effective interviewing also requires responsiveness and the agility to adapt in real time to what one is hearing, probing appropriately and developing hypotheses to test ‘on the hoof’.  While we always start any session with a framework for exploration, one of the fundamental principles of qualitative research is that we allow research participants to create their own agenda according to how they are processing the ideas we are discussing, rather than impose upon them a pre-existing structure.

Thus, qualitative interviewing is an intrinsically dynamic, iterative process that changes between and within every session, in contrast to generative AI which, by definition, can only work from probabilities derived from a huge mass of pre-existing data.  For this reason, generative AI will struggle to see significance in the new or to pick up and run with the unexpected.

Content analysis

I hear much about the potential value of AI in the analysis of feedback, with many talking of allowing it to do the ‘donkey work’ in finding the themes in discussion transcripts, as a prelude to the value that humans can then add.

I’m sorry, I don’t buy this at all.  As I have written many times before, qualitative research is not about what people say, it’s about what it means.  Proper qual is not reportage, it is interpretation, using cross-referencing, experience, empathy and imagination to find what lies beneath the surface of what people say and to work out why they say it.

Furthermore, rather like interviewing, analysis is a dynamic process, during the course of which you generate and test new hypotheses, explore the factors that might be driving similarities and differences, and constantly develop your thinking.  I often compare it to peeling the layers of an onion: until you have reached a certain point, you cannot see what comes next, and the insights reveal themselves in stages, often in a non-linear way.


For these reasons, I cannot see myself letting AI do the ‘heavy lifting’ any time soon.  I want to be there at the start of the journey, because only then can you find the real pathway.  There is no point starting three-quarters of the way through the journey if you then find that you are walking the wrong path.

Analysis is the ‘black box’ of qualitative research: the part invisible to clients but where the greatest value is added.  And that added value will always be human value.

Providing actionable direction 

The final critical human element lies in seeing the implications of the research learning and communicating these effectively in a way that clients can embrace and act upon.  While AI is showing the ability to generate wonderful imagery and diagrams to illustrate our debriefs, the value of the debrief content will be down to us.  The creation of a compelling narrative that leads to a credible, relevant and motivating conclusion is a uniquely human skill, once again relying upon a degree of lateral thinking, creativity and experience that the machine can never possess.

I am seriously concerned about the socio-cultural threat that natural language AI presents as its development races so far ahead of humanity’s ability to manage it positively and productively. I am genuinely fearful of the potential consequences of its use by bad actors to deceive and defraud, and our widespread inability to apply a critical faculty to separate truth from fiction.

More prosaically, however, in market research I think it presents a greater threat to practitioners in quantitative than in qualitative research, particularly through its ability to eliminate the need for humans in data processing and in finding patterns in open-ended responses.  In quant, it also presents the risk of fraud and misdirection from ‘bad actors’ using AI to respond to surveys.

For qualitative research, I can see there being a negative impact in the short-term from the application of generative AI as a way of saving time and particularly costs, most notably through automated content analysis.  It could well be that clients see this as an opportunity to bring more qual in-house, as procurement and CFOs seek to remove the human element in the name of ‘efficiency’.  However, I can foresee a bounce back from this as companies learn to their cost that the guidance provided by machines lacks insight, validity and creativity, and that genuine effectiveness matters more than theoretical ‘efficiency’.  It seems probable to me that, in the medium term, the growth of AI will make the creation, management, analysis and reporting of qualitative research by skilled humans only more important.

However, this is all, of course, based on the assumption that AI does not bring the world to an apocalyptic end before next Tuesday.