‘Alexa – What’s the Future of Market Research?’ ASC Conference

21 Nov 2018 | Research & Business Knowledge

“Alexa, what’s the future of market research?” The application of AI and machine learning to surveys

Association for Survey Computing conference, London, 15 November 2018

Overall, a really interesting conference, I thought, although some of the papers were more focused on the fine detail of statistics in general, and algorithms in particular, than your everyday jobbing market researcher really needs to know about. What did I learn? That there’s no commonly agreed definition of AI; that ‘machine learning’ has now overtaken AI as a search term; that there’s a Parliamentary select committee on AI (https://www.parliament.uk/ai-committee); and that the ‘Turing Test’ (developed in 1950 as a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human) maybe isn’t useful any more, because it focuses on how something seems, not what it actually is. (Is a sunflower ‘artificially intelligent’ because it turns to face the sun?)

And what are the implications for qual. and quant.?

There are many applications of AI (whatever it is) relevant to us as market researchers. One is ‘topic modelling’ (or ‘text analytics’): basically taking verbatim comments from whatever source and analysing them – in fact, like coding as we market researchers know it. (Also see this article on the ICG website: https://theicg.co.uk/article/4001269/review-of-text-analytics-software). I’d thought (in my ignorance/fear) that if you bought the right software you could tip in all of your text, turn a handle, and out would pop a nice high-quality code-frame without benefit of human beings. Apparently not. You have to prepare the text (eg make it lower case only; remove ‘stop’ words such as ‘if’, ‘so’ and ‘and’; and do some ‘stemming’, so if you’re interested in strategy you ‘stem’ to ‘strateg’, which catches both strategy and strategic). Humans are also still needed to ask the right questions in the first place, choose the correct algorithms to put the text through, and check that the overall results look sensible. The machines still need a test set of (human-)coded data to learn from, and still have weak spots (eg very short responses, handling typos or unknown words, and cases where multi-coding is needed).
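For the curious, the preparation steps above can be sketched in a few lines. This is a minimal, illustrative sketch – not any particular package’s approach – and the stop-word list and ‘strateg’ stem are just the examples from the talk:

```python
# Illustrative sketch of text preparation for topic modelling:
# lower-casing, stop-word removal, and crude stemming.
import re

# A tiny stop-word list for illustration; real packages ship much longer ones.
STOP_WORDS = {"if", "so", "and", "the", "a", "is", "it", "in", "on", "to"}

def stem(word, stems=("strateg",)):
    """Collapse a word family onto a shared stem, e.g. 'strategy'
    and 'strategic' both become 'strateg'."""
    for s in stems:
        if word.startswith(s):
            return s
    return word

def prepare(text):
    """Tokenise, lower-case, drop stop words, then stem."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [stem(t) for t in tokens if t not in STOP_WORDS]

print(prepare("If the strategy is sound and strategic, act on it"))
# → ['strateg', 'sound', 'strateg', 'act']
```

Note that ‘strategy’ and ‘strategic’ come out identical, which is exactly the point of stemming: the algorithm then counts them as one topic rather than two.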

Other things happening now in AI relevant to MR: analysing responses to video and converting speech to text (https://www.realeyesit.com/, https://www.bigsofatech.com/, https://livinglens.tv/); “making light work of open-ends” (https://www.confirmit.com/); sentiment and topic analysis by chatbot, which adjusts follow-up questions automatically depending on your response, promoted as “No more form based surveys!” (https://www.wizu.com/); and coding of images.

Recommendations from one of the papers: focus on the benefits of using AI in terms of speed (much quicker than humans) and cost (much cheaper than humans), but also work out where humans can best be used in the process. AI is a “useful timesaver.” There’s no ‘silver bullet’ – you need to blend techniques – but this is just the first wave.

Beware the black box

Bethan Blakely of Honeycomb asked the question “If AI is imitating humans, then is it biased (racist, sexist) like humans?” On the basis of the examples she gave, the answer is, if we’re not careful, “yes”. She urged the need for transparency, accountability and responsibility among those creating and using ‘black box’ algorithms. Jane Frost of the MRS followed up with results of a survey on inclusion (or the lack of it) in MR and the need to call out bad behaviour.

And what am I going to do differently?

I’m following up on one of the text analytics packages as I think it could be really useful for a client of mine which has oceans of text from its website query function which it wants to mine for potential NPD. There is far too much information for it to be a (cost effective) human coding job, but could a human-coded ‘learning set’ be used on the full data to sort wheat from chaff quickly and cheaply?
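To make the ‘learning set’ idea concrete, here is a toy sketch of the principle (stdlib only, with invented example queries – a real job would use a proper text-analytics package and far more training data): a handful of human-coded responses train a simple naive Bayes model, which then sorts the rest of the corpus into wheat and chaff:

```python
# Toy sketch: train on a small human-coded 'learning set', then let the
# model triage uncoded text. Labels and examples are invented for illustration.
import math
from collections import Counter, defaultdict

def train(labelled):
    """labelled: list of (text, label) pairs. Returns per-label word counts."""
    counts = defaultdict(Counter)
    for text, label in labelled:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# The human-coded seed data (invented examples): 'wheat' = NPD-relevant.
learning_set = [
    ("love the new flavour would buy again", "wheat"),
    ("please suggest a product for sensitive skin", "wheat"),
    ("where is my order", "chaff"),
    ("delivery was late again", "chaff"),
]
counts = train(learning_set)
print(classify("any new flavour for sensitive skin coming", counts))  # → wheat
print(classify("my order delivery late", counts))                     # → chaff
```

The point is not that four examples are enough (they aren’t), but that the human effort goes into coding a small, representative sample, and the machine then does the volume work cheaply.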

My favourite quote of the day (from Daniel Bailey of Data Liberation):

“If you don’t want to be replaced by a spreadsheet, keep on micro-learning every day.”

Happy to hear any other perspectives from the ICGers who attended!

Chris Brookes

19/11/18
