
Coping with large volumes of open-ended responses

16 Feb 2021 | Research & Business Knowledge

Many thanks to Ella Fryer-Smith, who has produced this summary of advice given to her about coping effectively with large volumes of open-ended responses.

Thanks to all who responded to my request over the past couple of days. Quite a few people have asked for information on this topic too, so I thought I’d round up the advice I received here:

  • Discover.ai ‘OPEN’: an accelerated reading tool that saves the donkey work but still relies on human analysis skills. The machine uses NLP and other clever techniques to put data into themes according to whatever nudge terms have been given initially – but you can change these and find new connections (just as you would in standard qual analysis, except this is faster). Works in any language (for a slightly higher price point). I’m leaning toward this option personally – thanks Kirstie & Konrad for the recommendation.
  • Symanto: launched a self-serve platform this year where you can input your own data or collect reviews. Steph says: ‘The accuracy is better on the whole than most other providers (I’ve done a few tests over the years) but never perfect, although you can also correct it. For specialist projects I think you’d have to work on the dictionary, as their off-the-shelf ones wouldn’t meet all needs, but the AI will provide suggested terms for high-frequency words, which I find helpful.’ You can learn more on their website here <https://www.symanto.com/insights-platform/insights-platform-features/>
  • Using some form of semantic text analysis / AI (similar to the above two options). A thorough (& quite academic) review of different packages can be found here <https://www.surrey.ac.uk/computer-assisted-qualitative-data-analysis/resources/choosing-appropriate-caqdas-package> (thanks Nathalie for the article). For a rough idea of what this kind of approach involves, see the code sketch after this list.
  • Alternatively, a few people came back saying they had tried various AI methods but find Excel is still the most time-efficient and reliable option, even for large datasets.
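
For anyone curious what the ‘nudge terms into themes’ idea looks like in practice, here is a minimal Python sketch. It is my own illustration, not how Discover.ai, Symanto or any other provider actually works: it simply assigns each open-ended response to whichever theme its TF-IDF vector is most similar to. The responses, theme names and seed terms are hypothetical, and it assumes scikit-learn is installed.

```python
# A minimal sketch (not any vendor's implementation) of theming open-ended
# responses with "nudge" terms: each response is assigned to the theme whose
# seed terms it is most similar to under a TF-IDF representation.
# Requires scikit-learn; the responses, themes and seed terms are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

responses = [
    "Delivery took two weeks and nobody replied to my emails.",
    "Love the new flavour, much better than the old recipe.",
    "The checkout page kept crashing on my phone.",
]

# Hypothetical nudge terms defining each theme.
themes = {
    "service": "delivery support reply email slow staff",
    "product": "flavour taste recipe quality ingredients",
    "website": "checkout page app crash phone login",
}

# Fit TF-IDF on responses plus theme seed strings so both share one vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(responses + list(themes.values()))

response_vecs = matrix[: len(responses)]
theme_vecs = matrix[len(responses):]

# Assign each response to its most similar theme ("unthemed" if no overlap).
similarities = cosine_similarity(response_vecs, theme_vecs)
theme_names = list(themes)
for text, scores in zip(responses, similarities):
    best = scores.argmax()
    label = theme_names[best] if scores[best] > 0 else "unthemed"
    print(f"{label:>9}: {text}")
```

In practice you would iterate on the seed terms and review anything left ‘unthemed’ by hand, which is exactly where the human analysis skills mentioned above come back in.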