Artificial Intelligence (AI), or more commonly Machine Learning (ML), already exists in various forms in the Market Research industry. We see AI/ML being used for sentiment analysis, natural language processing, and machine vision tasks, all of which sift insights from large volumes of textual, audio, image and video data using instant recognition and tagging. Then there are chatbots and social media targeting, engaging with customers using algorithms instead of the old-fashioned human touch. And let's not forget big data analysis and task automation, both of which can vastly reduce overall project hours and costs.
But what about accuracy? Results are not 100% accurate. At a recent conference I attended, "Alexa, what's the future of market research?", an interesting example was presented by Tim Brandwood of Digital Taxonomy. In a test project, six days of manual verbatim response coding was reduced to one day with the introduction of CodeIt, their proprietary AI/ML coding (or should we say 'tagging') tool. However, the volume of responses coded with the aid of the automated process measured only 83% of what the wholly human approach achieved. Some clients will see this as an acceptable trade-off, particularly when it unlocks masses of previously untapped unstructured data whose insights would otherwise have been prohibitively expensive to extract.
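The benchmarking idea above is simple to express in code. The sketch below (illustrative only; the codes and responses are hypothetical, and this is not how CodeIt itself works) treats the human coders' output as the gold standard and measures what share of responses the automated pass gets right:

```python
def agreement_rate(auto_codes, human_codes):
    """Share of responses where the automated code matches the human 'gold standard'."""
    matches = sum(1 for a, h in zip(auto_codes, human_codes) if a == h)
    return matches / len(human_codes)

# Hypothetical codes assigned to six open-ended survey responses
human = ["price", "quality", "service", "price", "delivery", "quality"]
auto  = ["price", "quality", "service", "quality", "delivery", "quality"]

print(f"{agreement_rate(auto, human):.0%}")  # prints "83%" for this toy sample
```

In practice the benchmark would be run over thousands of verbatims and multiple code frames, but the principle, automated output scored against a human baseline, is the same.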
Accuracy is also somewhat reliant on the volume of data you are able to seed with, a necessary step in kick-starting the machine learning process: the higher the volume of data, the more accurate the results. This is where the traditionally small, bespoke nature of market research studies and AI really start to diverge, which raises the question of how we might address the issue. One suggested solution is to draw on data lakes to expand the volume of data, although Intellectual Property concerns might need to be addressed with clients before this becomes a viable resource.
On a broader level, bias has also been identified as an issue in Machine Learning. To learn, the machine must first be fed data. But where does that data come from? Often from freely available datasets such as those collected by Google, but these reflect historical bias, which is then perpetuated in the AI's results. Automated translation tools offer an example. When translating into English from a language where a phrase is gender neutral, a tool can attach a gender to job roles: doctor may be rendered as male and nurse as female, because that is the bias present in the data it has learnt from.
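To see how a skewed corpus produces skewed output, consider this minimal sketch. The counts are invented and the "model" is just majority-vote frequency, far simpler than a real translation system, but it shows the mechanism: a model that optimises for the most likely answer in its training data will reproduce whatever imbalance that data contains.

```python
from collections import Counter

# Hypothetical "training corpus" of (role, pronoun) pairs with a historical skew
corpus = ([("doctor", "he")] * 80 + [("doctor", "she")] * 20
          + [("nurse", "she")] * 90 + [("nurse", "he")] * 10)

def most_likely_pronoun(role):
    """A frequency-based model simply returns the majority association."""
    counts = Counter(p for r, p in corpus if r == role)
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("doctor"))  # prints "he" -- the skew is perpetuated
print(most_likely_pronoun("nurse"))   # prints "she"
```

Nothing in the code is "biased"; the bias lives entirely in the data, which is exactly why auditing training data matters.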
Another ethical consideration is around transparency and consent. Products like Alexa have helped to normalise AI with consumers, bringing these highly sophisticated gadgets into their homes. Alexa listens and learns, but the consumer is aware of its existence and can interact with it on their own terms. When it comes to chatbots, should a consumer be informed they are not talking to a human? Does it matter either way? There are varying opinions in our industry and beyond. It's very much a grey area at present, one that needs careful consideration with a wide range of specialists feeding into the conversation.
I can see the concerns from both perspectives, and looking specifically at Market Research, we very much pride ourselves on our code of conduct. Yet we have to ask whether transparency comes at the expense of accurate insights: will behaviour and language be moderated because the conversation is being had with a bot? On the other hand, I would suggest (as would many others) that behaviour is far more likely to be moderated when conversing with a real interviewer!
Either way, there is still a need for human involvement in AI at present, be it preparing initial test data to seed the Machine Learning process, setting up and validating rules, or performing checks and balances on the outputs. That effort isn't wasted, though; it's an upfront investment. However, for many that investment is simply not viable for a multitude of reasons, in which case they have little choice but to wait until the technology becomes more mainstream and cost-effective. Of course the fear is being left behind, because it's often the pioneers who are first to profit and gain market share.
Here at ResearchBods we utilise video intelligence software when capturing video responses from access panels and community members. This is a huge time saver, highlighting specific keywords and enabling the reviewer to jump to key frames in the footage at the click of a button. By doing so we quickly identify those few meaningful seconds of content amongst hours of footage. (Hu)man and machine in perfect harmony you might say; harnessing the best elements of AI/ML and years of professional experience, blended with a little bit of common sense to know when the machine hasn’t quite got it right.
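The "jump to key frames" idea boils down to searching a timestamped transcript. This sketch is a hypothetical simplification, not the actual software we use: real speech-to-text output is richer and the matching far smarter, but it conveys how keyword hits become clickable moments in hours of footage.

```python
# Hypothetical timestamped transcript segments from speech-to-text output
transcript = [
    (12.5, "the packaging felt cheap"),
    (248.0, "delivery was two days late"),
    (1015.3, "I love the new flavour though"),
]

def find_keyword_moments(transcript, keywords):
    """Return (timestamp, text) pairs containing any keyword, so a
    reviewer can jump straight to those seconds of footage."""
    keywords = [k.lower() for k in keywords]
    return [(t, text) for t, text in transcript
            if any(k in text.lower() for k in keywords)]

for t, text in find_keyword_moments(transcript, ["delivery", "flavour"]):
    print(f"{t:7.1f}s  {text}")
```

The machine surfaces the candidate moments; the human reviewer still judges which of them actually matter, which is the blend described above.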
To conclude, AI is still very much in its infancy, so there's no need to panic. We're certainly not. We've adopted some AI/ML techniques that fit our business, and we're always scanning the horizon and taking stock. Over the next five years the landscape is likely to stabilise. It's an oft-repeated pattern that holds true in Market Research as much as any other industry. We'll see more and more software coming to market that is cost-effective, proven and reliable. Market leaders will emerge as others fall by the wayside, including the cowboys that are most definitely out there in the Wild Wild West of AI.
In the meantime, be open to how AI/ML could benefit your role, department, organisation and your customers. If you’re about to embark on employing a third-party AI/ML solution, don’t be afraid to ask some hard questions about its provenance. How do the results stack up against tried and tested methods? If it’s worth its salt your chosen partner won’t shy away from answering your questions and answering them well to boot.
And finally, trust your instincts to tell you whether AI is right for your business. Satisfy yourself that it’s not just a gimmick and that it’s going to bring some quantifiable benefits in some shape or form. After all, instinct is the very thing that sets humans apart from Artificial Intelligence, for now anyway.