We all know video is great. It illuminates, it inspires, it makes insight tangible, and it brings the 'voice of the customer' to the boardroom. In other words, it's often used as context – a powerful medium for storytelling that's bolted on to existing projects and methodologies and used to bring the 'real' data to life.
At the same time, we’re capturing exponentially increasing volumes of video – both as consumers ourselves (across a range of social media platforms) and as a consumer insight industry.
But context is only one part of video’s value. We’ve barely scratched the surface of the potential of video as data – a sequence of quantifiable and commercially useful data points that allow us to give structure and meaning to content previously thought to be at best purely qualitative, and at worst completely chaotic.
Clients want context to help them make sense of their quant data. But they're also challenging traditional data capture mechanisms. In sophisticated organisations capturing an ever-expanding volume of data about complex consumers, the right kind of high-quality data becomes more valuable than ever.
That means data about what people really do, how they do it and how they talk about it. It means data that comes from natural, authentic responses – rather than reported or artificial ones. It means data that's built on the reality of consumers' lives, rather than a pre-structured framework designed by researchers that they're forced to conform to in responding. And it means data that's experiential: captured in the moment, at the point of interaction with a product.
This kind of data exists everywhere in video – in every word spoken, every observable behaviour, every brand and product lurking in the background. It’s in the tone of a consumer’s voice, in the movement of their face, in their choice of language. The challenge therefore becomes accessing it at scale.
Qualitative researchers look for these data points every day – they're the building blocks of insight creation in qual projects. But the principles of classification, identification and analysis can be applied to video data sets of colossal size. One 90-second video of a consumer loading their washing machine might contain 30 data points of interest to an ethnographer or qual researcher – in the brands and products they're using; in the way they sort their clothes and pour detergent; in where their washing machine is located; in the cycle setting they use. Scale that up to a quantitative sample size across 500 videos and the data points run into the tens of thousands: mass qualitative insight.
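To make this concrete, the aggregation step can be sketched in a few lines. This is an illustrative example only – the video tags, code names and brand values below are hypothetical, standing in for the observations a human or automated coder would extract from each clip; the point is that qualitative codes become quantitative frequencies once counted across a sample.

```python
from collections import Counter

# Hypothetical coded observations: each video yields (code, value) data
# points produced by a coder watching the clip. Names are illustrative.
videos = [
    [("brand", "Brand A"), ("detergent_form", "liquid"), ("sorting", "by colour")],
    [("brand", "Brand B"), ("detergent_form", "powder"), ("cycle", "eco")],
    [("brand", "Brand A"), ("detergent_form", "liquid"), ("cycle", "quick")],
]

# Aggregate the qualitative codes into per-code frequency counts.
counts = {}
for video in videos:
    for code, value in video:
        counts.setdefault(code, Counter())[value] += 1

print(counts["brand"].most_common())  # [('Brand A', 2), ('Brand B', 1)]
```

Across 500 real videos the same loop turns thousands of individually qualitative observations into a structured, countable data set – the codeframe itself still has to be built from the ground up by a researcher.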
In fact, much of the data it's possible to collect from these kinds of videos doesn't just add to the sum of knowledge from conventional closed-question capture – it begins to replace it. Think again about the laundry video: how many closed questions would you need to ask to capture the same data points, and with how much less confidence in the answers? In a world where video is not just a bolt-on but a primary source of data, the rules about what questions you ask and how you ask them change.
Technology has a big role to play here. Managing huge video data sets and automating their analysis is the holy grail for many in our sector. But that underplays the role of human interaction with video – the process of making connections, finding emergent data points and building codeframes from the ground up. Although machine learning, behaviour recognition and other automated technologies are improving quickly, complex and detailed behaviours will likely need human interpretation to code successfully for some time. Video contains data that falls into two groups: points that computers can recognise and codify, and ones that they can't. At that point, the focus shifts to tech as an enabler rather than a solution in itself.