The buzzwords in research might be big data and providing insights, but the industry neglects at its peril the seemingly simple task of providing data in the form that research buyers actually want.
Tabulations were once the main deliverable, but this broadened to PowerPoint reports, Excel worksheets, and availability of data within proprietary software. It continues to move forward, as clients are now expecting to see data in online dashboards, online reports, or perhaps as data within their own internal reporting systems. Clients are beginning to expect their research data to be analysable alongside other key business data. To get a place at the “top table”, research data needs to be “customer ready” for this change.
Over the past five years, I have specialised in managing data from different sources so that research buyers can marry up their internal data with key research findings. Typically, the research projects feeding these data integration systems are tracking studies. The quality of data I have seen has differed vastly.
So, what distinguishes the “good data” that research agencies pass on to us from the “bad data” that others supply? I would suggest that there are three key areas: consistency, project management, and communication.
Consistency is all-important. To ensure that data is provided correctly, it is important to develop a set of procedures that can be easily followed and understood. This is essential, particularly when staff leave or a new team takes over a project. Putting such procedures in place is a skilled task; some companies already have a specialist overseeing this.
Project control is equally important, and it should cover the detail of the project. Fortunately, there is an easy solution at hand – good old Excel spreadsheets. These can impose discipline and produce an easily viewable record of the history of a project. Let’s take a tracking study, initially researched in five countries with three questionnaire versions, which records data about twenty key brands for ten performance indicators.
We all know what happens with a tracking study like this – brands are added or removed, a sixth country is added, one version of the questionnaire is only asked annually, and so on. This is typical of the tracking study that was “never going to change” – or so we thought and hoped!
Simple spreadsheets can be used to record what is changing in a tracking study. They quickly explain the history of the project and can be used as a “master index” to drive the project. Some software may have this functionality built in, but the principles are the same and, of course, all important.
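To make the “master index” idea concrete, here is a minimal sketch in Python using a CSV file (the format a simple Excel sheet saves to). The column names and example changes are illustrative assumptions, not a prescribed layout:

```python
import csv
import io

# A minimal sketch of a "master index" recording tracking-study changes.
# Column names and entries are illustrative assumptions only.
rows = [
    {"wave": "2023-Q1", "change": "Brand added",   "detail": "New brand added to brand list"},
    {"wave": "2023-Q2", "change": "Country added", "detail": "Sixth country added"},
    {"wave": "2023-Q3", "change": "Version note",  "detail": "Version C now asked annually only"},
]

# Write the index out as CSV, one row per change.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["wave", "change", "detail"])
writer.writeheader()
writer.writerows(rows)

# Reading the index back gives a quick, ordered history of the project.
buffer.seek(0)
history = list(csv.DictReader(buffer))
for entry in history:
    print(f"{entry['wave']}: {entry['change']} - {entry['detail']}")
```

The same sheet can then drive processing scripts directly, so the record of changes and the logic that applies them never drift apart.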
Communication is the third key area. Inevitably, even with good project management and anticipation of changing questionnaires, something may happen that changes the fundamental structure of the data or of what is required. The advice here is simple: do not assume. We have all been recipients of data that was provided with good intentions but meant hours of work to restructure.
All of the above covers good practice, but it does not address how the actual data should be provided. There are three common formats for research data: SPSS, Triple-S, and XML. SPSS is not really a standard; it is a software product that most people working with research data have access to, so its files act as a de facto standard.
Triple-S is truly a standard. It was developed in the United Kingdom more than 20 years ago and has succeeded to the extent that over 50 research software suppliers support it. It means that metadata can be passed from one software package to another at both the respondent data level and the data content level (question texts, response codes, filters, and some data rules).
XML is the modern way to pass information between systems. When you book an airline ticket, for example, the data associated with your booking will be stored in an XML format and passed to a database. XML is like HTML, but it allows flexibility in how your data is structured. Triple-S uses XML to store the data content.
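To give a feel for the kind of metadata such XML carries, here is a rough Python sketch. The element and attribute names are simplified illustrations of my own, not the actual Triple-S schema, which is defined by the Triple-S specification:

```python
import xml.etree.ElementTree as ET

# A rough sketch of survey metadata expressed as XML.
# Element and attribute names are simplified illustrations,
# NOT the real Triple-S schema.
variable = ET.Element("variable", ident="1", type="single")
ET.SubElement(variable, "name").text = "Q1"
ET.SubElement(variable, "label").text = "Overall satisfaction"
values = ET.SubElement(variable, "values")
for code, text in [("1", "Very satisfied"), ("2", "Satisfied"), ("3", "Dissatisfied")]:
    ET.SubElement(values, "value", code=code).text = text

xml_text = ET.tostring(variable, encoding="unicode")
print(xml_text)

# A receiving package can parse the metadata and recover the question
# text and response codes without any manual re-keying.
parsed = ET.fromstring(xml_text)
codes = {v.get("code"): v.text for v in parsed.find("values")}
```

The point is the separation: the structure describes the data, so any two packages that agree on the schema can exchange surveys without hand-crafted conversion work.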
Finally, there is customised XML. Triple-S is a particular standard way of using XML, whereas other customised XML forms may be used or required. If a research agency needs to provide data directly to the IT department of a bank, for example, the bank may already have a proprietary or preferred format for the XML it uses to process data.
In summary, the provision of data is growing in both volume and importance. Good practice and expertise are needed. The prize is that new openings become possible and that research data earns its place at the “top table”. We ignore this opportunity at our peril.
Authored by Phil Hearn, MRDC Software