FOQ1 Knowledge Brief Executive Summaries

Full Knowledge Briefs of the Executive Summaries below, an output of the Online Research Quality Council (ORQC), are available free to ARF members on My ARF and to non-members for $199. You may download or purchase under "Publications" at my.thearf.org.

For questions, please contact Dr. Bill Cook, EVP, Research and Standards, The ARF, at bill@thearf.org.

Executive Summary 1: Overlap, Duplication, and Multi-Panel Membership »
Executive Summary 2: Response Data Quality »
Executive Summary 3: Inter-Study Comparability and Benchmark Analysis »

LEGAL DISCLAIMER
THE ONLINE RESEARCH QUALITY COUNCIL INFORMATION AND DATA RESULTS (COLLECTIVELY, THE “INFORMATION AND DATA”) CONTAINED HEREIN AND/OR MADE AVAILABLE BY THE ADVERTISING RESEARCH FOUNDATION (THE “ARF”) FROM TIME TO TIME HEREAFTER IN THE FORM OF CUSTOMIZED REPORTS, EXECUTIVE REPORTS, GENERAL SUMMARIES OR OTHERWISE IS THE EXCLUSIVE PROPERTY OF THE ARF. ANY DISCLOSURE, REPRODUCTION AND/OR DISTRIBUTION OF THE INFORMATION AND DATA, IN WHOLE OR IN PART, IS EXPRESSLY PROHIBITED WITHOUT THE PRIOR WRITTEN CONSENT OF THE ARF.

THE ARF IS MAKING AVAILABLE THIS INFORMATION AND DATA TO ITS MEMBERS, FOUNDATION OF QUALITY SUPPORTERS, AND THE GENERAL INDUSTRY “AS IS” AND “AS AVAILABLE” FOR GENERAL REFERENCE, BUT EXPRESSLY DISCLAIMS ANY AND ALL REPRESENTATIONS AND WARRANTIES WITH RESPECT TO THE INFORMATION AND DATA’S COMPLETENESS OR FITNESS FOR A PARTICULAR PURPOSE. YOU SHOULD MAKE YOUR OWN DECISION AS TO HOW OR WHETHER YOU UTILIZE THE INFORMATION AND DATA, AND ANY USE THEREOF IS AT YOUR SOLE RISK FOR QUALITY, PERFORMANCE AND USEFULNESS. THE ARF DOES NOT WARRANT THAT THE INFORMATION AND DATA WILL MEET THE REQUIREMENTS OF ANY THIRD PARTY OR THAT THE INFORMATION AND DATA WILL BE ERROR-FREE. MOREOVER, THE ARF MAKES NO WARRANTIES ON ANY FURTHER RESULTS DERIVED FROM USING THIS INFORMATION AND DATA BEYOND THE ARF’S EXPRESSLY STATED PURPOSE.

UNDER NO CIRCUMSTANCES SHALL THE ARF BE LIABLE FOR COSTS OF PROCUREMENT OR SUBSTITUTION OF GOODS OR SERVICES, LOST PROFITS, LOST SALES, BUSINESS OPPORTUNITIES OR ANY OTHER PECUNIARY LOSS, LOSS OF ANY GOODWILL OR FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES ARISING OUT OF YOUR USE OF THE INFORMATION AND DATA, AND ANY RELATED MATERIALS, HOWEVER CAUSED, ON ANY THEORY OF LIABILITY OR WHETHER OR NOT THE ARF HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THESE LIMITATIONS SHALL APPLY NOTWITHSTANDING ANY FAILURE OF ESSENTIAL PURPOSE OF ANY EXCLUSIVE REMEDY.

ACKNOWLEDGEMENTS
The Online Research Quality Council (ORQC) was established under the auspices of the Advertising Research Foundation at the behest of advertisers, online panel suppliers, and research companies to address a critical business need: improving online survey data quality. The Foundations of Quality (FoQ) research-on-research project, the largest ARF study in history, is the keystone activity of this council. It involves 17 online panel suppliers and five auxiliary vendors, and enjoys the support of major advertisers and research firms, including Bayer Corporation, Capital One, Coca Cola, ESPN, Estee Lauder, General Mills, GM, Kraft, Microsoft, Procter & Gamble, and Unilever, who have come to the table to work together on this initiative.

Even with an unprecedented level of collegiality and cooperation, the hallmark of the Foundations of Quality project, undertaking a major cross-industry task is a continuous challenge. Without the selfless donation of time, money, and expertise, the study would not have materialized.

Although dozens of individuals have contributed ‘above and beyond’ via committee work, monetary support, and personal investment, we give thanks to Robert Tomei of IRI and Kim Dedeker, formerly of Procter & Gamble (now at Kantar), who courageously launched the Online Research Quality Council, and with it, the creation, formation, and deployment of the Foundations of Quality project over the past 18 months. To that we add special thanks to the Define Quality Committee Co-Chairs, Renee Smith, Ipsos; Efrain Ribeiro, Kantar/Lightspeed; and Dr. Tom Evans, ESPN, who, in tandem with their committee, created the FoQ research design and survey instruments, managed the fielding of the study, and worked tirelessly with Bob Walker, our independent analyst, to bring the study to fruition. Finally, we express great appreciation to Kristin Luck and her team at Decipher, who provided outstanding, and critical, database services to support the FoQ study.

Scope of the Foundations of Quality (FoQ) Study
This landmark study involved the fielding of a survey across 17 online panels and a telephone and mail panel. There were five versions and two waves of the study, and in the end a total of 100,000 interviews were gathered for analysis. The completed online surveys were further enhanced by appending each panelist’s historical survey-taking activity from the panel’s records. Each panel supplier also completed a questionnaire about its panel protocols, processes, and characteristics. In addition, a separate analysis looked at 675,000 encrypted email addresses representing the entire online panel universe covered by the 17 panel providers. The ‘book value’ cost of doing the study exceeded one million dollars.

Executive Summary 1: Overlap, Duplication, and Multi-Panel Membership

Robert Walker; President, Surveys & Forecasts, LLC
Raymond Pettit, Ed.D.; Senior Vice President, and
Joel Rubinson; Chief Research Officer, The Advertising Research Foundation

EXECUTIVE SUMMARY

The objective of this Knowledge Brief is to provide clarity by delineating the concepts of panel overlap and duplication and assessing their impact on data quality. There is much concern in the research arena that ‘it’s all one big panel with heavy responders doing all the surveys’, with few facts to guide judgments or opinions. Our study provides the fact-based empirical evidence the industry is seeking.

In today’s world of online survey research, some people belong to more than one panel; this is known as panel ‘overlap’. Using an encrypted email-matching exercise, the FoQ study found a 41% match at an aggregate level. To put it in a more realistic light, when we look at the proportion of people/panelists this represents, the percentage of overlap is, conservatively, around 16%. This proportion was not as high as many in the industry had estimated.
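To make the distinction concrete, the sketch below computes both figures from hashed email lists, one per panel: an aggregate match rate counts panel memberships held by multi-panel people, while person-level overlap counts unique panelists who appear on more than one panel. The toy panel lists, the SHA-256 hashing, and the printed numbers are illustrative assumptions only, not the FoQ matching procedure.

```python
# Illustrative sketch: aggregate membership-level match rate vs. the share of
# unique people who belong to more than one panel. Panel lists are made up.
import hashlib
from collections import Counter

def hashed(emails):
    """Hash addresses so panels can be compared without exposing emails."""
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest() for e in emails}

panels = {  # hypothetical panels with overlapping memberships
    "panel_a": hashed(["ann@x.com", "bob@x.com", "cara@x.com", "dev@x.com"]),
    "panel_b": hashed(["bob@x.com", "cara@x.com", "emma@x.com"]),
    "panel_c": hashed(["cara@x.com", "finn@x.com"]),
}

# How many panels each (hashed) person belongs to.
membership_counts = Counter(h for members in panels.values() for h in members)

total_memberships = sum(len(m) for m in panels.values())
matched_memberships = sum(c for c in membership_counts.values() if c > 1)
overlapping_people = sum(1 for c in membership_counts.values() if c > 1)

# Aggregate match rate: share of all panel memberships held by multi-panel people.
print(f"Aggregate match rate: {matched_memberships / total_memberships:.0%}")
# Person-level overlap: share of unique panelists on more than one panel.
print(f"Person-level overlap: {overlapping_people / len(membership_counts):.0%}")
```

As in the FoQ result, the aggregate figure (here 56%) overstates the share of actual people who overlap (here 33%), because every extra membership a multi-panel person holds is counted again at the aggregate level.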

Further, belonging to multiple panels does not necessarily lead to degradation in survey response quality: most online panelists appear to be diligent and responsible. In fact, whether a respondent belonged to one or to many panels, the FoQ study found virtually no difference in survey results. Statistical modeling confirmed this finding: multiple panel membership is not a significant driver of survey response quality. Other factors in this complex process are at work, and these will be presented in upcoming ARF Knowledge Briefs.

To put the overlap percentage in perspective, if the proportions are projected to the total population of the 17 panels in the FoQ study, there are roughly 5,500,000 ‘unique panelists’ active and available to respond to surveys (the figure would undoubtedly be higher if we consider all US online panels). This is comparable to the numbers reported in past research-on-research into telephone survey participation rates.

Duplication – evidence of a respondent taking a survey more than once in the same study – also exists. Recent duplication levels reported to the industry have seemed alarmingly high and may be confusing the issue. The reason is that the ‘duplication’ level being reported is across online panels in their entirety. But surveys don’t work that way: a sample is drawn for a particular study, and the respondents are placed back in the pool for the next study. So, for each study, multiple probabilities are at work: being selected, agreeing to participate, and actually completing the survey.

While duplication can occur when people belong to multiple panels, its likelihood is much lower than the percentages reported at the aggregate level. In fact, the numbers being reported to the industry may actually be an indicator of overlap rather than duplication, which further confuses the issue. Regardless, it points out the need to create common, accepted definitions and standards across the industry.

The FoQ study measured duplication at the study (survey) level and found that, because a respondent must first be invited to the survey, then get past the screening criteria, and then actually take the survey, the duplication rate that occurs in practice is a fraction of the ‘total’ panel duplication percentage.
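A back-of-the-envelope calculation shows why. For a duplicate to appear in a single study, a multi-panel member must independently be invited, pass screening, and complete the same survey through a second panel. The rates below are purely hypothetical assumptions chosen for illustration, not figures from the FoQ study.

```python
# Hypothetical arithmetic: why study-level duplication is far lower than panel overlap.
overlap_rate = 0.16   # assumed share of panelists belonging to more than one panel
p_invited    = 0.10   # assumed chance a panelist is sampled for a particular study
p_qualifies  = 0.30   # assumed chance they pass the study's screening criteria
p_completes  = 0.50   # assumed chance they then finish the questionnaire

# A panelist who completed the study through one panel becomes a duplicate only
# if they are also invited, qualify, and complete through a second panel.
p_second_complete = p_invited * p_qualifies * p_completes

# Rough share of all completed interviews that are duplicates, assuming only
# two-panel members can duplicate and rates are independent across panels.
duplicate_share = overlap_rate * p_second_complete

print(f"Chance a multi-panel complete duplicates: {p_second_complete:.2%}")  # 1.50%
print(f"Duplicates as a share of all completes:   {duplicate_share:.2%}")    # 0.24%
```

Even with a 16% overlap rate, stacking the per-study selection, screening, and completion probabilities pushes the expected duplication rate well below the raw overlap figure, in line with the low single-digit per-study numbers described below.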

For example, on a typical per-study basis, when another source of sampling is needed (note: 85%-90% of ‘blending’ occurs across two panels), the duplicate percentages we found are generally in the low single digits. Because the FoQ study ran across 17 panels, some higher percentages (mostly correlated with the high overlap levels of particular panels) were seen. Since the study was blinded, we were not able to assess which panels may have sourcing partnerships in place; thus, the numbers provide a glimpse into the ‘universe’ of potential duplication.

The FoQ study included a local-market sample to assess duplication within a restricted sample universe of respondents. In this case, duplication did approach unreasonable levels, signaling that great care and effort should be exercised when this situation occurs.

Further analysis identified a small segment of duplicators (survey takers) who showed marked differences in response patterns when compared to non-duplicators. This finding offers some clues for data-quality improvement when working with highly specialized or ‘difficult to find’ (low-incidence) samples. Even though the blatant ‘duplicator’ segment was quite small, logic dictates that duplication should not be allowed to occur in any form.

Bottom line: the FoQ study reorients a perspective the industry had started to internalize and even believe. The facts show a manageable problem, but duplication needs further attention as long as overlap and multi-panel sourcing (to meet sample requirements) exist.


Executive Summary 2: Response Data Quality

Robert Walker; President, Surveys & Forecasts, LLC
Raymond Pettit, Ed.D.; Senior Vice President, and
Joel Rubinson; Chief Research Officer, The Advertising Research Foundation

EXECUTIVE SUMMARY

While multi-panel membership is a reality, we found little evidence that it impacted data quality to any significant degree. Tangentially, ‘years on the internet’ and ‘hours per week on the internet’ had no significant impact on data quality that we could find.

Survey length, demographics, and people’s attitudes towards surveys, however, were more likely to discriminate between good and bad survey-taking behavior, which does impact data quality. Using a metric developed from the design of the survey, called the Bad Behavior Score (BBS), a series of models was developed which clearly captured the influence of survey length, demographics, and attitudes on ‘good or bad’ survey-taking behavior. This metric was also used to create rules for determining the optimal number of surveys taken by age/gender.
The BBS analysis yielded the following key results (a sketch of this kind of model follows the list):

  • A longer survey increases the likelihood of ‘bad’ survey-taking behavior nearly sixfold
  • Respondents aged 18-29 are nearly twice as likely to demonstrate bad behavior as the 50-65+ age group
  • Taking fewer surveys a month increases the odds of exhibiting ‘bad behavior’, while taking more actually lowers the odds slightly
  • Being a member of multiple panels actually lowers the odds of bad behavior occurring by as much as 32%
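For readers who want to see the shape of such an analysis, the sketch below fits a logistic model of a binary ‘bad behavior’ flag on survey length, age group, monthly survey frequency, and multi-panel membership, and reads the exponentiated coefficients as odds ratios. The input file, column names, and coding are hypothetical assumptions, not the FoQ study’s actual BBS specification.

```python
# Illustrative sketch: a logistic model of a binary "bad behavior" flag.
# The input file and column names are hypothetical, not the FoQ data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panelist_responses.csv")  # hypothetical respondent-level file

# bad_behavior: 1 if the respondent tripped the study's quality checks
# (e.g., straightlining or contradictory answers), else 0.
model = smf.logit(
    "bad_behavior ~ survey_length_min + C(age_group) + surveys_per_month"
    " + multi_panel_member",
    data=df,
).fit()

print(model.summary())

# Odds ratios: e.g., a value of ~0.68 on multi_panel_member would correspond
# to roughly 32% lower odds of bad behavior.
print(np.exp(model.params).round(2))
```

A model of this form is one straightforward way to express findings such as ‘longer surveys raise the odds of bad behavior’ or ‘multi-panel membership lowers them by about 32%’ as comparable odds ratios.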

Our study also captured ‘interview time’ as a way to identify the impact of rushing through surveys to complete them quickly. However, recent academic research on ‘speeding’ (elapsed survey completion time) suggests that ‘total interview time’ may be a biased indicator of quality because it does not take into account endogenous variables such as diligence, accessibility of attitudes, and speed of thinking (which may vary by age). Given the difficulty of isolating ‘speeding’ as a key influencer of data quality, we opted to present some preliminary results in this report and to continue our analysis separately.
The FoQ findings provide additional fodder for a number of potential Industry Solution discussions:

  • Should we be focusing on lessening bad behavior, increasing good behavior, or both? How would that change or affect guiding documents and principles for the industry?
  • The development of a BBS metric or ‘engagement’ scorecard to assess ‘response quality’ on a per-study basis. What would it look like?
  • Uncovering best-practice techniques to ‘eliminate’ egregious survey-taking behavior on the fly. How does this affect sample balance, the ability to provide representative samples and meet targets, and the cost of a project?
  • The development of a survey-level quality audit. Should it be third-party or self-imposed? What would it look like? How would it be implemented? What are the consequences of ‘failing’ an audit?
These implications are currently under study by the ARF’s Industry Solutions Committee, part of the ORQC.


Executive Summary 3: Inter-Study Comparability and Benchmark Analysis

Robert Walker; President, Surveys & Forecasts, LLC
Raymond Pettit, Ed.D.; Senior Vice President, and
Joel Rubinson; Chief Research Officer, The Advertising Research Foundation

EXECUTIVE SUMMARY

Benchmarking is the process of comparing the quality of a specific process or method to another that is widely considered to be an industry standard or best practice. The result is often a strategic, tactical, or metrics solution for making adjustments or changes in order to make improvements. Benchmarking may be a one-time project, but is often treated as a process in which organizations continually seek to challenge and improve their practices.

Benchmarking can also be used to test the reliability, or consistency, of a set of measurements or of a measuring instrument. There are two forms: the consistency of measurements from the same instrument over time (test-retest), or, in more subjective situations such as personality or trait inventories, whether two independent raters give similar scores (inter-rater reliability). The FoQ study was designed to address the first form of reliability (test-retest).

Our focus, then, is on repeatability, which is different from reproducibility. Repeatability is achieved when test/survey results across time, within/across different panels, different vendors, or similar (product) categories ‘agree’ within certain accepted limits or parameters. For the purposes of the FoQ, our ‘repeatability conditions’ included:

  • Comparing the same measurement procedure (online panel surveys)
  • Using the same measuring instrument, under the same conditions (survey design was identical across all samples)
  • Collecting data from the same sample (all panels used the same sample specifications)
  • Collecting data over a specified period of time (2 Waves).

Our Benchmark Analysis was a way to study the repeatability conditions within and across online panels, not research vendors. The major findings include:

  • While within-panel results are consistent, across panels we see wide variance, particularly on attitudinal and/or opinion questions (purchase intent, concept reaction, and the like).
  • Panel ‘best practices’ reduced variance only slightly.
  • Sample balancing (weighting) survey data to known Census targets, minimally on age within gender, education, income, and region, removed variance but did not completely eliminate it (a weighting sketch follows this list). Likewise, the test of a pseudo-demographic weighting variable (panel tenure) did not completely eliminate variance.
  • The data suggest that panel practices work together in subtle ways to build groups of respondents with distinctive attitudinal profiles. While panel tenure may be one such factor, the way panels recruit, the type and amount of incentives offered, and possibly even the ‘character’ of an individual research/panel company may encourage distinctive panels to emerge whose members share attitudinal and motivational propensities, driving results that vary from panel to panel.
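The weighting step referenced above is commonly implemented as raking (iterative proportional fitting) against Census-style margins. The sketch below is a minimal illustration of that technique; the rake function, the categories, the toy sample, and the target proportions are assumptions for illustration, not the FoQ study’s actual weighting scheme.

```python
# Illustrative sketch: raking (iterative proportional fitting) survey weights
# to hypothetical Census-style margins. Categories and targets are made up.
import itertools
import pandas as pd

def rake(df, margins, weight_col="weight", iterations=25):
    """Adjust weights so the weighted margins match the target proportions."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(iterations):
        for col, targets in margins.items():
            current = df.groupby(col)[weight_col].sum() / df[weight_col].sum()
            factors = {cat: targets[cat] / current[cat] for cat in targets}
            df[weight_col] *= df[col].map(factors)
    return df

ages = ["F18-34", "M18-34", "F35-54", "M35-54", "F55+", "M55+"]
regions = ["Northeast", "South", "Midwest", "West"]
sample = pd.DataFrame(                      # toy unweighted sample, 600 rows
    list(itertools.product(ages, regions)) * 25,
    columns=["age_gender", "region"],
)
targets = {  # hypothetical population margins
    "age_gender": {"F18-34": 0.15, "M18-34": 0.15, "F35-54": 0.18,
                   "M35-54": 0.18, "F55+": 0.18, "M55+": 0.16},
    "region": {"Northeast": 0.18, "South": 0.38, "Midwest": 0.21, "West": 0.23},
}
weighted = rake(sample, targets)
print(weighted.groupby("age_gender")["weight"].sum() / weighted["weight"].sum())
print(weighted.groupby("region")["weight"].sum() / weighted["weight"].sum())
```

Raking of this kind aligns a sample’s demographic margins with the targets, which is exactly what the finding above cautions about: it narrows, but does not remove, between-panel differences driven by attitudes and panel practices.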
The findings strongly suggest that panels are not interchangeable. Guidelines and transparency about sourcing are needed when blending or multiple panels are used to fulfill sample requirements. In addition, buyers of research should be aware of the attitudinal tendencies produced by panel practices, and should seek ways to ensure that suppliers carefully blend and balance samples to achieve a harmonious result.

About the Advertising Research Foundation (ARF)

Founded in 1936 by the Association of National Advertisers and the American Association of Advertising Agencies, the ARF is dedicated to aggregating, creating and distributing research-based knowledge that will help members make better advertising decisions. ARF members include more than 400 advertisers, advertising agencies, associations, research firms, media companies, and academics. The ARF is the only organization that brings all members of the industry to the same table for strategic collaboration. The ARF is located at 432 Park Ave. South, 6th Floor, New York, NY 10016 and on the Web at www.theARF.org.
