“This poll is a push poll!”

“These are partisan questions…”

“If this isn’t a biased poll, then why are the results broken down by political party?”

We sometimes hear this sort of commentary when we present research results in a public forum.  More often than not, it comes from persons or groups who are unhappy with the results.  Typically, they hope to discredit those results by questioning the research methodology.

The accusation bears examination.  When opinion research is conducted in the public eye and focuses on public policy, it is by nature destined to generate heated discussion.  It is worth taking some time to clarify what is and what is not legitimate methodology.

First, let’s get a grip on what exactly constitutes a push poll.  A push poll is in fact not a poll at all, but rather a messaging tactic.  The idea is to contact a group of people (usually voters) under the guise of conducting a legitimate poll.  However, the “poll” is simply a matter of relating a litany of negative (or for that matter positive) items about the candidate, issue or initiative.  The goal is to influence public opinion one way or another by communicating pro or anti messages enhanced by the perceived credibility of an unbiased opinion research (poll) project.

There is no question that this tactic is misleading, and it is unethical for a pollster to conduct.  Again, a “push poll” is in fact not a poll at all.  No reputable pollster will ever conduct one.  Push polls are commissioned by campaigns and performed by call centers, not by research organizations.  These efforts are easily distinguishable from a genuine poll not only by their grossly lopsided presentation of a candidate, issue or measure, but also by the number of persons contacted throughout the effort.

A push poll requires reaching a far larger audience than is needed for statistical validity in a legitimate poll.  For instance, depending on the size of the population, most polls complete somewhere between 300 and 1,000 calls.  A “push poll,” on the other hand, will typically entail contacting thousands of persons.
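For readers who want to see where those sample sizes come from, the figures are consistent with the standard margin-of-error formula for a simple random sample.  The short calculation below is a common statistical sketch, not part of the original survey work described here:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n: number of completed interviews
    p: assumed proportion (0.5 is the worst case, giving the widest margin)
    z: z-score for the confidence level (1.96 corresponds to ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# The typical poll sizes mentioned above:
print(round(margin_of_error(300) * 100, 1))   # roughly +/-5.7 points
print(round(margin_of_error(1000) * 100, 1))  # roughly +/-3.1 points
```

In other words, a few hundred to a thousand interviews already pin public opinion down to within a handful of percentage points; contacting thousands more buys almost no additional precision, which is why such volume signals messaging rather than measurement.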

The goal is not statistical validity, but rather communicating a message to a specific group of persons.

So, if you think a poll that was conducted with a sample size of 350 is a push poll, you’re probably wrong.  It just does not touch enough people to effect change on any real scale.

This leaves us with the question of whether a poll’s results are corrupted by bias.  Bias is the intrinsic enemy of accurate polling.  It is the methodology of a research effort that determines whether bias has been introduced to a poll, thereby calling its results into question.  There are myriad forms of bias, including self-selection bias (the inherent flaw in mail and online surveys), interviewer bias and question bias, to name a few.

The accusation that a survey is “partisan” usually has to do with the kind of questions asked and the way the data are presented.  Let’s start with the kind of questions asked.  When examining a matter of public policy, it is incumbent upon public servants and elected officials to gain a thorough understanding of what the population they represent or work on behalf of truly thinks.

In other words, public policy doesn’t occur in a vacuum.  It exists in the swirling winds of competing interests, objectives and personalities.  In America it is a free process and it is wholly human.  That means it is both complex and messy.  It is the job of opinion research to distinguish between the genuine and the perceived, identify those who are unmoving in their opinions and those who can be swayed (as well as the degree to which they may be swayed), and ultimately develop an accurate assessment of the landscape of public opinion.

Sound governance is predicated on developing some understanding of how people will react to likely messages and messengers on every side of an issue.  Only then can a responsible government determine a go-forward strategy that legitimately reflects the needs, desires and objectives of the population it serves, as opposed to heeding the cries of a vocal but unrepresentative few.

With this in mind, a research project will necessarily entail exposing respondents to probable arguments, protagonists and antagonists, and various forms of messaging, including messages presented in conversational (“real-world”) formats.  The reality is that people will hear many arguments: some legitimate, some less so; some heated and some coldly rational; some on the radio or in the newspaper, and some from their friends and neighbors.  The question is what will be effective, and to what degree.  This demands that pointed questions intended to sway public opinion be asked of a research project’s respondents, simply because the general public will be subjected to those very arguments.

Randomizing question order addresses one such source of bias: if a particular ordering of questions affects how a person responds (in other words, introduces bias to the project), presenting a different ordering to other respondents will negate that effect.  In this manner, bias resulting from question order is cancelled out by the continuous re-ordering of questions through randomization.
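The mechanics of this are straightforward.  As a minimal sketch (the function and question labels below are illustrative, not drawn from any actual survey instrument), each respondent can be given an independently shuffled copy of the same question list:

```python
import random

QUESTIONS = [
    "Argument in favor of the measure",
    "Argument against the measure",
    "Neutral background question",
]

def questions_for_respondent(questions, rng=random):
    """Return an independently shuffled copy of the question list.

    Because every respondent gets a fresh random order, any bias
    introduced by one particular ordering averages out across the
    full sample rather than skewing all responses the same way.
    """
    order = list(questions)  # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Every respondent sees the same content, just in a different order.
survey_a = questions_for_respondent(QUESTIONS)
survey_b = questions_for_respondent(QUESTIONS)
assert sorted(survey_a) == sorted(QUESTIONS)
```

The essential point is that the content of the questionnaire never varies; only the sequence does.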

Finally, it can be unsettling for some to see the results of a survey broken down by political group.  For instance, a study may find that 40 percent of Democratic men feel one way, while 80 percent of Republican women feel exactly the opposite.  It is unsurprising that this kind of reporting can have the whiff of partisanship about it.  Nevertheless, it is not necessarily reflective of a “partisan” effort.  The goal is simply to identify groups with shared opinions.  As long as the same questions are asked of all groups (Democrats and Republicans, men and women, and so on), there is no problem in terms of bias.

The key takeaway of this final point is simply that the content of the questions is not changed in any manner because of these descriptive demographic traits, which are used solely to analyze responses after the survey has been conducted.

It should be evident that a push poll is pretty easy to spot.  And a poll that entails pointed questions or presents results using descriptive characteristics (demographics) is not, for that reason alone, a push poll.

A well-executed poll is designed to reveal what people really think and whether their opinions can be swayed one way or another.  It is not intended to make people feel good or bad, nor can it influence the opinions of a population to any appreciable degree.

A poll is simply a snapshot in time; its results are an accurate representation of a broader population, open to analysis and interpretation.

Mr. Wallin is vice president of Probolsky Research LLC, a full service opinion research firm specializing in local government.