Beyond the methodology pollsters use, the wording of questions and the choice of which questions to ask can create the impression of a result that doesn't reflect reality.
With a federal election just about to start, we are going to be inundated with a barrage of polls.
And, while Canadian polling companies haven’t suffered the same loss of credibility as US companies (many of which predicted a 10+ point win for Joe Biden, rather than the actual 4 point advantage), many people in Canada also have doubts about polling in this country.
Those doubts are furthered by the wide range of results, with some polls showing a nearly 15 point Liberal lead, and others showing a lead of just 4 points.
This is why I tend to recommend that people look at the trend in polls, rather than specific polls.
For example, in the 2019 campaign, and leading up to it, many polls showed the Conservatives leading, while others showed the Liberals leading.
Most pollsters ended up being off by 2 or 3 points at most, with the final result being 34% for the CPC, and 33% for the Liberals, though the Liberals won more seats because of vote distribution.
This time around, by comparison, things look much worse for the CPC.
The poll that showed the CPC with the best results in 2019 – Angus Reid – now shows the CPC trailing.
Most other polls also show the CPC doing worse than in 2019.
In fact, the CPC doesn’t lead in any poll, a stark contrast to the election two years ago.
So, while we can’t be sure that any one poll is accurate, the trend is certainly not good for the Conservatives.
The parties also do lots of internally funded polls, and the willingness of the Liberals to push for an election likely means their internal polling is good.
That would also explain why the CPC seems scared of a campaign.
But beyond all of that, it’s important that we be as aware as possible when it comes to tricks pollsters use to give a certain impression, or fit a narrative.
Note, I’m not saying pollsters are making up their numbers. That would quickly destroy their reputations, since most of their income comes from non-political polling for corporations, and no company would hire a pollster caught flat-out making things up.
However, it is clear that pollsters word questions in such a way as to group responses together to tell a specific story.
Here’s an example:
This is a headline from Abacus Data:
Then, a headline from the Peterborough Examiner, reporting on a Mainstreet Research poll:
How can both be true?
Well, the answer is in the clever wording of the Abacus poll.
Abacus asked this question about an early election:
“If the Prime Minister asks for an election to be held this fall saying he would like to give Canadians a chance to select the government they want to take the country forward how would you feel about this?”
Here are the three choices people could pick in response:
“Upset at Mr Trudeau because an election seems unnecessary.”
“Prefer that we not have an election but it isn’t something that would affect how you vote.”
“Happy to have a chance to cast a ballot and help choose the government to take us forward.”
17% picked the first option.
44% picked the second option.
38% picked the third option.
You’ll note, of course, that there is only one negative choice, and two somewhat positive choices.
And lo and behold, Abacus then said the following:
“While only a minority prefer an election now, the vast majority (83%) won’t be upset at Mr. Trudeau if he asks the Governor General for an early election.”
They could have just as easily said, “While only a minority are happy to go to an election now, a clear majority (61%) are either angry at an early election or would prefer not to have an early election.”
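The trick is pure arithmetic: the same three percentages support either headline depending on which two options you group together. A minimal sketch of the regrouping, using the Abacus figures above (the option labels are paraphrased):

```python
# Abacus results: one clearly negative option, two options that are
# neutral-to-positive about an early election.
responses = {
    "upset at an early election": 17,
    "prefer no election, but it wouldn't affect my vote": 44,
    "happy to have an election": 38,
}

# Framing 1 (Abacus's headline): group everyone who isn't "upset".
not_upset = (responses["prefer no election, but it wouldn't affect my vote"]
             + responses["happy to have an election"])
print(f"{not_upset}% won't be upset at an early election")
# 82% here; Abacus reported 83%, presumably 100 - 17 with rounding.

# Framing 2 (the alternative headline): group everyone who would
# rather not have an election.
prefer_no_election = (responses["upset at an early election"]
                      + responses["prefer no election, but it wouldn't affect my vote"])
print(f"{prefer_no_election}% would rather not have an early election")
# 61%, matching the alternative framing above.
```

Neither number is fabricated; the story comes entirely from where the middle option gets lumped.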
This technique could be used to come up with basically any narrative you want.
You could use it to generate conservative responses to an issue:
What do you think about deficit spending?
“The deficit is of no concern.”
“I understand the short term need for a deficit, but want to see it reduced.”
“We must balance the budget in a reasonable amount of time.”
Then just add up the last two responses, which would likely form a majority of respondents since they are quite broad, and claim that “an overwhelming majority oppose deficits.”
Sure, the numbers would look right, but would it actually tell you anything?
By contrast, when Mainstreet Research asked people about an early election, they worded it this way:
“Do you think this is a good time to have a federal election?”
The results were 35.3% ‘yes’, and 64.7% ‘no’.
That is, in my view, a far more reasonable and realistic poll, since it offers an equal number of positive and negative answer choices: one each.
Note how the 35.3% who support an early election in the Mainstreet poll is comparable to the 38% who are ‘happy’ to have an election now in the Abacus poll. But Mainstreet didn’t offer the third option Abacus did (prefer not to have an election, but it wouldn’t affect my vote). By throwing in that option, Abacus effectively split the opponents of an early election, then took part of that split and lumped it in with supporters to fit the narrative.
Again, the key here is that Abacus isn’t making anything up. Their polling has been quite accurate, and they follow standard industry methodology. What they have done instead is find ways to ask questions that take accurate data and rearrange it rhetorically to fit a narrative.
That’s what we all need to be on the lookout for, from all pollsters.
We have to look beyond the topline numbers, and make sure there isn’t an agenda at play behind the scenes.