In the last few weeks we have seen two very different examples of how technology, polling and data interact with the democratic process — the Brexit referendum and the 2016 Federal Australian election.
What’s fascinating about both cases is that while technology is shifting the way our opinions are created and canvassed, prevailing polling methods failed to accurately reflect the mood of the nations.
In the first instance, the Brexit referendum aroused many disparate sentiments among the UK population, and despite what was being said, tweeted and shared in the lead-up, when Britain decided to leave the EU on June 23, there was one resoundingly common sentiment: shock. Neither side of the debate had predicted that Britain would actually exit the EU – nor did the opinion polls.
Only days before the Brexit referendum, YouGov, an international online market research group, conducted a poll that had the Remain vote at 51% and the Leave vote at 49%, commenting:
‘Our current polling suggests the race is too close to call, but the recent trend has been towards Remain, just as other referendums in the past have shown late movement towards the status quo.’
This suggests that in the absence of accurate voting data, YouGov made a call because the alternative had no precedent. As statistician Andrew Gelman explains, the approach was flawed because the data wasn’t rich enough and wasn’t actually reflective of the voters, for five key reasons:
- The polling data wasn’t representative of who actually voted (Remain voters were more likely to be reached than Leave voters)
- Surveys are a poor way of capturing voting intentions (people said Remain or undecided but ultimately voted Leave)
- People changed their minds on the last day
- The polling data didn’t accurately capture the people who decided to vote on the day
- Ordinary sampling variability (the random error inherent in any sample)
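The last point is worth making concrete. A short Python sketch of sampling variability follows; the 51.9% Leave share is the actual referendum result, while the 1,000-voter sample size and the simulation itself are illustrative assumptions, not YouGov’s actual methodology:

```python
import random

random.seed(42)
TRUE_LEAVE = 0.519  # actual Brexit Leave share of the vote
N = 1000            # assumed poll sample size (illustrative)

def simulate_poll(n=N, p=TRUE_LEAVE):
    """Return the Leave share observed in one simulated poll of n voters."""
    return sum(random.random() < p for _ in range(n)) / n

# Run many simulated polls and see how often pure sampling noise
# puts Remain ahead even though Leave leads in the population.
polls = [simulate_poll() for _ in range(10_000)]
remain_ahead = sum(share < 0.5 for share in polls) / len(polls)

# Standard ~95% margin of error for a proportion from a sample of N
moe = 1.96 * (TRUE_LEAVE * (1 - TRUE_LEAVE) / N) ** 0.5

print(f"margin of error: ±{moe:.1%}")
print(f"share of simulated polls showing Remain ahead: {remain_ahead:.1%}")
```

With these assumptions the margin of error alone is around ±3 points, so a meaningful fraction of perfectly conducted polls would show Remain ahead by chance, before any of the other four problems even come into play.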
In Australia, voter turnout is much more predictable (given that we have compulsory voting), yet we’ve seen a similar thing happen in the recent Federal election. As the votes started to roll in, a murky picture emerged, and when counting closed at 2:00am, Australia still had no idea who the next leader of the country would be.
What is extraordinary about both examples is that neither outcome aligned with the polling information. In the case of Brexit, the polls maintained a win in favour of Remain.
In Australia, in the last few days of the election, the Nine-Galaxy poll predicted a majority to the Liberal party, with a 3.4% swing to Labor.
However, at the time of writing, the Liberal Party had won by the smallest of margins, with five seats still being counted — a much closer contest than anyone had predicted.
So what do these discrepancies between opinion polls and actual election results tell us about the way we measure public sentiment? While political parties and the media rely on these polls to give them a sense of how the winds are blowing, the above examples demonstrate that polls are not necessarily the only weathervane we should rely upon.
A snapshot, not a movie
In Australia, there are a number of opinion polls, each of which uses different methods. According to The Conversation, the three most common methods include:
- Phone polls (phone calls conducted by a person)
- Robopolling (computer calls)
- Online panelling (samples selected from a wide database)
Most of the major national polls use a combination of all three. For example, the most well-known of the major polls, Newspoll, uses both online panelling and robopolling, whereas Galaxy uses online panelling and phone polls.
And while polls can provide a valuable glimpse into what is happening, they have their limitations. In the article Opinion polls explained: How to read them and why they matter, ABC election analyst Antony Green comments:
‘People change their opinion, people focus more on election campaigns. If opinion polls always told you what the result of the election would be, then parties wouldn't spend money campaigning.’
What a poll can show you is what is happening in the present. It cannot predict how events or a campaign are going to affect voters.
Another limitation is that the data can be framed to promote a particular point of view. This may be done to create an entertaining spin or to justify a decision, but it also creates a disconnect between the original information and the conclusions drawn from it, making the data appear more biased than it actually is.
Another issue is that the method of polling is not keeping up with what is happening in society and therefore key sectors of the population aren’t being represented.
For example, not all of the major polling companies include mobiles in their calling, a fact that seems unbelievable in this day and age. Yet according to the Australian Communications and Media Authority (ACMA), a third of Australians use only a mobile, and nearly half of those users are under 34 years of age.
This inability to move with the times means that polls are not sampling a truly representative portion of the community.
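To see how much that can matter, here is a back-of-the-envelope coverage-bias calculation. The one-third mobile-only figure is the ACMA statistic above; the support percentages for each group are invented purely for illustration, not real polling data:

```python
# Coverage-bias sketch. The mobile-only share comes from the ACMA
# figure cited above; the support figures are hypothetical.
mobile_only_share = 1 / 3

support_landline = 0.52     # assumed support among landline-reachable voters
support_mobile_only = 0.44  # assumed support among (younger) mobile-only voters

# True population support averages over everyone...
true_support = ((1 - mobile_only_share) * support_landline
                + mobile_only_share * support_mobile_only)

# ...but a landline-only poll never reaches the mobile-only third,
# so its estimate is simply the landline group's support.
landline_poll_estimate = support_landline

bias = landline_poll_estimate - true_support
print(f"true support:  {true_support:.1%}")
print(f"poll estimate: {landline_poll_estimate:.1%}")
print(f"coverage bias: {bias:+.1%}")
```

Even with this modest assumed gap between the two groups, skipping mobiles inflates the estimate by nearly three points, which is larger than the margins that decided both contests discussed here.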
The first social media president
So is the solution to go where the people are? Political personalities are becoming extremely adept at building their social media presences, yet few have been able to truly harness the potential of the medium, to discover insights about their voters.
A notable exception is the 2012 Obama campaign. It wasn’t just remarkable in that he won a second term, but it was the first time that Facebook was considered a formidable election tool — and not only for promotion, but to understand what voters were thinking.
The Democrats employed a large group of data engineers who realised that they could get remarkably detailed insights from the information found on the social media site.
This not only meant that they could create incredibly targeted campaigns, but they also got a fantastic view of what the campaign was doing at any given time.
As Sam Graham-Felsen, Obama’s chief blogger, said of social media’s potential back in 2008: ‘This is the Moneyball moment for politics. If you can figure out how to leverage the power of friendship, that opens up incredible possibilities.’
Social media gives parties a direct road into a voter’s brain, and can also give us a sense of what the outcome might be. For example in the recent Federal election Meltwater, a media intelligence platform, tracked the amount of conversations about the two major parties leading to 2016 election on Facebook, Twitter and blogs and found:
‘In the month the Liberal Party received 57.7 per cent of the total number of conversations around the two prominent parties,’ the data, provided exclusively to news.com.au, revealed.
Meltwater said that by monitoring these conversations, you could see that the race was going to be very close ‘right down to the line’. But the real strength of social media is that we get a very strong picture of how people are feeling in real time. As Meltwater commented to news.com.au:
‘Social media is an exceptional barometer as to the feeling and sentiment of the general public.’
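The share-of-voice metric behind Meltwater’s 57.7 per cent figure is simple to sketch. The posts below are invented toy data standing in for the Facebook, Twitter and blog streams a media-intelligence platform would actually track:

```python
from collections import Counter

# Toy posts standing in for tracked social media streams (invented examples).
posts = [
    "Liberal tax plan looks solid",
    "Labor's NBN policy makes sense to me",
    "Not convinced by the Liberal campaign so far",
    "Labor debate performance was strong",
    "Liberal ad everywhere today",
]

PARTIES = ("liberal", "labor")

def share_of_voice(posts, parties=PARTIES):
    """Count posts mentioning each party and return each party's share
    of the total party-mentioning conversation."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for party in parties:
            if party in text:
                counts[party] += 1
    total = sum(counts.values())
    return {party: counts[party] / total for party in parties}

print(share_of_voice(posts))  # Liberal gets 3 of 5 mentions here
```

Note that this measures volume of conversation, not approval: a flood of negative posts raises a party’s share of voice just as surely as praise does, which is why platforms like Meltwater pair mention counts with sentiment analysis.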
So it might not even be a question of who comes out on top: as technology develops, we can get a much clearer idea of the issues that sway people and how they respond to debates and even individual tweets — not as a broad sweep, but in their own language, from their own accounts.
As is often said about social institutions, the theory might be sound, but humanity is not. And the same can be said for polling methods. Even if the methodology is excellent, there is only so much we can predict, because the tides of opinion are not static, but constantly churning and moving.
In order for polling to be truly effective, technology needs to catch up so that it can capture, in real time, who citizens are voting for, why they are voting for them in that moment and what could potentially change that decision. Otherwise, we might be in for a few more surprises, especially in close contests.