Money, robots, and the truth: ideas for a more open referendum

Liz Carolan
Published in The Graph · Dec 14, 2017

Brace yourself; the referendum is coming. You, me, and the rest of Ireland are about to be bombarded by facts, rumours, misinformation, personal stories, personal attacks, photos and diagrams about abortion.

Irish people, in my experience, take a serious and reflective approach to our referendums (ahem, Britain). We want to read about what is at stake, and will seek out voices we trust to help guide our thinking.

Yet information also seeks us out; leaflets land on our doormats, and emotive posts — targeted to us with a scary precision — appear on our Facebook feeds.

So, how do we know what to trust?

There are three issues to consider here:

  1. Fake news
  2. Targeted ads, and algorithmic transparency
  3. Campaign financing

These highly connected but separate issues will have a huge bearing on our democracy in the coming months. This will affect us all, regardless of our view on the referendum question.

However, there are things we can do about them, as voters, campaigners, journalists or commentators; and perhaps most importantly as a community working together to share information and seek improvements. I’ll go through each, and then finish up with some ideas for what action could look like.

1) Fake news

First up is good old-fashioned lies masquerading as news — now known as #Fakenews. These are often articles with sensational headlines that people you know may share in good faith, but that contain untruths that can influence people’s perceptions of a topic, or discredit a person or organisation.

You have probably seen these when your aunt shares a warning to her Facebook friends that something very ordinary in their life is about to kill them (I’m looking at you, anti-vaxxers).

Handy image by IFLA — download it free here! Thanks IFLA!

At a personal level: we can inform people spreading mistruths, gently; no-one likes to be told they have been duped. My approach is to presume good intent and give the person the tools to work out whether or not they can trust content. The wonderfully dorky “International Federation of Library Associations” have a guide for spotting fake news, including a downloadable infographic for replying in the most constructive way possible.

At a community level: this is where “fact-checking” projects come in — dedicated teams examining claims made in public to see if they stand up to some basic truth tests. For this we will no doubt need to rely on (and support) news organisations (the Journal have done these in the past).

2) Targeted ads, and algorithmic transparency

Next up is targeted ads — those posts that appear with “sponsored” (on Facebook) or “promoted” (on Twitter) in tiny grey letters. You’ve seen these and are probably used to ignoring them — but they do present a number of challenges to democratic processes at a societal level. Stay with me here.

These ads are directed (“micro-targeted”) at people based on criteria set by the person paying for an ad. Algorithms do the work here; little virtual “robots” that follow rules a person sets for them. An example of a basic algorithm is a formula you put into Excel: you can tell it to add up all the numbers that appear in a column, and then find the average.
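For anyone who likes to see the moving parts, here is a minimal sketch of that same rule-following in Python (my own illustration, not anything taken from Excel or Facebook):

```python
# A basic "algorithm": a fixed set of rules applied to some data.
# Here the rules are: add up every number in the column, count them,
# then divide the total by the count to get the average.

def column_average(column):
    total = 0
    count = 0
    for value in column:
        total += value
        count += 1
    return total / count if count else 0

print(column_average([4, 8, 15, 16, 23, 42]))  # prints 18.0
```

Ad-targeting algorithms are the same idea, just with far more data and far more rules.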

An example of a “sponsored” ad I got today

In the case of social media ads, algorithms look at all of the activity of billions of accounts to work out, for every person, where you live / work / hang out, how old you are, what you are interested in, who your friends are, your political leanings, if you like classical music etc. Advertisers then get to set some rules for these robots to pick out exactly who they want to see particular pieces of content: ads, articles, photos, videos etc.

These can be very specific — for example, today I got sponsored content from Fianna Fail that was targeted at people in Ireland who like classical music (I didn’t really know I liked classical music, but Facebook did — I think because a friend checked me into a concert a few years ago).

Facebook allows us to see why we were targeted with particular ads (details on how to do this below)

I just went through Facebook’s “ad manager” and was able to instruct it to find women aged between 20 and 35 within 5km of Stillorgan with a wedding anniversary coming up in the next 2–3 months: it turned up about 1,000 people. It doesn’t tell me who they are, but from €11 per day I can target these women with ads for my (hypothetical) gift shop in the Stillorgan shopping centre. No harm there, maybe I would get a few sales, and help some people get a good gift for their husband/wife.

Facebook’s ad manager, which allows you to target key demographics.
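To give a flavour of what that targeting rule amounts to, here is a deliberately simplified sketch. It is purely illustrative: the profile fields and matching logic are my own assumptions, not how Facebook actually works under the hood.

```python
# Purely illustrative sketch of an audience rule like the one above.
# The Profile fields are hypothetical; real platforms infer attributes
# like these from activity across billions of accounts.

from dataclasses import dataclass

@dataclass
class Profile:
    gender: str
    age: int
    km_from_stillorgan: float
    days_to_anniversary: int  # days until the next wedding anniversary

def matches_campaign(profile: Profile) -> bool:
    """The rules the advertiser sets; the platform applies them at scale."""
    return (
        profile.gender == "female"
        and 20 <= profile.age <= 35
        and profile.km_from_stillorgan <= 5
        and 60 <= profile.days_to_anniversary <= 90  # roughly 2-3 months away
    )

profiles = [
    Profile("female", 28, 3.2, 75),  # matches: would be shown the ad
    Profile("female", 41, 1.0, 80),  # outside the age rule
    Profile("male", 30, 2.5, 70),    # excluded by the gender rule
]
print(sum(matches_campaign(p) for p in profiles))  # prints 1
```

The platform never hands the advertiser the list of people; it just applies the rules and shows the ad to whoever matches.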

The women who would receive my ad are able to look up why they are seeing it (so can you: in the top right corner of the post, click on the “…” (Facebook) or “∨” (Twitter) symbol and select “Why am I seeing this?”).

How you can work out why you are being shown a particular ad on Twitter

This ability to see who is targeting them is a very basic form of “algorithmic transparency”, or the ability to see the rules behind the targeting. But this information is only available to me, Facebook, and the women getting the ad, and only for the period that the ad is in their feed. It will then vanish without a trace.

Why does algorithmic transparency matter?

I’m glad you asked. While it is not such a big issue in the scenario where I am selling anniversary gifts, it is in politics.

Firstly, if I am running a referendum campaign and I tell an untruth about the other side that only a small, select group of people can see for a short period of time, it can go unchallenged. It is not out in the open, where it could come to the attention of fact checkers or those being discredited, and where it could be challenged or countered.

Secondly, it makes it very difficult to monitor who is spreading information in a campaign, and who is putting up the money for this to happen.

Spending in political campaigns is regulated, and rightly so; we as a country need to know how much money is being spent, by whom, whose money it is, and what it is being spent on.

Groups must, by law, supply this information to the state body in charge of Standards in Public Office (SIPO), and it is then published so we can all examine and question it. Hurray for transparency!

In the UK this approach enabled journalists to look at Labour’s declared spending in the 2015 election, and find that the “Ed Stone” was omitted from the spending returns, which led to a fine. Hurray for accountability!

Micro-targeted ads are almost impossible to trace, as such a small group see them, and then they vanish. This means that we cannot always tell who has paid for them to be there. This was famously an issue in the recent US presidential election, where Russia was found to have paid for ads aimed at influencing the result (let’s not go there).

Which brings us to…

3) Who is paying for all of this?

In referendums, groups are limited in the amount of money they can accept from individuals and companies, and large donations are published so we can see who is paying for what we see. Foreign donations are not allowed by law, in order to avoid foreign interference in our elections (there’s a debate about this at the moment, but I won’t get into it right now).

Both sides of the abortion debate have been campaigning for years, through protests, leaflets, jumpers, badges etc. Regardless of the legal restrictions and disclosure rules on financing — citizens, in my view, have the right to know who is paying for each of the organisations involved in this debate.

This should be achieved through pro-active transparency by all sides on where their money is coming from.

But there are steps we can take to keep an eye on what is happening, and here are some suggestions I have:

Ideas:

  1. We start monitoring the targeted ads (and leaflets for that matter) we are receiving and sharing them with each other; in the US ProPublica developed a tool for this, and I am sure that a group of us could come up with something that can work in Ireland.
  2. We ask the organisations we encounter to be as proactive as they can in being open about where their money is coming from and going*; be it a dedicated page on their website or even an openly readable google sheet.
  3. We ask for greater algorithmic transparency, in particular in the domain of political advertising and campaigning.
  4. We support those organisations who will do the fact-checking for us – this doesn’t happen for free and takes staff and volunteer time.

Anything I am missing? Have you any ideas on what a tool to start monitoring ads could look like?
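As a very rough starting point, and purely as a sketch of my own (not ProPublica’s tool, with entirely made-up field names), a crowdsourced monitoring effort might collect records that look something like this:

```python
# Illustrative only: one way volunteers could log sightings of targeted
# political ads into a shared CSV file. All field names are made up.

import csv
import datetime
import os

AD_LOG = "referendum_ads.csv"
FIELDS = ["seen_at", "platform", "advertiser", "targeting_reason", "screenshot_url"]

def log_ad(platform, advertiser, targeting_reason, screenshot_url=""):
    """Append one ad sighting, writing a header row if the file is new."""
    new_file = not os.path.exists(AD_LOG)
    with open(AD_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "seen_at": datetime.datetime.now().isoformat(timespec="seconds"),
            "platform": platform,
            "advertiser": advertiser,
            "targeting_reason": targeting_reason,  # from "Why am I seeing this?"
            "screenshot_url": screenshot_url,
        })

log_ad("Facebook", "Example Campaign Group",
       "People in Ireland interested in classical music")
```

Something that captures this automatically from the feed, the way a browser extension could, would be the real goal; a shared spreadsheet is just the simplest place to start.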

I think we can use the next few months to set an example for what openness can do for our democracy, and hopefully help each other navigate the deluge of information while we are at it.

*Here is an example by experimental political party “Something new” led by James Smith https://somethingnew.org.uk/about/finances/party.2486.html

Liz Carolan

Exec Director of Digital Action, founder of Transparent Referendum Initiative.