The government has withdrawn its misinformation bill. A philosopher explains why regulating speech is an ethical minefield
- Written by Hugh Breakey, Deputy Director, Institute for Ethics, Governance & Law, Griffith University
Misinformation and disinformation are major concerns worldwide. The federal government’s misinformation bill aimed to respond to the threats posed by false, misleading and harmful information. The bill met strong opposition in the Senate and has just been withdrawn.
Legal efforts to suppress misinformation are ongoing. Around the world, many countries are considering legislation that would prohibit specific types of misinformation or require online platforms to remove it.
Such laws are always controversial. They encounter some well-known practical and ethical problems – and some surprising ones.
Most obviously, censorship restricts people’s right to free speech – an important natural freedom protected in the Universal Declaration of Human Rights and international law.
Prohibiting speech is additionally concerning in democracies, because citizens have a civic responsibility to engage in debate about the laws that collectively bind us.
Free speech also has many beneficial “utilitarian” consequences. It can allow truths that were once thought false to be reconsidered and accepted. It allows for existing truths to be better understood. As John Stuart Mill famously argued: “He who knows only his own side of the case knows little of that.”
Powers of censorship can also be abused to suppress political dissent. Even if we trust the existing government to be judicious in suppressing speech, we might be nervous about the way future governments could employ such powers.
Finally, restrictions on speech can be difficult to target precisely. Even if the wording of a law is narrowly specified, it might chill speech. The threat of legal sanctions can encourage people and organisations to avoid speech anywhere near the legally specified boundary.
These are all important ethical concerns – and many of them were raised by critics of the government’s withdrawn misinformation bill.
Such concerns are not definitive, because prohibiting speech can also have socially desirable consequences. Laws against incitement to violence, defamation laws, and even protections for things like copyright are all widely accepted limits on our ability to speak freely.
Yet if we want to understand how legal attempts to suppress misinformation might be counterproductive, it is not the restriction on what we say that matters, but the consequences for what we can hear. Suppression regimes can, perversely, undermine confidence in the very beliefs they wish to protect.
Misinformation: cause or symptom?
The problem of misinformation is easy to overestimate. It is intuitive to think that misinformed people will make bad personal or political decisions and be led to adopt worrying values.
But as cognitive psychologist Hugo Mercier has argued, people often believe and share misinformation because of values they already hold and actions they already want to perform. In such cases, the misinformation is as much a symptom as a cause, and suppressing it will not change the underlying concern.
Our cognitive biases also tempt us to overplay the significance of misinformation. Confirmation bias and self-serving bias encourage us to believe that those with different values and beliefs are manipulated, credulous and misguided. This is a much more comfortable belief than the disquieting alternative that our opponents are reasonable people with legitimate concerns.
A driver of distrust
Misinformation suppression regimes can cause more – not fewer – false beliefs. Consider any important belief that you are confident is true. Think for a moment about why you believe that fact. The answer is probably that you have heard plausible evidence from credible sources supporting it. And you figure that, if there were substantial things to be said against your view, you would have heard about them.
But suppose I were to tell you there was no way you could have heard about conflicting evidence, because you have lived for years under a misinformation suppression regime. Should you now rethink your confidence in that belief?
Yes. The earlier grounds for your belief no longer apply. You can no longer justify your belief by appealing to the fact that you have heard what may be said for and against it. You are like a scientist who trusted the results of an experiment, but then discovered that any data that might have disproved the hypothesis had been systematically excluded.
Despite this change in the grounds for your belief, you might not change your mind. After all, a government body – no doubt informed by experts – has judged the supporting facts to be true. If you trust the government body, both in its capacity to provide true information (its accuracy) and in its intention to suppress only disinformation (its sincerity), then you have a new reason to accept your belief.
But here’s the problem. You need to really trust the government body. This is not the type of trust you might ordinarily put in, say, news networks or scientific experts. You might be willing to give those sources the benefit of the doubt, but remain open to the possibility their information might turn out to be false or misleading.
In an information environment where there are many different sources of information, you don’t need absolute trust in any single source. You can weigh things up for yourself, working out which sources make sense and will likely prove reliable.
But when a single body curates the entire information environment, you need near-absolute trust in that body, because its role involves actively suppressing any evidence it judges to be wrong. If you don’t have that level of trust, then the regime has removed your good reasons for accepting a true belief without replacing them with something equally compelling.
Perversely then, the rational response to a misinformation suppression regime can drive distrust.
This concern applies even to perfectly rational beings. But for imperfectly rational beings, the response to suppression can be even more dramatic.
This is because the people who are most vulnerable to misinformation about important issues are those who are already sceptical about experts and government authorities. Once these sceptics realise that these untrusted authorities are in charge of suppressing information, they will feel like they have additional reason never to trust anything the authorities say.
Understanding and autonomy
Efforts to suppress misinformation imply that the critical goal is to ensure widespread true beliefs, at least about important issues. But true belief isn’t the only knowledge-related (“epistemic”) goal individuals and societies might have.
Another goal is understanding. Someone might have a true belief, but only because they have uncritically adopted it, without any understanding of the evidence for or against it. A misinformation suppression regime might encourage sufficient trust that people accept its pronouncements. But if people do so based on faith, they do not understand their beliefs; they are not developing their critical and cognitive faculties.
In this way, a system that achieves desirable initial outcomes might set the scene for worrying long-term results, as faith in authorities undermines genuine understanding and critical interrogation. Government bodies are exactly the type of institution that democratic citizens must be vigilant in appraising. It is a civic responsibility to try to sort out when and where authorities speak honestly and accurately – and to vote and act accordingly.
We cannot fulfil this responsibility for misinformation suppression regimes, because they suppress information that could cause us to doubt its determinations. They require us to abdicate our civic responsibility to think for ourselves.
Having right or wrong beliefs isn’t all that matters to people. They also care about how they came to have those beliefs. In particular, they care about whether they have made up their own mind. Being in charge of our beliefs is a necessary part of being autonomous – a self-governing agent, able to set one’s own goals. As John Locke argued: “he is certainly the most subjected, the most enslaved, who is so in his understanding.”
A misinformation suppression regime pays no heed to this source of respect. Every piece of information comes with the invisible but omnipresent qualification: You are not getting the full story, because you cannot be trusted with the full story.
Suppressing misinformation in this way will strike those deemed at risk of forming dangerously wrong beliefs as insulting and manipulative. If our concern is with people who are vulnerable to misinformation because they make up their minds on the basis of emotion rather than reason, the last thing we should do is insult them and treat them condescendingly and paternalistically.
It’s easy to think that if we successfully suppress some information, it’s as if that misinformation never existed. This is a mistake. The action of suppressing information itself has effects.
The action has moral consequences: it disrespects people’s ability to make up their own minds, make their own mistakes and take responsibility for their beliefs.
The action has democratic consequences: it weakens the civic responsibilities of citizens by demanding uncritical faith.
Most perversely of all, the action has epistemic consequences: it undermines confidence in the very beliefs it hopes to safeguard.