---
title: Argument quality and fallacies
tags: live-v0.1, communication, misinformation
permalink: https://c19vax.scibeh.org/pages/argumentquality
---
{%hackmd 5iAEFZ5HRMGXP0SGHjFm-g %}
{%hackmd GHtBRFZdTV-X1g8ex-NMQg %}
# Argument quality and fallacies
Arguments can be good or bad, and that difference is not just a matter of subjective preference. Rather, argumentation research has spent centuries identifying how and why some arguments are stronger than others. This includes understanding why some arguments (so-called ‘fallacies’) can fool the unwary into thinking they provide good reasons for believing or doing something when, in fact, they do not.
In thinking about argument quality, we are interested not just in what someone might subjectively find persuasive or convincing, but also in what ought to be convincing to a rational critic. As a result, rational argument involves ‘norms’ or ‘standards’ against which actual arguments can be compared. These standards provide a yardstick for argument quality and allow us to make statements about when particular arguments are weak or strong.
As we typically don’t want the wool pulled over our eyes, we should care about whether an argument genuinely provides a good reason, at least when we are on the receiving end. At the same time, decades of research on persuasion and attitude change suggest that one of the biggest factors in persuasive success, if not the biggest, is the actual quality of the argument put forward in a persuasive message (see e.g., [Hoeken et al., 2020](https://doi.org/10.1080/00913367.2019.1663317)).
Thinking about argument quality in the context of the vaccination debate is consequently of direct practical relevance. However, it is also clear from the research literature that individuals vary in their ability to distinguish good arguments from bad (see e.g., [Kuhn, 1991](https://books.google.com/books?id=q0ra0DxRTNEC)). Educational psychologists, in particular, have developed and tested broader programs for improving argument skills (e.g., [Kuhn & Moore, 2015](https://www.tandfonline.com/doi/abs/10.1080/23735082.2015.994254)), and useful textbooks for improving skills exist at every level.
However, a basic understanding of how and why some arguments are better than others can be provided quite straightforwardly, as can illustration of why commonly encountered arguments in the vaccination debate are fallacious.
## Argument quality, contradiction, and relevance
A fundamental issue in the context of argument quality is (self-)contradiction. Argumentation is maximally poor where it involves contradictory positions, and understanding why this is so helps shed light on what argument is trying to achieve. Ideally, we want our beliefs about the world to be true, and the actions we take to be effective in bringing about the outcomes we desire. Argument helps us arrive at accurate beliefs and helps us identify useful actions. In this context, contradiction is to be avoided because contradictory statements or contradictory sets of statements _cannot be true_ no matter how things are in the world. Similarly, contradictory actions undermine our goals whatever they are.
Holding contradictory beliefs undermines our ability to make effective choices in our lives, and when we encounter such inconsistency in others, we need look no further to know that there is something deeply wrong. Of course, the mere fact that a set of beliefs is consistent is not enough to guarantee that they are true. For example, it is consistent to believe the surface of Mars is very hot if you don't have any data about its temperature. However, "The surface of Mars is exceedingly hot" and "The surface of Mars is exceedingly cold" cannot both be true no matter what the data.
Being clear about this is useful, because experience suggests that denialist discourse, particularly in online social media fora, is frequently riddled with contradiction and inconsistency, and the weakness of certain positions often becomes apparent after only a few conversational exchanges, when contradictions start to emerge.
But thinking about contradiction isn't just useful for understanding when a position or set of beliefs is flawed. It is also useful for understanding what makes arguments weak or strong. The main (though not only) goal of providing arguments is to _change others’ beliefs_ (see e.g., p. 5 of [van Eemeren et al., 1996](https://books.google.com/books?id=FXL_AQAAQBAJ)). We can thus think of a strong argument as one that would require us to change our beliefs if we accept it, whereas a weak argument has no such force.
One way to understand that ‘force’ is to notice that accepting the argument(s) while not changing our beliefs about the claim it supports would lead our views to become inconsistent ([Hahn, 2020](https://www.sciencedirect.com/science/article/pii/S1364661320300206?casa_token=PksqLnVEBD8AAAAA:Zw9xmG33INAAqFDBRvT404xXBrOdwCaefKw3n0JwXOFb5BSjLvlhsBE_eV7LgZblwdy4Thg)). By contrast, an entirely irrelevant and, hence, weak argument has no bearing on a claim, so that it would make no difference whether we reject or adopt it.
This provides a useful tool for thinking about argument quality, in general, and fallacies, in particular, because fallacies are typically fallacies of relevance. A simple test of whether or not an argument provides a relevant reason for a claim or not is to think about what impact, if any, it would have if the argument were true as opposed to false. If both possibilities are equally compatible with the claim, the argument actually provides no grounds for changing our beliefs.
In many, if not most, real-world contexts, the relationship between a claim and the evidence an argument offers in support of it is such that the evidence only makes the claim more likely; it does not guarantee that the claim is true. This is because there could be multiple ways in which the evidence came about. Here, the way to think about relevance is to consider how likely it is that we would observe the evidence if the claim were true, as opposed to if it were false. The more likely the former, and the less likely the latter, the more ‘diagnostic’, and hence relevant, the evidence is, and the more an argument containing that evidence will force us to change our beliefs to avoid inconsistency. In the limit where the evidence is equally likely whether the claim is true or false, by contrast, that evidence is entirely non-diagnostic, irrelevant to the truth or falsity of the claim, and the argument containing it is maximally weak.
To illustrate: anecdotal evidence that ‘an acquaintance got a headache after receiving the vaccine’ is less compelling evidence of vaccine side effects than a significant increase in headaches in the vaccine condition of a randomized controlled trial, because there are so many reasons unrelated to vaccination that could give a specific individual a headache. This makes that piece of anecdotal evidence rather undiagnostic, and an argument based on it rather weak.
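To make ‘diagnosticity’ concrete, here is a minimal sketch in Python of how a likelihood ratio captures evidence strength and drives belief change. All probabilities below are invented purely for illustration; they are not estimates from any actual study.

```python
# Diagnosticity as a likelihood ratio: how much more likely is the
# evidence if the claim is true than if it is false?
# All numbers are invented, purely for illustration.

def likelihood_ratio(p_evidence_if_true, p_evidence_if_false):
    """Ratios near 1 mean the evidence is non-diagnostic."""
    return p_evidence_if_true / p_evidence_if_false

def update_belief(prior, lr):
    """Bayesian update of a prior probability via a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

# Anecdote: headaches are common whether or not one was just vaccinated,
# so the two probabilities are nearly equal and the ratio is close to 1.
lr_anecdote = likelihood_ratio(0.30, 0.25)

# Trial: a significant excess of headaches in the vaccine arm of a
# randomized controlled trial is far more likely if the side effect is real.
lr_trial = likelihood_ratio(0.90, 0.05)

prior = 0.10  # prior belief that the vaccine causes headaches
print(update_belief(prior, lr_anecdote))  # ~0.12: belief barely moves
print(update_belief(prior, lr_trial))     # ~0.67: belief moves a lot
```

The same two quantities, how likely the evidence is if the claim is true and how likely it is if the claim is false, reappear in the examples throughout the rest of this page.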
Over the centuries, argumentation theorists have developed normative standards (‘yardsticks’) for evaluating argument quality that underpin these basic intuitions and considerations in great detail. These include logic, probability theory, and a long tradition of identifying different types of arguments (so-called ‘argument schemes’) and providing so-called ‘critical questions’ to help gauge their strength, including the identification of argument schemes that are often fallacious. These normative standards offer powerful tools for thinking precisely about argument quality, but the intuitive considerations just described capture the foundation of much of what determines argument quality, and they provide a practical, general guide.
## Non-arguments, weak arguments and fallacies
### 1. Factual error
When we think about the quality or strength of an argument, we are trying to evaluate how much the reason(s) given actually support the claim. We can do this hypothetically (i.e., “how good an argument would this reason be if it were, in fact, true?”), but when we consider the actual extent to which that reason should convince us, we also need to consider whether the reason itself is true. To illustrate: if vaccines made us magnetic, that would (for most people) be a significant reason for vaccine hesitancy. But this is moot, because Covid vaccines do not, in fact, make people magnetic. In this way, a large proportion of the arguments against vaccination, particularly in online social media, are so poor as to be irrelevant, because they rest on claims that are demonstrably false. We detail the most popular of these on our [Myths](https://hackmd.io/@scibehC19vax/misinfo_myths) page.
### 2. Argument fallacies
What then of the rest? As outlined above, there are at least some useful criteria for evaluating the quality of an argument. In this section, we look at a few common types of argument in more detail. Specifically, there are types of arguments that tend to be weak: so-called “fallacies of argumentation”. These fallacies have been collected in catalogues (for example, [Woods et al., 2004](https://www.worldcat.org/title/argument-critical-thinking-logic-and-the-fallacies/oclc/1091196952?referer=di&ht=editionh)) and are widely discussed in textbooks as “traps for unwary reasoners”. In other words, they are arguments that might _seem_ strong but are actually quite weak.
Anyone familiar with online discourse will have encountered at least some of these, such as “ad hominem arguments”, “slippery slope arguments”, or “arguments from ignorance”.
There is one important thing to note before discussing examples: part of what makes the fallacies tricky (and has led to ongoing academic research on them) is that many of them are typically weak, but _not always_ bad. This means one also has to think about the specific content; simply identifying arguments as instances of a particular type is not enough. The following sections provide ways to help with that assessment.
#### 2.1. Arguments from ignorance
Arguments from ignorance are arguments that use the _absence of evidence_ in support of a claim. A classic textbook example is the following:
> Ghosts exist, because nobody has proven that they don’t.
which gives a rather weak reason for believing in the existence of ghosts. Arguments from ignorance are common in the context of vaccines, and Covid more generally, and many people will be familiar with the phrase _“absence of evidence is not evidence of absence”_. Before discussing specific Covid examples, it is important to understand the general concerns with taking absence of evidence as evidence of absence.
The ghosts example illustrates why we need to be careful in treating absence of evidence as evidence of absence: it seems incredibly difficult to prove even the positive existence of ghosts, and proving a negative is even harder. This means it is quite likely that we would lack such evidence regardless of whether ghosts actually exist or not. In other words, the failure to prove that they don’t exist does not seem _diagnostic_: that failure seems possible whether ghosts actually exist or not, and, as a result, it does not provide much evidence either way.
The same goes for conspiratorial thinking, where the very absence of evidence for a conspiracy is taken not as evidence against the conspiracy, but as evidence _for_ it, because it indicates a ‘cover-up’ (for example, [Cook et al., 2020](https://theconversation.com/coronavirus-plandemic-and-the-seven-traits-of-conspiratorial-thinking-138483)). Clearly, nobody would expect evidence of a cover-up for a conspiracy that doesn’t actually exist in the first place. But if one also believes there would be no evidence if a conspiracy existed, then one cannot infer anything from that absence: absence of evidence that is expected either way is _entirely uninformative_, providing neither evidence for nor evidence against, as it is equally compatible with both.
For evidence to be relevant, it has to be diagnostic: a smoke alarm that was equally likely to sound when there was smoke as when there was not would not be helpful in alerting us to fires. Likewise, a medical test that would be equally likely to return a positive result regardless of whether the disease was present or absent would be entirely undiagnostic.
By contrast, a diagnostic piece of evidence is one that is much more likely to be found if the claim at issue is true, than when it is false. Thinking about both of these possibilities is consequently important to understanding argument strength, and provides an essential tool for navigating the fallacies.
Thinking about diagnosticity helps us identify cases where the absence of evidence _does_ constitute relevant evidence of absence. If we have conducted tests in which we can confidently expect to see something, but fail to do so, then that is informative. In particular, the absence of a particular type of adverse event in a sizeable vaccine safety trial rightly raises our confidence that such an adverse event will not occur, or, at least, is sufficiently rare that we would not expect to see it in a trial of that size.
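As a rough sketch of this point (using a hypothetical side-effect rate and hypothetical trial sizes, not figures from any real trial), the probability of seeing no cases at all shrinks rapidly as a trial grows, which is exactly what makes the absence of cases in a large trial diagnostic:

```python
# Probability of observing zero cases of a side effect in a trial,
# assuming independent participants. The rate and trial sizes below
# are hypothetical, chosen only to illustrate the point.

def p_no_events(true_rate, n_participants):
    return (1 - true_rate) ** n_participants

# If a side effect struck 1 in 1,000 people, a 20,000-person trial
# would almost certainly catch at least one case...
print(p_no_events(0.001, 20_000))  # ~2e-9: seeing nothing would be astonishing

# ...whereas a 100-person trial could easily miss it, so its absence
# of evidence is far less diagnostic.
print(p_no_events(0.001, 100))     # ~0.90: seeing nothing is expected
```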
Now that there have been [billions of vaccine doses given worldwide](https://c19vax.scibeh.org/pages/c19vaxfacts#How-many-people-have-received-the-COVID-19-vaccines), evidence on what will, and what will not, occur as a vaccine side effect has become robust. As a result, much of the argument among vaccine sceptics has shifted to putative long-term effects. Here, the absence of evidence provided by the ongoing Covid vaccination campaign is less diagnostic, because of the comparatively short time horizon we have experience with: there can be no evidence against side effects that emerge 20 years after a vaccination if a vaccine has been around for only a year. However, it does not follow that there _will_ be such long-term consequences. On the contrary, relevant evidence from _other_ vaccines argues against the possibility of long-term consequences: with other vaccines, we simply do not see side effects that start only after many years (see, for example, Table 8 in [Wiedermann et al., 2014](https://www.intrinsicactivity.org/2014/2/1/e2/IA20140201-e2.pdf)).
:::warning
Our pages on [the vaccine development process](https://c19vax.scibeh.org/pages/vaxprocess) and [side effects of the COVID-19 vaccine](https://c19vax.scibeh.org/pages/sideeffects) cover vaccine safety and the low risk of side effects in more detail.
:::
#### 2.2. Ad hominem arguments
Probably the most common fallacy in online discourse is the “ad hominem argument”. This is an argument that attacks the source, rather than the content of an argument. It attacks the player, not the ball. Not only are ad hominem tactics common in online discourse, so too is the charge that an ad hominem ‘foul’ has been committed.
Like arguments from ignorance, ad hominem arguments can be tricky to navigate, because not all ad hominem arguments are fallacious or irrelevant. This is because there are many occasions when the reliability of a source should matter to how we evaluate an argument or piece of evidence, and here, depending on the specific content, the claims being made about the source could be relevant. This means that thinking about ad hominem arguments is closely linked with thinking about issues of trust in scientists or policy makers. Once again, thinking about diagnosticity is important here, and it is helpful to think about the role of sources more generally, before looking more closely at specific Covid examples.
How then do sources and trust in their reliability feature in argument? Researchers have distinguished different aspects of what someone is doing when they communicate with us about facts ([Collins et al., 2018](https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00018/full)). First, sources implicitly or explicitly provide _testimonial evidence_ that what they say is true. Someone claiming that something is the case can itself be taken as evidence that it is true (unless they specifically indicate otherwise): John saying “there is a fire” constitutes evidence that there is a fire. How strong that evidence actually is, however, depends on how accurate or reliable John is. Second, sources _transmit_ information. If John says “there is a fire, because I can see smoke”, John not only (implicitly) provides testimonial evidence that there is a fire and that he can see smoke, he is also transmitting information (the presence of smoke) that (if true) is diagnostic of fire. Distinguishing these aspects helps clarify when information about a source is relevant and when it is not. And this, in turn, allows us to distinguish fallacious ad hominem arguments from non-fallacious, appropriate ones.
Clearly, with respect to testimonial evidence, where the source’s claim _is_ itself the evidence, the accuracy or reliability of the source is always relevant. In fact, it determines the strength of the evidence. John shouting “there is a fire” is good evidence that there is a fire if John is competent and conscientious. It is worthless if John regularly makes things up.
With respect to information transmission, however, the situation is more nuanced (imagine that, instead of John shouting “there is fire”, we have John saying “there is smoke” as evidence for a fire). Here, the reliability of the source is only relevant to the extent that there is _uncertainty_ about the accuracy of what is being transmitted. If the listener can verify that content independently, then the source becomes irrelevant (if, alerted by John, we look out the window and see the smoke ourselves). Often, a source provides a link to another source for a claim. If the listener can check that link herself, then the initial source becomes redundant. In general, if content can be assessed without recourse to the (initial) source because of listener expertise, then the source is irrelevant; this underlies the intuition that in many contexts it should not matter who provides an argument, and that we should instead be focussing on the argument itself.
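A minimal toy model of the testimonial case may help (this is our own illustrative sketch, not the model from Collins et al., 2018, and all numbers are invented): suppose a source reports a fire with high probability when there is one, and with some false-alarm probability when there is not. The value of the testimony then hinges on that false-alarm rate:

```python
# Toy model of testimonial evidence. A source reports "fire" with
# probability `hit_rate` when there is a fire, and with probability
# `false_alarm` when there is not. All numbers are invented.

def p_fire_given_report(prior, hit_rate, false_alarm):
    """Posterior probability of fire, given that the source reports one."""
    p_report = hit_rate * prior + false_alarm * (1 - prior)
    return hit_rate * prior / p_report

prior = 0.01  # fires are rare

# A competent, conscientious John: his shout is strong evidence.
print(p_fire_given_report(prior, hit_rate=0.95, false_alarm=0.01))  # ~0.49

# A John who regularly makes things up: his shout is nearly worthless.
print(p_fire_given_report(prior, hit_rate=0.95, false_alarm=0.90))  # ~0.01
```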
Where source reliability itself is irrelevant, however, any argument attacking the source is irrelevant too. Here, an ad hominem argument will always be fallacious.
In contexts where source reliability could at least be relevant in principle, what kinds of considerations might rightly raise doubts about the accuracy of testimony or the faithfulness of information transmission? Lack of subject matter expertise, poor past track record, negative appraisals by others, and bias all constitute potentially relevant concerns. As a result, much of the ad hominem argumentation found in the context of Covid vaccination seeks to establish one or more of these concerns.
Accusations of bias are common, in particular claims that a source has received funding that would create a potential conflict of interest. Conflicts of interest are indeed important, which is why they need to be disclosed in the context of publishing academic research. There are two things to bear in mind when evaluating potentially biased sources, however.
First, a biased source may nevertheless provide relevant information: in particular when the source provides information that goes against the direction of the assumed bias, that bias can make the communication more, not less, credible. So the presence (and even more so the mere possibility) of bias should not typically be enough to entirely discredit a source and make its communications worthless (particularly not with respect to information transmission, see above).
Second, the grounds for attributing bias in online debates about vaccination are often far too vague. The mere fact that someone in an academic department has received funding from a particular institution is (typically) not enough to create a conflict of interest for other researchers working in that department: given how research funding actually works, it cannot be assumed that they benefitted, nor even that they know this funding exists.
Similarly, being seen in a picture with someone may not mean one shares that person’s views, particularly when that person themselves meets many, many people in many different contexts.
Again, simple consideration of diagnosticity helps evaluate the relevance of such claims. What circumstances would lead one to see someone in a picture with this other person? Do these circumstances _depend_ on the claim in question? The less likely it is that one came to be seen in a picture only because one shared a common view on the claim in question, the less diagnostic such a “connection” will be.
Diagnosticity also helps with this argument about bias:
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">True but this vaccine is unique in that each new eligible group ultimately lead to truly gigantic profits for an already very I fkiebtial heavily lobbying Pharma industry.<br><br>So listening to truly independent experts is crucial. At the moment that seems to be JCVI</p>— Paul Ryan (@pjryan51) <a href="https://twitter.com/pjryan51/status/1455805565511344131?ref_src=twsrc%5Etfw">November 3, 2021</a></blockquote>
Potential profit can indeed be a powerful source of bias, which is why commercial interests must be declared as possible conflicts of interests in many contexts. However, in the specific example, the possibility of potential profit does not seem diagnostic: given the huge global demand for vaccines (with many countries still largely without access to vaccines), demand vastly outstrips supply. There would consequently be sales whether or not vaccines are approved for this particular group.
Hopefully, these considerations will help with the scrutiny of other instances. Unfortunately, considerations about bias and conflicts of interest will likely remain important in Covid debates, not least because there are lobby groups running systematic campaigns to promote conspiracies and undermine scientists and policy makers, to shift the argument (see [Tomori, 2021](https://www.nature.com/articles/d41586-021-02993-7)), and to promote science denial (see [Diethelm and McKee, 2009](https://academic.oup.com/eurpub/article/19/1/2/463780)).
#### 2.3 Slippery slope arguments
Slippery slope arguments such as the following are also frequent in the current vaccine debate:
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Rice, dried beans, beef / chicken stock. I foresee the regime starving out political objectors soon, it’ll begin with vaccine passports and expand out. <a href="https://t.co/jIeK0e2Bhx">https://t.co/jIeK0e2Bhx</a></p>— Cernovich (@Cernovich) <a href="https://twitter.com/Cernovich/status/1445072575059480578?ref_src=twsrc%5Etfw">October 4, 2021</a></blockquote>
Slippery slope arguments urge us not to take an action, because that action will make a future negative outcome more likely.
Slippery slope arguments are consequentialist arguments. Such arguments suggest that we should (or should not) take a particular course of action, because doing so will lead to a positive (or negative) outcome.
As a result, the strength of such arguments depends on two factors: how desirable (or undesirable) that outcome actually is, and how likely it is that the action will really bring it about. A consequentialist argument is weaker when the potential benefit is of little value to us than when that benefit is valuable. And even when it is valuable, the argument for action is weaker when the action is unlikely to be successful than when it is likely to produce the desired outcome. Both factors must be considered, as the sketch below illustrates.
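In other words, the force of a consequentialist argument behaves roughly like an expected value: probability times (dis)value. The following sketch uses arbitrary placeholder numbers simply to show how the two factors trade off:

```python
# The force of a consequentialist argument ~ probability x value of the
# outcome. The numbers are arbitrary placeholders, for illustration only.

def argument_force(p_outcome, value_of_outcome):
    """Expected (dis)value the argued-about action would bring about."""
    return p_outcome * value_of_outcome

# A dire outcome that the action is very unlikely to bring about...
print(argument_force(p_outcome=0.001, value_of_outcome=-1000))  # -1.0

# ...can carry less weight than a milder outcome the action makes likely.
print(argument_force(p_outcome=0.9, value_of_outcome=-10))      # -9.0
```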
This means that slippery slope arguments _can_ be strong arguments. There are genuine examples of slippery slope developments in the real world. But often, the chance that the undesirable outcome will be brought about by the action is remote. In particular, in the context of ‘slippery slopes’ concerning ‘liberty’ and individual rights, it should be borne in mind that in Western democracies, a state’s constitution will already seek to define the boundaries of those rights and the extent to which they are protected. This limits how far slippery slopes can actually go.
#### 2.4 Temporal order, correlation and causation
The fallacy with the fancy Latin name “post hoc ergo propter hoc” (meaning “after this, therefore because of this”) is another well-known fallacy that figures strongly in the vaccine debate. It refers to concluding that event A caused event B on the basis of the fact that B followed A. The obvious examples here are anecdotal reports of a negative event after someone received the vaccine:
> “My friend/relative etc. was diagnosed with cancer just weeks after being vaccinated.”
Clearly, mere temporal order is not enough to conclude that one event caused another, even where one event _regularly_ precedes another, as this example illustrates:
> The birds sing before the sun rises, therefore bird song causes the sun to rise.
It is essential to look not just at the occasions when one event precedes another, but also at those when that event occurs without the preceding event having taken place (yet more ‘diagnosticity’ in action).
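One way to make that comparison concrete is a simple rate comparison across both groups. The counts below are invented for illustration; they are not real surveillance data:

```python
# Compare the rate of an event (e.g., a diagnosis) after the supposed
# cause with its rate in the absence of that cause. Invented counts.

def rate(events, group_size):
    return events / group_size

rate_after_vaccine = rate(50, 10_000)    # 0.5% of vaccinated people
rate_without_vaccine = rate(50, 10_000)  # 0.5% of unvaccinated people

# Equal rates: temporal order ("diagnosed after vaccination") is
# non-diagnostic here and carries no causal force on its own.
print(rate_after_vaccine, rate_without_vaccine)
```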
This doesn’t mean that one event following another might not alert us to an important regularity. Documenting adverse post-vaccination events is a precondition to identifying side effects caused by vaccines. But additional evidence is required to be confident that it is indeed the vaccine causing those events.
#### 2.5 Statistical fallacies
Finally, there are several statistical fallacies worth mentioning here. These can ensnare reasoners because they often involve counter-intuitive effects. Many of the fallacies most relevant to vaccination involve comparisons between groups of different sizes.
One important example is this popular argument:
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Experts say if a growing percentage of fully vaccinated are winding up in the ICU, the vaccines aren’t quite the protection they were advertised to be. <a href="https://t.co/lCyCvPkGAS">pic.twitter.com/lCyCvPkGAS</a></p>— Mike (@Midnightrider98) <a href="https://twitter.com/Midnightrider98/status/1454444215585234945?ref_src=twsrc%5Etfw">October 30, 2021</a></blockquote>
This may seem a compelling argument that vaccines are not as good as we thought they were, or even failing. But that interpretation overlooks the fact that the relative group size for vaccinated and unvaccinated is changing over the course of a vaccination campaign. For any vaccine that is less than 100% effective (and that means all known vaccines), some vaccinated individuals will nevertheless become ill with negative consequences. As more and more people are vaccinated, the percentage of those who are ill that have been vaccinated will _go up by mathematical necessity_.
To see why, imagine that no one is vaccinated. At this point, no vaccinated person can become infected (because there are none). Imagine, at the other end, that _everyone_ is now vaccinated. At this point, everyone who is infected is a “breakthrough” case. So, as more and more people get vaccinated, the percentage of cases that are from vaccinated individuals has to increase over time, from 0% toward the maximum possible of 100% (for more information on breakthrough infections see our pages [here](https://hackmd.io/@scibehC19vax/c19vaxfacts#What-about-“breakthrough-infections”-Why-do-so-many-vaccinated-people-get-infected)).
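A back-of-the-envelope sketch of this necessity, assuming equal exposure in both groups and a fixed vaccine effectiveness of 90% (an illustrative assumption, not an estimate for any actual vaccine):

```python
# Share of cases occurring among vaccinated people as coverage rises,
# for a vaccine of fixed effectiveness. Assumes equal exposure in both
# groups; the 90% effectiveness figure is purely illustrative.

def vaccinated_share_of_cases(coverage, effectiveness):
    """Fraction of all cases that are 'breakthrough' cases."""
    cases_vaccinated = coverage * (1 - effectiveness)
    cases_unvaccinated = 1 - coverage
    return cases_vaccinated / (cases_vaccinated + cases_unvaccinated)

for coverage in (0.0, 0.5, 0.9, 1.0):
    share = vaccinated_share_of_cases(coverage, effectiveness=0.9)
    print(f"{coverage:.0%} vaccinated -> {share:.0%} of cases are breakthroughs")

# 0% -> 0%, 50% -> 9%, 90% -> 47%, 100% -> 100%: the share climbs by
# necessity, even though the vaccine's effectiveness never changes.
```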
Another fallacy involving groups of different sizes is Simpson’s paradox. Here, a trend that is apparent in each individual group disappears, or even reverses, when the groups are combined, because those groups vary in size. For example, early analyses in the pandemic suggested that case fatality rates were higher in Italy than in China. Yet looking at individual age groups, fatality rates were lower in Italy in every single group. This puzzling combination of results was possible because of differences in demographics across the two countries: Italy’s population is older than China’s. Given that death rates are highest among the elderly, putting all of the data into one single pot (which means ignoring age) will make rates seem worse in Italy (for more detail on this example, see [von Kügelgen et al., 2020](https://arxiv.org/abs/2005.07180)).
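A minimal numerical illustration of how such a reversal can arise (the counts are invented, not the actual Italian or Chinese figures):

```python
# Simpson's paradox with invented counts: country A has the LOWER
# fatality rate in every age group, yet the HIGHER rate overall,
# because its cases are concentrated in the high-risk older group.

cases_a = {"young": (10, 2_000), "old": (320, 8_000)}  # (deaths, cases)
cases_b = {"young": (80, 8_000), "old": (100, 2_000)}

for group in ("young", "old"):
    rate_a = cases_a[group][0] / cases_a[group][1]
    rate_b = cases_b[group][0] / cases_b[group][1]
    print(f"{group}: A {rate_a:.1%} vs B {rate_b:.1%}")  # A lower in both

overall_a = sum(d for d, _ in cases_a.values()) / sum(n for _, n in cases_a.values())
overall_b = sum(d for d, _ in cases_b.values()) / sum(n for _, n in cases_b.values())
print(f"overall: A {overall_a:.1%} vs B {overall_b:.1%}")  # yet A higher overall
```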
Likewise, Simpson’s paradox has potential application to vaccine data, because such data, too, will involve subgroups of the population (old, middle-aged, young) with varying characteristics (including the time when they received the vaccine) and different sizes, as this [example](https://www.covid-datascience.com/post/israeli-data-how-can-efficacy-vs-severe-disease-be-strong-when-60-of-hospitalized-are-vaccinated) illustrates.
All of this demonstrates a more fundamental point about statistics concerning Covid: “denominators” (the numbers we divide by when calculating percentages or rates) matter. This is well illustrated by the animations in [this Twitter thread](https://twitter.com/LucyStats/status/1417275249318666243?ref_src=twsrc%5Etfw).
By picking the wrong denominator one can easily generate misleading or meaningless statistics.
Statistical fallacies, in general, may be impossible to spot without training in probability theory and statistics. This makes it important to seek out expert sources where the interpretation of numbers is concerned.
## Summary
In summary, this is what you can do to critically evaluate vaccination debates:
1. Think about whether a reason is diagnostic!
2. For arguments involving values, think about the extent to which there really is a threat to the value: will the action in question genuinely have a negative effect? If yes, how does that balance with other values you hold?
3. Where numbers and statistics are concerned, be wary of sources without adequate statistical training.
4. And, more than anything, be aware of inconsistency – it is always a sign of argument gone wrong!
:::success
Would you like to find out more about argument quality and fallacies? We created a search query specifically for this page, which links you to other interesting resources like Twitter threads, blogposts, websites, videos and more. Check out the search query we generated for this page [here](https://hypothes.is/groups/Jk8bYJdN/behsci?q=arguments).
Would you like to know more about how we generated the search queries and how our underlying knowledge base works? Click [here](https://hackmd.io/B3R70tuNTiGy6wi9HObuSQ) to learn more.
:::
----
<sub>Page contributors: Ulrike Hahn, Stefan Herzog, James Ladyman </sub>
{%hackmd GHtBRFZdTV-X1g8ex-NMQg %}
{%hackmd TLvrFXK3QuCTATgnMJ2rng %}
{%hackmd oTcI4lFnS12N2biKAaBP6w %}