# Final Essay
Lilly D’Italia
COMM 1255
April 18th, 2023
**The Effects of Online Filter Bubbles**
Just as fish do not know that they live in water, we, as internet users, do not know that we live within filter bubbles. Although filter bubbles confine us to information that reinforces our existing beliefs, they need not be detrimental to our internet use: they can be managed through education in media literacy, and, more importantly, they are, at their core, a technological reflection of our underlying personal biases. Through this lens, filter bubbles are not necessarily negative; they can even be used to diversify our knowledge and dismantle the subconscious biases that live within us. In this essay, I will define misinformation and disinformation and argue that filter bubbles are simply reflections of our personal biases and other societal issues, and that they can be used positively if we consider their effects in terms of media literacy.
**What is mis/disinformation and how does it affect filter bubbles?**
According to Claire Wardle in her article “Understanding Information Disorder,” there is a difference between the false information we are fed on the internet and the information that reinforces our existing beliefs, which creates a filter bubble. Wardle defines misinformation, disinformation, and malinformation, differentiating between them as broad concepts and specific types. She also acknowledges that “fake news” is a term that has been weaponized by politicians to attack journalists, and she therefore refers to this phenomenon as information disorder. Misinformation is false information spread unknowingly, malinformation is genuine information spread to cause harm, and disinformation is intentionally false information spread to cause harm. There are seven specific types of mis/disinformation: satire, false connection, misleading content, false context, imposter content, manipulated content, and fabricated content, each with different motives and effects (Wardle, 2021). It is important to understand and differentiate between these types, as this is the information that works within filter bubbles to continuously reinforce our opinions with no external resistance.
Eli Pariser defined filter bubbles in his influential 2011 TED Talk, “Beware Online ‘Filter Bubbles,’” as the “invisible algorithmic editing of the web” (Pariser, 2011). Pariser uses Facebook as an example: the more he interacted with his liberal friends’ feeds, the more his conservative friends’ posts disappeared, which led him to argue that the way information flows on the internet is changing. This idea is the basis of filter bubbles: we only see information on our feeds that we agree with, which takes away the opportunity to interact with other opinions and ideological perspectives. The most important point, however, is that there are no longer shared informational standards on the internet; each person’s search results, targeted ads, and social media feeds are different, and this goes completely unnoticed by the average person.
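To make Pariser’s mechanism concrete, the short sketch below models a feed that ranks posts purely by past engagement with each author. This is a hypothetical toy illustration, not Facebook’s actual system; the friend names, the click history, and the feed cutoff are all invented for demonstration.

```python
# Toy model of engagement-based feed ranking (hypothetical illustration,
# not any real platform's algorithm). Posts from friends the user rarely
# clicks on sink below the feed cutoff, reproducing Pariser's observation
# that ignored viewpoints quietly disappear.

from collections import Counter

FEED_SIZE = 3  # only the top-ranked posts are shown

def build_feed(posts, click_history):
    """Rank posts by how often the user has engaged with each author."""
    engagement = Counter(click_history)  # author -> number of past clicks
    ranked = sorted(posts, key=lambda p: engagement[p["author"]], reverse=True)
    return ranked[:FEED_SIZE]

posts = [
    {"author": "liberal_friend_1", "text": "Post A"},
    {"author": "liberal_friend_2", "text": "Post B"},
    {"author": "conservative_friend", "text": "Post C"},
    {"author": "liberal_friend_1", "text": "Post D"},
]

# The user has mostly clicked on liberal friends' posts in the past.
click_history = ["liberal_friend_1", "liberal_friend_1", "liberal_friend_2"]

for post in build_feed(posts, click_history):
    print(post["author"], "-", post["text"])
# conservative_friend never appears: the bubble closes without the user noticing.
```

Even in this tiny model, no one decides to hide the conservative friend; the filtering falls out of ranking by engagement alone, which is exactly why it goes unnoticed.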
**Media Literacy**
With that being the case, there are nonetheless solutions to protect us from the negative effects of filter bubbles. danah boyd, a prominent researcher and scholar, explains the effects of studying and understanding media literacy in her article “Did Media Literacy Backfire?” Media literacy is the ability to evaluate which information on the internet is trustworthy (Boyd, 2018). Because of all the false information that exists on the internet, as acknowledged by Wardle (2021), Boyd argues that we have a responsibility to question authority and to doubt the information we are given (Boyd, 2018). She explains tactics that can easily be taught and learned, like checking an author’s or publisher’s credibility, which can drastically change the way we understand the internet. Once we are trained in these skills and can differentiate between true and false information, the dangers of filter bubbles dramatically decrease.
In this context, while we might still only see information that aligns with our beliefs, being able to distinguish true information from false removes many of the dangers that come with filter bubbles and allows them to be used as a tool rather than a weapon. As I am arguing, while filter bubbles are created by the internet’s algorithms, they are reflections of our personal biases. This is why fluency in media literacy is necessary: once we take away the danger of being surrounded by false information within a filter bubble, we can use the internet’s natural algorithmic functions to diversify our beliefs and opinions.
**Personal Biases**
Thus, once we understand how to approach the internet through the lens of media literacy, we can acknowledge that filter bubbles are products of our underlying personal biases and use them as tools to counteract those biases, rather than allowing them to indoctrinate our ideas. A Forbes article by Tomas Chamorro-Premuzic, “Why Are Humans Biased Against AI?”, discusses the recent phenomenon of people reacting negatively to new AI technology. The article uses the example of ChatGPT, which has been accused of sharing inaccurate or biased information and upsetting users. While AI aims to remain as neutral as possible, since it is a computer program that has no opinions of its own, bias still shines through. For example, if you search something like “women should…” on Google, you may get results like cook, clean, or stay home. This is not because the computer thinks this should be the case, but because all of the previous data it has collected from humans leads it to that answer. The same idea applies to AI: Chamorro-Premuzic argues that these negative reactions are the result of AI displaying negative human traits, which it learns from interactions with users (Chamorro-Premuzic, 2023). Therefore, once we acknowledge this fact, AI could be used to increase our self-awareness. Chamorro-Premuzic argues that if we had the “necessary self-criticism to accept that its undesirable features are merely a reflection of our own human qualities” (Chamorro-Premuzic, 2023), we could use it to self-improve. Moreover, while this concept comes from AI, on a larger scale filter bubbles function in the same way.
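The autocomplete example can be illustrated the same way. The sketch below is a toy frequency-based completion model, not Google’s real autocomplete; the miniature corpus is invented. It shows how a program with no opinions still reproduces whatever biases are most frequent in the human-written data it is given.

```python
# Toy autocomplete (hypothetical illustration, not Google's system).
# Completions are simply the most frequent continuations in the training
# corpus, so any bias in the human-written data is mirrored by the model.

from collections import Counter

corpus = [
    "women should cook",
    "women should stay home",
    "women should cook",
    "women should vote",
]

def complete(prefix, corpus, k=2):
    """Suggest the k most common continuations of `prefix` in the corpus."""
    continuations = [
        line[len(prefix):].strip()
        for line in corpus
        if line.startswith(prefix)
    ]
    return [phrase for phrase, _ in Counter(continuations).most_common(k)]

print(complete("women should", corpus))  # ['cook', 'stay home']
# The model has no opinions; it simply mirrors the frequencies it was given.
```

The program never “decides” anything about women; it counts. The bias lives in the data, which is precisely Chamorro-Premuzic’s point about AI reflecting human qualities.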
A 2017 study, “Fake news and ideological polarization: Filter bubbles and selective exposure on social media,” finds that ideological polarization on social media, specifically on Facebook, is driven more by selective exposure than by filter bubbles (Spohr, 2017). Selective exposure is a concept based on human psychological biases which holds that “individuals have a tendency to consume media which aligns with their views and beliefs and avoid such content that is different in perspective or even challenging to their position” (Spohr, 2017). In other words, while the internet’s algorithms do contain us within filter bubbles, even if that were not the case, human nature would still lead us to interact with information and media that aligns with our beliefs, because doing so is more enjoyable. Consequently, polarization between different schools of thought would remain largely unchanged.
Therefore, I argue that once we understand each aspect that contributes to the complexity of online algorithms and filter bubbles, and know how to avoid false information through media literacy, filter bubbles can be used positively. Returning to the earlier example, if you search “women should…” on Google and the answer carries a misogynistic underlying bias, that result itself demonstrates the extent of the filter bubble. If internet users apply basic media literacy skills and take the responsible step of acknowledging the filter bubble, they can then seek out unbiased and trustworthy sources while simultaneously working to dismantle the subconscious biases that live within all of us.
**Resources**
Boyd, D. (2018, March 16). Did media literacy backfire? Retrieved March 2, 2023, from https://points.datasociety.net/did-media-literacy-backfire-7418c084d88d#.d46kox6e1
Chamorro-Premuzic, T. (2023, February 28). Why are humans biased against AI? Retrieved March 2, 2023, from https://www.forbes.com/sites/tomaspremuzic/2023/02/27/why-are-humans-biased-against-ai/?sh=409f19c35f5c
Pariser, E. [TED]. (2011, May 2). Beware Online “Filter Bubbles” [Video]. YouTube. https://youtube.com/watch?v=B8ofWFx525s
Spohr, D. (2017). Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Retrieved March 2, 2023, from https://journals.sagepub.com/doi/10.1177/0266382117722446
Wardle, C. (2021, August 03). Understanding information disorder. Retrieved March 2, 2023, from https://firstdraftnews.org/long-form-article/understanding-information-disorder/