# Matt's Filtering and Labeling Emails Practical

While there is definitely a strong argument to be made that people should be aware of more than just the articles and posts that confirm their beliefs, I don't see why algorithm-created filter bubbles are at fault. [FS](https://fs.blog/2017/07/filter-bubbles/) describes the very specific tailoring of websites, search engines, and ad content to individual users as **"filter bubbles"** and encourages readers to break out of them. It brought to mind Instagram's "explore" page and TikTok's "for you" page, which users know are constantly changing based on what they view and like. Many people I know are aware that they are being heavily tracked online and joke about or embrace it, often in entertainment contexts like Instagram or TikTok. While some may see it as a danger or an invasion of privacy, it is not something that scares or worries the people I've encountered. I understand the danger of living in a sheltered reality and believing everyone agrees with you, but I doubt that the article's encouragement to familiarize ourselves with information from opposing filter bubbles is likely to be followed. Media sources often have one bias or another. Sources within my filter bubble may say *"we're right, they're wrong, here's why,"* and if I sought out an alternative opinion it would say the parallel *"you're wrong, we're right, here's why."* While there is benefit in knowing why the opposition thinks it's right, articles with an opposing viewpoint can often push someone with a formed opinion farther toward their own side rather than changing or expanding their views. Furthermore, after reading Pariser's article about Facebook's study of algorithmic effects on content consumption, I don't see why observers of this phenomenon are **putting Internet platforms rather than users at fault.**
As summarized by [Pariser](https://medium.com/backchannel/facebook-published-a-big-new-study-on-the-filter-bubble-here-s-what-it-says-ef31a292da95), the Facebook study found that the decline in exposure to cross-cutting content is driven mostly by users' own choices of which Facebook friends to have and which articles to click. Algorithms that create "filter bubbles" therefore merely **amplify an effect caused by users' choices**. There seems to be an underlying question in this debate: should social media and search engines serve the wants of users, or oppose those wants for the users' "own good," making value judgments on their behalf and assuming the platform knows better? To me it seems perfectly reasonable that social media would include a "filtering" feature that is merely an extension or amplification of the user's own choices. In my view, it is the role of the platform to **make itself easier to use** in the way a user would like to use it. I would not want a platform deciding for me that I am gathering information in the wrong way.
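
To make the "amplification" point concrete, here is a toy sketch of what such a ranker could look like. This is my own illustration, not any platform's actual algorithm, and the names (`rank_feed`, `candidate_posts`, `click_history`) are hypothetical. The point is only that a ranker like this contains no opinion of its own: whatever bubble it produces is a scaled-up copy of the user's prior choices.

```python
from collections import Counter

# Toy illustration (not any real platform's code): each post is a
# (source, topic) pair, and click_history is a list of the same pairs
# the user has engaged with before.
def rank_feed(candidate_posts, click_history):
    """Order candidates by how often the user already clicked that source/topic."""
    source_counts = Counter(source for source, _ in click_history)
    topic_counts = Counter(topic for _, topic in click_history)

    def score(post):
        source, topic = post
        return source_counts[source] + topic_counts[topic]

    # Nothing here injects a viewpoint; the ordering is driven entirely
    # by the user's own history, which it simply amplifies.
    return sorted(candidate_posts, key=score, reverse=True)
```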
Even if people "should" be gathering information differently, an algorithmic system that shows users what they want to see is not to blame and is not wrong for existing. I may be incorrect, but I would assume a filter bubble would accommodate a user who chooses to seek out information from various sources with varying biases. If a user is interested in nonpartisan content, or in content from both sides of an issue, their filter bubble would reflect that variety of sources, right? As Jacob Weisberg explains in [The Echo Chamber Revisited](https://www.wnyc.org/story/143347-echo-chamber-revisited/), algorithms meant to personalize our feeds **do not necessarily narrow our feeds**. Users who have broad interests and regularly interact with varied sources will see this reflected in their personalized feeds. The only responsibility I believe the platform should have is to more clearly, obviously, and directly **inform users** that it is using an algorithm to tailor content to their interests. This way, users would be conscious of how their feed is created and would know how to play an active role in ensuring they see cross-cutting articles if they wish to.
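
Under the same toy assumptions, Weisberg's point that personalization does not have to narrow a feed can also be sketched: if the click history is mixed, the ranking stays mixed. The data and labels below are hypothetical, chosen only to show that a varied input produces a varied output.

```python
from collections import Counter

# Hypothetical user whose clicks span several kinds of sources.
click_history = ["left-news", "right-news", "science", "left-news",
                 "right-news", "sports", "science"]
candidate_posts = ["left-news", "right-news", "science", "sports", "celebrity"]

counts = Counter(click_history)
ranked = sorted(candidate_posts, key=lambda post: counts[post], reverse=True)
print(ranked)
# -> ['left-news', 'right-news', 'science', 'sports', 'celebrity']
# A varied history keeps varied sources near the top: the personalization
# reflects the user's breadth rather than manufacturing a narrow bubble.
```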