# Reading Responses (Set 2)
### Nov 07 Fri - Ads & social graph background
We tend to think that advertisements just sell things to us across the websites we visit, but they also track and judge us in order to decide what we see and don't see next. The data comes from our social media feeds and what we look up in our search engines. In this way, advertising becomes a form of networked surveillance, a powerful strategy that shapes our lives online. It can even decide which job listings we are offered, whether we qualify for them, which products we supposedly can and can't afford, and whether we count as customers worth reaching at all.
This concept reminds me of past readings and class discussions about cookies that save our user preferences. What I didn't realize is how much more invasive and complex advertising can be through the use of my saved information, and how little I can do about it. Cleo Abrams asserts that cookies are just one kind of identifier that websites place on us; a site can send a code that looks like a first-party cookie but still sends your data to a third party. Abrams claims, "companies incentivized by billions in ad dollars will always find a loophole. They know you don't want to block first-party cookies, because then many sites wouldn't work" (Abrams 4:45). In other words, companies like Facebook and Google have a profit motive to keep tracking us, no matter how much we want to protect our privacy. Since first-party cookies are essential for sites to function, there is no easy way around being tracked, and tech companies will keep developing new technologies to get past protections.
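The cross-site tracking described here can be sketched as a tiny simulation. Everything below is hypothetical (the class name, the cookie ID, the site names are invented for illustration); real trackers use browser cookies or similar identifiers sent along with embedded ad requests, but the linking idea is the same.

```python
# Toy sketch of how a third-party tracker links visits across sites.
# All names here are made up for illustration.

class ThirdPartyTracker:
    def __init__(self):
        # cookie id -> list of sites where this tracker's ad loaded
        self.profiles = {}

    def serve_ad(self, cookie_id, site):
        # Each time an embedded ad loads, the browser sends the same
        # tracker cookie, so visits to unrelated sites link together.
        self.profiles.setdefault(cookie_id, []).append(site)

tracker = ThirdPartyTracker()
for site in ["news.example", "shoes.example", "jobs.example"]:
    tracker.serve_ad("cookie-123", site)  # same user, three different sites

# One cookie now ties together a browsing history that no single
# site could see on its own.
print(tracker.profiles["cookie-123"])
```

No individual site knows the whole profile; only the third party embedded on all of them does, which is what makes this kind of tracking hard to avoid.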
Social media platforms like Facebook, Instagram, and Twitter rely on advertising as their primary source of revenue. According to Rob Stokes, Facebook, Twitter, YouTube, and LinkedIn are all examples of major advertising platforms. Through promoted posts on Twitter and Facebook ads, "Social media can be an excellent place to reach prospects because you can usually target very accurately based on user provided demographic information." (Stokes 305). I support Stokes' claim because most of the advertisements I see are on social media, where we spend most of our time each day. It's an easy place to track us and to generate new content based on our engagement with other ads.
Another ad strategy is pop-ups and pop-unders. According to Stokes, this was very important in the early days of online advertising. Stokes points to "audience annoyance" and notes that "there are now ‘popup blockers’ built into most good web browsers. This can be problematic as sometimes a website will legitimately use a pop-up to display information to the user." (Stokes 301). Going back to how audiences want to stay as private as possible, pop-ups are still one way we are tracked. It is hard to avoid because platforms function best with these strategies and use them to keep generating content and keep us scrolling.
Overall, I have learned that cookies do much more than save our user preferences, such as what we add to our carts, our filters, and our layouts. We cannot get around being tracked and judged by advertisers. They can discriminate against us based on past purchases, showing only products they think we can afford and hiding certain job listings because we might not be a "good fit." These advertisements live mainly on social media platforms, places where we spend most of our time and which use advertising as their primary source of revenue. Our engagement is being sold to advertisers.
**(AI was only used to outline and structure my response. All content, claims and ideas are my own.)**
#
### Nov 18 Tue - Artificial intelligence
Artificial intelligence has become more than just a helpful homework and brainstorming tool - it can now create realistic pictures and write full pieces in seconds. Because of this, it is getting difficult to differentiate what is real and what is AI-generated. One consequence is fake and misleading content that looks believable, which means we can no longer trust what we see online. These consequences are not just about technology; they affect privacy and reliability across the online world.
I have always been confused about how AI content is generated and how it can be so accurate, especially with images. Bea Stollnitz describes it best by breaking down *tokens* and how models like OpenAI's GPT use them to create content. Stollnitz explains that "GPT models are trained with a large portion of the internet, so their predictions reflect a mix of the information they’ve seen." (Stollnitz 2023). During training, these models break text from the internet into tokens, small chunks such as short words and word fragments. A model doesn't memorize articles or large texts; it learns patterns between the tokens. When the training data includes misleading or biased content, the model can unintentionally produce text that sounds confident and real, because it draws on both the good and bad parts of the internet. That is where the consequences come in: we end up exposed to that misleading content.
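The idea of "learning patterns between tokens" can be sketched with a toy next-token predictor. This is a deliberate simplification: the corpus is invented, and real GPT models use subword tokenizers and neural networks rather than simple counts, but the core idea of predicting the next token from patterns seen in training is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, split on whitespace so each word is a "token".
corpus = "the cat sat on the mat the cat ate".split()

# Learn a pattern: count which token follows which in the training text.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(token):
    # Predict the most frequent continuation seen during training.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" followed "the" twice, "mat" only once
```

Notice that the model never stores the corpus itself, only the counts; if the training text had contained misleading sentences, the same mechanism would confidently reproduce those patterns too.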
We have done multiple exercises in class where we had to click "real" or "AI-generated" based on what we already know and how we analyze online content. We learned that generated images often contain "star-like" pixel artifacts, which is one way to tell. But without breaking an image down and examining its pixels, it has become very difficult to tell the difference between real and fake. After reading James Vincent's article *Stable Diffusion made copying artists and generating porn harder and users are mad*, I still had some questions about what Stable Diffusion really is. I learned that a recent update brought both improvements and limitations, including in its ability to filter what it can and can't generate. Vincent writes, "Users of AI image generator Stable Diffusion are angry about an update to the software that 'nerfs' its ability to generate NSFW output and pictures in the style of specific artists." (Vincent 2022). The update made users think the tool got worse, but the limitations were on purpose: they prevent harmful and misleading content from being created.
In my opinion, the new update to Stable Diffusion was effective and will help prevent exploitation and harmful content. By filtering nudity and adult content out of its training data, it limited the model's ability to create inappropriate images. It also stopped people from misusing AI models, such as by creating non-consensual pictures. Even though some may see it as a downgrade, it prevents things like copying an artist's exact style and making explicit photos. AI will always improve, and updates like this one, which control what the model learns from, are an important way to keep the technology safe while preserving the creative aspects of these models.
**(Grammarly was used to fix grammar and also improve some of my sentences)**
#
### Nov 21 Fri - Algorithmic bias
We may think that algorithms are neutral because they are computers making decisions, but they can be biased in many ways. Usually the bias comes from the data they learn from, old or new, or from the way they are designed. When algorithms are trained on data that already contains bias or stereotypes, they repeat and reproduce the same issues. This can extend to predicting what we talk about and pushing specific ads toward us. Algorithms aren't really neutral at all; their biases can slip into our everyday experiences without us realizing it.
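How biased training data gets replayed can be shown with a tiny made-up example. The groups, ad names, and "history" below are all hypothetical, and real ad-targeting systems are far more complex, but the mechanism (optimizing against a skewed past) is the point.

```python
from collections import Counter, defaultdict

# Hypothetical history: one group was mostly shown engineering jobs,
# the other mostly retail jobs. None of this is real data.
historical_impressions = [
    ("group_a", "engineering_job"), ("group_a", "engineering_job"),
    ("group_a", "engineering_job"), ("group_b", "engineering_job"),
    ("group_b", "retail_job"), ("group_b", "retail_job"),
]

# "Train" by counting what each group was shown in the past.
shown = defaultdict(Counter)
for group, ad in historical_impressions:
    shown[group][ad] += 1

def target_ad(group):
    # "Optimize" by repeating whatever was shown most often before,
    # so a past skew quietly becomes future policy.
    return shown[group].most_common(1)[0][0]

print(target_ad("group_a"), target_ad("group_b"))
```

Nothing in the code mentions stereotypes, yet group_b never sees the engineering ad again; the bias lives in the data and in the designer's choice to optimize for past engagement.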
The algorithms running on these platforms are created by people who choose what data matters and what gets ignored. Cathy O'Neil, in *Weapons of Math Destruction*, states, "A model's blind spots reflect the judgments and priorities of its creators." (O'Neil n.d.). Any biases the creators hold can be built into the algorithm. When websites predict which ads we should see, they are really reflecting other people's assumptions. This can leave certain things out or produce unfair results without anyone intending it; it happens through how the algorithm was made.
Algorithms don't just make guesses. Many people believed, and still believe, that our phones are recording and listening in on conversations and that this is how we receive the ads we see. In reality, platforms are constantly collecting data and can predict what we want before we even think about it. Rich Haridy explains, "the deeply disconcerting implication of all this is that the rich vein of data constantly being gathered can be crunched by an algorithm to essentially predict what you and your friends are talking about, and serve you an ad that is perfectly tailored to your current needs." (Haridy 2019). Algorithms watch our patterns and push ads that relate to our conversations almost perfectly, but the predictions aren't the same for everyone. The algorithm decides what we see based on the assumptions it makes about us, which leads back to bias and to people getting different or unfair results without knowing it.
Algorithms are shaped by the people who create them, which introduces bias. When they learn from stereotypical data, they recreate the same patterns in ways we might not notice. Because they constantly track what we do online, they make predictions so accurate that we assume they are recording our conversations. Algorithmic bias isn't random - it's a result of what the systems rely on.
**(Grammarly was used to fix grammar and also improve some of my sentences. AI was only used to outline and structure my response. All content, claims and ideas are my own.)**
#
### Dec 02 Tue - Digital language and generations
It has always been difficult for me to correctly interpret how people say things on the internet. I have trouble reading tone and feeling in people's text messages and figuring out what they are really trying to say. There is a difference between how my parents text and how my friends do. The internet transforms every day, which creates new "rules" of language. This relates to the idea of "communication in a digital age" and how it evolves. Because new rules for how we communicate online keep appearing, it has become harder to understand the exact tone and emotion in messages. Digital communication is reshaping how we connect.
One reason tone is so hard for me to understand is that different generations don't read messages the same way. Audie Cornish quotes Gretchen McCulloch, who notes that "a lot of the confusion stems from the fact that people read Internet writing differently, depending on when they first went online." (McCulloch, as cited in Cornish 2019). When I think about how differently my friends text compared to my parents and me, I notice how we interpret each other's tones and adapt to each style. My parents take my messages more literally, while my friends use "LOL" as filler in their sentences. McCulloch's point explains why I get confused: every generation has its own way of reading online language, and it can be hard to know how someone actually feels through a screen.
"By one estimate, over a third of couples who got married between 2005 and 2012 met online. By another, 15 percent of American adults have used online dating, and 41 percent know someone who has." (McCulloch 2019). This quote relates to our past lecture, "Finding someone and living alone," about online dating, and it shows how powerful online communication has become. The internet is not just a place for quick conversations; it's a place where people build lasting, meaningful relationships. It has reshaped how people meet and connect with each other.
The way we communicate online will always evolve, and it affects how we understand each other and interpret messages. Even though tone can be hard to read across generations, the internet is still a major part of how we form new connections. McCulloch's ideas highlight how these misunderstandings happen because everyone grew up with different online communication "rules." As the internet evolves, we adapt to new ways of communicating so we can understand each other more clearly.
**(Grammarly was used to fix grammar and also improve some of my sentences. AI was only used to outline and structure my response. All content, claims and ideas are my own.)**
#
### Dec 05 Fri - Pushback
I cannot imagine life without my phone and social media. I guess it's because I was born into it. I haven't had to work hard to socialize, make friends, or get information; it has always been easy and accessible with my iPhone. Being online feels completely normal, especially in my generation. But I believe I could be better off without it. I find myself easily distracted and not fully engaged with the world around me. Pushing back against technology could let me take back control and focus on relationships away from the screen. But is it really that simple?
In *Pushback: Expressions of resistance to the “evertime” of constant online connectivity,* Stacey Morrison and Ricardo Gomez define "pushback" as "a growing phenomenon among frequent technology users seeking to regain control, establish boundaries, resist information overload, and establish greater personal life balance." (Morrison and Gomez 2014). People are pushing back on technology because they want more control and balance in their everyday lives. They aren't fully abandoning technology; instead, they are trying to build a healthier relationship with it so it doesn't take up all of their time and attention.
Alex Vadukul interviews Biruk Watling, one of the original members of the Luddite Club in Brooklyn: "I own this now with a sense of inner torture," Ms. Watling said, "but I have to look out for my well-being as a young woman. It’s too risky for me to put my life in the hands of a flip phone." (Vadukul 2025). As much as she wants to stay off smartphones, she knows our world is built around digital tools, and she needs one to stay safe as a woman. This shows how the Luddite lifestyle didn't work out for her; even the people most dedicated to avoiding technology still end up needing it.
Even though many of us can feel overwhelmed and distracted by technology, it is still a huge part of how we live and even stay safe. Pushback can help us create better boundaries, but it doesn't erase how dependent our world is on digital connections. I think that is why people, including me, don't fully give up on their phones; we rely on them more than we realize. Technology will always be here, along with the pressure of staying connected.
**(Grammarly was used to fix grammar and also improve some of my sentences. AI was only used to outline and structure my response. All content, claims and ideas are my own.)**