##### tags: `CDA`
# Reading Responses (Set 2)
## Reading responses 5 out of 5
### Nov 4 Tue – Finding someone & living alone
In *“The Big Lies People Tell in Online Dating”* (Christian Rudder, 2010), the author explains that many people lie on their dating profiles about things like height, age, and income. People want to look “better” because they feel pressure to match social ideals of success and attractiveness. This makes online dating less honest, turning it into a performance rather than a genuine connection.
In *“Living Alone in America”* (Joseph Chamie, 2021), the author writes that more Americans now live alone than ever before. Some enjoy independence, but many feel lonely and lack emotional support. This trend is especially strong in cities and among older adults.
These two readings connect because both show how modern life gives us more freedom but also more pressure and loneliness. Dating apps offer many choices, yet they make people hide their true selves. Living alone gives privacy, but it can also feel isolating. Together, these readings make me think: in a world where we want connection and independence, how do we stay honest and not feel alone?
### Nov 7 Fri – Ads & social graph background
Rob Stokes (2013) explains how online advertising works and why it is so effective. He shows that digital ads appear almost anywhere online and can track every click and view. Using ad servers and cookies, advertisers follow users’ actions and target them based on data. Stokes says this helps companies reach the right audience and measure success, but it also raises privacy concerns because every move online can be tracked.
In her Vox video, Cleo Abram (2020) shows how ads “follow” people across websites through tracking pixels and cookies. She explains how this creates detailed profiles, so users keep seeing the same products again and again. Both pieces show how technology connects people through data, but also how this power can feel invasive. Tracking helps businesses, but it also makes users feel watched. It made me think: how can we enjoy the benefits of targeted ads without losing our privacy?
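The mechanism Abram describes can be sketched in a few lines. This is a toy model, not a real ad server: the class and names are invented, but the idea matches the readings. Every page embeds a tiny “pixel” image hosted by the ad company; when the browser fetches it, the request carries the user’s cookie, so the server can log which user visited which page and stitch visits on unrelated sites into one profile.

```python
# A minimal sketch (all names hypothetical) of how a tracking pixel
# lets an ad server build a browsing profile across websites.

class AdServer:
    def __init__(self):
        # Maps a cookie ID to the pages where the pixel was loaded,
        # i.e. the pages that user visited.
        self.profiles = {}

    def serve_pixel(self, cookie_id, page_url):
        """Called whenever a browser requests the 1x1 tracking image."""
        self.profiles.setdefault(cookie_id, []).append(page_url)
        return b"GIF89a"  # stand-in for the tiny image actually returned

    def interests(self, cookie_id):
        """The profile that targeted ads are matched against."""
        return self.profiles.get(cookie_id, [])

server = AdServer()
# The same cookie shows up on two unrelated sites, linking the visits.
server.serve_pixel("cookie-123", "shoes-shop.example/sneakers")
server.serve_pixel("cookie-123", "news.example/sports")
print(server.interests("cookie-123"))
```

Because the cookie ID is the same everywhere the pixel appears, the shoe ad can “follow” the user onto the news site, which is exactly the feeling of being watched that both pieces describe.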
### Nov 18 Tue – Artificial intelligence
In *“How GPT Models Work”* (Bea Stollnitz, 2023), the author explains how large language models learn by predicting the next word based on huge amounts of training data. This lets the model produce writing that feels natural and human. In *“Stable Diffusion Made Copying Artists and Generating Porn Harder — and Users Are Mad”* (James Vincent, 2022), the author describes updates to an image-generation model that now limit copying an artist’s style or making explicit images. These changes show how companies are trying to respond to ethical, legal, and safety concerns. In *“Sydney, Spotify, and Speedy”* (Tyler Gold, 2023), the author reflects on how quickly AI systems are improving and how companies are testing them in public, sometimes with surprising or strange results.
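The “predict the next word” idea from the Stollnitz reading can be shown with a toy model that is vastly simpler than GPT but has the same basic objective: given the words so far, pick the most likely next word from training data. This sketch counts word pairs (bigrams); a real model uses a neural network over far more context, so this is only an illustration.

```python
# A toy next-word predictor using bigram counts -- the same training
# objective Stollnitz describes, stripped down to its simplest form.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "cat": it follows "the" most often above
```

Even this tiny version shows why output feels “natural”: the model only ever echoes patterns that were frequent in its training data, which is also why the biases in that data matter so much in the later readings.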
Together, these readings show that AI can now create impressive text and images, but this power comes with consequences. AI expands creativity, but it also raises questions about copyright, safety, and human identity. It made me think: as bots become better at producing art and language, how do we protect human creators and what role should AI play in the future?
### Nov 21 Fri – Algorithmic bias
In *“This is why some people think Google’s results are ‘racist’”* (Fiona Rutherford & Alan White, 2016), the authors explore how search engine results may reflect racial bias. They argue that the way algorithms rank information can mirror societal inequalities, because they rely on historical data, user behavior, and choices made by engineers. The article shows that even when there is no intent to discriminate, outcomes can still appear biased if underlying data is biased. In short, algorithmic decision-making isn’t neutral.
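The feedback loop Rutherford and White point to, where rankings built on historical user behavior reproduce that behavior, can be sketched directly. The names and click numbers here are invented for illustration: a ranker that orders results by past clicks gives the historically favored result the top spot, which earns it still more clicks, with no discriminatory intent anywhere in the code.

```python
# A minimal sketch of how ranking on historical clicks entrenches bias:
# whatever was clicked more before ranks higher, and the top rank then
# attracts the next click. All data below is invented.

def rank_by_clicks(results, click_history):
    """Order results by how often users clicked them in the past."""
    return sorted(results, key=lambda r: click_history.get(r, 0), reverse=True)

results = ["result_A", "result_B"]
clicks = {"result_A": 10, "result_B": 2}  # skewed historical behavior

for _ in range(100):  # feedback loop: the top result gets each new click
    top = rank_by_clicks(results, clicks)[0]
    clicks[top] += 1

# result_A started with a small lead and ends with nearly all the clicks.
print(rank_by_clicks(results, clicks))
```

Nothing in `rank_by_clicks` mentions race or any group at all, which is the article’s point: a neutral-looking rule applied to biased history produces biased outcomes.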
In *“ChatGPT and woke ideology”* (Nate Hochman, 2023), the author considers how large language models like GPT may embed political or cultural biases, including so-called “woke” views. Even though these systems are designed for general use, Hochman argues that the training data, design choices, and business goals shape their ideological tilt. This means the biases in models may not just reflect society, but also reinforce certain worldviews.
These readings connect because both show that algorithms—even without bad intent—can still exhibit bias. Whether via search results or AI-generated text, the design, data, and context matter. They make me ask: if algorithms shape what we see and say, how can we make sure they treat all people fairly and help rather than hinder society’s equal-opportunity goals?
### Dec 05 Fri – Pushback
In *“Pushback”* (Stacey L. Morrison & Ricardo Gomez, 2014), the authors describe the ways people resist the pressure of being constantly online. They explain that digital life creates an “evertime,” a feeling that we should always be available and connected. Small actions—such as delaying replies, turning off notifications, or taking short breaks from social media—are forms of pushback. These everyday behaviors show that people are trying to protect their attention and regain control over their time.
In *“Luddite Teens Still Don’t Want Your Likes”* (Alex Vadukul, 2023), the author reports on teenagers who intentionally avoid smartphones and social media. Even after entering college, they continue using flip phones, spending more time outdoors, and meeting friends in person. Vadukul shows that these students reject the fast pace and pressure of digital culture. Their choice represents a stronger form of pushback: instead of just limiting phone use, they step away from mainstream platforms entirely.
Together, these readings made me think about how people respond to the demands of constant connectivity. Whether through small habits or bigger lifestyle choices, pushback is a way to create healthier boundaries with technology.