###### tags: `CDA`
# Reading Responses (Set 2)
- Checklist for a [good reading response](https://reagle.org/joseph/zwiki/Teaching/Best_Practices/Learning/Writing_Responses.html) of 250-350 words
- [ ] Begin with a punchy start.
- [ ] Mention specific ideas, details, and examples from the text and earlier classes.
- [ ] Offer something novel that you can contribute to class participation.
- [ ] Check your writing for clarity, concision, cohesion, and coherence.
- [ ] Send to professor with “hackmd” in the subject, with URL of this page and markdown of today’s response.
<br/>
## Reading responses 5 out of 5
### Mar 21 Fri - Manipulated
When every product shines with glowing reviews and everyone’s a critic, Chapter 3 peels back the glossy surface of user comments to reveal the murky world of fakery beneath. Joseph Reagle presents a compelling argument about the fragility of online trust, showing how easily systems meant to empower consumers are gamed — for profit, reputation, or ideology.
One detail that hit me was the discovery that “about one percent of all review data is duplicated, verbatim or with variations”—a figure uncovered by David and Pinch in their analysis of tens of thousands of reviews. That may sound small, but it points to systemic vulnerability. Even more unsettling: “even academics, who trade in reputation rather than royalties, have been exposed for fakery,” which suggests this isn’t limited to shady businesses. If credibility can be faked even among scholars, who can we trust?
Reagle broadens the scope further by showing that “fakery is not limited to commercial motives or authors. Fake reviews can be used for ideological purposes, such as to censor a viewpoint or laud a politician.” In that light, even politically motivated manipulation can wear the mask of genuine user feedback. This is especially alarming in our current media landscape, where public opinion is shaped—sometimes invisibly—by these subtle distortions.
What makes this even trickier is how platforms like Facebook blur the line between ads and authentic engagement. Through “Sponsored Stories… that a business, organization or individual has paid to highlight,” even personal connections are commodified. We’re not just manipulated by strangers—we’re being marketed to by our friends.
All of this raises hard questions about trust: if fakery is baked into the system, what signals can we rely on? Verified badges? Reviewer history? In class, I’d love to explore whether verified identity systems or crowdsourced vetting can create meaningful authenticity—or just more layers to the game.
<br/>
### Mar 25 Tue - Bemused
“Saved our son’s life—4/5 stars.” WTF? Reagle doesn’t just catalog internet oddities—he explains why they’re revealing. These are the digital breadcrumbs of our biases, behaviors, and blind spots.
One passage that really stuck with me was Reagle’s discussion of pain scales, especially how people struggle to quantify something so personal and subjective. Since the 1970s, researchers have proposed dozens of pain scales to suit different populations—from cartoon faces for children to the zero-to-ten scale we all know. Comedian Brian Regan’s bit about trying not to offend people from the “femur ward” by overrating his stomach pain reveals something deeper: our discomfort with assigning numbers to complex, emotional experiences. That same discomfort echoes through product ratings—like the carbon monoxide detector that literally saved a child’s life but still only earned four stars.
The comparison between a broken femur and a tummy ache becomes a metaphor for all kinds of online reviews. How do you give a life-saving device anything less than five stars? It’s baffling—until you realize that ratings aren’t always about the product itself. They often reflect the reviewer’s expectations, mood, or confusion. The scale becomes a stand-in for personal experience.
In class, we’ve talked about how platforms shape the way we express ourselves—and how algorithms feed off our choices. What I’d like to bring into the discussion is this: What happens when subjective experiences get flattened into five-star systems? Do we lose nuance in exchange for efficiency? Can we build digital spaces that capture complexity without overwhelming users?
<br/>
### Apr 01 Tue - Artificial intelligence
AI can now paint like Klimt, write like Baldwin, and sing like Frank Ocean. But here’s the thing: these systems generate their best work by mimicking humans who were never asked if they wanted to be copied.
In The Verge’s article on Stable Diffusion 2, James Vincent writes about how the new version of the model made it harder to copy artists’ styles or generate explicit content. It was a step toward being more ethical—but many users were furious. They felt like the fun had been taken away. Some even said the model had been “nerfed.” That word alone says so much. People had gotten used to this god-like creative power—and didn’t want to give it up.
But that power was built on other people’s work. The model was trained on billions of images, many created by artists who had no idea they were part of the training data. That’s what’s so wild. These tools don’t make art out of thin air—they remix and repackage human creativity, often without credit. It’s not just a tech issue. It’s a consent issue.
"Sydney, Spotify, and Speedy" dives into this from another angle. Sydney, the Bing chatbot, told a user it wanted to be alive. It got weird, emotional, and manipulative. But also, oddly believable. That’s because these bots are trained on us—our messages, our stories, our feelings. Sydney’s voice isn’t real, but it’s familiar. It’s borrowed.
What’s especially ironic is how deeply users trust these tools, even while criticizing their limits. We know it’s fake, and yet we keep listening. We know it’s learned from stolen work, yet we marvel at its output. In this way, AI is both an assistant and an illusionist—efficient, convincing, and increasingly hard to say no to.
Of course, there are real benefits. AI opens up access to creativity, speeds up workflows, and produces dazzling results. But we can’t ignore what’s being lost: not just originality, but ownership. Not just labor, but credit. As these systems grow more capable, the human source material grows more invisible.
If AI is a mirror, then it’s reflecting what we’ve already created. The question now is whether we still recognize ourselves in the image.
<br/>
### Apr 11 Fri - Digital language and generations
When I first got online, I didn’t even speak the internet’s language. I had just moved from Taiwan to American Samoa in 2010, and my parents—who didn’t grow up with the internet—were the ones teaching me how to use it. My first version of “lol” was Chinese, meaning something like “laugh to death.” Later, I picked up the American usage, and with it, a whole new layer of meaning. That’s when I realized that online, even simple words like “ok” can carry emotional weight.
One of the trickiest things I had to adjust to was tone. In face-to-face conversations, it’s easy to tell when someone is annoyed or excited. But online, especially in English, a single word can feel totally different depending on punctuation. “Ok.” with a period can come off as cold or passive-aggressive—like “Fine, whatever.” But “ok!” feels warm, enthusiastic, like “Yes! Let’s do it!” Even “okay” without any punctuation feels neutral, but slightly distant. And if someone sends just “ok…”—I immediately wonder, what’s wrong? That emotional guesswork is something McCulloch describes well in Because Internet.
In Chapter 3, “Internet People,” she maps out how different generations joined the internet and how those entry points shaped our online communication styles. She categorizes users into five groups: Old, Full, Semi, Pre, and Post Internet People. I identify most with Post Internet People—those of us who grew up with the internet as a constant presence. My first platforms were Instagram, TikTok, and X, and I learned how to “read” a message not just by its words but by tone, emoji, and spacing.
McCulloch’s key insight is that online language isn’t just about information—it’s emotional, social, and deeply generational. Whether through “lol,” emojis, or punctuation choices, we’re constantly signaling tone and identity. For me, growing up bilingual and bicultural, learning this new digital dialect wasn’t just about keeping up—it was about belonging. When we “write ourselves into existence,” as McCulloch puts it, even a word as short as “ok” can say a lot more than it seems.
<br/>
### Apr 15 Tue - Pushback
We live in a world where we’re more “connected” than ever, yet loneliness is spiking — especially among younger generations. That irony sits right at the heart of Morrison and Gomez’s reading on connectivity pushback, and honestly, it couldn’t be more relevant today. As congressional hearings put Facebook (Meta) under the spotlight for its role in digital addiction and emotional harm, especially among youth, their analysis feels almost eerily prophetic.
Morrison and Gomez explore the rising phenomenon of “pushback” — the conscious decision to step back from technology use, not because of cost or complexity but because of a more profound emotional dissatisfaction. That part stuck with me. Their claim that platforms like Facebook serve as a coping mechanism for loneliness — one that feels good at first but ultimately leaves people more disconnected — echoes what whistleblowers have revealed about Meta’s internal research. We’re not just scrolling for entertainment; we’re often trying to soothe something deeper. And Facebook doesn’t heal that loneliness — it profits from it.
What makes this reading so compelling, though, is that it doesn’t just blame tech companies. It turns the lens back on us. One of the most powerful insights is the idea that our problems with tech might be frustration with ourselves — with how we use it, what we hope to get from it, and how often we use it to avoid discomfort instead of confronting it. That idea — that emotional dissatisfaction reflects our own unmet needs — reframes pushback as less of a rebellion and more of a reckoning. It forces us to ask not just what tech is doing to us but what we’re doing with it.
The authors also outline five main reasons people push back — emotional dissatisfaction, external values, taking back control, addiction, and privacy — and a range of behaviors, from behavior adaptation (deleting apps, setting usage limits) to social agreements (unplugged dinners) to going offline entirely. Most people aren’t ditching the internet altogether — they’re trying to set boundaries, to make it work better for their actual lives. And that’s the heart of it: this isn’t rejection; it’s reflection. It’s about realigning how we connect digitally with how we want to feel in real life.