# Reading Responses (Set 1)

---

## Reading Responses 5 out of 5

---

## Jan 26 Tue: "Superconnected" by Mary Chayko

What has become readily apparent in the past three decades is that the internet is not a simple tool, one used and then set aside for most of the day. As emphasized in the chapter "More Benefits and Hazards of 24/7 Superconnectedness" from Mary Chayko's *Superconnected*, there are substantial elements of the new digital age that have changed how millions live. She lists a myriad of concerns that have been raised about the new status quo: supposed feelings of isolation, addiction to games and services, and increasing temptations to (unsuccessfully) multi-task. Chayko acknowledges these concerns but largely considers them moot, either by emphasizing that forms of these issues existed prior to the internet or by noting advancements such as women's emotional labor being supported by social media connections.

At times, however, Chayko's arguments carry a sting of "whataboutism," essentially handwaving away valid concerns as "old-fashioned." Regarding video games, she mentions a study performed by Nick Yee, which found that 45% of studied gamers displayed symptoms of excessive involvement in interactive media to avoid or escape problems (p. 192). Chayko brushes this aside, citing Dmitri Williams, who noted that an obsessive reader wouldn't be viewed with such concern, and that fear of digital media is based on a Protestant work ethic (p. 192). This, along with other statements, suggests that the negative portrayals of the internet in the chapter are meant to be the exception that proves the rule.

---

## Feb 05 Fri: Fake News

A striking element in these readings was, unfortunately, the downward spiral we seem to be on. Silverman's piece on 2016 Facebook misinformation offers a clear statistic: fake materials, usually from a far-right perspective, tend to have the highest engagement levels. danah boyd's 2017 post on media literacy's failures and backfires provides a logic for *why* misleading material does well. She points to a complicated blend of anti-intellectualism and individualist approaches to research, with a bit of material analysis mixed in: legitimate outlets often cost money, and the doctors who could dispel anti-vaccination claims are expensive to visit. The newest reading, Emily Dreyfuss's deconstruction of a single misinformation campaign, demonstrates where four years have taken us: misinformation so powerful and so widely believed that a single edited video clip could be a key cause of the January 6th attack that killed several people.

A notable point that both boyd and Dreyfuss (though more implicitly, in her case) make is that media literacy campaigns are unlikely to reduce this problem. In fact, boyd accuses such efforts of potentially exacerbating our present situation. Quite a bind is presented: media literacy is often paternalistic or simply doesn't work, there are motivated misinformation entrepreneurs, and a substantial portion of those in power are willing to exploit the spread of lies. It's tempting to suggest that the technological innovations that brought us to this point will somehow pull us out, via veracity warnings or adjustments to algorithms. That may be partially correct: I suspect banning Trump and others like him from major platforms will lower the societal temperature somewhat.
But what broader societal developments might help shift the rhetorical and psychological window offline more substantially? Relying on platforms alone to cure social ills seems like a fool's errand, and perhaps more material changes, be they in employment, healthcare, or societal structures, are needed to repair deeper wounds.

---

## Feb 09 Tue: Make It Stick: The Science of Successful Learning

The subject of this excerpt was a tad ironic considering the medium these words are appearing in: a reading response, one meant to retrieve knowledge of the piece as well as synthesize the information into new questions or critiques. The authors, Peter Brown, Henry Roediger, and Mark McDaniel, would likely approve of this approach. Their book strongly emphasizes that the common models of learning, based on repetition and tight blocks of drilling, aren't as successful as they feel in the short term. Testing, according to their findings, is by far the most effective way of ensuring information and skills stick in the long term. Not only that, retrieval is essential for generating knowledge that can be built upon and modified, not merely used in its initial context.

The authors emphasize repeatedly that the dogma of repetition has cemented itself in much of education and sports largely through historical inertia. They claim that *this* remains the dominant strain of learning and practice. Simultaneously, however, they make sure to note the backlash against testing, especially of the standardized variety. While they make a convincing case for the use of testing as a study tool (rather than a "dipstick" (p. 21)), this discontent suggests that testing is at least common enough to be a cultural issue, regardless of the reasoning behind it. Are we really suffering from a dearth of testing in education, or of scenarios that force retrieval? Or is the problem merely how testing is conceived at this moment, an error to be rectified by educators?

---

## Feb 16 Tue: Cooperation

It's often assumed that humans are selfish and occasionally brutish, only seeking out their own ultimate goals. That's the entire rationale for the Hobbesian "Leviathan," and for the emphasis in some ideologies on repressing or harnessing these supposed "base" instincts. Martin Nowak's research, based on game theory and modeling, comes to a notably different conclusion: given repetition, familiarity, or the chance to be seen assisting others, people will come to favor collaboration as a means of survival and thriving. At a fundamental logical level, it makes sense that collaboration would aid general survival, and Nowak alludes to cooperative behaviors occurring even in non-sentient lifeforms (such as yeast). Often the basis for the belief in humanity's selfishness is Darwinian natural selection, but Darwin himself claimed that a tribe whose members "were always ready to aid one another, and to sacrifice themselves for the common good" would be naturally selected over "tribes" divided by selfish aims (as cited in Nowak, p. 38). While numerous situations obviously remain where collective action struggles, in myriad scenarios the cooperation and critical thinking of "the many" proves far more beneficial to all parties than individual gains would be alone. Nowak makes it clear: humans are the most collaborative species on Earth, and that is what has seated them at the top of the food chain.
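Nowak's "given repetition" point is commonly illustrated with the iterated prisoner's dilemma, and a tiny simulation makes the logic concrete. The sketch below is my own illustration rather than anything from the reading; the payoff numbers are the textbook defaults, and the strategies are the classic tit-for-tat and always-defect:

```python
# Minimal sketch of direct reciprocity via the iterated prisoner's
# dilemma. Payoff values are standard illustrative numbers (T > R > P > S),
# not figures taken from Nowak's book.
PAYOFFS = {  # (my_move, their_move) -> my score
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect on a cooperator (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Return total scores for two strategies over repeated play."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side reacts to the other's past
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    rounds = 200
    print("reciprocator pair:", play(tit_for_tat, tit_for_tat, rounds))
    print("TFT vs defector:  ", play(tit_for_tat, always_defect, rounds))
    print("defector pair:    ", play(always_defect, always_defect, rounds))
```

Over 200 rounds the defector only narrowly beats a reciprocator head-to-head (204 vs. 199), while a pair of reciprocators earns 600 each against the defector pair's 200. That is precisely the sense in which repetition makes cooperation the winning long-run strategy.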
The greatest challenge, in my mind, arises when this collaboration leads to developments that can morph into existential threats. For example, it takes the collaboration of millions to develop and power the technologies that have improved countless lives, but those same advances are slowly (or not so slowly) going to cause disastrous climate effects. The need for a kind of "counter-collaboration," one which goes against the previous arc of development, is challenging to conceive of. The deeper question: we may have the fundamental human nature to handle the task, but have institutional and societal barriers been erected that make turning the ship too difficult? And if we truly are best off in our tight bundles of "known" communities (150 members), what does that mean for the modern, isolated world we largely live in?

---

## Feb 23 Tue: Dark Web

In the past, I've jokingly referred to the internet as "the most double-edged of swords." If the standard, google-able internet has such stark pros and cons, the "Darknet" is an even starker case. As described by [David Kushner in his piece for Rolling Stone](https://www.rollingstone.com/politics/politics-news/the-darknet-is-the-government-destroying-the-wild-west-of-the-internet-198271/), the onion routing protocol that's foundational to the modern Darknet was originally created in the 1990s as a tool for state security and dissident privacy. The Tor browser was publicly released in 2003, both for altruistic reasons and simply to add more non-military users to the network. Now, however, with the concept popularly seen (by both the public and law enforcement) as a hive of scum and villainy, there's an active effort underway to decrypt the userbase, or at the very least to find successful ways of scraping the public information that can be found on secured illicit sites. As Kushner puts it, there is an almost comedic note to it all, as different American defense agencies are pitted against each other, trying either to break or to improve the onion routing system.

The efforts by the Naval Research Lab in the 90s, and now the decryption work by the FBI/NSA, strike me as an ongoing and largely unavoidable issue at the intersection of international security policy and technology: there's always blowback, and it's usually *quite* foreseeable, but policymakers go through with it anyway. The ends may justify the means, but there is a consistent trend of not realizing that the "ends" are usually not "the end" of the story. A similar example is the "Zero Day" hacks instigated by the US/Israeli governments with the "Stuxnet" worm and similar tools, which have since proliferated widely. Essentially, an exponentially increasing curve of issues ensues, as more fires pop up that will be put out in ways that only encourage worse damage later.

Reading about efforts such as the encryption/decryption of networks, or even the more "benign" reading on cryptocurrency, there's a distinct feeling of so many complicated and shifting technical issues creating an instability. Eventually the center will be unable to hold, and somehow all of it will come crashing down. Perhaps that's just my own pessimism and limited technical competency speaking, but I can't say these developments have me hopeful that they'll somehow generate improved quality of life in the long run.
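To check that limited technical competency against the concept itself: the core of onion routing can be sketched in a few lines. This is a deliberately toy illustration of the layering idea only; XOR stands in for real per-hop cryptography (actual Tor negotiates keys with each relay), and the relay names and keys here are placeholders:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher' (XOR with a repeating key). A stand-in
    for real per-hop encryption; not remotely secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Three relays, each holding its own symmetric key. In real onion
# routing these keys are negotiated per hop; here they are random
# placeholder bytes.
relays = ["entry", "middle", "exit"]
keys = {name: os.urandom(16) for name in relays}

message = b"meet at the usual place"

# The sender wraps the message in one layer per relay, innermost
# (exit) layer first, so each relay can peel exactly one layer.
onion = message
for name in reversed(relays):
    onion = xor_cipher(onion, keys[name])

# Each relay peels only its own layer as the packet travels.
for name in relays:
    onion = xor_cipher(onion, keys[name])
    print(f"{name} relay peels its layer -> {onion!r}")

assert onion == message  # only after the final hop is the plaintext visible
```

The point of the layering is that the entry relay knows the sender but sees only ciphertext, while the exit relay sees the message but not the sender. That separation is what enables both the privacy benefits and the law-enforcement frustration described above.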
Any time there's a hint of positivity (say, Tor being used for democracy protests), the actual benefits are soon reversed (decryption software getting those same activists jailed). Are we essentially doomed to a loop of advancement followed by blowback?