---
title: "Data Ethics Club discusses: Dataism Is Our New God"
tags: Data Ethics Club
---

# Data Ethics Club discusses [Dataism Is Our New God](https://onlinelibrary.wiley.com/doi/epdf/10.1111/npqu.12080)

<!--Please don't edit the info panel below-->
:::warning
<!--Please don't edit this warning panel-->
### :arrow_forward: What's this document?
__[Data Ethics Club](https://github.com/very-good-science/data-ethics-club/#Data-ethics-club)__ is an every-other-week "journal" club/discussion group for Data Science and Ethics. Sign up to our mailing list [here](http://eepurl.com/hjkmnX) :sparkles:.

This is a summary of Wednesday 31st March's Data Ethics Club discussion, where we spoke and wrote about the piece [Dataism Is Our New God](https://onlinelibrary.wiley.com/doi/epdf/10.1111/npqu.12080), which is an interview with Yuval Noah Harari. The summary was written by Nina Di Cara, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club". Natalie Thurlby edited parts of the first draft and added some additional contribution summaries.
:::

---

## What is *dataism*, really?

Harari's *dataism* is based on believing in the judgement of algorithms in the same way that we may believe in the decisions of gods. The volume of data being amassed in our digital world is giving humans power that we have never had before, in medical discovery, personalisation, surveillance and more. Harari argues that this data gives humans god-like powers, making algorithms the new higher being of decision making. Unlike other data ethics pieces, Harari isn't interrogating how data science fits into existing power structures, but framing data science as a separate system of authority.

In our group discussions we could certainly see some of the similarities between *dataism* and religion, and tended to chalk these up to themes of abdicating responsibility for outcomes, and a lack of control over the decisions made about us by systems we cannot see or understand. In the same way that people have historically attributed events that they didn't understand to gods, the same language is being transferred to how we can be ['blessed' by algorithms](https://link.springer.com/article/10.1007/s00146-020-00968-2).

That said, we weren't wholly convinced that the 'belief' went any further than this. Flatly comparing dataism to religion is abrasive: to many, religion is much more than an authority system. Arguably, the religion that is organised around the authority has its own societal power, beyond that of the "source" of the authority itself ([thanks Terry Pratchett](https://en.wikipedia.org/wiki/Small_Gods)). We felt the data-as-religion metaphor was missing a proviso like this.

## Do we have the server space?

Even while recognising that there is a great belief in the potential of data, we were skeptical about the ability of humans to recreate god-like power through algorithms. [General intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) is not exactly around the corner, and while there are useful things that you can do with AI, they're not god-like. If Apple and Google know more about us than we know ourselves, as Harari alluded to, then their targeted advertising doesn't show it! Besides, even if we had the data, do we really have enough server space to make a god? We wondered whether this piece is intentionally hyperbolic about the abilities of AI, in order to scare people into considering AI's potential negative impacts.
Or perhaps it doesn't matter whether AI has these abilities; much as religion is built on belief, simply believing that algorithms can make decisions is enough for us to give them the authority to do so. The drive for a quantified self is strong: so many of us want to know how many steps we've done or monitor our heart rate, and we want to believe that the information we get is true.

Given our skepticism, the god-like algorithms that Harari describes felt more like the Wizard of Oz: a fallible man behind a curtain. As with [stochastic parrots](http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf), AI seems impressive until you understand the nuts and bolts. But we are probably an audience who have the background knowledge to understand what is behind the curtain - not everyone does.

## Knowing enough to know better

Since those attending Data Ethics Club tend to have some understanding of data, one theme that came up was data literacy. Those of us in the room were probably more likely than the general public to know how poor algorithmic decisions can be. We identified education around data and AI as a way to avoid being overly accepting of algorithmic decisions.

But what does good data education look like? It might mean better general information literacy, or having a good understanding of your rights. For instance, in European law (under GDPR) you're entitled to some sort of explanation of automated decisions made about you - how many people know about and use this to their advantage? This isn't common knowledge - it was news to some of us.

As well as education for the general public, we need good ethics education for the people building systems. We could make use of codes of professional conduct for this kind of thing (e.g. [the BCS has one](https://www.bcs.org/membership/become-a-member/bcs-code-of-conduct/)), if we assume corporations aren't going to work to the standards we would hope for.

## Rise of the curly fry haters

Harari's vision of personalised oppression was also a topic of discussion. We weren't sure about this, given that the algorithms would be making decisions from underlying data that [potentially describes broader groups anyway](https://www.pnas.org/content/pnas/110/15/5802.full.pdf). That is, unless new sub-groups develop from the data that were never conceived by humans, and become the new basis for algorithmic discrimination. For example, what if liking curly fries is associated with high income? The curly fry haters are unlikely to rise up as a group, and it would be hard to pinpoint the reason for the discrimination in order to organise around it.

It might be possible for the algorithmically oppressed to become their own category for activism. However, this depends entirely on how transparent the algorithms are. If we don't know that we are being unfairly treated, or why, then how can we fight it?

## Our science fiction future

Sci-fi recommendations and discussion featured heavily this week as we contemplated our possible dystopian AI futures! That said, Harari's outlook didn't seem to evoke fear about algorithms; it was framed more as an inevitable outcome, which makes a nice change from [the usual panics that seem to go around as new technologies are introduced](https://journals.sagepub.com/doi/full/10.1177/1745691620919372). Whilst we might not live out a sci-fi novel, we agreed that the narratives around AI created by literature and the media are hugely important in framing how people think about it.
When AI is framed as infallible or even magical, it's no wonder it seems attractive! Overall, we mostly took *dataism* with a pinch of skepticism ourselves, but it did make excellent food for thought.

---

## Voting

- 91% (10/11 voters) felt that the content sparked interesting discussion.
- 55% (6/11 voters) would recommend the content itself to others.

---

## Further recommendations based on this piece

This week's discussion prompted lots of recommended follow-on content!

Some papers:

- [Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3792772), Wachter, Mittelstadt, and Russell, 2021.
- [The Sisyphean Cycle of Technology Panics](https://journals.sagepub.com/doi/full/10.1177/1745691620919372), Orben, 2020.
- ["Blessed by the algorithm": Theistic conceptions of artificial intelligence in online discourse](https://bvlsingler.files.wordpress.com/2020/05/blessed-by-the-algorithm-paper-b-singler-2020.pdf), Singler, 2020.

And some books:

- Any of Harari's other books!
- Mindf*ck - Christopher Wylie
- Weapons of Math Destruction - Cathy O'Neil (+2)
- Brave New World - Aldous Huxley
- Brave New World Revisited - Aldous Huxley
- Your Computer is on Fire, particularly "Your AI is a Human" - Sarah Roberts
- Small Gods - Terry Pratchett

---

## Attendees

Note: this is not a full list of attendees, only those who felt comfortable sharing their names.

- Natalie Thurlby, Data Scientist, University of Bristol, [NatalieThurlby](https://github.com/NatalieThurlby/), [@StatalieT](https://twitter.com/StatalieT), :sun_with_face:
- Nina Di Cara, PhD-ing, University of Bristol
- Huw Day, Maths PhDoer, University of Bristol
- Tessa Darbyshire, Scientific Editor, Patterns, @TessaDarbyshire, tdarbyshire@cell.com
- Vanessa Hanschke, PhD Interactive AI, University of Bristol
- Matthew West, RSE, University of Exeter
- Emma Tonkin, Digital Health, University of Bristol, 🤔
- Paul Lee, investment world
- Ruth Drysdale, Jisc
- [Zoë Turner](https://twitter.com/Letxuga007), Senior Information Analyst, Nottinghamshire Healthcare NHS Foundation Trust
- Robin Dasler, data-related software product manager
- Emma Kuwertz, Data Scientist, University of Bristol
- Kamilla 'Milli' Wells, Citizen Developer, Australia :birthday: