# Computational Inequalities

_Practice-based sessions by Joana Chicau — MA Internet Equalities, UAL 2022_

# :: 01 :: Intro Data, Datasets and Computational Inequalities

> 1. Invent new ways to represent uncertainty, outsides, missing data, and flawed methods
> 2. Invent new ways to reference the material economy behind the data.
> 3. Make dissent possible

[What would feminist data visualization look like?](https://civic.mit.edu/2015/12/01/feminist-data-visualization/)

![](https://i.imgur.com/AJnrZdk.jpg)

The Library of Missing Datasets, by Mimi Onuoha (2016), is a list of datasets that are not collected because of bias, lack of social and political will, and structural disregard. Courtesy of Mimi Onuoha. Photo by Brandon Schulman.

* [Mimi Onuoha](https://mimionuoha.com/what-is-missing)
* [Eyeo video lecture by Mimi Onuoha](https://vimeo.com/233011125) — highlights: approx. 12min40sec–23min20sec.

### Plotting Data: Acts of Collection and Omission

> "… no 'data' pre-exist their parameterization. Data are capta, taken not given, constructed as an interpretation of the phenomenal world, not inherent in it." — Johanna Drucker

[Plotting Data](http://plottingd.at/a/introduction.html#the-design-of-datasets)

> "In our data-driven society, it is too easy to assume the transparency of data. Instead, we should approach data sets with an awareness that they are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them," says Yanni Alexander Loukissas, Assistant Professor of Digital Media in the School of Literature, Media, and Communication at Georgia Tech. This talk sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. It then provides a set of practical guidelines to follow. These findings are based on a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the myth of "digital universalism," this work reminds audiences of the meaning-making power of the local. — [All Data Are Local](https://soundcloud.com/berkmanklein/all-data-are-local-thinking-critically-in-a-data-driven-society)

✨ **References** ✨ 👀

* [Caroline Sinders](https://carolinesinders.com/wp-content/uploads/2020/05/Feminist-Data-Set-Final-Draft-2020-0526.pdf)
* [Exposing AI](https://exposing.ai/datasets/)
* [Algorithms Allowed](http://algorithmsallowed.schloss-post.com/algorithms_allowed.html)
* [Plotting Data: 'Enron Corpus'](http://plottingd.at/a/introduction.html#encoding-culture-enron-corpus)
* [Di.Versions](https://di.versions.space/the-weight-of-things/index.html)
* [Joana Moll w/ Tactical Tech](https://www.janavirgin.com)
* [Gender Shades](http://gendershades.org/)
* [Nora Al Badri](https://www.nora-al-badri.de/works-index)
* [Anna Ridler](https://annaridler.com/works)

---

### ▶ → ▪ TASK ~ 30min ▫ ↜ ◄

▫ Write down your own definition of data and datasets;
▫ What were the main questions and critiques raised during today's session?
▫ Refer to relevant examples mentioned during class or others you already know;
▪ Start writing down ideas on what datasets you would be interested in — working with or making yourself — and what the next steps would be.

---

![](https://i.imgur.com/pP0clZF.jpg)

> CODED BIAS explores the fallout of MIT Media Lab researcher Joy Buolamwini's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
* [Coded Bias](https://www.codedbias.com/) + [link to full movie](https://www.youtube.com/watch?v=xu6rwo_Y1vQ) — highlights: approx. 2min + 6min + 10min + 26min18sec;
* [Big Brother Watch](https://bigbrotherwatch.org.uk/)

---

# :: 02 :: Web Scraping

> Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. [Web scraping on Wikipedia](https://en.wikipedia.org/wiki/Web_scraping)

* [Article on Web Scraping](https://exposingtheinvisible.org/en/guides/scraping/)

### Structuring Data Files

_File Formats_

**JSON**
* [JS Objects](https://techstacker.com/display-javascript-objects-in-html/)
* [JS Object Maps](https://www.w3schools.com/js/js_object_maps.asp)
* [JSON](https://www.w3schools.com/js/js_json_syntax.asp)
* [Example of JSON on the GitHub API](https://api.github.com/users/JoBCB)
* [JSON to CSV](https://konklone.io/json/)

**CSV**
* [CSV](https://www.computerhope.com/issues/ch001356.htm)
* [Setting up your CSV file - Gov.uk](https://www.gov.uk/guidance/using-csv-file-format#setting-up-your-csv-file)
* [Datasheets for Datasets](https://arxiv.org/pdf/1803.09010.pdf)

### Web Scraping Text

* EXERCISE *

* [Build a web scraper with Node.JS](https://pusher.com/tutorials/web-scraper-node/)
* 01. create a new folder and name it, e.g. 'webscraping';
* 02. check whether you have npm and Node by typing in the terminal, one line after the other:
> npm -v
> node -v
* if you get a number as output, that is the version you already have installed; if not, install [NPM](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm):
> npm install -g npm
* and download and install [Node](https://nodejs.org/en/download/);
* 03. [download the Text Script File](https://git.arts.ac.uk/jchicau/Computational_Inequalities/tree/master/webscraping) and add it to the 'webscraping' folder;
* 04. in the terminal, go to the location of the 'webscraping' folder by typing its path, e.g.:
> cd /Users/joanachicau/Documents/Project/workshops/UAL—CCI/CCI-MA/CompInequalities/webscraping
* 05. then hit enter, and install the three libraries: [Axios](https://github.com/axios/axios), [Cheerio.js](https://cheerio.js.org/) and [Puppeteer](https://github.com/puppeteer/puppeteer):
> npm install axios cheerio puppeteer --save
* 06. finally, run the code from the 'textscraper.js' file in the terminal:
> node textscraper.js
* 07. to quit the program, press the keys 'control' + 'C' in the terminal;

🌈 🖥 **More Webscraping Tools** 🖥 🌈
* [Web Scraping with node-fetch](https://www.scrapingbee.com/blog/node-fetch/) — this script uses [Node and NPM](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and [Cheerio.js](https://cheerio.js.org/), and requires [node-fetch](https://www.npmjs.com/package/node-fetch);
* [Scrape Airbnb script](https://github.com/nonlinearnarrative/scrape-airbnb)
* [Use JavaScript Fetch API to Get JSON Data and display on an HTML file](https://www.digitalocean.com/community/tutorials/how-to-use-the-javascript-fetch-api-to-get-data);
* [Image URL Script - NPM](https://www.npmjs.com/package/images-scraper)
* [Mozilla Addon](https://addons.mozilla.org/en-US/firefox/addon/downthemall/)

**Working with Data and APIs in JavaScript**
* [Video series](https://www.youtube.com/playlist?list=PLRqwX-V7Uu6YxDKpFzf_2D84p0cyk4T7X) by The Coding Train;

✨ **Project References** ✨ 👀
* [Algorithms Allowed](http://algorithmsallowed.schloss-post.com/algorithms_allowed.html)
* [The Dating Brokers](https://datadating.tacticaltech.org/)
* [Get Well Soon](https://getwellsoon.labr.io/)
* [Privacy Policy Generator](https://github.com/ellennickles/personalized-privacy-policy)

---

# :: 03 :: Text Dataset + ML5

![](https://i.imgur.com/YBLva72.jpg)

[Tega Brain, Get Well Soon](http://tegabrain.com/Get-Well-Soon)

### ML5 Text

* EXERCISE *
We will look into [ml5js — sentiment](https://learn.ml5js.org/#/reference/sentiment)!

* 01. You can download the code files from [git.arts](https://git.arts.ac.uk/jchicau/Computational_Inequalities/tree/master/Sentiment_Interactive_MLP5JS).
* 02. Open the terminal and 'cd' into the same folder location as the previous code files.
* 03. We will install [npmjs http-server](https://www.npmjs.com/package/http-server); type in the terminal:
> npm install -g http-server
* or, if you get a permissions error:
> sudo npm install -g http-server
* 04. once installed, type in the terminal:
> http-server
* 05. now you can visit the index.html file inside the folder by opening the URL http://localhost:8080 in your browser;
* 06. to quit the program, press the keys 'control' + 'C' in the terminal;

🌈 🖥 **more tools** 🖥 🌈
* [MIMIC: Sentiment](https://mimicproject.com/code/62050fce-d4a9-7aaa-e563-51a8992e1d45)
* [AFINN-111 Sentiment Analysis](https://www.youtube.com/watch?v=uw3GbsY_Pbc);
* [word2vec (ml5js)](https://learn.ml5js.org/#/reference/word2vec)
* [What is word2vec? - Programming with Text by P5.js](https://www.youtube.com/watch?v=LSS_bos_TPI);
* ["Experimental Creative Writing with the Vectorized Word" by Allison Parrish](https://www.youtube.com/watch?v=L3D0JEA1Jdc);
* [CharRNN (ml5js)](https://learn.ml5js.org/#/reference/charrnn) — find a working CharRNN example [here](https://ml5js.github.io/ml5-examples/p5js/CharRNN/CharRNN_Text_Stateful/);
* [Databasic](https://databasic.io)

✨ **project references** ✨ 👀
* [Allison Parrish](https://portfolio.decontextualize.com/)
* [Ruben Van de Ven](https://rubenvandeven.com/choose-how-you-feel-you-have-seven-options)

---

# :: 04 :: Image Dataset + ML5

![](https://i.imgur.com/kno1Ifx.jpg)

> Laws of Ordered Form (2020) is a two-part video work and a downloadable handmade dataset created by first taking thousands of photographs of images found in Victorian and Edwardian-era encyclopaedias and then manually reclassifying them. The work calls attention to how echoes of historic taxonomies and beliefs can still be heard in modern implementations of machine learning. By collapsing this moment of history with today's current concerns around dataset bias, the piece emphasises the problems with classification without thought, and considers the histories that remain in our present, even within the latest technologies. — Anna Ridler

__About ImageNet__

> A dataset in computer vision is a curated set of digital photographs that developers use to test, train and evaluate the performance of their algorithms. The algorithm is said to learn from the examples contained in the dataset. What learning means in this context has been described by Alan Turing (1950): "it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc." A dataset in computer vision therefore assembles a collection of images that are labeled and used as references for objects in the world, to 'point things out' and name them. [An Introduction to Image Datasets by Nicolas Malevé](https://unthinking.photography/articles/an-introduction-to-image-datasets)

* [Sample](https://github.com/ajschumacher/imagen)
* [Intro to image datasets](https://unthinking.photography/articles/an-introduction-to-image-datasets);
* [Categories](https://image-net.org/challenges/LSVRC/2017/browse-det-synsets)
* [IMAGENET_CLASSES](https://github.com/ml5js/ml5-library/blob/main/src/utils/IMAGENET_CLASSES.js)

### ML5 Image

* EXERCISE *

* We will look into [ml5js — image classification](https://learn.ml5js.org/#/reference/image-classifier)!
* 01. You can download the code files from [git.arts](https://git.arts.ac.uk/jchicau/Computational_Inequalities/tree/master/Image_Classification_MLP5JS).
* 02. Open the terminal and 'cd' into the same folder location as the previous code files.
* 03. type in the terminal (if you haven't installed it yet, see the previous session on how to install [npmjs http-server](https://www.npmjs.com/package/http-server)):
> http-server
* 04. now you can visit the index.html file inside the folder by opening the URL http://localhost:8080 in your browser;
* 05. to quit the program, press the keys 'control' + 'C' in the terminal;

🌈 🖥 **tools** 🖥 🌈

_Image Dataset (MobileNet) + ML5_
* [Beginner's Guide to Machine Learning in JavaScript with ml5.js](https://www.youtube.com/watch?v=26uABexmOX4&list=PLRqwX-V7Uu6YPSwT06y_AEYTqIwbeam3y)
* [Intro ml5js image-classifier](https://learn.ml5js.org/#/reference/image-classifier)
* [ml5.js: Image Classification with MobileNet](https://youtu.be/yNkAuWz5lnY?t=231)
* [hello-ml5](https://learn.ml5js.org/#/tutorials/hello-ml5)
* [ml5js — ImageClassification](https://editor.p5js.org/ml5/sketches/ImageClassification)
* [MIMIC: mobilenet](https://mimicproject.com/code/45e317ca-2edb-f7a0-141c-a6e462f9243d) (using the camera);

✨ **project references** ✨ 👀
* [Normalizing — by Mushon Zer-Aviv](https://normalizi.ng/)
* [On scene net datasets — HOMESCHOOL by Simone C Niquille / Technoflesh](https://www.technofle.sh/hs/homeschool.php)
* [1000 Synsets](http://javierlloret.com/1000-synsets.html)
* [3D objects](https://www.technofle.sh/hs/homeschool.php)
* [15 min fame](http://www.marnixdenijs.nl/15-minutes-of-biometric-fame.htm)
* [Surveillance Paparazzi](https://driesdepoorter.be/surveillance-paparazzi/)
* [Coco Dataset](http://plottingd.at/a/introduction.html#dataset-for-4-year-olds-coco)
* [Facing the Future, Kate Crawford](https://soundcloud.com/the-observatory-1/episode-113-facing-the-future)
* [Face Swap](https://github.com/deepfakes/faceswap#faceswap-has-ethical-uses)

---

# :: 05 :: Tutorials

⏰ approx.
10min per person;

---

# :: 06 :: Community ML tools

![Algorithms of Our Own — image design by Nina Shahriaree.](https://i.imgur.com/Xa509y7.png)

Algorithms of Our Own, a project by Feminist.AI — image design by Nina Shahriaree.

### * EXERCISE *

__in groups:__ Analyse two of the project references below and compare them on the following points:
* what are the main aims of the project / community?
* what strategies do you identify for audience outreach and participation?
* what methods are used to develop such projects within particular communities?
* what strategies do you identify around the documentation of tools and projects?

__individually:__ Now, looking at your own project, reflect on the points below:
* who is the audience for your project?
* how can your project contribute to further discussion within your audience and field of practice?
* what strategies around documentation of research and practice can you adopt to best communicate your project's process and outcome?

You may include examples or illustrate your points by mentioning specific projects presented throughout the course.

✨ **project references** ✨ 👀
* [Feminist Internet](https://www.feministinternet.com/envisions) — in a society that is entangled with AI, what role do young people have in developing its future? How will they choose to live with AI and how might they change it?
* [Feminist.AI](https://www.feminist.ai/) works to put technology into the hands of makers, researchers, thinkers and learners to amplify unheard voices and create more accessible AI for all.
* [Posthuman Feminist AI](https://www.posthumanai.com/): Creating with Unheard Voices in AI Design
* [Citizen Sense](https://citizensense.net/) — democratizing environmental data.
* [City Digits](https://civicdatadesignlab.mit.edu/City-Digits) develops and pilot-tests innovative tools that help high school students learn mathematics for investigating their local environment.
* [Identity 2.0](https://identity20.org/) — 'We want you (yes you), to feel empowered by the choices you have with your data. We want to start conversations about data in places that it did not exist previously and hold the bigger players accountable for how irresponsible they have been with our data.'

🌈 🖥 **tools** 🖥 🌈
* [MIMIC](https://mimicproject.com) is a web platform for the artistic exploration of musical machine learning and machine listening.
* [Wekinator](http://www.wekinator.org/example-projects/) allows anyone to use machine learning to build new musical instruments, gestural game controllers, computer vision or computer listening systems, and more.

### ⏰ 30min-45min Joana available for tutorial time;

---

# :: 07 :: Tracking Body Movement and Gestures

![](https://i.imgur.com/zBMNryY.jpg)

[Prospectus For A Future Body, Choy Ka Fai](https://www.ka5.info/prospectus.html)

* ['How artificial intelligence will transform how we gesture'](https://theconversation.com/how-artificial-intelligence-will-transform-how-we-gesture-91697)
* ['New surveillance tech means you'll never be anonymous again'](https://www.wired.co.uk/article/surveillance-technology-biometrics)

### * Warm-Up Exercise *

_Hacking Choreographies:_ in groups, start by investigating how the PoseNet script operates and compare it with one of the other tools referenced below. How is the body perceived? What is considered a 'body'?
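One concrete way into these questions is the data structure PoseNet itself returns: a "body" is a fixed vocabulary of 17 keypoints (nose, eyes, ears, shoulders, elbows, wrists, hips, knees, ankles), each with a position and a confidence score. A minimal sketch in plain JavaScript, assuming the keypoint shape that ml5's poseNet callback returns (`{part, position: {x, y}, score}`); the sample pose object below is invented for illustration:

```javascript
// PoseNet answers "what is a body?" with a fixed list of named keypoints,
// each carrying an (x, y) position and a confidence score between 0 and 1.
// This helper keeps only the parts the model is reasonably confident about,
// a quick way to probe which bodies (or body fragments) the model "sees".
function confidentParts(pose, minScore = 0.5) {
  return pose.keypoints
    .filter((kp) => kp.score >= minScore)
    .map((kp) => kp.part);
}

// Invented sample data in the shape of an ml5 poseNet result:
const pose = {
  keypoints: [
    { part: "nose", score: 0.98, position: { x: 301, y: 112 } },
    { part: "leftEye", score: 0.95, position: { x: 290, y: 101 } },
    { part: "leftWrist", score: 0.12, position: { x: 170, y: 340 } }, // occluded
  ],
};

console.log(confidentParts(pose)); // → [ 'nose', 'leftEye' ]
```

Lowering `minScore` makes the model "see" more of the body, at the cost of trusting noisier guesses; comparing what survives the threshold across different poses, bodies and lighting conditions is one way to investigate whose bodies the model perceives well.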
🌈 🖥 **tools** 🖥 🌈
* [Posenet](https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=posenet)
* [ml5.js: Pose Estimation with PoseNet](https://thecodingtrain.com/learning/ml5/7.1-posenet.html)
* [ml5.js: PoseNet webcam](https://editor.p5js.org/ml5/sketches/PoseNet_webcam)
* [ml5.js: PoseNet image](https://editor.p5js.org/ml5/sketches/PoseNet_image_single)
* [AIxDesign Workshop: Playing with PoseNet in ML5 with Erik Katerborg](https://www.youtube.com/watch?v=dnk6kT38sBo&list=PLMJwm6AuaSjeUG35dw6KLemURR_2QWgcy&index=1)
* [MIMIC: Bodypix Soundscape](https://mimicproject.com/code/2fdd8ba2-3cb8-1838-49a5-fe9cfe6650ed)
* [MIMIC posenet](https://mimicproject.com/code/48d5b6d9-794e-97d4-a16e-4780cf6c4a8c)
* [Tensorflow](https://www.tensorflow.org/js)
* [Three.js](https://threejs.org/)
* [threejs pose](https://threejs.org/examples/webgl_loader_mmd_pose.html)
* [mannequin.js](https://boytchev.github.io/mannequin.js/)
* [List of resources on 'Human Motion'](https://github.com/derikon/awesome-human-motion)
* [pose and motion](https://github.com/daitomanabe/Human-Pose-and-Motion)
* [Choreo-Graph](https://github.com/mariel-pettee/choreo-graph)
* [Everybody Dance Now](https://github.com/nyoki-mtl/pytorch-EverybodyDanceNow)
* [AI Choreographer](https://google.github.io/aichoreographer/)

✨ **project references** ✨ 👀
* [Shenzhen, Joana Moll](http://www.janavirgin.com/shenzhen.html)
* [Kate Sicchio](https://www.sicchio.com/)
* [Choreographic Coding Labs](http://choreographiccoding.org/) — projects and tools;
* [Bali Motion Capture recording from LuYang](http://luyang.asia/2020/11/20/bmw-art-journey-bali-motion-capture/)
* [Choy Ka Fai](https://www.ka5.info/index.html)
* [Discrete Figures, Daito Manabe](https://youtu.be/hauXQQhwbgM)
* [Building with Posenet by Computational Mama](https://computationalmama.xyz/projects/buildingwithposenet/)

### ⏰ 30min-45min Joana available for tutorial time;

### * TASK *

For next class, bring your WIP project and prepare a set of images and/or text that meaningfully represents it.

---

# :: 08 :: Performing a Dataset

> "To dramatize data, you must first understand it. You analyse it, play with it, try to find relationships, try to infer the events that took place, and extract stories and meaning.
>
> And then you throw it all away. Chuck it in the bin, and wipe your hands clean. All that's left is your understanding of the processes that gave rise to the data and the events and relationships within." — Memo Akten and Liam Young

![](https://i.imgur.com/w9Ycele.jpg)

> What will be the role of humans in a future AI-driven world? As algorithms begin to optimize nearly every interaction and aspect of our lives, the last remaining role for people may be performing the emotional labor to act as human interface to AI. The 24h HOST performance is a small party that lasts for 24 hours, driven by software that automates the event, embodied in human HOST.

* [by Lauren McCarthy](https://lauren-mccarthy.com/24h-HOST)

---

![](https://i.imgur.com/qBS6ojm.jpg)

> "Through these participatory processes of roleplay, rehearsal and speculation, the pre-enactment is a tool for understanding prediction and reclaiming contested futures."

* [by Lily McCraith](https://lilymccraith.net/Pre-Enacting-Predictions)

---

![](https://i.imgur.com/l6lsHhW.jpg)

> Hello Hi There uses the famous television debate between the philosopher Michel Foucault and linguist/activist Noam Chomsky from the Seventies as inspiration and material for a dialogue between two custom-designed chatbots: every evening, these computer programs, designed to mimic human conversations, perform a new – as it were, improvised – live text.
* [Hello, Hi There by Annie Dorsen](https://anniedorsen.com/projects/hello-hi-there/)
* [Hello, Hi There video](https://vimeo.com/194697514?embedded=true&sour)

![](https://i.imgur.com/dQSjs7r.jpg)

* ['Anatomies of Intelligence', a project by Joana Chicau and Jonathan Reus](https://anatomiesofintelligence.github.io/)

### * Exercise *

Choose one of the techniques below, from a system devised to give the audience a way of transforming daily news articles, or any non-dramatic pieces, into theatrical scenes.

* Simple Reading: the news item is read detached from the context of the newspaper (which makes it false or controversial).
* Crossed Reading: two news items are read in alternating form, complementing or contrasting each other in a new dimension.
* Complementary Reading: information generally omitted by the ruling class is added to the news.
* Rhythmical Reading: the article is read to a (musical) rhythm, so that the rhythm acts as a critical "filter" of the news, revealing the true content initially concealed in the newspaper.
* Parallel Action: actors mime the actions as the news is being read. One hears the news and watches its visual complement.
* Improvisation: the news is improvised on stage to exploit all its variants and possibilities.
* Historical: data from historical moments, events in other countries, or other social systems are added to the news.
* Reinforcement: the article is read accompanied by songs, slides, or publicity materials.
* Concretion of the Abstract: abstract content in the news is made concrete on stage, e.g. hunger, unemployment, etc.
* Text out of Context: the news is presented out of the context in which it was originally published.

[by Augusto Boal, Theatre of the Oppressed](https://en.wikipedia.org/wiki/Theatre_of_the_Oppressed)

✨ **project references** ✨ 👀
* ['Anatomies of Intelligence' project](https://anatomiesofintelligence.github.io/)
* [Memo Akten on performance](https://www.memo.tv/works/#performance)
* ['Data Dramatization' article](https://memoakten.medium.com/data-dramatization-fe04a57530e4)
* ['Data Meditation' project](https://chatty-pub.hackersanddesigners.nl/Network_Imaginaries#ch-18datameditation)
* ['Data Visceralization' article](https://data-feminism.mitpress.mit.edu/pub/5evfe9yd/release/5)
* [Transcultural Data Pact](https://decal.furtherfield.org/2019/11/22/transcultural-data-pact-larp/) explores how personal and collective data practices and devices shape the attitudes and fortunes of societies.
* [Trailer: The Transcultural Data Pact - A Live Art Action Research Role Play](https://vimeo.com/470585528)
* [Lecture: How live action role play could fix real-world social problems](https://theodi.org/event/odi-fridays-how-live-action-role-play-could-fix-real-world-social-problems/) — Artistic Director Ruth Catlow talks about how participation in scenarios in live action role play (LARP) leads to powerful group-driven discovery, rich research data, and potential real-world answers.

---

# :: 09 :: Tutorials

⏰ approx. 10min per person;

---

# ::: More References and Resources :::

* [Creative AI Lab database and resources](https://creative-ai.org/)
* [Awesome Machine Learning Demos](https://github.com/MilesCranmer/awesome-ml-demos)
* [Raw Graphs Tool](https://www.rawgraphs.io/)
* [Inspect Element: a practitioner's guide to auditing algorithms and hypothesis-driven investigations](https://inspectelement.org/)

### Open Datasets

_eg. open Image Datasets:_
* [Exposing AI](https://exposing.ai/datasets/)
* [Body-human-grasping-of-objects](https://ps.is.tuebingen.mpg.de/code/grab-a-dataset-of-whole-body-human-grasping-of-objects)
* [UCI edu datasets](https://archive.ics.uci.edu/ml//index.php)

_eg.
open Text Datasets:_
* [Wellcome collection datasets](https://developers.wellcomecollection.org/datasets)
* [Components datasets](https://components.one/datasets)
* [Google Dataset Search](https://datasetsearch.research.google.com/)
* [Commoncrawl](https://commoncrawl.org)
* [Dataphys Resources List](http://dataphys.org/workshops/tei16/data-sets/)

### Project References

* [AI CHEATSHEET](https://aicheatsheet.comuzi.xyz/)
* "In Decision Space, visitors were invited to assign all the images available on the gallery's website to one of four categories: Problem, Solution, Past and Future. In addition to overlaying all images on the gallery's website with the classification system, [decision-space.com](http://sebastianschmieg.com/decision-space/) provided a focused environment for further decision and click-work on the dataset."
* [Excavating AI](https://excavating.ai/)
* [Our Data Bodies (ODB)](https://detroitcommunitytech.org/?q=content/our-data-bodies-digital-defense-playbook)
* [Data Garden](https://www.datagarden.org/post/richard-lowenberg-interview)
* [Dear Data](https://www.dear-data.com/theproject)
* [Qualified Selves - Tracking Data Differently](https://sensemake.org/resources/showcases/)
* [Humans of AI](https://humans-of.ai/)
* [This is the Problem, the Solution, the Past and the Future](http://this-is-the-problem-the-solution-the-past-and-the-future.com/dataset/problem)
* [Ground Truth](https://www.thegreeneyl.com/ground-truth)
* [Fatal Migrations](https://projects.theintercept.com/fatal-migrations/)

### Further Readings

* [Feminist Data Visualization by Kanarinka](https://www.kanarinka.com/wp-content/uploads/2015/07/IEEE_Feminist_Data_Visualization.pdf)
* [Intersectional AI Toolkit](https://intersectionalai.miraheze.org/wiki/Intersectional_AI_Toolkit)
* [Critically Conscious Computing](https://criticallyconsciouscomputing.org/#/)
* [Data Feminism by Catherine D'Ignazio and Lauren F. Klein](https://data-feminism.mitpress.mit.edu/)
* [What Data Visualization Reveals: Elizabeth Palmer Peabody and the Work of Knowledge Production by Lauren Klein](https://hdsr.mitpress.mit.edu/pub/oraonikr/release/1)
* [Data Art & Data Skills Toolkit](https://www.artdatahealth.org/dataskillstoolkit/data-art/)
* [Datastori.es Podcast](https://datastori.es/)
* [Fundamentals of Data Visualization](https://clauswilke.com/dataviz/index.html)
* [Data Science for Whom?](https://data-feminism.mitpress.mit.edu/pub/vi8obxh7/release/3)
* [Pattern Discrimination](https://meson.press/books/pattern-discrimination/)
* [On Data, Invisible Infrastructures: Surveillance Architecture](https://labs.rs/en/)
* [Just Tech library](https://www.zotero.org/groups/2664469/just_tech/library)
* [List of Dataviz books](https://informationisbeautiful.net/visualizations/dataviz-books/)
* [Understanding Data by Lev Manovich](http://manovich.net/content/04-projects/106-data/manovich.data_article.2020.pdf)
* [Reading List: Toward ethical, transparent and fair AI/ML: a critical reading list for engineers, designers, and policy makers](https://github.com/rockita/criticalML)

__More on the theme of Data:__
* [Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/)
* [Machine Unlearning: Data, Diversity and Determination | Nicholas Kelly](https://www.machineunlearning.co.uk/episodes/nicholaskelly)
* [Security by Design: A Human Rights Centered Design Curriculum & Methodology](https://www.secureux.design/)
* [XYZ Information Activism](https://xyz.informationactivism.org/en/)
* [Data as Culture](https://culture.theodi.org/)
* [Hip-Hop Vocabulary](https://pudding.cool/projects/vocabulary/index.html)
* [Flowing Data Case Studies](https://flowingdata.com/)
* [Data Workers](https://dataworkers.org/)
* [Algolit](https://www.algolit.net/index.php?title=Main_Page)
* [We and AI podcast](https://soundcloud.com/weandai)