# Bodies & Algorithms 💻💃🏻
*by Joana Chicau, April 2024*
## Introduction

Joana Chicau is a graphic designer, coder and researcher with a background in dance. She researches the intersection of the body with the constructed, designed, programmed environment, aiming to widen the ways in which the digital sciences are presented and made accessible to the public.
She has been actively participating in and organizing events involving multi-location collaborative coding, algorithmic improvisation, and discussions on digital equity and activism.
She is currently an associate lecturer and PhD student at the Creative Computing Institute, University of the Arts London (UAL).
.・゚゚・ [joanachicau.com](https://joanachicau.com/)
.・゚゚・ [post.lurk.org/@joanachicau](https://post.lurk.org/@joanachicau)
.・゚゚・ [are.na/joana-chicau](https://www.are.na/joana-chicau/web-choreographies-other-stories)
.・゚゚・ [linkedin joana-chicau](https://www.linkedin.com/in/joana-chicau/)
.・゚゚・ [twitter.com/BChicau](https://twitter.com/BChicau)
### Inspiration ✨ 👀
**[Bali Motion Capture, 2020, LuYang](http://luyang.asia/2020/11/20/bmw-art-journey-bali-motion-capture/)**

'Lu Yang’s BMW Art Journey entitled “Human Machine Reverse Motion Capture Project” examines how the human body can be trained to overcome its physical limitations and explores its deployment in historical and present-day cultures. Her research looks into how humans negotiate their evolving relationship with machines that may ultimately surpass human limitations. Steeped in the latest digital technologies, Lu employs sophisticated motion-capture devices to record dancers’ gestures, including facial, finger- and eye-capture techniques that can collect and analyze the subtlest body movements, and mimic these using robotic technologies.' [source](https://www.press.bmwgroup.com/global/article/detail/T0308333EN/bmw-art-journey-winner-lu-yang-completes-first-leg-of-her-travel-to-bali-indonesia-using-motion-capture-technology-lu-captures-facial-and-body-expressions-of-balinese-dancers-and-collaborates-with-japanese-dancer-kenken).
**[Prospectus For A Future Body, 2011, Choy Ka Fai](https://www.ka5.info/prospectus.html)**

Can we design future memories for the body?
Is the body itself the apparatus for remembering cultural processes?
Prospectus For a Future Body proposes new perspectives on how the body remembers and invents technological narratives. Central to the project is the study of body movement in dance: how it can evolve, adapt or be re-conditioned for possible futures.
**[Untitled Algorithmic Performance, 2016, Kate Sicchio](https://www.sicchio.com/)**

Algorithms are driving our lives. How can we harness algorithms for art and use them within a digital performance practice? When can machine learning be used to expand and explore a creative process? By working with the machine-learning algorithm t-SNE, data can be performed through live coding. Technically, t-Distributed Stochastic Neighbor Embedding (t-SNE) is a dimensionality-reduction technique used to visualize high-dimensional datasets; it appears in biomedical research, computer security and other fields that work with large datasets. Within this dance, images of bodies in motion are fed into the algorithm, producing new possibilities for live performance and the creation of new choreographic scores.
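For readers new to the technique, the standard t-SNE formulation (van der Maaten & Hinton, 2008) can be summarized as follows; this is the general algorithm, not the specifics of Sicchio's performance. Pairwise affinities between high-dimensional points are matched, through a Kullback-Leibler divergence, to similarities between their low-dimensional map positions:

```latex
% Affinity of x_j to x_i under a Gaussian kernel; the bandwidth \sigma_i is
% tuned per point so that the distribution matches a user-chosen perplexity.
p_{j|i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}
               {\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)},
\qquad
p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}

% Similarity of the map points y_i, y_j under a heavy-tailed Student-t kernel.
q_{ij} = \frac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}
              {\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}}

% The map is found by minimizing the divergence between the two distributions.
C = \mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}
```

The heavy-tailed kernel in the map space is what lets similar images of moving bodies gather into tight clusters while dissimilar ones are pushed far apart, which is precisely the structure a choreographic score can be read from.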
**[Prepared Performance for Ice Arena in Shenzhen, 2010, Joana Moll](http://www.janavirgin.com/shenzhen.html)**

This piece is part of a global performing project that uses surveillance cameras from all over the world as a recording tool for real-time streaming shows. The ones being observed become performers with an audience, so the last step of the surveillance chain belongs not to those in charge of keeping the area secure but to those watching the performers. Roles change while levels of observation expand and security methods are called into question. This piece takes place inside an ice-skating arena in a mall in Shenzhen, China.
### Work Selection
**Web Choreographies** (2019) w/ Jip de Beer

Screenshot from a performance integrating 3D visualizations of live-coded web environments.
* [Video trailer](https://vimeo.com/318721981);
* [Project's website](https://joanachicau.com/web_choreographies.html);
**Anatomies of Intelligence** (2018-) by Joana Chicau and Jonathan Reus

Screenshot from AoI's virtual theatre
Anatomies of Intelligence (AoI) is an artistic research project that investigates connections between the processes of anatomical knowledge and the “anatomy” of computational learning and prediction: datasets and machine learning models.
* [Anatomies of Intelligence website](https://anatomiesofintelligence.github.io/);
**The Shape of Things to Come** (2019) collaboration with Replica

THE SHAPE OF THINGS TO COME is a practice-led research project exploring how theatre techniques can be employed to reverse-engineer cognition and emotions, to invent new forms of communication between human and machine, and to rehearse future societies.
* [Replica Institute](https://replica.institute/);
* [The Shape of Things to Come video](https://vimeo.com/371160085);
* [Related article](https://pan.ecphras.ist/Projects/The_Shape_of_Things_to_Come.html);
**Dancing at the Edge of the World** (2020) collaboration with Replica

A speculative performance piece exploring the intersections between the body-based practices promoted by Grotowski’s Theatre Laboratory and Haraway’s feminist theory, Dancing at the Edge of the World is based on a multimedia non-linear narrative that shifts the focus of knowledge production for AI-design by rooting it in the experience of the body.
Biometric information, harvested in real time from the moving bodies of the performers, is translated into sonified cues and becomes entangled with the performance scores (voice & movement). The performers’ bodies generate traces in space, a living cartography of their capabilities and limitations. The technology becomes a medium for interconnecting, as well as for interrogating, the embodied agencies engaged in a ritual act (of survival). The piece will be enacted by 6–7 performers. A minimal sonification sketch follows the links below.
* [Dancing at the Edge of the World](https://berlin-open-lab.org/portfolio/dancing-at-the-end-of-the-world/);
* [Video documentation](https://www.youtube.com/watch?v=Nj4xrpGgW08);
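The production code of the piece is not published here, so the following is only an illustrative sketch of the sonification step described above: using the browser's Web Audio API, a hypothetical real-time biometric value (a heart rate in beats per minute) is mapped onto the pitch of an oscillator. The `onHeartRate` handler and the simulated sensor loop are stand-ins for whatever device actually feeds the performance.

```js
// Illustrative sonification sketch (not the piece's actual code): map a
// hypothetical biometric stream, a heart rate in BPM, to oscillator pitch.
const ctx = new AudioContext();

const osc = ctx.createOscillator();
const gain = ctx.createGain();
osc.type = 'sine';
osc.frequency.setValueAtTime(440, ctx.currentTime);
gain.gain.value = 0.2;                      // keep the output quiet
osc.connect(gain).connect(ctx.destination);
osc.start();

// Linearly map a value from one range to another, clamped to the output range.
function map(value, inMin, inMax, outMin, outMax) {
  const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
}

// A sensor API would call this with live readings; 60-120 BPM becomes
// 220-880 Hz, glided over 100 ms to avoid clicks.
function onHeartRate(bpm) {
  const freq = map(bpm, 60, 120, 220, 880);
  osc.frequency.linearRampToValueAtTime(freq, ctx.currentTime + 0.1);
}

// Fake a slowly drifting pulse for testing, four readings per second.
setInterval(() => onHeartRate(90 + 30 * Math.sin(Date.now() / 5000)), 250);
```

Browsers generally block audio until a user gesture, so in practice `ctx.resume()` would be wired to a click or key press before the oscillator becomes audible.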
# Data are Us: the personal, the sensitive and the intimate
*'New surveillance tech means you'll never be anonymous again. Forget facial recognition. Researchers around the world are creating new ways to monitor you. Lasers detecting your heartbeat and microbiome are already being developed...'* [Source: Wired Magazine 2019](https://www.wired.co.uk/article/surveillance-technology-biometrics)
PERSONAL DATA
> “Personal data are defined in the European General Data Protection Regulation (GDPR) as any information through which a person can be directly or indirectly identified. Examples include a person’s phone number or email address, directly associated with her, as well as the WiFi access points she connects to through her mobile device,…” [Gómez Ortega et al., 2024, p. 2]
SENSITIVE DATA
> “GDPR defines sensitive data as a special category of personal data that includes racial or ethnic origin, political opinions, religious or philosophical beliefs; trade-union membership; genetic data, biometric data processed solely to identify a human being; health-related data; and data concerning a person’s sex life or sexual orientation[, which] expose people and lead to inferences about their behavior and experiences.” [Gómez Ortega et al., 2023, p. 1-2]
INTIMATE DATA
> “intimate data referring to personal information about intimate practices (e.g., cooking, sleeping, showering) taking place in intimate spaces (e.g., home) and bodily experiences (e.g., menstruating, urinating).” [Gómez Ortega et al., 2023, p. 2]
>
> “The voice is biometric information and is considered sensitive under the GDPR if used to identify a person.” [Gómez Ortega et al., 2023, p. 2]
Bibliography
Gómez Ortega, A. et al. (2024) ‘Dataslip: Into the Present and Future(s) of Personal Data’, in Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction. TEI ’24: Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, Cork Ireland: ACM, pp. 1–14. Available at: https://doi.org/10.1145/3623509.3633388.
Gómez Ortega, A., Bourgeois, J. and Kortuem, G. (2023) ‘What is Sensitive About (Sensitive) Data? Characterizing Sensitivity and Intimacy with Google Assistant Users’, in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. CHI ’23: CHI Conference on Human Factors in Computing Systems, Hamburg Germany: ACM, pp. 1–16. Available at: https://doi.org/10.1145/3544548.3581164.
## Current PhD research
**[Choreographing You: Dear User, are you ready to move?, 2023, Joana Chicau](https://re-coding.technology/choreographing-you/)**

A contribution to 're-coding everyday technology' VOL. 1, which gathers 12 positions by artists, researchers and designers questioning the default conditions and circumstances of everyday technology.
For more on my current research, watch the recording on [Human-Computer Counter-Choreographies](https://ual.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?id=770a9a95-0fd2-4dc9-902d-b09600b3d006&start=3.274308) at PGR Weekly Meet-ups (08/11/2023).
🌈 🖥 **Further References** 🖥 🌈
* [Posenet](https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=posenet)
* [ml5.js: Pose Estimation with PoseNet](https://thecodingtrain.com/learning/ml5/7.1-posenet.html) (see the minimal sketch at the end of this list)
* [ml5.js: PoseNet webcam](https://editor.p5js.org/ml5/sketches/PoseNet_webcam)
* [ml5.js: PoseNet image](https://editor.p5js.org/ml5/sketches/PoseNet_image_single)
* [PoseNet GitHub Page](https://github.com/tensorflow/tfjs-models/tree/master/posenet)
* [Real-time Human Pose Estimation in the Browser with TensorFlow.js](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)
* [Pose Estimation TensorFlow Article](https://www.tensorflow.org/lite/models/pose_estimation/overview)
* [Building with Posenet by Computational Mama](https://computationalmama.xyz/projects/buildingwithposenet/)
* [AIxDesign Workshop: Playing with PoseNet in ML5 with Erik Katerborg](https://www.youtube.com/watch?v=dnk6kT38sBo&list=PLMJwm6AuaSjeUG35dw6KLemURR_2QWgcy&index=1)
* [Facemesh](https://learn.ml5js.org/#/reference/facemesh)
* [Handpose](https://learn.ml5js.org/#/reference/handpose)
* [MIMIC: Bodypix Soundscape](https://mimicproject.com/code/2fdd8ba2-3cb8-1838-49a5-fe9cfe6650ed)
* [MIMIC posenet](https://mimicproject.com/code/48d5b6d9-794e-97d4-a16e-4780cf6c4a8c)
* [Tensorflow](https://www.tensorflow.org/js)
* [Three.js](https://threejs.org/)
* [threejs pose](https://threejs.org/examples/webgl_loader_mmd_pose.html)
* [mannequin.js](https://boytchev.github.io/mannequin.js/)
* [CO/DA live-coding tool](https://saralaoui.com/2023/09/co-da/)
* [List of resources on 'Human Motion'](https://github.com/derikon/awesome-human-motion)
* [pose and motion](https://github.com/daitomanabe/Human-Pose-and-Motion)
* [Choreo-Graph](https://github.com/mariel-pettee/choreo-graph)
* [Everybody Dance Now](https://github.com/nyoki-mtl/pytorch-EverybodyDanceNow)
* [AI Choreographer](https://google.github.io/aichoreographer/)
* [Choreographic Coding Labs](http://choreographiccoding.org/) — projects and tools;
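To make the PoseNet entries in the list above concrete, here is a minimal webcam sketch in the style of the linked ml5 examples. It assumes the ml5.js v0.x API (`ml5.poseNet`), which those examples use; newer ml5 releases replace this model with `bodyPose`, so treat the sketch as a starting point rather than a current reference.

```js
// Minimal pose-estimation sketch with p5.js + ml5.js (v0.x API):
// track poses from the webcam and draw the detected keypoints.
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();                              // we draw the frames ourselves

  // Load PoseNet on the video stream and subscribe to its pose events.
  const poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);

  // Draw a red dot on every keypoint detected with enough confidence.
  for (const { pose } of poses) {
    for (const k of pose.keypoints) {
      if (k.score > 0.3) {
        fill(255, 0, 0);
        noStroke();
        ellipse(k.position.x, k.position.y, 10, 10);
      }
    }
  }
}
```

From here, the keypoint coordinates can drive anything: sound, live-coded CSS, or the kind of choreographic scores discussed throughout this page.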