Let's get Physical
MDEF: Measuring the world / A world in data activity report.
Vania Bisbal, Francisca Herrera, Anna Fedele, Sophie Marandon, Carmen Robres
objectives
-> hypothesis
We wanted to collect data on emotions and feelings, something connected to our identity and mental health. It was hard to come up with a specific "we want" statement because we kept thinking about what we wanted to measure and experiment with instead of a main objective. These are the topics that came up:
Once we settled on our objective, "We want to use music to make people feel better", we began brainstorming questions about music and its effects on us. These are the questions that came up:
Statements:
Can music change your emotions? -> Music cannot change your emotions
Does music help you connect? -> Music does not help us connect
Objective: We want to use music to make people feel better
Question: Can music change your emotions?
When defining the broad questions, we all reasoned by looking at what we could achieve as a data visualisation, and were therefore guided by data collection methodologies without first dwelling on the topic itself.
hypothesis
-> data
The tool we used involves physical interaction. Initially, we intended to use the camera on the Raspberry Pi, connected to an AI capable of recognizing emotions, to observe changes through facial expressions.
For the physical interaction tool, we edited two videos: one with a sad context and the other with a relaxed context, each using different songs. In the sad video, we started with a sad music track and then transitioned to a happy one. In the relaxed video, we began with a relaxed song and then switched to an anxious one.
video 1
https://youtu.be/vgvYHuiI424
video 2
https://youtu.be/WweWlX8CcDY
To assess in a physical manner, we created billboards containing the questions:
Each person who responded placed a sticker on the billboard next to their answer. We kept track of each person by assigning numbers to them and their stickers.
It is possible to replicate the intervention by rethinking the methodology: associating songs of different moods with images that generate confusion or a change of mood.
It would also be interesting to replicate the same physical intervention more rigorously, for example by studying in depth how two groups of people react to videos with different music and comparing them. Testing more combinations of the same music would add further detail and accuracy.
In our data collection we tried not to use technological tools, so we manually transcribed the answers from our board into an Excel file to organise the trends.
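The transcribed board answers can be tallied with a short script. The sketch below is only illustrative: it assumes each answer was recorded as a (participant number, video, answer) row, which is an assumption, not our actual spreadsheet layout.

```python
from collections import Counter

# Each record: (participant number, video, answer on the board).
# These values are illustrative placeholders, not our real survey data.
responses = [
    (1, "sad_to_happy", "happier"),
    (2, "sad_to_happy", "no change"),
    (3, "sad_to_happy", "happier"),
    (1, "relaxed_to_anxious", "more anxious"),
    (2, "relaxed_to_anxious", "more anxious"),
    (3, "relaxed_to_anxious", "no change"),
]

def tally(responses, video):
    """Count how many stickers landed on each answer for one video."""
    return Counter(answer for _, vid, answer in responses if vid == video)

print(tally(responses, "sad_to_happy"))
```

A tally like this makes trends visible per video without any charting tools.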
Even visually, it was noticeable that the second video produced a more marked difference in votes, while the first produced more confusion.
Final result:
2 cardboard display boards
pens
stickers
computer or phone
headphones
We placed the survey on two display boards, each with a graph indicating the intensity of an emotion from left to right: happy/sad on one and relaxed/anxious on the other.
We asked our classmates and people at Itnig Coffee to watch one video, answer the respective question with a sticker, and then watch the other video and answer the question linked to that one.
Link for the excel -> https://docs.google.com/spreadsheets/d/1eJO85jfhDWHWYISVO-MCWwmm5ERGMplby8LVj6mTOoc/edit?usp=sharing
When we made the posters, we had people vote first for one part of the video and then for the other, using anonymous stickers. We then realised it was in our interest to also collect the variation of emotion person by person, instead of only the collective mood, so we could have reconfigured the graphic we submitted.
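Because each sticker carried a participant number, that person-by-person variation could be computed directly. A minimal sketch, assuming each participant's answer is mapped to a numeric intensity (our boards used sticker positions, not numbers, so the scale here is an assumption):

```python
# Hypothetical ratings per participant on a -5..5 scale, where negative
# means sadder/more anxious and positive means happier/more relaxed.
# Invented values for illustration only, not our collected data.
before = {1: -3, 2: -1, 3: 0}
after = {1: 2, 2: -1, 3: 3}

def per_person_variation(before, after):
    """Return each numbered participant's change in emotion intensity."""
    return {pid: after[pid] - before[pid] for pid in before}

print(per_person_variation(before, after))  # {1: 5, 2: 0, 3: 3}
```

Tracking the delta per participant, rather than only the totals, would have captured exactly the individual mood changes we missed.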
data
-> information
| Data Summary | |
|---|---|
| Capture Start | 07-02-2024 |
| Capture End | 08-02-2024 |
| Original Data Format | Visual graphic |
| Total Data Points | 42 |
| Number of Datasets | 1 |
We generated two graphs:
Video one - From sad to happy
Observations:
Video two - From relaxed to anxious
Observations:
Our aim was to process the physically collected data and visualize it digitally.
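As a minimal, dependency-free sketch of that digital step, sticker tallies can be rendered as a quick text bar chart. The answer labels and counts below are invented for illustration, not our actual results:

```python
def text_bar_chart(counts):
    """Render sticker counts as a simple text bar chart."""
    width = max(len(label) for label in counts)
    lines = [f"{label.ljust(width)} | {'#' * n} ({n})"
             for label, n in counts.items()]
    return "\n".join(lines)

# Invented counts for illustration only.
video_one = {"sadder": 3, "no change": 8, "happier": 5}
print(text_bar_chart(video_one))
```

The same counts could of course feed a plotting library instead; the point is that once the board is transcribed, any digital visualisation follows directly.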
We managed to prove our hypothesis wrong: music can change your emotions, but the mood change depends on the individual's background.
In the sad/happy video, the choice of images and the second track confused some people because they recognized the song and the part of the film we used, so their personal tastes influenced the survey.
We would also like to incorporate more emotions into the answer billboards. As mentioned before, we would also implement a better method for tracking each person's response.
For future explorations, integrating technology to delve deeper into these topics, like measuring heart rate, would be intriguing. This could enable subtler data collection. Additionally, evaluating these changes with personalized music for the listener could be considered.
Additionally, it would be interesting to go beyond the experience of video and music and collect data on how music changes the interaction between people in a specific context.
Through another exploration, we could gather more in-depth data by increasing the number of participants, dividing them into two groups, and showing each group only one of the videos. This approach would allow us to compare the new data with the original dataset and observe whether the trends remain consistent.