Recently, I took a class on ML competitions. In the final week, we had a hackathon that brought together art, machine learning, and collaborative teamwork. Our project centered on building a system to enhance art appreciation in museum and gallery settings.
Introducing the "Art Guide" Project
The core objective of our task was to build a system that provides auditory descriptions of artworks. This system, named "Art Guide," lets users capture an image of an art piece with their device and hear an audio description of it in response. This has the potential to enrich cultural experiences in spaces where traditional audio guides are absent.
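To make the flow concrete, here is a minimal sketch of the capture-to-audio pipeline. The retrieval step is stubbed out, and gTTS is used for speech synthesis purely for illustration; the function names here are assumptions, not the project's actual API.

```python
from gtts import gTTS  # pip install gTTS

def match_artwork(photo_path: str) -> dict:
    """Stubbed retrieval step: the real system would embed the user's photo
    and look up the nearest artwork in an image index."""
    return {"title": "The Starry Night",
            "description": "An oil painting by Vincent van Gogh from 1889."}

def art_guide(photo_path: str, out_path: str = "guide.mp3") -> str:
    """Turn a photo of an artwork into a spoken audio file."""
    artwork = match_artwork(photo_path)
    text = f"{artwork['title']}. {artwork['description']}"
    gTTS(text).save(out_path)  # synthesize the description to an MP3 file
    return out_path

print(art_guide("photo.jpg"))
```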
To delve deeper into the technical aspects of our project, you can find the full codebase on our GitHub Repository.
Data Collection
Our journey began with data collection. Our data collection team scraped around 200,000 images from WikiArt, along with corresponding descriptions and pertinent metadata such as artist details and creation dates. While the WikiArt data formed the bulk of our collection, we also sought additional information from Wikipedia. Unfortunately, only a small fraction of the artworks had matching articles there.
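As an illustration of that Wikipedia-matching step, here is a hedged sketch using the `wikipedia` Python package; the lookup strategy is an assumption for illustration, not necessarily the pipeline we actually ran.

```python
import wikipedia  # pip install wikipedia

def enrich_with_wikipedia(title: str, artist: str) -> str | None:
    """Try to find a Wikipedia summary for an artwork; return None on a miss."""
    for query in (f"{title} ({artist})", title):
        try:
            page = wikipedia.page(query, auto_suggest=False)
            return page.summary
        except (wikipedia.exceptions.PageError,
                wikipedia.exceptions.DisambiguationError):
            continue  # most WikiArt entries have no matching article
    return None

print(enrich_with_wikipedia("The Starry Night", "Vincent van Gogh"))
```

A miss rate this high is expected: Wikipedia covers famous works well, but the long tail of 200,000 WikiArt entries has no dedicated articles.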