# Presentation of MONA at LinkedMusic
February 25, 2025 4pm
550 Sherbrooke West, 5th floor
(practice Feb 24 4pm)
## Introduction - Lena
**What is our goal with this meeting?**
- knowledge sharing, given the similarity of our infrastructures
- present how we did it → see whether they spot omissions/mistakes
- their presentation → do we have comments: dynamic flow, work on updating the "source" data
- consider collaborative work, in parallel? potential for pooling efforts in the longer term
### General presentation of the project
research project & non-profit organization
- activities
- open data
- mobile app: various content, from public art to heritage
- admin interface
## Importation - Simon
### Motivation
MONA's data come from many upstream sources, mainly governmental open data
portals.
We aggregate the data into our database, much like you do with your "Data lake".
As you surely know, this already presents a number of challenges in terms of data
normalization. Here we've found some linked data concepts handy, namely
reconciliation and data mapping (together, we call this "semantization").
This process is done manually and is quite laborious.
The challenge I'm working on now is keeping these data up to date as the
upstream sources are updated by the groups that maintain them.
We'd like to avoid redoing any laborious manual semantization. Ideally,
we'd minimize any further semantization (although some will, for us, be
inevitable). Before moving on, I'll point out that, as part of
semantization, we also correct some erroneous data found in our
upstream sources.
### Our Data and Our Sources
The data comprising our upstream sources are exclusively tabular, serialized in
JSON or CSV files that we either store locally or access over the web.
We get data from 10 sources (soon to be 11), detailing artworks, cultural
locations, and heritage sites.
Some examples of properties we'd record, say for an artwork, are its title, its
creator, its location, its dimensions, etc.
From these, we also build a table of the artists and institutions that
contributed to the different artworks.
***
I'll continue by first explaining how we import our data right now (that is,
without any logic for updating the data), and then I'll explain how we could
adapt our importing to include that logic.
### Importing the Sources
For each of the sources, using a mapping between the columns of the source and
those of our database, we copy the data from the source into our table. To know
into which row to copy a given source row, we use an invariant column of the
source to reconcile each row with the ID of a datum in our database.
In fact, we stage the data from the source before copying it into our database,
so that we can apply any corrections we may have. At this step, we also add
information about LOD authorities, if we have some reconciled URIs.
We also copy data from sources we deem less authoritative first so they may be
overwritten by other, more authoritative sources.
I'll reiterate: we loop over every source consecutively.
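As a toy sketch of this loop (all names and data below are invented for illustration, not our actual code):

```python
# Toy sketch of the per-source import loop described above; the names and
# data are hypothetical, not MONA's actual code.

# Sources listed from least to most authoritative, so a later (more
# authoritative) source overwrites an earlier one on conflicting fields.
sources_by_authority = [
    {"name": "portal_a", "rows": [{"id": "7", "title": "Old title"}]},
    {"name": "portal_b", "rows": [{"id": "7", "title": "Better title"}]},
]

database = {}  # our table, keyed by reconciled ID


def invariant_key(row, index):
    # JSON sources give us an ID; for CSV sources we fall back on row order.
    return row.get("id", str(index))


for source in sources_by_authority:
    for index, row in enumerate(source["rows"]):
        key = invariant_key(row, index)
        staged = dict(row)  # the staging step: corrections would happen here
        database.setdefault(key, {}).update(staged)

print(database)  # {'7': {'id': '7', 'title': 'Better title'}}
```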
#### Data Mapping
Tiffany has gone through the sources, detailing each column of their tabular
data. These columns are then mapped against the columns of our database tables.
This mapping informs the datatypes we use in our data lake, and tells us
whether type conversions are required prior to staging the data.
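For illustration, such a mapping might be represented like this (the column names and converters are made up):

```python
# Hypothetical mapping for one source: source column -> (database column,
# type conversion applied before staging).
COLUMN_MAP = {
    "Titre":    ("title",    str),
    "Artiste":  ("artist",   str),
    "AnneeFin": ("year",     int),    # source stores years as strings
    "Latitude": ("latitude", float),
}


def map_row(raw_row):
    """Translate one raw source row into our database schema."""
    return {
        db_col: convert(raw_row[src_col])
        for src_col, (db_col, convert) in COLUMN_MAP.items()
        if src_col in raw_row
    }


print(map_row({"Titre": "Exemple", "AnneeFin": "1992"}))
# {'title': 'Exemple', 'year': 1992}
```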
#### Reconciliation
As I've mentioned before, we choose an invariant for each source which we use for reconciliation.
For now, this invariant is the order in which the rows appear for CSV files,
and the provided IDs for JSON files.
We then reconcile the data in our sources with the data in our database using
the invariant.
We extracted the correspondences between the data using what we already had in
our database from our previous, ad hoc method of importing data.
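A minimal sketch of such a correspondence table (source names, invariants, and IDs are invented):

```python
# Hypothetical reconciliation table: (source, invariant) -> ID of the
# corresponding datum in our database, extracted from earlier imports.
reconciliation = {
    ("city_csv", 0): 101,           # CSV: the invariant is the row order
    ("city_csv", 1): 102,
    ("borough_json", "a8f3"): 101,  # JSON: the invariant is the provided ID
}


def our_id(source_name, invariant):
    # None means the row has no counterpart yet, i.e. it is a new datum.
    return reconciliation.get((source_name, invariant))


print(our_id("borough_json", "a8f3"))  # 101
```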
#### Conflicts Between Sources
We can have data which appear in one source but not the others; in that case
everything is straightforward, and the overwriting I mentioned earlier never
occurs.
If multiple sources contain data about the same artwork, or the same artist,
then we have a conflict.
This is when the notion of degree of authority comes into play. Two rows in two
source tables reconciled to the same row in our database will require us to pick
and choose which data ultimately end up in our database. In our case, the source
with the higher authority level wins out over the other.
Any datapoints found in both sources would end up coming from the more
authoritative source.
A more authoritative source "occludes" the data of a less authoritative one.
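A tiny worked example of this occlusion, with made-up field values:

```python
# Two source rows reconciled to the same artwork in our database.
less_authoritative = {"title": "Untitled", "material": "bronze"}
more_authoritative = {"title": "Sans titre"}  # says nothing about material

# Copy the less authoritative data first, then let the more authoritative
# source overwrite any shared fields.
merged = {**less_authoritative, **more_authoritative}
print(merged)  # {'title': 'Sans titre', 'material': 'bronze'}
```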
### Updating the Data
So now that I've given an overview of the importation of our data, I'll explain
how we would have to modify it to accommodate updating.
How can a source update its data?
Well, it can add new data, it can delete data, or it can change data.
The important thing to notice is that the data from a single source at two
different points in time can be seen as the data of two different sources at the
same time.
The loop I mentioned earlier is in fact a queue, to which we simply add
future, updated versions of a source. We "continue the loop" at a later time,
once some sources have been updated.
In effect, any changes to the data of a source correspond to conflicts between
sources.
Naturally, future versions of a source are more authoritative than earlier
versions. To avoid any degree-of-authority complexities, we can simply re-add
every source each time we do an update, in the same authority order.
As for additions, our importation scheme already handles them.
As for deletions, well, I'm still working on that, but we may simply keep the
old data; our database would then be append/modification only.
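A sketch of this queue-based scheme (hypothetical names again; deletions are simply ignored, leaving the old data in place):

```python
# Sketch of the import queue described above; names and data are invented.
from collections import deque

import_queue = deque()


def enqueue_all(sources_by_authority):
    # Each time any source is updated, re-enqueue every source in the same
    # authority order: newer versions come later, so they win conflicts.
    import_queue.extend(sources_by_authority)


def run_imports(database):
    while import_queue:
        source = import_queue.popleft()
        for index, row in enumerate(source["rows"]):
            key = row.get("id", str(index))  # the source's invariant
            # Additions create a new entry, changes overwrite the old one,
            # and rows deleted upstream are simply never touched again:
            # the database is append/modification only.
            database.setdefault(key, {}).update(row)
```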
## Semantization and Linked Open Data Export (LODExport)
We have an API endpoint called LODExport which exposes part of our data to be converted into LOD by LINCS. This endpoint includes the reconciliation with Wikidata and ULAN that we've already done manually, and we've worked with LINCS on a mapping between the columns of our database and the CIDOC-CRM ontology.
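For illustration, one exported record might look roughly like this (the shape, field names, and URIs below are invented, not the endpoint's actual output):

```python
# Hypothetical shape of one LODExport record: our mapped columns plus the
# manually reconciled authority URIs, ready for conversion to CIDOC-CRM.
record = {
    "title": "Example artwork",
    "creator": {
        "name": "Example Artist",
        "wikidata": "https://www.wikidata.org/entity/Q00000",  # dummy URI
        "ulan": "http://vocab.getty.edu/ulan/000000000",       # dummy URI
    },
}
```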
## MONAjout - Adding new items via app with Wikidata - Tiffany
### Motivation
- existing data sources (MONA DB) used by the MONA app don't have all the public art catalogued
- want to leverage the data available in Wikidata
- want to leverage user knowledge to augment our database
- example of a mural near McGill at 625 Milton that is not in our database, but that would be great to have catalogued https://www.mtl.org/fr/quoi-faire/culture-arts-patrimoine/murale-en-hommage-jean-paul-riopelle-montreal

### Solution

- add functionality to the app to allow users to look up the missing public artwork in Wikidata and request that it be added to the MONA database (see the sketch after this list) [Rough Wikidata Search Demo WIP](https://margelle.github.io/rapportMONA/monajoutdemo.html)
- if it's also not in Wikidata, present the user with a data entry form that will add it to both Wikidata and the MONA DB (along similar lines as https://db.simssa.ca/create/)
- limited to trusted users (MONA staff and volunteers) for now to avoid invalid/problematic entries, but in the long term we hope to create a verification system to permit crowdsourcing of entries
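As a rough sketch of such a lookup against Wikidata's public `wbsearchentities` API (the app's actual implementation may differ):

```python
# Minimal Wikidata search, similar in spirit to the demo linked above.
import requests


def search_wikidata(query, language="en"):
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": query,
            "language": language,
            "type": "item",
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()
    # Each match carries a QID, a label and a short description.
    return [
        (m["id"], m.get("label", ""), m.get("description", ""))
        for m in response.json()["search"]
    ]


for qid, label, description in search_wikidata("Jean-Paul Riopelle mural"):
    print(qid, label, description)
```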
## Varia
Comments and questions from review of the [SIMSSADB repo Wiki](https://github.com/ELVIS-Project/simssadb/wiki)
- authority control https://github.com/ELVIS-Project/simssadb/wiki/6.-Upload-Form-Documentation#authority-control : consideration for MONA later if everyone is allowed to suggest additions
- provenance chains https://github.com/ELVIS-Project/simssadb/wiki/3.-Data-Model-Issues#where-did-you-get-this-information : how to deal with partial updates rather than new additions?
- they are using Django Autocomplete Light (DAL) since custom autocomplete didn't work out https://github.com/ELVIS-Project/simssadb/wiki/6.-Upload-Form-Documentation#authority-control ; does something similar exist for Laravel/PHP? autocomplete would be a good quality-of-life improvement
- search query formatting is rigid (little/no forgiveness for apostrophes and spelling) https://github.com/ELVIS-Project/simssadb/wiki/1.-User-Manual#query-formatting
- Celery Python library (task queue): investigate or ask why it is used
- [GitHub issue "add back wikidata"](https://github.com/ELVIS-Project/simssadb/issues/395): the discussion in this issue is very similar to the SensPublic Pepi Wikidata discussion about free-form vs rigid categories
- perhaps prefill the DB from Wikidata / cache it for faster access? https://github.com/ELVIS-Project/simssadb/issues/179
## Potential questions
- Why don't we have Wikidata in our data yet?
    - we started the project working with open data (JSON, CSV) from open data portals
    - these are data produced by institutions/authorities
    - since these data get updated, the challenge is adapting our infrastructure to updates and to the growing number of open data sources
## Meeting notes
Kyrié, Anna
historical scores: more like a mnemonic tool
introductions
MONA intro
- think about introducing the GLAM+ term :)
Simon's presentation
- point out the "recursive-seeming" aspect of MONA being a source for MONA
- hierarchy for import source authority, data occlusion
- internal fixes to the data: when new (updated) upstream data arrive → MONA takes precedence, but what if we "became" wrong → would need to flag it
- what type of change might happen? change in location (removal for construction)
- no info on artists? reconciled with Wikidata and ULAN → the right thing to do
user feedback
- get info about when people add stuff
### from now on
we can announce any collaboration for grants, McGill, if it might help for a grant
their project
- only Wikidata, no attempt to align: everything stays separate in the data lake and gets put together in SPARQL queries. The same composer will have several IDs, but they are reconciled inside Wikidata
- greedy learning vs lazy learning (machine learning) → proposes "lazy searching"
- natural language queries for
instruments project: local names for every musical instrument people have
- crowdsourcing local instrument names and get pictures as well
- use approximate coordinates for local variants?
- how to authenticate uploads to Wikidata: become the proxy for the users
have a workshop on how to crowdsource Wikidata as proxies?
initially, map the properties, use OpenRefine to reconcile, then the updates change the contents
send them news when we work on stuff, e.g. the publication of the SensPublic article
can help with grant writing → recommends CRSH (not NG!)
best bet is FRQSC team grants (5 years, renewable once, if you change the PI)
CIRMMT: keeps renewing by changing the PI
## Debrief
MONA icons + colours
poster or demo at DIRO: abstract due March 10