Former filename:
Readme.md
# PTM: towards an annotation model
What does the sharedocs space contain?
Sharedocs folder of the documents:
https://sharedocs.huma-num.fr/#/HOME/Perso/PTM
4 folders:
* CORPUS_1898
Setup of the annotation corpus, phase 1 = manual annotation
* DATA -> the data
  * at the root, the raw editions (without annotation)
  * GOLD = manual annotations by Alena (via Prodigy)
  * AUTOM = annotations produced by applying a model
* MODEL -> the resulting models (via Prodigy)
  * a 14-label model
  * a 12-label model (merge: NUM_GLOB & NUM, PERS and PRENOM). This second model was built for illustration purposes.
=> Once the errors have been typed, we will need to decide which model to create/keep.
* SCRIPT -> the scripts used to manipulate the data
* jsonl_ToModel.py:
  script that takes as input a raw file (without annotation - DATA/GOLD folder) -> (as output) an automatically annotated file (AUTOM folder)
* jsonl_CptEn.py:
  counts the number of entities in the annotated files (DATA/AUTOM)
* modifEtiquettes.py:
  script to merge entity labels (e.g. NUM_GLOB into NUM)
# Description of the procedures:
## Manual annotation
The annotation manual:
https://docs.google.com/document/d/1QoVk4bx4mgWC6rLIpB2CpzvkYvsoSOoL2VzhP1cJBJI/edit
[Alena] We annotated a few pages of the 1898 edition with Prodigy.
Pages 1167 to 1201 were annotated.
* DATA folder -> raw file:
> p1167-1201_xmlPage.jsonl
* DATA/GOLD folder -> annotated file:
> annot_Alena_14etiquettes.jsonl
## Creating a model
See the notebook:
https://github.com/fmelanie/fromAnnotationsToModel/blob/main/annotationsToModel.ipynb
Prodigy is used to create the model:
> prodigy train --ner ptm_Alena_test -LS MODEL/output_model_14labels
where:
* -LS :: to get per-label statistics
* MODEL/output_model_14labels = name of the output directory
The resulting shell output is:
> ℹ Using CPU
========================= Generating Prodigy config =========================
ℹ Auto-generating config with spaCy
✔ Generated training config
=========================== Initializing pipeline ===========================
[2022-04-25 16:30:09,941] [INFO] Set up nlp object from config
Components: ner
Merging training and evaluation data for 1 components
[ner] Training: 392 | Evaluation: 97 (20% split)
Training: 195 | Evaluation: 46
Labels: ner (14)
[2022-04-25 16:30:10,237] [INFO] Pipeline: ['tok2vec', 'ner']
[2022-04-25 16:30:10,240] [INFO] Created vocabulary
[2022-04-25 16:30:10,240] [INFO] Finished initializing nlp object
[2022-04-25 16:30:11,567] [INFO] Initialized pipeline components: ['tok2vec', 'ner']
✔ Initialized pipeline
============================= Training pipeline =============================
Components: ner
Merging training and evaluation data for 1 components
[ner] Training: 392 | Evaluation: 97 (20% split)
Training: 195 | Evaluation: 46
Labels: ner (14)
ℹ Pipeline: ['tok2vec', 'ner']
ℹ Initial learn rate: 0.001
|E|#|LOSS TOK2VEC|LOSS NER|ENTS_F|ENTS_P|ENTS_R|SCORE|
|---|---|---|---|---|---|---|---|
|0|0|0.00|133.34|0.06|2.13|0.03|0.00|
|1|200|1313.54|7723.26|96.57|96.48|96.65|0.97|
|2|400|286.93|1090.45|97.29|97.82|96.77|0.97|
|3|600|546.63|894.78|97.85|97.84|97.87|0.98|
|5|800|254.63|488.87|97.90|97.78|98.02|0.98|
|6|1000|314.74|400.49|97.83|97.70|97.96|0.98|
|8|1200|549.02|316.90|97.94|97.84|98.05|0.98|
|9|1400|360.15|320.40|98.07|97.82|98.31|0.98|
|11|1600|329.16|276.22|98.06|97.96|98.16|0.98|
|13|1800|377.14|265.20|97.90|97.73|98.08|0.98|
|15|2000|693.03|301.09|97.95|97.73|98.16|0.98|
|18|2200|499.73|288.01|98.02|97.99|98.05|0.98|
|20|2400|396.63|188.88|98.14|98.02|98.25|0.98|
|24|2600|606.93|294.45|98.09|97.99|98.19|0.98|
|28|2800|520.56|236.49|97.86|97.75|97.96|0.98|
|33|3000|887.81|342.58|98.14|97.99|98.28|0.98|
|38|3200|669.43|252.42|98.09|97.91|98.28|0.98|
|44|3400|493.57|214.63|97.97|97.84|98.11|0.98|
|49|3600|355.02|148.72|98.17|98.05|98.28|0.98|
|55|3800|512.43|156.28|98.15|98.05|98.25|0.98|
|60|4000|502.57|124.27|98.12|98.08|98.16|0.98|
|65|4200|440.06|119.41|98.21|98.11|98.31|0.98|
|71|4400|576.80|111.11|98.28|98.23|98.34|0.98|
|76|4600|730.22|122.67|97.94|97.90|97.99|0.98|
|82|4800|457.13|78.25|98.17|98.08|98.25|0.98|
|87|5000|3084.66|249.73|97.86|97.67|98.05|0.98|
|92|5200|1126.26|124.52|98.24|98.11|98.37|0.98|
|98|5400|265.95|56.73|98.21|98.05|98.37|0.98|
|103|5600|313.77|69.77|98.18|98.14|98.22|0.98|
|109|5800|748.27|95.30|98.00|97.87|98.13|0.98|
|114|6000|934.81|125.97|98.20|98.08|98.31|0.98|
> ✔ Saved pipeline to output directory
output_model/model-last
=============================== NER (per type) ===============================
||P|R|F|
|---|---|---|---|
|NUM|99.22|99.38|99.30|
|SPATIAL|98.59|89.17|93.65|
|PERS|94.89|97.81|96.33|
|VILLE|99.19|98.58|98.88|
|VOIE|99.78|100.00|99.89|
|RUE|98.92|99.35|99.13|
|NUM_PERS|99.56|99.78|99.67|
|STATUT|97.27|97.27|97.27|
|ORG|79.49|83.78|81.58|
|GER|100.00|100.00|100.00|
|LOC|100.00|96.67|98.31|
|PRENOM|44.44|80.00|57.14|
|NUM_GLOB|50.00|33.33|40.00|
|PART|100.00|100.00|100.00|
NB:
* PRENOM = makes few mistakes, but misses a lot
(still, 80... is well below the 96 to 99 of the other labels)
* the best model should be:
|E|#|LOSS TOK2VEC|LOSS NER|ENTS_F|ENTS_P|ENTS_R|SCORE|
|---|---|---|---|---|---|---|---|
|98|5400|265.95|56.73|98.21|98.05|98.37|0.98|
= where the loss (TOK2VEC and NER) is lowest
### COUNTING the entities
* EITHER with the Python script (jsonl_CptEN.py)
> python jsonl_CptEN_annotManuelle.py
|Annotated|Number|
|---|---|
|NUM|3084|
|PERS|2327|
|STATUT|510|
|GER|191|
|VILLE|2423|
|VOIE|2254|
|RUE|2273|
|NUM_PERS|2247|
|LOC|313|
|SPATIAL|623|
|PART|32|
|ORG|195|
|NUM_GLOB|32|
|PRENOM|29|
|sum of annotations / NE|16533|
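As a rough, self-contained sketch of what such a counting script presumably does (field names follow the Prodigy jsonl format, where each record carries a "spans" list with a "label" key; the sample records below are invented):

```python
import json
from collections import Counter

def count_entities(jsonl_lines):
    """Count entity labels across Prodigy-style jsonl records."""
    counts = Counter()
    for line in jsonl_lines:
        record = json.loads(line)
        for span in record.get("spans", []):
            counts[span["label"]] += 1
    return counts

# Tiny in-memory example (two annotated records):
lines = [
    '{"text": "Dupont, 12 rue de la Paix", "spans":'
    ' [{"start": 0, "end": 6, "label": "PERS"}, {"start": 8, "end": 10, "label": "NUM"}]}',
    '{"text": "Martin (E.)", "spans": [{"start": 0, "end": 6, "label": "PERS"}]}',
]
counts = count_entities(lines)
print(counts["PERS"], counts["NUM"])  # -> 2 1
print(sum(counts.values()))           # total number of annotations -> 3
```

In the real scripts the lines would come from `open(...)` on a file in DATA/GOLD or DATA/AUTOM; the last line corresponds to the "sum of annotations / NE" row of the tables above.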
* OR with a Prodigy 'recipe'
It is possible to use (and adapt) a Prodigy 'recipe' (SCRIPT/RECIPE folder).
https://github.com/explosion/prodigy-recipes
Adapt the annotation recipe so that it displays the number of annotated entities at the end of the task.
> prodigy ner.manualCt ptm_Alena_test blank:fr ../DATA/GOLD/annot_Alena_14etiquettes.jsonl --label NUM,NUM_GLOB,PART,PERS,PRENOM,ORG,STATUT,VILLE,VOIE,RUE,NUM_PERS,GER,LOC,SPATIAL,JOKER -F RECIPE/recipeNerManualCt.py
> ✨ Starting the web server at http://localhost:8080 ...
Open the app in your browser and start annotating!
|Annotated|Number|
|---|---|
|NUM|3084|
|PERS|2327|
|STATUT|510|
|GER|191|
|VILLE|2423|
|VOIE|2254|
|RUE|2273|
|NUM_PERS|2247|
|LOC|313|
|SPATIAL|623|
|PART|32|
|ORG|195|
|NUM_GLOB|32|
|PRENOM|29|
|sum of annotations / NE|16533|
### APPLYING THE MODEL to the whole corpus
[Method #1 = spaCy via Prodigy]
Here we use Prodigy. See further below for the script-only method, without Prodigy.
> prodigy print-stream output_model_25avril/model-best/ ed1898_xmlPage.jsonl > ptm_all_14_viaProdigy.txt
!!! this output is meant for highlighted visualization in the shell, so the resulting format is not jsonl.
We need another approach...
### Merging entity labels and creating a new model
NUM & NUM_GLOB -> NUM
PERS & PRENOM -> PERS
- edit the text (in the jsonl file)
or use the script: modifEtiquettes.py
- import the jsonl into the Prodigy database
> prodigy db-in ptm_Alena_merge12 annot_Alena_12etiquettes.jsonl
- create a model with 12 labels:
> prodigy train --ner ptm_Alena_merge12 -LS output_model_12labels
What are the differences? Is it better?
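A minimal sketch of the label merge step (what modifEtiquettes.py is meant to do; the mapping is the one stated above, the helper name and sample record are ours):

```python
import json

# NUM_GLOB is folded into NUM, PRENOM into PERS
MERGE = {"NUM_GLOB": "NUM", "PRENOM": "PERS"}

def merge_labels(jsonl_lines, mapping=MERGE):
    """Rewrite span labels in Prodigy-style jsonl records."""
    out = []
    for line in jsonl_lines:
        record = json.loads(line)
        for span in record.get("spans", []):
            span["label"] = mapping.get(span["label"], span["label"])
        out.append(json.dumps(record, ensure_ascii=False))
    return out

line = ('{"text": "Jean Dupont", "spans":'
        ' [{"start": 0, "end": 4, "label": "PRENOM"}, {"start": 5, "end": 11, "label": "PERS"}]}')
merged = merge_labels([line])[0]
print(merged)  # both spans are now labelled PERS
```

The merged file can then be re-imported with `prodigy db-in` as shown above.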
> ℹ Using CPU
========================= Generating Prodigy config =========================
ℹ Auto-generating config with spaCy
✔ Generated training config
=========================== Initializing pipeline ===========================
[2022-04-25 17:21:56,454] [INFO] Set up nlp object from config
Components: ner
Merging training and evaluation data for 1 components
[ner] Training: 392 | Evaluation: 97 (20% split)
Training: 195 | Evaluation: 46
Labels: ner (12)
[2022-04-25 17:21:56,762] [INFO] Pipeline: ['tok2vec', 'ner']
[2022-04-25 17:21:56,764] [INFO] Created vocabulary
[2022-04-25 17:21:56,765] [INFO] Finished initializing nlp object
[2022-04-25 17:21:58,136] [INFO] Initialized pipeline components: ['tok2vec', 'ner']
✔ Initialized pipeline
============================= Training pipeline =============================
Components: ner
Merging training and evaluation data for 1 components
[ner] Training: 392 | Evaluation: 97 (20% split)
Training: 195 | Evaluation: 46
Labels: ner (12)
ℹ Pipeline: ['tok2vec', 'ner']
ℹ Initial learn rate: 0.001
|E|#|LOSS TOK2VEC|LOSS NER|ENTS_F|ENTS_P|ENTS_R|SCORE|
|---|---|---|---|---|---|---|---|
|0|0|0.00|132.84|0.23|5.63|0.12|0.00|
|1|200|1017.97|7317.63|96.96|96.78|97.13|0.97|
|2|400|242.68|967.92|97.87|97.87|97.87|0.98|
|3|600|457.93|742.35|98.00|97.93|98.08|0.98|
|5|800|263.79|358.70|98.12|98.05|98.19|0.98|
|6|1000|287.13|369.25|97.90|97.78|98.02|0.98|
|8|1200|344.90|293.54|97.61|97.83|97.39|0.98|
|9|1400|259.34|217.35|97.87|97.76|97.99|0.98|
|11|1600|2157.85|319.89|98.23|98.11|98.34|0.98|
|13|1800|401.33|235.27|98.23|98.14|98.31|0.98|
|15|2000|397.44|190.77|98.14|97.99|98.28|0.98|
|18|2200|398.55|178.28|98.27|98.17|98.37|0.98|
|20|2400|559.63|207.65|98.31|98.26|98.37|0.98|
|24|2600|864.15|284.82|98.14|97.96|98.31|0.98|
|28|2800|698.74|231.12|98.30|98.20|98.40|0.98|
|33|3000|848.87|267.26|98.31|98.20|98.43|0.98|
|38|3200|730.63|225.93|98.56|98.52|98.61|0.99|
|44|3400|401.17|146.44|98.05|97.93|98.16|0.98|
|49|3600|254.33|91.69|98.48|98.40|98.55|0.98|
|55|3800|396.94|110.42|98.33|98.20|98.46|0.98|
|60|4000|471.04|105.48|98.33|98.23|98.43|0.98|
|65|4200|347.21|77.24|98.39|98.32|98.46|0.98|
|71|4400|240.15|62.20|98.36|98.29|98.43|0.98|
|76|4600|740.12|120.00|98.12|98.05|98.19|0.98|
|82|4800|266.06|69.45|98.00|97.87|98.13|0.98|
> ✔ Saved pipeline to output directory
output_model_12labels/model-last
=============================== NER (per type) ===============================
||P|R|F|
|---|---|---|---|
|NUM|99.69|99.69|99.69|
|SPATIAL|98.58|88.54|93.29|
|PERS|95.56|98.05|96.79|
|VILLE|99.39|98.79|99.09|
|VOIE|99.78|100.00|99.89|
|RUE|98.92|99.35|99.13|
|NUM_PERS|99.12|99.56|99.34|
|STATUT|98.18|98.18|98.18|
|ORG|76.74|89.19|82.50|
|GER|100.00|100.00|100.00|
|LOC|100.00|98.33|99.16|
|PART|100.00|100.00|100.00|
- create a model with 16 labels: (20 May 2022)
After a first annotation test on the whole corpus, we noticed that leaving titles and page numbers unannotated was a problem for the model:
it annotated pages and titles that should not be annotated.
1/ We added the PAGE and TITRE labels to our gold file:
python jsonl_ajoutPAGEetTITRE.py
This script takes Alena's annotated file (jsonl format) as input and tags the unannotated entities with PAGE (if the text is a number) or TITRE (the other unannotated entities).
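A sketch of the rule just described (the actual jsonl_ajoutPAGEetTITRE.py may differ; here a record with no spans gets one span over its whole text, PAGE if the text is numeric, TITRE otherwise):

```python
import json

def add_page_and_titre(jsonl_lines):
    """Label previously unannotated records as PAGE or TITRE."""
    out = []
    for line in jsonl_lines:
        record = json.loads(line)
        if not record.get("spans"):
            label = "PAGE" if record["text"].strip().isdigit() else "TITRE"
            record["spans"] = [
                {"start": 0, "end": len(record["text"]), "label": label}
            ]
        out.append(json.dumps(record, ensure_ascii=False))
    return out

lines = ['{"text": "1167", "spans": []}', '{"text": "ARRONDISSEMENTS", "spans": []}']
results = [json.loads(line) for line in add_page_and_titre(lines)]
print(results[0]["spans"][0]["label"], results[1]["spans"][0]["label"])  # PAGE TITRE
```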
2/ the file produced by the previous script is imported into Prodigy:
prodigy db-in ptm_Alena_test_PageTitre ptm_Alena_16etiquettes.jsonl
We check that the annotation was imported correctly:
prodigy print-dataset ptm_Alena_test_PageTitre
3/ we create a 16-label model:
prodigy train --ner ptm_Alena_test_PageTitre -LS MODEL/output_model_16labels
ℹ Using CPU
========================= Generating Prodigy config =========================
ℹ Auto-generating config with spaCy
✔ Generated training config
=========================== Initializing pipeline ===========================
[2022-05-20 11:19:34,145] [INFO] Set up nlp object from config
Components: ner
Merging training and evaluation data for 1 components
- [ner] Training: 392 | Evaluation: 97 (20% split)
Training: 392 | Evaluation: 97
Labels: ner (16)
[2022-05-20 11:19:34,452] [INFO] Pipeline: ['tok2vec', 'ner']
[2022-05-20 11:19:34,454] [INFO] Created vocabulary
[2022-05-20 11:19:34,455] [INFO] Finished initializing nlp object
[2022-05-20 11:19:35,792] [INFO] Initialized pipeline components: ['tok2vec', 'ner']
✔ Initialized pipeline
============================= Training pipeline =============================
Components: ner
Merging training and evaluation data for 1 components
- [ner] Training: 392 | Evaluation: 97 (20% split)
Training: 392 | Evaluation: 97
Labels: ner (16)
ℹ Pipeline: ['tok2vec', 'ner']
ℹ Initial learn rate: 0.001
|E|#|LOSS TOK2VEC|LOSS NER|ENTS_F|ENTS_P|ENTS_R|SCORE|
|---|---|---|---|---|---|---|---|
|0|0|0.00|107.97|8.22|5.87|13.74|0.08|
|1|200|3944.99|9889.63|96.63|96.65|96.62|0.97|
|2|400|326.53|1033.33|97.03|97.44|96.62|0.97|
|3|600|315.15|693.91|97.73|98.01|97.46|0.98|
|5|800|313.70|505.62|97.95|97.82|98.08|0.98|
|6|1000|408.12|462.68|98.18|97.97|98.40|0.98|
|7|1200|339.18|341.92|98.18|98.00|98.37|0.98|
|9|1400|326.05|312.10|98.14|97.91|98.37|0.98|
|11|1600|4742.89|463.88|98.12|97.97|98.28|0.98|
|13|1800|398.04|272.66|98.02|97.99|98.05|0.98|
|15|2000|502.20|278.64|97.99|97.88|98.10|0.98|
|17|2200|646.67|256.04|98.27|98.17|98.37|0.98|
|20|2400|655.52|265.97|98.32|98.22|98.43|0.98|
|23|2600|603.93|255.26|98.41|98.28|98.54|0.98|
|28|2800|554.42|246.28|98.37|98.23|98.51|0.98|
|33|3000|542.80|198.15|97.76|97.64|97.87|0.98|
|38|3200|529.57|205.11|98.24|98.08|98.40|0.98|
|43|3400|476.44|168.25|98.16|98.08|98.25|0.98|
|49|3600|466.52|153.77|98.24|98.19|98.28|0.98|
|54|3800|984.93|195.25|97.89|97.73|98.05|0.98|
|59|4000|763.95|172.38|98.18|98.05|98.31|0.98|
|64|4200|630.95|122.35|98.14|98.02|98.25|0.98|
✔ Saved pipeline to output directory
MODEL/output_model_16labels/model-last
=============================== NER (per type) ===============================
RESULTS, 16 LABELS
||P|R|F|
|---|---|---|---|
|NUM|99.38|99.53|99.45|
|SPATIAL|98.59|89.17|93.65|
|PERS|95.94|98.46|97.19|
|VILLE|99.19|99.19|99.19|
|VOIE|99.78|100.00|99.89|
|RUE|98.49|99.13|98.81|
|NUM_PERS|99.34|99.56|99.45|
|STATUT|98.18|98.18|98.18|
|ORG|75.00|89.19|81.48|
|GER|100.00|100.00|100.00|
|LOC|96.67|96.67|96.67|
|PRENOM|33.33|40.00|36.36|
|NUM_GLOB|100.00|33.33|50.00|
|TITRE|100.00|100.00|100.00|
|PART|100.00|100.00|100.00|
|PAGE|100.00|100.00|100.00|
### Applying the model to the whole corpus
[Method #2 = spaCy]
Here we use spaCy directly in a script, without Prodigy.
> python jsonlToModel.py
... as output we get a jsonl file (DATA/AUTOM folder): ptm_all_14.jsonl
-> and now: ptm_all_16.jsonl
!!!! problem handling double quotes (to be fixed in the script)
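The double-quote problem typically comes from building jsonl lines by string formatting; serializing each record with `json.dumps` escapes embedded quotes correctly. A minimal illustration (unrelated to the actual script's internals; the sample record is invented):

```python
import json

record = {"text": 'Hôtel "Le Progrès", 4 rue Neuve', "spans": []}

# Fragile: embedded double quotes produce an unparseable jsonl line
broken = '{"text": "%s", "spans": []}' % record["text"]

# Safe: json.dumps escapes the quotes (ensure_ascii=False keeps accented characters readable)
line = json.dumps(record, ensure_ascii=False)

print(json.loads(line)["text"] == record["text"])  # True
# json.loads(broken) would raise json.JSONDecodeError
```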
### Counting the number of entities
> python jsonl_CptEN.py
|Annotated|Number|
|---|---|
|PERS|64629|
|VILLE|64647|
|VOIE|60152|
|SPATIAL|13871|
|NUM|82174|
|RUE|61583|
|NUM_GLOB|2119|
|LOC|8773|
|NUM_PERS|62362|
|ORG|5839|
|STATUT|16689|
|GER|5473|
|PRENOM|1356|
|PART|785|
|sum of annotations / NE|450452|
## Analysis and correction of the automatically annotated data
What do we do once the data has been annotated automatically?
### Typing the annotation errors
Find an automatic treatment -> a script
We take 2 files:
- the manually annotated file
- the same part of the directories, annotated automatically
What can we observe?
We use the nervaluate module (https://pypi.org/project/nervaluate/) to type the errors.
Here is a first overall result, for the model that uses all 14 categories.
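For intuition, the categories reported below boil down to comparing (start, end, label) triples between gold and system output; here is our own stripped-down sketch of the 'strict' scheme only (not the nervaluate implementation, and with invented spans):

```python
def strict_scores(gold, pred):
    """Strict scheme: a prediction counts only if (start, end, label) all match."""
    gold_set, pred_set = set(gold), set(pred)
    correct = len(gold_set & pred_set)
    precision = correct / len(pred_set) if pred_set else 0.0
    recall = correct / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [(0, 6, "PERS"), (8, 10, "NUM"), (11, 25, "RUE")]
# second span has the right boundaries but the wrong label; last span is spurious
pred = [(0, 6, "PERS"), (8, 10, "NUM_GLOB"), (11, 25, "RUE"), (30, 34, "NUM")]
p, r, f = strict_scores(gold, pred)
print(round(p, 2), round(r, 2))  # -> 0.5 0.67
```

The 'exact', 'partial', and 'type' schemes relax this comparison (ignoring the label, allowing boundary overlap, etc.), which is why their counts differ in the output below.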
### 14-CATEGORY MODEL
```
OVERALL RESULTS
Correct (COR): both are the same
Incorrect (INC): the output of a system and the golden annotation don't match
Partial (PAR): system and the golden annotation are somewhat similar but not the same
Missing (MIS): a golden annotation is not captured by a system
Spurious (SPU): system produces a response which doesn't exist in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 16473, 'incorrect': 53, 'partial': 0, 'missed': 7, 'spurious': 490, 'possible': 16533, 'actual': 17016, 'precision': 0.9680888575458392, 'recall': 0.9963708945744874, 'f1': 0.982026289904319}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 16506, 'incorrect': 20, 'partial': 0, 'missed': 7, 'spurious': 490, 'possible': 16533, 'actual': 17016, 'precision': 0.9700282087447109, 'recall': 0.9983669025585193, 'f1': 0.9839935616560851}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 16506, 'incorrect': 0, 'partial': 20, 'missed': 7, 'spurious': 490, 'possible': 16533, 'actual': 17016, 'precision': 0.9706158909261872, 'recall': 0.9989717534627715, 'f1': 0.9845897046111658}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 16490, 'incorrect': 36, 'partial': 0, 'missed': 7, 'spurious': 490, 'possible': 16533, 'actual': 17016, 'precision': 0.9690879172543488, 'recall': 0.997399141111716, 'f1': 0.9830397329279561}
```
The same output can be obtained per category.
```
RESULTS PER TAG: PERS
Strict exact boundary surface string match and entity type
{'correct': 2317, 'incorrect': 10, 'partial': 0, 'missed': 0, 'spurious': 141, 'possible': 2327, 'actual': 2468, 'precision': 0.9388168557536467, 'recall': 0.9957026214009455, 'f1': 0.9664233576642336}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2322, 'incorrect': 5, 'partial': 0, 'missed': 0, 'spurious': 141, 'possible': 2327, 'actual': 2468, 'precision': 0.9408427876823339, 'recall': 0.9978513107004727, 'f1': 0.9685088633993744}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2322, 'incorrect': 0, 'partial': 5, 'missed': 0, 'spurious': 141, 'possible': 2327, 'actual': 2468, 'precision': 0.9418557536466775, 'recall': 0.9989256553502364, 'f1': 0.9695516162669447}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2320, 'incorrect': 7, 'partial': 0, 'missed': 0, 'spurious': 141, 'possible': 2327, 'actual': 2468, 'precision': 0.940032414910859, 'recall': 0.9969918349806618, 'f1': 0.967674661105318}
RESULTS PER TAG: VILLE
Strict exact boundary surface string match and entity type
{'correct': 2416, 'incorrect': 5, 'partial': 0, 'missed': 2, 'spurious': 6, 'possible': 2423, 'actual': 2427, 'precision': 0.9954676555418212, 'recall': 0.9971110193974412, 'f1': 0.9962886597938144}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2418, 'incorrect': 3, 'partial': 0, 'missed': 2, 'spurious': 6, 'possible': 2423, 'actual': 2427, 'precision': 0.9962917181705809, 'recall': 0.9979364424267437, 'f1': 0.9971134020618557}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2418, 'incorrect': 0, 'partial': 3, 'missed': 2, 'spurious': 6, 'possible': 2423, 'actual': 2427, 'precision': 0.9969097651421508, 'recall': 0.9985555096987206, 'f1': 0.9977319587628867}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2419, 'incorrect': 2, 'partial': 0, 'missed': 2, 'spurious': 6, 'possible': 2423, 'actual': 2427, 'precision': 0.9967037494849609, 'recall': 0.998349153941395, 'f1': 0.9975257731958764}
RESULTS PER TAG: VOIE
Strict exact boundary surface string match and entity type
{'correct': 2250, 'incorrect': 3, 'partial': 0, 'missed': 1, 'spurious': 2, 'possible': 2254, 'actual': 2255, 'precision': 0.9977827050997783, 'recall': 0.9982253771073647, 'f1': 0.998003992015968}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2250, 'incorrect': 3, 'partial': 0, 'missed': 1, 'spurious': 2, 'possible': 2254, 'actual': 2255, 'precision': 0.9977827050997783, 'recall': 0.9982253771073647, 'f1': 0.998003992015968}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2250, 'incorrect': 0, 'partial': 3, 'missed': 1, 'spurious': 2, 'possible': 2254, 'actual': 2255, 'precision': 0.9984478935698448, 'recall': 0.998890860692103, 'f1': 0.9986693280106455}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2253, 'incorrect': 0, 'partial': 0, 'missed': 1, 'spurious': 2, 'possible': 2254, 'actual': 2255, 'precision': 0.9991130820399113, 'recall': 0.9995563442768411, 'f1': 0.9993346640053226}
RESULTS PER TAG: RUE
Strict exact boundary surface string match and entity type
{'correct': 2270, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 52, 'possible': 2273, 'actual': 2325, 'precision': 0.9763440860215054, 'recall': 0.9986801583809943, 'f1': 0.9873858199217052}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2270, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 52, 'possible': 2273, 'actual': 2325, 'precision': 0.9763440860215054, 'recall': 0.9986801583809943, 'f1': 0.9873858199217052}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2270, 'incorrect': 0, 'partial': 3, 'missed': 0, 'spurious': 52, 'possible': 2273, 'actual': 2325, 'precision': 0.9769892473118279, 'recall': 0.9993400791904972, 'f1': 0.9880382775119617}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2273, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 52, 'possible': 2273, 'actual': 2325, 'precision': 0.9776344086021506, 'recall': 1.0, 'f1': 0.9886907351022184}
RESULTS PER TAG: NUM_PERS
Strict exact boundary surface string match and entity type
{'correct': 2246, 'incorrect': 0, 'partial': 0, 'missed': 1, 'spurious': 100, 'possible': 2247, 'actual': 2346, 'precision': 0.9573742540494459, 'recall': 0.9995549621717846, 'f1': 0.9780100152405835}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2246, 'incorrect': 0, 'partial': 0, 'missed': 1, 'spurious': 100, 'possible': 2247, 'actual': 2346, 'precision': 0.9573742540494459, 'recall': 0.9995549621717846, 'f1': 0.9780100152405835}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2246, 'incorrect': 0, 'partial': 0, 'missed': 1, 'spurious': 100, 'possible': 2247, 'actual': 2346, 'precision': 0.9573742540494459, 'recall': 0.9995549621717846, 'f1': 0.9780100152405835}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2246, 'incorrect': 0, 'partial': 0, 'missed': 1, 'spurious': 100, 'possible': 2247, 'actual': 2346, 'precision': 0.9573742540494459, 'recall': 0.9995549621717846, 'f1': 0.9780100152405835}
RESULTS PER TAG: NUM
Strict exact boundary surface string match and entity type
{'correct': 3080, 'incorrect': 3, 'partial': 0, 'missed': 1, 'spurious': 18, 'possible': 3084, 'actual': 3101, 'precision': 0.9932279909706546, 'recall': 0.9987029831387808, 'f1': 0.995957962813258}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 3081, 'incorrect': 2, 'partial': 0, 'missed': 1, 'spurious': 18, 'possible': 3084, 'actual': 3101, 'precision': 0.9935504675910997, 'recall': 0.9990272373540856, 'f1': 0.9962813257881974}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 3081, 'incorrect': 0, 'partial': 2, 'missed': 1, 'spurious': 18, 'possible': 3084, 'actual': 3101, 'precision': 0.9938729442115447, 'recall': 0.9993514915693904, 'f1': 0.9966046887631367}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 3082, 'incorrect': 1, 'partial': 0, 'missed': 1, 'spurious': 18, 'possible': 3084, 'actual': 3101, 'precision': 0.9938729442115447, 'recall': 0.9993514915693904, 'f1': 0.9966046887631367}
RESULTS PER TAG: NUM_GLOB
Strict exact boundary surface string match and entity type
{'correct': 30, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 72, 'possible': 32, 'actual': 104, 'precision': 0.28846153846153844, 'recall': 0.9375, 'f1': 0.44117647058823534}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 72, 'possible': 32, 'actual': 104, 'precision': 0.3076923076923077, 'recall': 1.0, 'f1': 0.47058823529411764}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 72, 'possible': 32, 'actual': 104, 'precision': 0.3076923076923077, 'recall': 1.0, 'f1': 0.47058823529411764}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 30, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 72, 'possible': 32, 'actual': 104, 'precision': 0.28846153846153844, 'recall': 0.9375, 'f1': 0.44117647058823534}
RESULTS PER TAG: PART
Strict exact boundary surface string match and entity type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
RESULTS PER TAG: PRENOM
Strict exact boundary surface string match and entity type
{'correct': 28, 'incorrect': 1, 'partial': 0, 'missed': 0, 'spurious': 21, 'possible': 29, 'actual': 50, 'precision': 0.56, 'recall': 0.9655172413793104, 'f1': 0.7088607594936709}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 29, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 21, 'possible': 29, 'actual': 50, 'precision': 0.58, 'recall': 1.0, 'f1': 0.7341772151898733}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 29, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 21, 'possible': 29, 'actual': 50, 'precision': 0.58, 'recall': 1.0, 'f1': 0.7341772151898733}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 28, 'incorrect': 1, 'partial': 0, 'missed': 0, 'spurious': 21, 'possible': 29, 'actual': 50, 'precision': 0.56, 'recall': 0.9655172413793104, 'f1': 0.7088607594936709}
RESULTS PER TAG: ORG
Strict exact boundary surface string match and entity type
{'correct': 189, 'incorrect': 6, 'partial': 0, 'missed': 0, 'spurious': 3, 'possible': 195, 'actual': 198, 'precision': 0.9545454545454546, 'recall': 0.9692307692307692, 'f1': 0.9618320610687022}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 194, 'incorrect': 1, 'partial': 0, 'missed': 0, 'spurious': 3, 'possible': 195, 'actual': 198, 'precision': 0.9797979797979798, 'recall': 0.9948717948717949, 'f1': 0.9872773536895675}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 194, 'incorrect': 0, 'partial': 1, 'missed': 0, 'spurious': 3, 'possible': 195, 'actual': 198, 'precision': 0.9823232323232324, 'recall': 0.9974358974358974, 'f1': 0.9898218829516541}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 190, 'incorrect': 5, 'partial': 0, 'missed': 0, 'spurious': 3, 'possible': 195, 'actual': 198, 'precision': 0.9595959595959596, 'recall': 0.9743589743589743, 'f1': 0.9669211195928753}
RESULTS PER TAG: STATUT
Strict exact boundary surface string match and entity type
{'correct': 507, 'incorrect': 1, 'partial': 0, 'missed': 2, 'spurious': 51, 'possible': 510, 'actual': 559, 'precision': 0.9069767441860465, 'recall': 0.9941176470588236, 'f1': 0.9485500467726847}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 508, 'incorrect': 0, 'partial': 0, 'missed': 2, 'spurious': 51, 'possible': 510, 'actual': 559, 'precision': 0.9087656529516994, 'recall': 0.996078431372549, 'f1': 0.950420954162769}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 508, 'incorrect': 0, 'partial': 0, 'missed': 2, 'spurious': 51, 'possible': 510, 'actual': 559, 'precision': 0.9087656529516994, 'recall': 0.996078431372549, 'f1': 0.950420954162769}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 507, 'incorrect': 1, 'partial': 0, 'missed': 2, 'spurious': 51, 'possible': 510, 'actual': 559, 'precision': 0.9069767441860465, 'recall': 0.9941176470588236, 'f1': 0.9485500467726847}
RESULTS PER TAG: GER
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
RESULTS PER TAG: LOC
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 311, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 24, 'possible': 313, 'actual': 337, 'precision': 0.9228486646884273, 'recall': 0.9936102236421726, 'f1': 0.956923076923077}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 313, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 24, 'possible': 313, 'actual': 337, 'precision': 0.9287833827893175, 'recall': 1.0, 'f1': 0.963076923076923}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 313, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 24, 'possible': 313, 'actual': 337, 'precision': 0.9287833827893175, 'recall': 1.0, 'f1': 0.963076923076923}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 311, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 24, 'possible': 313, 'actual': 337, 'precision': 0.9228486646884273, 'recall': 0.9936102236421726, 'f1': 0.956923076923077}
RESULTS PER TAG: SPATIAL
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 606, 'incorrect': 17, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9727126805778491, 'recall': 0.9727126805778491, 'f1': 0.9727126805778491}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 620, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9951845906902087, 'recall': 0.9951845906902087, 'f1': 0.9951845906902087}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 620, 'incorrect': 0, 'partial': 3, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9975922953451043, 'recall': 0.9975922953451043, 'f1': 0.9975922953451043}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 608, 'incorrect': 15, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9759229534510433, 'recall': 0.9759229534510433, 'f1': 0.9759229534510433}
RESULTS PER TAG: JOKER
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
```
Avenues to explore:
* Unannotated material causes problems: page numbers, street headings
* Judging by the error types found, SPATIAL seems to cause most of the trouble; otherwise the low error count on the other categories is rather surprising.
* NUM_GLOB may be unnecessary for the model and only make its task harder (handle it in post-processing? train a model on more pages containing braces? etc.)
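If NUM_GLOB were folded into NUM as a post-processing step (essentially what modifEtiquettes.py does for the 12-label model, which also merges PRENOM into PERS), the relabelling is only a few lines. A sketch, assuming Prodigy-style jsonl records with a "spans" list:

```python
import json

# Sketch of a label merge as post-processing (in the spirit of
# modifEtiquettes.py): NUM_GLOB -> NUM and PRENOM -> PERS.
# Assumes Prodigy-style records with a "spans" list of labelled spans.
MERGE = {"NUM_GLOB": "NUM", "PRENOM": "PERS"}

def merge_labels(line):
    doc = json.loads(line)
    for span in doc.get("spans", []):
        span["label"] = MERGE.get(span["label"], span["label"])
    return json.dumps(doc, ensure_ascii=False)

print(merge_labels('{"text": "12-18", "spans": [{"start": 0, "end": 5, "label": "NUM_GLOB"}]}'))
```

This keeps the annotation files unchanged and lets the merge be undone if the typed errors suggest keeping the distinction.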
## Correcting the annotation errors
Manual proofreading, and manual typing of the errors
Which tools? Which methods?
### In Prodigy (method not retained)
!!! Does not work unless the data is in the "already annotated" format of ptm_all_14.jsonl,
so the following keys were added to each jsonl record:
"_view_id":"ner_manual","answer":"accept","_timestamp":1649840818
> prodigy db-in ptm_1898_all ../DATA/AUTOM/ptm_all_14.jsonl
> prodigy review ptm_1898_review ptm_1898_all --label NUM,NUM_GLOB,PART,PERS,PRENOM,ORG,STATUT,VILLE,VOIE,RUE,NUM_PERS,GER,LOC,SPATIAL,JOKER
!!! Problem, and TODO: add the tokens
[... etc]
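Patching in the missing keys can be scripted in a few lines. A sketch (the field values mimic a manual ner_manual session; the timestamp is arbitrary):

```python
import json
import time

# Sketch: add the keys that `prodigy db-in` expects to each record of an
# automatically annotated jsonl file. setdefault leaves already-present
# values untouched.
def add_prodigy_fields(lines):
    out = []
    for line in lines:
        doc = json.loads(line)
        doc.setdefault("_view_id", "ner_manual")
        doc.setdefault("answer", "accept")
        doc.setdefault("_timestamp", int(time.time()))
        out.append(json.dumps(doc, ensure_ascii=False))
    return out

patched = add_prodigy_fields(['{"text": "Paris", "spans": []}'])
print(patched[0])
```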
### In TagTog
= The method chosen
Script to convert from jsonl to a compatible format
Options discussed:
* Proofread and correct the manually annotated data, plus a few pages before and after.
* Annotate the titles and the page numbers??? YES
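A possible conversion sketch (assumptions: the TagTog side imports plain-text files, one per page; the meta.page field and the file naming below are hypothetical, and existing span annotations are not carried over by a plain-text export):

```python
import json

# Hypothetical sketch: extract one plain-text file per jsonl record for
# upload. Assumes each record has a "text" field and an optional page
# number under "meta".
def jsonl_to_txt(jsonl_lines):
    """Yield (filename, text) pairs, one per record."""
    for i, line in enumerate(jsonl_lines):
        doc = json.loads(line)
        page = doc.get("meta", {}).get("page", i)
        yield f"page_{page}.txt", doc["text"]

pairs = list(jsonl_to_txt(['{"text": "Paris, rue X", "meta": {"page": 1167}}']))
print(pairs)
```

This only covers the text side; re-importing the automatic annotations into TagTog would need a separate step.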
### 16-CATEGORY MODEL
#### Error analysis
```
OVERALL RESULTS
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 16714, 'incorrect': 62, 'partial': 0, 'missed': 5, 'spurious': 14, 'possible': 16781, 'actual': 16790, 'precision': 0.995473496128648, 'recall': 0.996007389309338, 'f1': 0.9957403711536743}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 16754, 'incorrect': 22, 'partial': 0, 'missed': 5, 'spurious': 14, 'possible': 16781, 'actual': 16790, 'precision': 0.9978558665872543, 'recall': 0.9983910374828675, 'f1': 0.9981233802984719}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 16754, 'incorrect': 0, 'partial': 22, 'missed': 5, 'spurious': 14, 'possible': 16781, 'actual': 16790, 'precision': 0.9985110184633711, 'recall': 0.9990465407305882, 'f1': 0.9987787078132913}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 16732, 'incorrect': 44, 'partial': 0, 'missed': 5, 'spurious': 14, 'possible': 16781, 'actual': 16790, 'precision': 0.9965455628350208, 'recall': 0.9970800309874263, 'f1': 0.9968127252688334}
```
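These overall numbers can be sanity-checked from the raw counts: the output matches the SemEval-2013 evaluation scheme (implemented, e.g., by the nervaluate library), where possible = COR+INC+PAR+MIS, actual = COR+INC+PAR+SPU, and a partial match counts for half. A minimal check:

```python
# Recompute precision/recall/F1 from the raw counts printed above,
# following the SemEval-2013 scheme (partial matches count for half).
def metrics(correct, incorrect, partial, missed, spurious):
    possible = correct + incorrect + partial + missed    # gold entities
    actual = correct + incorrect + partial + spurious    # system entities
    precision = (correct + 0.5 * partial) / actual if actual else 0
    recall = (correct + 0.5 * partial) / possible if possible else 0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0)
    return precision, recall, f1

# The "Partial" line of the overall results above:
p, r, f = metrics(correct=16754, incorrect=0, partial=22, missed=5, spurious=14)
print(p, r, f)
```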
Per category:
```
RESULTS PER TAG: PERS
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 2320, 'incorrect': 7, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 2327, 'actual': 2327, 'precision': 0.9969918349806618, 'recall': 0.9969918349806618, 'f1': 0.9969918349806618}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2324, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 2327, 'actual': 2327, 'precision': 0.9987107864202837, 'recall': 0.9987107864202837, 'f1': 0.9987107864202837}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2324, 'incorrect': 0, 'partial': 3, 'missed': 0, 'spurious': 0, 'possible': 2327, 'actual': 2327, 'precision': 0.9993553932101418, 'recall': 0.9993553932101418, 'f1': 0.9993553932101418}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2321, 'incorrect': 6, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 2327, 'actual': 2327, 'precision': 0.9974215728405672, 'recall': 0.9974215728405672, 'f1': 0.9974215728405672}
RESULTS PER TAG: VILLE
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 2419, 'incorrect': 3, 'partial': 0, 'missed': 1, 'spurious': 3, 'possible': 2423, 'actual': 2425, 'precision': 0.9975257731958763, 'recall': 0.998349153941395, 'f1': 0.997937293729373}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2421, 'incorrect': 1, 'partial': 0, 'missed': 1, 'spurious': 3, 'possible': 2423, 'actual': 2425, 'precision': 0.9983505154639175, 'recall': 0.9991745769706974, 'f1': 0.9987623762376238}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2421, 'incorrect': 0, 'partial': 1, 'missed': 1, 'spurious': 3, 'possible': 2423, 'actual': 2425, 'precision': 0.9985567010309279, 'recall': 0.9993809327280231, 'f1': 0.9989686468646864}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2420, 'incorrect': 2, 'partial': 0, 'missed': 1, 'spurious': 3, 'possible': 2423, 'actual': 2425, 'precision': 0.9979381443298969, 'recall': 0.9987618654560462, 'f1': 0.9983498349834984}
RESULTS PER TAG: VOIE
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 2248, 'incorrect': 4, 'partial': 0, 'missed': 2, 'spurious': 2, 'possible': 2254, 'actual': 2254, 'precision': 0.997338065661047, 'recall': 0.997338065661047, 'f1': 0.997338065661047}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2248, 'incorrect': 4, 'partial': 0, 'missed': 2, 'spurious': 2, 'possible': 2254, 'actual': 2254, 'precision': 0.997338065661047, 'recall': 0.997338065661047, 'f1': 0.997338065661047}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2248, 'incorrect': 0, 'partial': 4, 'missed': 2, 'spurious': 2, 'possible': 2254, 'actual': 2254, 'precision': 0.9982253771073647, 'recall': 0.9982253771073647, 'f1': 0.9982253771073647}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2252, 'incorrect': 0, 'partial': 0, 'missed': 2, 'spurious': 2, 'possible': 2254, 'actual': 2254, 'precision': 0.9991126885536823, 'recall': 0.9991126885536823, 'f1': 0.9991126885536823}
RESULTS PER TAG: RUE
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 2268, 'incorrect': 5, 'partial': 0, 'missed': 0, 'spurious': 3, 'possible': 2273, 'actual': 2276, 'precision': 0.9964850615114236, 'recall': 0.9978002639683238, 'f1': 0.9971422290613322}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2269, 'incorrect': 4, 'partial': 0, 'missed': 0, 'spurious': 3, 'possible': 2273, 'actual': 2276, 'precision': 0.9969244288224957, 'recall': 0.998240211174659, 'f1': 0.9975818861288194}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2269, 'incorrect': 0, 'partial': 4, 'missed': 0, 'spurious': 3, 'possible': 2273, 'actual': 2276, 'precision': 0.9978031634446397, 'recall': 0.9991201055873296, 'f1': 0.9984612002637941}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2272, 'incorrect': 1, 'partial': 0, 'missed': 0, 'spurious': 3, 'possible': 2273, 'actual': 2276, 'precision': 0.9982425307557118, 'recall': 0.9995600527936648, 'f1': 0.9989008573312816}
RESULTS PER TAG: NUM_PERS
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 2244, 'incorrect': 2, 'partial': 0, 'missed': 1, 'spurious': 4, 'possible': 2247, 'actual': 2250, 'precision': 0.9973333333333333, 'recall': 0.9986648865153538, 'f1': 0.9979986657771848}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 2245, 'incorrect': 1, 'partial': 0, 'missed': 1, 'spurious': 4, 'possible': 2247, 'actual': 2250, 'precision': 0.9977777777777778, 'recall': 0.9991099243435692, 'f1': 0.9984434067155882}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 2245, 'incorrect': 0, 'partial': 1, 'missed': 1, 'spurious': 4, 'possible': 2247, 'actual': 2250, 'precision': 0.998, 'recall': 0.9993324432576769, 'f1': 0.9986657771847899}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 2244, 'incorrect': 2, 'partial': 0, 'missed': 1, 'spurious': 4, 'possible': 2247, 'actual': 2250, 'precision': 0.9973333333333333, 'recall': 0.9986648865153538, 'f1': 0.9979986657771848}
RESULTS PER TAG: NUM
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 3081, 'incorrect': 2, 'partial': 0, 'missed': 1, 'spurious': 0, 'possible': 3084, 'actual': 3083, 'precision': 0.9993512812195913, 'recall': 0.9990272373540856, 'f1': 0.9991892330144316}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 3081, 'incorrect': 2, 'partial': 0, 'missed': 1, 'spurious': 0, 'possible': 3084, 'actual': 3083, 'precision': 0.9993512812195913, 'recall': 0.9990272373540856, 'f1': 0.9991892330144316}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 3081, 'incorrect': 0, 'partial': 2, 'missed': 1, 'spurious': 0, 'possible': 3084, 'actual': 3083, 'precision': 0.9996756406097956, 'recall': 0.9993514915693904, 'f1': 0.999513539808659}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 3083, 'incorrect': 0, 'partial': 0, 'missed': 1, 'spurious': 0, 'possible': 3084, 'actual': 3083, 'precision': 1.0, 'recall': 0.9996757457846952, 'f1': 0.9998378466028863}
RESULTS PER TAG: NUM_GLOB
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 22, 'incorrect': 10, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 0.6875, 'recall': 0.6875, 'f1': 0.6875}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 22, 'incorrect': 10, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 0.6875, 'recall': 0.6875, 'f1': 0.6875}
RESULTS PER TAG: PART
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 32, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 32, 'actual': 32, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
RESULTS PER TAG: PRENOM
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 26, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 29, 'actual': 30, 'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 29, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 29, 'actual': 30, 'precision': 0.9666666666666667, 'recall': 1.0, 'f1': 0.983050847457627}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 29, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 29, 'actual': 30, 'precision': 0.9666666666666667, 'recall': 1.0, 'f1': 0.983050847457627}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 26, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 29, 'actual': 30, 'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899}
RESULTS PER TAG: ORG
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 191, 'incorrect': 4, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 195, 'actual': 195, 'precision': 0.9794871794871794, 'recall': 0.9794871794871794, 'f1': 0.9794871794871794}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 191, 'incorrect': 4, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 195, 'actual': 195, 'precision': 0.9794871794871794, 'recall': 0.9794871794871794, 'f1': 0.9794871794871794}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 191, 'incorrect': 0, 'partial': 4, 'missed': 0, 'spurious': 0, 'possible': 195, 'actual': 195, 'precision': 0.9897435897435898, 'recall': 0.9897435897435898, 'f1': 0.9897435897435898}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 195, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 195, 'actual': 195, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
RESULTS PER TAG: STATUT
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 508, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 510, 'actual': 511, 'precision': 0.9941291585127201, 'recall': 0.996078431372549, 'f1': 0.9951028403525954}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 510, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 510, 'actual': 511, 'precision': 0.9980430528375733, 'recall': 1.0, 'f1': 0.999020568070519}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 510, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 510, 'actual': 511, 'precision': 0.9980430528375733, 'recall': 1.0, 'f1': 0.999020568070519}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 508, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 1, 'possible': 510, 'actual': 511, 'precision': 0.9941291585127201, 'recall': 0.996078431372549, 'f1': 0.9951028403525954}
RESULTS PER TAG: GER
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 191, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 191, 'actual': 191, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
RESULTS PER TAG: LOC
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 310, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 313, 'actual': 313, 'precision': 0.9904153354632588, 'recall': 0.9904153354632588, 'f1': 0.9904153354632588}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 312, 'incorrect': 1, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 313, 'actual': 313, 'precision': 0.9968051118210862, 'recall': 0.9968051118210862, 'f1': 0.9968051118210862}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 312, 'incorrect': 0, 'partial': 1, 'missed': 0, 'spurious': 0, 'possible': 313, 'actual': 313, 'precision': 0.9984025559105432, 'recall': 0.9984025559105432, 'f1': 0.9984025559105432}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 310, 'incorrect': 3, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 313, 'actual': 313, 'precision': 0.9904153354632588, 'recall': 0.9904153354632588, 'f1': 0.9904153354632588}
RESULTS PER TAG: SPATIAL
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 606, 'incorrect': 17, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9727126805778491, 'recall': 0.9727126805778491, 'f1': 0.9727126805778491}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 621, 'incorrect': 2, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9967897271268058, 'recall': 0.9967897271268058, 'f1': 0.9967897271268058}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 621, 'incorrect': 0, 'partial': 2, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9983948635634029, 'recall': 0.9983948635634029, 'f1': 0.9983948635634029}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 608, 'incorrect': 15, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 623, 'actual': 623, 'precision': 0.9759229534510433, 'recall': 0.9759229534510433, 'f1': 0.9759229534510433}
RESULTS PER TAG: JOKER
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 0, 'actual': 0, 'precision': 0, 'recall': 0, 'f1': 0}
RESULTS PER TAG: PAGE
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 31, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 31, 'actual': 31, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 31, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 31, 'actual': 31, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 31, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 31, 'actual': 31, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 31, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 31, 'actual': 31, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
RESULTS PER TAG: TITRE
Correct (COR) both are the same
Incorrect (INC) the output of a system and the golden annotation don’t match
Partial (PAR) system and the golden annotation are somewhat similar but not the same
Missing (MIS) a golden annotation is not captured by a system
Spurius (SPU) system produces a response which doesn’t exit in the golden annotation
Strict exact boundary surface string match and entity type
{'correct': 217, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 217, 'actual': 217, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Exact exact boundary match over the surface string, regardless of the type
{'correct': 217, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 217, 'actual': 217, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Partial partial boundary match over the surface string, regardless of the type
{'correct': 217, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 217, 'actual': 217, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
Type some overlap between the system tagged entity and the gold annotation is required
{'correct': 217, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'possible': 217, 'actual': 217, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0}
```
The page and title problems appear to be solved.
There are still small issues with NUM_GLOB and SPATIAL, but the errors remain few (on the order of ten occurrences each).
## .... TODO ...
Miscellaneous ideas...
* standardize the input and output file names
* use the edition year rather than ptm
* add a processing flag: manual (gold) / autom
* process the files per edition and per letter?
= to make alignment easier
... to rethink / script
Letter P:
p1142-1203 (in the xml file names: 1153-1228)
> pdftk Annuaire_1898.pdf burst
Take the "burst" pages 612 to 687 (= PDF page numbers for the letter P)
> pdftk LettreP/*.pdf cat output ed1898_LettreP.pdf
Running the count script on the letter P:

|Annotated|Number|
|---|---|
|NUM_GLOB|155|
|NUM|5945|
|PERS|4763|
|VILLE|4706|
|VOIE|4355|
|RUE|4491|
|NUM_PERS|4532|
|SPATIAL|1048|
|LOC|628|
|STATUT|1157|
|GER|349|
|ORG|417|
|PART|50|
|PRENOM|90|
|Total annotations (all NE)|32686|
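The counts above come from jsonl_CptEn.py; its core logic amounts to something like the following sketch (assuming Prodigy-style records with a "spans" list):

```python
import json
from collections import Counter

# Sketch of an entity count per label over an annotated jsonl file,
# in the spirit of jsonl_CptEn.py.
def count_entities(jsonl_lines):
    counts = Counter()
    for line in jsonl_lines:
        for span in json.loads(line).get("spans", []):
            counts[span["label"]] += 1
    return counts

lines = [
    '{"text": "a", "spans": [{"start": 0, "end": 2, "label": "NUM"}, {"start": 3, "end": 8, "label": "VOIE"}]}',
    '{"text": "b", "spans": [{"start": 0, "end": 4, "label": "NUM"}]}',
]
print(count_entities(lines))  # Counter({'NUM': 2, 'VOIE': 1})
```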
* get one model for the full set of editions (4)
= this requires annotating the same portion of each of the 4 editions
This would make it possible to compare a global model (4 editions) with 4 separate models (one per edition)
* start cross-referencing the data
* Richelieu block ("carré Richelieu")
= Paris Didot-Bottin, year 1877 (for all the years, ask Eric)
https://www.fabriquenumeriquedupasse.fr/explore/dataset/paris_jobs_with_tags_richelieu_project_bottin1877/table/
* https://blog.factgrid.de/archives/2333 (Bottin, to explore)
* etc.
(add here the reference Alena mentioned ...)