# Read Paper
## [web articles](https://hackmd.io/s/SyGLBG10Q)
## weekly paper
* 10.7~10.13
  - [A Survey of Automated Text Simplification](https://core.ac.uk/download/pdf/25778973.pdf)
* 10.14~10.20
  - [Integrating Transformer and Paraphrase Rules for Sentence Simplification](https://www.aclweb.org/anthology/D18-1355.pdf)
    A text simplification model that augments the Transformer with PPDB paraphrase rules, presented at EMNLP 2018. It claimed to be the strongest by reporting only SARI, but a recent arXiv paper re-evaluates it and finds it by far the worst on human evaluation and BLEU, with even its SARI below a vanilla Transformer's. In the end, models that improve both BLEU and SARI are the right ones.
  - [Controllable Sentence Simplification: Employing Syntactic and Lexical Constraints](https://arxiv.org/abs/1910.04387)
    Many neural text simplification models have been proposed, but in the end only the reinforcement-learning approach of Zhang and Lapata (EMNLP 2017) seems to be a genuinely valuable method.
  - [Sentence Simplification with Deep Reinforcement Learning](https://www.aclweb.org/anthology/D17-1062.pdf)
---
[Learning to Simplify Sentences with Quasi-Synchronous Grammar and Integer Programming](https://www.aclweb.org/anthology/D11-1038.pdf)
SARI: [Optimizing Statistical Machine Translation for Text Simplification](https://www.cis.upenn.edu/~ccb/publications/optimizing-machine-translation-for-text-simplifciation.pdf)
[Dynamic Multi-Level Multi-Task Learning for Sentence Simplification](https://arxiv.org/pdf/1806.07304.pdf)
[Lexi: A tool for adaptive, personalized text simplification](https://www.aclweb.org/anthology/C18-1021.pdf)
## quote
PorSimples (Portuguese), Simplext (Spanish)
Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th ACL, pages 523–530, Toulouse, France.
---
* Wubben et al. (2012) propose a two-stage model: initially, a standard phrase-based machine translation (PBMT) model is trained on complex-simple sentence pairs
* The hybrid model developed in Narayan and Gardent (2014) also operates in two phases. Initially, a probabilistic model performs sentence splitting and deletion operations over discourse representation structures assigned by Boxer
* neural machine translation (Bahdanau et al., 2015; Sutskever et al., 2014).
* We do not back-propagate this error to h_t or c_t during training (Ranzato et al., 2016).
* we adopt the UNK replacement method proposed in Jean et al. (2015).
* Controllable Sentence Simplification: Employing Syntactic and Lexical Constraints
* Bingel, J.; Paetzold, G.; and Søgaard, A. 2018. Lexi: A tool for adaptive, personalized text simplification. In Proc. CICLing.
* Universal Dependencies parser: **Straka** (UDPipe)
* For both datasets we used the Transformer as implemented within OpenNMT-py (Klein et al., 2017)
### paper notes
#### A Survey of Automated Text Simplification
TS is related to many fields in NLP, such as machine translation, monolingual text-to-text generation, text summarisation, and paraphrase generation.
A. Lexical Approaches: don't address grammar; a larger dataset yields a more powerful system.
B. Syntactic Approaches: rewrite grammar using handwritten rules.
C. Explanation Generation
D. Statistical Machine Translation: in practice, systems often use and modify a standard statistical machine translation tool such as Moses.
E. Non-English Approaches
Research challenges: resources, systems, and techniques
A. Resources: measures for automatically evaluating readability are of limited use in TS research.
B. Systems: user level, author level
C. Techniques
---
**SARI is the arithmetic average of n-gram precision and recall of three rewrite operations: addition, copying, and deletion.**
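
To make the quoted definition concrete, here is a toy unigram rendering in Python. It is a sketch only: the official SARI (Xu et al., 2016) works over n-grams up to 4, supports multiple references, and scores deletion by precision alone.

```
from typing import List

def toy_sari(source: List[str], output: List[str], reference: List[str]) -> float:
    """Toy unigram SARI: average the precision/recall of the three
    rewrite operations (add, keep/copy, delete), then average those."""
    src, out, ref = set(source), set(output), set(reference)

    def pr_avg(sys_words, gold_words):
        tp = len(sys_words & gold_words)
        p = tp / len(sys_words) if sys_words else 0.0
        r = tp / len(gold_words) if gold_words else 0.0
        return (p + r) / 2

    add = pr_avg(out - src, ref - src)     # added vs. words that should be added
    keep = pr_avg(out & src, ref & src)    # copied vs. words that should be kept
    delete = pr_avg(src - out, src - ref)  # deleted vs. words that should go
    return (add + keep + delete) / 3
```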
**BLEU**
PBMT-R
Uses an encoder-decoder with reinforcement learning to encourage rewrite operations; the reward function combines three criteria:
* simplicity: SARI
* relevance: cosine similarity between input and output representations
* fluency: sentence probability under a language model
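
As a hedged reconstruction of how the three criteria combine (the exact reward definitions and weights are in Zhang and Lapata, 2017):

```
% hedged reconstruction of the reward (Zhang and Lapata, 2017)
r(\hat{Y}) = \lambda^{S} r^{S}(\hat{Y}) + \lambda^{R} r^{R}(\hat{Y}) + \lambda^{F} r^{F}(\hat{Y})
% r^S: simplicity (SARI-based); r^R: relevance (cosine similarity);
% r^F: fluency (language-model probability)
```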
---
neural network models are able to capture frequent transformations
multi-layer, multi-head attention architecture
maximize the probability of generating simpler words
minimize the probability of generating the original words
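
A minimal PyTorch sketch of that twin objective, assuming each training step has the reference simple token plus an aligned original (complex) token available; the function name, the alignment assumption, and the beta weight are illustrative, not the paper's exact loss:

```
import torch
import torch.nn.functional as F

def simplification_loss(logits, simple_ids, complex_ids, beta=0.5):
    """Cross-entropy raises P(simple reference word); the extra term
    lowers P(aligned original word). logits: (batch, vocab)."""
    ce = F.cross_entropy(logits, simple_ids)
    log_probs = F.log_softmax(logits, dim=-1)
    # Mean log-probability of the original complex tokens; adding it to
    # the loss (and minimizing) pushes their probability down.
    penalty = log_probs.gather(1, complex_ids.unsqueeze(1)).mean()
    return ce + beta * penalty
```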
---
[Post, M., and Vilar, D. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. arXiv preprint arXiv:1804.06609](https://arxiv.org/abs/1804.06609)
---
Straka, M. 2018. UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task. In Proc. CoNLL.
Klein, G.; Kim, Y.; Deng, Y.; Senellart, J.; and Rush, A. M. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.
See, A.; Liu, P. J.; and Manning, C. D. 2017. Get to the point: Summarization with pointer-generator networks. In Proc. ACL.
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, et al. 2018. Marian: Fast Neural Machine Translation in C++. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, System Demonstrations, pages 116–121.
---
Learning Simplifications for Specific Target Audiences
MT method: learn from corpora; requires a parallel corpus.
Four alignment types: identical, elaboration, one-to-many, many-to-one.
Three kinds of prepended artificial tokens (cf. Johnson et al., 2017): to-grade, operation, to-grade-operation. A sketch follows.
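
A sketch of what prepending such artificial tokens might look like, following the Johnson et al. (2017) tag trick; the tag spellings here are invented for illustration:

```
def add_control_tokens(src_tokens, grade=None, operation=None):
    """Prepend artificial control tokens to a tokenized source sentence.
    Tag names are illustrative, not the paper's exact vocabulary."""
    tags = []
    if grade is not None:
        tags.append(f"<to-grade-{grade}>")
    if operation is not None:
        tags.append(f"<op-{operation}>")
    return tags + list(src_tokens)

# e.g. add_control_tokens("the cat sat".split(), grade=4, operation="split")
# -> ['<to-grade-4>', '<op-split>', 'the', 'cat', 'sat']
```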
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguistics, 5:339–351. http://aclweb.org/anthology/Q17-1024.
[Devlin et al., 2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
### 2019.11.14
fastText
### A Comparison of Features for Automatic Readability Assessment
#### features
* Discourse features
  * entity-density features
  * lexical chain features
  * coreference inference features
  * entity-grid features
* Language modeling features
  * word-seq, pos-seq, pair-seq
  * 1-5 grams (SRILM toolkit)
* Parsed syntactic features
  * NP, PP, VP, SBAR
* POS-based features
  * percentage of content words, function words
  * noun, verb, adjective
* shallow features
* other features (OOV)
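
As a concrete (toy) illustration of the POS-based and shallow groups, a small extractor over pre-tagged sentences; Penn Treebank tags are assumed, and the paper's real feature set is much richer:

```
def pos_and_shallow_features(tagged_sentences):
    """tagged_sentences: [[('The', 'DT'), ('cat', 'NN'), ...], ...]."""
    words = [w for sent in tagged_sentences for (w, _) in sent]
    tags = [t for sent in tagged_sentences for (_, t) in sent]
    content = sum(t.startswith(("NN", "VB", "JJ", "RB")) for t in tags)
    return {
        "avg_sentence_length": len(words) / len(tagged_sentences),
        "pct_content_words": content / len(words),
        "pct_nouns": sum(t.startswith("NN") for t in tags) / len(words),
    }
```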
#### Exp
* Discourse features
  * entity-density
* Language modeling features
  * words alone
* Parsed syntactic features
  * VP, NP
* POS-based features
  * noun, prep, content words
* shallow features
  * avg sentence length
* other features (OOV)

Best performers per group: syntactic (NP, VP, PP, SBAR): NP & VP best; POS features: nouns, content words, prepositions; shallow features: avg sentence length.
### Offline Sentence Processing Measures for Testing Readability with Users
word order does matter for readability
### Convolutional Neural Networks for Sentence Classification
pre-trained word vectors: word2vec
#### Exp
variations of CNN: multichannel (the best)
multiple filter widths and feature maps
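
A minimal PyTorch sketch of the Kim (2014)-style architecture: one embedding channel, several filter widths, max-over-time pooling. The paper's best "multichannel" variant additionally keeps a second, frozen copy of the word2vec embeddings; the filter widths (3, 4, 5) and 100 feature maps below are the commonly cited defaults, used here illustratively.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Single-channel Kim-style CNN for sentence classification."""
    def __init__(self, vocab_size, emb_dim=300, n_classes=2,
                 widths=(3, 4, 5), n_filters=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, w) for w in widths)
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        # One conv per filter width, then max-over-time pooling.
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # (batch, n_classes)
```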
### Effective Use of Word Order for Text Categorization with Convolutional Neural Networks
although 2-grams and 3-grams are given as input, the top-weighted SVM features are mostly 1-grams
### Text Readability Assessment for Second Language Learners
For different levels of learners, what are the weights of syntactic, cognitive, and other features?
This can demonstrate the importance of L2 readability.
1. evaluate predictive power
2. focus on L2 learners
3. a method using native corpora
Lexical features work better than syntactic features.
Most prior data uses native corpora; the problem is the lack of well-labelled data.
#### Readability measures
* Traditional
* Lexico-semantic
  * type-token ratio
  * POS
    * percentage of content words, function words
    * noun, verb, adjective
  * EVP (English Vocabulary Profile)
* Parse tree
  * GR-based features (Yannakoudakis, 2013), RASP parser
  * 114 non-GR-based complexity features
* Language modeling (SRILM toolkit)
  * n-gram
  * POS n-gram
* Discourse
  * entity density
  * lexical chain (semantic similarity)
  * entity grid (S, O, X roles)
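
For one concrete example from this list, the type-token ratio in code; raw TTR shrinks as texts grow, so a windowed (moving-average) variant is a common fix. The window size here is illustrative:

```
def type_token_ratio(tokens, window=None):
    """Distinct word types / total tokens; higher = more lexical variety.
    With `window`, returns the mean TTR over sliding windows instead,
    which is less sensitive to text length."""
    if not tokens:
        return 0.0
    if window is None or window >= len(tokens):
        return len(set(tokens)) / len(tokens)
    spans = [tokens[i:i + window] for i in range(len(tokens) - window + 1)]
    return sum(len(set(s)) / window for s in spans) / len(spans)
```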
#### Exp
* mapping function
  1. regression and rounding
  2. learning cut-offs
  3. classification on ranking scores
* domain adaptation
  * EasyAdapt (see the sketch below)
  * self-learning
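
EasyAdapt is Daumé III's (2007) "frustratingly easy" feature augmentation; a minimal numpy sketch of the trick:

```
import numpy as np

def easyadapt(X, domain):
    """Triple each feature vector into [shared, source-only, target-only]
    blocks; a model trained on the augmented space learns which weights
    to share across domains (Daumé III, 2007)."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])
```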
#### Predicting the Relative Difficulty of Single Sentences With and Without Surrounding Context
* data: Use unlabeled data (crowdsourcing)
* goal: Predict difficulty
* Task (crowdsourcing)
1. the sentences were presented alone, outside of their original passage context.
2. the same sentences were presented within their original passage context.
* Generate a ranking from pairwise judgements (see the sketch after this list)
* conclusion
* vocabulary features are effective for classification
* The correlation between judgments of English education professionals and nonprofessionals(crowdsourcing) is high.
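
One simple way to aggregate pairwise difficulty judgments into a ranking is by win rate, sketched below; the paper's actual aggregation method may differ:

```
from collections import defaultdict

def rank_by_difficulty(pairwise):
    """pairwise: iterable of (harder_id, easier_id) judgments.
    Returns sentence ids sorted from easiest to hardest by the fraction
    of comparisons each sentence 'won' (was judged harder in)."""
    wins, seen = defaultdict(int), defaultdict(int)
    for harder, easier in pairwise:
        wins[harder] += 1
        seen[harder] += 1
        seen[easier] += 1
    return sorted(seen, key=lambda s: wins[s] / seen[s])
```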
#### Is this Sentence Difficult? Do you Agree?
* data: Use unlabeled data (crowdsourcing)
* goal: **predict agreement**
* feature sets: lexical + syntactic
* conclusion
* predict the degree of agreement between human annotators, independently from the assigned judgment of complexity
* the **classifier needs few features** to predict agreed sentences when more than half of the annotators share the same judgment
* **features related to sentence structure** are among the top-ranked features characterizing sentences that **were rated highly complex** by a given number of agreeing annotators.
* the **correlation between** the top 20 ranked **features** and the **complexity judgment** is extremely **high**.
#### Psycholinguistic Models of Sentence Processing Improve Sentence Readability Ranking
* data: Use labeled data (Elementary, Intermediate, Advanced)
* goal: predict difficulty
* feature sets: psycholinguistic + lexical + syntactic
  * propositional idea density
    the ratio of propositions (ideas) to words in the sentence
  * surprisal
    how unpredictable a word is given its preceding context: $-\log P(w_i \mid w_1, \dots, w_{i-1})$ (see the sketch after this list)
  * integration cost
    the distance between syntactic heads and their dependents
  * embedding depth
    in an incremental model of parsing, the parser must decide between continuing to analyze the current connected component or hypothesizing the start of a new one
* conclusion
  * psycholinguistic features are effective: accuracy increases by 2~3% over the baseline (lexical + syntactic)
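
A toy sketch of per-word surprisal under an add-one-smoothed bigram language model (a stand-in for whatever LM the paper actually used):

```
import math

def bigram_surprisal(sentence, bigram_counts, unigram_counts, vocab_size):
    """Surprisal of each word: -log2 P(w_i | w_{i-1}).
    Counts are plain dicts built from a training corpus; add-one smoothing."""
    surprisals, prev = [], "<s>"
    for w in sentence:
        p = (bigram_counts.get((prev, w), 0) + 1) / (unigram_counts.get(prev, 0) + vocab_size)
        surprisals.append(-math.log2(p))
        prev = w
    return surprisals
```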
```
20 annotators dataset:
size: 1200 sentences
label tagging: crowdsourcing
label form: easiest(1) ~ hardest(7) (7 levels)
```
```
OneStopEnglish dataset:
size: 6000 paragraphs (14000 sentences)
label tagging: English professionals (not mentioned directly)
label form: easiest(Ele), middle(Int), hardest(Adv) (3 levels)
```