## Corpora Evaluation and System Bias Detection in Multi-document Summarization
### Submit Response to Reviewers
### Response to Review #1:
Thank you for your valuable feedback. Below we try to justify some of our experimental decisions.
*"would have liked to see further experiments that shuffled the input sentences in the candidate documents”*
- We did not shuffle sentences, as jumbling the order of sentences within a document often changes the meaning of the text and makes it incomprehensible to sequential attention models.
- The MDS systems we considered require concatenating the candidate documents into a single input. We did run experiments in which these documents were shuffled before concatenation, and we report our observations on layout bias in Line 259 and Fig. 2(c); a minimal sketch of this setup is given below.
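For concreteness, here is a minimal sketch of the document-level shuffling described above (a sketch only; the function and variable names are illustrative and not taken from our released code):

```python
import random

def build_input(candidate_documents, shuffle_docs=True, seed=None):
    """Concatenate candidate documents into a single MDS input,
    optionally shuffling the document order to probe layout bias."""
    docs = list(candidate_documents)
    if shuffle_docs:
        # Reorder whole documents; sentence order inside each document is preserved.
        random.Random(seed).shuffle(docs)
    return " ".join(docs)

# The same instance with and without document shuffling:
docs = ["Document A ...", "Document B ...", "Document C ..."]
original_input = build_input(docs, shuffle_docs=False)
shuffled_input = build_input(docs, shuffle_docs=True, seed=0)
```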
*“trained (or pre-trained) the models across all the datasets to see if the bias still exists”*
- We studied three corpora that are large enough to be used for training -- CNN/DM, MultiNews, and CQASumm. The first two are news-based corpora, while the third is a community question answering (CQA) corpus.
- We did experiment with cross-genre models (e.g., a model trained on CNN/DM and tested on CQASumm, and vice versa). However, we did not report the results since the ROUGE and F1 scores were abysmal.
- In these cross-domain experiments, we observed that models inherit layout bias from the corpora on which they are trained. For example, when the pointer-generator model trained on CNN/DM (a high-bias corpus) was tested on CQASumm (a low-bias corpus), the generated summaries showed high layout bias. Conversely, when the pointer-generator model trained on CQASumm was tested on CNN/DM, the generated summaries showed low layout bias.
*“some metrics could use length normalization to provide a fairer comparison”*
- We are unsure whether you are suggesting length normalization for the corpus metrics or for the system metrics.
- All system summaries have been truncated to at most 100 words; therefore, we feel that the metrics derived from them need not be further length normalized.
- The corpus metrics are scalars/percentages measuring similarity, computed by splitting documents into equal parts (see the sketch below), which we feel need not be length normalized either.
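To illustrate why no further normalization is needed, here is a minimal sketch of a split-based similarity computation (the equal-part splitting and the unigram overlap used here are simplified stand-ins for the actual metric; all names are hypothetical):

```python
def split_into_equal_parts(tokens, n_parts=3):
    """Split a token sequence into n contiguous, (nearly) equal-sized parts."""
    size = -(-len(tokens) // n_parts)  # ceiling division
    return [tokens[i * size:(i + 1) * size] for i in range(n_parts)]

def unigram_overlap(part, reference):
    """Fraction of reference unigrams covered by a document part
    (a stand-in for whichever similarity measure is used)."""
    ref = set(reference)
    return len(set(part) & ref) / max(1, len(ref))

# Because every part of a document has (nearly) the same length,
# the per-part scores are directly comparable without length normalization.
doc_tokens = "tokens of the candidate document ...".split()
ref_tokens = "tokens of the reference summary ...".split()
scores = [unigram_overlap(p, ref_tokens)
          for p in split_into_equal_parts(doc_tokens)]
```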
### Response to Review #2:
Thank you for your valuable feedback. Below, we try our best to clarify the points you raised.
*“intuition behind the proposed metrics is not presented in sufficient clarity”*
- Our initially proposed metrics are based on our intuition about which aspects of system and corpus properties are worth capturing. We went through a number of studies on interpretability in vision, generation, etc. to gather metric ideas.
- However, while drafting guidelines for future corpus authors, we found that some of these metrics have strong positive correlations with others and hence need not be reported individually. Therefore, we recommend a subset of the initial metrics to future corpus authors.
*“we feel that the average Pyramid Score and Inverse-Pyramid Score must be reported as they are strong indicators of generic corpus quality”*
- We divided our corpus metrics into two types -- subjective and objective.
- Subjective metrics include IDS, Redundancy, Abstractness, etc.; while they cannot be used to quantify the general quality of a corpus, they can be optimized to make task-specific choices.
- On the other hand, a higher value of objective metrics such as the Pyramid Score and the Inverse-Pyramid Score indicates with certainty a higher-quality MDS corpus.
#### Responses to Questions:
1. All heatmaps highlight the overlap at the level of a single, randomly chosen document instance, shown as an example. We omitted the MultiNews heatmap due to space constraints and because heatmaps of other news corpora (DUC, TAC) are already included.
2. “Candidate documents in Opinosis, TAC and DUC feature a high degree of redundant information as compared to Multinews and CQASumm” is the correct sentence. Sorry for the typo.
3. We meant that models in the Opinosis column were trained on the CNN/DM dataset (wherever training was required) and tested on Opinosis. Essentially, all instances of the phrase “and tested” should be removed from the Table 2 caption. Sorry for the confusion.
### Response to Review #3:
Thank you for your valuable feedback and for pointing out the typos and citation issues. We will incorporate your suggestions in the final version.
### General Response to Reviewers:
- This work is a first step towards global interpretability of corpora and models for the task of multi-document summarization.
- We attempt to quantify the quality of summarization corpora and prescribe a list of points to consider when proposing a new MDS corpus. We analyze why no existing MDS system achieves superior performance across all corpora, and we examine the extent to which system metrics are influenced, and bias is propagated, by corpus properties.
- We thank the reviewers for their insightful feedback. We are encouraged that they think our work raises important questions (R1, R2, R3), that our metrics are well motivated and backed by evidence (R1, R3), that the paper is well written and well organized (R3), and that it is likely to be of interest to many in the summarization community (R2, R3). Some of the requested experiments are already present in the supplementary section. We have tried our best to address the concerns raised by the reviewers and will incorporate their feedback in the final version.
### Response to Chairs
- We respectfully disagree with Reviewer 1’s comment in the “reasons to reject” section. We believe this paper is highly relevant for the summarization track of EMNLP, and its takeaways will be of use to a wide group of researchers.
- We do not understand why Reviewers 2 and 3 deducted reproducibility points. All required scripts have been submitted as supplementary material, and additional instructions and hyper-parameters are provided in the appendix.