# Abstract Review Systems
## Sessionize
### Call for abstracts
#### Pros
* Can limit number of submissions per speaker
* Can make tracks single choice
* Can add custom fields (single choice, multiple choice, short/long text, file, url, integer, percentage, email, checkbox)
#### Neutral
* Speaker "tagline" (affiliation) and biography is compulsory, but users can set this in their profile and reuse e.g. if they have submitted to a satRday.
* Co-speakers must register, but can remove co-speaker field and/or add separate free text field to add co-authors (not co-presenters) that do not need to register.
#### Cons
* The full speaker profile is shown to reviewers, e.g. shirt size (in the demo instance), photo, or pronouns, which could bias reviewers.
### Review
Comments are based on the "star rating" option, as this is closest to what has been done at recent useR!s.
#### Pros
* Can allocate reviewers to specific tracks (and add other conditions, e.g. restrict to posters).
* Reviewers can edit their ratings.
* Web interface lets you page through abstracts for review.
#### Neutral
* Comments are shown as optional, with a tiny box, but we want to encourage commenting.
* Organizers can edit submissions; this should be restricted if organizers are also reviewing.
* The star rating cannot be customised: reviewers can give half stars, so there are actually 10 possible ratings, which is a bit much.
#### Cons
* All reviewing must be done through web interface.
* Comments disappear from evaluation history - if you go back, you only see the star rating you added.
* Comments entered in the evaluation are put in the "Team comments" section of the page for an individual session.
  * This section is only viewable by opening the session from the session table.
  * Reviewers can look at other reviewers' comments (and ratings?) here before submitting their own evaluation.
* Reviewers cannot edit comments entered directly on the session page or entered during evaluation.
* The star rating is only shown in the session table if you open the session table corresponding to an evaluation, so you can only see the ratings for each evaluation (track) separately.
* It is not possible to download ratings or comments to evaluate externally.
* Accept/reject decisions must be entered by hand (you can bulk edit a filtered table, but can only filter by searching on title).
* Comments are only for the content team, no option to share a comment back to authors.
* Author only notified of accept/reject decision.
#### Ranking evaluation
This is an interesting idea, but not well implemented. There is no facility for reviewers to comment during the evaluation. Sessionize uses an established method to create an overall ranking (Elo ratings), but a poor design to collect the partial (3-way) rankings. Each reviewer ranks each abstract only once (or one or two abstracts twice, to make up the last triplet), creating a disconnected set of partial rankings. This will lead to a very poor estimate of the overall ranking. To get a good estimate you need a well-connected set of comparisons, with either the same reviewer comparing the same abstract multiple times (in different sets of 3), or many reviewers comparing the same abstracts, which we won't have. A reviewer cannot choose to do more rankings.
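To make the connectivity problem concrete, here is a minimal sketch (hypothetical Python, not Sessionize's implementation) that builds the pairwise comparison graph implied by ranking disjoint triplets once, and counts its connected components.

```python
from itertools import combinations

def triplet_comparisons(abstracts):
    """Pairwise comparisons implied by ranking disjoint triplets once."""
    triplets = [abstracts[i:i + 3] for i in range(0, len(abstracts), 3)]
    pairs = set()
    for t in triplets:
        pairs.update(combinations(t, 2))  # each ranked triplet implies 3 pairs
    return pairs

def n_components(nodes, edges):
    """Count connected components of the comparison graph (depth-first search)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1
        stack = [n]
        while stack:
            cur = stack.pop()
            seen.add(cur)
            stack.extend(adj[cur] - seen)
    return count

abstracts = [f"A{i:02d}" for i in range(30)]  # one reviewer's 30 abstracts
print(n_components(abstracts, triplet_comparisons(abstracts)))  # -> 10
```

One pass over 30 abstracts yields 10 disconnected triangles, so no rating method, Elo included, can place abstracts from different components on a common scale. With several reviewers the components only merge where their abstract pools overlap, which is exactly the overlap we won't have.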
There are more issues in our case: an overall ranking is neither sensible nor useful when we have multiple formats.
## Conftool
### Call for abstracts
#### Unsure
* Compulsory last name field? (This was an issue with Sciencesconf: we could not change it to a "Full name" field.)
* Topics can't be restricted to single choice?
* Fixed/custom fields in general?
### Review
#### Pros
* Can have comment for authors and private comment for program chairs.
* Reviewer can print all their abstracts for review (HTML) or export to Word doc.
* Allows reviewers to declare conflict of interest.
* Reviewer can view all their completed reviews on one page.
* Can recommend a different format on the review form (e.g. talk -> lightning talk).
#### Neutral
* Allows reviewers to select their own topics of expertise (we invited reviewers with specific topics in mind, but this might be helpful); no order of priority though.
* Reviewers can see other reviews once they've submitted their own.
#### Cons
* The web interface does not allow paging through reviews: you must select an abstract, submit the review, then go back to all reviews.
* Reviews and ratings cannot be downloaded by program chairs.
* Accept/reject decisions must be entered by hand.
* Comments are not shown in summary table - must click to enter abstract for details.
* Accept/reject votes made in the online forum can be shown in the summary table. Not very practical though: you need to click into the forum to discuss each abstract.
### Other
* Scheduling tools with "my agenda".
## pretalx
### Call for abstracts
#### Pros
* Can add custom fields (single choice, multiple choice, short/long text, file, number, yes/no)
* Can choose whether answer should be shown to reviewers
* Can limit to certain session types
* Can mark as personal data (deleted if user deletes their account)
#### Neutral
* Track must be single choice (okay for us).
#### Cons
* The field for an additional speaker (limited to 1) cannot be removed; like Sessionize, pretalx will invite the co-speaker to register if one is entered.
  * The help text can be changed though, to clarify it is only for a co-presenter, not a co-author.
### Review
#### Pros
* Web interface lets you page through abstracts for review.
* Scores are customisable: any number of levels, with any labels.
* Reviewers can edit comments.
#### Neutral
Putting API features as neutral because, although it is good that you can get the data out, it would take some work to compile it for review/evaluation (see the sketch after this list).
* Can use markdown in review.
* Submissions can be downloaded through API.
* Reviewer's text and scores can be downloaded through API.
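As a rough idea of the compilation work involved, here is a minimal sketch that pages through the submissions and reviews endpoints of the pretalx REST API. The event slug, token, and exact field names are assumptions to check against the API docs for your pretalx version.

```python
import requests

BASE = "https://pretalx.com/api/events/my-event"  # hypothetical event slug
HEADERS = {"Authorization": "Token MY-API-TOKEN"}  # token from account settings (assumed location)

def fetch_all(url):
    """Follow the paginated 'next' links and collect all results."""
    results = []
    while url:
        page = requests.get(url, headers=HEADERS, timeout=30)
        page.raise_for_status()
        data = page.json()
        results.extend(data["results"])
        url = data["next"]  # None on the last page
    return results

submissions = fetch_all(f"{BASE}/submissions/")
reviews = fetch_all(f"{BASE}/reviews/")

# Collect review scores per submission code, then print one row per abstract.
scores = {}
for review in reviews:
    scores.setdefault(review["submission"], []).append(review["score"])
for sub in submissions:
    print(sub["code"], sub["title"], scores.get(sub["code"], []))
```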
#### Cons
* All reviewing must be done through web interface.
* The API is read-only, so accept/reject decisions must be entered by hand (you can bulk edit a filtered table, but can only filter by searching on title).
* Comments are not shown in summary table of all abstracts.
* Comments are only for the content team, no option to share a comment back to authors.
* Author only notified of accept/reject decision.
### Other
#### Cons
* Placeholders are not currently substituted in email templates, so you can only send generic emails e.g. for acceptance.
## ETH system
So far this has only been evaluated as a reviewer: we were given a reviewer login, with a table of abstracts to review.
### Review
#### Neutral
* Checkbox to mark review complete. Pro: reviewer can go back and forth between abstracts, edit reviews and mark when done. Con: need to click back into abstract to mark complete.
#### Cons
* Pages are slow to load.
* Reviews must be done through web interface.
* The web interface does not allow paging through reviews; instead it requires a lot of clicking (one click each to select the abstract, view the title, view the topic, view the abstract, then review).
* No option for comment to authors?