# ICFP 2023 PC Guide

You will be able to access your reviewing assignments at: https://icfp23.hotcrp.com/

Call for papers: https://icfp23.sigplan.org/track/icfp-2023-papers#Call-for-Papers

## Timeline (all dates in 2023)

* ~~1 Mar (Wed)~~ - Submission deadline: Please update your profile, declare conflicts, and select topic preferences before the submission deadline.
* ~~5 Mar (Sun)~~ - Bidding deadline: Please submit at least 35 bids. Please specify your estimated expertise as a suffix to your preference scores (see https://icfp23.hotcrp.com/reviewprefs).
* ~~8 Mar (Wed)~~ - Paper assignment: Please check the provisional paper assignment and let the PC Chair know if you see any anomalies.
* ~~15 Mar (Wed)~~ - Expert reviewer suggestions: Guardians, please post (as a comment on HotCRP) an ordered list of 3 suggested external expert reviewers.
* ~~29 Mar (Wed)~~ - Guardian review deadline
* ~~26 Apr (Wed)~~ - Final (R1) review deadline
* ~~27 Apr (Thu)~~ - HotCRP discussion phase 1 begins
* ~~1 May (Mon)~~ - Author response period begins (it actually started 18 hours early)
* ~~4 May (Thu)~~ - Author response period ends
* ~~4 May (Thu)~~ - HotCRP discussion phase 2 begins
* ~~4 May (Thu)~~ - PC members assigned 6 additional Discussion papers each (you shouldn't review these, but please read the discussion and join in if you have any insights or answers to reviewer questions)
* ~~11 May (Thu)~~ - HotCRP discussion phase 2 ends. We should aim to have converged on decisions for as many papers as possible by this point. Only those papers for which we have not converged on a decision will be discussed in the Zoom PC meetings.
* ~~15--16 May~~ (Mon-Tue) - Zoom PC meetings
* ~~18 May (Thu)~~ - Round 1 notification
* ~~15 Jun (Thu)~~ - Revision deadline
* ~~22 Jun (Thu)~~ - Deadline for checking revisions
* ~~27 Jun (Tue)~~ - Round 2 notification
* ~~18 Jul (Tue)~~ - Camera-ready deadline
* 4--9 Sep (Mon-Sat) - Conference

### Checking revised papers

Some points to bear in mind when checking revised papers:

* The primary purpose of this exercise is to allow the authors to convince the PC that they can satisfy the conditions of acceptance. If they succeed then they will have several more weeks to produce the camera-ready version of the paper.
* Feel free to check the revisions as soon as they arrive.
* All primary PC reviewers for each paper (though not discussion reviewers or external reviewers) should approve the paper.
* If you're happy with the revision then there's no need to prolong the process any further - just post a Reviewer discussion comment with words to the effect of "I'm happy with the revised paper".
* If you still have concerns (specifically about whether the authors have met the conditions), then say so in a Reviewer discussion comment.
* If the authors have failed to sufficiently satisfy the conditions then we can reject papers - though my hope and expectation is that this shouldn't usually happen.
* If authors say that they failed to understand a condition but did not seek to clarify it in good time then that is grounds for rejection.
* We cannot mandate changes that were not covered by the original conditions, but it could be helpful to suggest further changes for the camera-ready version of the paper.
* It doesn't matter if the authors go slightly over the page limit. (For the final paper they will be able to buy additional pages, up to a maximum of 15 pages for Experience Reports and 30 pages for full papers - excluding references and appendices, which are unlimited.)
* The Discussion Lead should notify the authors of the final decision with an Author discussion comment.
If there's nothing more to say then it's fine if this is just: "We're happy that the revised paper addresses the conditions of acceptance."

### Zoom PC meetings

UPDATE: We only required one Zoom meeting, which was held 1pm-3pm UTC on Mon 15 May.

The tentative timing for the Zoom PC meetings is Mon 15 and/or Tue 16 May in the range 1pm-3pm UTC, which is e.g. 6am-8am PDT, 2pm-4pm BST, and 10pm-12am JST. Depending on who the main reviewers for the contentious / under-discussed papers are, there may be potential to adjust the schedule to better accommodate their time zones.

### Posting a comment on HotCRP

UPDATE: Once you've submitted a review for a particular paper, HotCRP comments on that paper will default to being about the reviews, and will (initially) only be visible to Chairs and others who have completed a review for that paper. At the final review deadline I will make all reviews visible (modulo conflicts) to the whole PC.

After visiting one of your assigned papers, select the "Main" tab near the top-left of the window, then click the "Add Comment" button at the bottom of the window. You should select the Visibility "Reviewer discussion" for your comment (the default). Click "Save" to post your comment.

("Reviewer discussion" is the preferred default as we'd like to encourage discussion amongst the PC and other external reviewers. However, if you wish to write a message only for PC members then select "PC discussion", and if you wish to communicate only with the chairs then select "Administrators only". "Author discussion" is for comments that are visible (anonymously) to authors - that will be useful when authors seek clarification after conditional acceptance.)

### Updating reviews

You may (and are encouraged to!) update your review if your understanding and opinion of the paper changes as a result of the discussion and reading other reviews.
One advantage of starting the discussion as soon as possible is that you'll have an opportunity to adjust your reviews before they're sent out to authors for the first time. (If you do so then please add a note to the "Comments for PC" section of the review summarising how and why you changed it.)

### Discussion process

The most effective way of conducting online discussion on HotCRP is the same as the way in which my piano teacher used to ask me to practise the piano --- little and often. As a rough guideline I suggest you aim to connect at least once per day for the duration of the official discussion phase (27 April -- 11 May), for around 5-15 minutes each time. It's important to connect reasonably often in order to foster a meaningful discussion. It would be very helpful to have some discussion before the author notification, but the most important period will be after the author response arrives (4 May). At the Zoom PC meeting we will only discuss those papers for which we have not converged on a decision through the HotCRP discussion.

### Broadening participation

One aspect of in-person (and Zoom) meetings that is hard to replicate using just HotCRP is engagement from the wider PC in discussions of papers that they didn't review. This means, particularly with such a large PC, that it's much harder for PC members to get a feel for the breadth of papers under consideration. In order to capture a little of this breadth, I'm going to assign each of you 6 additional papers immediately following the author response period (4 May). I do not want you to review these papers (they will be labelled "Discussion" on HotCRP), but I would like you to glance over the discussions for these papers and join in if you feel you have something to add (for instance, it might be that you are an expert in the area and have an insight or an answer to a reviewer's question).
(You may of course participate in discussions of any paper you're not conflicted with, but you're particularly encouraged to join in with discussions of the ones you're assigned for discussion.)

### Discussion review invitations

You will be invited to Accept or Decline your Discussion reviews. This is a quirk of my abuse of HotCRP to set up Discussion reviews. Please Accept the request to indicate to the rest of the PC that you are engaging with the discussion. Please do not Decline the request, as that will remove your assignment to that paper. (You do not need to write a review - just join in the discussion if you have something you'd like to contribute.)

Unfortunately (because I am abusing HotCRP in a way that it wasn't designed for), the links from the main HotCRP page will take you to the Review tab rather than the Main tab where the discussion takes place. To access the discussion, click on the Main link at the top of the paper's page.

### Review coverage

* Having a relatively large and diverse PC makes it easier to find expert reviewers on the PC. Nevertheless, for some papers this will likely not be possible. External reviewers can play an important role both by writing expert reviews and by providing an alternative perspective that may not be readily available from the PC.
* Each paper should have at least three reviews.
* Ideally each paper should have one X and one Y review (it is helpful to have the perspective of both experts and non-experts).
* For each paper for which you are a guardian, please post a list of 3 suggested external expert reviewers by *15 Mar (Wed)*. You should do so as a comment on HotCRP. The chair responsible for the paper (either Matthew or Sam) will check for conflicts and make external review requests at their discretion.
* As part of the bidding process you should have expressed your expected expertise as a suffix on your review preference for each paper (X/Y/Z).
This is only visible to you and the chair responsible for your paper.

* If your expertise score for a paper changes as a result of reading the paper in more depth (particularly if the score goes down), then please leave a comment to say so. This can help make the guardians and chairs aware that we may need to find more reviewers.
* Ideally we will recruit external reviewers as early as possible --- in order to give them plenty of time to write reviews. There is no harm in having additional reviewers, so if in doubt please do suggest external reviewers at any stage.
* All requests for external reviews must be made by contacting the chair responsible for the paper (post a HotCRP comment), who will check for conflicts before a request is sent.
* The purpose of external reviewers is *not* to excuse PC members from writing their own reviews. PC members are expected to review papers themselves. However, if there is a paper on which you think a colleague's expertise would be valuable, you may collaborate with them on the review, after checking with the PC Chair for conflicts, but you must take full responsibility for the opinions in the review and be willing to discuss them on HotCRP and Zoom.
* If at any stage you feel, for whatever reason, that you are likely to run into difficulties in carrying out your reviewing or other PC duties on time, then please let the PC Chair know. (The earlier the better, but even if it seems too late, it's *always* helpful to know so that we can make contingency plans.)

### Functional pearls and experience reports

See the [Call for papers](https://icfp23.sigplan.org/track/icfp-2023-papers#Call-for-Papers) for guidelines on functional pearls and experience reports.

### Reviewer expertise and reviewer confidence

The review form includes reviewer expertise (X: expert, Y: knowledgeable, Z: outsider) and reviewer confidence (1: high, 2: medium, 3: low). Reviewer expertise concerns the reviewer's expertise in the core topics of the paper.
If a paper has two core topics and you are an expert in one but not necessarily the other, then X is an appropriate rating. Reviewer confidence reflects the reviewer's level of comprehension of the technical material presented in the paper. An expert may end up with a low comprehension level, for example, if the presentation is lacking.

### Review criteria

The [Call for papers](https://icfp23.sigplan.org/track/icfp-2023-papers#Call-for-Papers) says:

"Submissions will be evaluated according to their relevance, correctness, significance, originality, and clarity. Each submission should explain its contributions in both general and technical terms, clearly identifying what has been accomplished, explaining why it is significant, and comparing it with previous work. The technical content should be accessible to a broad audience."

The interpretation of the five criteria --- relevance, correctness, significance, originality, and clarity --- that the PC should use for reviewing paper submissions is spelled out below. (Most of these criteria are hopefully quite self-explanatory, but "significance", in particular, is one criterion that reviewers frequently have some difficulty interpreting.)

* Relevance: Is the paper in scope for ICFP?
* Correctness: Are the technical results presented in the paper correct?
* Significance: Does the paper make a non-trivial and well-motivated contribution to the field? For our purposes significance is about whether the contribution is sufficient for a conference paper; it is *not* about whether the work is likely to have significant impact over the course of time --- something which is typically rather hard to reliably assess.
* Originality: Does the paper present a new result?
* Clarity: Is the paper comprehensible (to a broad ICFP audience)?

All five of these criteria should be used for assessing standard research papers, functional pearls, and experience reports, but in each case they should be calibrated appropriately.
For instance, a functional pearl need not present original research, but it should still offer some original insight.

### Good and bad reasons to reject papers

[The following text is based on similar content provided to the POPL 2023 PC by Amal Ahmed, which was largely derived from a note by Peter Sewell dated 2021-11-01.]

As reviewers, what do we have to decide? Fundamentally, whether publishing the paper will advance the subject in some substantial way. In more detail:

- is the motivation real --- does the paper address an important problem?
- would the claims it makes constitute substantial progress?
- are those claims backed up --- is it technically solid?
- is it well-written --- enough for readers (with the appropriate background) to understand?

Then, as our venues are typically competitive, we have to weigh the paper against other submissions (how competitive they should be is a question, but we won't go into that here) --- so reviewers need some sense of the level of contribution appropriate to the venue, so that their scores are broadly comparable.

#### Bad reasons to reject good PL papers

Reviewing is essentially a judgement call. We've discussed our review processes at great length over the years, and those processes do matter - we've tuned them, and in many ways improved them - but, fundamentally, peer review relies on informed judgements from a suitably expert and sensible group of people. So this note is not about process. Instead, it identifies some of the bad forms of argument that one sees again and again. If one sees one of these, or if one finds oneself writing one, an alarm bell should ring...
- I could have done this better, if only I'd got round to it
- I can imagine some quite different research that I'd prefer
- I can imagine some quite different exposition that I'd prefer
- It's not self-contained/accessible to me, because I don't know the work it builds on
- It's not self-contained, because this project is too big for all the details to fit in the page limit
- I want more examples / discussion (fitting into the page limit by magic)
- I want extra evaluation (even though it does a decent job to support the claims)
- I just wasn't excited by it (even though it's a clear advance on an important problem)
- I'm assessing this as if it was about X, even though it's actually about Y
- I'm assessing this as if it was a paper of kind X, even though it's actually of kind Y (PL - and indeed FP - spans various kinds of papers, with different values and criteria)
- It's about language design
- It's too mathematical
- It's about the semantics of actual languages, which makes it complicated
- They didn't mechanise all the proofs (though they didn't claim that they did)
- A previous paper claimed to do this (though it doesn't really subsume this one)
- It could do with another pass (and the authors will thank us for rejecting it)
- It presents a big project, not a single clever/cute idea that can be fully explained in a few pages
- The idea here is too simple (even though it's very useful, and no-one fleshed it out and published it before)
- It's incremental (even though it's a big increment --- most research is advancing previous work)
- This feels more like a paper for venue X (even though it could perfectly well fit here)
- This should be a journal paper instead (for good and ill, PL is based on conference publication)
- The authors already put a version on the web
- It doesn't cite an informal talk or unpublished paper
- I'm working on a competing project
- (and finally, the classic) It doesn't cite my paper

Many of these boil down to having due respect for the
authors and the work they've put in. Remember, they've often spent multiple person-years on this, whereas the reviewer has spent maybe a day or two. We're not awarding prizes for effort, and sometimes a reviewer will understand things better than the authors, but one should be wary as a reviewer of trying to require substantially different research or exposition. Of course, none of these are absolutes --- even the last reason above can be a legitimate complaint in specific circumstances, e.g. if that uncited paper renders the submitted work moot.

Another bad reason arises during discussion, after the first reviews have been written. At the end of the process, one has to arrive at accept/reject decisions, but during the process it's all too easy to regard the current scores as an objective assessment, e.g. saying "this is a 'B' paper". The whole point of the discussion is to consider whether reviews are wrong or miscalibrated --- otherwise we'd just order papers by the original scores.

#### Good reasons to reject bad PL papers

On the other side, not all papers are good, and we shouldn't shy away from rejecting poor-quality work. Returning to the above list, in order of decreasing importance:

- is the motivation real --- does the paper address an important problem? (sometimes, simply identifying an important problem is a major contribution)
- would the claims it makes, if true, constitute substantial progress?
- are those claims backed up --- is it technically solid?
- is it well-written --- enough for readers (with the appropriate background) to understand?

A clear "no" for any of these should rule the paper out from any serious venue.
In more detail:

- The motivation isn't explained --- the paper doesn't clearly explain why anyone should care
- The motivational argument is bogus
- The work is technically correct but pointless (basically a rephrasing of the above)
- The claims (presuming they are substantiated) wouldn't significantly advance the subject (it really is a minor increment over previous work)
- ... or, so far, the work is really insufficiently developed for this venue
- It really has been done before
- The claims are misleading: the work is over-sold and the authors aren't clear about the limitations, or about the relationship to previous work
- The claims are unsubstantiated: the paper doesn't give the actual proofs or data, without a good reason why not
- The claims are unsubstantiated: the evaluation is too limited or too flawed to support the claims
- It is substantially less rigorous (theoretically or experimentally) than normal practice in the area
- It's technically wrong (and isn't straightforwardly fixable)
- The exposition is so bad that it's hard (even for an informed reader) to understand what the authors have actually done

When arguing that a paper should be rejected, or summarising a PC decision for the authors, it may be useful to identify exactly which of these (or other) reasons justify the decision.