OpenReview serves NeurIPS for the first time

14 Nov 2021 · OpenReview News Article · CC BY 4.0

OpenReview now powers NeurIPS. We're excited to share that NeurIPS, the flagship machine learning conference, ran its full 2021 peer-review workflow on OpenReview for the first time. Together with the NeurIPS Program Chairs, we planned an end-to-end process that preserved the conference's familiar double-blind, committee-driven model while adding new flexibility and instrumentation. NeurIPS invited 13k+ reviewers, 1k Area Chairs, and 155 Senior Area Chairs. During preparation and submission we created and enriched 12,537 new researcher profiles (in May alone), linking publications from DBLP and gathering detailed conflicts of interest: institutional history, advisor/advisee relationships, collaborators, social connections, and privately declared conflicts. NeurIPS adopted OpenReview's modern reviewer-expertise embeddings and our FairFlow / min-cost-flow assignment, augmented by bids and conflict detection, and used our Edge Browser to interactively refine matches and balance reviewer loads.
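The assignment step above can be illustrated with a toy sketch: papers and reviewers form a bipartite graph, expertise affinities become edge costs, conflicted pairs get no edge, and reviewer load limits become capacities. This is a minimal min-cost-flow example using `networkx` with made-up data, not OpenReview's actual matcher (FairFlow adds fairness constraints beyond plain min-cost flow):

```python
import networkx as nx

# Hypothetical toy data: affinity scores in [0, 1] from expertise embeddings
# and bids, plus a detected conflict that must be excluded from matching.
affinity = {
    ("r1", "p1"): 0.9, ("r1", "p2"): 0.4,
    ("r2", "p1"): 0.3, ("r2", "p2"): 0.8,
    ("r3", "p1"): 0.7, ("r3", "p2"): 0.6,
}
conflicts = {("r3", "p2")}
reviewer_load = 1       # max papers per reviewer
reviews_per_paper = 1   # reviewers required per paper

reviewers = {r for r, _ in affinity}
papers = {p for _, p in affinity}

G = nx.DiGraph()
for r in reviewers:
    G.add_edge("src", r, capacity=reviewer_load, weight=0)
for p in papers:
    G.add_edge(p, "sink", capacity=reviews_per_paper, weight=0)
for (r, p), a in affinity.items():
    if (r, p) in conflicts:
        continue  # conflicted pairs get no edge at all
    # Higher affinity -> lower cost; scale to integers for the solver.
    G.add_edge(r, p, capacity=1, weight=int((1 - a) * 100))

total = len(papers) * reviews_per_paper
G.add_node("src", demand=-total)   # supply all required review slots
G.add_node("sink", demand=total)

flow = nx.min_cost_flow(G)
assignment = [(r, p) for r in reviewers
              for p, f in flow[r].items() if f > 0]
# Picks the highest-affinity feasible matching: r1->p1, r2->p2.
```

In practice the same structure scales to thousands of reviewers and papers; the Edge Browser then lets chairs inspect and manually adjust the computed edges.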

At submission, OpenReview received 11,729 papers and handled a last-day surge of 42k updates with 28k active users (peaking at ~2.3k simultaneous) while staying under 50% CPU and maintaining smooth response. Across the cycle we sent 110k+ submission emails, supported task-based bidding for SACs/ACs/reviewers, and delivered role-specific consoles for Reviewers, ACs, SACs, Ethics Reviewers/Chairs, and PCs (including status filters, CSV exports, reminder tools, and reassignment). During review and discussion, the system processed 37,284 reviews, 8,103 meta-reviews, 452 ethics reviews, and 101,112 confidential comments, with a NeurIPS-requested multi-tab discussion forum (author discussion, committee discussion, public post-review discussion). When reviews were released, an unexpected synchronized email blast briefly stressed our databases; within an hour we streamlined landing pages and queries, returning responsiveness to normal.

True to NeurIPS’s culture of improving peer review through evidence, 2021 also ran several experiments on OpenReview: a refreshed consistency experiment, a resubmission-visibility randomized test (resubmission info shown to reviewers/ACs for 50% of papers), and an author-perception survey (two-part instrument implemented in OpenReview). After notifications, accepted papers—and their anonymous reviews, meta-reviews, and author responses—became public for community discussion. Rejected papers remained private by default, with a two-week opt-in to make the de-anonymized record public; about 2% chose to opt in, offering a transparent path for authors to surface concerns.

We’re deeply grateful for the collaboration and feedback from the NeurIPS team.

Program Co-chair Alina Beygelzimer wrote to OpenReview, saying, “As Program Chairs for NeurIPS 2021, we decided to shift the entire reviewing workflow to OpenReview. OpenReview is a flexible platform that allows heavy customization, and will be easy to adapt as the needs of the conference evolve. It brings a number of infrastructural improvements including persistent user profiles that can be self-managed, accountability in conflict-of-interest declarations, and improved modes of interaction during the discussion process. NeurIPS has a long history of experimentation with the goal of informing and improving the review process (e.g., the widely known ‘NeurIPS Consistency Experiment’ of 2014). This year we took full advantage of the great flexibility of OpenReview’s workflow configuration to run several key experiments (including a version of the noise audit that hasn’t been done since 2014). We are grateful to the OpenReview team for supporting all requested experimentation.”

Alina added, “Our experience with OpenReview has been a delight. Not only did the paper deadline proceed smoothly (with sub-second system response time throughout the arrival of thousands of submissions just before the submission deadline), but OpenReview gracefully handled more than 20K authors accessing the system roughly at the same time to read and respond to preliminary reviews, and enabled 10K reviewers and Area Chairs and 20K authors to engage in discussions in the weeks that followed. The feedback we received from our authors and program committee members has been overwhelmingly positive.”

She concluded, “I hope that NeurIPS will continue to work with OpenReview for years to come. We are hugely grateful to the OpenReview team, for their unparalleled level of support to everyone involved in the review process. OpenReview has also supported the Data & Benchmarks track (new this year) as well as the Ethics Review process for both the main conference and the Data & Benchmarks track. It is also notable that over 20 of the NeurIPS workshops have chosen to use OpenReview for their reviewing workflow this year.”
