In conference peer review, reviewers are often asked to provide “bids” on
each submitted paper that express their interest in reviewing that paper. A
paper assignment algorithm then uses these bids (along with other data) to
compute a high-quality assignment of reviewers to papers. However, this process
has been exploited by malicious reviewers who strategically bid in order to
unethically manipulate the paper assignment, severely undermining the peer
review process. For example, these reviewers may aim to get assigned to a
friend’s paper as part of a quid-pro-quo deal. A critical impediment to
creating and evaluating methods to mitigate this issue is the lack of any
publicly available data on malicious paper bidding. In this work, we fill this
gap by collecting and publicly releasing a novel dataset gathered from a mock
conference activity where participants were instructed to bid either honestly
or maliciously. We further provide a descriptive analysis of the bidding
behavior, including our categorization of different strategies employed by
participants. Finally, we evaluate the ability of each strategy to manipulate
the assignment, along with the performance of some simple algorithms for
detecting malicious bidding. The performance of these algorithms can be taken
as a baseline for future research on this detection problem.
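
To make the bidding-based assignment concrete, the following is a minimal
sketch, not the algorithm used by any particular conference: it treats bids as
affinity scores and computes a one-to-one assignment maximizing total bid
value. The bid scale and matrix values here are hypothetical illustrations.

```python
# A toy sketch (assumptions: one paper per reviewer, bids on a 0-3 scale)
# of how bids can drive a paper assignment: treat the bid matrix as
# affinities and solve a linear assignment maximizing total bid value.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical bid matrix: bids[r][p] is reviewer r's bid on paper p
# (e.g., 0 = not willing, 1 = in a pinch, 2 = willing, 3 = eager).
bids = np.array([
    [3, 0, 1],
    [0, 2, 3],
    [1, 3, 0],
])

# linear_sum_assignment minimizes cost, so negate the bids to maximize them.
rows, cols = linear_sum_assignment(-bids)
for r, p in zip(rows, cols):
    print(f"reviewer {r} -> paper {p} (bid {bids[r, p]})")
```

A malicious reviewer in this setting would inflate their bid on a target
paper (and possibly deflate others) to steer such a solver toward the
assignment they want, which is the manipulation the dataset is designed to
study.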