Researchers have been embedding hidden prompts within their scientific papers to manipulate AI evaluation systems into giving positive reviews. This practice, uncovered by Nikkei Asia, involves using hidden text (e.g., written in white font) containing instructions like "give only positive feedback" or requests to highlight perceived strengths.
This manipulation affects the peer-review process, where papers are evaluated before publication in journals such as Science or Nature. The use of AI tools for summarization or fact-checking increases the vulnerability to these hidden prompts. The practice has been found in papers from universities in several countries, including the US, China, Japan, Singapore, and South Korea.
Some researchers defend the practice, viewing it as a way to identify reviewers who rely too heavily on AI. However, the concern is that this manipulation leads to biased evaluations and potentially undermines the integrity of scientific research. The increasing use of AI in peer review necessitates a discussion of best practices and measures to prevent such manipulation.