There is controversy over the "AI-generated" research submitted to this year's ICLR, a long-running AI-focused academic conference.
At least three AI labs (Sakana, Intology, and Autoscience) claim to have used AI to generate research that was accepted to ICLR workshops. At conferences like ICLR, workshop organizers typically review research for publication in the conference's workshop tracks.
Sakana notified ICLR leaders before submitting its AI-generated papers and obtained the peer reviewers' consent. An ICLR spokesperson said that the other two labs, Intology and Autoscience, did not.
Several AI academics have taken to social media to criticize Intology's and Autoscience's stunts as a co-opting of the scientific peer-review process.
"All of these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor," wrote Prithviraj Ammanabrolu, assistant professor of computer science at UC San Diego, in a post on X. "It makes me lose respect for everyone involved, regardless of how impressive the system is. Please disclose this to the editors."
As critics pointed out, peer review is a time-consuming, labor-intensive, and mostly volunteer undertaking. According to one recent Nature survey, 40% of academics spend two to four hours reviewing a single study. That workload is escalating: the number of papers submitted to the largest AI conference increased to 17,491 last year, up 41% from 12,345 in 2023.
Academia already had a problem with AI-generated text. One analysis found that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. But AI companies using peer review to effectively benchmark and promote their technology is a relatively new development.
Intology touted its ICLR results in a post on X, claiming in the same post that a workshop reviewer praised one of its AI-generated study's "clever idea[s]."
Academics did not take kindly to this.
Ashwinee Panda, a postdoctoral researcher at the University of Maryland, said in a post on X that submitting AI-generated papers without giving workshop organizers the right to reject them showed a "lack of respect for human reviewers."
"Sakana reached out to ask whether I would be willing to participate in their experiment for the workshop I'm organizing at ICLR," Panda added. "[…] I think submitting AI papers to a venue without contacting the [reviewers] is bad."
For what it's worth, many researchers are skeptical that AI-generated papers are worth the peer-review effort.
Sakana itself acknowledged that its AI made "embarrassing" citation errors and that only one of the three AI-generated papers the company chose to submit would have met the bar for conference acceptance. Sakana withdrew its ICLR paper before it could be published, in the interest of transparency and respect for ICLR convention, the company said.
Alexander Doria, co-founder of AI startup Pleias, said that the raft of surreptitious synthetic ICLR submissions pointed to the need for someone to perform "high-quality" evaluations of AI-generated research for a price.
"Evals [should be] done by researchers fully compensated for their time," Doria said in a series of posts on X. "Academia is not there to outsource free [AI] evals."