Ever since the launch of AI chatbots like ChatGPT, there has been debate over whether they can be credited as authors. Many argue that a chatbot’s ability to respond to prompts constitutes a form of creativity, but academic publisher Springer Nature recently announced that it will not credit ChatGPT as an author on papers it publishes. However, the publisher emphasised that it has no issue with scientists using AI to help write or generate ideas for research, as long as the authors fully disclose the AI’s contribution.
Magdalena Skipper, editor-in-chief of Nature, Springer Nature’s flagship publication, said: “This new generation of LLM tools, including ChatGPT, has exploded into the community. People are excited and playing with them, but also using them in ways that exceed their current capabilities.”
Labelling of AI Contributions in Scientific Papers
One of the underlying problems of using AI chatbots in writing scientific papers is labelling. While some papers clearly label AI-generated text, others merely acknowledge the bot’s contribution with a sentence such as “contributed to writing several sections of this manuscript.” This lack of detail about where and how authors used AI chatbots has caused confusion and criticism in the scientific community.
Experts argue that software alone cannot fulfil the responsibilities of a human author, which include being accountable for a publication, claiming intellectual property rights, corresponding with other scientists, and answering questions. There are also concerns about the quality of AI writing tools’ output, as these tools have in several cases produced factually incorrect information and amplified biases such as sexism and racism. Such concerns have led schools and organisations to ban the use of ChatGPT.
While Springer Nature does not support a ban on AI in scientific work, it advocates for the scientific community to establish new norms for disclosure and safety guidelines for AI-assisted research.