Navigating Generative AI in Scientific Publishing: Our Journal's Policy
Generative AI (GenAI) tools, while boosting writing efficiency, introduce ethical and integrity concerns in scientific publishing. To address this, our journal, effective June 1, 2025, is implementing a clear policy requiring authors to disclose any GenAI use throughout the manuscript preparation, peer-review, and publication processes.
While indexing services such as Scopus leave formatting flexible, many major publishers have already established formal AI policies:
- Elsevier mandates disclosure of GenAI use, permitting it only to improve readability and language under human supervision. AI tools may not be listed or cited as authors (Link).
- Springer Nature stresses human accountability, disallows AI-generated figures, and requires documentation for extensive GenAI use, though not for basic LLM-assisted editing (Link).
- Publishers such as BMC, Nature Portfolio, and Cureus likewise forbid listing AI as an author, require transparent disclosure of AI tool use, and restrict unlabeled AI-generated content (Link).
Our Journal's Specific Policy:
- Disclosure Requirement (GenAI FORM): Authors must explicitly state any use of GenAI tools (e.g., ChatGPT, Grammarly) at the end of their manuscript, before the references.
- No AI Authorship: GenAI tools are strictly prohibited from being listed as authors or co-authors.
- Restrictions on AI-Generated Visuals: Figures, tables, and other visuals must not be solely AI-generated unless approved and clearly labeled.
- Permissible Use for Language Editing: AI tools may be used for minor language and grammar corrections, but only under close human oversight. Authors remain fully responsible for all content in their manuscript.
These guidelines aim to maintain academic publishing integrity while recognizing the practical advantages of evolving technologies.