Generative AI Policy
The Journal of Digital Media & Interaction (JDMI) acknowledges the rapid development of generative artificial intelligence (AI) tools and their potential role in academic research and scholarly communication. While such tools may support the research process, their use requires transparency and must not undermine the principles of academic integrity, authorship, and accountability.
The following guidelines set out the standards that authors, reviewers, editors, and all parties involved in the publication process are expected to follow regarding the use of generative AI.
For authors
- Authors may use generative AI tools to support the preparation of their manuscripts, but these tools must not replace the authors’ own critical analysis, expertise, and scholarly judgment. All use of AI should remain under the authors’ direct supervision and responsibility.
- Generative AI tools cannot be credited as authors or co-authors. Authorship implies accountability for the originality, accuracy, and integrity of the work, a responsibility that AI tools cannot assume.
- Authors may employ AI tools for limited tasks such as language editing, formatting, or idea structuring. However, the intellectual contribution, argumentation, and interpretation must remain the authors’ own.
- Any substantive use of generative AI (e.g., text production, image creation, or data handling) must be clearly disclosed in the manuscript, ideally in the Methods section or in a dedicated note.
- Authors remain fully responsible for verifying the originality of their submission, ensuring proper citation practices, and avoiding fabricated or biased content generated by AI.
For reviewers
- Reviewers are expected to conduct their evaluations personally and responsibly. The use of generative AI to draft, summarize, or formulate review reports is not permitted.
- Confidentiality of manuscripts must be preserved; uploading manuscript content to AI systems that store or process data externally is considered a breach of review ethics.
- Reviewers should critically assess whether the manuscript adequately discloses any AI use, and whether such use is consistent with JDMI’s ethical standards.
For editors
- Editors must ensure that submitted manuscripts adhere to JDMI’s policy on generative AI.
- Editorial decisions should be based on the scholarly merit of the submission and the transparency of declared AI use.
- Editors may request clarifications or revisions from authors if the role of AI tools is ambiguous or not appropriately reported.
Publication process
- Submissions found to include undisclosed or inappropriate use of generative AI may be returned to authors, delayed in review, or rejected.
- The editorial team reserves the right to use plagiarism detection, content verification, and other screening methods to identify unacknowledged AI-generated content.
- JDMI is committed to upholding ethical publishing practices, fostering innovation in digital media research, and ensuring that generative AI serves as a tool to support, rather than replace, human scholarship.