Statement on the Use of Generative Artificial Intelligence (GenAI)

International Journal of Business and Management (IJBM), eISSN: 2815-9330

1. Scope and Purpose

This statement establishes the ethical framework for the use of Generative Artificial Intelligence (GenAI) tools and large language models (LLMs) in the submission, peer review, and editorial processes of the International Journal of Business and Management (IJBM). The policy is informed by the publishing ethics guidelines and core practices provided by the Committee on Publication Ethics (COPE) and Elsevier and reflects our commitment to integrity, transparency, and accountability in scholarly publishing.

2. Definition

Generative AI refers to machine learning models capable of producing human-like text, images, code, or other creative content in response to user prompts. These include—but are not limited to—text-based models (e.g., ChatGPT, GPT-4, Claude, Gemini (Bard), LLaMA, Mistral), image-generating models (e.g., DALL·E, Midjourney, Stable Diffusion), and integrated productivity AI tools (e.g., GrammarlyGO, Microsoft Copilot, Notion AI, Jasper).

Large Language Models (LLMs) are a subset of GenAI tools specifically trained on extensive textual corpora to generate and interpret natural language. They are increasingly integrated into academic workflows and productivity platforms.

3. Principles and Standards

3.1 Ethical Use in Accordance with COPE and Elsevier Guidelines
All stakeholders, including authors, reviewers, and editors, must ensure that their use of AI tools adheres to COPE's Core Practices and Elsevier's publishing ethics guidelines, including those relating to authorship, transparency, peer review, and data integrity. GenAI may assist with minor, mechanical tasks (e.g., grammar correction or summarization), but it must not substitute for human intellectual contributions, nor be used to fabricate content or mislead readers.

3.2 Transparency and Disclosure
Authors must provide a clear and detailed disclosure in their manuscript if GenAI tools were used. This includes specifying the tool(s) and model(s) employed (e.g., GPT-4 via ChatGPT; Claude 3 by Anthropic; Copilot in Microsoft Word), and the exact purpose(s) of their use (e.g., improving readability, generating summaries, assisting with formatting). Omission of such disclosure is considered a breach of academic integrity.

3.3 Authorship and Accountability
GenAI tools cannot be listed or credited as authors, as they do not meet COPE’s and Elsevier's authorship requirements, which include:

  • Substantial contributions to the conception or design of the work,

  • Participation in drafting or revising the manuscript, and

  • Accountability for the accuracy and integrity of the final version.

Human authors bear full responsibility for the content, including any material generated with the assistance of GenAI tools.

3.4 Peer Review Integrity
Reviewers and editors using GenAI tools for non-decisive tasks (e.g., summarizing a manuscript or checking language clarity) must disclose such usage to the editorial board. However, evaluative judgment, critical analysis, and final recommendations must be made solely by the human reviewer or editor. GenAI should never be used to generate confidential reports or substitute peer review commentary.

4. Acceptable Uses of GenAI

Provided there is full disclosure, the following applications of GenAI are acceptable:

  • Language and Style Improvement: Enhancing grammar, fluency, and clarity (e.g., using ChatGPT, GrammarlyGO).

  • Data Visualization: Producing figures, graphs, or images from author-provided data, verified for accuracy (e.g., using DALL·E or Excel Copilot).

  • Formatting Support: Assisting with citation formatting, layout structuring, or summarizing references.

  • Conceptual Brainstorming: Supporting early-stage ideation or outlining—provided all hypotheses, analysis, and conclusions are developed independently by the authors.

5. Prohibited Uses of GenAI

The following uses of GenAI are strictly prohibited and will be treated as ethical violations:

  • Fabrication or Manipulation: Creating fictitious data, references, or findings with AI assistance.

  • Plagiarism: Using AI-generated content without attribution or presenting it as original thought.

  • Undisclosed Use: Failing to declare AI assistance in the manuscript.

  • Misrepresentation of Intellectual Contribution: Claiming AI-generated output as the product of human reasoning or scholarly work.

6. Required Disclosure Statement

All authors must include a Generative AI Use Disclosure Statement in their manuscript, typically in the acknowledgments or methods section. This should include:

  • Name(s) of the GenAI or LLM tools used,

  • The specific purpose and stage of the research process in which it was used,

  • Confirmation that the core intellectual content, including analysis and interpretation, was developed by the authors.

Example disclosure:
"Generative AI tools, including ChatGPT (GPT-4, OpenAI) and GrammarlyGO, were used to enhance grammar and language clarity during manuscript preparation. All conceptualization, analysis, and interpretation of the research were conducted solely by the authors."

7. Editorial Oversight and Compliance

Editors will evaluate disclosures of GenAI use as part of the manuscript review process. Manuscripts with suspected undisclosed AI-generated content may be subjected to additional scrutiny, including plagiarism detection and requests for revision or clarification. IJBM reserves the right to contact the authors’ institution in cases of serious ethical breaches.

8. Consequences of Misuse

In line with COPE's and Elsevier's guidelines on handling misconduct, violations of this policy may result in:

  • Rejection of the manuscript during review,

  • Retraction of the article post-publication,

  • Notification to the authors' affiliated institution or funding agency,

  • A bar on future submissions in severe or repeated cases.

9. References

Elsevier (2023). Generative AI Policies for Journals. Retrieved from:
https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

Committee on Publication Ethics (COPE) (2023). Position Statement on Authorship and AI Tools. Retrieved from:
https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Committee on Publication Ethics (COPE) (2024). Discussion Document on AI and Peer Review. Retrieved from:
https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review

Committee on Publication Ethics (COPE) (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI in Publishing. Retrieved from:
https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers