Our AI Policy
Berghahn is committed to upholding the integrity of scholarly publishing in a time of rapid technological change. We are approaching the evolving role of Artificial Intelligence (AI) in research and publishing with care and caution, recognizing both its potential and its risks.
Across our books and journals programs, we encourage authors, editors, reviewers, and readers to engage in ongoing dialogue about the responsible and transparent use of AI. By adhering to these principles, we aim to foster a publishing environment grounded in integrity, where any use of AI is subject to accountability, academic standards and ethical scrutiny.
To this end, we have put together this policy to provide authors and editors with clear guidance in this fast-moving field. It outlines general definitions of AI, our policy on the use of these tools, and guidelines on current best practices for their use.
For any queries, please contact:
For Books - editorialus@berghahnbooks.com
For Journals - editorial@berghahnjournals.com
AI Terms
Generative AI (GenAI) tools, such as ChatGPT, produce novel content in the form of audio, text, images, and video. Most are Large Language Models (LLMs) trained on enormous datasets. A model’s output is largely determined by the quality of its training data (i.e. text), which is often extracted from copyrighted work. The output of a model trained on non-peer-reviewed text is likely to fall short of academic standards (although various emerging models are being trained exclusively on academic publications).
It is important to note that large language models are not reliably accurate, nor are their results reproducible over time, and it is these forms of AI that have the potential to be used to generate text and results in research articles.
Assistive AI (AssAI) tools, such as Grammarly or spellcheckers, help with grammar, language refinement, or formatting without generating original content. While these tools can be useful and often freely accessible, their output is not always error-free. At Berghahn, we value the judgment, nuance, and personal attention that only human editorial care can provide.
Authorship and Responsibility
Authorship can only be assigned to humans. AI tools cannot be listed as co-authors or contributors as they cannot take legal or ethical responsibility for the content. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.
All submitted work must be fully owned and overseen by human authors. Authors remain fully accountable for any content – including text, images, or data – that has been produced or modified with the help of AI tools. Authors must take public responsibility for their work and for any breach of publication ethics or copyright that the use of AI may incur.
Use of AI Tools
If authors use AI tools in any capacity – for writing, image generation, data analysis, or metadata preparation – this must be clearly disclosed in the manuscript. Disclosures should appear in a note or endnote and include:
- The name of the AI tool
- The date accessed
- A brief description of how it was used
Book Example
The acknowledgement should appear as an endnote:
- In Chapter 1, the section titled ‘Historical Context Overview’ includes content developed with the assistance of ChatGPT (OpenAI), accessed 15 March 2025, using the prompt: ‘Summarize the main political events in France during the 1960s in under 300 words.’ Final text was reviewed and edited by the author.
Journal Example
Footnotes
- 1 Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/
If the prompt hasn't been included in the text, it can be included in the note:
- 1 ChatGPT, response to "Explain how to make pizza dough from common household ingredients," OpenAI, March 7, 2023, https://chat.openai.com/
Author-Date
Any information not included in the text is placed in the parenthetical reference.
- Example: (ChatGPT, March 7, 2023)
Images, Illustrations, Tables and Cover Usage
Authors who use AI-generated images – including for cover designs or internal illustrations/figures/tables – must:
- Confirm they have the right to use and modify the image.
- Check the terms of use of the AI tool to find its specific rights and licensing statement.
- Be transparent about the AI origin of the image.
- The cover image should be a unique creation and not a copy of an existing work.
- To ensure this:
- Use only your own original prompts to generate the image.
- Avoid using an existing image as a sample, which can lead to highly derivative results.
- Disclose if you have used another image as a sample.
- Top Tip: Always do a reverse image search of the generated image to ensure it is not a copy of an existing work by a human author.
- You must also credit the AI tool used.
- Example of a cover credit line:
Cover image created using ChatGPT AI, based on a prompt developed by the author/editor (insert name). Date: (insert date). Prompt used: (insert prompt).
- Example of an image credit line:
Image created using ChatGPT AI, based on a prompt developed by the author/editor (insert name). Date: (insert date). Prompt used: (insert prompt).
- AI tools may assist with formatting or structuring tables (e.g., arranging columns, suggesting layouts, or converting already published/verified data into a clear format) in accordance with the following guidelines:
- Authors must not use AI to generate new data or statistical results.
- Authors remain fully responsible for accuracy and verification of all table content.
- Authors should not input original, unpublished research data into public LLM engines (e.g. ChatGPT), as there is a risk that such data may be stored, learned from, or reproduced by the tool.
- Any AI assistance must be disclosed (tool, date, and how it was used). Example:
Table 2.1 was formatted with the assistance of ChatGPT (OpenAI), accessed 15 March 2025. All data was provided by the author and independently verified against the original sources.
Captions and Alt Text
Authors may use AI tools to help generate draft captions and alt text for images and figures. However, they must adhere to the following guidelines:
- AI output must always be reviewed and, if necessary, corrected by the author to ensure accuracy, clarity, and adherence to accessibility best practices.
- Captions and alt text should describe content in a way that is informative for all readers, including those using screen readers.
- AI-generated captions/alt text must be disclosed, noting the tool and date accessed.
- Human authors remain fully responsible for accuracy and accessibility.
Example:
Alt text for figures in Chapter 3 was drafted with the assistance of ChatGPT (OpenAI), accessed 15 March 2025, using the prompt: “Generate concise and accessible alt text for the following figure description …”. Final text was reviewed and edited by the author to ensure accuracy and adherence to accessibility standards.
Note: Alt text is essential because it ensures accessibility for visually impaired readers, supports legal compliance, enhances SEO, and improves the user experience when images cannot be viewed. We encourage authors to produce high-quality, comprehensive alt text. For guidance on accessibility and best practices in alt text writing, refer to Alt Text 101.
For further information on drafting your manuscript and/or journal article for submission, please refer to our submission guidelines:
For Books:
Style Guides and Documentation Guides
For Journals:
For general guides, refer to our Author page.
Refer to the journal itself for journal-specific guides.
Reviewers and Editors
- Peer reviewers must not use AI tools to read, summarize or evaluate submitted manuscripts. Uploading any part of a manuscript to a public AI platform is a breach of author confidentiality.
- GenAI and AssAI tools should not be employed at any stage of the decision-making process.
- Journal editors and reviewers are ultimately responsible and accountable for their decisions on all articles submitted.
- Book editors are likewise expected not to use AI in the review, selection or evaluation of book proposals or manuscripts.
- Plagiarism checks using any AI program need to be noted to ensure ethical use.
- All editorial decisions are to be made by qualified human experts exercising independent judgment.
Berghahn’s Role and Commitments
- We will not use GenAI in the evaluation of your book or article. All decisions about your submitted manuscript are made by human editorial judgment and the expertise of our editors, editorial boards and peer reviewers.
- We will not use GenAI in the production process of your book or article without your knowledge and consent.
- We will not use GenAI in the creation of any covers or images without your knowledge and consent.
- We may use AssAI tools to carry out tasks more efficiently, such as checking grammar and spelling, creating marketing materials and enhancing metadata. However, we will not upload any of your manuscript content to a public AI tool.
- As of now, we are not licensing any publications for Large Language Model (LLM) training. As the technology develops further, any future changes to that status will be communicated.
Measures to Prevent Unauthorized AI Scraping
We have implemented server-side scripts designed to block AI-based web scrapers from accessing open access (OA) content on the Berghahn website. While we believe similar protections may be in place on other platforms, we cannot guarantee that all third-party hosting environments are immune to AI-based web scrapers. We will, however, continue to monitor this closely to track how the content of our open access books and articles may be used.
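For illustration only (our own scripts are not published here), a server-side block of this kind typically inspects the User-Agent header of each incoming request and rejects requests that match known AI crawlers. The sketch below assumes a Python/Flask application and an illustrative, non-exhaustive list of crawler signatures:

```python
# Minimal sketch of a server-side AI-scraper block (illustrative only,
# not Berghahn's actual implementation). Assumes a Flask application.
from flask import Flask, abort, request

app = Flask(__name__)

# Illustrative, non-exhaustive list of known AI crawler user-agent substrings.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot", "Bytespider")

@app.before_request
def block_ai_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    # Reject the request before any open access content route is reached.
    if any(signature in user_agent for signature in AI_CRAWLER_SIGNATURES):
        abort(403)

@app.route("/open-access/<path:item>")
def open_access_content(item):
    # Placeholder route standing in for OA book and article content.
    return f"Open access content: {item}"
```

In practice, header-based filtering of this sort is usually paired with a robots.txt disallow list for the same crawlers: well-behaved bots honour robots.txt, while the header check catches those that do not.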
Quick Reference on AI Usage for Authors
| Use Case | AI Use Allowed? | Notes / Conditions |
|---|---|---|
| Grammar/spell checking with tools like Grammarly | Yes | No need to disclose unless extensive editing is performed. |
| Drafting or summarizing text with GenAI tools (e.g., ChatGPT) | Yes (with disclosure; subject to our review) | Tool, date accessed, and prompt must be disclosed in a note or endnote. Approval of use is subject to our review. |
| Generating article/book titles or blurbs/abstracts with GenAI | Yes (with disclosure) | Must be reviewed and approved by human author. |
| Generating citations or bibliographic entries with GenAI | No | These tools are unreliable for citations and often generate false references. |
| Using AI to review or summarize someone else’s manuscript | No | Breach of confidentiality and ethical review practices. |
| Creating AI-generated cover or internal images/figures/tables | Yes (with disclosure) | Must meet quality and rights standards. Provide a credit line for AI-generated images. Images must not be derivative; a reverse image search is required. |
| Using AI tools to create captions and alt-text | Yes (with disclosure and human review) | Draft must be checked/edited for accuracy and accessibility. Tool and date accessed must be disclosed. |
| Using AI tools to analyze datasets | Yes (with disclosure) | Transparency about methods and tools used is required. |
| Uploading any part of manuscript to public AI platforms during peer review | No | Not allowed under any circumstance. |
| Using AI to create or summarize reviewer reports | No | All peer reviews and evaluations must be conducted by humans. |
Note: Examples are taken from the Chicago Manual of Style AI chatbot citation guide.