This special issue explores the AI turn in contemporary historiography and its implications for historical knowledge, method, and ethics. Artificial intelligence reshapes core principles of the historian’s craft, from authorship and interpretation to verification and critical engagement with evidence. We invite contributions that examine how large language models and algorithmic infrastructures transform epistemic practices, reproduce or disrupt bias, challenge human interpretive agency, and alter conditions of education and scholarly labor. The issue seeks both critical reflections and methodological innovations addressing this rapidly evolving intellectual landscape.
History – Theory – Criticism Journal 2/2026: The AI Turn in Contemporary Historiography: Challenges, Applications, Reflections
Call for articles
Special Issue 2/2026
Deadline for submissions: 30 June 2026
Scope and aims
Artificial intelligence has entered the field of historiography not as a neutral instrument but as a phenomenon that unsettles its very foundations. The capacity of large language models to generate and reorganize knowledge on a scale that surpasses human comprehension compels historians to reconsider the principles that have long defined their craft: authorship, interpretation, verification, and the human mediation of evidence. The accelerating automation of textual production introduces a cognitive threshold that challenges the historian’s ability to control, evaluate, and verify the narratives emerging from algorithmic systems.
This transformation reveals both the potential and the vulnerability of historical knowledge. Artificial intelligence enables new ways of analyzing extensive textual corpora, translating and connecting sources, and recognizing patterns across linguistic and temporal boundaries. At the same time, it alters the conditions under which meaning is produced and received, eroding the distinction between human interpretation and computational synthesis. The opacity of large models, whose training data and hierarchies of value remain concealed, complicates one of the historian’s central tasks: the capacity to identify, understand, and critique bias within sources.
The AI turn in historiography, therefore, marks more than a technical or methodological innovation. It signifies a shift in the scale and ecology of knowledge, shaped by the asymmetries of global computational power and by growing dependence on corporate infrastructures. This situation calls for reflection on how historical inquiry can preserve its ethical and interpretive integrity while adapting to an environment governed by automation, data abundance, and limited transparency.
This special issue of History – Theory – Criticism invites contributions that address these challenges. We seek studies and reflections that examine how artificial intelligence transforms the epistemology, methodology, and ethics of historical work, how historians can critically engage with opaque algorithmic systems, and how humanistic scholarship re-articulates alternative, locally grounded, and sustainable approaches to technological innovation.
Themes and questions
1. Epistemology, authorship, and interpretation
a) How does the massive production of synthetic text alter the relationship between information and interpretation? Can historians still claim control over the evidentiary process when relying on systems whose reasoning and corpus remain opaque?
b) To what extent can AI be said to “understand” the past, and how does its pattern-based synthesis differ from human interpretation?
c) What frameworks of transparency, citation, and disclosure are needed to ensure accountability in AI-assisted research and writing?
d) How might the concept of authorship evolve when historical texts are increasingly co-produced by human and machine intelligence?
2. Methodology, infrastructure, and the black box
a) General-purpose models reproduce values, hierarchies, and linguistic biases embedded in their training data, often without the user’s awareness. This deepens the “black box” problem and undermines one of the foundations of historical scholarship—the capacity to identify and critique bias in sources.
b) How can historians engage critically with these systems without surrendering epistemic agency?
c) What role might smaller, domain-specific, and ethically curated models play in building more transparent and interpretable infrastructures for historical research?
d) How can collaboration between historians, computer scientists, and archivists foster local, open, and sustainable alternatives to corporate AI ecosystems?
3. Cognitive, political, and environmental boundaries
a) The automation of interpretation introduces a cognitive threshold: the scale of machine-generated material now exceeds what human scholars can meaningfully read or evaluate. This raises the question of how knowledge is curated, filtered, and trusted in a post-verificatory environment.
b) At the same time, the concentration of computational resources in a few global centers reinforces inequalities between academic communities and widens the gap between those who design AI and those who merely consume it.
c) Finally, the environmental and energy costs of large-scale AI infrastructures compel the humanities to consider the ecological ethics of technological progress. What forms of scholarship might align critical inquiry with sustainability and local autonomy?
4. Education, practice, and the future of humanistic knowledge
a) How can historical education cultivate critical AI literacy rather than simple tool proficiency?
b) What pedagogical strategies can help students and researchers maintain interpretive depth and ethical reflection in an environment saturated by generative systems?
c) Should AI be understood as an auxiliary method, a paradigm shift, or a mirror revealing the epistemological foundations of humanistic knowledge itself?
d) How can universities and professional organizations shape guidelines that safeguard integrity and creativity while embracing innovation?
Submission guidelines
Submissions and inquiries should be sent to the Editor-in-Chief via email.
Language: English
Text length: articles 36,000–72,000 characters including notes; discussion papers 18,000–36,000 characters; reviews 9,000–18,000 characters. All articles should include an abstract (150–200 words) and 4–5 keywords.
Format: Microsoft Word (.docx) or LibreOffice (.odt), following the DTK Manual of Style and Ethical Code
Peer review: Double-blind by two independent reviewers
Deadline: 30 June 2026
Publication: Winter 2026, Diamond Open Access
Guest Editors: Jaromír Mrňka, Jiří Hlaváček
Petr Wohlmuth, Ph.D. (Editor-in-Chief), Petr.Wohlmuth@fhs.cuni.cz