
Editorial


Artificial Intelligence and scientific journals: the dilemmas that permeate contemporary academic production


Patrícia Nascimento Silva1



Abstract: This editorial briefly presents some recent dilemmas arising from the use of Artificial Intelligence tools in academic publishing, especially in the context of editorial activities. These activities, which involve various actors, have been affected in different ways and, at the same time, have contributed to redefining scientific publishing itself. To what extent is the use of Artificial Intelligence acceptable? What are its limits? Is ethical and responsible use being neglected in academic production? Where is the boundary between the activities that should be performed by a researcher and those “outsourced” to the machine? What about scientific integrity, replicability and, above all, the conscience of those who claim to have produced something that does not exist, while simultaneously reducing their own ability to establish new connections and even to reason? In the case of students in training, this represents a manifold waste of learning opportunities. There are many questions and, so far, only a few answers. Finally, the guidelines of the journal Múltiplos Olhares em Ciência da Informação (Multiple Perspectives on Information Science) are being updated in order to prevent abusive practices related to the use of Artificial Intelligence.

Keywords: Artificial Intelligence; scientific integrity; scientific journals.



Inteligência Artificial e periódicos científicos: os dilemas que permeiam a produção acadêmica contemporânea



Resumo: este editorial apresenta brevemente alguns dilemas recentes decorrentes do uso de ferramentas de Inteligência Artificial na publicação acadêmica, especialmente no âmbito das atividades editoriais. Essas atividades, que envolvem diversos atores, têm sido impactadas em diferentes aspectos e, ao mesmo tempo, têm contribuído para redefinir a própria publicação científica. Até que ponto será aceitável a adoção da Inteligência Artificial? Quais são seus limites? O uso ético e responsável está sendo negligenciado na produção acadêmica? Qual é a fronteira, as atividades limítrofes, que devem ser realizadas por um pesquisador, e não “terceirizadas” para a máquina? Como ficam a integridade científica, a replicabilidade e, sobretudo, a consciência de quem afirma ter produzido algo inexistente, reduzindo simultaneamente a própria capacidade de estabelecer novas conexões e até mesmo de raciocinar? No caso de estudantes em formação, trata-se de um desperdício múltiplo de oportunidades de aprendizado. São muitas as perguntas, e até o momento há somente algumas respostas. Por fim, anuncia-se a atualização das diretrizes da Revista Múltiplos Olhares em Ciência da Informação, com o intuito de evitar práticas abusivas relacionadas ao uso de Inteligência Artificial.

Palavras-chave: Inteligência Artificial; integridade científica; periódicos científicos.



Inteligencia Artificial y revistas científicas: los dilemas que impregnan la producción académica contemporánea


Resumen: este editorial presenta brevemente algunos dilemas recientes derivados del uso de herramientas de Inteligencia Artificial en la publicación académica, especialmente en el ámbito de las actividades editoriales. Estas actividades, que involucran a diversos actores, han sido impactadas en distintos aspectos y, al mismo tiempo, han contribuido a redefinir la propia publicación científica. ¿Hasta qué punto será aceptable la adopción de la Inteligencia Artificial? ¿Cuáles son sus límites? ¿Se está descuidando el uso ético y responsable en la producción académica? ¿Cuál es la frontera, el conjunto de actividades limítrofes, que deben ser realizadas por un investigador y no “externalizadas” a la máquina? ¿Qué ocurre con la integridad científica, la replicabilidad y, sobre todo, con la consciencia de quien afirma haber producido algo inexistente, reduciendo simultáneamente su propia capacidad de establecer nuevas conexiones e incluso de razonar? En el caso de estudiantes en formación, se trata de un desperdicio múltiple de oportunidades de aprendizaje. Hay muchas preguntas, y hasta ahora solo hay algunas respuestas. Por último, se anuncia la actualización de las directrices de la Revista Múltiples Miradas en Ciencia de la Información, con el propósito de evitar prácticas abusivas relacionadas con el uso de Inteligencia Artificial.

Palabras clave: Inteligencia Artificial; integridad científica; revistas científicas.


How to cite this article: NASCIMENTO SILVA, Patrícia. Inteligência artificial e periódicos científicos: os dilemas que permeiam a produção acadêmica contemporânea. Múltiplos Olhares em Ciência da Informação, Belo Horizonte, v. 15, p. 1-5, 2025. DOI: 10.35699/2237-6658.2025.63351.


Editorial

Artificial Intelligence (AI) has had a broad impact on society, significantly influencing human activities from a behavioral and attitudinal point of view. With its popularization beginning in late 2022, driven by the spread of generative AI tools, the first discussions also emerged in the fields of education, research and scientific journals. The structural changes resulting from the introduction of AI into society are irreversible and should not be ignored; it is up to institutions to learn how to use these technologies ethically and responsibly. Providing guidance on their use and encouraging research, training initiatives and strategic partnerships tend to be the way forward, since everything is still very new.

Since all innovation involves both benefits and risks, technological innovation brings significant advances but also poses relevant challenges. In this sense, journals need to take a stand by offering clear guidelines and encouraging the responsible use of AI technologies in editorial processes, promoting alignment between innovation, ethics and a commitment to academic quality and social responsibility.

In editorial work, already marked by many challenges, especially in the Brazilian context, new dilemmas related to this responsibility in publication arise, since publication involves different actors, each of whom is affected in a different way.

From the perspective of evaluators, editors and reviewers, beyond issues of plagiarism, how can we identify texts generated by AI that rely on false data and information, non-existent citations or citations taken out of context? What about an article produced by AI, with fictitious graphs, data and images and research that never took place, being published? In other cases, what about text analyses produced by tools in which the author did not read the material or establish the connections, but rather “outsourced” them to the machine? Can a researcher produce a “systematic” review by using prompts to read the articles, or without even knowing how to build a search expression, a task that requires basic logical reasoning, and still find it advantageous? This is the era we are living in!

From the author's perspective, what can be said about a reviewer who uploads an author's work to an AI tool, a piece of research that took years to develop and constitutes an unprecedented study, so that the tool can analyze it and generate a review report? What right do they have to do this? Would it not be better to decline the invitation and let someone else carry out the assessment? Furthermore, how can an author trust a journal with their unpublished texts? How can they know the criteria and positions a journal adopts regarding the use of AI? And what should be done about the enormous volume of preprints, produced in haste, which end up competing for space with truly rigorous articles that take so much time to prepare?

Unlike research activities, new AI tools are continuously released and quickly become obsolete, at least in terms of versions, so excessive use, guided by immediate, mechanized and repetitive actions mediated by AI, tends not to be sustainable in the long term. Strategic tasks related to research and the generation of value, carried out with quality and rigor, remain heavily dependent on human action. In this context, it is up to humans to direct, supervise and validate the use of these technologies, not to submit to them, as is observed in some contemporary practices.

From a social perspective, there is still a persistent myth that AI is capable of providing answers to all questions, possessing total knowledge on any subject. In the academic environment, however, the community already recognizes more clearly the limitations of these technologies, especially with regard to the generation of decontextualized, inaccurate or inappropriate responses. Despite this advance in critical understanding, there are still uncertainties and misconceptions about the limits and possibilities of using AI in certain tasks, which highlights the need for clearer institutional guidelines and ongoing training on its responsible use.

In a study conducted at UFMG in the first half of 2025 (Nascimento Silva et al., 2025), relevant data were obtained on the use of AI by different profiles within the university community, including students, faculty and technical-administrative staff in education. One of the main concerns identified refers to the use of AI as a source of information to “obtain data and information relevant to activities” and to “search for sources and links related to a specific issue (information retrieval)”, in addition to its use for the correction and revision of texts and the production of summaries or documents.

It has been observed that many users resort to generative AI, trained largely with generalist and sometimes inaccurate data, to support activities that require specialized and, above all, scientifically validated content. This inappropriate use may be associated with limitations in data literacy, understood as the ability to access, understand, critically evaluate and consciously use data and information. The absence of this skill can favor the formation of passive subjects, who only consume automatically generated content, compromising their creative, reflective and critical thinking processes. This problem becomes even more sensitive in the context of training professionals and researchers, given that it is essential, in the initial stages, to know, understand and fully execute the training processes, without skipping any steps, in order to avoid harming teaching and learning.

With regard to text correction and revision activities, the situation is particularly sensitive, especially as it involves unpublished scientific works. Although some tools indicate the existence of privacy and data use policies, it is important to note that the information submitted to these systems often becomes part of databases controlled by large technology companies, or big techs. This context raises significant concerns related to copyright protection, the confidentiality of scientific production and the strengthening of technological surveillance mechanisms, with potential ethical, legal and institutional impacts.

Finally, we announce that the guidelines of the journal Múltiplos Olhares em Ciência da Informação (MOCI) have been updated to state its position on the use of AI and to provide guidance to authors, reviewers and editors. Any use of AI must be reported and detailed, including the tools employed. Any type of model, whether a Large Language Model (LLM) or a Small Language Model (SLM), must be declared and explained in the Methodology section, as is the case with other software and tools. It is suggested that the use of AI be limited to the production of charts, tables, images or other graphic elements of the article, or to data collection. AI-based analyses are permitted only in studies concerned with machine learning and model training, and AI should not be used for reading or interpreting texts, which remains the sole responsibility of the researcher. In addition, it is important to note that AI tools cannot assume responsibility for authorship, and it is up to the authors to ensure the integrity and originality of research articles.

As guidance for MOCI authors, especially graduate students being trained in scientific research, it is important to preserve the essential activities of reading, interpretation and writing, which are fundamental to academic formation and should not be entirely replaced by AI tools. There is a growing use of AI tools for the generation of scientific texts, automatic summaries and content categorization, practices that, in addition to their ethical implications and risks to scientific integrity, can compromise the effective training of researchers. Such uncritical use, instead of forming an autonomous and reflective researcher, will form a content replicator, unprepared to face the complex challenges of the scientific context, particularly in Brazil, a context marked by constraints and by the constant need to overcome obstacles and to innovate.

It is essential that researchers maintain a critical and autonomous stance toward the use of AI, being honest and transparent with journals and with themselves; after all, the cognitive debt will eventually come due! Although these technologies can be used as support tools in various activities, processes involving decision-making and core activities, especially those that directly affect intellectual and professional development, should not be delegated to machines. As with the so-called “winters” observed in different areas of knowledge, these technologies are likely to undergo a process of selection and maturation. In this scenario, AI tends to remain a relevant tool, but the solutions likely to endure are those that keep humans at the center of decision-making processes, in control of and supervising the machines.

Acknowledgments

To the National Council for Scientific and Technological Development (CNPq) for supporting this research (process 303721/2025-1).

References

NASCIMENTO SILVA, Patrícia et al. Inteligência artificial na UFMG: percepções da comunidade acadêmica – relatório da consulta à comunidade acadêmica da Universidade Federal de Minas Gerais no primeiro semestre de 2025. 2. ed. Belo Horizonte: Universidade Federal de Minas Gerais, 2025. 1 online resource. Available at: https://www.ufmg.br/ia/wp-content/uploads/2025/12/Inteligencia-Artificial-na-UFMG_-percepcoes-da-comunidade-academica-2-1-1.pdf. Accessed on: 10 Dec. 2025.




1 PhD in Knowledge Management and Organization (Information Science), Universidade Federal de Minas Gerais, patricians@ufmg.br.


DOI: https://doi.org/10.35699/2237-6658.2025.63351.

Revista Múltiplos Olhares em Ciência da Informação, Belo Horizonte, v. 15, e063351, 2025
