Generating dynamic and creative texts with Large Language Models (LLMs)

a systematic review

Authors

  • Patrick Fernandes Rezende Ribeiro Universidade Federal do Paraná, Programa de Pós-Graduação em Gestão da Informação, Departamento de Ciência e Gestão da Informação, Curitiba, PR, Brasil https://orcid.org/0000-0002-5973-1110
  • Denise Fukumi Tsunoda Universidade Federal do Paraná, Programa de Pós-Graduação em Gestão da Informação, Departamento de Ciência e Gestão da Informação, Curitiba, PR, Brasil https://orcid.org/0000-0002-5663-4534

DOI:

https://doi.org/10.1590/1983-3652.2025.60103

Keywords:

Large Language Models, Literary imaginary, Systematic review

Abstract

This article presents a systematic review of the literature on the application of Large Language Models (LLMs) in generating and identifying creative and dynamic texts based on symbolic, narrative, and literary imaginary references, considering publications from 2014 to 2024. The analysis encompasses two main stages: (1) automated bibliographic screening and curation supported by artificial-intelligence-based tools, and (2) critical analysis of the approaches, challenges, and opportunities present in the selected literature. The tools used made it possible to manage references, extract metadata, and identify thematic patterns. The results indicate a predominance of studies focused on the creative generation of texts and on the detection of figurative language, such as metaphors, with increasing use of hybrid models that combine symbolic and sub-symbolic structures. The conclusion is that, despite the potential of LLMs for computational creativity, challenges persist, such as low interpretability and poor adaptation to non-hegemonic cultural contexts.
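As a rough illustration of the kind of automated bibliographic screening described above (a minimal sketch, not the authors' actual pipeline), the snippet below filters exported bibliographic records by thematic keywords. The file name `records.csv`, the column names `title` and `abstract`, and the keyword list are all hypothetical assumptions for the example.

```python
# Minimal sketch of keyword-based bibliographic screening (illustrative only;
# not the review's actual pipeline). Assumes a hypothetical "records.csv"
# with "title" and "abstract" columns exported from a reference manager.
import csv

# Hypothetical thematic keywords reflecting the review's scope.
KEYWORDS = {"large language model", "llm", "creative", "metaphor", "figurative", "literary"}

def matches(record: dict) -> bool:
    """Return True if the title or abstract mentions any screening keyword."""
    text = f"{record.get('title', '')} {record.get('abstract', '')}".lower()
    return any(keyword in text for keyword in KEYWORDS)

with open("records.csv", newline="", encoding="utf-8") as infile:
    included = [row for row in csv.DictReader(infile) if matches(row)]

print(f"{len(included)} records retained for full-text analysis")
```

In practice, such a keyword pass would only pre-screen records; the abstract makes clear that the selected literature was then subjected to critical, human-led analysis.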



Published

2025-09-28

Data Availability Statement

Access link to the collected dataset: https://zenodo.org/records/15800742

How to Cite

RIBEIRO, Patrick Fernandes Rezende; TSUNODA, Denise Fukumi. Generating dynamic and creative texts with Large Language Models (LLMs): a systematic review. Texto Livre, Belo Horizonte-MG, v. 18, p. e60103, 2025. DOI: 10.1590/1983-3652.2025.60103. Available at: https://periodicos.ufmg.br/index.php/textolivre/article/view/60103. Accessed on: 8 Dec. 2025.