The intellectual structure of learning agility: A case study using a modified BERT model for topic modeling
DOI: https://doi.org/10.31637/epsir-2025-1416

Keywords: learning agility, intellectual structure, topic modeling, BERT, leadership, potential, identification, development

Abstract
Introduction: This study aims to deepen the understanding of learning agility, a relatively new construct in the field. Learning agility is essential for identifying and developing leadership talent in organizations, particularly in environments of constant change. Methodology: A thematic analysis was conducted on the titles and abstracts of 112 influential works on learning agility, identified in a prior study using bibliometric citation techniques. The analysis used abstract clustering and a modified version of the BERT model for topic modeling. Results: Nine intellectual topics, or patterns, related to learning agility were identified, along with the influential works within each topic. These topics were then compared with the intellectual structure derived from a co-citation analysis of the same set of works, and correspondences between the topics identified by the two methods were established. Discussion: Comparing the topics obtained through thematic analysis with those obtained through co-citation analysis provides a more comprehensive perspective. This integrated approach helps advance toward a unified conceptualization of learning agility, which is essential for standardizing its measurement and application. Conclusions: The study demonstrates that combining bibliometric techniques with Natural Language Processing (NLP) facilitates academic exploration of complex research areas, and enables the development of more objective and reliable tools for organizations to identify and develop leadership talent.
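For readers unfamiliar with the technique, the sketch below shows how abstract clustering and a BERT-based model are typically combined for topic modeling, in the style of the BERTopic pipeline: documents are embedded with a pre-trained Sentence-BERT encoder, reduced with UMAP, and grouped with density-based clustering, after which each cluster is described as a topic. The encoder name, parameter values, and placeholder corpus are illustrative assumptions; the article's exact modifications to the BERT pipeline are not reproduced here.

```python
# Illustrative BERTopic-style pipeline over paper titles and abstracts.
# All model choices and parameters below are assumptions for this sketch,
# not the configuration used in the article.
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN
from bertopic import BERTopic

# One string per work; in practice, the titles and abstracts of the
# 112 influential works would go here.
documents = [
    "Title and abstract of work 1 ...",
    "Title and abstract of work 2 ...",
]

# 1. Embed each document with a pre-trained Sentence-BERT encoder.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")

# 2. Reduce the embedding dimensionality so that density-based
#    clustering behaves well on a small corpus.
umap_model = UMAP(n_neighbors=15, n_components=5,
                  metric="cosine", random_state=42)

# 3. Cluster the reduced embeddings; each cluster becomes a candidate topic.
hdbscan_model = HDBSCAN(min_cluster_size=5, metric="euclidean",
                        prediction_data=True)

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
    calculate_probabilities=True,
)

topics, probabilities = topic_model.fit_transform(documents)

# Inspect the discovered topics and their most representative terms.
print(topic_model.get_topic_info())
```

In BERTopic, the per-cluster topic descriptions come from a class-based TF-IDF over the documents in each cluster, which is one common way to turn a plain clustering of BERT embeddings into a topic model; a "modified BERT model" such as the one the article describes would plug into this embed, reduce, cluster, describe chain.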
Copyright 2025 Gonzalo Grau García, María José Martín Rodrigo, Antonio Rua Vieites

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.