ChatGPT y discurso político. Revisión documental

Authors

DOI:

https://doi.org/10.31637/epsir-2024-841

Keywords:

ChatGPT, chatbot, Artificial Intelligence, Natural Language Processing, technology, politics, fundamental rights

Abstract

Introduction: ChatGPT is already being used by members of congress, politicians, journalists, students, scientists and public administrations around the world. In politics it has found numerous uses: generating essays and speeches; understanding emerging political trends; simulating debates and discussions on issues that matter to voters; designing public policies more closely aligned with majority public opinion, or that satisfy the aspirations of a larger number of people; identifying who exerts significant influence on the public agenda; and so on. Methodology: This study applies the PRISMA methodology and analyses data obtained from the Web of Science and Google Scholar. Discussion: Nevertheless, several problems inherent to how it works (errors, biases, political leaning, weak data protection, violations of rights such as those relating to data protection or intellectual property, and environmental impact) have constrained its use in several countries. Conclusions: This work aims to make these issues visible as they relate to politics and the construction of speeches for electoral purposes, centring the debate on ethical use, respect for fundamental rights and the impact on our democracies.


Author Biographies

Rosa María Ricoy Casas, Universidade de Vigo

Lecturer in Political Science at the Universidade de Vigo (Spain) and Tutor Professor with Venia Docendi (Law and Political Science) at UNED-Lugo. PhD in Law and History (Uvigo) and degree in Political Science (UNED). Coordinator-Director of the Degree in Public Management and Administration and Vice-Dean (2015-2018), and Secretary of the Guarantees Tribunal (Uvigo) (2011-2014). Vice-President of ICOMOS Spain, Secretary of the CREA doctoral programme (Uvigo), and Spanish principal investigator of the Creative Europe project "HYP you preserve". She has taught and lectured at numerous distinguished universities, conferences and public institutions (INAP, EGAP, FEGAMP, Sorbonne, King's College, Corvinus Budapest, Kielce Poland, Firenze, São Paulo, Mar del Plata, Republic of Ireland, IPSA, AECPA, APCP, CEISAL, GIGAPP, REPS, etc.). She has received several awards (Galician Bar Council, Fundación Alternativas, Congreso-USC, and the Spanish Political Science Association).

Raquel Fernández González, Universidade de Vigo

PhD in Economics (Extraordinary Doctoral Award 2016). Her main line of research concerns the sustainable management of natural resources, focusing on areas such as fisheries, aquaculture and energy. Her work has been published in journals such as Aquaculture, Energy, Reviews in Aquaculture, Papers in Regional Science, and Aquaculture Economics & Management. She has completed international research stays at universities in Europe and Asia and has experience in international projects.

References

Ahmed, T. N. y Mahmood, K. A. (2024). A Critical Discourse Analysis of ChatGPT's Role in Knowledge and Power Production. Arab World English Journal. https://dx.doi.org/10.24093/awej/ChatGPT.12

Algolia (2023). Index your world, put it in motion with our powerful search API. https://www.algolia.com/

Allport, G. W. y Postman, L. (1947). The Psychology of Rumor. Henry Holt.

Amoore, L., Campolo, A., Jacobsen, B. y Rella, L. (2024). A world model: On the political logics of generative AI. Political Geography, 113, 103134. https://doi.org/10.1016/j.polgeo.2024.103134

Aseeva, A. (2023). Liable and Sustainable by Design: A Toolbox for a Regulatory Compliant and Sustainable Tech. Sustainability, 16(1), 228. https://doi.org/10.3390/su16010228

Avetisyan, A. y Silaev, N. (2023). Russia Is a Serious Player in the AI Race. https://open.mgimo.ru/handle/123456789/4823

Aydın, Ö. y Karaarslan, E. (2023). Is ChatGPT leading generative AI? What is beyond expectations? Academic Platform Journal of Engineering and Smart Systems, 11(3), 118-134. https://doi.org/10.21541/apjess.1293702

Bang, Y., Lee, N., Ishii, E., Madotto, A. y Fung, P. (2021). Assessing political prudence of open-domain chatbots. https://arxiv.org/pdf/2106.06157.pdf

Barščevski, T. (2024). The Church in the Face of Ethical Challenges of Artificial Intelligence. Bogoslovska smotra, 94(1), 31-51. https://doi.org/10.53745/bs.94.1.5

Bass, D. (2023a). Buzzy ChatGPT chatbot is so error-prone that its maker just publicly promised to fix the tech’s ‘glaring and subtle biases.’ Fortune. bit.ly/3Y2mFjP

Bass, D. (2023b). ChatGPT maker OpenAI says it’s working to reduce bias, bad behavior. Bloomberg. bloom.bg/4eYyei7

Berry, D. M. y Stockman, J. (2024). Schumacher in the age of generative AI: Towards a new critique of technology. European Journal of Social Theory, 13684310241234028. https://doi.org/10.1177/13684310241234028

Blodgett, S. L., Barocas, S., Daumé III, H. y Wallach, H. (2020). Language (technology) is power: A critical survey of "bias". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5454-5476. https://arxiv.org/pdf/2005.14050

Breazu, P. y Katson, N. (2024). ChatGPT-4 as a journalist: Whose perspectives is it reproducing? Discourse & Society. https://doi.org/10.1177/09579265241251479

Bender, E. M., Gebru, T., McMillan-Major, A. y Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://doi.org/10.1145/3442188.3445922

Borji, A. (2023). A categorical archive of ChatGPT failures. https://doi.org/10.48550/arXiv.2302.03494

Brenna (2023, February 16). Losing the Plot: From the Dream of AI to Performative Equity. Tru Digital Detox. https://acortar.link/yuHibH

Cao, H. y Liu, S. (2024). The Effectiveness of ChatGPT in Translating Chunky Construction Texts in Chinese Political Discourse. Journal of Electrical Systems, 20(2), 1684-1698. https://doi.org/10.52783/jes.1616

Chalkidis, I. y Brandl, S. (2024). Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs. https://aclanthology.org/2024.naacl-short.40

Chowdhury, H. (2023). Sam Altman has one big problem to solve before ChatGPT can generate big cash—making it 'woke'. Business Insider.

Dixon, L., Li, J., Sorensen, J., Thain, N. y Vasserman, L. (2018). Measuring and mitigating unintended bias in text classification. Proceedings of the 2018 AAAI/ACM Confer. on AI, Ethics, and Society, 67-73. https://doi.org/10.1145/3278721.3278729

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Carter, L. y Wright, R. (2023). "So what if ChatGPT wrote it?". International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

DW. (2023). Resuelven en Colombia el primer caso jurídico con la ayuda de robot ChatGPT. bit.ly/4f2xAA1

Europa Press Internacional (2023, May 31). La primera ministra danesa usa ChatGPT para redactar parte de un discurso y alerta de sus posibles riesgos. bit.ly/3Wjfq5V

Farhat, F., Sohail, S. S. y Madsen, D. Ø. (2023). How trustworthy is ChatGPT? The case of bibliometric analyses. Cogent Engineering, 10(1), 2222988. https://doi.org/10.1080/23311916.2023.2222988

Feldstein, S. (2023). The consequences of generative AI for democracy, governance and war. In Survival: October–November 2023 (pp. 117-142). Routledge.

Feng, S., Park, C. Y., Liu, Y. y Tsvetkov, Y. (2023). From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models. https://doi.org/10.48550/arXiv.2305.08283

Fujimoto, S. y Takemoto, K. (2023). Revisiting the political biases of ChatGPT. Frontiers in Artificial Intelligence, 6, 1232003. https://doi.org/10.3389/frai.2023.1232003

Gemenis, K. (2024). Artificial intelligence and voting advice applications. Frontiers in Political Science, 6, 1286893. https://doi.org/10.3389/fpos.2024.1286893

Ghafouri, V., Agarwal, V., Zhang, Y., Sastry, N., Such, J. y Suarez-Tangil, G. (2023, October). AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics. ACM, 556-565. https://doi.org/10.1145/3583780.3614777

Ghosh, S., Baker, D., Jurgens, D. y Prabhakaran, V. (2021). Detecting cross-geographic biases in toxicity modeling on social media. https://arxiv.org/pdf/2104.06999

Gibney, E. (2024). What the EU's tough AI law means for research and ChatGPT. Nature. https://doi.org/10.1038/d41586-024-00497-8

Gregorcic, B. y Pendrill, A. M. (2023). ChatGPT and the frustrated Socrates. Physics Education, 58(3), 035021. https://doi.org/10.1088/1361-6552/acc299

Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., Yue, J. y Wu, Y. (2023). How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection. https://doi.org/10.48550/arXiv.2301.07597

Guo, D., Chen, H., Wu, R. y Wang, Y. (2023). AIGC challenges and opportunities related to public safety: a case study of ChatGPT. Journal of Safety Science and Resilience, 4(4), 329-339. https://doi.org/10.1016/j.jnlssr.2023.08.001

Halpern, D. (2015). Inside the nudge unit: How small changes can make a big difference. Random House.

Hartmann, J., Schwenzow, J. y Witte, M. (2023). The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. https://doi.org/10.48550/arXiv.2301.01768

Heimans, S., Biesta, G., Takayama, K., y Kettle, M. (2023). ChatGPT, subjectification, and the purposes and politics of teacher education and its scholarship. Asia-Pacific Journal of Teacher Education, 51(2), 105-112. https://doi.org/10.1080/1359866X.2023.2189368

Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y. y Denuyl, S. (2020). Social biases in NLP models as barriers for persons with disabilities. https://arxiv.org/pdf/2005.00813

Hutson, M. (2022). Could AI help you to write your next paper? Nature, 611, 192-193. https://doi.org/10.1038/d41586-022-03479-w

Jarquín-Ramírez, M. R., Alonso-Martínez, H. y Díez-Gutiérrez, E. (2024). Alcances y límites educativos de la IA: control e ideología en el uso de ChatGPT. DIDAC, 84, 84-102. https://doi.org/10.48102/didac.2024.84_JUL-DIC.217

Jenks, C. J. (2024). Communicating the cultural Other: Trust and bias in generative AI and large language models. Applied Linguistics Review. https://acortar.link/zNGinC

Jiang, H., Beeferman, D., Roy, B. y Roy, D. (2022). CommunityLM: Probing partisan worldviews from language models. https://arxiv.org/pdf/2209.07065

Johnson, A. (2023, February 3). Is ChatGPT Partisan? Poems About Trump And Biden Raise Questions About The AI Bot's Bias—Here's What Experts Think. Forbes. bit.ly/4cWYho7

Jungherr, A. (2023). Artificial intelligence and democracy: A conceptual framework. Social Media + Society, 9(3), 20563051231186353. https://doi.org/10.1177/20563051231186353

Kalluri, P. (2020, July 7). Don't ask if artificial intelligence is good or fair, ask how it shifts power. Nature. https://doi.org/10.1038/d41586-020-02003-2

Khanal, S., Zhang, H. y Taeihagh, A. (2024). Why and how is the power of Big Tech increasing in the policy process? Policy and Society, puae012. https://doi.org/10.1093/polsoc/puae012

Kim, J., Lee, J., Jang, K. M. y Lourentzou, I. (2024). Exploring the limitations in how ChatGPT introduces environmental justice issues in the United States. Telematics and Informatics, 86, 102085. https://doi.org/10.1016/j.tele.2023.102085

Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., Bielaniewicz, J., Gruza, M., Janz, A., Kanclerz, K., Kocoń, A., Koptyra, B., Mieleszczenko-Kowszewicz, W., Miłkowski, P., Oleksy, M., Piasecki, M., Radliński, L., Wojtasik, K. y Kazienko, P. (2023). ChatGPT: Jack of all trades, master of none. Information Fusion, 99, 101861. https://doi.org/10.1016/j.inffus.2023.101861

Li, Y., Zhang, G., Yang, B., Lin, C., Wang, S., Ragni, A. y Fu, J. (2022). Herb: Measuring hierarchical regional bias in pre-trained language models. https://arxiv.org/pdf/2211.02882

Li, P., Yang, J., Islam, M. A. y Ren, S. (2023). Making AI less "thirsty": Uncovering and addressing the secret water footprint of AI models. https://arxiv.org/pdf/2304.03271

Liu, R., Jia, C., Wei, J., Xu, G., Wang, L. y Vosoughi, S. (2021, May). Mitigating political bias in language models through reinforced calibration. AAAI Conference on Artificial Intelligence, 35(17), 14857-14866. https://doi.org/10.1609/aaai.v35i17.17744

Maltby, J., Rayes, T., Nage, A., Sharif, S., Omar, M. y Nichani, S. (2024). Synthesizing perspectives: Crafting an interdisciplinary view of social media's impact on young people's mental health. PLOS ONE, 19(7), e0307164. https://doi.org/10.1371/journal.pone.0307164

Martin, J. L. (2023). The Ethico-Political Universe of ChatGPT. Journal of Social Computing, 4(1), 1-11. https://doi.org/10.23919/JSC.2023.0003

McGee, R. W. (2023). Is ChatGPT biased against conservatives? An empirical study. SSRN. https://doi.org/10.2139/ssrn.4359405

McGee, R. W. (2024). What Were the Causes of the American Civil War? A Study in Artificial Intelligence. SSRN. http://dx.doi.org/10.2139/ssrn.4737710

Messner, W., Greene, T. y Matalone, J. (2023). From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models. https://doi.org/10.48550/arXiv.2312.17256

Monrad, M. (2024). Feeling rules in artificial intelligence: norms for anger management. Emotions and Society, 1(aop), 1-19. https://doi.org/10.1332/26316897Y2024D000000016

Motoki, F., Pinho Neto, V. y Rodrigues, V. (2024). More human than human: measuring ChatGPT political bias. Public Choice, 198(1), 3-23. https://acortar.link/r7zBcb

Muzanenhamo, P. y Power, S. B. (2024). ChatGPT and accounting in African contexts: Amplifying epistemic injustice. Critical Perspectives on Accounting, 99, 102735. https://doi.org/10.1016/j.cpa.2024.102735

Naing, S. Z. S. y Udomwong, P. (2024). Public Opinions on ChatGPT: An Analysis of Reddit Discussions by Using Sentiment Analysis, Topic Modeling, and SWOT Analysis. Data Intelligence, 1-50. https://doi.org/10.1162/dint_a_00250

Okolo, C. T. (2023, November). The Promise and Perils of Generative AI: Case Studies in an African Context. Proceedings of the 4th African Human Computer Interaction Conference, 266-270. https://doi.org/10.1145/3628096.3629066

OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., ... y McGrew, B. (2023). GPT-4 technical report. https://doi.org/10.48550/arXiv.2303.08774

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P. y Moher, D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71

Pariser, E. (2011). Cuidado con las burbujas de filtro. TED. bit.ly/4f1YVST

Pariser, E. (2017). El filtro burbuja. Taurus.

Patel, S. B., Lam, K. y Liebrenz, M. (2023). ChatGPT: friend or foe? The Lancet Digital Health, 5(3). https://doi.org/10.1016/S2589-7500(23)00023-7

Pagnarasmey, P., Xingjun, M., Conway, M., Quingyu, C., Bailey, J., Henry, P., Putrasmey, K., Watey, D. y Yu-Gang, J. (2024). Whose Side Are You On? Investigating the Political Stance of Large Language Models. arXiv. https://doi.org/10.48550/arXiv.2403.13840

Pollard, E. (2024). Back to the future: everything you wish you'd asked Derrida about ChatGPT when you had the chance! Cultural Studies Critical Methodologies. https://doi.org/10.1177/15327086241232722

Preedipadma, N. (2020, February 13). New MIT neural network architecture may reduce carbon footprint by AI. Analytics Insight. https://tinyurl.com/5n8463cw

PRISMA. (s. f.). PRISMA 2020 Checklist. https://prisma.shinyapps.io/checklist/

Punset, E. (2008). Por qué somos como somos. Aguilar.

Retzlaff, N. (2024). Political Biases of ChatGPT in Different Languages. Preprints.org. https://doi.org/10.20944/preprints202406.1224.v1

Ricoy-Casas, R. M. (2021a). "Sesgos y algoritmos: inteligencia de género" y "Algunos dilemas éticos en la utilización de la inteligencia artificial y los algoritmos". En P. R. Bonorino-Ramírez, R. Fernández y P. Valcárcel (Dirs.), Nuevas normatividades. Thomson Reuters Aranzadi.

Ricoy-Casas, R. M. (2021b). "Inteligencia artificial y administración de justicia: una política pública sub iudice". En P. R. Bonorino-Ramírez, R. Fernández, P. Valcárcel e I. S. García (Dirs.). Justicia, administración y derecho. Thomson Reuters Aranzadi.

Ricoy-Casas, R. M. (2022a). Use of Technological Means and Personal Data in Electoral Activities: Persuasive Voters. En Á. Rocha, D. Barredo, P. C. López-López e I. Puentes-Rivera (Eds.), Communication and Smart Technologies (pp. 227-237). Springer. https://doi.org/10.1007/978-981-16-5792-4_23

Ricoy Casas, R. M. (2022b). Hologramas y Avatares para la persuasión política. International Visual Culture Review. https://doi.org/10.37467/revvisual.v9.3547

Roe, J. y Perkins, M. (2023). 'What they're not telling you about ChatGPT': exploring the discourse of AI in UK news media headlines. Humanities and Social Sciences Communications, 10(1), 1-9. https://doi.org/10.1057/s41599-023-02282-w

Roy, N. y Maity, M. (2023). An Infinite Deal of Nothing: critical ruminations on ChatGPT and the politics of language. Decision, 50(1), 11-17. https://acortar.link/YHBlqa

Rozado, D. (2023a). The Political Biases of ChatGPT. Social Sciences, 12(3), 148. https://doi.org/10.3390/socsci12030148

Rozado, D. (2023b). Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems. Manhattan Institute. bit.ly/3Wj2w82

Rutinowski, J., Franke, S., Endendyk, J., Dormuth, I., Roidl, M. y Pauly, M. (2024). The Self‐Perception and Political Biases of ChatGPT. Human Behavior and Emerging Technologies, 1, 7115633. https://doi.org/10.1155/2024/7115633

Salminen, J., Veronesi, F., Almerekhi, H., Jung, S. G. y Jansen, B. J. (2018). Online hate interpretation varies by country, but more by individual. Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), 88-94. https://doi.org/10.1109/SNAMS.2018.8554954

Salminen, J., Almerekhi, H., Kamel, A. M., Jung, S. G. y Jansen, B. J. (2019). Online hate ratings vary by extremes: A statistical analysis. Proceedings of the 2019 Conference on Human Information Interaction and Retrieval, 213-217. https://doi.org/10.1145/3295750.3298954

Sallam, M., Salim, N. A., Ala'a, B., Barakat, M., Fayyad, D., Hallit, S., Harapan, H., Hallit, R. y Mahafzah, A. (2023). ChatGPT output regarding compulsory vaccination and COVID-19 vaccine conspiracy: A Descriptive Study at the Outset of a Paradigm Shift in Online Search for Information. Cureus, 15(2). https://doi.org/10.7759/cureus.35029

Sambasivan, N., Arnesen, E., Hutchinson, B., Doshi, T. y Prabhakaran, V. (2021). Re-imagining algorithmic fairness in India and beyond. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 315-328. https://doi.org/10.1145/3442188.3445896

Sanchis, A. (2024). Cada vez más curas españoles utilizan ChatGPT para sus misas: “Lo usé hasta para un funeral”. El Confidencial. bit.ly/46aN1SY

Sellman, M. (2023). ChatGPT will always have bias, says OpenAI boss. The Times. https://acortar.link/EBqJfR

Seyyed-Kalantari, L., Zhang, H., McDermott, M. B., Chen, I. Y. y Ghassemi, M. (2021). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature medicine, 27(12), 2176-2182. https://doi.org/10.1038/s41591-021-01595-0

Slater, G. B. (2024). Dread and the automation of education: From algorithmic anxiety to a new sensibility. Review of Education, Pedagogy, and Cultural Studies, 46(1), 170-182. https://doi.org/10.1080/10714413.2023.2299521

Špecián, P. (2024). Machine Advisors: Integrating Large Language Models into Democratic Assemblies. SSRN. http://dx.doi.org/10.2139/ssrn.4682958

Stahl, B. C. y Eke, D. (2024). The ethics of ChatGPT–Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700

Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays-should academics worry? Nature. https://doi.org/10.1038/d41586-022-04397-7

Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature. https://www.nature.com/articles/d41586-023-00107-z

Strubell, E., Ganesh, A. y McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.48550/arXiv.1906.02243

Suguri-Motoki, F. Y., Pinho Neto, V. y Rodrigues, V. (2023). More human than human: measuring ChatGPT political bias. Public Choice, 198(1), 3-23. https://doi.org/10.2139/ssrn.4372349

Sullivan-Paul, M. (2023). How would ChatGPT vote in a federal election? A study exploring algorithmic political bias in artificial intelligence. University of Tokyo.

Sunstein, C. (2003). República.com: Internet, democracia y libertad (Estado y Sociedad). Paidós.

Tepper, J. (2020). El mito del capitalismo. Los monopolios y la muerte de la competencia. Roca Ed.

Thaler, R. H. y Sunstein, C. R. (2009). Nudge: Improving decisions about health, wealth, and happiness. Penguin.

The Conversation (2023). ChatGPT could be a game-changer for marketers, but it won't replace humans any time soon. bit.ly/4cY2at0

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://www.science.org/doi/10.1126/science.adg7879

Trapero, L. (2019). Los trabajadores de las tecnológicas, en huelga por el clima. Euronews. https://tinyurl.com/5fb5pbc7

Ulnicane, I. (2024). Governance fix? Power and politics in controversies about governing generative AI. Policy and Society. https://doi.org/10.1093/polsoc/puae022

UNESCO (2019). I'd blush if I could: closing gender divides in digital skills through education. https://doi.org/10.54675/RAPC9356

Urman, A. y Makhortykh, M. (2023). The Silence of the LLMs: Cross-Lingual Analysis of Political Bias and False Information Prevalence in ChatGPT, Google Bard, and Bing Chat. OSF Preprints. https://doi.org/10.31219/osf.io/q9v8f

Van-Dis, E. A., Bollen, J., Zuidema, W., Van-Rooij, R. y Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614(7947), 224-226. https://acortar.link/U3oqLL

Van-Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213-218. https://doi.org/10.1007/s43681-021-00043-6

Vincent, J. (2023). AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit. The Verge. bit.ly/3NVbaG1

Vogels, E. A. (2023). A majority of Americans have heard of ChatGPT, but few have tried it themselves. Pew Research Center. bit.ly/3zGwN7S

Wang, S. H. (2023). OpenAI—explain why some countries are excluded from ChatGPT. Nature, 615(7950), 34. https://doi.org/10.1038/d41586-023-00553-9

Winkel, M. (2024). Controlling the uncontrollable: the public discourse on artificial intelligence between the positions of social and technological determinism. AI & SOCIETY, 1-13. https://doi.org/10.1007/s00146-024-01979-z

Wolf, Z. B. (2023). AI can be racist, sexist and creepy. What should we do about it? CNN Politics.

Zack, T., Lehman, E., Suzgun, M., Rodriguez, J. A., Celi, L. A., Gichoya, J., Jurafsky, D., Szolovits, P., Bates, D. W., Abdulnour, R. E., Butte, A. J. y Alsentzer, E. (2023). Coding Inequity: Assessing GPT-4's Potential for Perpetuating Racial and Gender Biases in Healthcare. medRxiv. https://doi.org/10.1101/2023.07.13.23292577

Zhao, J., Wang, T., Yatskar, M., Ordonez, V. y Chang, K. W. (2018). Gender bias in coreference resolution: Evaluation and debiasing methods. Human Language Technologies, 2, 15-20. https://arxiv.org/pdf/1804.06876

Zhou, D. y Zhang, Y. (2023). Red AI? Inconsistent Responses from GPT-3.5 Models on Political Issues in the US and China. https://doi.org/10.48550/arXiv.2312.09917

Zou, W. y Liu, Z. (2024). Unraveling Public Conspiracy Theories Toward ChatGPT in China: A Critical Discourse Analysis of Weibo Posts. Journal of Broadcasting & Electronic Media, 68(1), 1-20. https://doi.org/10.1080/08838151.2023.2275603


Published

2024-09-25

How to Cite

Ricoy Casas, R. M., & Fernández González, R. (2024). ChatGPT y discurso político. Revisión documental. European Public & Social Innovation Review, 9, 1–24. https://doi.org/10.31637/epsir-2024-841

Issue

Section

Research articles