Research Article

Artificial Intelligence as a Political Actor on Social Media: The Grok AI Assistant in the Context of Actor-Network Theory

Year 2025, Issue 43, pp. 112–136, 25.12.2025

Abstract

Social media platforms have emerged as prominent arenas where political discourse is constructed and polarization is intensified. Within this ecosystem, the role of advanced artificial intelligence (AI) models, such as “Grok,” has become a subject of growing scholarly interest. Viewing AI assistants not merely as technical tools but as active actors that steer discourse and reproduce power relations, this research grounds its analysis in Bruno Latour’s Actor-Network Theory (ANT). The study problematizes the perception of a supposedly neutral authority generated by the “non-human” nature of AI. It discusses how this perception impacts political polarization by rendering algorithmic biases and technocratic interventions invisible. The primary objective of this research is to elucidate how Grok utilizes this perception of neutrality to guide the circulation of political content and shape discussions. Adopting a netnographic analysis methodology, the study obtained inductive data by observing user interactions with Grok on the Twitter (X) platform, as well as the emergent discourses within their natural digital environments. The research interrogates the user acceptance of the “objectivity” regarding the arguments presented by Grok and examines its impact on inter-group dialogue. Finally, drawing on Latour’s concept of “translation,” the study details how Grok transforms and presents political information according to its own operational logic and how this process shapes the trajectory of political conflicts.

References

  • Aldahoul, N., Ibrahim, H., Varvello, M., Kaufman, A., Rahwan, T., & Zaki, Y. (2025). Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts. arXiv Preprint arXiv:2505.04171.
  • Bareis, J., & Katzenbach, C. (2022). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007
  • Bryman, A. (2012). Social research methods (4th ed.). Oxford University Press.
  • Claggett, E. L., & Shirado, H. (2025). Making Pairs That Cooperate: AI Evaluation of Trust in Human Conversations. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1–30. https://doi.org/10.1145/3711027
  • Dehnert, M., & Mongeau, P. A. (2022). Persuasion in the age of artificial intelligence (AI): Theories and complications of AI-based persuasion. Human Communication Research, 48(3), 386–403.
  • Ener, B. (2015). Kamusal Alan ve Filtre Balonları. İstanbul Aydın Üniversitesi İletişim Çalışmaları Dergisi, 11(2), 99–120. https://doi.org/10.17932/IAU.ICD.2015.006/icd_v011i2001
  • Fisher, J., Appel, R. E., Park, C. Y., Potter, Y., Jiang, L., Sorensen, T., Feng, S., Tsvetkov, Y., Roberts, M. E., Pan, J., Song, D., & Choi, Y. (2025). Political Neutrality in AI Is Impossible- But Here Is How to Approximate It (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2503.05728
  • Glover, E. (2025). What Is Grok? What We Know About Musk’s AI Chatbot. Built In. https://builtin.com/articles/grok
  • Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034. https://doi.org/10.1093/pnasnexus/pgae034
  • Gosztonyi, G., Gyetván, D., & Kovács, A. (2025). Theory and Practice of Social Media’s Content Moderation by Artificial Intelligence in Light of European Union’s AI Act and Digital Services Act. European Journal of Law and Political Science, 4(1), 33–42.
  • Hay, A. (2025). What may be: Policy enactment in education, a new conceptual framework with actor-network theory. Journal of Education Policy, 40(2), 179–198. https://doi.org/10.1080/02680939.2024.2411989
  • Jasanoff, S. (2005). Designs on Nature: Science and Democracy in Europe and the United States. Princeton University Press.
  • Jin, L., Shen, Z., Alhur, A. A., & Naeem, S. B. (2025). Exploring the determinants and effects of artificial intelligence (AI) hallucination exposure on generative AI adoption in healthcare. Information Development, 02666669251340954. https://doi.org/10.1177/02666669251340954
  • Kamp, A. (2019). Actor–Network Theory. In A. Kamp, Oxford Research Encyclopedia of Education. Oxford University Press. https://doi.org/10.1093/acrefore/9780190264093.013.526
  • Karaca, M. (2024). Yapay Zekanın İç Denetime Etkileri Fırsatların Yakalanması ve Tehditlerin Yönetilmesi. Denetişim, 31, 86–101.
  • Kozinets, R. V. (2010). Netnography: Doing ethnographic research online. Sage Publications.
  • Latour, B. (1999). On Recalling ANT. The Sociological Review, 47(1_suppl), 15–25. https://doi.org/10.1111/j.1467-954X.1999.tb03480.x
  • Floridi, L. (2024). Hypersuasion – On AI’s Persuasive Power and How to Deal with It. Philosophy & Technology, 37(2), 64. https://doi.org/10.1007/s13347-024-00756-6
  • Nickerson, C. (2024). Latour’s Actor Network Theory. https://www.simplypsychology.org/actor-network-theory.html
  • Oritsegbemi, O. (2023). Human intelligence versus AI: implications for emotional aspects of human communication. Journal of Advanced Research in Social Sciences, 6(2), 76–85.
  • Otieno, P. (2024). The Impact of Social Media on Political Polarization. Journal of Communication, 4(1), 56–68. https://doi.org/10.47941/jcomm.1686
  • Ouyang, S., Zhang, Z., & Zhao, H. (2024). Fact-driven logical reasoning for machine reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18851–18859.
  • Ozar, B., & Koca, D. (2024). Bir Diyalog Ortamı Olarak Üretken Yapay Zekâ: Tasarımda Anlamsal Arayış Sürecinin Temsili. Art-E, 17(33).
  • Peng, T.-Q., Yang, K., Lee, S., Li, H., Chu, Y., Lin, Y., & Liu, H. (2024). Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models. arXiv E-Prints, arXiv-2412.
  • Pigera, A. (2024). The Impact Of AI On Media Consumption And Public Opinion Formation In The Digital Age.
  • Puri, N. (2025). The Impact of Social Media in Shaping Political Discourse. International Journal for Research Publication and Seminar, 16(1), 502–509. https://doi.org/10.36676/jrps.v16.i1.254
  • Qiao, Y., Tran, P.-N., Yoon, J. S., Nguyen, L. X., Huh, E.-N., Niyato, D., & Hong, C. S. (2025). Deepseek-inspired exploration of rl-based llms and synergy with wireless networks: A survey. arXiv Preprint arXiv:2503.09956.
  • Reuters. (2024). Elon Musk takes another swing at OpenAI, makes xAI’s Grok chatbot open-source | Reuters. https://www.reuters.com/technology/elon-musk-says-his-ai-startup-xai-will-open-source-grok-chatbot-2024-03-11/
  • Rivera, G., & Cox, A. M. (2016). An actor-network theory perspective to study the non-adoption of a collaborative technology intended to support online community participation. Academia Revista Latinoamericana de Administración, 29(3), 347–365. https://doi.org/10.1108/ARLA-02-2015-0039
  • Rozado, D. (2024). The political preferences of LLMs. PLOS ONE, 19(7), e0306621. https://doi.org/10.1371/journal.pone.0306621
  • Saleem, S., & Raza, A. (2023). The discourse on actor network theory. Journal of Policy Research (JPR), 9(2), 29–35.
  • Silva, G. (2019). Traduttore-traditore all over again?: The concept of translation in the actor-network theory. 401–406.
  • Smith, S., Rose, M., & Hamilton, E. (2010). The story of a university knowledge exchange actor‐network told through the sociology of translation: A case study. International Journal of Entrepreneurial Behavior & Research, 16(6), 502–516.
  • Sun, Y., Sheng, D., Zhou, Z., & Wu, Y. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11(1), 1278. https://doi.org/10.1057/s41599-024-03811-x
  • Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., Evans, G., Campbell-Gillingham, L., Collins, T., Parkes, D. C., Botvinick, M., & Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852. https://doi.org/10.1126/science.adq2852
  • Törnberg, A., & Törnberg, P. (2024). Intimate communities of hate: Why social media fuels far-right extremism. Routledge, Taylor & Francis Group.
  • Törnberg, P., Söderström, O., Barella, J., Greyling, S., & Oldfield, S. (2025). Artificial intelligence and the state: Seeing like an artificial neural network. Big Data & Society, 12(2), 20539517251338773. https://doi.org/10.1177/20539517251338773
  • Ulnicane, I. (2025). Governance fix? Power and politics in controversies about governing generative AI. Policy and Society, 44(1), 70–84. https://doi.org/10.1093/polsoc/puae022
  • Wangsa, K., Karim, S., Gide, E., & Elkhodr, M. (2024). A systematic review and comprehensive analysis of pioneering AI chatbot models from education to healthcare: ChatGPT, Bard, Llama, Ernie and Grok. Future Internet, 16(7), 219.
  • Weinberg, J. (2025, March 13). Philosophers Develop AI-Based Teaching Tool to Promote Constructive Disagreement (guest post)—Daily Nous. https://dailynous.com/2025/03/13/philosophers-develop-ai-based-teaching-tool-to-promote-constructive-disagreement-guest-post/
  • Wischnewski, M., & Krämer, N. (2024). Does Polarizing News Become Less Polarizing When Written by an AI?: Investigating the Perceived Credibility of News Attributed to a Machine in the Light of the Confirmation Bias. Journal of Media Psychology. https://doi.org/10.1027/1864-1105/a000441
  • Yang, K., Li, H., Chu, Y., Lin, Y., Peng, T.-Q., & Liu, H. (2024). Unpacking Political Bias in Large Language Models: Insights Across Topic Polarization. arXiv Preprint arXiv:2412.16746.
  • Yasseri, T. (2023). From Print to Pixels: The Changing Landscape of the Public Sphere in the Digital Age. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4543907
  • Yücel, A. (2025). Hybrid digital authoritarianism in Turkey: The ‘Censorship Law’ and AI-generated disinformation strategy. Turkish Studies, 26(1), 1–27. https://doi.org/10.1080/14683849.2024.2392816



Details

Primary Language: Turkish
Subjects: Communication Systems, Communication Technology and Digital Media Studies, Internet
Section: Research Article
Authors

Neslihan Bulur Demirel 0000-0001-6148-5556

Submission Date: August 1, 2025
Acceptance Date: December 12, 2025
Publication Date: December 25, 2025
Published Issue: Year 2025, Issue 43

How to Cite

APA Bulur Demirel, N. (2025). Sosyal Medyada Politik Aktör Olarak Yapay Zekâ: Aktör-Ağ Teorisi Bağlamında Grok Yapay Zekâ Asistanı. Galatasaray Üniversitesi İletişim Dergisi(43), 112-136. https://doi.org/10.16878/gsuilet.1756445

Creative Commons License