Research Article

Disinformation and Artificial Intelligence: Looking at Ways to Combat Disinformation through Artificial Intelligence Experts’ Eyes

Year 2023, Issue: 11 - Theme: Disinformation, 83 - 106, 16.12.2023
https://doi.org/10.54722/iletisimvediplomasi.1375478

Abstract

The process of combating online disinformation is closely tied to the use of artificial intelligence techniques: this technology plays an important role both in producing and disseminating disinformation and in detecting and controlling problematic content. These two aspects of the relationship between disinformation and artificial intelligence make it necessary to understand both the decisive role of artificial intelligence technologies in the production and distribution of problematic content and how artificial intelligence systems can be used most effectively to detect and reduce online disinformation. Building on this focus, the study aims to evaluate the capacity of artificial intelligence systems to combat disinformation as perceived by artificial intelligence experts. To this end, descriptive field research was conducted using semi-structured in-depth interviews with artificial intelligence experts. The findings show that contemporary artificial intelligence systems have the capacity both to amplify and to reduce disinformation. They also indicate that disinformation detection and filtering mechanisms and verification platforms should be more widely deployed, that public authorities and digital platforms should collaborate in developing policies for this purpose, and that accountability to users should be given top priority.

References

  • Akers, J., Bansal, G., Cadamuro, G., Chen, C., Chen, Q., Lin, L., Mulcaire, P., Nandakumar, R., Rockett, M., Simko, L., Toman, J., Wu, T., Zeng, E., Zorn, B. & Roesner, F. (2018). Technology-Enabled Disinformation: Summary, Lessons, and Recommendations. Technical Report UW-CSE, 21.
  • Akhtar, P., Ghouri, A.M., Khan, H.R., ul Haq, M.A., Awan, U., Zahoor, N., Khan, Z. & Ashraf, A. (2022). Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Annals of Operations Research, 327, 633-657.
  • Belhadi, A., Mani, V., Kamble, S.S., Khan, S.A.R., & Verma, S. (2021). Artificial intelligence-driven innovation for enhancing supply chain resilience and performance under the effect of supply chain dynamism: an empirical investigation. Annals of Operations Research. https://doi.org/10.1007/s10479-021-03956-x
  • Bergamini, D. (2020). Need for Democratic Governance of Artificial Intelligence. Committee on Political Affairs and Democracy, Council of Europe. Retrieved from https://pace.coe.int/en/files/28742 (accessed 13 September 2023).
  • Bontridder, N. & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3, e32.
  • Bouziane, M., Perrin, H., Cluzeau, A., Mardas, J. & Sadeq, A. (2020). Team Buster.ai at CheckThat! 2020: Insights and Recommendations to Improve Fact-Checking. CLEF 2020.
  • Chesney, B. & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753.
  • Funke, D. (2019). These fact-checkers won $2 million to implement AI in their newsrooms. Poynter. Retrieved from https://www.poynter.org/fact-checking/2019/these-fact-checkers-won-2-million-to-implement-ai-in-their-newsrooms/
  • Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 1-9.
  • Graves, L. (2018). Understanding the Promise and Limits of Automated Fact-checking. The Reuters Institute for the Study of Journalism at the University of Oxford, February 2018.
  • Greengard, S. (2019). Will deepfakes do deep damage? Communications of the ACM, 63(1), 17-19.
  • Gupta, A., Li, H., Farnoush, A. & Jiang, K. (2022). Understanding patterns of COVID infodemic: A systematic and pragmatic approach to curb fake news. Journal of Business Research, 140, 670-683.
  • Gül-Ünlü, D. & Kesgin, Y. (2021). Tavşan deliği ve siyasal radikalleşme: YouTube kullanıcı önerileri üzerinden bir değerlendirme. In A. Aydemir (Ed.), Gelenekselden Dijitale Siyasal İletişim Çalışmaları (pp. 67-78). Konya: Eğitim Yayınevi.
  • Jackson, J. (2016). Fake news clampdown: Google gives €150,000 to fact-checking projects. The Guardian. Retrieved from https://www.theguardian.com/media/2016/nov/17/fake-news-google-funding-fact-checking-us-election
  • Karakoç, E., Kuş, O. & Gül Ünlü, D. (2023). Algoritma farkındalığı ve hayali olanaklar: İnsan Hakları ihlallerinin dijital mekanizması üzerine düşünmek. In M.A. Göngen & Y. Kesgin (Eds.), Medya ve İnsan Hakları (pp. 153-170). İstanbul: Kriter Yayınları.
  • Karakoç, E. & Zeybek, B. (2022). Görmek inanmaya yeter mi? Görsel dezenformasyonun ayırt edici biçimi olarak siyasi deepfake içerikler. Öneri Dergisi, 17(57), 50-72.
  • Kertysova, K. (2018). Artificial intelligence and disinformation: How AI changes the way disinformation is produced, disseminated, and can be countered. Security and Human Rights, 29(1-4), 55-81.
  • Küçükşabanoğlu, Z. & Soysal, B. (2023). Yapay zekânın siyaseti. In U. Demirezen (Ed.), Geleceği Şekillendiren Teknoloji Yapay Zekâ (pp. 1-33). İstanbul: Nobel Yayıncılık.
  • Lekach, S. (2018). The Cleaners shows the terrors human content moderators face at work. Mashable, 13 November 2018. Retrieved from https://mashable.com/article/the-cleaners-content-moderators-facebook-twitter-google (accessed 13 September 2023).
  • Lamo, M. & Calo, R. (2018). Regulating Bot Speech. UCLA Law Review, 66, 988-1028.
  • Marechal, N. & Biddle, E.R. (2020). It’s not just the content, it’s the business model: Democracy’s online speech challenge. A Report from Ranking Digital Rights, New America, 17 March 2020.
  • Masood, M., Nawaz, M., Malik, K.M., Javed, A., Irtaza, A. & Malik, H. (2022). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. Applied Intelligence, 54, 3974-4026.
  • Marsden, C. & Meyer, T. (2019). Regulating Disinformation with Artificial Intelligence: Effect of Disinformation Initiatives on Freedom of Expression and Media Pluralism. European Parliamentary Research Service (EPRS), Scientific Foresight Unit (STOA).
  • Montoro-Montarroso, A., Cantón-Correa, J., Rosso, P., Chulvi, B., Panizo-Lledot, A., Huertas-Tato, J., Calvo-Figueras, B., Rementeria, M.J. & Gomez-Romero, J. (2023). Fighting disinformation with artificial intelligence: Fundamentals, advances and challenges. Profesional de la Información, 32(3), e320322.
  • Newton, C. (2019). The trauma floor: The secret lives of Facebook moderators in America. The Verge, 25 February 2019. Retrieved from https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona (accessed 13 September 2023).
  • Rosenbach, E. & Mansted, K. (2018). Can democracy survive in the information age? Belfer Center for Science and International Affairs, 30. Retrieved from https://www.belfercenter.org/publication/can-democracy-survive-information-age (accessed 15 September 2023).
  • Shao, C., Ciampaglia, G.L., Varol, O., Flammini, A. & Menczer, F. (2017). The spread of misinformation by social bots. ArXiv Preprint ArXiv:1707.07592.
  • Shrestha, Y.R., Ben-Menahem, S.M., & von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66-83.
  • Stiff, H. & Johansson, F. (2022). Detecting computer-generated disinformation. International Journal of Data Science and Analytics, 13, 363-383.
  • Vincent, J. (2019). AI won't relieve the misery of Facebook's human moderators. The Verge, 27 February 2019. Retrieved from https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms (accessed 15 September 2023).
  • Walorska, A.M. (2020). Deepfakes and Disinformation. Friedrich Naumann Foundation for Freedom. Retrieved from https://www.freiheit.org/de/consent?dest=https%3A%2Fshop.freiheit.org%2F%23!%2FPublikation%2F897 (accessed 15 September 2023).
  • Wang, P., Angarita, R. & Renna, I. (2018). Is this the era of misinformation yet: Combining social bots and fake news to deceive the masses. In Companion Proceedings of The Web Conference 2018 (pp. 1557-1561).
  • West, D.M. (2017). How to combat fake news and disinformation. The Brookings Institution, 18 December 2017. Retrieved from https://www.brookings.edu/articles/how-to-combat-fake-news-and-disinformation (accessed 14 September 2023).

Dezenformasyon ve Yapay Zekâ: Dezenformasyonla Mücadele Yollarına Yapay Zekâ Uzmanlarının Gözünden Bakmak

Year 2023, Issue: 11 - Theme: Disinformation, 83 - 106, 16.12.2023
https://doi.org/10.54722/iletisimvediplomasi.1375478

Abstract

Developments in communication technologies and the rise of user-generated content have made it possible for any kind of content to circulate easily without passing through any control mechanism. While this gives today's digital platform users rapid access to an unlimited amount of content, it has also exposed individuals to intense disinformation. The process of combating online disinformation is closely tied to the use of artificial intelligence techniques; this technology plays an important role both in producing and disseminating disinformation and in detecting and controlling problematic content. These two aspects of the relationship between disinformation and artificial intelligence make it necessary to understand both the decisive role of artificial intelligence technologies in the production and distribution of problematic content and how artificial intelligence systems can be used most effectively to detect and reduce online disinformation. Building on this focus, the study aims to evaluate the potential of artificial intelligence systems in combating disinformation from the perspective of artificial intelligence experts. To this end, descriptive field research was conducted using semi-structured interviews with artificial intelligence experts who are members and stakeholders of the Artificial Intelligence Policies Association (AIPA). The study found that today's artificial intelligence systems can be used actively to reduce disinformation as much as to amplify it; that disinformation detection and filtering mechanisms and verification platforms should be more widely deployed to this end; and that, while the policies developed for this purpose should be shaped through cooperation between public authorities and digital platforms, accountability to users should also be prioritized.


Details

Primary Language: Turkish
Subjects: Communication Studies, Communication Technology and Digital Media Studies
Journal Section: Research Articles
Authors

Derya Gül Ünlü 0000-0003-3936-7988

Zafer Küçükşabanoğlu 0000-0003-2686-4109

Early Pub Date: December 16, 2023
Publication Date: December 16, 2023
Submission Date: October 13, 2023
Acceptance Date: November 14, 2023
Published in Issue: Year 2023, Issue: 11 - Theme: Disinformation

Cite

APA Gül Ünlü, D., & Küçükşabanoğlu, Z. (2023). Dezenformasyon ve Yapay Zekâ: Dezenformasyonla Mücadele Yollarına Yapay Zekâ Uzmanlarının Gözünden Bakmak. İletişim ve Diplomasi, (11), 83-106. https://doi.org/10.54722/iletisimvediplomasi.1375478