Year 2019, Issue 31, Pages 91-110, Published 2019-12-28

Big Data, Artificial Intelligence, and Machine Learning Algorithms: A Descriptive Analysis of the Digital Threats in the Post-truth Era

Tirse FİLİBELİ



Did the utilization of big data change everything about how information circulates? Our digital data are stored in vast repositories that we call ‘big data’. Everything we do in virtual life leaves a digital footprint, and thanks to machine learning algorithms, our newsfeeds mostly show us content similar to the subjects we have searched for before. Fundamentally, big data are used to steer people toward buying new products, traveling to new places, reading new books, and so on. However, as Facebook’s Cambridge Analytica scandal surrounding the 2016 elections revealed, these technologies sometimes pose a threat to democracy. The underlying reason is that, today, political campaign managers use big data and AI algorithms to manipulate and/or persuade people by circulating promoted ‘false’ content. The aim of this study is to define current digital threats to democracy through a descriptive analysis of recent events such as the failure of Microsoft’s AI chatbot Tay and YouTube’s influence on the presidential election in Brazil. Additionally, to better describe and discuss these digital threats, we conducted semi-structured interviews with four experts working on AI algorithms, big data, and social engineering. Our analyses and the findings gathered from the semi-structured interviews showed that the post-truth era we live in harbors several digital threats, such as social engineering, violation of data privacy and misuse of big data, personalized search engine algorithms that create filter bubbles, and the ease of producing and circulating fake content.


  • Amer, K. & Noujaim, J. (2019). The Great Hack. [Documentary Movie]. United States: Netflix.
  • Bartlett, J. (2018). The People vs. Tech. How the internet is killing the democracy (and how we save it). London: Penguin.
  • Bauman, Z. & Lyon, D. (2013). Liquid Surveillance. Cambridge, UK: Polity Press. ISBN: 978-0-7456-6402-6
  • Binark, M. (2017). Algoritmaların Yarattığı Yankı Odaları ve Siyasal Katılım Olanağı veya Olanaksızlığı [Echo chambers created by algorithms and the possibility or impossibility of political participation]. Varlık Dergisi, 1317, 19-23.
  • Berghel, H. (2018). Malice Domestic: The Cambridge Analytica Dystopia. Computer, 51(5), 84-89. doi: 10.1109/MC.2018.2381135
  • Bozdag, E. & van den Hoven, J. (2015). Breaking the filter bubble: democracy and design. Ethics and Information Technology, 17(4), 249-265. https://doi.org/10.1007/s10676-015-9380-y
  • Bruns, A. (2017). Echo chamber? What echo chamber? Reviewing the evidence. 6th Biennial Future of Journalism Conference (FOJ17). Cardiff, UK. Retrieved from https://eprints.qut.edu.au/113937/
  • Cadwalladr, C. & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
  • Cadwalladr, C. (2018). The Guardian. Retrieved from https://www.theguardian.com/technology/2018/apr/10/facebook-notify-users-data-harvested-cambridge-analytica#img-1
  • Cadwalladr, C. (2017). The great British Brexit robbery: how our democracy was hijacked. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy
  • Chivers, T. (2019). What do we do about deepfake video? The Guardian. Retrieved from https://www.theguardian.com/technology/2019/jun/23/what-do-we-do-about-deepfake-video-ai-facebook
  • Culkin, J. M. (1967). A schoolman's guide to Marshall McLuhan. The Saturday Review, 51-53. Retrieved from http://www.unz.org/Pub/SaturdayRev-1967mar18-00051
  • Tandoc, E. C., Jr., Lim, Z. W. & Ling, R. (2018). Defining “Fake News”. Digital Journalism, 6(2), 137-153. doi:10.1080/21670811.2017.1360143
  • Erbaysal Filibeli, T. & Şener, O. (in press). Manipüle Edilmiş Bir Enformasyonel Vitrin ve Popülist bir Enformasyon Alanı olarak Twitter [Twitter as a manipulated informational showcase and a populist information field]. Moment Dergi.
  • Fisher, M. & Taub, A. (2019). How YouTube Radicalized Brazil. The New York Times. Retrieved from https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html
  • Flood, A. (2016). ‘Post-truth’ named word of the year by Oxford Dictionaries. The Guardian. Retrieved from https://www.theguardian.com/books/2016/nov/15/post-truth-named-word-of-the-year-by-oxford-dictionaries
  • Haim, M., Graefe, A. & Brosius H. B. (2018) Burst of the Filter Bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330-343, doi: 10.1080/21670811.2017.1338145
  • Herman, E. S. & Chomsky, N. (2008). Manufacturing Consent: The Political Economy of Mass Media. London: The Bodley Head.
  • Happer, C., Hoskins, A. & Merrin, W. (2019). Weaponizing reality: an introduction to Trump’s war on the media. In Happer, C., Hoskins, A. & Merrin, W. (Eds), (2019). Trump’s Media War (pp.3-22). Switzerland: Palgrave Macmillan.
  • Hunt, E. (2016, March 24). Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter
  • Keyes, R. (2004). The post-truth era. New York: St. Martin’s Press. ISBN: 0312306482
  • Krombholz, K., Hobel, H., Huber, M., & Weippl, E.R. (2015). Advanced social engineering attacks. J. Inf. Sec. Appl., 22, 113-122.
  • Narin, B. (2018). Kişiselleştirilmiş Çevrimiçi Haber Akışının Yankı Odası Etkisi, Filtre Balonu ve Siberbalkanizasyon Kavramları Çerçevesinde İncelenmesi [An examination of personalized online news feeds within the framework of the echo chamber effect, filter bubble and cyberbalkanization concepts]. Selçuk Üniversitesi İletişim Fakültesi Akademik Dergisi, 11(2), 232-251. doi:10.18094/josc.340471
  • Metz, C. & Blumenthal, S. (2019). How A.I. could be weaponized to spread disinformation. The New York Times. Retrieved from https://www.nytimes.com/interactive/2019/06/07/technology/ai-text-disinformation.html
  • Molloy, M. (2016, March 26). Microsoft 'deeply sorry' after AI becomes 'Hitler-loving sex robot'. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2016/03/26/microsoft-deeply-sorry-after-ai-becomes-hitler-loving-sex-robot/
  • Newman, N., Fletcher, R., Kalogeropoulos, A. & Nielsen, R. K. (2019). Reuters Institute Digital News Report. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/inline-files/DNR_2019_FINAL_27_08_2019.pdf
  • Pybus, J. (2019). Trump, the first Facebook president: why politicians need data too. In Happer, C., Hoskins, A. & Merrin, W. (Eds), (2019). Trump’s Media War (pp.227-240). Switzerland: Palgrave Macmillan.
  • Rampling, J. (2017). Secrets of Silicon Valley: The Persuasion Machine. [Documentary Movie]. UK: BBC.
  • Sample, I. (2014). How computer-generated fake papers are flooding academia. The Guardian. Retrieved from https://www.theguardian.com/technology/shortcuts/2014/feb/26/how-computer-generated-fake-papers-flooding-academia
  • Mayer-Schönberger, V. & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. Boston, New York: Houghton Mifflin Harcourt. ISBN: 978-0-544-00269-2
  • Sunstein, C. R. (2009). Republic.com 2.0. Princeton, NJ: Princeton University Press.
  • Schwartz, O. (2019). Could ‘fake text’ be the next global political threat? The Guardian. Retrieved from https://www.theguardian.com/technology/2019/jul/04/ai-fake-text-gpt-2-concerns-false-information
  • Sich, A., Bullock, J. & Roberts, S. (2018). What is the Cambridge Analytica Scandal? The Guardian. Retrieved from https://www.theguardian.com/news/video/2018/mar/19/everything-you-need-to-know-about-the-cambridge-analytica-expose-video-explainer
  • Wakefield, J. (2016, March 24). Microsoft chatbot is taught to swear on Twitter. BBC News. Retrieved from https://www.bbc.com/news/technology-35890188
  • Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. New York: Penguin Press.
  • Parkin, S. (2019). The rise of the deepfake and the threat to democracy. The Guardian. Retrieved from https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
  • Thurman, N. (2011). Making ‘the Daily Me’: Technology, economics, and habit in the mainstream assimilation of personalized news. Journalism, 12(4), 395-415. doi:10.1177/1464884910388228
  • Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). doi:https://doi.org/10.5210/fm.v19i7.4901
  • Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H. & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review: Journal on Internet Regulation, 5(1). Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2758126
  • Facebook'taki veri skandalı, Türkiye'de 234 bin kişiyi etkiledi [The data scandal on Facebook affected 234 thousand people in Turkey]. (2018, April 6). T24. Retrieved from https://t24.com.tr/haber/facebooktaki-veri-skandali-turkiyede-234-bin-kisiyi-etkiledi,599408
  • How is fake news spread? Bots, people like you, trolls, and microtargeting. (n.d.) Retrieved from https://www.cits.ucsb.edu/fake-news/spread
Primary Language: English
Subjects: Social
Section: Articles
Authors

ORCID: 0000-0003-4642-2279
Author: Tirse FİLİBELİ (Corresponding Author)
Institution: BAHÇEŞEHİR ÜNİVERSİTESİ
Country: Turkey


Dates

Publication Date: 28 December 2019

APA: FİLİBELİ, T. (2019). Big Data, Artificial Intelligence, and Machine Learning Algorithms: A Descriptive Analysis of the Digital Threats in the Post-truth Era. Galatasaray Üniversitesi İletişim Dergisi, (31), 91-110. DOI: 10.16878/gsuilet.626260