Deepfakes, Disinformation, Hate Speech, and Democracy in the Age of Artificial Intelligence
DOI: https://doi.org/10.62269/cavcaa.20

Keywords: artificial intelligence, misinformation, hate speech, propaganda, democracy, deepfakes

Abstract
Artificial Intelligence (AI) is transforming the world, and with that transformation come new challenges. One of the biggest concerns is the rise of deepfakes: videos or audio recordings manipulated to appear real. This technology can be used to spread misinformation, propaganda, and hate speech, posing a serious threat to democracy. Deepfakes are becoming increasingly sophisticated and difficult to detect, which makes them highly effective tools for manipulating public opinion. For example, deepfakes can show politicians saying things they never said, or celebrities doing things they never did. Their spread can have a devastating impact on society: it can erode trust in institutions, deepen political polarization, and fuel violence. In a world where people no longer know what to believe, democracy becomes vulnerable. But is AI truly a threat to democracy? I do not think so. This scenario should not lead us to view AI as an inexorable danger. The key lies in taking collective responsibility: developing effective detection mechanisms, promoting digital literacy and critical thinking among citizens, and safeguarding a commitment to truthfulness and constructive debate in the digital sphere. Rather than succumbing to panic at the possibility of manipulation, we should focus on empowering individuals to navigate this complex information landscape with discernment. This article therefore proposes a perspective of hope: in the battle against the malicious use of AI for disinformation and hatred, the real challenge may not be the technology itself, but our own fear. Overcoming that fear through education, the development of critical skills, and the promotion of a culture of verification and active participation can not only mitigate the risks associated with deepfakes, misinformation, and hate speech, but also strengthen the pillars of our democracy in the digital age.