1. Citizens United v. Federal Election Commission, 558 U.S. 310 (2010).
Background: This landmark Supreme Court decision held that corporations and unions have the same First Amendment rights as individuals with respect to independent political expenditures. The ruling allows these entities to spend unlimited funds on political advertising, significantly reshaping campaign finance law.
How I used it: I used this to understand the legal framework surrounding corporate influence in politics. It provided a foundation for analyzing how corporate funding can affect the spread of misinformation and fake news in political campaigns.
2. Robert Chesney & Danielle K. Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 California Law Review 1753 (2019).
Background: Chesney and Citron examine the rise of deep fake technology and its implications for privacy, democratic integrity, and national security. They discuss how advances in machine learning make it easier to create convincing deep fakes, which can be used maliciously to spread misinformation, manipulate elections, and damage individuals’ reputations. The authors also explore potential responses, including legal frameworks, technological countermeasures, and regulatory reforms to mitigate the threats posed by deep fakes.
How I used it: I used this source to understand the multifaceted risks associated with deep fake technology. It provides a comprehensive overview of the challenges deep fakes present to society and offers insights into possible strategies for addressing them.
3. Gupta, A., Kumar, N., Prabhat, P., Gupta, R., Tanwar, S., & Sharma, G. (2022). Combating Fake News: Stakeholder Interventions and Potential Solutions. IEEE Access, 10, 78268-78289.
Background: This paper examines the rise of fake news, particularly during the COVID-19 pandemic, and explores the challenges in detecting and mitigating its spread. It reviews existing detection methods, identifies their weaknesses, and proposes interventions from various stakeholders, including users, platforms, and governments, to combat fake news effectively.
How I used it: I used this source to gain a comprehensive understanding of the multi-stakeholder approach required to address fake news. The discussions on technical and policy interventions provided valuable insights into the limitations of current technologies and the necessity for integrated solutions, supporting my argument for more advanced detection methods.
4. Heidari, A., Navimipour, N. J., Dag, H., & Unal, M. (2023). Deep learning-based deepfake detection techniques: A systematic and comprehensive review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery.
Background: This article comprehensively reviews deep learning-based deepfake detection methods for images, videos, audio, and hybrid multimedia content. It also discusses recent advances, current challenges, and future research directions in deepfake detection for combating AI-generated misinformation.
How I used it: I used this source to learn how current deepfake detection technologies work and how effective they are at keeping pace with sophisticated AI-generated content. The article’s analysis of the limitations and weaknesses of existing detection methods provided the backbone for my argument that current technologies cannot adequately detect advanced AI-generated misinformation.
5. Painter, Richard W. Deepfake 2024: Will Citizens United and Artificial Intelligence Together Destroy Representative Democracy? Journal of National Security Law & Policy, 2023, pp. 121–151. HeinOnline (accessed through the Rowan University database).
Background: The article covers how AI-generated deep fakes threaten American elections. According to Painter, “very realistic fake media” can make people believe something about a politician that is not true. He connects this problem to the Court’s decision in Citizens United, which gave rise to dark money in politics and, in turn, made it easier for actors who wish to remain anonymous to sway the outcome of an election with disinformation. The article highlights the legal challenges of regulating deep fakes and suggests solutions such as a “Deep Fake Alert System” to identify and flag manipulated media.
How I used it: I used this article to show how present technologies and legal frameworks are unable to deal with the AI-generated misinformation spiraling out of control in politics. This supported my argument that existing mechanisms for detection and current policies cannot effectively respond to sophisticated AI-powered disinformation, emphasizing that even more robust solutions need to be developed.
6. Raman, Raghu, et al. Fake News Research Trends, Linkages to Generative Artificial Intelligence and Sustainable Development Goals. Heliyon, vol. 10, no. 3, 2024, e24727.
Background: This paper surveys how fake news research has evolved over the past decade. Major themes identified include disinformation on social media, the COVID-19 infodemic, and advances in automated detection. The article uniquely maps fake news research onto the Sustainable Development Goals, with special emphasis on health and peace, discussing its impact on SDG 3, SDG 9, and SDG 16. The authors also examine the role of generative AI in producing realistic fake news and raise several important ethical concerns.
How I used it: I used this source to examine how AI-generated misinformation challenges current detection technologies, which further cemented my hypothesis that the technologies in place today are not enough. The scale of impact generative AI can create, and the ethical dilemmas involved, lend additional credibility to the case for developing a more advanced detection method.
7. Ubillús, J. A. T., Ladera-Castañeda, M., Pacherres, C. A. A., Pacherres, M. Á. A., & Saavedra, C. L. I. (2023). Artificial Intelligence to Reduce Misleading Publications on Social Networks. EAI Endorsed Transactions on Scalable Information Systems, 10(6).
Background: This research investigates global issues related to misleading publications on social networks. The authors applied various artificial intelligence techniques, including neural networks, sentiment analysis, and machine learning, to combat fake news, particularly in the context of COVID-19. The study concluded that the AI methods in use were not effective at reliably identifying misleading news and lacked real-time application capabilities.
How I used it: I used this source to highlight the limitations of current AI technologies in detecting and preventing AI-generated misinformation on social media platforms. The findings support my argument that existing detection methods are insufficient, underscoring the need for more advanced and real-time solutions to effectively combat misinformation.
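The kind of classical machine-learning baseline this entry critiques can be illustrated with a minimal bag-of-words classifier. This is a hypothetical sketch, not the authors’ actual system: the class name, the tiny training set, and its labels are all invented for illustration, and the model’s reliance on surface wording mirrors the limitation the study reports, since reworded or AI-paraphrased text easily evades it.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayesFakeNewsClassifier:
    """Minimal bag-of-words Naive Bayes text classifier (illustrative only)."""

    def fit(self, texts, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihoods with Laplace (add-one) smoothing
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word]  # Counter gives 0 if absent
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented toy training data; labels are illustrative, not real judgments
train_texts = [
    "shocking miracle cure doctors hate this secret",
    "you will not believe this shocking secret exposed",
    "city council approves new budget for road repairs",
    "researchers publish peer reviewed study on vaccines",
]
train_labels = ["fake", "fake", "real", "real"]

clf = NaiveBayesFakeNewsClassifier()
clf.fit(train_texts, train_labels)
print(clf.predict("shocking secret doctors hate"))    # leans "fake" on this toy data
print(clf.predict("council publishes budget study"))  # leans "real" on this toy data
```

Because the model only counts word occurrences, a fabricated story written in a neutral, journalistic register scores as “real”, which is exactly the gap between classical detectors and fluent AI-generated misinformation that this study highlights.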
8. Williamson, S. M., & Prybutok, V. (2023). The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation.
Background: This article highlights the dual nature of AI, where substantial benefits come with significant risks linked to misinformation and AI-generated hallucinations. The authors further discuss how improvements in large AI language models are likely to further blur the line between reality and fabricated information.
How I used it: I used this article to point out the deficiencies of current AI technologies in efficiently detecting and preventing AI-generated misinformation. It also discusses AI’s potential to manipulate decisions and create misleading content, which justified my assertion that existing systems cannot handle the advanced challenges posed by AI-generated misinformation. This points to the need for stricter ethical guidelines and more effective detection methods, which are at the core of my research.