In the past few years, artificial intelligence has transformed how information is produced and disseminated, enabling unprecedented efficiency in communication. The same technology has also opened avenues for sophisticated misinformation, such as deepfakes and AI-generated fake news, which threatens information integrity and erodes public confidence in media and institutions. While detection research is advancing, most methods developed so far cannot keep pace with how quickly generative models evolve. This study will examine how AI-generated misinformation evades current detection systems in real time and will argue for the urgent development of better solutions.
Background: This paper reviews how fake news research has evolved over the last decade. Major themes identified include disinformation on social media, the COVID-19 infodemic, and advances in automated detection. The authors uniquely map fake news research onto the Sustainable Development Goals, discussing its implications for SDG 3 (health), SDG 16 (peace and strong institutions), and SDG 9 (industry and innovation). They also examine how generative AI contributes to producing realistic fake news and raise several important ethical concerns.
How I intend to use it: I will draw on this source's account of how AI-generated misinformation challenges current detection technologies, which supports my hypothesis that existing tools are insufficient. Its discussion of generative AI's impact and the ethical dilemmas involved further strengthens the case for developing more advanced detection methods.
Source 2: Deepfake detection using deep learning methods: A systematic and comprehensive review
Background: This article comprehensively reviews deep learning-based deepfake detection methods for images, videos, audio, and hybrid multimedia content. It also discusses recent advances, current challenges, and future research directions in deepfake detection for combating AI-generated misinformation.
How I intend to use it: I will use this source to understand how current deepfake detection technologies work and how effective they are against sophisticated AI-generated content (a simplified detection pipeline is sketched below). The article's analysis of the limitations and weaknesses of existing detection methods will provide the backbone for my argument that current technologies cannot adequately detect advanced AI-generated misinformation.
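To make the detection side concrete, the sketch below shows one common approach surveyed in reviews of this kind: fine-tuning a pretrained convolutional network to label face images as real or fake. The dataset layout (frames/real, frames/fake), the choice of ResNet-18, and the hyperparameters are my own illustrative assumptions, not details taken from the article.

# Minimal sketch of a frame-level deepfake classifier: fine-tune a pretrained
# ResNet-18 to label face crops as "real" or "fake". Dataset paths and
# hyperparameters are illustrative placeholders, not from the reviewed paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def build_model() -> nn.Module:
    # Start from ImageNet weights; replace the final layer with a 2-way head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = real, 1 = fake
    return model

def train(data_dir: str = "frames/", epochs: int = 3) -> nn.Module:
    # Expects data_dir/real/*.jpg and data_dir/fake/*.jpg (hypothetical layout).
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder(data_dir, transform=tfm)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

if __name__ == "__main__":
    train()

The point of the sketch is not the specific architecture but the pattern the review describes: detectors learn artifacts present in today's fakes, which is exactly why newer generation methods can slip past them.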
Source 3: Unmasking AI-Generated Fake News Across Multiple Domains
Background: The article examines the danger of AI-generated fake news across different domains. The authors build a dataset of human-written and AI-generated news articles using models such as ChatGPT, then train machine learning classifiers to label each article as AI-generated or human-written and as true or false. While the classifiers achieve high accuracy within individual domains, their performance drops when they are applied across domains.
How I plan to use it: I will use this article to show that, despite some successes, current detection methods fail to generalize to AI-generated fake news across diverse domains (the cross-domain evaluation idea is sketched below). This strengthens my argument that existing detection methods are inadequate for combating AI-driven misinformation at scale.
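To illustrate the cross-domain problem the authors report, here is a minimal sketch (not their actual method or data) of training a simple TF-IDF plus logistic-regression detector on one domain and testing it on another; the toy articles, labels, and pipeline are placeholders chosen for illustration only.

# Minimal sketch of cross-domain evaluation: train a TF-IDF +
# logistic-regression classifier to separate AI-generated from human-written
# text in one domain, then test it on another domain. The toy articles below
# are placeholders; the paper's dataset, models, and scores are not reproduced.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

def train_detector(texts, labels):
    # labels: 1 = AI-generated, 0 = human-written
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    ).fit(texts, labels)

if __name__ == "__main__":
    # In-domain training data (e.g., politics) -- illustrative strings only.
    train_texts = [
        "The senator announced a new infrastructure bill on Tuesday.",
        "Officials confirmed the vote will proceed as scheduled next week.",
        "In a stunning turn of events, experts universally agree the policy is flawless.",
        "Sources say the groundbreaking initiative will transform everything overnight.",
    ]
    train_labels = [0, 0, 1, 1]

    # Out-of-domain test data (e.g., health) -- detectors often degrade here.
    test_texts = [
        "The clinic reported a modest rise in seasonal flu cases this month.",
        "Researchers universally agree this miracle supplement cures all ailments instantly.",
    ]
    test_labels = [0, 1]

    detector = train_detector(train_texts, train_labels)
    print("in-domain accuracy:   ", accuracy_score(train_labels, detector.predict(train_texts)))
    print("cross-domain accuracy:", accuracy_score(test_labels, detector.predict(test_texts)))

The gap between in-domain and cross-domain accuracy in a setup like this mirrors the generalization failure the article documents.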
Background: This article highlights the dual nature of AI, in which potential benefits come hand in hand with significant risks from AI-generated misinformation and hallucinations. The authors also discuss how improvements in large language models are likely to further blur the line between reality and fabrication, straining trust and raising ethical and societal concerns.
How I plan to use it: I will use this article to highlight how current technologies fall short in detecting and preventing AI-generated misinformation. Its discussion of AI's potential to manipulate decisions and create misleading content further supports my assertion that existing systems cannot handle these advanced challenges, underscoring the need for stronger ethical guidelines and more effective detection methods, which are at the core of my research.
Background: The article examines how AI-generated deepfakes threaten American elections. According to Painter, “very realistic fake media” can make people believe something about a politician that is not true. He connects this problem to the Supreme Court’s decision in Citizens United, which gave rise to dark money in politics and, in turn, made it easier for actors who wish to remain anonymous to sway an election with disinformation. The article highlights the legal challenges of regulating deepfakes and proposes solutions such as a “Deepfake Alert System” to identify and flag manipulated media.
How I intend to use it: I will use this article to show how present technologies and legal frameworks are unable to contain AI-generated misinformation in politics. This supports my argument that existing detection mechanisms and current policies cannot effectively respond to sophisticated AI-powered disinformation, emphasizing the need for more robust solutions.
Skibidy, the time for grading the Proposal+5 has long since passed. By now, yours should have grown to perhaps 10 or 15 sources and should be renamed Annotated Bibliography. This is certainly a fine example of how to go about annotating sources at the proposal stage.
Your annotations should be more mature now that you’ve actually produced a paper and should reflect how you did, in fact, make use of the source material.
Finally, while I appreciate the links to your sources, they should be full bibliographic notations with Author Names, Publications, etc., so readers can find their way to your sources in case the links don’t work.
This would grade well, but it will pick up a grade as your Annotated Bibliography when that goes into your Portfolio.