Rebuttal Rewrite – SkibidySigma

The Limitations of AI in Combating Misleading Content

Ubillús et al. (2023) suggest that AI techniques, including neural networks and sentiment analysis, can help identify and mitigate misleading ads on social networks. These technologies offer powerful tools for data analysis and pattern recognition, yet they are not flawless. An algorithm is only as good as the data it was trained on: if that training data is biased or incomplete, the system can yield incorrect results, classifying legitimate content as misleading or vice versa.
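
To make the training-data concern concrete, consider a minimal sketch of the kind of text classifier these detection systems build on. The example below is my own illustration, not a method from Ubillús et al. (2023); the tiny training set and all ad texts are hypothetical, deliberately skewed so that the word “free” stands in for deception, which is exactly the kind of bias a poorly curated dataset can instill.

```python
# Minimal sketch: a bag-of-words ad classifier trained on a tiny,
# deliberately skewed dataset. All example texts are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Skewed training data: every misleading ad happens to contain "free",
# so the model learns the word itself as a proxy for deception.
train_texts = [
    "free miracle cure, act now",       # misleading
    "get rich quick, free money",       # misleading
    "quarterly sale on winter coats",   # legitimate
    "new restaurant opening downtown",  # legitimate
]
train_labels = ["misleading", "misleading", "legitimate", "legitimate"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# A perfectly honest ad is flagged because it shares the biased cue word.
print(model.predict(["free shipping on all orders"]))  # -> ['misleading']
```

A real system would train on millions of examples, but the failure mode scales with it: whatever spurious correlations the data carries, the model carries too.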

While AI can generate complex and varied content, its ability to detect that same complexity in misleading content remains limited. Gupta et al. (2022) indicate that the language used in fake news is often nuanced, culturally referenced, and context-dependent in ways an AI system may not fully understand. AI can adapt to different reading levels and registers, but distinguishing the origin and intent behind nuanced content requires a depth of understanding that current systems lack. Additionally, sophisticated disinformation groups may deploy AI-generated deepfakes or other synthetic media that evade detection by existing AI systems. However flexible AI is at generating content, translating that capability into reliably identifying the subtle cues of misinformation remains far from mastered.

One of the most significant limitations of using AI to fight misleading content is access to high-quality labeled datasets. As Gupta et al. (2022) note, any AI model requires comprehensive datasets to learn from and generalize effectively, yet such datasets are remarkably scarce. To teach a model to identify AI-generated misleading content, at least a portion of the training data must be explicitly labeled as “faked.” That labeling is generally done by human annotators or through an established fact-checking process, which introduces subjectivity and potential bias. If the data are not labeled precisely, a model may treat genuine and fabricated items as equally true, weakening its ability to discern one from the other. As more unlabeled falsehoods enter the dataset, the reliability of AI detections decreases, making it increasingly difficult to maintain accuracy. Worse, models trained on data where deceptive content has not been correctly identified might inadvertently propagate or even amplify disinformation, undermining the integrity of their detection capabilities.
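
As a rough illustration of that degradation, the sketch below (my own toy experiment, not drawn from Gupta et al. (2022)) trains the same kind of detector on synthetic data while a growing fraction of fabricated items are mislabeled as genuine, then compares held-out accuracy. The exact numbers will vary, but accuracy reliably drops as the label noise grows.

```python
# Sketch: how unlabeled falsehoods in the training pool degrade a detector.
# The data is synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "posts": class 1 = fabricated, class 0 = genuine.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_noise(noise_rate):
    """Flip a fraction of 'fabricated' training labels to 'genuine'."""
    y_noisy = y_train.copy()
    fabricated = np.flatnonzero(y_noisy == 1)
    flip = rng.choice(fabricated, size=int(noise_rate * len(fabricated)),
                      replace=False)
    y_noisy[flip] = 0  # mislabeled: fabricated content treated as genuine
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return model.score(X_test, y_test)

for rate in (0.0, 0.2, 0.4):
    print(f"label noise {rate:.0%}: held-out accuracy "
          f"{accuracy_with_noise(rate):.3f}")
```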

These advanced AI systems are also costly and technologically demanding, putting them out of reach for developing regions and smaller platforms. The result could be an uneven playing field: larger platforms such as Facebook and Twitter, with substantially greater resources, can deploy more robust AI defenses against misleading content than smaller or less-well-funded platforms, and bad actors could exploit the weaker platforms to spread misleading content more easily. At the same time, the evolving nuances of language, including slang, idioms, and changing terminology, pose challenges but also offer opportunities. Such linguistic variation can help identify common perpetrators of fake content, since particular patterns or recurring phrases may signal automated or malicious generation. Effectively leveraging these cues, however, requires continuous investment in AI development and monitoring, which again underscores the resource constraints smaller platforms face. Overcoming these economic constraints is essential if AI-based solutions are to be implemented uniformly across platforms.
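
To show how cheaply such linguistic fingerprinting could be prototyped, here is a small sketch of my own devising (not a technique from either cited paper) that flags word trigrams reused verbatim across several accounts, a crude signal of coordinated or automated posting; the accounts and posts are hypothetical.

```python
# Sketch: flag posts that reuse the same phrasing across many accounts,
# a crude signal of automated generation. All posts are hypothetical.
from collections import defaultdict

posts = {
    "acct1": "limited time offer claim your prize today",
    "acct2": "limited time offer claim your prize today",
    "acct3": "weather was lovely at the park today",
    "acct4": "limited time offer claim your prize now",
}

def trigrams(text):
    words = text.lower().split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

# Record which accounts use each trigram.
accounts_per_trigram = defaultdict(set)
for acct, text in posts.items():
    for tg in set(trigrams(text)):
        accounts_per_trigram[tg].add(acct)

# Trigrams shared by 3+ accounts are a possible coordination signal.
for tg, accts in sorted(accounts_per_trigram.items()):
    if len(accts) >= 3:
        print(f"suspicious phrase {tg!r} used by {sorted(accts)}")
```

Keeping even a toy like this useful, of course, means constantly refreshing the flagged patterns as language shifts, which is precisely the ongoing cost smaller platforms struggle to bear.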

AI social network monitoring systems also struggle with the sheer scale of surveillance involved, which raises privacy and ethical issues. To function well, AI algorithms often require access to vast amounts of user data, private messages, and personal information. Gupta et al. (2022) discuss the tension between fighting fake news and maintaining user privacy: an overly surveillance-heavy approach breaches the trust between users and platforms and can encroach on individuals’ privacy and free-expression rights.

Moreover, AI-driven content moderation may unwittingly stifle legitimate discourse. There is a real risk of over-censorship when AI algorithms incorrectly flag and remove content that violates no guidelines, hampering open communication and restricting the diversity of viewpoints a healthy democratic society depends on.

While AI helps identify potentially misleading content, it cannot replace human judgment. Ubillús et al. (2023) point out that current AI methods have not been able to uncover deeply embedded misleading news and do not yet operate in real time. AI systems struggle with context, sarcasm, humor, and cultural nuance; human moderators can supply the discernment needed to evaluate content, attending to subtleties that AI may overlook.

In this respect, a more effective outcome could come from a hybrid of AI and human expertise, as Gupta et al. (2022) suggest. AI can make an initial pass over large volumes of data and flag suspicious content for human review, bringing AI’s efficiency and human critical thinking into play together.
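
A triage pipeline of the kind Gupta et al. (2022) point toward might be structured as in the sketch below. The thresholds and the upstream classifier scores are placeholders of my own choosing, meant only to show the routing logic: the model decides the clear-cut cases, and everything in the wide uncertain middle goes to a human.

```python
# Sketch of AI-plus-human triage. Thresholds and scores are illustrative.
from dataclasses import dataclass

@dataclass
class Verdict:
    post_id: str
    action: str   # "allow", "remove", or "human_review"
    score: float  # model's estimated probability the post is misleading

def triage(post_id: str, p_misleading: float,
           allow_below: float = 0.2, remove_above: float = 0.9) -> Verdict:
    """Route a post based on model confidence.

    Only clear-cut cases are automated; the wide middle band goes to a
    human moderator, preserving judgment on context and nuance.
    """
    if p_misleading < allow_below:
        return Verdict(post_id, "allow", p_misleading)
    if p_misleading > remove_above:
        return Verdict(post_id, "remove", p_misleading)
    return Verdict(post_id, "human_review", p_misleading)

# Hypothetical scores from an upstream classifier.
for pid, score in [("p1", 0.05), ("p2", 0.55), ("p3", 0.97)]:
    print(triage(pid, score))
```

Widening or narrowing the middle band then becomes an explicit policy choice about how much judgment stays in human hands.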

Tactics of disinformation keep evolving, and malicious actors constantly search for ways around detection mechanisms. Gupta et al. (2022) discuss how recent advances in AI have likewise enabled highly sophisticated fake content, such as deepfakes. The more accessible this technology becomes, the harder it gets to tell the real from the fabricated. Detection systems that are not continually updated lag behind new forms of deceptive content, trapped in a reactive cycle in which AI is always fixing a problem that has already happened. Proactive steps, such as teaching users media literacy and incentivizing critical thinking, must therefore be part of any strategy to tackle disinformation.

As valuable as AI may be in identifying and reducing misleading publications on social networks, it is no panacea. Its limitations, including technical challenges, data constraints, privacy concerns, and an inability to fully understand context, mean we cannot rely on technology alone to solve this multifaceted problem. What we need is a multitrack approach: AI combined with human oversight, regulatory policy, user education, and international collaboration. The works of Ubillús et al. (2023) and Gupta et al. (2022) contribute valuable insights into the fight against misleading content. AI’s possibilities are promising in conjunction with human oversight, and such integration allows us to develop strategies that address the root causes of disinformation rather than mere symptoms.

Sources

Gupta, A., Kumar, N., Prabhat, P., Gupta, R., Tanwar, S., & Sharma, G. (2022). Combating Fake News: Stakeholder Interventions and Potential Solutions. IEEE Access, 10, 78268–78289.

Ubillús, J. A. T., Ladera-Castañeda, M., Pacherres, C. A. A., Pacherres, M. Á. A., & Saavedra, C. L. I. (2023). Artificial Intelligence to Reduce Misleading Publications on Social Networks. EAI Endorsed Transactions on Scalable Information Systems, 10(6).

This entry was posted in Portfolio Skibidy Sigma, Rebuttal Rewrite, Skibidy Sigma.

6 Responses to Rebuttal Rewrite – SkibidySigma

  1. davidbdale says:

    Skibidy, you and I have a problem to which we have both contributed but the solution to which will benefit you much more than it will me, so I hope you’ll engage with me in finding a resolution.

    The problem is that your work has not undergone scrutiny and authenticity verification for most of the semester, a problem you created and which I permitted to occur. You did not seek Feedback on your work, which meant I was free to neglect it for the most part, which I did because it permitted me more time to respond to grading and feedback requests from students who wished to engage in the process.

    You also contributed to the problem by failing to place your work into the Skibidy Sigma category (a failure this post shares with maybe a dozen others) even while placing it into other appropriate categories (such as Rebuttal Rewrite in this case), a shortcoming that made it easier for your work to go by undetected. I should have worked harder to find your work and didn’t. Again, we share in creating the problem.

    None of that would matter if I could trust the authorship of your work without hesitation, but I can’t. Faced now with three short arguments whose supposed Rewrites are identical to their first drafts, I don’t have any hope of verifying that your “revisions” were responsive to anything at all. To make matters worse, they don’t contain obvious flaws a first-year writing student would be expected to make and which would require feedback and revision/correction. They have sprung fully mature out of the head of Zeus looking for a grade.

    I would like to believe in your genius, but you’ll have to forgive me for harboring doubts based on “our” failure to submit your work throughout the semester to the customary “checks” of submitting drafts and revisions.

    We used your Definition Rewrite as a model for “live” feedback in my early class on Wednesday, and more than half the students judged it to be likely generated by AI. That’s not conclusive of course, but considering the topic of your paper, the implication that you might have found a way around AI detection was hard to avoid.

    I propose we at least try to resolve this dilemma with a little experiment.

    Consider these three paragraphs and my critical reading of them. Revise them for me to address my objections, or my lack of understanding, or my faulty logic in disputing the claims they make. Write versions that would be responsive to my remarks and would prevent the next reader from objecting to or misunderstanding your original intention in writing them.

    1, The fact that fake news is complex and varied in nature makes the detection of misleading content by AI quite hard, Gupta et al. (2022) add that many aspects of fake news involve nuanced language, cultural references, and context-specific information. Sophisticated disinformation campaigns may also involve AI-generated deepfakes or other forms of synthetic media that might evade detection by existing AI systems.

    —AI doesn’t seem to have any trouble generating content that is variously complex and varied in nature. In fact, one of the truly satisfying aspects of AI generation is that it can adapt its output to meet a variety of reading levels, knowledge levels, and formalities.

    —Why, then, should it have trouble distinguishing those complexities in generated content and determining their origin?

    —If I ask an AI text generator to incorporate a specific cultural reference, or even to adopt the tone of a cultural community, it appears to “understand” the request and deliver appropriate results. So why can’t AI detect them in samples?

    2, One of the most important limitations associated with using AI to fight misleading content is the availability of high-quality labeled datasets. As Gupta et al. (2022) note, comprehensive datasets, which are commonly acknowledged as a prerequisite for any AI model to learn and generalize well, are remarkably limited. With inadequate data representing an extensive array of misleading content, AI systems will struggle in detecting new or evolving forms of disinformation.

    —This is confusing. AI accomplishes nothing without comprehensive datasets, right? Crudely described, the whole process is derived from gathering virtually everything and summarizing it.

    —So, for AI to recognize AI, does it need to know that a subset of “everything” it has gathered falls into the category of “faked”?

    —And if so, as judged by whom?

    —Or does every deepfake enter the “universal dataset” with the same “truth value” as every other bit of information so that instead of judging the differences between types of information, AI generators treat all data as equally true?

    —And, if that’s the case, do all lies taint the pool so that anything generated by AI will undeniably and increasingly deviate from “the truth”?

    3, Moreover, AI models should be continuously updated and retrained in the rapidly changing landscape of social media content. This is a resource-heavy process and may not be possible for all platforms, especially those with limited budgets. The evolving nuances of language, including slang, idioms, and changing terminology, create even greater difficulty in accurately interpreting content through AI algorithms.

    —Hmmm. When you say “may not be possible for all platforms,” are you suggesting that Facebook might be better able to afford to detect fake content than less-well-capitalized platforms because ITS OWN detection models are the best money can buy?

    —As a side note: wouldn’t “slang, idioms, and changing terminology” be very helpful flags to help identify, for example, common perpetrators of fake material? It’s certainly helpful to ME in identifying alien generation of content when I read user instructions clearly produced by badly translating Chinese into English.

    If you will kindly revise this Rebuttal Rewrite to respond to that feedback, I can satisfy myself that you WOULD HAVE been just as responsive and brilliant in your revisions if we had engaged in the process earlier in the semester.

    Meanwhile, I’ll take your other arguments out of Feedback Please for the time being. Those requests may all be moot either way.

  2. SkibidySigma says:

    Hi Professor, I deeply apologize for not uploading my arguments and rewrites under the right category. As you already know, I was a late add to your class, and I missed the first few classes, which were the basis for getting to know our blog and how to post our assignments. I know most of it is pretty self-explanatory, but I guess I was a bit sluggish regardless, which is completely my fault.

    Now, I am trying to understand your doubts about the honesty of my work, and I am willing to cooperate with you in any way possible to make up for my mistakes. I will fix the mistakes in my Rebuttal Rewrite as you instructed and put all my work under the right categories. For the Research paper, again, that’s completely my fault; I wasn’t aware that we could just stitch our rewrites; I assumed we had to take key points from each rewrite and then write it out. I will also fix the Research Paper by stitching the rewrites together.

    However, I have one question regarding one of your comments, and again this just might be because I still might not fully understand the way our blog works. What did you mean when you said, “Faced now with three short arguments whose supposed Rewrites are identical to their first drafts, I don’t have any hope of verifying that your ‘revisions’ were responsive to anything at all”? The way I understood the assignments, I posted my arguments and rewrites exactly the same; I thought we would be correcting our rewrites based on the feedback you gave, and the argument section would just be graded on how we did on our first try. Since I failed to upload the Rewrites in the right category, I couldn’t get any feedback, so they stayed exactly the same as the initial arguments. Is that what you were saying?

    It’s nothing serious, but I was just a little confused about what you meant by that; if you could please clarify when you have the time, I’d appreciate it. Thank you, professor, and I will try to fix everything by this weekend.

    • davidbdale says:

      Yes, Skibidy. That’s all I was saying. You were right to post identical Drafts and Rewrites, but to pass the class, they can’t REMAIN identical all the way into the Portfolio.

      Let’s get to work on a Revision cycle, beginning with what I’ve provided as feedback. We may not need much beyond those revisions and a thorough reorganization of your Research paper to reflect the work you already put into your short arguments.

      We’ll see. Thank you for your calm consideration.

  3. davidbdale says:

    Let me know when the revisions to your Rebuttal argument are complete so we can proceed with grading your other papers.

  4. SkibidySigma says:

    I changed it, professor.

  5. davidbdale says:

    You’ve responded beautifully to my feedback, Skibidy, and produced new material that demonstrates your understanding and writing ability. That’s all I needed from the experiment to satisfy me that the work is yours and that you have engaged in the FYWP’s Core Value 1.

    I will now proceed to grade your Short Argument Rewrites. You might be able to refrain from totally restructuring your Research Paper if the grade it receives satisfies you. I see no need to compel you to reframe it as a combination of your short papers. How you managed to misunderstand THAT process is a mystery I don’t need solved.

    PLEASE return to all your posts and place them in the right categories, including your Username category and your Portfolio category for the 8 items it requires, to make them easier to find if we need them. But, for now, let’s move ahead with getting your stuff graded and ready for your Portfolio.
