Artificial Intelligence: the Enabler or Antidote for Disinformation?

Kaaf Seen
5 min read · Dec 18, 2022

Living in the age of technology, we have access to countless synthetic intelligence tools like never before. AI tools can create entire news pieces prompted by a single sentence or even a phrase. Algorithms are becoming increasingly capable of producing “human-like” content. Furthermore, deepfakes, be they doctored photographs, audio messages, or videos, can show people in places they have never been, saying things they have never said, and doing things they never did. AI-aided extortion and blackmail have become easier than ever. At times, they are even state-sanctioned.

Lest we forget, we also live in the era of post-modernism, where perception is the truth. We are witnessing increasing polarisation across the globe, with fascism rearing its ugly head once again. Our daily discourse is often centred around “us and them”. Instagram aesthetics lend credence to information more than the facts it is based on, and the sense of security that being behind a screen affords makes us say things on social media we would never say in the street.

On average, 500 million tweets are posted daily, 95 million photos are uploaded to Instagram, and more than 300 million photos to Facebook. Reddit receives one million new comments, whilst on Facebook, 510,000 comments and 293,000 status updates are posted every hour. There is only so much information overload that the human mind can withstand, absorb and comprehend.

Journalists, public figures, and data scientists have often alleged foul play in the visibility of stories and posts on specific topics: Palestine, Kashmir, Yemen, Neo-Nazism in Ukraine, and the Hijab ban in India. Such claims were long dismissed as propaganda, but the recent Twitter Files have proven otherwise and increased our awareness of tech censorship. We also now know for a fact that celebrities and unknowns alike could be removed or reviewed at the behest of a political party on one of the leading platforms people check daily for news.

Amid all this, persuasive, tailored, and difficult-to-detect messaging is being created using synthetic intelligence, and it is not the only woe of the hour. Echo chambers on social media, powered by algorithms, further amplify disinformation. Besides echo chambers, another concern is the spiral of silence these algorithms create on social media. As a consequence, underrepresented communities and minorities, marginalised by virtue of population size, opinion, or the digital divide, are pushed into oblivion even more than they already are.

The point is that while gatekeeping information and narrative-building were always a reality, even in mainstream media, we live in an era where micro-targeting is the new normal. The rapid proliferation of manipulation is the order of the day. It is no longer something which may happen in the future. Nor is it something for which only certain parties, companies, states, or non-state actors may be held accountable. Everyone is complicit.

Both the generation and the spread of AI-created content are now perceived as an “existential threat”, and perhaps rightfully so. Perceived enemies can now be drowned in a quagmire of disinformation with little or no trace. By the time traces are found, the damage is likely done.

Hope, however, is not lost. Humans seldom create anything without loopholes, and technology, being one of our creations, is unlikely to be an exception. There are specific markers of content produced by synthetic intelligence: it is written to heighten emotions, may seed conspiracy narratives, and is naturally optimised more for search engines than for humans. AI-generated content also spreads in ways different from organic content. Now that all this has been established, it is, interestingly, in Artificial Intelligence that we also find tools to help counter AI-generated and AI-enabled disinformation.

In linguistics, semantic analysis is the process of drawing meaning from text, and we now have machines capable of learning to do just that. Powered by machine learning algorithms and natural language processing, semantic analysis systems can understand the context of language entered into them. This enables synthetic intelligence to identify texts created using AI tools and software.
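To make this concrete, here is a minimal sketch of how such a detector might be trained, assuming a labelled corpus of human-written and machine-generated text is available. The sample sentences and the simple TF-IDF pipeline below are illustrative stand-ins, not a production detector:

```python
# Minimal sketch: classifying text as human-written vs AI-generated.
# The training examples are hypothetical placeholders for a real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: label 1 = AI-generated, 0 = human-written.
texts = [
    "Honestly, I just saw it happen outside my window, still shaking.",
    "In conclusion, it is important to note that many factors contribute.",
    "Breaking: officials confirm the incident occurred late last night.",
    "Furthermore, studies have shown that numerous aspects are involved.",
]
labels = [0, 1, 0, 1]

# Character n-grams capture stylistic fingerprints, such as the
# formulaic transitions that often mark machine-generated prose.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

suspect = "Moreover, it is worth noting that several elements play a role."
print(detector.predict_proba([suspect]))  # [P(human), P(AI)]
```

In practice, detectors of this kind are trained on millions of examples and still produce false positives, which is why they assist rather than replace human judgement.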

Artificial Intelligence is also extremely useful in detecting whether a particular audio or visual input is raw or manipulated. Not only can doctored images and videos be identified, even when the tampering is invisible to the naked eye, but synthetic intelligence can also reverse engineer pictures, art, text, and deep fakes, telling us everything we need to know about how they were made.
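One classic forensic heuristic in this family is Error Level Analysis (ELA). The sketch below, written with the Pillow imaging library, is an illustrative example rather than the specific tooling alluded to above: regions of a JPEG edited after its original save recompress differently from the rest of the image and show up as bright patches in the amplified difference.

```python
# Minimal sketch of Error Level Analysis (ELA) for spotting doctored JPEGs.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-save at a known JPEG quality, then reload the recompressed copy.
    original.save("_ela_tmp.jpg", "JPEG", quality=quality)
    resaved = Image.open("_ela_tmp.jpg")
    # Pixel-wise difference between the image and its recompressed copy.
    diff = ImageChops.difference(original, resaved)
    # Amplify the faint differences so tampered regions become visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# Usage (hypothetical file names): bright regions warrant a closer look.
# error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```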

Whilst machines can perform semantic analysis and reverse engineering independently, the human-computer partnership still reigns supreme in Root Tracing and Spread Analysis.

Novel, semi-automated image, content and data analysis freeware can be used to untangle the complex roots of a piece of information. The software can accommodate a wide range of data sources identified by people. The computer can then map when, where and how the information was created, who first disseminated it, and who and how many entities followed suit. It also analyses user interaction with the content, relevant cross-platform communication, sentiment mapping and several other valuable data sets, all of which come in handy when analysing fake news.
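As a toy illustration of root tracing and spread analysis, the sketch below builds a share graph with the networkx library. The account names and share records are hypothetical, and a real pipeline would ingest platform API data at a far larger scale:

```python
# Minimal sketch: tracing the origin and spread of a piece of content
# from hypothetical (sharer, source, hour) records.
import networkx as nx

# Hypothetical share records: (user who shared, user they got it from, hour).
shares = [
    ("acct_b", "acct_a", 1),
    ("acct_c", "acct_a", 2),
    ("acct_d", "acct_b", 3),
    ("acct_e", "acct_b", 3),
    ("acct_f", "acct_c", 4),
]

graph = nx.DiGraph()
for sharer, source, hour in shares:
    graph.add_edge(source, sharer, hour=hour)

# Root tracing: nodes with no inbound edges are candidate origin accounts.
roots = [n for n in graph.nodes if graph.in_degree(n) == 0]
print("Likely origin account(s):", roots)

# Spread analysis: who amplified the content most, and how far it travelled.
top_spreader = max(graph.nodes, key=graph.out_degree)
depth = max(nx.shortest_path_length(graph, roots[0]).values())
print("Top amplifier:", top_spreader, "| cascade depth:", depth)
```

A human analyst still decides which accounts and platforms to feed in and how to interpret the resulting map, which is why this remains a partnership rather than full automation.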

Artificial Intelligence is here, and it is here to stay. Like every technology before it, it will be used for both good and bad. The quicker we embrace synthetic intelligence, the more efficiently we can learn to use it for our betterment and to mitigate the threats it poses to us.

The questions we now face are no longer questions for which we can find answers in empirical evidence, equations and numbers. A new wave of philosophers is long overdue. How do we approach the truth in a post-truth era? With media being the fourth pillar of the state, can we stop looking at emerging technology from the lens of national security and begin viewing security from a technological lens instead?

I am the Advocacy Lead for Web 3.0 at the Islamabad Policy Research Institute. We welcome research dispatches on the subjects of emerging tech and relevant policy frameworks. Our particular focus areas are Data Science, Artificial Intelligence, Augmented and Virtual Reality, Cryptocurrencies and NFTs. Please feel free to reach out via email for queries: komal.salman@ipripak.org
