AI's integration into platforms such as X's Grok, as a truth-seeking tool, stands in stark contrast to its use in generating altered content. This has given AI a dual role. The duality, where AI acts as both a fact finder and a fact fabricator, adds layers of complexity for lawmakers. It matters especially for the applicability of existing regulation to AI, as laws drafted for unrelated purposes have already extended their reach over its use.
One of the more complex challenges here is that AI-generated falsehoods often arise not from an intent to deceive, but from how the models are trained to predict and produce language. AI tools exhibit a form of truth-bias, traditionally seen as a uniquely human trait: the cognitive tendency to assume that most interpersonal communication is honest. A series of studies has shown that large language models are even more likely to accept and reproduce false information unless prompted otherwise. Crucially, this bias is not the result of design but a byproduct of training on vast human corpora in which truthful statements are statistically dominant. This raises a pertinent question: when AI-generated content perpetuates a falsehood without intent, should the user who generated it be held criminally liable? Intent must remain central to legal culpability.
Karnataka's latest Misinformation and Fake News (Prohibition) Bill, 2025, is a case in point. The Bill stands on shaky constitutional ground. It ventures into a domain arguably reserved for the Union Government. Entry 31 of the Union List grants Parliament exclusive power over "posts and telegraphs; telephones, wireless, broadcasting and other like forms of communication". Regulating internet-based speech falls squarely within this ambit, raising serious questions about the state legislature's competence to enact such a law in the first place.
Even if the question of jurisdiction is set aside, the Bill's substance is deeply flawed. While it makes no reference to AI or synthetic media, its broad definitions of ‘fake news’ and ‘misinformation’ could be interpreted in ways that unintentionally criminalise AI-generated content in its many forms, or other forms of digital creativity. The Bill defines ‘fake news’ broadly to include ‘misquotation’, ‘editing audio or video which results in the distortion of facts’, and ‘purely fabricated content’. Yet it fails to distinguish between malicious deception and legitimate creative expression, particularly expression that uses AI for satire, parody, or commentary. A voice-dubbed parody of a political sermon, even if clearly labelled as satire, could be construed under the Bill as ‘distorted’ or ‘fabricated’ and made liable to prosecution.
Critically, the Bill's carve-out for satire and parody applies only under the definition of ‘misinformation’, not under ‘fake news’, which carries stricter penalties and lacks any protection for artistic or humorous work. This is precisely the kind of ambiguity the Supreme Court sought to guard against in Shreya Singhal v. Union of India (2015), when it struck down Section 66A of the IT Act. The Court held that vague and overbroad language unduly restricts the freedom of expression guaranteed under Article 19(1)(a). The judgment warned that unless laws specify clearly what kind of speech is punishable, creators will be forced into a culture of self-censorship.
Internationally, democracies are developing more targeted and technologically aware regulations that offer better models. The European Union's AI Act, for example, focuses on transparency. It mandates that AI-generated deepfakes and other synthetic content that might be mistaken for authentic must be clearly labelled as such. Crucially, the law provides explicit exceptions for content that is obviously artistic, satirical, or creative, thereby protecting free expression while empowering citizens to identify manipulated media.
Similarly, several US states have enacted laws that target specific, malicious uses of AI rather than banning the technology itself. Laws in states such as California and Texas criminalise the creation and distribution of deceptive deepfake videos of political candidates intended to influence an election, but they are narrowly tailored, often applying only within a short window before voting. This approach aims to reduce high-stakes harms, such as election interference, without imposing a blanket ban on altered content.
The Karnataka Bill ignores such nuanced approaches, opting instead for a blunt instrument that threatens to criminalise a wide range of digital creativity.
This legislative approach is especially unjust in a legal system that values precedent and practical interpretation. The legal maxim ignorantia juris non excusat, meaning that ignorance of the law is no excuse, only deepens the challenge for creators using new tools. If creators are to be held liable for violating a law, they must first be able to understand what conduct is permitted.
To be clear, the dangers of deepfakes and deceptive synthetic content are real. They can be used to damage reputations or manipulate public opinion. However, the solution cannot be to criminalise ‘fake news’ without regard for intent, context, or creative purpose. Karnataka's policymakers would do well to recall that a well-formed legislature, as legal theorist Richard Ekins puts it, acts with the intent "to change the law in the chosen way, for the common good". That common good must balance the need to curb digital deception with the imperative to protect expression, even (and especially) when that expression is critical, satirical, or inconvenient. There is an urgent need to reconsider this Bill.
(Siddharth Subudhi is a technology policy researcher, and Alfahad Sorathia is a lawyer and former LAMP Fellow.)
Disclaimer: These are the personal opinions of the authors.