Opinion | What ChatGPT Is Now Doing With Your Darkest 'Secrets'
For over three years now, millions across the world have treated ChatGPT like a confidante. And one company - OpenAI - holds the keys to this vast digital locker.
A friend in Delhi is involved in a fiery relationship with her self-declared “maverick” partner. She resorted to what modern urban Indians do when love turns, well, let's say, complicated - though in her case it was more than complicated. She consulted relationship counsellors, life coaches, even a “mind game strategist”. I had never heard of that, but she explained that they are available online and charge enormous sums by the hour. Nothing worked for her.
Defeated, she decided to divert her attention and jumped onto the ChatGPT bandwagon. She signed up for it casually. At first, it was curiosity. Then it became routine. Slowly, she began sharing her relationship angst almost every day. She spilt everything: her fears, her doubts, her insecurities, her jealousy, her hope that maybe she was overthinking it all. She did it secure in the knowledge that she was safe, that she wouldn't be judged, and that the conversations would remain confidential.
A few days ago, she called me to say that she is really, really worried about ChatGPT introducing ads. What if one day, the same system that heard her cry about her failing relationship starts showing her ads for divorce lawyers? Or for couples therapy packages? Or medication for anxiety? She has now limited her use of ChatGPT to research. As for her partner? That's another topic.
The issue at hand right now is that her intimate relationship with ChatGPT is not unusual. For over three years now, millions across the world have been speaking to it, pouring their hearts out, almost as if they are speaking to their own subconscious. They have treated ChatGPT like a confidante. And one company - OpenAI - holds the keys to this vast digital locker. Inside it are not just data points, but patterns of fear, longing, ambition and despair. Mountains of human candour. That intimacy is now at the centre of a growing global debate.
AI Is Getting A Bit Too Personal
Before we move any further, let me say why I will mostly talk about ChatGPT in this column. In India's fast-expanding AI chatbot market, ChatGPT has become the default, everyday AI for students, job-seekers, creators and general users. It accounts for roughly 70-75% of total usage, with Google's Gemini a distant second and tools like Perplexity and Microsoft Copilot occupying much smaller places. Claude is niche, and its consumer share is estimated at just around 1%.
My friend's worry is actually everyone's concern, as OpenAI has emailed millions of its users a privacy-policy update to say that it will start placing adverts in ChatGPT's free and Go tiers. OpenAI has informed users that personalisation will rely on signals such as user interactions and chat context, which remain within the system, and that new features - including contact syncing and age-prediction safeguards for younger users - are being introduced alongside clearer disclosures on how long data is stored and how it is processed. OpenAI claims conversations are not shared with advertisers, that ads will be clearly separated from responses, and that marketers will see only aggregated performance data. Users, it adds, will retain control through settings. It appears to be an attempt to build a revenue model without repeating the mistakes of social media.
However, user anxiety is not misplaced. We have already seen in recent times how personal data collected for one purpose can later be used for political influence. Investigations, including by the UK Parliament, and multiple academic studies have shown that data harvested from Facebook was used to build detailed voter profiles and deliver highly personalised political messages during the 2016 US presidential election and the Brexit referendum. That episode triggered worldwide concern about the influence of digital platforms on democracy.
The worry now is similar. If AI platforms hold the most intimate conversations of millions of people, what stops a future deal between a government, a political party or a large corporation and a technology company to use that behavioural insight to send customised political messaging or influence voters one individual at a time? This is not about what is happening today. It is about what could become possible once the data exists at that scale. In the end, trust will depend not on company assurances but on clear laws that make such use illegal and technically impossible.
High-Profile Resignations
In the past few days, two high-profile resignations from major artificial intelligence companies have highlighted concerns about where this technology is headed.
Mrinank Sharma, who led the Safeguards Research Team at Anthropic - an American AI company based in San Francisco - resigned publicly, warning that the “world is in peril”. Anthropic is best known for its chatbot called Claude, which competes directly with ChatGPT. The Wall Street Journal reported over the weekend that the US military used Claude during the operation in which Nicolás Maduro was captured in Venezuela. This is a high-profile example of how artificial intelligence is now being built into real military missions. Sharma's job was to ensure that these systems behaved safely and ethically. When the person responsible for safety walks away sounding alarmed, it naturally raises questions.
Around the same time, Zoe Hitzig, a researcher who recently left OpenAI, issued a pointed warning. Her concern was not about advertising banners or sponsored responses alone. It was about incentives.
Hitzig's worry is actually everyone's concern. Once advertising becomes central to a company's revenue model, pressure builds to increase user engagement. Anyone in digital media knows the economics: more time spent means more advertising opportunities. Even if companies promise strict internal rules today, commercial incentives can slowly reshape behaviour tomorrow.
Can India Find An Answer?
These concerns are unfolding just as governments and experts gather for the Delhi AI Summit. A similar gathering in Paris a year ago emphasised cooperation and voluntary safety commitments. The Delhi Summit is expected to push further towards clearer rules and stronger accountability. I spoke to an Indian friend, an AI expert based in Vancouver who works on integrating AI into large companies, and he was emphatic that global regulation is urgently needed before it's too late.
We in India need to be more concerned, as ChatGPT's presence is larger in India than anywhere else. According to a conservative estimate, it has 73 million daily users here, more than double the number in the US. India is also a major digital economy with large-scale public technology systems such as Aadhaar, UPI and CoWIN. These platforms show that scale and regulation can coexist. Policymakers must now consider how similar principles might apply to ChatGPT and other AI platforms.
Globally, some experts are calling for baseline standards similar to trade rules under the World Trade Organization (WTO). The idea is to establish minimum global norms on data use, transparency and accountability so that companies cannot shop for the weakest regulatory environment.
But a global consensus is difficult, if not impossible, to achieve. Most advanced AI systems are developed by a handful of American companies. China is building its own AI ecosystem. Europe has adopted the AI Act with strict regulations. An American diplomat in London told me that President Trump prefers to wait, as he wants AI to develop more; he likes an innovation-led approach. India seeks growth but also safeguards. Aligning these approaches will not be easy.
Anthropic's Many Legal Troubles
Meanwhile, Anthropic's own journey illustrates the tension between ideals and expansion.
Founded in 2021 by former OpenAI employees, Anthropic promoted what it called “constitutional AI” - systems guided by explicit ethical principles. Claude was marketed as cautious, transparent and safety-oriented.
Yet in September 2025, Anthropic agreed to pay $1.5 billion to settle a lawsuit from authors who alleged that pirated books had been used to train its models. A US judge ruled that training AI on legally purchased books could qualify as fair use. But building a large internal library from pirated copies did not qualify. Anthropic agreed to destroy the infringing material.
Just last month, major music publishers filed another lawsuit against Anthropic, alleging that song lyrics and sheet music were used without proper licensing. These legal battles do not accuse AI of having malicious intent, but question how companies sourced and stored their training data. The message from courts is becoming clearer. Innovation is not a problem, but shortcuts are.
At the same time, Anthropic is reportedly seeking fresh investment at valuations approaching $350 billion. Investors believe AI will reshape all kinds of industries, from healthcare to finance. Naturally, that scale of ambition creates pressure. Developing powerful AI systems requires vast computing infrastructure and highly skilled engineers. Revenue must match expenditure.
The broader industry faces similar ethical strain. OpenAI, Microsoft and Meta are also confronting copyright challenges. There have been debates about AI systems being overly agreeable, reinforcing user opinions rather than challenging harmful thinking. Each controversy chips away at public trust.
Can Regulation Keep Up?
Beyond privacy and advertising, the ethical dilemmas are gaining ground. AI can generate realistic deepfake videos that damage reputations or interfere in elections. It can automate scams, clone voices for fraud and assist cyberattacks. In the wrong hands, AI tools could accelerate misinformation campaigns or enhance surveillance. The scale of potential misuse is unprecedented. One individual, equipped with advanced AI tools, can now cause harm that once required organised networks.
Yet, governance frameworks remain fragmented at best. Technology evolves globally. Regulation remains largely national and often behind technology. This is why India's voice at global forums must be heard. As the world's largest democracy and a bridge between developed and developing economies, India ought to argue that AI governance cannot be left to markets alone. Innovation must be matched by accountability.
India's Unique Challenge
At home, India faces practical challenges - deepfakes during elections, digital scams, job displacement fears and uneven digital literacy. The Modi government is seized of the matter. It has often said in recent weeks that governance must be paired with reskilling initiatives and public awareness. If AI reshapes employment, training must become a national mission.
The Delhi AI Summit will not solve every dilemma. But it can push a crucial principle. Systems that enter the most intimate corners of human life must operate under clear, transparent and enforceable rules. The resignations we have witnessed are not signs of collapse. They are warning signals from within. They reflect an internal struggle between commercial growth and ethical restraint.
AI is advancing faster than the laws designed to govern it. Companies promise privacy. Courts are drawing boundaries. Governments are convening summits. Users continue to express their fears. Can the digital confidant remain a confidant? Or will commercial incentives gradually reshape it into something more intrusive?
No one can answer that, just yet.
(Syed Zubair Ahmed is a London-based senior Indian journalist with three decades of experience with the Western media)
Disclaimer: These are the personal opinions of the author