- OpenAI alleges Chinese AI firms use distillation to extract its intellectual property
- Chris Lehane highlighted this issue during the NDTV Ind.AI Summit session
- Distillation involves repeatedly querying models to extract proprietary model information
OpenAI has shared evidence with the US government alleging that Chinese AI firms are attempting to extract its intellectual property through a technique known as "distillation."
Chris Lehane, Chief Global Affairs Officer of OpenAI, made the remarks during a session at the NDTV Ind.AI Summit, responding to questions about whether Chinese startups such as DeepSeek are using OpenAI's models to replicate its research.
"We do an awful lot to make sure we're trying to protect the integrity of our models," Lehane said, adding that material shared with US officials last week documented how "distillation is taking place."
He described distillation as a process in which AI companies in China "are basically sort of bombarding our model to take as much of the IP or underlying model information as possible."
"That does happen," Lehane said. "We do an awful lot to try to mitigate that and protect from it."
Lehane cast the issue as part of a widening geopolitical struggle over the future of artificial intelligence. Companies in the United States, India and other democracies, he said, will increasingly confront efforts to extract proprietary model data as AI becomes more central to economic and military power.
"What's really at stake here," he said, "is as the world looks over the next decade or so, are we going to build out on that democratic AI or autocratic AI?"
He said China's leadership has made clear it intends to prevail in that competition. Lehane also paraphrased a past remark by Russian President Vladimir Putin, saying, "Whoever wins that competition between democratic nations and autocratic nations ends up winning the world."
India, as the world's largest democracy, will play a "decisive role" in shaping the outcome of the global AI race, Lehane added.
OpenAI has told US lawmakers that Chinese startup DeepSeek used large‑scale "distillation" of ChatGPT‑style systems, allegedly routing traffic through third‑party routers, resellers and other obfuscated channels to bypass geographic and usage limits and extract model outputs for training its own R1 model. In this account, DeepSeek did not hack into OpenAI or steal raw weights or source code. Instead, it allegedly free‑rode on billions of dollars of US R&D by systematically querying OpenAI's models, copying their behaviour and even probing or sidestepping safety systems, all in breach of OpenAI's terms of service. OpenAI frames this as both an intellectual‑property and a national‑security issue.
DeepSeek, for its part, publicly positions R1 as an efficient, largely open‑sourced reasoning model developed with far lower compute than US rivals, and defends distillation as a legitimate, widely used machine‑learning technique where a student model learns from a teacher model's outputs rather than its code.
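The teacher-student pattern DeepSeek describes can be sketched in a few lines. The following is a minimal, hypothetical toy (plain-Python logistic models with made-up parameters), not a depiction of either company's systems; it only illustrates the principle that the student fits the teacher's outputs without ever seeing its weights or code.

```python
import math
import random

# Illustrative sketch only: teacher, student and all parameters here are
# hypothetical toys. The pattern shown is the one described above: a
# "student" model imitates a black-box "teacher" purely from the
# teacher's outputs (soft labels), never seeing its internals.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def teacher(x):
    # Black-box teacher: internally a logistic model with w=2.0, b=-1.0,
    # but the student only ever sees its output probabilities.
    return sigmoid(2.0 * x - 1.0)

def distill(num_queries=500, steps=400, lr=1.0, seed=0):
    rng = random.Random(seed)
    # Step 1: query the teacher to build a dataset of (input, soft label).
    data = [(x, teacher(x))
            for x in (rng.uniform(-3.0, 3.0) for _ in range(num_queries))]
    # Step 2: fit a student of the same form by full-batch gradient
    # descent on cross-entropy against the teacher's soft labels.
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, p in data:
            g = sigmoid(w * x + b) - p  # d(cross-entropy)/d(logit)
            gw += g * x
            gb += g
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

w, b = distill()
print(w, b)  # approaches the teacher's hidden w=2.0, b=-1.0
```

The student here recovers the teacher's behaviour from queries alone, which is why the dispute turns not on whether the technique works but on whether using a model's outputs this way breaches its provider's terms of service.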
