
Billionaire Elon Musk's DOGE team is expanding use of his artificial intelligence chatbot Grok in the US federal government to analyze data, said three people familiar with the matter, potentially violating conflict-of-interest laws and putting at risk sensitive information on millions of Americans.
Such use of Grok could reinforce concerns among privacy advocates and others that Musk's Department of Government Efficiency team appears to be casting aside long-established protections over the handling of sensitive data as President Donald Trump shakes up the US bureaucracy.
One of the three people familiar with the matter, who has knowledge of DOGE's activities, said Musk's team was using a customized version of the Grok chatbot. The apparent aim was for DOGE to sift through data more efficiently, this person said. "They ask questions, get it to prepare reports, give data analysis."
The second and third people said DOGE staff also told Department of Homeland Security officials to use it even though Grok had not been approved within the department.
Reuters could not determine the specific data that had been fed into the generative AI tool or how the custom system was set up. Grok was developed by xAI, a tech operation Musk launched in 2023, and the chatbot is offered on his social media platform, X.
If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws, said five specialists in technology and government ethics.
It could also give the Tesla and SpaceX CEO access to valuable nonpublic federal contracting data at agencies he privately does business with or be used to help train Grok, a process in which AI models analyze troves of data, the experts said. Musk could also gain an unfair competitive advantage over other AI service providers from use of Grok in the federal government, they added.
Musk, the White House and xAI did not respond to requests for comment. A Homeland Security spokesperson denied DOGE had pressed DHS staff to use Grok. "DOGE hasn't pushed any employees to use any particular tools or products," said the spokesperson, who did not respond to further questions. "DOGE is here to find and fight waste, fraud and abuse."
Musk's xAI, an industry newcomer compared to rivals OpenAI and Anthropic, says on its website that it may monitor Grok users for "specific business purposes." "AI's knowledge should be all-encompassing and as far-reaching as possible," the website says.
As part of Musk's stated push to eliminate government waste and inefficiency, the billionaire and his DOGE team have accessed heavily safeguarded federal databases that store personal information on millions of Americans. Experts said that data is typically off limits to all but a handful of officials because of the risk that it could be sold, lost or leaked, that it could violate Americans' privacy, or that it could expose the country to security threats.
Typically, data sharing within the federal government requires agency authorization and the involvement of government specialists to ensure compliance with privacy, confidentiality and other laws.
Analyzing sensitive federal data with Grok would mark an important shift in the work of DOGE, a team of software engineers and others connected to Musk. They have overseen the firing of thousands of federal workers, seized control of sensitive data systems and sought to dismantle agencies in the name of combating alleged waste, fraud and abuse.
"Given the scale of data that DOGE has amassed and given the numerous concerns of porting that data into software like Grok, this to me is about as serious a privacy threat as you get," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy.
His concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok.
DOGE's access to federal information could give Grok and xAI an edge over other potential AI contractors looking to provide government services, said Cary Coglianese, an expert on federal regulations and ethics at the University of Pennsylvania. "The company has a financial interest in insisting that their product be used by federal employees," he said.
"Appearance Of Self-Dealing"
In addition to using Grok for its own analysis of government data, DOGE staff told DHS officials over the last two months to use Grok even though it had not been approved for use at the sprawling agency, the second and third people said. DHS oversees border security, immigration enforcement, cybersecurity and other sensitive national security functions.
If federal employees are officially given access to Grok for such use, the federal government has to pay Musk's organization for access, the people said.
"They were pushing it to be used across the department," said one of the people.
Reuters could not independently establish whether, or how much, the federal government would be charged to use Grok. Reporters also could not determine whether DHS workers followed DOGE staff's directive to use Grok or ignored it.
DHS, under the previous Biden administration, created policies last year allowing its staff to use specific AI platforms, including OpenAI's ChatGPT, the Claude chatbot developed by Anthropic and another AI tool developed by Grammarly. DHS also created its own internal chatbot.
The aim was to make DHS among the first federal agencies to embrace the technology and use generative AI, which can write research reports and carry out other complex tasks in response to prompts. Under the policy, staff could use the commercial bots for non-sensitive, non-confidential data, while DHS's internal bot could be fed more sensitive data, records posted on DHS's website show.
In May, DHS officials abruptly shut down employee access to all commercial AI tools, including ChatGPT, after workers were suspected of improperly using them with sensitive data, said the second and third sources. Staff can still use DHS's internal AI tool. Reuters could not determine whether this prevented DOGE from promoting Grok at DHS.
DHS did not respond to questions about the matter.
Musk, the world's richest person, told investors last month that he would reduce his time with DOGE to a day or two a week starting in May. As a special government employee, he can serve for no more than 130 days, and it's unclear when that term ends. Working part time could allow him to stretch those days and extend his term beyond May. He has said, however, that his DOGE team will continue its work as he winds down his role at the White House.
If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute that bars officials, including special government employees, from participating in matters that could benefit them financially, said Richard Painter, ethics counsel to former Republican President George W. Bush and a University of Minnesota professor.
"This gives the appearance that DOGE is pressuring agencies to use software to enrich Musk and xAI, and not to the benefit of the American people," said Painter. The statute is rarely prosecuted but can result in fines or jail time.
If DOGE staffers were pushing Grok's use without Musk's involvement, for instance to ingratiate themselves with the billionaire, that would be ethically problematic but not a violation of the conflict-of-interest statute, said Painter. "We can't prosecute it, but it would be the job of the White House to prevent it. It gives the appearance of self-dealing."
The push to use Grok coincides with a larger DOGE effort led by two staffers on Musk's team, Kyle Schutt and Edward Coristine, to use AI in the federal bureaucracy, said two other people familiar with DOGE's operations. Coristine, a 19-year-old who has used the online moniker "Big Balls," is one of DOGE's highest-profile members.
Schutt and Coristine did not respond to requests for comment.
DOGE staffers have attempted to gain access to DHS employee emails in recent months and ordered staff to train AI to identify communications suggesting an employee is not "loyal" to Trump's political agenda, the two sources said. Reuters could not establish whether Grok was used for such surveillance.
In the last few weeks, a group of roughly a dozen workers at a Department of Defense agency were told by a supervisor that an algorithmic tool was monitoring some of their computer activity, according to two additional people briefed on the conversations.
Reuters also reviewed two separate text message exchanges between people directly involved in the conversations. The sources asked that the specific agency not be named out of concern over potential retribution. They were not aware of what tool was being used.
Using AI to identify the personal political beliefs of employees could violate civil service laws aimed at shielding career civil servants from political interference, said Coglianese, the expert on federal regulations and ethics at the University of Pennsylvania.
In a statement to Reuters, the Department of Defense said the department's DOGE team had not been involved in any network monitoring nor had DOGE been "directed" to use any AI tools, including Grok. "It's important to note that all government computers are inherently subject to monitoring as part of the standard user agreement," said Kingsley Wilson, a Pentagon spokesperson.
The department did not respond to follow-up questions about whether any new monitoring systems had been deployed recently.