
"Don't Trust That Much": OpenAI CEO Sam Altman Admits ChatGPT Can Be Wrong

Altman spoke of a fundamental limitation of large language models (LLMs) - their tendency to "hallucinate" or generate incorrect information. He said that users should approach ChatGPT with healthy scepticism, as they would with any emerging technology.

"Don't Trust That Much": OpenAI CEO Sam Altman Admits ChatGPT Can Be Wrong
Don't place unwavering trust in ChatGPT, OpenAI CEO Sam Altman has warned
  • OpenAI CEO Sam Altman cautioned against over-reliance on ChatGPT due to its inaccuracies
  • Altman highlighted the tendency of large language models to generate incorrect information
  • Users should approach ChatGPT with healthy scepticism, similar to other emerging technologies
Did our AI summary help?
Let us know.

Don't place unwavering trust in ChatGPT, OpenAI CEO Sam Altman has warned. Speaking on the company's newly launched official podcast, Altman cautioned users against over-relying on the AI tool, saying that despite its impressive capabilities, it still frequently got things wrong.

"People have a very high degree of trust in ChatGPT, which is interesting because, like, AI hallucinates," Altman said during a conversation with author and technologist Andrew Mayne. "It should be the tech that you don't trust that much."

Comparing ChatGPT with traditional platforms like web search or social media, he pointed out that those platforms often modify user experiences for monetisation. "You can kinda tell that you are being monetised," he said, adding that users should question whether content shown is truly in their best interest or tailored to drive ad engagement.

Altman acknowledged that OpenAI may eventually explore monetisation options, such as transaction fees or advertisements placed outside the AI's response stream. He made it clear that any such efforts would have to be fully transparent and must never interfere with the integrity of the AI's answers.

"The burden of proof there would have to be very high, and it would have to feel really useful to users and really clear that it was not messing with the LLM's output," he said.

He warned that compromising the integrity of ChatGPT's responses for commercial gain would be a "trust-destroying moment."

"If we started modifying the output, like the stream that comes back from the LLM, in exchange for who is paying us more, that would feel really bad. And I would hate that as a user," Altman said.

Earlier this year, Altman admitted that recent updates had made ChatGPT overly sycophantic and "annoying," following a wave of user complaints. The issue began after the GPT-4o model was updated to enhance both its intelligence and personality, with the aim of improving the overall user experience. Instead, the changes made the chatbot excessively agreeable, leading some users to describe it as a "yes-man" rather than a thoughtful AI assistant.

Track Latest News Live on NDTV.com and get news updates from India and around the world
