- Sam Altman is accused of repeatedly lying to OpenAI board members and employees
- Altman created a secret shadow board without official board knowledge
- Prior employers and partners also raised concerns about Altman’s transparency
Sam Altman, CEO of OpenAI, is facing new scrutiny after an investigation detailed his "repeated lies" within the organisation. The report draws on more than 100 interviews, internal memos, and private notes from former executives.
Altman has a history of misleading people, including board members and employees, according to a New Yorker report.
OpenAI's chief scientist, Ilya Sutskever, collected over 70 pages of HR documents, Slack conversations, and cellphone screenshots and sent them to the board as disappearing messages. The first item on his memo's list read: "Sam exhibits a consistent pattern of...lying."
When Altman became CEO, he made different promises to different groups. He told some researchers that Greg Brockman's power would be reduced but secretly agreed with Brockman and Sutskever that he would step down if they both asked.
He created a "shadow board" without the knowledge of the official board.
After the board tried to fire Altman, he demanded the resignations of the members who opposed him. The departing members wanted the allegations against Altman to be investigated.
A board member described Altman as having two unusual traits at once. "First, he strongly wants to please people in each interaction, but at the same time, he shows almost no concern for the consequences of lying or deceiving others."
Altman then directly contacted Microsoft CEO Satya Nadella to suggest a new board lineup, suggesting, "Bret, Larry Summers, Adam as the board and me as CEO and then Bret handles the investigation."
At one point, he told Mira Murati, who was acting as interim CEO, that his allies were "going all out" to find damaging information about her and others who had opposed him.
Thrive Capital also delayed an $86 billion investment and suggested it would only go through if Altman returned as CEO.
In addition, teams that had been promised resources for AI safety received far less than expected, and some GPT-4 features went live without full approval.
Altman also told the board that GPT-4's safety features had been approved by a safety panel, but when board member Helen Toner checked, she found that the most controversial features had never been approved. He also failed to tell the board that Microsoft had released an early version of ChatGPT in India without completing a safety review.
Before OpenAI, senior staff at Loopt twice asked the board to fire Altman, citing concerns about his lack of transparency and his management. Partners at Y Combinator also complained to Paul Graham, who told colleagues that Altman "had been lying to us all the time."