Technology · Apr 7, 2026 · 4 min read

"Almost Sociopathic": Sam Altman Seen by The New Yorker. No One Trusts the CEO of OpenAI

The New Yorker has published what is likely the most detailed and uncomfortable profile ever written about Sam Altman. Authored by Ronan Farrow and Andrew Marantz, the article is the result of 18 months of investigation, more than 100 interviews, and the analysis of internal documents never made public, including about 70 pages of memos written by Ilya Sutskever (co-founder of OpenAI) and over 200 pages of private notes from Dario Amodei (current CEO of Anthropic). The picture that emerges is of a CEO whom many of the people who have known him closely simply do not trust.

"He's unconstrained by truth".

The most quoted phrase from the piece comes from a former member of the OpenAI board. However, the full quote is even more revealing: according to this source, Altman possesses two traits rarely combined in the same person. The first is an intense need to please others, to gain approval in every interaction. The second is a near-sociopathic indifference to the consequences of lying. Farrow and Marantz note that the term "sociopathic" came up in several conversations during their investigation, from different people and in different contexts.

This is not the first time Altman has faced accusations of a lack of transparency. In the fall of 2023, when OpenAI's board fired him, it explicitly stated that he had not been "consistently candid" in his communications. What followed was one of the most surreal episodes in the history of big tech: Altman effectively orchestrated a reverse coup; employees signed a mass letter of support, investors applied pressure, and within days he was CEO again, with the previous board completely dismantled. Those who had tried to stop him were out of the company.

The Structural Problem of Trust

Over these years, as AI has entered daily life and public debate, OpenAI has built its entire public narrative on the promise of being a responsible guardian of artificial intelligence. The company was founded as a nonprofit in 2015 precisely because its founders believed AI was too powerful a technology to be managed solely for profit. Today, however, that structure has become a complicated hybrid of for-profit and nonprofit entities, and the CEO is a person whom many of his former closest colleagues do not trust.

The roster of people who have worked with Altman and then chosen to distance themselves is remarkable. The best-known example is Anthropic, founded by Dario Amodei (former VP of Research at OpenAI) together with other researchers who had serious doubts about how Altman ran the company. The more than 200 pages of private notes from Amodei used by Farrow and Marantz suggest that those concerns were documented and specific. In its official response, OpenAI described The New Yorker article as a piece that "rehashes previously reported events through anonymous quotes and selective anecdotes from people with clear interests," without denying any specific accusation.

Internal Clash over the IPO

While the New Yorker profile digs into the past, more immediate tensions, reported by The Information, concern OpenAI's financial future. CFO Sarah Friar has expressed reservations about the company's ability to go public by the end of 2026, as Altman would like. Friar has reportedly questioned the sustainability of a $600 billion five-year spending plan for server infrastructure, in light of monthly revenues of about $2 billion and capital commitments of $122 billion already under contract. The concern is not unfounded: OpenAI is burning cash at a rate that few business models can justify in the short term.

Moreover, there are doubts about Friar's actual role in decision-making: according to sources cited by The Information, the CFO was absent from at least one high-level meeting with a major investor about infrastructure spending, an absence that those present described as "strange and glaring". Altman and Friar responded with a joint statement denying this account: "We have both been directly involved in every decision with significant outcomes over the past year and more." Whichever version is closer to the truth, one significant fact remains: Friar no longer reports directly to Altman but to Fidji Simo, Chief of Applied Business. This is an unusual setup for any large company, especially one preparing for a historic IPO.

AGI and the Legitimacy Issue

Against this backdrop, OpenAI has recently published a whitepaper titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First," which outlines the risks of its technology, from job destruction to the concentration of power, and proposes fiscal and governance measures to mitigate them. The document is consistent with the rhetoric Altman has promoted for years: we are aware of the risks, trust us. The problem is that convincing the public, regulators, and governments to do so becomes significantly harder when the CEO is at the center of an investigation documenting decades of opaque behavior.

The credibility of OpenAI as an institution inevitably passes through the credibility of its leadership.