Omissions, Deceptions, Lying. The New Yorker Asks: Can Sam Altman Be Trusted? (newyorker.com)
- Reference: 0181598500
- News link: https://slashdot.org/story/26/04/11/1857254/omissions-deceptions-lying-the-new-yorker-asks-can-sam-altman-be-trusted
- Source link: https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted
Among other revelations, internal messages from a few years ago show that OpenAI executives and board members "had come to believe that [1]Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products ..."
> At the behest of his fellow board members, [OpenAI cofounder] Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text... The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying"....
>
> In a tense call after Altman's firing, the board pressed him to acknowledge a pattern of deception. "This is just so fucked up," he said repeatedly, according to people on the call. "I can't change my personality." Altman says that he doesn't recall the exchange.... He attributed the criticism to a tendency, especially early in his career, "to be too much of a conflict avoider." But a board member offered a different interpretation of his statement: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.' " Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn't be trusted?
Friday Altman responded in part to the article. ("I am not proud of being conflict-averse, which has caused great pain for me and OpenAI," [2]he wrote in a blog post . "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company.")
But the article also assembled similar stories from throughout Altman's career: - At Altman's earlier startup Loopt, "groups of senior employees, concerned with Altman's leadership and lack of transparency, asked Loopt's board on two occasions to fire him as C.E.O.," according to Keach Hagey, author of the Altman biography The Optimist .
- During Altman's time as president of Y Combinator, "several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to 'make personal investments, selectively, into the best companies, blocking outside investors.'" The article adds that in private, Y Combinator co-founder Paul Graham "has been unambiguous that Altman was removed because of Y.C. partners' mistrust... On one occasion, Graham told Y.C. colleagues that, prior to his removal, 'Sam had been lying to us all the time.'"
- "In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an 'A.G.I. Manhattan Project,'" the article points out, "and that OpenAI needed billions of dollars of government funding to keep pace...." But one intelligence official "after looking into the China project, concluded that there was no evidence that it existed: 'It was just being used as a sales pitch.'"
- As California lawmakers considered safety testing for AI models, one legislative aide complained of "increasingly cunning, deceptive behavior from OpenAI". OpenAI later subpoenaed some of the bill's top supporters (and OpenAI critics), in some cases asking for their private communications to investigate whether Elon Musk was funding them. [The article notes an ongoing animosity between Altman and Musk. "When Altman complained on X about a Tesla he'd ordered, Musk replied, 'You stole a non-profit.'"]
And "Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors."
> [M]ost of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."
>
> The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was [3]Aaron Swartz , a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. "You need to understand that Sam can never be trusted," he told one. "He is a sociopath. He would do anything."
>
> Multiple senior executives at Microsoft said that, despite [CEO Satya] Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said... The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."
[1] https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted
[2] https://blog.samaltman.com/2279512
[3] https://www.newyorker.com/magazine/2013/03/11/requiem-for-a-dream
Just as much as any other CEO can be trusted (Score:2)
Sure, some CEOs are more ethical than others. But every last one of them wants to represent their company, and their own performance in it, in the best possible light. This means they will focus on things that make their company, and them, look good. And it means they will not willingly talk about things that make them and their company look bad.
If you find a CEO that never colors the truth, take a picture, you have found a unicorn.
Re: (Score:2)
There is no excusing a pathological liar who repeatedly tells lies to deceive people with a financial stake in their company.
Re: (Score:2)
No excuses was intended in my post.
Re: (Score:2)
> There is no excusing a pathological liar who repeatedly tells lies to deceive people with a financial stake in their company.
And no excusing electing someone like that to be President. /s
Okay, *maybe* the first time, giving someone the benefit of the doubt, but certainly not the second time. Just sayin' ...
Re: (Score:2)
There's different types of CEO ummm coloring of reality. Altman's problem is lying to other CEOs ... That's a no-no.
Re: (Score:2)
Right. Lying to investors, lying to other CEOs, lying to the public, lying to customers, lying to regulators. They're all lying, they just have different consequences.
Re: (Score:2)
This was from the New Yorker, not the New York Times.
Re: (Score:2)
You are stupid and wrong, so let us guess: you must work for the NYT?
https://www.yahoo.com/news/articles/fact-check-york-times-did-032640330.html
Re: (Score:2)
I don’t have the faintest clue what you’re on about. The paper published a headline with an incorrect acronym and then offered a correction? Is this part of some larger conspiracy?
Claim + Evidence is not just for science (Score:2)
Almost no one can be trusted. Nearly everyone has imperfect knowledge, bias, an agenda, etc. And the more "well-meaning" they believe their agenda to be, the more open they are to pushing it. Accurate or not, it's the ends justifying the means. A "white lie" that leads to a better outcome, or so they believe. This is normal human behavior, left, right, or center.
The solution is simple. Verify what is said, regardless of whether you like or dislike what was said, regardless of whether the person is a friend.
Re: (Score:2)
There are levels. People may make wrong statements that they believe to be true. People may also make wrong statements that they know to be wrong, and that is something altogether different. One is being honestly mistaken and can happen to anyone. The other is lying. The great problem that I see in modern society is that when someone changes their position on something because they have new information, they are accused of waffling or wavering. Stupidly clinging to an old position that you took in good faith and now know is wrong is just as bad as lying about it intentionally.
Re: (Score:2)
> There are levels. People may make wrong statements that they believe to be true.
As I said, "imperfect knowledge" is one of the factors. But the result is the same, one needs to verify the evidence. Perhaps we should repurpose your "level" clarification. How much work you put into evaluating the evidence depends on the "level". A friend says a movie is good, fine, trust and go see it.
> The great problem that I see in modern society, is that when someone changes their position on something, because they have new information, they are accused of waffling or wavering. Stupidly clinging to an old position that you took in good faith and now know is wrong is just as bad as lying about it intentionally.
Which would be less of an issue if more people had a more scientific approach. Also, such accusations are useful. They help spot the liars and the naively trusting. A hazard of all, the left,
Why? (Score:2)
Why did they make him CEO if they knew so clearly that he couldn't be trusted?
Re: (Score:2)
> Why did they make him CEO if they know so clearly that he can't be trusted?
I'm going to guess: greed?
Re: (Score:2)
All that sweet sweet cash for the board members and suits.
Re: (Score:2)
Maybe because the people who made him CEO (and got him back) think everyone is untrustworthy, because they themselves are untrustworthy, and they rationalise it as a virtue?
Re: (Score:2)
Why did they make him the CEO, "Again". FIFY
"Sociopath" is definitely the right word (Score:1)
This man doesn't care what he breaks or destroys, who he hurts or kills. There is absolutely no compassion or empathy in him. He's a monster.
If you think that's harsh, first let me assure you that it's not. Second: read the article. And third: or just pay attention to what he's said and done.
"When someone shows you who they are, believe them the first time." --- Maya Angelou
Re: (Score:2)
"He's a monster." Interesting! so what does that say about the OpenAI employees that threw the hissy fit when the board tried to deal with him?
What changed? (Score:2)
The guy is a caricature of an investment banker, has been from the start of OpenAI. Bit late to complain about it now.
The better question: why did everyone throw money and media adoration at him when it was already obvious?
What a weird question- (Score:2)
-I asked my friend "Satan Lord of Darkness, Deceiver of All, Most Unclean" and he says Altman is cool and pre-sold him a bunch of OpenAI stock, so trust away!
I'll have to try that on my wife (Score:2)
I wasn't lying to you - I'm just conflict-averse!
Betteridge's Law of Headlines (Score:4, Funny)
The New Yorker Asks: Can Sam Altman Be Trusted?
"No." [1]https://en.wikipedia.org/wiki/... [wikipedia.org]
[1] https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines
Re: (Score:3)
> The New Yorker Asks: Can Sam Altman Be Trusted? "No." [1]https://en.wikipedia.org/wiki/... [wikipedia.org]
So this might be the prick that pops the AI bubble?
[1] https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines
Re: (Score:2)
Probably just the OpenAI bubble is going to pop. Same with the Tesla bubble popping. Anthropic doesn't seem likely to pop, but could, as they are too big to be bought now. Google isn't going to pop, Microsoft isn't going to pop - in both cases they have strong profits. Amazon AWS has strong profits, the Chinese companies have strong profits, and all the Gen AI services companies would just fire the divisions and move on.
The market as a whole is not in a bubble, and that's pretty obvious if you look at actual m
Re: (Score:2)
OpenAI is not the only major player right now. Google and Anthropic in the US, and to a lesser extent Meta/Facebook. OpenAI is just the largest. And there are a lot of Chinese models now also. And this isn't the only negative thing that has been said about OpenAI or Sam Altman. One may recall the entire mess where they transitioned from being a non-profit to a for-profit, and all the drama from that, which largely revolved around Altman. When there's a bubble it is very tough to tell what finally i