OpenAI Sam Altman Says the Company Is 'Out of GPUs' (techcrunch.com)
(Friday February 28, 2025 @11:40AM, by BeauHD)
from the supply-and-demand dept.)
An anonymous reader quotes a report from TechCrunch:
> OpenAI CEO Sam Altman said that the company was forced to [1]stagger the rollout of its newest model, GPT-4.5, [2]because OpenAI is "out of GPUs." In [3]a post on X, Altman said that GPT-4.5, which he described as "giant" and "expensive," will require "tens of thousands" more GPUs before additional ChatGPT users can gain access. GPT-4.5 will come first to ChatGPT Pro subscribers starting Thursday, followed by ChatGPT Plus customers next week.
>
> Perhaps in part due to its enormous size, GPT-4.5 is wildly expensive. OpenAI is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by the model. That's 30x the input cost and 15x the output cost of OpenAI's workhorse GPT-4o model. "We've been growing a lot and are out of GPUs," Altman wrote. "We will add tens of thousands of GPUs next week and roll it out to the Plus tier then. [...] This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."
[1] https://slashdot.org/story/25/02/27/2022254/openai-rolls-out-gpt-45
[2] https://techcrunch.com/2025/02/27/openai-ceo-sam-altman-says-the-company-is-out-of-gpus/
[3] https://x.com/sama/status/1895203654103351462
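The pricing comparison above can be sketched in a few lines. The GPT-4.5 rates ($75/M input, $150/M output) come from the article; the GPT-4o rates below are not quoted directly but are derived from the stated 30x/15x ratios, so treat them as an illustration rather than official pricing:

```python
# Per-request cost math implied by the article's pricing.
# GPT-4.5 rates are from the article; GPT-4o rates are back-derived
# from the stated 30x input / 15x output ratios (an assumption).

PRICES = {  # dollars per 1 million tokens: (input, output)
    "gpt-4.5": (75.00, 150.00),
    "gpt-4o": (75.00 / 30, 150.00 / 15),  # $2.50 in, $10.00 out
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed per-million rates."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: a 10,000-token prompt producing a 1,000-token reply.
cost_45 = request_cost("gpt-4.5", 10_000, 1_000)  # 0.75 + 0.15 = $0.90
cost_4o = request_cost("gpt-4o", 10_000, 1_000)   # 0.025 + 0.01 = $0.035
```

At these rates, the same request costs roughly 25x more on GPT-4.5 than on GPT-4o for this prompt/reply mix; the exact multiple depends on the input/output token ratio, since the two models' markups differ (30x vs 15x).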