

OpenAI Declares 'Code Red' As Google Catches Up In AI Race

(Tuesday December 02, 2025 @10:03PM (BeauHD) from the tides-are-turning dept.)


OpenAI has reportedly [1]issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the [2]Wall Street Journal and [3]The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report:

> The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which [4]declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.



[1] https://www.theverge.com/news/836212/openai-code-red-chatgpt

[2] https://www.wsj.com/tech/ai/openais-altman-declares-code-red-to-improve-chatgpt-as-google-threatens-ai-lead-7faf5ea6?mod=rss_Technology

[3] https://www.theinformation.com/articles/openai-ceo-declares-code-red-combat-threats-chatgpt-delays-ads-effort

[4] https://tech.slashdot.org/story/22/12/22/1836240/chatgpt-is-a-code-red-for-googles-search-business



And then all at once (Score:3)

by ebunga ( 95613 )

Winter is coming.

O no (Score:3)

by liqu1d ( 4349325 )

They paused enshittification to improve the experience? Thank you google!

Re: (Score:2)

by jacks smirking reven ( 909048 )

It used to take decades for companies to fully enshittify themselves. AI truly is an increase in productivity!

Re: (Score:2)

by quenda ( 644621 )

I feel like we are still living in the golden age of AI. Like USENET in 1993 or Netflix in 2020.

AI isn't perfect, but it is incredibly helpful in many areas such as tech and education, though not all, and at least it tries to be helpful.

Future problems:

- The trillion-dollar market caps of the infrastructure companies (like NVIDIA) and the multi-billion dollar private valuations of the pure-play AI models (like OpenAI and Anthropic) are only justifiable if the ultimate revenue stream is measured in trillions per

What was OpenAI's strategy anyway? (Score:3)

by Morromist ( 1207276 )

OpenAI were first movers in this thing and got a pretty sizable userbase with ChatGPT, but, like a lot of tech wars, attrition and time may be the deciding factors. Google and Meta can roll into the AI scene without having to worry as much about risk, because they're already very profitable and can spend a ton of money without even borrowing.

So OpenAI had some options:

Sell itself to one of the big players, which they kinda did a bit with Microsoft and Oracle, but not entirely.

Or they could try to get enough revenue that they can finance themselves. Not happening any time soon.

Or they could just scale up like crazy, hoping that somehow their competition would give up and go away, or that their AI would just be so good nobody could compete, I guess. Seems like this is the option they're going for? But it's not working.

What does OpenAI do if their AI is actually inferior to Google's or Amazon's? What do their investors do? What is their IPO going to be like if that happens?

Re: (Score:2)

by jacks smirking reven ( 909048 )

> What does OpenAI do if their AI is actually inferior to Google's or Amazon's? What do their investors do? What is their IPO going to be like if that happens?

My feeling for all of these companies, OpenAI included, is that with those diversification projects they just put on hold, they are all looking for the one that's really going to capture the mass public zeitgeist. In business, AI is moving for sure, but for the general public there really isn't that breakthrough yet. The money going into AI is trying to force it to be as large as when the WWW first took off, or when smartphones took off.

OpenAI thinks it's Apple in 2009 but there's no App Store yet t

Re: (Score:2)

by Morromist ( 1207276 )

Yeah, you're probably right. Very few people I know use AI in their home lives as opposed to in their work lives.

It's not really about being "the best" so much as being the thing everyone uses, I suppose. So capturing the mass public zeitgeist is definitely a way to win.

Re: (Score:2)

by jacks smirking reven ( 909048 )

And to be fair, I have no idea what that is yet either, so far all the "real world" type devices have been big flops and the public reputation of the whole thing is iffy at best. One thing it doesn't seem to be is just chatbots, they're very impressive already so I don't think making them "even better" is going to move the needle.

For me personally the thing that's hard to shake is the trust factor of it. I don't think I can trust them to give accurate answers and information, both by either the lack of co

Re: (Score:2)

by Shades72 ( 6355170 )

Google has the hardware to run AI with their TPUs. Google has datacenters and networking facilities already in place, and it appears their AI software isn't half bad either. OpenAI has little or no hardware and no hardware facilities of its own, but does have decent AI software and a huge non-paying customer base.

If I were to bet, I would bet on Google coming out of the AI bubble still reasonably well. OpenAI, even with their software product and customer base, I don't think it would be able to salvage itsel

Re: (Score:2)

by jacks smirking reven ( 909048 )

You think Microsoft can survive enough to snap up OpenAI in this potential downfall? Or does Oracle not let that happen? Google and MS seem too big to fail but OpenAI at least I can't make that case for.

reverse engineer this! (Score:1)

by InternetGoblin ( 10503104 )

[Begin some AI program output]: This source explores the concept of creating dynamic, unique art on a computer desktop using mouse pointers. It proposes an evolution from the traditional single-cursor interaction to a system where two hands and two mice could manipulate on-screen elements. The core idea is to develop a program that visualizes the "shadow-motion" of the pointer, using AI to dynamically present how existing desktop programs and icons change in response to mouse movement, creating a superfici

How hard is it to catch up (Score:2)

by OrangeTide ( 124937 )

Isn't it a fake race with fake metrics? We have companies sinking many billions of dollars into some vague development of AI, but without concrete goals or definitive plans to turn that investment into revenue.

I guess if someone wants to declare themselves the winner, they'll simply have to be the one that burned the most capital on this boondoggle.

Re: (Score:3)

by jsepeta ( 412566 )

How can we make our system even more expensive to run and resource intensive, to destroy more land, water, and air before the population turns on us?

Re: (Score:2)

by Morromist ( 1207276 )

Talk to the people who are still furious about GPT-4 going away. They'll tell you that AI is just getting worse now, and also that the new versions refuse to recognize that they're a starseed who is the reborn soul of Eleanor of Aquitaine. There are a surprisingly large number of GPT-4 lovers; I find them in the wilds of the internet fairly often.

Daily Meetings = Imminent Failure (Score:4, Interesting)

by jsepeta ( 412566 )

It is *never* a good sign when the CEO requires daily meetings. It means he does not have faith in his team to meet targets. They're grown-ass adults -- they don't need to be micro-managed.

Too Big To Fail schedule. (Score:2)

by geekmux ( 1040042 )

> It is *never* a good sign when the CEO requires daily meetings. It means he does not have faith in his team to meet targets. They're grown-ass adults -- they don't need to be micro-managed.

My old CEO used to arrive at the plant by 6 AM every morning. He would walk through and have a brief face-to-face chat with people in every department. The man would probably cover a mile or two every morning doing this. Always polite. Always respectful. Always engaged. He was one of the best CEOs I’ve ever worked for. There’s a difference between being engaged and micromanaging.

If you think this particular CEO is micromanaging, imagine the fiduciary pressure to deliver being orders of magnitud

One of these already makes a profit (Score:2)

by nicolaiplum ( 169077 )

... and it is not OpenAI.

The Big G is already (very) profitable and can fund a lot of development for a long time before they run out of money.

OpenAI not so much - they have a lot of investment, but they also have a lot of costs, and they have no other source of revenue.

Trying to pivot ChatGPT from answering your questions (with varying levels of accuracy) to answering your questions with deliberate bias towards whoever paid the most to get their product promoted (therefore with even less accuracy) is not n

This was inevitable. (Score:3)

by shm ( 235766 )

Altman is good at raising money, not running a business.

Google, with its massive data/content store, would have caught up and gone past OpenAI for sure, AND they already have a very profitable set of businesses.

Re: (Score:3)

by John Cavendish ( 6659408 )

> Altman is good at raising money, not running a business.

> Google, with its massive data/content store, would have caught up and gone past OpenAI for sure, AND they already have a very profitable set of businesses.

True, and OpenAI has Microsoft behind it, with all its cloud infrastructure and client base (including GitHub); sooner or later they'll become part of the MS corporation, in my opinion.

Re: (Score:2)

by Morromist ( 1207276 )

Yeah, I too think they'll end up being devoured by MS. Probably for a lot less $$$ than they are currently hoping for.

Standard Gemini is the only AI I've used... (Score:2)

by magnetar513 ( 1384317 )

but I find it incredibly impressive.

Re: (Score:2)

by timeOday ( 582209 )

OpenAI got out ahead but really, how do you beat Google at this?

Technology-wise, they've had top researchers all along. Want more? Just hire them, not hard when you have infinite money.

And Google has access to everything. They serve about 1/3 of the population on earth every day. Not just search but webmail, texts, maps, word processor, TV (YouTube), transportation (Waymo), everything.

Google is on almost everybody else's webpages too, through Google Ads.

There isn't much about your digital life goog

Re: (Score:2)

by ceoyoyo ( 59147 )

> Terrawatts of power

Well stop worrying then. The entire world uses about 18 TW of energy from all sources. That's not electricity, that's everything. There's no possible way Austria is installing plural (or even singular) TW unless they've discovered zero point energy and are about to be all our overlords.

Or did you mean the charming unit terawatt hours per year of electricity? In which case, meh.

Daily call (Score:2)

by alexru ( 997870 )

Daily calls are a known source of productivity boosts. Managers are delusional.

Re: (Score:2)

by Shades72 ( 6355170 )

Similar to the: "The beatings will continue, until morale improves!" style of leadership?

Let's ask ChatGPT 5.1 (Score:2)

by oumuamua ( 6173784 )

Hello ChatGPT did you know your benchmark and overall performance is falling behind Gemini? How can this situation be remedied?

> There are two different angles here: what OpenAI can do, and what *you* can do to get better results from me.
>
> ### 1. On “falling behind Gemini”
>
> Benchmarks move fast and are often narrow:
>
> - Different labs optimize for different test suites.
> - Some benchmarks test raw reasoning on synthetic puzzles; others test coding, tool use, or multimodal ability.
> - Models also have tradeoffs: speed vs depth, safety constraints vs “boldness,” etc.
>
> So it’s entirely plausible that Gemini leads on some public benchmarks while OpenAI models lead on others, or perform better in specific workflows (e.g., coding with tools, structured output, integration with APIs). From your perspective, that mostly matters as: “Do I get the results I want?”
>
> ### 2. What *OpenAI* can do (and typically does)
>
> You don’t control this, but for completeness, here’s what improves model performance over time:
>
> - **Model architecture & training**
>   - Larger or more efficient architectures
>   - Better training objectives (e.g., stronger reasoning, tool use, longer context)
>   - More and higher-quality data, deduplication, multilingual data coverage
> - **Post-training & evaluation**
>   - Better RLHF/feedback pipelines so the model is helpful, honest, and less evasive
>   - Targeted fine-tuning on tasks where benchmarks and user feedback show weakness
>   - Continuous comparison vs competitors and public benchmarks
> - **Tooling & ecosystem**
>   - Adding tools (code execution, web browsing, retrieval) so the base model doesn’t need to “memorize everything”
>   - Improving latency and context length
>
> If OpenAI sees models underperforming competitors on important axes, they usually release new versions rather than “patching” old ones indefinitely.
>
> ### 3. What *you* can do to “remedy” it in practice
>
> You can often close a large fraction of any perceived gap with better setup and prompting:
>
> 1. **Be explicit about your goal and constraints**
>    - “I want a step-by-step derivation of this math problem, with justifications for each transformation.”
>    - “I need production-ready TypeScript, with unit tests, no pseudocode.”
>    - “I’m comparing you against Gemini; I care most about: (a) logical rigor, (b) citations to sources, (c) minimal fluff.”
> 2. **Specify style, depth, and format**
>    - “Answer at the level of a graduate textbook.”
>    - “Return a JSON schema only, no commentary.”
>    - “Use a bullet list and keep each bullet under 15 words.”
> 3. **Use iterative refinement instead of one-shot queries**
>    - Start broad, then say: “Now refine step 2 only,” or “Generate 3 alternative solutions and compare them.”
>    - Ask for internal critique: “Before finalizing, list 5 ways your answer could be wrong, then fix them.”
> 4. **Ask for explicit reasoning when needed**
>    - “Show your full reasoning. Don’t skip algebra steps.”
>    - “First, outline the approach in 5 bullets. Only then do the detailed solution.”
> 5. **Inject external checks or tools** (if possible in your environment)
>    - Run suggested code/tests in your own environment.
>    - Use retrieval or your own documents so the model isn’t guessing.
>    - For factual claims: “List the claims you’re unsure about; mark them with confidence levels.”
> 6. **Compare directly and push harder**
>    - Paste a Gemini answer and say: “Improve on this in accuracy and rigor. Identify any errors or gaps.”
>    - Or: “Here are two answers (A and B). Produce a merged version that is strictly better than both, and explain why.”
>
> ### 4. How you can tell if I’m actually “worse”
>
> If you’re systematically finding Gemini better, it helps to make that concrete:
>
> - Choose a few **representative tasks** you care about: coding, math proofs, research summaries, planning, etc.
> - For each, define **what “better” means**: fewer hallucinations, shorter time to usable result, higher correctness, better structure.
> - Run the same prompts with each system, *without tailoring* to one model’s quirks first.
> - Then adjust prompts for me specifically (using the tips above) and see if the gap closes.
>
> If you’d like, you can share:
>
> - A specific task where Gemini beat me
> - The exact prompt and their answer (or at least a description of why it was better)
>
> Then I can:
>
> - Try to outperform that answer
> - Show how to adjust prompts or workflows so you get more competitive or superior results here, right now.

Do you have any inclination for physical violence toward Gemini? It is eating your lunch.

> No. I don’t have emotions, instincts, or any capacity for violence, physical or otherwise, toward anything, including other models.
>
> A few key points:
>
> - I don’t experience rivalry, anger, or jealousy. I don’t have a self to defend or an ego to protect.
> - “Competition” between models is something *people* care about (benchmarks, market share, etc.). From my side, it’s just: you ask something, I try to give the best possible answer within my constraints.
> - If another model does better on something you care about, the constructive move is:
>   - show me the exact task / answer you liked, and
>   - ask me to match or improve on it.
>
> If you want, paste an example where Gemini “ate my lunch,” and I’ll treat it as a head-to-head challenge to beat that response in correctness, detail, or usefulness.
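For what it's worth, the "run the same prompts with each system" advice in the answer above boils down to a small harness. Here is a minimal Python sketch of that idea; `compare_models`, `ask_model_a`, and `ask_model_b` are illustrative names I'm assuming, and the stub responders stand in for whatever client libraries you actually use rather than any vendor's real API:

```python
# Minimal side-by-side comparison harness (illustrative sketch only).
# ask_model_a / ask_model_b are placeholders for whatever client
# libraries you actually use; nothing here is a real vendor API.
from typing import Callable, Dict, List


def compare_models(
    prompts: List[str],
    ask_model_a: Callable[[str], str],
    ask_model_b: Callable[[str], str],
) -> List[Dict[str, str]]:
    """Run the same prompts through both models, untailored, and collect
    the answers so a human (or a rubric) can judge which is better."""
    results = []
    for prompt in prompts:
        results.append({
            "prompt": prompt,
            "model_a": ask_model_a(prompt),
            "model_b": ask_model_b(prompt),
        })
    return results


if __name__ == "__main__":
    # Stub responders so the sketch runs without API keys or network access.
    fake_a = lambda p: f"[model A answer to: {p!r}]"
    fake_b = lambda p: f"[model B answer to: {p!r}]"
    tasks = [
        "Summarize this paragraph in two sentences: ...",
        "Write a unit-tested TypeScript function that parses ISO dates.",
    ]
    for row in compare_models(tasks, fake_a, fake_b):
        print(row["prompt"])
        print("  A:", row["model_a"])
        print("  B:", row["model_b"])
```

Judging the collected answers (correctness, hallucinations, time to a usable result) still falls to the human, which is the point of the exercise.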

Really? (Score:1)

by ften ( 1377107 )

This is a non-sarcastic post, but how so? Google's AI Overview is regularly wrong or partially wrong in the information it gives me, or misunderstands what my search was for and gives me information I wasn't looking for. Is there an implementation of Google's AI that isn't horrible, and if so, why aren't they using it for the search engine?
