

Google germinates Gemini 3.1 Pro in ongoing AI model race

(2026/02/19)


If you want an even better AI model, there could be reason to celebrate. Google, on Thursday, announced the release of Gemini 3.1 Pro, characterizing the model's arrival as "a step forward in core reasoning."

Measured by the release cadence of machine learning models, Gemini 3.1 Pro is hard on the heels of recent model debuts from [1]Anthropic and [2]OpenAI. There's barely enough time to start using new US commercial AI models before a competitive alternative surfaces. And that's to say nothing of the AI models coming from outside the US, like [3]Qwen3.5.

In a [4]blog post, Google's Gemini team contends that Gemini 3.1 Pro can tackle complex problem-solving better than preceding models, and cites benchmark test results (which should be [5]viewed with some skepticism) to support that claim. On the [6]ARC-AGI-2 problem-solving test, Gemini 3.1 Pro scored 77.1 percent, compared to Gemini 3 Pro, which [7]scored 31.1 percent, and Gemini 3 Deep Think, which scored 45.1 percent.


Gemini 3.1 Pro outscores rival commercial models like Anthropic's Opus 4.6 and Sonnet 4.6, and OpenAI's GPT-5.2 and GPT-5.3-Codex in the majority of cited benchmarks, Google's chart shows. However, Opus 4.6 retains the top score for Humanity's Last Exam (full set, test + MM), SWE-Bench Verified, and τ²-bench. And GPT-5.3-Codex leads in SWE-Bench Pro (Public) and Terminal-Bench 2.0 when evaluated using Codex’s own harness rather than the standard Terminus-2 agent harness.

[9]6,000 execs struggle to find the AI productivity boom

[10]Android malware taps Gemini to navigate infected devices

[11]Your AI-generated password isn't random, it just looks that way

[12]AI agents can't teach themselves new tricks – only people can

"3.1 Pro is designed for tasks where a simple answer isn't enough, taking advanced reasoning and making it useful for your hardest challenges," the Gemini team said. "This improved intelligence can help in practical applications – whether you're looking for a clear, visual explanation of a complex topic, a way to synthesize data into a single view, or bringing a creative project to life."

To illustrate potential uses, the Gemini team points to how the model can create website-ready SVG animations and translate the literary style of a novel into the design of a personal portfolio site.


In the company's [14]Q4 2025 earnings release [PDF], CEO Sundar Pichai said, "Our first party models, like Gemini, now process over 10 billion tokens per minute via direct API use by our customers, and the Gemini App has grown to over 750 million monthly active users."

Google is making Gemini 3.1 Pro available via the Gemini API in Google AI Studio, Gemini CLI, Antigravity, and Android Studio. Enterprise customers can access it via Vertex AI and Gemini Enterprise, while consumers can do so via the Gemini app and NotebookLM.
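For developers poking at the new model, here's a minimal sketch of what a call through the Gemini API might look like, using Google's google-genai Python SDK. The model identifier string below is an assumption based on Google's usual naming pattern rather than a confirmed ID, so check Google AI Studio for the exact name.

    # Minimal sketch: calling Gemini 3.1 Pro through the Gemini API
    # with the google-genai Python SDK (pip install google-genai).
    # NOTE: the model name is assumed from Google's naming pattern;
    # verify the exact identifier in Google AI Studio.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-3.1-pro",  # assumed identifier
        contents="Produce a small, website-ready SVG animation of a bouncing ball.",
    )

    print(response.text)  # prints the generated markup/text

The same SDK can also be pointed at Vertex AI by constructing the client with vertexai=True plus a project and location, assuming enterprise access follows the existing pattern.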


The model is also accessible via several Microsoft services including [16]GitHub Copilot, Visual Studio, and Visual Studio Code. ®




[1] https://www.theregister.com/2026/02/18/anthropic_debuts_sonnet_4_6/

[2] https://openai.com/index/introducing-gpt-5-3-codex-spark/

[3] https://qwen.ai/blog?id=qwen3.5

[4] https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/

[5] https://www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/

[6] https://arcprize.org/arc-agi/2/

[7] https://blog.google/products-and-platforms/products/gemini/gemini-3/#gemini-3-deep-think


[9] https://www.theregister.com/2026/02/18/ai_productivity_survey/

[10] https://www.theregister.com/2026/02/19/genai_malware_android/

[11] https://www.theregister.com/2026/02/18/generating_passwords_with_llms/

[12] https://www.theregister.com/2026/02/19/ai_agents_cant_teach_themselves/


[14] https://s206.q4cdn.com/479360582/files/doc_financials/2025/q4/2025q4-alphabet-earnings-release.pdf


[16] https://github.blog/changelog/2026-02-19-gemini-3-1-pro-is-now-in-public-preview-in-github-copilot/




Doubletalk as usual

vogon00

"..core reasoning.."

Stop passing this stuff off as a reasoning engine. It is not capable of that, in just the same way as Tesla's autopilot plainly isn't.*

If 'Artificial Intelligence' was more honestly billed as 'Analysed Probabilities' I'd be less annoyed, but probably just as hostile towards it, as it still can't deliver on its hype.

* I see they've recently been forced to grudgingly move half a step closer towards the truth with their claims.

Re: Doubletalk as usual

LionelB

Has it occurred to you that human intelligence involves rather a lot of 'Analysed Probabilities'? When we navigate our way through a complex world, that's pretty much exactly what we do. In fact it's pretty much all you can do when your basis for predicting what's going to happen next, and how your own actions will affect that, is severely constrained by the information available to you. When you're driving a car, for example, you're constantly predicting what other drivers are going to do, and what's around the next bend. But you can't get into other drivers' heads, and you can't see around corners; so you're effectively weighing up probabilities in real time.

As for "reasoning", interestingly the big models seem to be moving away from pure LLMs towards hybrid models that deploy various learning mechanisms and logical manipulation. Ironically, this echoes early (failed) attempts at AI, in the 60s, 70s and 80s, when researchers imagined you could "reason" your way through complex situations via pure formal (or at least fuzzy) logic. That turned out to be a blind alley – those attempts essentially foundered on combinatorial explosions; hence the subsequent move towards statistical network-based models. Bear in mind (literally!) that humans intelligence is underpinned by noisy neural networks, not formal logic gates. Our intelligence is more statistically grounded than you might imagine.

Of course I am not claiming that current AI is anywhere near human levels of intelligence, and deplore the hype as much as the next person. (Then again, it's hardly a level playing field, given our several billion years' worth of evolutionary R&D and mad hard wetware advantage in terms of processing power.)

Re: Doubletalk as usual

Sorry that handle is already taken.

"Has it occurred to you that human intelligence involves rather a lot of 'Analysed Probabilities'? When we navigate our way through a complex world, that's pretty much exactly what we do."

I would argue that we rely a lot more on heuristics than reasoning or analysis because a decision often has to be made too quickly.
