Your AI-generated password isn't random, it just looks that way
- Reference: 1771423596
- News link: https://www.theregister.co.uk/2026/02/18/generating_passwords_with_llms/
AI security company Irregular looked at Claude, ChatGPT, and Gemini, and found all three GenAI tools put forward seemingly strong passwords that were, in fact, easily guessable.
Prompting each of them to generate 16-character passwords featuring special characters, numbers, and letters in different cases produced what appeared to be complex passwords. When submitted to various online password strength checkers, they returned strong results. Some said they would take centuries for standard PCs to crack.
The online password checkers rated these as strong because they have no knowledge of the patterns LLMs habitually produce. In reality, an attacker who knows those patterns could crack the passwords far faster than the estimates suggest.
Irregular found that all three AI chatbots produced passwords with common patterns, and if hackers understood them, they could use that knowledge to inform their brute-force strategies.
The researchers took to Claude, running the [4]Opus 4.6 model, and prompted it 50 times, each in separate conversations and windows, to generate a password. Of the 50 returned, only 30 were unique (20 duplicates, 18 of which were the exact same string), and the vast majority started and ended with the same characters.
Irregular also said that none of the 50 passwords contained a repeating character, which itself indicates they were not truly random: a genuinely random generator would produce repeats by chance some of the time.
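That absence of repeats is strong statistical evidence on its own. A quick back-of-the-envelope check (the 94-symbol printable-ASCII alphabet below is an assumption for illustration; Irregular does not state the character set it modeled):

```python
import math

ALPHABET = 94  # assumed charset: printable ASCII excluding space
LENGTH = 16
SAMPLES = 50

# Probability a single truly random 16-char password has no repeated character
p_no_repeat = math.prod((ALPHABET - i) / ALPHABET for i in range(LENGTH))

# Probability all 50 independent random passwords are repeat-free
p_all_clean = p_no_repeat ** SAMPLES

print(f"P(one password repeat-free): {p_no_repeat:.3f}")  # roughly 0.26
print(f"P(all 50 repeat-free):       {p_all_clean:.1e}")  # vanishingly small
```

Only about a quarter of truly random 16-character passwords would be repeat-free, so seeing 50 out of 50 with no repeats is, under this assumption, astronomically unlikely.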
Tests involving [6]OpenAI's GPT-5.2 and Google's Gemini 3 Flash also revealed consistencies among all the returned passwords, especially at the beginning of the strings.
The same results were seen when prompting Google's [7]Nano Banana Pro image generation model. Irregular gave it the same prompt, but asked for the random password to appear written on a Post-It note in the generated image, and found the same [8]Gemini password patterns in the results.
The Register repeated the tests using Gemini 3 Pro, which returns three options (high complexity, symbol-heavy, and randomized alphanumeric), and the first two generally followed similar patterns, while option three appeared more random.
Notably, Gemini 3 Pro returned passwords along with a security warning, suggesting the passwords should not be used for sensitive accounts, given that they were requested in a chat interface.
It also offered to generate passphrases instead, which it claimed are easier to remember but just as secure, and recommended users opt for a third-party [10]password manager such as 1Password, Bitwarden, or the iOS/Android native managers for mobile devices.
Irregular estimated the entropy of the LLM-generated passwords using the Shannon entropy formula and by understanding the probabilities of where characters are likely to appear, based on the patterns displayed by the 50-password outputs.
The team used two methods of estimating entropy, character statistics and log probabilities. They found that 16-character entropies of LLM-generated passwords were around 27 bits and 20 bits respectively.
For a truly random password, the character statistics method expects an entropy of 98 bits, while the method involving the log probabilities of the LLM itself expects an entropy of 120 bits.
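The character-statistics method can be sketched as follows: treat each character position independently, tally which characters appear there across the sample, and sum the per-position Shannon entropies. (This is a reconstruction of the general technique, not Irregular's exact code; note too that with only 50 samples the estimate is biased low.)

```python
import math
from collections import Counter

def positional_entropy_bits(passwords: list[str]) -> float:
    """Estimate total entropy of fixed-length passwords by summing
    the Shannon entropy of the characters observed at each position."""
    length = len(passwords[0])
    total = 0.0
    for pos in range(length):
        counts = Counter(pw[pos] for pw in passwords)
        n = sum(counts.values())
        # Shannon entropy of this position: -sum(p * log2(p))
        total -= sum((c / n) * math.log2(c / n) for c in counts.values())
    return total
```

For a uniform 16-character password over roughly 70 symbols this converges to 16 × log2(70) ≈ 98 bits, consistent with the article's baseline figure; conversely, if every sampled password starts with the same character, position 0 contributes 0 bits.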
In real terms, this would mean that LLM-generated passwords could feasibly be brute-forced in a few hours, even on a decades-old computer, Irregular claimed.
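The scale of that gap is easy to check with rough arithmetic (the guess rate below is an assumption for illustration; real-world rates depend heavily on the hash algorithm protecting the password):

```python
# Worst-case time to exhaust a keyspace of 2**bits guesses,
# assuming 10 million guesses/sec (a modest, old-hardware ballpark).
GUESSES_PER_SEC = 10_000_000

def exhaust_seconds(bits: float) -> float:
    return 2 ** bits / GUESSES_PER_SEC

print(f"27 bits: {exhaust_seconds(27):.0f} seconds")         # ~13 seconds
print(f"98 bits: {exhaust_seconds(98) / 3.15e7:.1e} years")  # ~1e15 years
```

A 27-bit keyspace falls in seconds at this rate; even several orders of magnitude slower, it is hours at worst, while the 98-bit baseline remains out of reach.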
Knowing the patterns also reveals how often LLMs have been used to create passwords in open source projects. The researchers showed that searching for the common character sequences across [15]GitHub and the wider web returns test code, setup instructions, technical documentation, and more.
Ultimately, this finding may usher in a new era of password brute-forcing, Irregular said. It also cited previous [16]comments made by Dario Amodei, CEO at Anthropic, who said last year that AI will likely be writing the majority of all code, and if that's true, then the passwords it generates won't be as secure as expected.
"People and coding agents should not rely on LLMs to generate passwords," [17]said Irregular. "Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation."
The team also said that developers should review any passwords that were generated using LLMs and rotate them accordingly. It added that the "gap between capability and behavior likely won't be unique to passwords," and the industry should be aware of that as AI-assisted development and vibe coding continue to gather pace. ®
[4] https://www.theregister.com/2026/02/09/claude_opus_46_compiler/
[6] https://www.theregister.com/2026/01/30/openai_gpt_deprecations/
[7] https://www.theregister.com/2025/11/20/google_ai_image_detector/
[8] https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/
[10] https://www.theregister.com/2026/02/16/password_managers/
[15] https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai/
[16] https://www.youtube.com/watch?v=esCSpbDPJik
[17] https://www.irregular.com/publications/vibe-password-generation
Re: Kinda obvious...
Given sufficient length for the passwords and assuming storage is hashed and salted, I'm not sure the actual risk is that great. A bigger problem is that the generated passwords will now be sitting in the model itself. So, instead of brute-forcing, you just need to engineer the prompt to divulge them. I think this is probably a more sophisticated attack than that proposed last week based on caricatures.
why ... would anyone ask an LLM to create a password
yeah, never mind.
Re: why ... would anyone ask an LLM to create a password
If you are (still) using this shit you deserve everything you get.
Fucking hilarious.
What part of "technical dead-end" is so hard to get?
Re: why ... would anyone ask an LLM to create a password
FWIW I've just had a good few minutes with Mistral trying to solve the current problem we have that the only documentation of the network is a couple of pages on Confluence to which I don't have access, only exports as .DOC files…
Nothing I couldn't do myself but useful all the same: it's all about the right tool for the job.
Lava Lamps
Easy solution, get a load of lava lamps, point a camera at them, and use the images to generate your passwords.
No I haven't, yet ---->
I did a similar "random" experiment a while back, asking the Google Search "AI" to "roll" a six-sided dice and tell me the hypothetical answer - I then asked it to "re-roll" another 5 times - in the first six rolls, it gave me each number exactly once, and after a further 6 "re-rolls" it had given me each number exactly twice.
So its supposed "intelligence" simply understood that a six-sided dice has six numbers, and if I asked six times, it saw nothing wrong with allowing the history of its previous answers to affect what number came up next, to ensure that I saw all six numbers.
It seems to be a fundamental problem with models like this - there is no understanding of the concept of randomness, so anything that requires a degree of entropy completely fails.
It has no concept of ANYTHING.
There is no understanding of anything.
Ask it for a picture of an analogue clock face…
AI security company Irregular.....
I wonder how many of these AI Security Companies are sprouting up and how exactly does one create an AI Security company?
Is there a correlation against the number of "Security" companies per, say, the 100s of AI/LLMs that have been created?
I wonder if we can get a specific ratio?
There must be a load of tech bros realising there's another revenue stream....?
Re: AI security company Irregular.....
"how exactly does one create an AI Security company?"
Think of a good name. If you're in the UK register it at Companies House, otherwise in the local equivalent.
Why has no one yet linked...
... the obligatory [1] guaranteed random 4 ...
[1] https://xkcd.com/221/
Oh holy crap...why would anyone do that?
Every single browser and OS now has a password generator that can do better randomness than.... this.
LLMs aren't deterministic but they are trained to give approximately the same responses every time.
So if you want to crack a LLM-generated password, the first step is to ask it for passwords of the same length and composition and use the output as a start.
Hahahahahaha
Let me elaborate on the title:
HahahahahAHAHAHAHAHAHAhahahaha... hahaha!
Programmatic tool calling
Just ask it to write code to generate a random password, then it'll probably do it in Python for you. Most people however, won't.
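The commenter has a point: asking the model for code rather than a literal string sidesteps the problem, because the randomness then comes from the operating system's CSPRNG instead of the model's sampling. A minimal sketch using Python's standard-library `secrets` module (the composition rules mirror the article's prompt, not any formal requirement):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw characters from the OS CSPRNG via secrets, retrying until
    the result contains lowercase, uppercase, a digit, and a symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Unlike direct LLM output, every character here is independently and uniformly drawn, so the full ~105 bits of entropy for 16 characters over the 94-symbol printable set is actually realized (minus a negligible amount lost to the composition filter).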
Kinda obvious...
.... but cool they checked and did the maths. Good random number generators are difficult to write, and a machine that is designed to produce "probable" results does not qualify (by design). A student of mine did interesting experiments using the temperature parameter on some models, and there with decreasing temperature it becomes really obvious.