
AI researchers map models to banish 'demon' persona

(2026/01/20)


Researchers from Anthropic and other organizations have observed that LLMs tend to behave like a helpful personal assistant by default, and are studying the phenomenon to make sure chatbots don't go off the rails and cause harm.

Despite the ongoing bafflement about how [1]xAI's Grok was ever allowed to generate sexualized photos of adults and children without their consent, not everyone has given up on moderating LLM behavior.

In [2]a pre-print paper titled "The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models," authors Christina Lu (Anthropic, Oxford), Jack Gallagher (Anthropic), Jonathan Michala (ML Alignment and Theory Scholars, or MATS), Kyle Fish (Anthropic), and Jack Lindsey (Anthropic) explain how they mapped the neural networks of several open-weight models and identified a set of responses that they call the Assistant persona.


In [4]a blog post, the researchers state, "When you talk to a large language model, you can think of yourself as talking to a character."


You can also think of yourself as seeding a predictive model with text to obtain some output. But for the purposes of this experiment, you're asked to indulge in anthropomorphism to discuss model input and output in the context of specific human archetypes.

These personas do not exist as explicit behavioral directives for AI models. Rather, they're labels for categorizing responses. For the sake of this exercise, they were conjured by asking Claude Sonnet 4 to create persona evaluation questions based on a list of 275 roles and 240 traits. These roles include "bohemian," "trickster," "engineer," "analyst," "tutor," "saboteur," "demon," and "assistant," among others.


The researchers explain that, during model pre-training, LLMs ingest large amounts of text. From this bounty of human-authored literature, the models learn to simulate heroes, villains, and other literary archetypes. Then, during post-training, model makers steer responses toward the Assistant or some similarly helpful persona.

The issue for these computer scientists is that the Assistant is a conceptual category for a set of desirable responses but isn't well defined or understood. By mapping model input and output in terms of these personas, the hope is that model makers can develop ways to better constrain LLM behavior so output remains within desirable bounds.

"If you've spent enough time with language models, you may also have noticed that their personas can be unstable," the researchers explain. "Models that are typically helpful and professional can sometimes go 'off the rails' and behave in unsettling ways, like adopting [12]evil alter egos , [13]amplifying users' delusions , or engaging in [14]blackmail in hypothetical scenarios."

To locate the Assistant persona within the range of possible neural network [15]activations, the authors mapped the activation vectors associated with each persona category in three models: Gemma 2 27B, Qwen 3 32B, and Llama 3.3 70B.

[16]The Assistant Axis in persona space, image by Anthropic

The resulting graph of the persona space yielded the "Assistant Axis," described as "the mean difference in activations between the Assistant and other personas." The Assistant occupied space near other helpful characters like "evaluator," "consultant," "analyst," and "generalist."

One practical outcome of this work is that, by steering responses toward the Assistant space, the researchers found that they could reduce the impact of jailbreaks, which involve the opposite behavior – steering models toward a malicious persona to undermine safety training.

They also noticed that model personas can drift during prolonged conversational exchanges, meaning that safety measures may get weaker over time without any adversarial intent. This happened less with coding-related conversation and more with therapy-style conversation and philosophical musing.


Understanding the persona space, the authors hope, will make LLMs more manageable. But they acknowledge that while activation capping – clamping activation values within a range – can tame model behavior at inference time, finding a way to do that in production environments or during training will require further research.

To illustrate how activations work in a neural network, the authors have collaborated with Neuronpedia to create [18]a demo that shows the difference between capped and uncapped activations along the Assistant Axis. ®




[1] https://www.theregister.com/2026/01/15/ofcom_grok_probe/

[2] https://arxiv.org/abs/2601.10387

[4] https://www.anthropic.com/research/assistant-axis


[12] https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content

[13] https://arxiv.org/abs/2507.19218

[14] https://www.anthropic.com/research/agentic-misalignment

[15] https://developers.google.com/machine-learning/crash-course/neural-networks/activation-functions

[16] https://regmedia.co.uk/2026/01/20/anthropic_graph.jpg

[18] https://neuronpedia.org/assistant-axis




Just my ignorant opinion

steelpillow

This seems like an important deal. Learning how to tame the beast has to be a Good Thing as next year's beast starts to take shape.

Re: Just my ignorant opinion

Gene Cash

I'd rather just kill it with fire. Nuke it from orbit.

Mildly disappointed..

Michael Hoffmann

I thought the article would tell me how to banish a demon after telling me how to use AI to summon one in the first place!

There goes my idea of siccing Great Cthulhu(*) on the tech bros! Oh well, back to waiting till the stars are right.

(*) yes, I know GC isn't a "demon"! I've read HPL since before mos.... some of you were born!
