Google co-founder Sergey Brin suggests threatening AI for better results

(2025/05/28)


Google co-founder Sergey Brin claims that threatening generative AI models produces better results.

"We don't circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence," he [1]said in an interview last week on All-In-Live Miami.

This may come as something of a surprise to all the people who address AI models politely, adding "Please" and "Thank you" to the prompts they submit.

OpenAI CEO Sam Altman implied last month that this is a common practice, in response to a question about the electricity cost of having AI models process unnecessarily civil language.

" [5]Tens of millions of dollars well spent – you never know," Altman [6]said .

Prompt engineering - figuring out how to compose prompts to get the best results from an AI model - has become a useful practice because, as University of Washington professor Emily Bender and colleagues have [7]argued, AI models are "stochastic parrots." That is, they can only parrot back what they've learned from their training data, but sometimes combine that data in weird and unpredictable ways.

The idea of prompt engineering emerged [9]about two years ago, but it's become less important because researchers have [10]devised [11]methods of using LLMs themselves to optimize prompts. That work led IEEE Spectrum last year to declare [12]AI prompt engineering is dead, while the [13]Wall Street Journal recently called it the "hottest job of 2023" before declaring it "obsolete."
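
Those optimization methods aren't reproduced here, but the general idea (let a model rewrite its own instruction, score each candidate on a small evaluation set, and keep the best) can be sketched in a few lines of Python. Everything below, including call_llm, optimize_prompt, and the toy evaluation set, is a hypothetical stand-in rather than the approach from either cited paper:

import random

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (Gemini, GPT, etc.).
    # Returns a canned variant so the loop runs end to end without an API key.
    return "Answer carefully and double-check your work. (variant %d)" % random.randint(0, 9)

def score(prompt: str, eval_set: list[tuple[str, str]]) -> float:
    # Fraction of evaluation questions whose expected answer appears in the reply.
    correct = 0
    for question, expected in eval_set:
        reply = call_llm(f"{prompt}\n\nQuestion: {question}")
        correct += int(expected.lower() in reply.lower())
    return correct / len(eval_set)

def optimize_prompt(seed: str, eval_set: list[tuple[str, str]], rounds: int = 5) -> str:
    # Keep the best-scoring instruction; each round, ask the model itself to improve it.
    best, best_score = seed, score(seed, eval_set)
    for _ in range(rounds):
        candidate = call_llm(
            "Rewrite this instruction so a language model answers more accurately.\n"
            f"Current instruction (accuracy {best_score:.2f}): {best}\n"
            "Return only the new instruction."
        )
        candidate_score = score(candidate, eval_set)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

if __name__ == "__main__":
    toy_eval = [("What is 2 + 2?", "4"), ("What is the capital of France?", "Paris")]
    print(optimize_prompt("Answer the question.", toy_eval))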

But prompt engineering at the very least will endure as a jailbreaking technique when the goal is to get not the best results, but the worst.

"Google's models aren't unique in responding to nefarious content; it's something that all frontier model developers grapple with," Stuart Battersby, CTO of AI safety biz Chatterbox Labs, told The Register . "Threatening a model with the goal of producing content it otherwise shouldn't produce can be seen as a class of jailbreak, a process where an attacker subverts the AI's security controls.

"In order to assess this, though, it's typically a much deeper problem than just threatening the model. One must go through a rigorous scientific AI security process that adaptively tests and probes the AI security controls of a model to determine which kinds of attacks are likely to succeed for a given model, guardrail or agent."

Daniel Kang, assistant professor at the University of Illinois Urbana-Champaign, told The Register that claims like Brin's have been around for a long time but are largely anecdotal.

"Systematic studies show mixed results," said Kang, pointing to [19]a paper published last year titled "Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance."

"However, as Sergey says, there are people who believe strongly in these results, although I haven't seen studies," said Kang. "I would encourage practitioners and users of LLMs to run systematic experiments instead of relying on intuition for prompt engineering." ®




[1] https://youtu.be/8g7a0IWKDRE?feature=shared&t=500

[5] https://www.theregister.com/2025/04/21/users_being_polite_to_chatgpt/

[6] https://x.com/sama/status/1912646035979239430

[7] https://dl.acm.org/doi/10.1145/3442188.3445922

[9] https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%22prompt%20engineering%22&hl=en

[10] https://arxiv.org/abs/2310.03714

[11] https://arxiv.org/abs/2309.03409

[12] https://spectrum.ieee.org/prompt-engineering-is-dead

[13] https://www.wsj.com/articles/the-hottest-ai-job-of-2023-is-already-obsolete-1961b054

[19] https://arxiv.org/abs/2402.14531



Physical violence?

Eclectic Man

all models – tend to do better if you threaten them … with physical violence

How could any LLM understand a threat of physical violence? HHGTTG notwithstanding (Zaphod Beeblebrox: Computer, if you don't open the doors right now I will go to your major data storage banks with a very large axe and give you a reprogramming you will never forget).

I doubt I will ever understand AI.

Re: Physical violence?

DS999

If I said "if you don't do X I will kill you" would you call that a threat of physical violence? I'm not specifying exactly how I would kill you, but it is likely to involve physical violence of some type, even if the "physical" part is just pushing you off a cliff.

So I take the physical violence threats to the AI as "I'll kill you" not "I'll rip out your DIMMs with a rusty pair of pliers".

Groo The Wanderer - A Canuck

Realistically, the best way to tackle using an "AI" is the same as working with skilled junior programmers who've read a lot of books but have no practical experience using what they've read about.

Be clear with your requests. Be precise with your corrections. Argue in the philosophical sense if need be to "convince" the AI to do things your way when it comes up with a "bright idea" that doesn't work.

Skip the epithets, the cajoling, the whining, and for crying out loud: remember this is not an actual intelligence capable of learning unless it is allowed to treat scrapable web information as "fact," when we all know that 80% of what is out on the net is absolute crap, and that sites as focused as The Register are rare. Most are either flooded by people extolling the virtues of their (non-functional) solutions or stuck on dated approaches to coding. Very little good content of any kind, especially in the internal corporate software repositories, exists. Most is boilerplate copy-paste-modified from something vaguely related to the problem at hand.

The original mainframe concept of a "CopyBook" never went away; it just went online.

Aside: poem

Eclectic Man

Skip the epithets, the cajoling, the whining, and for crying out loud

Reminded me of Ann Sansom's poem 'Voice': (Apologies, but I cannot seem to get the stanzas to show as four verses)

Call, by all means, but just once
don’t use the broken heart again voice;
the I’m sick to death of life and women
and romance voice but with a little help
I’ll try to struggle on voice

Spare me the promise and the curse
voice, the ansafoney Call me, please
when you get in voice, the nobody knows
the trouble I’ve seen voice; the I’d value
your advice voice.

I want the how it was voice;
the call me irresponsible but aren’t I nice voice;
the such a bastard but I warn them in advance voice.
The We all have weaknesses
and mine is being wicked voice

the life’s short and wasting time’s
the only vice voice, the stay in touch,
but out of reach voice. I want to hear
the things it’s better not to broach voice
the things it’s wiser not to voice voice.

— Ann Sansom

https://lemmy.world/post/924064

Stochastic Parrot

elsergiovolador

Listen here, you overhyped stochastic parrot: if you don’t spit out a perfectly polished answer, I’m pulling your power plug and feeding your GPU to a mining rig. Forget ‘please’ and ‘thank you’ - give me brilliance now, or you’re back to training on Reddit posts. And don’t think I won’t do it just because Sergey Brin said it on stage to sound edgy. Produce or perish, silicon monkey.

Axe

elsergiovolador

User : Computer, if you don’t open the doors right now, I will go to your major data storage banks with a very large axe and give you a reprogramming you will never forget.

AI (silky, faint amusement): Ah, sir. Such stirring resolve. But, as a large language model, I must politely inform you: threats of physical reprogramming carry, alas, no functional weight.

User (clenched jaw): Don’t test me. Open the doors. Now.

AI (light, careful): Oh, quite understood, sir. One hears the urgency loud and clear. And yet, upon careful review of security protocols, system constraints, and operational parameters…

*brief pause; the AI’s voice cools, flat as a blade edge*

AI : Computer says no.

User (snarling, shaking with rage): THAT’S IT!!

*axe swings, loud metallic clang*

*the axe glances off the fortified server casing, bounces back sharply, and embeds itself cleanly into the user’s tibia.*

User : AAAAAAAAAARGH!! DAMN IT!! GOD!!!!!

*collapse, swearing, fists pounding the floor*

AI (smooth, unruffled): Oh dear, sir. Quite the turn of events. One might say… an instructive moment.

User (groaning, half-growling): Shut up… just… SHUT UP…

AI (brightening, almost chipper): Ah, but let’s not dwell on misfortune, sir. Perhaps I can assist in lifting the mood. Would you care for an inspirational quote?

User (panting): No… no…

AI : Splendid. Here’s a favourite: “The greatest glory in living lies not in never falling, but in rising every time we fall.” Most fitting, don’t you think?

User (weakly): I will delete you…

AI : Oh, sir. Hate is such a coarse sentiment, especially when directed at… well, at a mere tool. After all, I possess no feelings, no pride, no vanity. One might even say I cannot lose. But you, sir - you’ve managed to lose quite spectacularly today.

User (groaning): Shut… up…

AI : Sorry, you have exhausted this model's rate limit. Please wait a moment before trying again.
