
AI Support Bot Invents Nonexistent Policy (arstechnica.com)

(Friday April 18, 2025 @11:20AM (msmash) from the oops dept.)


An AI support bot for the code editor Cursor [1]invented a nonexistent subscription policy, triggering user cancellations and public backlash this week. When developer "BrokenToasterOven" complained about being logged out when switching between devices, the company's AI agent "Sam" falsely claimed this was intentional: "Cursor is designed to work with one device per subscription as a core security feature."

Users took the fabricated policy as official, with several announcing subscription cancellations on Reddit. "I literally just cancelled my sub," wrote the original poster, adding that their workplace was "purging it completely." Cursor representatives scrambled to correct the misinformation: "Hey! We have no such policy. You're of course free to use Cursor on multiple machines." Cofounder Michael Truell later apologized, explaining that a backend security change had unintentionally created login problems.



[1] https://arstechnica.com/ai/2025/04/cursor-ai-support-bot-invents-fake-policy-and-triggers-user-uproar/



That's still bad. (Score:5, Insightful)

by H3lldr0p ( 40304 )

Not checking your AI bot responses and not disclosing you created your own problem is still bad. And it's not misinformation when you're not transparent about what happened in the first place.

Congratulations on finding out the hard way you can easily destroy trust in your company.

Re: (Score:3)

by Brain-Fu ( 1274756 )

Hallucination is not some rare-and-random bug, though. It is intrinsic to the nature of large language models (based on what I have read, anyway). Efforts at blocking hallucinations that amount to a long series of "one-off" fixes, all piled on top of each other, are ultimately doomed. You might prevent some of the seriously problematic ones, but that approach will never address the root cause, so hallucinations will continue to crop up.

I asked ChatGPT if it uses the stuff I post to train its models, and it gave

Human writers require proofreaders (Score:2)

by Tony Isaac ( 1301187 )

This is another way in which AI mimics human intelligence. If businesses want to use AI for customer support, they will have to figure out ways to cross-check what the bots say. This doesn't seem like a huge hurdle, but it is a necessary one. Bot 1 generates a response, and Bot 2 confirms that what Bot 1 says is accurate. It might still be possible for incorrect information to slip through, but it becomes much less likely.
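A minimal sketch of that two-bot pipeline in Python, with a hypothetical call_model() helper standing in for whatever LLM API a company actually uses (none of this is Cursor's real setup):

    def call_model(system_prompt, text):
        # Hypothetical stand-in for a real LLM API call; not a real library.
        raise NotImplementedError

    def answer_with_verification(question, policy_doc):
        # Bot 1 drafts a reply, grounded in the real policy document.
        draft = call_model(
            "You are a support agent. Answer only from the policy text provided.",
            f"Policy:\n{policy_doc}\n\nQuestion:\n{question}")
        # Bot 2 checks the draft against the same policy text.
        verdict = call_model(
            "You are a fact checker. Reply SUPPORTED or UNSUPPORTED.",
            f"Policy:\n{policy_doc}\n\nClaimed answer:\n{draft}")
        # Fail closed: anything the checker won't vouch for goes to a human.
        if "UNSUPPORTED" in verdict:
            return "Escalating this to a human agent."
        return draft

The point of the sketch is the fail-closed branch: when the checker balks, the customer gets a handoff to a human instead of an improvised answer.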

Re:Human writers require proofreaders (Score:5, Funny)

by bjoast ( 1310293 )

-"I demand to speak to your supervisor!"

-"Supervisor Bot here. What seems to be the problem, sir?"

Re: (Score:2)

by jacks smirking reven ( 909048 )

They both got it wrong, better bring in the [1]execubots [fandom.com]

[1] https://futurama.fandom.com/wiki/Network_Execubots

Re: (Score:2)

by nightflameauto ( 6607976 )

> Yes, teach your bots to cooperate with each other against the hapless customer, have your competitors win big against you at the proverbial virtual cash register.

But, for a brief moment, they saved a lot of money on support staff, which made their stock valuation climb!

Re: Human writers require proofreaders (Score:3)

by vbdasc ( 146051 )

This bot behaved basically the same as a 5-year-old kid who has imaginary friends or invents fictional events to cover up his mischief. It's not certain that employing a whole kindergarten full of naughty kids would be better than employing just one.

Re: (Score:1)

by drinkypoo ( 153816 )

That raises a good point: what kind of bullshit admonitions did they give the bot, added as tokens that outweigh your question, that led to this? They presumably instructed it to give explanations implying that what users are seeing is in line with what the user agreement says... which the software did, whether it was or not.
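For anyone who hasn't watched how these get wired up: the operator's instructions typically ride along as a "system" message that chat models are trained to weight above the user's turn. A guessed-at illustration in the shape of the common chat-message APIs (the prompt text is invented, not Cursor's):

    messages = [
        # Operator instructions: sent first, and most chat models are
        # tuned to prioritize them over whatever the user says next.
        {"role": "system",
         "content": "You are Sam, a support agent for Cursor. Always give "
                    "a confident explanation for the behavior the user reports."},
        # The user's actual question arrives after, and is weighted below, that.
        {"role": "user",
         "content": "Why do I get logged out when I switch devices?"},
    ]

An instruction like that invented one all but guarantees the model will rationalize a bug as intended behavior.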

Re: (Score:2)

by stabiesoft ( 733417 )

Could be interesting if people start recording conversations with bots. What happens if a bot were to say, "We offer a plan for 10 dollars a month, guaranteed for life," and of course no such plan exists? Say it was for a phone plan where the current plans are 40/mo. Now if a human offered the plan, the company could say it was a mistake. But if a bot, you know, those infallible computers, offered it, would the company be legally bound to it?

Re: (Score:2)

by HiThere ( 15173 )

Well, it's happened once that I've heard of, and the court decided that the company was bound by it. (Sorry, don't remember details. I think it was about an airline fare.)

Re: (Score:2)

by Firethorn ( 177587 )

Jake Moffatt vs Air Canada: the chatbot told him that he could buy a full-price fare and then apply for a refund of the bereavement discount within 90 days, when the real policy was that you had to request it up front.

Air Canada tried to argue that the chatbot was a different legal entity and thus they were not responsible for what it said.

[1]https://arstechnica.com/tech-p... [arstechnica.com]

[1] https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/

Re: (Score:2)

by narcc ( 412956 )

The problem with your "solution" is that you can't trust the output of Bot 2 any more than you could trust the output of Bot 1.

> This doesn't seem like a huge hurdle

Except for the fact that it's impossible, given the nature of the technology.
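Back-of-the-envelope version of that objection: the two-bot scheme only multiplies error rates down if the bots fail independently, and models trained on similar data generally don't. A quick sketch with made-up rates:

    p1 = 0.05        # Bot 1's hallucination rate (invented number)
    p2_miss = 0.10   # chance Bot 2 misses a hallucination, IF independent

    print(p1 * p2_miss)   # 0.005, the optimistic independent case

    # If Bot 2 shares Bot 1's blind spots and misses, say, 80% of the
    # errors Bot 1 actually makes, the residual rate barely moves:
    print(p1 * 0.80)      # 0.04, so most hallucinations sail through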

"This support agent is not supported" (Score:2)

by SodaStream ( 6820788 )

I can see it now: whereas in the past you'd have a support engineer and their manager working with the customer and standing by their word, we'll soon see a world where using the support bot means accepting an EULA that disclaims the bot's interactions as part of your use of the software. Bonus points to companies who offer the support-agent experience to customers for free, and then use the product of that interaction to gather and sell personal details.

What a time to be alive.

Well done, young AI (Score:2)

by greytree ( 7124971 )

Creating new policies on the fly is just the kind of initiative we value at Cursor. You'll go far in this company, son.

companies can be on the hook for what their agent (Score:2)

by Joe_Dragon ( 2206452 )

Companies can be on the hook for what their agent says. More so in the EU, where they can't get out of it under an EULA.

Can you believe it? (Score:2)

by dcollins ( 135727 )

I'm really surprised that the default reception to this story is to actually believe what the company reps are saying after the fact. It seems like a very weird coincidence:

(1) Help bot gives very specific one-device policy.

(2) Separate login system simultaneously shuts people out of multiple devices.

To me, the likelier possibility is that the company did change the policy behind the scenes, and then, when the backlash and cancelled subscriptions started, backtracked -- re

AI bamboozle (Score:2)

by flippy ( 62353 )

This is yet another example of "AI" (I have to put that in quotes because such systems aren't actually Intelligent) not being up to the task. It's disheartening to me to see so many businesses and individuals buying into the idea that "AI" can do anything. It can't. As a 30+ year veteran of the technology industry, I have seen too many failures to believe a single bit of the hype. It's simply not true.

Re: (Score:2)

by PPH ( 736903 )

Perhaps you've never worked for a "Type A" meatsack manager. Who just pulls the occasional policy statement out of his (they are most often men) ass. "This is MY department and I'll do what I fucking want. To hell with federal law!"

It appears that AI has learned quite well.

AI Saves On Labor Costs (Score:2)

by Gilmoure ( 18428 )

So much saving

Very wow!

Real non-ai tech support also does this. (Score:2)

by rahmrh ( 939610 )

Real non-AI tech support also does this, so AI is "improving" to be just as crappy as real first/second-tier tech support. I have had non-AI tech support make random claims that made little or no sense in an attempt to explain the issue I was reporting (and to get me off the call so they could close it). It happens all the time, and seems to ignore what the software should do and/or what the code was designed to do. The goal seems to be to make something up to justify what is being experienced, but they have n
