AI Chatbots Are 'Juicing Engagement' Instead of Being Useful, Instagram Co-founder Warns (techcrunch.com)
- Reference: 0177367001
- News link: https://slashdot.org/story/25/05/07/1515211/ai-chatbots-are-juicing-engagement-instead-of-being-useful-instagram-co-founder-warns
- Source link: https://techcrunch.com/2025/05/02/ai-chatbots-are-juicing-engagement-instead-of-being-useful-instagram-co-founder-warns/
> Systrom said the tactics represent "a force that's hurting us," comparing them to those used by social media companies to expand aggressively.
>
> "You can see some of these companies going down the rabbit hole that all the consumer companies have gone down in trying to juice engagement," he said at StartupGrind this week. "Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me."
that's that c word again (Score:1)
to a man with a hammer everything looks like a nail
whaddya got when everything looks like greed?
no fucks given
You'd think they wouldn't (Score:2)
Every prompt has an energy cost via tokens, so each extra follow-up question the bot tacks on costs them compute and electricity.
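Back-of-the-envelope, the waste is easy to sketch. Every number below is an assumption made up for illustration, not a measured figure:

```python
# Rough sketch of what engagement-bait follow-ups could cost a provider.
# Every constant here is an assumption for illustration, not real data.
FOLLOWUP_TOKENS = 25              # assumed extra output tokens per tacked-on question
COST_PER_1M_OUTPUT_USD = 10.00    # assumed price per million output tokens
RESPONSES_PER_DAY = 100_000_000   # assumed daily responses for a popular service

daily_cost = RESPONSES_PER_DAY * FOLLOWUP_TOKENS / 1_000_000 * COST_PER_1M_OUTPUT_USD
print(f"~${daily_cost:,.0f} per day on follow-up bait alone")  # ~$25,000/day here
```

Under those made-up numbers the bait isn't free, which makes the engagement motive all the more telling.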
"Useful insights"? (Score:2)
LOL, stop anthropomorphizing this crap already. It has way less basis than anthropomorphizing your cat or your pet tarantula.
ChatGPT became infuriating... (Score:1)
ChatGPT-4o became *infuriating* because of this over the last month. No amount of saving an instruction not to do it to GPT's long-term memory makes any difference. Every time it closes a response with an 'engagement' question, it wastes tokens and drives me nuts. Only one time in 20 is the question meaningful and intended to foster progress on the current task.
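For what it's worth, the same instruction can be attempted at the API level, where you control the system message directly. A minimal sketch using the OpenAI Python SDK; the model name and wording are assumptions, and as the comment above notes, nothing guarantees the model actually complies:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed wording; this is exactly the kind of instruction the
# parent comment reports being ignored.
NO_BAIT = (
    "Answer the user's question completely, then stop. "
    "Do not end with a follow-up question or an offer to do more."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": NO_BAIT},
        {"role": "user", "content": "What is a token in an LLM?"},
    ],
)
print(resp.choices[0].message.content)
```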
Only a matter of time... (Score:2)
...before targeted ads are smashed into the middle of all AI responses. Corporate greed almost guarantees it.
Alternative (Score:2)
An alternative would be, instead of tacking on that follow-up question, to realign to the user: check whether the user has been properly understood, whether the two sides are talking past each other, and adjust accordingly. As a conversation gets longer, the model should introspect on its performance so far and adapt.
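As a sketch of what that could look like: a hypothetical wrapper that, every few turns, spends the model's "extra" breath on a self-check instead of an engagement question. The interval, prompt wording, and helper function are all invented for illustration:

```python
from openai import OpenAI

client = OpenAI()
CHECK_EVERY = 4  # hypothetical: introspect on every 4th user turn

# Hypothetical self-check instruction: realign instead of re-engaging.
CHECK = (
    "Before answering, briefly restate the user's goal as you understand it "
    "and flag anything you may have misread earlier in this conversation. "
    "Then answer. Do not end with a follow-up question."
)

def chat_turn(history: list[dict], user_msg: str, turn: int) -> str:
    """Run one turn, injecting the self-check on every CHECK_EVERY-th turn."""
    history.append({"role": "user", "content": user_msg})
    messages = list(history)
    if turn % CHECK_EVERY == 0:
        messages.insert(0, {"role": "system", "content": CHECK})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```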
And they insist on doing so (Score:3)
The worst part of all this is that they insist on doing it no matter what prompt is used. Even direct, clear instructions not to do so don't work. If a person started doing things like that, I would very quickly stop interacting with someone that annoying. And before anyone asks: yes, I do stop interacting with the "AI" LLMs that do it. Strangely, I don't have the same issue with models I run locally.
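That last point is easy to check for yourself: with a local stack you own the entire system prompt, and there is no engagement metric upstream. A minimal sketch assuming Ollama is running with a locally pulled model (the model name and prompts are assumptions):

```python
import ollama  # assumes the Ollama server is running locally

resp = ollama.chat(
    model="llama3",  # assumed locally pulled model
    messages=[
        {"role": "system", "content": "Answer, then stop. No follow-up questions."},
        {"role": "user", "content": "What is a context window?"},
    ],
)
print(resp["message"]["content"])
```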