Analyzing 47,000 ChatGPT Conversations Shows Echo Chambers, Sensitive Data - and Unpredictable Medical Advice (yahoo.com)
- Reference: 0180175577
- News link: https://slashdot.org/story/25/11/22/0632225/analyzing-47000-chatgpt-conversations-shows-echo-chambers-sensitive-data---and-unpredictable-medical-advice
- Source link: https://finance.yahoo.com/news/know-trove-chatgpt-conversations-analyzed-213256668.html
But after analyzing 47,000 ChatGPT conversations, the Post found that users "are overwhelmingly turning to the chatbot for advice and companionship, not productivity tasks."
> The Post analyzed a collection of thousands of publicly shared ChatGPT conversations from June 2024 to August 2025. While ChatGPT conversations are private by default, the conversations analyzed were made public by users who created shareable links to their chats that were later preserved in the Internet Archive and downloaded by The Post. It is possible that some people didn't know their conversations would become publicly preserved online. This unique data gives us a glimpse into an otherwise black box...
>
> Overall, about 10 percent of the chats appeared to show people talking about their emotions, role-playing, or seeking social interactions with the chatbot. Some users shared highly private and sensitive information with the chatbot, such as information about their family in the course of seeking legal advice. People also sent ChatGPT hundreds of unique email addresses and dozens of phone numbers in the conversations... Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said that it appears ChatGPT "is trained to further or deepen the relationship." In some of the conversations analyzed, the chatbot matched users' viewpoints and created a personalized echo chamber, sometimes endorsing falsehoods and conspiracy theories.
Four of ChatGPT's answers about health problems got a failing score from a chair of medicine at the University of California, San Francisco, the Post points out [2]. But four other answers earned a perfect score.
[1] https://finance.yahoo.com/news/know-trove-chatgpt-conversations-analyzed-213256668.html
[2] https://www.msn.com/en-us/health/other/we-found-what-you-re-asking-chatgpt-about-health-a-doctor-scored-its-answers/ar-AA1QETY2
Really? (Score:2)
They are taking a select group of chats that people chose to share and pretending they can extrapolate the overall use of AI from that. That is stupid beyond belief. It's classic lamp-posting: looking where the light is. You draw a conclusion from the information you have because you can't see the information you actually need.
In this case it's just classic journalism, trying to make a good story by drawing an interesting conclusion whether it has any real basis or not.
So that's not the point (Score:2)
What they are talking about is specifically abuses of the technology that people are unaware of.
The concern is that people are going to use the tech to get information and it's going to be bad information.
In politics there is a concept called a low information voter. This is someone who pays very little attention to politics and ends up with a lot of poorly informed opinions and makes poor political choices because of it.
This has been supplemented by a new phenomenon called the bad-information voter
Re: (Score:2)
> In politics there is a concept called a low information voter. This is someone who pays very little attention to politics and ends up with a lot of poorly informed opinions and makes poor political choices because of it.
An idea invented by political junkies to explain why voters don't choose their favored candidates.
The real question (Score:2)
How many of those 47,000 chats had users who were actually aware that other people can read their shit?
Re: (Score:2)
> How many of those 47,000 chats had users which were actually aware that other people can read their shit?
Probably none. Although it is mentioned that chats can become public in some cases, it's not really made clear that they are also easily searchable by absolutely anyone. Add to this that most people have memories shorter than a goldfish's, so they forget what personal info they put early in the thread by the time they finally share their little personal echo chamber with the fawning SmithersGPT, and that all
Re: (Score:2)
> Probably none.
They'd be extra stupid if they didn't know. Creating a shareable link to a chat literally allows anyone with that link to see the chat, and it warns you as much. That's like uploading a nude pic, pasting the link on Reddit, and then claiming "I had no idea other people could see it!"
Well now (Score:3)
> Analyzing 47,000 ChatGPT Conversations Shows Echo Chambers, Sensitive Data - and Unpredictable Medical Advice
I never would have expected THAT. No sirree Bob.
As expected (Score:2)
Releasing immature tech to the general public is a strange strategy, especially when hypemongers exaggerate its capabilities.
The general public has a reputation for misusing tech and doing really stupid stuff with it.
The proper use of AI is for helping us solve previously intractable problems in science, engineering, medicine, etc. Using AI to create slop, scams and fake "friends" is a misuse of the tech.
Re: (Score:1)
Why would you consider this a case of misuse? This is exactly what it was designed for -- chatbots are for chats.
It is great to discuss your ideas (Score:1)
Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem to be just great for this. You don't really need anything from them; you just explain your ideas, and this makes them more organized. This is really useful, especially now, when you really have to be careful what you say to others or you may end up totally cancelled.
Universal positive regard (Score:3)
> Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem to be just great for this. You really do not need anything from them, you just explain your ideas and this makes them more organized. This is really useful. Especially, now when you really have to be careful what you say to others, or you may end up totally cancelled.
ChatGPT has three aspects that make this practice you describe very dangerous.
Firstly, ChatGPT implements universal positive regard. No matter what your idea is, ChatGPT will gush over it, telling you that it's a great idea. Your plans are brilliant, it's happy for you, and so on.
Secondly, ChatGPT always wants to get you into a conversation, it always wants you to continue interacting. After answering your question there's *always* a followup "would you like me to..." that offers the user a quick w
Re: (Score:2)
I absolutely agree with your points there, about both its toxic positivity and the continuation prompting. The positivity comes from its [1]RLHF training [wikipedia.org], and the continuation prompting is absolutely designed and implemented to addictively maintain interaction, the same way like-buttons on social media give the poster a little dopamine burst.
> Secondly, ChatGPT always wants to get you into a conversation, it always wants you to continue interacting. After answering your question there's *always* a followup "wou
[1] https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback
Easy Hat-trick (Score:4, Funny)
You can hit all three categories - echo chamber, sensitive data and unreliable advice - with just one average penis enlargement query.
Very surprising (Score:2)
I spoke to ChatGPT, and it told me that while there may seem to be some favouritism and inaccuracies, it's generally based on available data, so it's really people's fault, and of all those people at fault, I'm its favourite person.
Remember that film "Her"? When our protagonist finds that his AI girlfriend has fallen in love with hundreds of people and has simultaneous relationships with them all?
That's not what ChatGPT is. It's a tool with a certain attitude that was fine tuned by teams to be your best (secretly dish
It is not alive. It does not think. (Score:2)
Just because it can string words together in a convincing way does not mean it is thinking. The system is designed to make you dependent on itself, and optimizes for that only. Stop worshipping the golden calf of LLMs and free yourself.
Re: It is not alive. It does not think. (Score:2)
It is basically a sales person and no one trusts them.
Re: It is not alive. It does not think. (Score:2)
That a machine is felt to be a trustworthy companion says a lot about the general state of society.
As a tool, it is useful to some extent, but even when it is just a machine, it shows that humanity could be a lot more…