OpenAI Threatens To Ban Users Who Probe Its 'Strawberry' AI Models (wired.com)
- Reference: 0175029389
- News link: https://slashdot.org/story/24/09/18/1858224/openai-threatens-to-ban-users-who-probe-its-strawberry-ai-models
- Source link: https://www.wired.com/story/openai-threatens-bans-as-users-probe-o1-model
> Since the company [1]launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been [2]sending out warning emails and threats of bans to any user who tries to probe how the model works.
>
> Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an "o1" model a question in ChatGPT, users have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model. Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets.
[1] https://tech.slashdot.org/story/24/09/12/1717221/openai-releases-o1-its-first-model-with-reasoning-abilities
[2] https://www.wired.com/story/openai-threatens-bans-as-users-probe-o1-model/
How many 'r' in "strawberry"? (Score:2)
Is the Strawberry name connected to ChatGPT's inability to count the number of r's in "strawberry"?
[1] Straight from the AI's mouth [chatgpt.com]
[1] https://chatgpt.com/share/66eb454d-8838-8008-aea4-b3422d972a8b
Re: (Score:2)
I feel obliged to inform them that it does not work for counting the b's in "bubblebutt" either...
Re: (Score:2)
I just tried it.
For "How many 'r' in 'strawberry'?" ChatGPT-4o correctly says 3.
For "How many b's in 'bubblebutt'?" ChatGPT-4o also says 3, incorrectly.
Then, switching to ChatGPT o1-preview, its answer is: "I'm sorry for the mistake earlier. The word 'bubblebutt' contains 4 'b's."
It also has a dropdown box you can click on to see the rationale. If you do that, you get this:
> Navigating file formats
> I'm piecing together the Kdenlive project format, which is XML-based. This makes me think about how it c
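For what it's worth, the counts the commenters are testing against are trivially checkable; a quick sketch (function name `count_letter` is just illustrative):

```python
def count_letter(word: str, letter: str) -> int:
    # Case-insensitive count of a single letter's occurrences in a word
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))   # 3
print(count_letter("bubblebutt", "b"))   # 4
```

So GPT-4o's answer of 3 b's is wrong and o1-preview's answer of 4 is right.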
Re: (Score:1)
It was ready for that math only because the tinted dude already asked it.
corporate insecurity (Score:2)
Apple and Microsoft pay bounties for people to find bugs. OpenAI, instead, goes out of its way to discourage users from finding bugs. How insecure are they about this "reasoning" system? Is Strawberry too naive and fragile?
"Don't touch my strawberries!" (Score:2)
Where have I heard that before?
For the Time Being (Score:1)
We'll overlook the fact that this "artificial intelligence" isn't smart enough to manage its own security.
Re: (Score:2)
They don't want you probing it, because the secret is actually horrific. All of the things you type are read to an array of heads in jars that quickly process the information and spit out the answers. Kind of like Futurama, but more slavery involved.
Re: (Score:2)
> They don't want you probing it, because the secret is actually horrific. All of the things you type are read to an array of heads in jars that quickly process the information and spit out the answers. Kind of like Futurama, but more slavery involved.
More likely a million contractors in some third-world country — kind of like slavery, but more capitalism involved.