
  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

'I'm Not Just Spouting Shit': iPod Creator, Nest Founder Fadell Slams Sam Altman (techcrunch.com)

(Thursday October 31, 2024 @12:40PM (msmash) from the bring-in-the-popcorn dept.)


iPod creator and Nest founder Tony Fadell [1]criticized OpenAI CEO Sam Altman and warned of AI dangers during TechCrunch Disrupt 2024 in San Francisco this week. "I've been doing AI for 15 years, people, I'm not just spouting shit. I'm not Sam Altman, okay?" Fadell said, drawing gasps from the audience.

Fadell, whose Nest thermostat used AI in 2011, called for more specialized and transparent AI systems instead of general-purpose large language models. He cited a University of Michigan study showing AI hallucinations in 90% of ChatGPT-generated patient reports, warning such errors could prove fatal. "Right now we're all adopting this thing and we don't know what problems it causes," Fadell said, urging government regulation of AI transparency. "Those could kill people. We are using this stuff and we don't even know how it works."



[1] https://techcrunch.com/2024/10/29/tony-fadell-takes-a-shot-at-sam-altman-in-techcrunch-disrupt-interview/



We do know how it works though (Score:2)

by mukundajohnson ( 10427278 )

Lossy most-common-path word generation. We just don't fully know the effect it will have in different domains when it's trusted. I agree that in the medical field it will surely help identify common cases, but I think it would be a crapshoot for cases with less data.

Re: (Score:1)

by Rei ( 128717 )

> Lossy most-common-path word generation.

You are describing Markov chains. LLMs are not Markov chains.

Re: We do know how it works though (Score:3)

by Baloroth ( 2370816 )

You're half right: they're not Markov chains. But the OP isn't describing a Markov chain; he's talking about the transformers used in LLMs, which use the output tokens from prior steps as input to probabilistically generate the next token (based on which word is most likely next, given the entire context and the training weights).
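
For the curious, that loop in code is roughly the following; a minimal sketch using the Hugging Face transformers library and GPT-2 (my own choice of model and prompt, not anything from the article):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("The thermostat said", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[:, -1, :]                # scores for the next token only
            probs = torch.softmax(logits, dim=-1)               # probability over the whole vocabulary
            next_id = torch.multinomial(probs, num_samples=1)   # sample, not just the single most common word
            ids = torch.cat([ids, next_id], dim=-1)             # append it and feed it back in as context

    print(tok.decode(ids[0]))

Every new token is conditioned on everything generated so far, which is exactly the "output tokens from prior steps as input" part.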

Re: (Score:2)

by ceoyoyo ( 59147 )

That's not how transformers work, nor language models in general. It's one of many ways you can train them.

Re: (Score:2)

by i kan reed ( 749298 )

They're not entirely unlike Markov chains with ridiculously long look-aheads. N-dimensional transformer matrices, I mean. The math isn't actually that dissimilar if you write it out.

Then again, I can't articulate a clear case that my own writing process is provably different from that, either.
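
For contrast with the transformer loop sketched above, here's what an actual first-order word-level Markov chain generator looks like (toy corpus of my own, purely illustrative); its only "memory" is the single previous word:

    import random
    from collections import defaultdict

    # First-order word-level Markov chain: the next word depends only on
    # the current word, not on anything earlier in the sequence.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    chain = defaultdict(list)
    for cur, nxt in zip(corpus, corpus[1:]):
        chain[cur].append(nxt)

    word = "the"
    out = [word]
    for _ in range(10):
        word = random.choice(chain[word]) if chain[word] else random.choice(corpus)
        out.append(word)
    print(" ".join(out))
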

Re: (Score:2)

by ceoyoyo ( 59147 )

The Markov property, i.e. the thing that makes a Markov chain a Markov chain, is that only its present state influences its future state.
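
In symbols (standard textbook statement, not something from the thread): P(X_{t+1} | X_t, X_{t-1}, ..., X_1) = P(X_{t+1} | X_t). An autoregressive LLM, by contrast, conditions each new token on the entire preceding window of tokens, not just the most recent one.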

He means it's unpredictable (Score:2)

by HalAtWork ( 926717 )

I'm sure what he meant to say was that AI results can be unpredictable. That means that when you use it in a situation with a specific set of expectations, you can get undesirable results that are difficult to prevent, and the AI can also produce results entirely outside the scope of those expectations.

Re: (Score:2)

by rocket rancher ( 447670 )

> Lossy most-common-path word generation. We just don't know fully the effect it will have in different domains when trusted. I agree that in the medical field it will surely help identify common cases, but I think it would be a crapshoot with cases of less data.

You are describing a Markov chain. The Markov chain analogy is easy to reach for, but it's pretty off the mark when it comes to describing how LLMs actually work. Markov chains are limited to generating the next "thing" (like a word) based only on the immediately preceding state and some preset probabilities, which works fine for simple sequences. But LLMs operate on a different level entirely. They don't just pick the "most common next word." Instead, LLMs use transformers to weigh every token in the entire context when predicting the next one.
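
To unpack "weigh every token in the entire context": the core of single-head, unmasked self-attention looks roughly like this, with toy sizes of my own and none of the real machinery (masking, multiple heads, layers) around it:

    import torch
    import torch.nn.functional as F

    # Scaled dot-product attention: every position gets a weighted view of
    # every other position, not just the immediately preceding one.
    def attention(Q, K, V):
        scores = Q @ K.transpose(-2, -1) / K.shape[-1] ** 0.5  # similarity of each token to all tokens
        weights = F.softmax(scores, dim=-1)                    # attention weights sum to 1 per token
        return weights @ V                                     # mix of values drawn from the whole context

    seq_len, d = 6, 8                 # toy sizes, purely illustrative
    x = torch.randn(seq_len, d)       # stand-in for token embeddings
    out = attention(x, x, x)          # self-attention: Q, K, V all come from the same sequence
    print(out.shape)                  # torch.Size([6, 8])
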

We don't know how the human brain works either (Score:2, Interesting)

by brunes69 ( 86786 )

I have no idea how Tony Fadell's brain works. I don't even know if he is actually conscious, or if it is an illusion. We have no idea how consciousness even works. We don't even have a firm grasp on how memory works.

So why should I trust anything Tony Fadell says, or what any other human says? If we need to understand 100% how something works before we can use or trust it, we're screwed already.

Re: (Score:2)

by dfghjk ( 711126 )

Because, unlike AI, we have overwhelming experience with the human brain even though we don't know the precise "how" of it.

"If we need to understand 100% how something works before we can use or trust it, we're screwed already."

But we don't need that, and you know we don't. This is what's known as a bad faith argument.

Re: We don't know how the human brain works either (Score:2)

by brunes69 ( 86786 )

That argument is the exact argument he is making, so how can it be a "bad faith argument"?

He said we can't trust AI because we don't know how it works. I am directly challenging that statement as demonstrably false. We trust things without knowing how they work on a daily basis.

Re: (Score:2)

by ceoyoyo ( 59147 )

He didn't say you should understand it before you use it. Well, he might have, but he didn't say it in the summary anyway.

We do understand how generative language models work. The "generative" part means they can make up unpredictable stuff that is not strongly limited by their input. You can (and do) feed them random noise and they output things that sound reasonable.

That's awesome for a chatbot, image generator, robot artist, or Hollywood scriptwriter. Maybe not so good for a transcription service.
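
As a rough illustration of the knob between "boringly predictable" and "making stuff up": sampling from a made-up next-token distribution at different temperatures (toy logits of my own, not from any real model):

    import torch

    logits = torch.tensor([4.0, 2.0, 1.0, 0.5])   # toy next-token scores

    def sample(logits, temperature):
        probs = torch.softmax(logits / temperature, dim=-1)  # higher temperature flattens the distribution
        return torch.multinomial(probs, num_samples=1).item()

    # Low temperature almost always picks the top-scoring token;
    # high temperature makes the unlikely tokens show up far more often.
    print([sample(logits, 0.2) for _ in range(10)])
    print([sample(logits, 2.0) for _ in range(10)])
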

Fadell sound

Re: (Score:2)

by geekmux ( 1040042 )

> I have no idea how Tony Fadell's brain works. I don't even know if he is actually conscious, or if it is an illusion. We have no idea how consciousness even works. We don't even have a firm grasp on how memory works.

We define consciousness based on the simple medical definition that put that word in our vocabulary in the first place. If you can't recognize a conscious person, rest assured you can recognize an unconscious one, including the obvious limits on what they can contribute to society and even on their ability to survive in that state. We recognize and have defined normal, functional states for memory as well, because we know what failing memory looks like, and what it can no longer do.

> So why should I trust anything Tony Fadell says, or what any other human says? If we need to understand 100% how something works before we can use or trust it, we're screwed already.

Some things are in fact that simple in life. We kno

Lawsuits (Score:2)

by JBMcB ( 73720 )

The regulation is already here. If you are harmed by AI screwing up, you sue the entity using the AI and the entity that made the AI. Just like with any other tool.

Re: (Score:3)

by MachineShedFred ( 621896 )

Well I'll remember to file a civil suit against the hospital after I'm dead.

Great advice!

Speaking of hallucinations... (Score:2)

by Rei ( 128717 )

> He cited a University of Michigan study showing AI hallucinations in 90% of ChatGPT-generated patient reports

Surely he means the claim circulated by Reuters of a University of Michigan researcher who said that he found hallucinations in 8 out of 10 audio transcriptions generated by WhisperAI?

Re: (Score:2)

by dfghjk ( 711126 )

Everything generated by these applications is a "hallucination". Neural networks not only do not precisely memorize information, they are designed NOT to do that. It only appears that they do at times because of their enormous size.

The term "hallucination" is really just used to mean "bad result"; the fact is that ALL results are simply made up, it's all fake. Neural networks are imperfect memories, something you want at times and something you don't at other times. The issue is that VCs, and the Altmans, and

"I'm not just spouting shit" isn't doing the work (Score:1)

by Rosco P. Coltrane ( 209368 )

you think it does.

"I'm just not spouting shit" would have been more convincing.

"The major difference between a thing that might go wrong
and a thing that cannot possibly go wrong is that when a
thing that cannot possibly go wrong goes wrong it usually
turns out to be impossible to get at or repair."

- One of the laws of computers and programming revealed.