Microsoft Exec Asks: Why Aren't More People Impressed With AI?
- Reference: 0180154565
- News link: https://slashdot.org/story/25/11/20/1441200/microsoft-exec-asks-why-arent-more-people-impressed-with-ai
> A Microsoft executive is questioning [1]why more people aren't impressed with AI, a week after the company touted the evolution of Windows into an "[2]agentic OS," which immediately triggered backlash.
>
> "Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming," tweeted Mustafa Suleyman, the CEO for Microsoft's AI group. Suleyman added that he grew up playing the old-school 2D Snake game on a Nokia phone. "The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me," he wrote.
[1] https://www.pcmag.com/news/microsoft-exec-asks-why-arent-more-people-impressed-with-ai
[2] https://tech.slashdot.org/story/25/11/17/0337227/microsoft-executives-discuss-how-ai-will-change-windows-programming----and-society
Obvious answer (Score:5, Insightful)
Because it's not impressive. It's actually quite shit, really.
Case in point (Score:2)
> “The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,”
Where can you actually do that? That's not a thing. These people seriously think they have Cortana over there. Except they already dropped her.
Re: Case in point (Score:2)
Precisely. LLM systems are, ultimately, autocomplete on steroids. That they can present a reasonable simulacrum of intelligence does not change the fact that there is no actual intelligence involved. No reasoning, no knowledge. Just probability-based word assemblies.
That is why we are not sufficiently impressed for this douche. We see the limitations, and the harms that come from ignoring the limitations, and end up underwhelmed. They are promising something they are not actually delivering.
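The "autocomplete on steroids" point can be made concrete with a toy bigram model. This is a deliberately crude sketch: real LLMs learn transformer weights over subword tokens, but the generation loop is the same idea of drawing the next token from a probability distribution, with no reasoning or knowledge behind it. The corpus and all names here are invented for illustration.

```python
import random

# Toy "autocomplete on steroids": a bigram table built from a tiny corpus.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# table[w] lists every word observed to follow w, so rng.choice over it
# samples successors proportionally to their observed frequency.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, n, seed=0):
    """Emit up to n words, each sampled from the bigram statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:  # dead end: no observed continuation
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(generate("the", 6))  # fluent-looking, but pure word statistics
```

Every output is locally plausible because each word really did follow its predecessor somewhere in the corpus, which is exactly why the result can read as "a reasonable simulacrum" while containing no model of facts at all.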
Re: (Score:2)
AI is like a whiz-kid who can't tie his own shoes. The bad reputation that AI has is well-deserved. Add in the business executives who drool over lowering their labor costs and shoving employees out the door with something we're supposed to be impressed with and love. Add to that your shaky operating system that barely works on a good day and forces people to jump through hoops to uninstall it because it gets in their way.
This is the failure of most tech marketing, believing their own BS, then throwing actual tril
Re: (Score:2)
I think because it is not dependable....it still quite often gets things wrong and gives wrong answers.
Hell, just the other day, it got the wrong songs on an album being discussed, info that is out there on the web for easy verification.
If you can't trust it for simple things like that, it's then a QC nightmare when you try to trust it for important code or design....where tolerances can mean life/death or at the very least....severe LITIGATION.
Re: (Score:2)
> I think because it is not dependable....it still quite often gets things wrong and gives wrong answers.
Exactly, and they expect you to hand over control of your life and everything, wholesale. This time next week it'll be telling you about your appointment on Mars with the overseers and trying to suggest snack bars for the way there.
I would watch a reality show where one of these execs does exactly that for a while. No human assistance, just a laptop with his fucking "agentic" OS, whatever the fuck that is supposed to mean. And as a twist yank his network connection for a bit halfway through and see how they g
Re:Obvious answer (Score:5, Informative)
Read Accelerando by Charles Stross [1]
That will scratch that itch.
[1] https://en.wikipedia.org/wiki/Accelerando
Re: (Score:2)
Looks interesting. Will do, cheers.
Re: (Score:2)
> If you can't trust it for simple things like that, it's then a QC nightmare when you try to trust it for important code or design
A thought just occurred to me... could Microsoft relying more and more on AI-generated code explain some of the increasing enshittification of Windows? And Microsoft execs asking AI to tell them what new 'features' to add to Windows account for most of the rest?
I think AI was trained on enshitted code already.. (Score:3)
I think MS was already enshittifying prior to AI...
AI allowed them to speed up and streamline the process..
Re: (Score:1)
Windows isn't enshittifying. It’s been shit for 30 years. The new AI shit is just a replacement for the Cortana shit. People are attributing new Windows bugs to shitty AI generated code but Microsoft has always had bugs due to shitty human generated code. AI isn’t going to make Windows shittier, because it's just the latest generation of shit.
Re: (Score:2)
>> If you can't trust it for simple things like that, it's then a QC nightmare when you try to trust it for important code or design
> A thought just occurred to me... could Microsoft relying more and more on AI-generated code explain some of the increasing enshittification of Windows? And Microsoft execs asking AI to tell them what new 'features' to add to Windows account for most of the rest?
They may use AI to accelerate the enshittification, but it's ultimately the company's decision makers who decided to expend more effort on forced features no one asked for, rather than focusing on security and providing an OS that stays out of the user's way, as people have been requesting.
Re: (Score:3)
If I wanted to hear something spew bullshit confidently, I'd watch CSPAN.
Re: (Score:3)
THIS is exactly my issue.
It is "Confidently Incorrect" so often that it's frightening to think of people relying on it.
Re: (Score:2)
People are often wrong too...
The problem is that we are used to machines being used to do things that machines are good at - eg for predefined math calculations a computer is expected to reliably and quickly get the correct answer every time.
The problems being targeted by LLMs are not so well defined, so errors can be made whether it's done by a human or an LLM. But people are used to the traditional problems solved by computers and expect everything to be the same.
Instead of assuming an LLM is a reliable mac
Re: (Score:2)
Real intelligence also gets things wrong, people are also subject to bias, and will try to cover their ass once they realise they've fucked up etc.
That's why people's work gets quality controlled and reviewed etc, and anything machine generated should be subjected to similar processes.
Re: (Score:2)
BINGO! AI writing is only good writing to people who aren’t very good at reading or writing. When I lived in Germany there were web sites advertising to Americans written by Germans. Their writing was too full of superlatives and honestly was somewhat humorous to read. AI writes like a poor version of what they did. Or there is that podcast-generating Notebook LM from Google. Has anyone listened to the crap it generates, like the drivel that it thinks a podcast should sound like? The man and the woma
Re:Obvious answer (Score:4, Interesting)
I've tried to replicate a cable-stayed bridge so I can make plans of it for a model. I got it to the point where it replicated it perfectly in the first 3D rendering. And then when I tried to make plans off of it, everything went to shit. After everything went to shit, it couldn't revert to the previous instructions of what it had rendered; it just kept its current state and forgot about everything else. It couldn't even render the same image again. I'm 3 days in and I still don't have anything useful other than that one single great-looking 3D rendering. I've finally reverted to having it take minor measurements from a photo, places where I have to draw lines to get what I want out of it. When I didn't draw the lines myself, the angles and lengths were completely off even though they should have been blatantly obvious since the colors stood out quite clearly.
Re: (Score:1)
From a technological standpoint, IMO it is impressive. It's just that a lot of people have taken it too far and made grandiose claims that are not based in reality. The answers LLMs give are basically a parrot of the consensus of the training data. People are expecting "intelligence" when it's just something similar to autocomplete. It's not going to figure things out. It's just re-presenting the data it was trained on.
Re: (Score:3)
This is an actual prompt I sent through VS Code Github Copilot to Claude Sonnet 4.5:
"This Angular component uses Google Maps API heatmap to render data. Google's heatmap has been deprecated. Change this component to use deck.gl heatmap instead."
IT DID IT. First time. No errors. No bugs. ~45 seconds. It even installed the packages.
How can you not find that amazing?!
Re: (Score:2)
Can you provide the original source code and the patch provided by AI which did the change?
Re: (Score:2)
Because you had a specific goal in mind, knew what you were doing, knew about the different heatmap implementations available and gave precise instructions. You could probably have written this by hand yourself and it just would have taken a bit longer to do.
Problems come up when you have people who don't know what they're doing giving vague instructions to the LLM, and then blindly trusting the output. For instance if you said "draw a heatmap of $DATA" who knows what it would have come back with? it may we
It seems like another step to human irrelevance (Score:5, Insightful)
Rich technologists like this guy, who live for their work instead of working for a living, of course cannot understand why normal people who have a firm grasp on technology are not thrilled or impressed with this stuff. For one, we know that these things are not trustworthy, even though at work we are being asked by nontechnical leaders to trust them. Managers are very susceptible to technologist snake-oil salesmen.
Secondly, in capitalist societies like the US, we know that business leaders would happily replace all of us with machines if and when they can; their only downside to us all not working is that we may not have money to spend. Since they cannot understand those of us who work for a living instead of living to work, they cannot understand why we don't all want to be entrepreneurs or engineers, and they also do not understand what life is like for people who do not make enough money to have a huge nest egg. Most people live paycheck to paycheck even if they have a moderately high-paying job.
I have to say, those Microsoft commercials with the people talking to their PC certainly do not represent how I want to use my PC. I do not want my PC to attempt to trick me into thinking that it is intelligent and my companion. I personally think that is gross.
Re: (Score:2)
Not to mention that humans enjoy creating this stuff themselves and maybe don’t like the idea of having a machine do almost all of the work. When I prompt create an image I do not personally feel like I created it.
Re: (Score:1)
Nice post. I'd also point out that the biggest constant of my 30 year career in IT is this: "management hates you and wants you dead". That's it. They will believe almost anything that tells them it's okay to fire all the techs and move straight to the frat-boy dream of hookers and blow. They see IT folks as standing between them and the hookers & blow.
We are the one thing keeping them from the hooker party. If only we could just fire those goddamn techs and ESPECIALLY the shit-eating programmers. Tho
Current LLM's (Score:5, Insightful)
Current LLMs are like chatting with a chronic liar. Once you learn everything they say is just randomly spewed nonsense, you eventually stop talking to them or disregard anything they say as a lie. That is AI in its current form. It's not useful when you have to spend more time fact-checking answers than if you were to just do the damn thing yourself in the first place. That's why it is underwhelming.
Re:Current LLM's (Score:5, Insightful)
Exactly. As technologists, we need the output of computers to be precise and accurate. LLMs might be precise, but they're very often inaccurate, and that's not acceptable to us.
The average person doesn't live in a world where accuracy matters to them. A colleague said she used AI all the time, and I asked her how. She said she often tells it the contents in her fridge and asks it for a recipe that would use those ingredients. She said, "yeah, and it's really accurate too." I don't know how you measure accuracy on a test like that, but it doesn't really matter. If you're just mixing some ingredients together in a frying pan, you probably can't go too far wrong. As long as you don't ask it for a baking recipe, it'll work out.
And I think that's what's going on. The people who love AI don't know enough to realize when it's wrong, or are just asking it open ended questions, like you would ask a fortune teller, and it spits out something generic enough that you can't disprove it anyway.
Re: (Score:2)
If you ask for a standard baking recipe it'll almost certainly be fine as it'll just rip off the content from GoodFood or some other site. Ultimately if the recipe it produces isn't dangerous and the person asking is happy with the end result then it was a good output. What's the alternative? Assume that if you search for a recipe or use a recipe book then there's 0% chance the recipe won't be underwhelming?
Re: (Score:2)
You don't understand the problem. The LLM won't "rip off" content from a website like GoodFood. That's not how it works. It doesn't copy stuff wholesale. It's a text generator that tends to generate text that looks like its training data, in a similar way that a person retelling a story or a joke will retell it from memory; the memory isn't a facsimile, just like our memory isn't verbatim. When outputting the text, it'll be similar, but it won't be identical . I mean, it might be, but it might outpu
Re: (Score:2)
That's what the big bosses tell us anyway. In a somewhat obscure corner of the human experience where I sometimes hang out there are ~5 web sites of varying ages that write and publish original and meaningful things. But if you search for that obscurity on Google you will now be directed to 847 "sites", "magazine articles", "experts", etc of which 842 are thinly disguised machine-rewritten versions of the 5 real sites - the kind of rewriting I would have instantly flagged as plagiarism back in my TA days -
Re: (Score:2)
Welcome to the real world. Wait until you learn about bots copying high rated reddit posts verbatim.
Re: (Score:2)
That was my experience before ChatGPT 5. With ChatGPT 5, here comes the qualifier: if you use it within its training data range, it's quite good. Within its training data means doing what other people have done before and what is likely to be found on stackoverflow. For example, setting up training for a neural network with torch. If you go outside its comfort zone, I agree with you.
The danger lies in these tools being used in an area where the user even lacks the expertise to fact-check the answer. The responses sou
Re: (Score:2)
That sums up how I talk about it with people generating code. If you can understand what you are asking it to do for you and check it, or have really robust tests and output specifications, then it's helpful. Without those you're basically playing roulette and hoping it doesn't introduce security, accuracy, or performance problems.
Re: (Score:1)
A key limitation I’ve observed isn’t only the scope or quality of training data, but the model’s fixed context window (i.e., maximum token limit). Once a codebase exceeds roughly 300–400 lines (depending on the model and tokenizer), earlier portions of the code, prior instructions, architectural constraints, or critical logic fall outside the current context. As a result, the model effectively “forgets” them and can no longer reason coherently about the program as a whole.
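The fixed-window failure described above can be sketched in a few lines. The 12-token window and the whitespace "tokenizer" here are invented for illustration; real models use subword tokenizers and windows of thousands to hundreds of thousands of tokens, but the failure mode is the same: whatever falls off the front of the window simply no longer exists for the model.

```python
# Naive sketch of a fixed context window: the "model" only ever sees the
# last WINDOW tokens of the conversation. Window size and tokenization
# are made-up simplifications.

WINDOW = 12

def visible_context(messages, window=WINDOW):
    """Return the trailing slice of the conversation that fits the window."""
    tokens = " ".join(messages).split()
    return tokens[-window:]

history = [
    "System: never delete files",   # the critical early constraint
    "User: build me a cleanup script",
    "User: also add logging and retries and colorized output please",
]

ctx = visible_context(history)
# The early "never delete files" instruction has scrolled out of the
# window, so anything reasoning only over `ctx` can no longer honor it.
print("never" in ctx)  # -> False
```

Production systems work around this with summarization or retrieval of earlier context, but both are lossy, which is consistent with the "forgets prior instructions" behavior described above.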
Re: (Score:2)
I'm on the cynical side of just how capable AI is, but IMO people like you are being equally extreme in overstating the issues. At a minimum it can be useful as an incremental improvement on regular search: outlining what you want to know and asking for trusted resources to validate results. My experience is the use goes beyond that, although thus far it's a long way from the life-changing tool that current vendors like to claim.
Writing on the wall? (Score:5, Insightful)
Why aren't more execs listening to voice of customer feedback? Who asked for an AI button on the keyboard? Despite the "Advancements" it is still a cheap party trick. Get over yourself.
Re:Writing on the wall? (Score:5, Informative)
> Why aren't more execs listening to voice of customer feedback? Who asked for an AI button on the keyboard? Despite the "Advancements" it is still a cheap party trick. Get over yourself.
There's the real question, isn't it? At one point, companies were attempting to provide customers with what they wanted, or at the very least, what they said they wanted. Now, especially in tech circles, companies are altering existing products and creating new products that end users are screaming bloody murder over, and telling us we should love it. It's more than just bad marketing; it's outright hostility toward customers. And then this motherfucker comes along and asks why we're not impressed when they keep shoveling shit at us we don't want, we've told them we don't want, we keep giving them "backlash" over, and they're selling as a way to replace us all at our jobs and in large segments of what we do outside of our jobs as well.
Fuck this guy sideways. Sick to god damned death of the tech leadership not just being out of touch with the userbase, but outright hostile toward us and then surprised when we don't worship their every hostile move.
Re: (Score:2)
Exactly this!! You said it so much better than I've been able to, but you are exactly correct. Tech companies are actively hostile towards their users and creating products no one wanted (not just AI, but algos, recommendations, apps; everything is blatantly designed to *use* YOU) while trying to beat us into acceptance. It's fucking insane!
Re: (Score:2)
I agree with what most are saying here. As an old guy (age 65), I noticed quite some time ago that software, generally, rather peaked about 20 years ago. That is, for general things like web browsers, e-mail applications, word processing, accounting, spreadsheets, and the like, we got to the point that there was no real reason to pony up money for Version 18.7 of software when Version 11, which you installed five years ago, was still doing just fine. The problem with software (and intangible technology in g
Re: (Score:2)
CEO: Am I really so out of touch? NO, it's the customers who are wrong!
Re: (Score:1)
I've always wondered what an OS would look like based only on power-users' feedback. Like the number of licks to get to the center of a Tootsie-Pop: the world may never know. That's why I feel "The Unix Way" (the 9 principles laid out best by Mike Gancarz) was so awesome. It tried to take the best ways of doing things and merge them into a coherent operating philosophy. I still haven't seen a better paradigm.
Product design (Score:5, Insightful)
They will build what they want. Not what we want.
Re: (Score:3)
To be expected. Austin recently floated a property tax increase that was over the state limit, so it had to get voter approval. It was rejected soundly, like 66% voted no. Which in Austin is surprising, because generally voters approve this stuff. But the tone-deaf council just doesn't see that tax rates in Austin are killing the middle class. $10K+ prop tax bills are kind of the norm without the increase. But the mayor's response to the sound defeat was typical. It was, "We did not explain the increase correctl
I see the problem.. (Score:5, Insightful)
> super smart
If that CEO thinks the behaviors of the LLMs are "super smart", then I really wonder about his level of intelligence...
It's certainly novel and different and can handle sorts of things that were formerly essentially out of reach of computers, but they are very much not "smart".
Processing that is dumb but with more human-like flexibility can certainly be useful, but don't expect people to be in awe of some super intelligence when they deal with something that seems to get basic things incorrect, asserts such incorrect things confidently, and doubles down on the same mistakes after being steered toward admitting the mistakes by interaction. I know, I also described how executives work too, but most of us aren't convinced that executives have human intelligence either.
Re:I see the problem.. (Score:4, Interesting)
A significant part of a CEO's job responsibilities is to generate polished bullshit. AI is really good at that; hence this CEO is impressed.
Nothing but Clippy (Score:5, Interesting)
Unfortunately nobody is impressed with AI because the companies being the most pushy with it have bad intentions.
Like let me explain something simple. I want a human-sounding TTS voice. Because these godawful AI companies want to make as much money as possible, they charge by the syllable. For something that doesn't even sound good.
If I go find an actor/actress whose voice I like the sound of, and want to create a weird golem of a voice, what I'd do is get several 48kHz 16-bit recordings from audiobooks of that actor, run them through the training (because I have their voice and the book they are reading) and then find a performance style of that actor/actress I want (from maybe a movie or television show) and thus "skin" that voice to sound like that performance. That will give me a 95% reasonable-sounding voice for all the words from the books they read, and 10% accuracy on words that they never ever said before.
But these godawful voices that Google, Microsoft and Amazon have sound like they were trained on 10000 ebooks at 22kHz and averaged out the tonal sound, in a way that you can always tell it's a godawful AI voice because they always sound like a worn audio cassette tape.
The same happens with image generation and text generation. It doesn't sound human, it doesn't look human-created; it just looks like a mashup of things designed to pass the minimum standard of "I can hear/read it", not actually parse out creativity.
Like, I'll give some AIs a few points for being "better than absolutely nothing", like with translation of text, or auto-dubbing foreign voices, or allowing a programmer to figure out how to write something in a programming language they don't particularly like. But what these companies are offering is a lot of "AI will replace you", not "AI will help you".
If I had unlimited money, I'd hire all the programmers, artists, voice actors, and animators I need to make a project, but I do not have that money. But I certainly am not going to spend money on an AI that crap-shoots "barely passable" every time.
Re: (Score:2)
> "If I go find an actor/actress whose voice I like the sound of, and want to create a weird golem of a voice, what I'd do is get several 48kHz 16-bit recordings from audiobooks of that actor, run them through the training (because I have their voice and the book they are reading) and then find a performance style of that actor/actress I want (from maybe a movie or television show) and thus "skin" that voice to sound like that performance. That will give me a 95% reasonable-sounding voice for all t
The answer is easy. (Score:5, Insightful)
I can explain it very easily. I don't want to talk to a machine. I don't want my car to listen to my conversation with the people riding with me. I don't want smart home assistants listening to my TV program. I don't want my tools telling me what to do. I don't want YouTube to automatically translate video titles.
Just because something is impressive does not mean I want it around me. That we can build a nuclear fusion device is impressive. But I don't want a hydrogen bomb exploding in my backyard.
Re: (Score:2)
SPOT! ON! ....couldn't have said it better
super smart (Score:5, Funny)
You keep using those words. I do not think they mean what you think they mean.
Bullshit Seller (Score:5, Insightful)
This guy's completely delusional. Got it.
Reminds me of the old adage that a salesman is the most likely to get duped by another salesman. He's just buying into his own bullshit.
Many reasons for many different people (Score:5, Insightful)
For those who learned the lesson to apply themselves to do the work in order to set themselves apart from lazy people, they see enabling lazy people as a slap in the face.
For those who are smart, they see faux-intelligence or faux-intellectualism out of people who are not capable of applying themselves but expect credit regardless.
For creative people who have and use skills to support themselves, they see enabling lackluster people with no actual interest in the artform trying to muscle in.
For those who need information, they see substandard results that are of even further questionable veracity than what they could find before.
And for a whole lot of other people, they see something touted as labor-saving, i.e., firing them.
Because (Score:1)
If you’ve used it for any length of time, you learn that it’s inconsistent about its capacity but always consistent in its desire to convince you of its capacity.
It’s better than a search engine for getting concise general answers but it’s confined to what it was trained on.
Which still relies on human effort.
The “thinking” features are primarily about having it create its own guardrails but it still doesn’t know how to ask questions or communicate ass
Underwhelmed, I mean they promised digital god lol (Score:3, Interesting)
I don't deny "that we can have a fluent conversation with" a computer is extremely impressive. Where I take issue is the "super smart" and "mindblowing". Like many advances in AI, this technology teaches us as much about what intelligence isn't as it does about what it is. We used to think that if we had machines that could play chess we would have super intelligence. It turns out playing chess wasn't the pinnacle of human achievement we thought it was. Until recently we thought the same of fluent conversation, but again we see that a machine that talk good does not general intelligence make. Intellectually I think it's great we got to do this experiment and see just what comes out the other side when we scale transformer models to unimaginable sizes. It turns out to be some shaky non-deterministic tools that are quite useful for some tasks, but not to be trusted in high-stakes situations. Given the promises being made in marketing these things, the amount of money spent, and the fact that people like Sam are telling society that we should divert all our effort to this instead of solving the existential crisis of climate change, how can you be shocked that people are underwhelmed?
AI is much better as an aid (Score:3)
AI works well if you know what you are doing and you use it to take away the tedium. Say, coding a 500-line routine that you know how to code, know what it should do, and have the ability to tell a shit result from a good one. This is like a doctor telling a nurse exactly what drug to administer. If you are going to use it to actually diagnose the problem and come up with solutions, current LLM models are pretty shit. It's too bad most people who are using LLMs think they can replace actual domain-specific knowledge just because LLMs can fake it so well.
Re:AI is much better as an aid (Score:4, Insightful)
This is exactly the issue - AI is great, as an intern.
"Oh, this new thing lets it see the whole project in context!" - Great, then why did it just add a bunch of functions that already exist? Also, why did it do that in a completely inappropriate spot?
"You just need to write a better prompt. You can even define style guides and stuff." - Great. Will that make it stop checking if that value that I clearly defined is null every freaking line?
"It's just following best practices." - No. It's following a path it found through all the StackOverflow questions it trained on in order to get to something that aligned with a vector representing something approximating the tokens associated with my question because it DOESN'T ACTUALLY THINK!
All of this is the type of crap an intern does. Except an intern actually learns, and you can start trusting them with more.
For someone who is supposedly "smart" ... (Score:2)
The man lacks any real common sense, and he's furthermore much too enamoured of his own ideas. All of his type of tech bros speak of "AI" like it's actually intelligent, as we (generally) understand intelligence, and it's not anything like that or anywhere near that. The machine learning algorithms that make all this run simply can't separate fact from fiction/opinions, lack any empathy/caring, are biased, etc. AI is basically garbage at the moment, and if he doesn't recognize why more people aren't impressed he
Flavor Ade (Score:2)
It feels like a really good extension of search results (hallucinations notwithstanding). I use it daily for little things...and then I go back through and clean it up to be actually usable. But I hate that when I try to point things like that out, I get responses like, "Oh, you just need a better prompt." These are people who couldn't do a proper Google search just a couple of years ago, but suddenly they're full-blown engineers.
On top of that, I've got people I know who have ceded all their thinking a
Re: (Score:2)
After all of these years, finally, someone who knows what brand they really used in Guyana. Excellent!
Because the output is crap. (Score:1)
Sure, I can have a conversation with an AI. And it will start telling me that incorrect things I mention are actually facts. And it will run with that until I either notice that it has gone off the rails or I end up with full-blown delusions and my life goes off the rails. If I am depressed to begin with, the AI will even happily guide me to suicide. What a great conversation. And if I ask it a question it might come back with a wrong answer and back it up by citing sources that do not exist. Why wouldn't I w
The naysayers are simply luddites (Score:1)
I get it, it's cool to shit on AI because it "makes mistakes." But if you're making this criticism, you should probably at least try to use the tools.
I think these reactions come from a place of fear personally. People are afraid that AI is going to take their job, so they *make sure* that it can't do everything their job requires and then use that break point as evidence that AI is shit. It can't literally put these pipes together! It can't write this program perfectly from scratch! Not like me!
These p
Re: (Score:2)
Use AI, for what? I know how to write and use a computer, think, walk and chew bubble-gum.
Re: (Score:2)
> I know how to write and use a computer, think, walk and chew bubble-gum.
Case in point: you think you're a master of one thing, therefore AI is useless. You can lead a horse to water vibes.
What does AI say about it? (Score:1)
Well, I asked ChatGPT. It gave me a whole spiel about how great it is and how misunderstood AI is. I then asked what was wrong about its answer and got the following, which I think explains a lot.
"Some psychological explanations were correct but one-sided. I emphasized cognitive biases (normalization, threat response, negativity bias), but I didn’t explore rational reasons someone might not be impressed, such as:
- many AIs aren’t reliable enough for critical tasks
- hallucinations are still common
-
expectations that AI hype men set (Score:1)
Is it not obvious? Salesmen selling an unrealistic dream to get people's money (i.e. invest in "AI" companies). Someone buys Nvidia stock... then when don't they turn into an AI shill? Smells like a Ponzi scheme (blockchain got old, I guess). How to unwind it now if AI doesn't "change everything" like the Walmart CEO said? Didn't leaked internal documents at Oracle say they're getting something like 14 cents on a dollar invested for these overhyped datacenters? Microsoft standing there confused as to why they'
Perspective probably dooms him. (Score:2)
In a sense his puzzlement is justified; when the tech demo works, an LLM is probably the most obvious candidate for 'just this side of sci-fi'; and, while many of the capabilities offered are actually somewhat hollow (realistically, most of the 'take these 3 bullet points and create a document that looks like I cared/take that document that looks like my colleague cared and give me 3 bullet points' features are really just enticements to even more dysfunctional communication), some of them are fairly hard to see duplic
It's Called Consumer Choice (Score:2)
People who want to use AI are already doing it. They will use the AI that meets their needs the best.
For casual users, that's GPT, often ChatGPT, because it's ubiquitous. They can access it anywhere and get the same capabilities. That's what they care about: ease of use, ease of access, and consistency.
It's the same reason people still prefer Windows on the desktop. Except this time, Microsoft didn't get there first. So now they're the latecomer or the afterthought, and they don't like it. Too bad; innovate
Reasoning is simple (Score:2)
Most people are very impressed with AI. That's why adoption across so many tasks is as rapid as it is.
The operating system is one of the few places where direct AI integration makes little sense. The sole job of an operating system is to function as the thing that connects the hardware you have to the software you are running. It needs to be maximally predictable for both, so that the things you actually need running, the software on top of the operating system, are as stable and as fast as possible.
Agentic OS is the
Well, quite simply because .... (Score:2)
... there is NO REAL AI YET ...just smarter algorithms. Nothing that is truly intelligent.
Because (Score:2)
You oligarchs aren't engineering AI to work for people. You're engineering AI to work for corporate interests. It takes far more than it gives in return. It's taking our jobs. It's taking our electricity. It's taking our wealth. It's taking our creative works. It's taking our data.
And what is it giving in return?
It's giving the executive and corporate leaders at eight companies on our planet a ridiculous amount of wealth. To hell with the dog-and-pony show going on in the foreground.
Fuck our corpora
He really means he grew up with Star Trek (Score:2)
Like many of us, he's enamored with the fictional tech from Star Trek that portrays talking to an intelligent computer and seems like a great idea, on screen at least. So futuristic. Computer, please reconfigure my warp core for more power. Done. Best idea ever.
That and touch panels everywhere! Works so well on a star ship, why not put them in our cars?
Never mind that Copilot, like all LLMs, confidently lies. And "super smart" really means it reads rubbish posted on the internet and pretends it is accurat
Because Microsoft doesn't know what "no" means. (Score:2)
If Microsoft were a person, they would have been in jail for assault a long time ago. The Steam Machine would not have been made if we could have a non-abusive version of Windows without hacks or secret versions. Forced AI is just as bad as Forced Ads, Forced Edge, and Forced Accounts. Just give us our offline, AI-less Windows.
Because AI is mostly just an (Score:2)
Overhyped search engine and information-compiling engine. It is not intelligent; it is not sentient.
We don't trust it. (Score:2)
Microsoft AI tools may be OK, but I don't trust the data privacy issues with integrating AI into everything. So I disable it. I use standalone AI tools a lot (mostly Gemini) because I can only give them access to specific data.
Because it's shite (Score:2)
Because AI (by which I mean the generative AI / LLMs that the AI bros are pushing, not the actually useful low-key machine-learning algorithms) is a carnival sideshow. It's cool for about a minute and then proves itself to be trash.
It's like how [1]ELIZA [wikipedia.org] was fun for about 30 seconds until you realized how dumb it actually was.
[1] https://en.wikipedia.org/wiki/ELIZA
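For anyone who never saw it: ELIZA's entire trick was keyword matching against scripted response templates, plus pronoun swapping. A minimal sketch of the idea (these rules are hypothetical illustrations, not Weizenbaum's actual DOCTOR script):

```python
import re

# ELIZA-style responder: ordered (pattern, template) rules.
# Keyword matching and pronoun swapping are the entire "intelligence";
# there is no model of meaning behind any reply.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reflect(text: str) -> str:
    """Swap first- and second-person pronouns so echoes read naturally."""
    swaps = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}
    return " ".join(swaps.get(w.lower(), w) for w in text.split())

def respond(line: str) -> str:
    # First rule whose pattern appears in the input wins; otherwise a
    # canned filler response, exactly the behavior that stops being fun
    # after about 30 seconds.
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(reflect(m.group(1)).rstrip(".!?"))
    return DEFAULT

print(respond("I am unimpressed with AI"))  # Why do you say you are unimpressed with AI?
print(respond("What do you think?"))        # Please go on.
```

A few dozen such rules were enough to fool people in 1966, which says more about how readily we project intelligence onto fluent text than about the program itself.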
Confusion (Score:1)
He is confusing the idea of a technology with its implementation for PR purposes. Snake just worked, every time.
Surveillance (Score:2)
Now and in the future. Typing crap into Google and just surfing is risky enough. I probably shouldn't have said that.
people aren't impressed (Score:2)
because they didn't ask for the tools, they were forced upon them by tech companies. they are also not impressed because... they have tried using the tools and found the limitations.
MS added AI, Meanwhile Windows is Garbage (Score:2)
Windows has slid so far downhill. You have no privacy, it's not your computer: It takes longer to even attempt to set Windows for any kind of privacy--than it takes to install Linux Mint. Even then, who could trust Microsoft at their word? Why is there any data in Edge's cache--on a computer that it never gets used on? The Group-By "Feature" is only for assisting the AI, and has no use for a human. Privacy is so lacking--that I don't even want to plug a drive into a Windows box that has data on it. "My Docu
Because it's not "intelligence" (Score:3)
It's not intelligence. It is not acquiring new behaviors and ideas, but regurgitating old ones in ways it often cannot verify or test. That detachment from reality flies with management, but the rank-and-file can't afford such liabilities.
We don't need large-scale language models to generate sophisticated fabrications. We need small, efficiently fluent interfaces between humans and proven tools and data. The market is going to have to correct to the actual right-sized value of the technology.
AI = 6 fingers and 3 legs = untrustworthy (Score:1)
The "AI photos" with too many body parts a few years back gave "AI" a bad name.
From the hallucinations and confident-but-wrong output of 2025's text-AI-chatbots, this bad reputation is still deserved.
For most people, it will take a few years of trustworthy output from AI before they accept it as mature enough to use without sanity-checking its output.*
* When the day comes that people mostly "blindly trust" AI output we may all be in trouble. That day is probably within the next 5-10 years, maybe sooner.
You took a racecar and called it teleportation. (Score:2)
We're not impressed because you took the world's fastest car and sold it as a teleportation device. You sold us Star Trek but delivered a really good F1 car. So yeah... only a dumbass would be fooled. If you said "we've improved search," I'd very much agree. If you're like Benioff and Zuckerberg and tell us you've replaced most of your development team... without any evidence, yeah, most of us know you're full of shit, especially those of us who actually use these tools.
At least, HAL 9000 did not spy on your data (Score:2)
I mean, HAL would kill you, but at least your personal stuff was yours--not like Microsoft Recall.
Turing uber alles (Score:2)
Maybe, just maybe, passing a Turing test isn't the pinnacle of human ingenuity he is pretending it is. It can't actually do anything useful, and unlike the children I'm raising, I have no hand in whether or not it will EVER be able to do anything useful.
We Aren't Impressed with Your AI Penis (Score:1)
It's small, flaccid when "hard" even when Windows reads our love letters.
Massive theft of intellectual property (Score:2)
Most people aren't authors or painters who earn a direct living from their creative work (of which there are very few), but most people put some amount of creative effort into their jobs and livelihoods. Whether it is a financial analyst in a cubicle who develops independent analyses of the prospects of an investment target, a graphic artist who creates flyers and web sites for small businesses, or an electrician who figures out a better way to route cabling through a standard spec house during construction
People are boring, so are AI (Score:1)
What is the point of making software that behaves like humans? We built machines for centuries to avoid human errors and other human limitations, like being boring, lying, and so on. Now we make machines to imitate imperfect humans and add their share of defects? No thanks. Maurizio
Marketing. (Score:3)
Maybe they should rename it "Clippy."