Growth of AI Adoption Slows Among US Workers, Study Says (axios.com)
- Reference: 0175458229
- News link: https://slashdot.org/story/24/11/12/2014203/growth-of-ai-adoption-slows-among-us-workers-study-says
- Source link: https://www.axios.com/2024/11/12/ai-adoption-slows-slack-study
> If AI's rapid adoption curve slows or flattens, a lot of very rosy assumptions about the technology -- and very high market valuations tied to them -- could change. Slack said its most recent survey found 33% of U.S. workers say they are using AI at work, an increase of just a single percentage point. That represents a significant flattening of the rapid growth noted in prior surveys.
>
> Global adoption of AI use at work, meanwhile, rose from 32% to 36%. Between the lines: Slack also found that globally, nearly half of workers (48%) said they were uncomfortable telling their managers they use AI at work. Among the top reasons cited were a fear of being seen as lazy, cheating or incompetent.
[1] https://www.axios.com/2024/11/12/ai-adoption-slows-slack-study
Where is management? (Score:2)
They should be telling employees what their corporate AI policy is. They should also listen with an open mind to employees about what they see as helping them be more productive. Fears like this: "Among the top reasons cited were a fear of being seen as lazy, cheating or incompetent" should not be an issue.
AI is a company decision not an individual employee decision.
Re: (Score:3)
> They should be telling employees what their corporate AI policy is. They should also listen with an open mind to employees about what they see as helping them be more productive. Fears like this: "Among the top reasons cited were a fear of being seen as lazy, cheating or incompetent" should not be an issue.
> AI is a company decision not an individual employee decision.
I should think that security would be one of the TOP risks of using AI... at least any AI resource that is hosted and operated externally.
Re: (Score:2)
That is why management should be doing their due diligence and deciding the company's stance on the use of AI. I know my company weighs security first and foremost before approving any AI use. There are a lot of claims about private-tenant AI deployments, but I am not convinced about the privacy controls.
Re: (Score:2)
And don't forget performance and accuracy. An AI support service may be cheaper than humans, but if its responses are useless garbage, your customers are going to go away.
Early adoption rates were experimentation. (Score:3)
Once you experiment, you either find that it helps a bit and retain it, or it slows you down a bit and you set it aside. Early adopters are early adopters. There tends to be a fairly significant portion of the business world that's a bit conservative with new tech. Those folks aren't jumping on board until it seems to have given a competitor an edge they feel they are missing, or until they see something truly compelling and amazing about it. Right now the "compelling" is coming from the AI prognosticators, not the AI itself. And the AI prognosticators are coming off very much like snake oil salesmen. The whole "it's gonna save the planet" thing hits the ear wrong, for a myriad of reasons, and the louder that shit gets yelled, the more people go, "Oh, it's one of those deals," and turn away.
Re: (Score:3)
> Right now the "compelling" is coming from the AI prognosticators, not the AI itself. And the AI prognosticators are coming off very much like snake oil salesmen. The whole "it's gonna save the planet" thing hits the ear wrong, for a myriad of reasons, and the louder that shit gets yelled, the more people go, "Oh, it's one of those deals," and turn away.
Yep, pretty much. While many people cannot recognize a scam if it is wrapped nicely enough, many others can.
bottom of the class (Score:1)
I've seen three kinds of people interested in AI:
1. Slashdot types that might be interested but at least need to keep up and have a formed opinion.
2. Those that buy the marketing and think the current state of AI is really going to help the organization get more done.
3. Folks that see this as giving them some edge. I tend to think these are the bottom of the class folks or they at least feel inadequate.
I don't use the current garbage AI much because it doesn't do much for me that I'm not already easily doing.
Re: (Score:2)
Case 3: why are they bottom of the barrel? AI is just a tool. If a tool can make me more productive, what's wrong with that?
I know (like almost everyone else here, with rare exceptions) that AI is not magical and doesn't think or have a personality or need human rights or any of that crap. But it has proven very good at finding patterns and extracting information from very large, seemingly random data sets, among other things, and, used by humans as a tool to enhance their work rather than replace it, I believe it
Re: (Score:2)
Case 3: Yes, very much. It is basically the same effect that makes incompetent coders chase the latest language hypes, frameworks and tools in a vain hope that this one great tool will finally make them non-crappy coders. Of course that never works out.
Case 2: For bullshit-work that may even be true. Not for any real work though. Of course, there is a lot of bullshit work being done in the corporate world.
Developer Tools (Score:2)
I've installed every AI code assistant there is into my IDE so they can all argue.
Re: (Score:1)
Who wins?
Do they uninstall the losers for you?
Some form of AI-icide?
Re: (Score:3)
And that is useful for... heating your home? While I appreciate the effort to mock reality, a heat-pump would give you 3-4 times the effect.
Methodology (Score:2)
There's a Methodology section in the [1]Slack report [slack.com]. There was a large number of respondents in the survey (17,372). However, the report doesn't say how the respondents were selected. It's very likely that there was some, or a great deal of, self-selection. So it's unclear how well the respondent population corresponds to the general population, i.e., it's unclear what these results mean.
[1] https://slack.com/intl/en-gb/blog/news/the-fall-2024-workforce-index-shows-ai-hype-is-cooling
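The self-selection worry above is easy to demonstrate with a toy simulation (all numbers below are made up for illustration and are not from the Slack report): if AI users are more eager to answer an AI survey, the observed rate among respondents overshoots the true population rate.

```python
import random

random.seed(0)

# Hypothetical population with a 30% true AI-use rate (illustrative only).
N = 100_000
population = [random.random() < 0.30 for _ in range(N)]

# Self-selection assumption: AI users are twice as likely to take the survey.
def responds(uses_ai: bool) -> bool:
    return random.random() < (0.20 if uses_ai else 0.10)

respondents = [uses_ai for uses_ai in population if responds(uses_ai)]

true_rate = sum(population) / len(population)
observed_rate = sum(respondents) / len(respondents)
print(f"true rate: {true_rate:.1%}, observed among respondents: {observed_rate:.1%}")
```

With these made-up response rates, Bayes' rule gives an expected observed rate of 0.06 / 0.13 ≈ 46% against a true rate of 30%, which is why an unreported sampling method makes the headline numbers hard to interpret.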
The answer is obvious (Score:2)
Today's AI sucks mightily for useful work.
AI is a research project, and future AI may possibly be very useful, but today's AI is just a crap generator, suitable only for having fun and laughing at it.
Re: (Score:2)
Well, it does "better search" and "better crap". The first has some limited use and the second is useful for DoS attacks on assholes (and for all kinds of SPAM, fraud and phishing attacks). But that is about it. Nothing that would justify the massive investments into practical deployments.
Whether it ever becomes a lot more useful, we will see. After 70 years of intense AI research, I do not think the big hopes (AGI or something close to it, even in a limited form only) will come true anytime soon and maybe
Welcome to the Gartner Hype Cycle (Score:3)
Which stage are we on currently? Peak of Inflated Expectations heading to Trough of Disillusionment?
Re: (Score:2)
I asked ChatGPT and this is what it gave lol. ------
The "Peak of Inflated Expectations" is a phase in Gartner's Hype Cycle, which describes the progression of technological adoption and expectations over time. Identifying when a technology, such as artificial intelligence (AI), has reached this peak involves observing several key indicators: Indicators of the Peak of Inflated Expectations:
1. Widespread Media Coverage:
There is extensive and often sensational media coverage about the potential of A
"being seen as lazy, cheating or incompetent" (Score:2)
Which is probably right on the mark in many cases.
Seems like AI's value is misunderstood (Score:2)
One headline says AI will be a multi-billion-dollar industry, while another, like this one, suggests stuttering use of LLM-based copilots. It seems like many people don't understand its real value, which will be in things like cancer screening, protein folding, and autonomous driving, i.e., classic pattern-matching and classification problems. LLM offerings appear to be duping the ill-informed into thinking that they can do various forms of sentient work for them, when of course LLMs are no more t
I am not allowed to (Score:2)
All AI sites are blocked in the company firewall, except Copilot, which is so slow and throttled that it is easier just to use ChatGPT on my phone and send the result to my work email, or run something in MLStudio on my private MacBook, which I also bring to work. :D
Not surprised (Score:2)
Most larger workplaces are very much in a limited trial phase and haven't done a big rollout yet. These trials take time. Where I work, we've had a limited set of Copilot licenses on trial for the past six months; even the weirdos who *want* to use it can't get it, while some of us are forced into this sick experiment.
Re: (Score:2)
It's about 1/3. Is that considered low? 1/3 of everybody is a lot. Do more than 1/3 of all workers regularly use a word processor?
Re: (Score:2)
> It's about 1/3. Is that considered low?
In the corporate world, yes. If you are a Microsoft shop, with the push of a button Copilot is magically pushed to every Office 365 app. That's kind of my point. Eventually, if adopted at the corporate level, this AI stuff will be used whether people want it or not, but right now it's a management decision more than anything else.