Ask Slashdot: Do You Use AI - and Is It Actually Helpful?
- Reference: 0178210036
- News link: https://ask.slashdot.org/story/25/06/28/0521201/ask-slashdot-do-you-use-ai---and-is-it-actually-helpful
> Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable.
>
> So what's the point?
Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?"). But their biggest question seems to be: "Do you have to correct it?"
And if that's the case, then when you add up all that correction time... "Is it actually helpful?"
Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?
Yes (Score:3)
My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.
My second biggest use is to generate art. [1]Flowers love you [deepai.org]. [2]Time flies like an arrow [deepai.org].
[1] https://deepai.org/gallery-item/12255ca18a834cb396c45a9f62f25330/flowers-love-you-with-a-girl.jpg.html
[2] https://deepai.org/gallery-item/541731be65ba460dae888c9c731c7738/time-flies-like-an-arrow-1dce50.jpg.html
Re:Yes (Score:4, Insightful)
My experience with ChatGPT is that once things become really interesting, it fails to provide sources. That is not good at all.
Re: (Score:3)
If that happens, you just have to look elsewhere. It can't be trusted. (To be fair, Wikipedia has the same limitation).
Re: (Score:2)
It was for something that I had been looking for unsuccessfully for a while. So all ChatGPT did was add to my frustration. I do not even know whether it gave me a hallucination or the truth. Pathetic.
As for Wikipedia, quality is usually high and you get tons of sources if you do not trust it.
Re: (Score:2)
It gets worse. You'll find the sources they do provide very often don't exist or don't support the nonsense the chatbot smeared on your screen.
Re: (Score:2)
Exactly. That is why the result was useless (or worse): High probability of hallucination and no way to verify.
Re: (Score:2)
While I think the high-end models are actually very impressive, the one on top of Google search results is a drooling idiot and regularly just makes up nonsense. Just yesterday I searched for information about radiation suits in the "Dune Awakening" game (very fun), and it utterly hallucinated a big bunch of nonsense about it being three-piece, with exchangeable breathing apparatuses and the like. None of that is true. I *think* it was cribbing details from Fallout. (It mentioned "Radaway" tablets, which are from Fallout.)
Re: (Score:2)
Yeah Google's Search AI is the worst...
It can't even get the names of A-List Hollywood actors right if you give it the movie.
Re: (Score:2)
You're almost there. What you've discovered is that they don't actually summarize text. (They can't.) They don't do most of the things people think they do.
> It's not a good implementation of generative AI.
They're all a waste of time. Call me an optimist, but I'm hoping that the nonsense generator at the top of Google's search results will teach more people to think critically and not just blindly accept whatever nonsense a chatbot tells them.
Re: (Score:2)
> My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.
This is also how I use AI, not for generation but as an efficient first-step replacement for Google Search and Wikipedia. I never understood the criticism of ChatGPT for hallucinations, because every source of information I've ever come across in my life has some non-zero probability of inaccuracy, and intelligent reading requires questioning everything that I read. I suppose that the people who blindly accept all output from ChatGPT are the same ones that are easily swayed by anything they read on the web.
Re: (Score:3)
> every source of information I've ever come across in my life has some non-zero probability of inaccuracy
Was that chance >60%? Because that's what you're getting with silly chatbots.
Further, the nature and type of mistakes you'll find in other sources are fundamentally different from the kind chatbots produce. As for people's trust in their output, they look to the average person like an all-knowing guru. They blindly trust the output because the output looks credible, especially when it includes a "source" and because of all the media hype telling them how amazing they are.
> intelligent reading requires questioning everything that I read
Nonsense. That would be par
Re:Yes (Score:4, Informative)
> My biggest use is as a search engine. Sometimes I'll just use the AI answer Google automatically posts in the search results, or sometimes I'll ask ChatGPT specifically. I don't rely on the answer, I click on the link citations provided, which are sometimes good, sometimes bad. Google AI is still garbage tier, but it does provide link references.
I was just about to post the same. Search engines (the traditional ones) are less than worthless.
As an example, I wanted to find details of a man and wife who drove off a bridge because his GPS told him to, 10 or more years ago. Old-school search gave me page after page after page after page of a family suing Google because a man drove off a bridge in 2023. A waste of my time.
DDG AI found it using the same search terms in one hit, with two links.
Re: (Score:2)
> DDG AI found it using the same search terms in one hit, with two links.
Ah, I didn't realize DuckDuckGo has AI search now. Nice tip, I'll check it out, thanks.
Re: (Score:2)
https://duckduckgo.com/chat
100% of your searches reside on local storage only, but who knows what the chatbot API servers keep?
Re: (Score:2)
Yeah. Here's an [1]example chat with citations [chatgpt.com], so you can get a feel for what it's like. Bing.com/chat is also worth checking out imo, since it gives better results than Google.
[1] https://chatgpt.com/share/68613027-64a4-8006-9cca-ac4b98a78939
Re: (Score:2)
I sometimes read the AI summaries from Qwant. It works by summarising the first page of search results, so these are the sources.
Re: (Score:2)
You do know that these things can't actually summarize, right? Start checking the "sources" against the output. It's astonishing just how much these things get wrong or fabricate completely. It's best to ignore the AI slop at the top and scroll down to the actual results.
Re: (Score:2)
For now I use it out of curiosity before following the links, and I have nothing to complain about. There is a field to report mistakes; I think I used it twice in the last year. I misrepresented Qwant: it actually shows the exact list of sources for the summary under "detailed answer".
Re: AI = glorified search engine (Score:2)
Summarizing is one of the things they are actually decent at. Since the "facts" come from a site and not from the model, they are about as reliable as the source. Of course, sometimes the source is a Reddit post and the summary makes it look more official than that.
Re: AI = glorified search engine (Score:2)
Perplexity.ai uses several different models (automatic or manual selection) and gives sources.
Useful If Verified (Score:5, Interesting)
So I'm not a good programmer. At all. I know just enough to be dangerous, and am significantly slower than someone who would know what they're doing.
I've been using LLMs (no such thing as AI!) to help me write code for the past two weeks or so. I've been wanting to do these tasks for years but it would have taken me days or weeks to get to a solution to any one of them. I have revolutionized several rote tasks that I did on a regular basis and will save myself tons of time in the future. And I'm still working on more of them.
The real trick is that you have to verify everything that comes out of it. It's never right the first time, even if you do a good job of describing what you want. I've had to tweak the code directly in some cases where it just won't get it right. And I've seen it get stuck in loops where it just breaks worse and worse. Then it's necessary to grab the last working version of the code, start a new chat, and paste it in with the latest request.
You can't just say "write code to do x." You have to have some idea of what it's doing and be able to thoroughly test it and validate its results. Do that and it can be useful.
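To make "thoroughly test and validate" concrete, here is a minimal Python sketch. The helper and the checks are hypothetical; the point is the workflow: treat generated code as untrusted until it passes tests you wrote yourself.

    # Suppose the model produced this date-parsing helper:
    from datetime import date

    def parse_iso_date(s: str) -> date:
        """LLM-generated: parse 'YYYY-MM-DD' into a date object."""
        year, month, day = (int(part) for part in s.split("-"))
        return date(year, month, day)

    # Validation you write yourself, covering normal and malformed input:
    def test_parse_iso_date():
        assert parse_iso_date("2025-06-28") == date(2025, 6, 28)
        for bad in ["", "2025-13-01", "not-a-date", "2025/06/28"]:
            try:
                parse_iso_date(bad)
            except ValueError:
                pass  # expected: bad input must raise, not return garbage
            else:
                raise AssertionError(f"accepted bad input: {bad!r}")

    test_parse_iso_date()
    print("generated helper passed the checks")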
Re: (Score:1)
> You can't just say "write code to do x." You have to have some idea of what it's doing and be able to thoroughly test it and validate its results.
Which is exactly what makes "A.I." completely useless. If everything has to be extensively tested and verified, what is the point? If you can't trust the output then you might as well just do the work yourself.
Re:Useful If Verified (Score:5, Informative)
The point is, for me anyway, that I don't have to do the typing!
I *am* a good programmer, but there's a lot of boilerplate that sucks when you have to type it all out (or copy/paste from an old project or example then customize to fit the current project). I can tell it HOW to make a program work and get it to do all the boring parts. Sure, it's just going out there and copy/pasting whatever, but it's also doing a search and replace for me.
The only time I've had trouble with AI programming is when I ask it to do something clever. It doesn't do clever very well. But if I give it my "clever" code and ask it to do something boring and mundane with it? It does a great job with that.
Re: Useful If Verified (Score:3)
This. Claude is flat out great at the boilerplate. Certainly better than me: it comes up with more thorough solutions than I would ('cuz I'm lazy), and in instances where I'd have to dig into arcane platform docs (multiplatform stuff), it just knows.
Lets me get to the _real_ work of a project far quicker.
Re: Useful If Verified (Score:5, Insightful)
I think the point here is that it can be useful for someone who isn't really a coder to produce just-about-usable simple things. But if you are a coder, it's not really faster. Maybe in the short term, but in the long term it's worse.
As somebody who's been coding for 40 years, in multiple languages, I already have my own libraries full of the vast majority of stuff. These libraries are not just fully tested already, I understand them thoroughly and so can write the small changes to them needed for each project. I only need to test the changes, not the main body of code. If I use LLMs for it, I need to test every single bit of it every single time, and because I'm not learning the algorithm myself I make myself dependent on the LLM going forward. Even on those occasions where I don't already have something similar in a library it's better to write it and understand it myself rather than rely on an LLM, just in case I need something similar again in the future. And if I do the code I've saved will be fully tested in advance.
So in summary an LLM can be useful if you don't code often, and can speed up work in the short term. But using it will prevent you becoming fully experienced and will mean that you are always slower than someone like me no matter how many years' experience you get, because using it prevents you from gaining that experience. And it will always be useless for larger, more systems level projects because it is too unreliable and if you don't have my level of experience you won't be able to spot the thousands of subtle bugs it would put in such a project.
Not that most systems-level projects aren't already full of these subtle bugs, but that's a whole different problem, caused by companies not wanting to pay people like me what I'm worth.
Re: Useful If Verified (Score:2)
With great respect for your skills and your long tenure as a coder, a couple other ideas here:
1. We're all (me included) looking at this at a moment in time, at "AI", however it's construed, in mid-2025.
Have LLMs and their recent ingestion of the Internet reached the point of diminishing returns, and have we reached a stasis? Or are we soon going to see more punctuated equilibrium, and AIs that are advanced enough to do the whole thing buglessly from a product description?
I really don't know, and I've seen pos
Re: (Score:2)
I'm as certain as it's possible to be that LLMs are not the path to more accurate AI. They will be a part of it, but statistics just doesn't work that way. They are as accurate as they will ever be. Something else is needed to correct their errors, and to the best of my knowledge nobody knows what that is yet. These companies keep claiming they've found it, but every time, their claims don't stand up to any scrutiny. That's not to say it won't happen in the near future, nobody can predict when that sort of
Re: (Score:2)
Good and general thoughts.
I've come to mine from the position of having at least two prior careers ended by the march of technology. Not bitter about 'em, but they were kind of a scramble (for the first) and frustrating (for the second).
I used to make beautiful and challenging special effects slide work on what has been called variously a "Rostrum Camera", an "Animation Stand" or an "Optical Printer". I'd have held my skills up to anyone else's in my large metro at the time. Then came "Genigraphics" - GE's
Re: Useful If Verified (Score:5, Informative)
Dunno if you're a programmer or not, but if you're not extensively testing and verifying what you wrote before you put it in production, you're doing it wrong.
You have to verify and test *all* code. LLMs are great for producing a bunch of boilerplate code that would take a long time to write and is easily testable. The claim that LLMs are useless for programming flies in the face of everything happening in the ivoriest of towers of programming these days. Professionals in every major shop in the world use it now as appropriate. Sorry that makes you mad. I'm not young either. I've been producing C++ on embedded systems used by millions of people for 20+ years. Nobody doing serious programming takes the "LLMs are useless" opinion seriously anymore.
Re: (Score:2)
They're not useless, but they are overblown. I use them, of course. A nontrivial amount of the time, though, I wonder if they have actually saved me time. They are good at API search, though I have been in the position of trying to use one to help with a particularly strange API and having the LLM cheerfully hallucinate a much better API.
I do like being able to give a vague description or an off-the-cuff code snippet and getting back the correct API call, or landing close enough that I can read the docs etc. for the rest
Re: (Score:2)
> Which is exactly what makes "A.I." completely useless. If everything has to be extensively tested and verified, what is the point? If you can't trust the output then you might as well just do the work yourself.
Why would that make it useless? You cannot assume a human would do things right either. Unless you are doing something not that critical, you need to extensively test and verify anyway.
Furthermore, not everything needs the same level of "correctness". As an example, AI can implement prototypes or proofs of concept, which can have different requirements in terms of "correctness" and can be valuable even if not completely "right".
Re: (Score:2)
> You cannot assume a human would do things right either.
That is the dumbest take. The kinds of mistakes that humans make are fundamentally different from the kinds of "mistakes" that LLMs make. Humans are also capable of evaluating their work and making appropriate changes and corrections. LLMs are not.
> Why would that make it useless?
You wouldn't tolerate a 1% error rate from any other kind of program, let alone >60%. Using an LLM to write code requires more effort than just doing it yourself, not less. That makes it useless.
Re: (Score:2)
Don't try that approach with any (somewhat) advanced code. Beyond a very low complexity bound, checking code becomes a lot harder than writing it correctly from scratch. This has been known for ages.
A lot (Score:2, Interesting)
I use ChatGPT daily, not for work, but to ask questions about topics I'm curious about that I'm sure experts don't have time for.
I also discuss topics that I find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse -- and that's pretty much everything you read about online. For instance, I recently discussed the film Anora, which won five Oscars. It was horrible -- a poor B-movie at best. I have no idea what happened to the Academy, and ChatGPT agreed with me.
Re: A lot (Score:4, Interesting)
Well sure, it has unlimited patience and knows the entire Internet. Though I'm concerned you use it for confirmation and value judgements, because that can easily be manipulated by the vendor of the AI. Eventually I'm sure Pepsi and Coke will be paying to have AI consider either one the best soft drink in the world and answer accordingly.
Re: A lot (Score:2)
> I also discuss topics that I find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse
Apologies, I took this to mean that you use AI to discuss topics that you find overly hyped, skewed, oversimplified, tribal, or full of agendas or propaganda in modern discourse. Not sure what led me to believe that.
Re: (Score:2)
ChatGPT is an expert in things you know nothing about, but seems to know nothing about things you're an expert in...
> I also ask it health-related questions because, where I live in Russia, getting good, professional healthcare is nearly impossible
ChatGPT isn't going to change that. The answers it gives are unreliable.
> I find conversations with ChatGPT far more fulfilling and grounded than conversations with real people
[1]That's by design. [futurism.com]. "The Stanford researchers also found that ChatGPT and other bots frequently affirmed users' delusional beliefs instead of pushing back against them"
[1] https://futurism.com/commitment-jail-chatgpt-psychosis
I was wrong (Score:5, Interesting)
I was originally an AI hater, but I have to admit I have saved hours with it. It doesn't do my coding for me per se, but asking it to 'provide an example of x' or 'how would I do x' has got me, in a single answer, what would have taken an hour or more with Google.
Re: (Score:2)
> that would have taken an hour or more with Google
Do you have an example? That seems insane to me. Are you just bad at using Google?
No, not useful (Score:2)
It is never OK to use the AI output directly without checking it thoroughly. There are factual errors far too often.
I asked for a paragraph about my home village. The AI told me very enthusiastically about my parish church, St Andrews.
In truth, that church has been there for 800 years and has NEVER been named St Andrews.
Writing it myself is quicker and more likely to be what I want.
I became a 100% vibe coder (Score:3, Interesting)
I use tools like Lovable all day long now; 100% of the code I produce is generated. I do not even type anymore, I use speech-to-text. I read the commits and suggest refactorings. The future of development is no more human teams, just one senior dev who leads a team of software agents.
Re: (Score:2)
Hmm. Sarcasm, troll or moron? Hard to tell.
Re: (Score:2)
LOL!
ChatGPT has been a lifesaver for me (Score:3, Interesting)
This tool has both saved me time and rescued me from some emergencies. Examples:
I used it to write GUI functions. It's not that I don't know how to do such things, but it can do in one minute what would take me a half hour, so it's a massive time saver. I just have to check the code and sometimes tweak it a bit. I consider this to be "busywork", but that alone saves thousands of dollars a month in developer costs.
Last year I deleted an entry in a SQL database table, not realizing that cascading delete was enabled, and I ended up deleting license keys for about 2,000 customers. There was no backup because the person who implemented the backup had recently died and his backup implementation had started failing due to changes in SSH keys. However, the information was available in a database managed by our payment fulfillment company via a pretty complicated REST API. I was able to instruct ChatGPT to "write a Python function to retrieve customer information from that company and generate a SQL insert query to merge it back into our own database". After a little tweaking, the damn thing gave me a function that just worked. Since I rarely use Python and SQL, it would have taken me hours if not days to both decipher that REST API, figure out the right SQL query, and write the Python code. Major customer catastrophe averted.
Separately from the above, I've found it very useful for quick how-to tips when using applications that I don't use much, e.g. how do I do "X" in Photoshop, etc.
If I were a young recent graduate, I would be very concerned about my future opportunities, and I remain very concerned that such tools will have a detrimental effect on society, given how well AI can replace what previously required significant expertise and experience.
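A minimal sketch of what that recovery function can look like. The endpoint, auth, response shape, and column names below are all hypothetical, and the real "pretty complicated" API would also need paging and retries:

    import requests

    API_URL = "https://fulfillment.example.com/api/v1/customers"  # hypothetical
    API_KEY = "..."  # elided

    def fetch_customers():
        """Pull customer records back out of the fulfillment provider's REST API."""
        resp = requests.get(API_URL,
                            headers={"Authorization": f"Bearer {API_KEY}"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()["customers"]  # assumed response shape

    def merge_statements(customers):
        """Generate MySQL-style upserts to merge the recovered rows back in."""
        for c in customers:
            yield ("INSERT INTO licenses (customer_id, email, license_key) "
                   "VALUES (%s, %s, %s) "
                   "ON DUPLICATE KEY UPDATE license_key = VALUES(license_key)",
                   (c["id"], c["email"], c["license_key"]))

    for sql, params in merge_statements(fetch_customers()):
        print(sql, params)  # in practice: cursor.execute(sql, params)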
Re: (Score:2)
> If I were a young recent graduate, I would be very concerned about my future opportunities, and I remain very concerned that such tools will have a detrimental effect on society, given how well AI can replace what previously required significant expertise and experience.
That would be true if all the graduate expected to do was write code; what the graduate needs to think is "how can I use what I've learned to identify solutions to problems by understanding what is needed, and then use the tools to deliver that." The jobs in danger are all those cheap coding shops that employ a bunch of people to churn out code; companies will be able to do more of that in house, or with shops that can understand the need and use tools to deliver it.
Re: (Score:2)
> What kind of schema do you have in which deleting a single entry ends up deleting license keys for 2000 customers? That makes no sense?
One generated by an LLM, I'm sure.
Re: (Score:2)
> What kind of schema do you have in which deleting a single entry ends up deleting license keys for 2000 customers? That makes no sense?
It would certainly be a schema failure, but if they had a bunch of customers (or customers' licenses, but they said "to retrieve customer information" and not "...license information") in some kind of group and overused ON DELETE CASCADE, I could see it happening. As you say, it would make no sense to use that feature for such a grouping (which might be deleted someday) because you would be creating a situation where you could cause a failure like this...
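For anyone who hasn't been bitten by this, a self-contained Python demonstration (table and column names invented) of how one innocuous-looking DELETE wipes every child row once ON DELETE CASCADE is attached to a group:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")  # SQLite wants this enabled explicitly
    db.execute("CREATE TABLE customer_group (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("""CREATE TABLE license (
        id INTEGER PRIMARY KEY,
        group_id INTEGER REFERENCES customer_group(id) ON DELETE CASCADE,
        license_key TEXT)""")

    db.execute("INSERT INTO customer_group VALUES (1, 'all retail customers')")
    db.executemany("INSERT INTO license (group_id, license_key) VALUES (1, ?)",
                   [(f"KEY-{n:04d}",) for n in range(2000)])

    db.execute("DELETE FROM customer_group WHERE id = 1")  # one innocuous delete...
    print(db.execute("SELECT COUNT(*) FROM license").fetchone()[0])  # 0: all gone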
Absolutely not (Score:5, Interesting)
Classic AI, sure. LLMs: absolutely not. I avoid it on principle.
The crime of stealing people's works for "AI training" has also effectively stopped me from publishing any of my personal source code or CAD drawings on-line.
Not only do I not want it copied without my consent, I also have some source code that is very very ugly, optimised for specific compilers for embedded systems -- and which should never ever be used as a template by someone else, who is probably not very skilled ... or who is under more stress because their boss has got the delusion that fewer people would be able to do the same amount of work.
Here in the EU, there is law that says that a content creator has the right to opt out of "AI training". (except for the "for research" loophole).
However, even if there were well-established file formats and protocols for expressing opt-out (there aren't), the techbros have already shown over and over again that they don't give a sith -- they have even used pirated works.
Until there are proper non-AI licenses, and infrastructure in place restricting access to those who are bound to EU jurisdiction and are well-behaved, I don't see how the situation could improve.
Re:Absolutely not (Score:4, Interesting)
> The crime of stealing people's works
LLMs don't need to be based on stealing people's work. The fact that a few models are shouldn't mean you avoid a technology on principle.
> the techbros have already shown over and over again that they don't give a sith
If it is determined they are actually "stealing" people's work, then they will learn that lesson painfully very soon. Especially now that the Mouse is involved, and it does look like Disney may actually have competent lawyers (unlike the group who sued Midjourney and didn't argue a case for either piracy or indirect infringement).
Re: (Score:2)
So in that case you shouldn't get advice from humans either (or hire humans to write code, books, summaries, etc.); they a) learn from other people's work, b) are fallible, and c) from time to time some of them will exploit others' work and claim it as their own. The reason that LLMs do these things is that they have been trained (for the overwhelming part) on human output.
Re: (Score:2)
> The reason that LLMs do these things is because it has been trained (for the overwhelming part) on human output.
No. Doing this is orthogonal to what it's trained on. Even if you trained AI only on exceptional output which was all created by software it would still hallucinate things that don't exist. This is a fundamental limit to the technology and until it's addressed somehow (whether stopping it, making it recognize it and recalculate, or something else — whatever solution actually is feasible) even feeding it 100% high-quality input will still result in hallucinatory output.
Re: (Score:2)
Indeed. Hallucinations are a primary characteristic of any LLM and _cannot_ be avoided.
AI for debugging (Score:2)
Sometimes I use AI to write quick functions with bounds checking; it does a decent job, but it always needs tweaking.
AI shines for debugging, when I have something that "should" work, I copy/paste the errors or problem description into AI and ask "How to Fix ELI5"
Once, AI even suggested there was a bug in a library, and it was right.
Note: I pay for Google Gemini 2.5 Pro and find it the best for coding compared to ChatGPT.
Gemini does a good job of breaking everything down.
Here is an example from a mysql session
I use it plenty (Score:4, Insightful)
I use ChatGPT, but mostly just talk to it. I discuss matters with it and basically use it as a 24/7 tutor. I find it very helpful. Using it for code generation has been so-so, however. It is so agreeable that when you communicate your own misunderstandings to it regarding certain constructs, it will hallucinate a universe where that misunderstanding is true, and derive the rest from it. Not all programs it generates work, and most don't fit my very opinionated code style. I still prefer to write my own code completely from scratch, and don't even use LLMs for drafts or boilerplate. I think it's about mindset. You have to treat it like a very knowledgeable, easy-to-work-with co-worker, who may be wrong sometimes.
Re: (Score:2)
Just be careful: [1]https://slashdot.org/story/25/... [slashdot.org]
[1] https://slashdot.org/story/25/06/28/1859227/people-are-being-committed-after-spiraling-into-chatgpt-psychosis
Example (Score:2)
Yesterday, I wanted an example of a PIO program to generate high-resolution, arbitrary-waveform (variable-frequency) PWM output using DMA on an RP2350 MCU. Gemini 2.5 Pro generated a correct, working, basic example. I refined it further by changing and adding requirements to deal with the end state, corner cases, and the deficiencies in the generated code. The final result works perfectly. Guessing here: it took perhaps 25% of the time to accomplish this compared to doing it without "AI." And while P
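For readers curious what that kind of code looks like, here is a minimal PIO-based PWM in MicroPython, adapted from the standard MicroPython pio_pwm example. This is not the poster's Gemini output; it omits the DMA feeding and variable frequency entirely and only shows the PIO core of the idea:

    from machine import Pin
    import rp2

    @rp2.asm_pio(sideset_init=rp2.PIO.OUT_LOW)
    def pwm_prog():
        pull(noblock) .side(0)   # fetch a new duty value if one is waiting
        mov(x, osr)              # osr keeps the last value for noblock recycling
        mov(y, isr)              # isr was preloaded once with the period
        label("loop")
        jmp(x_not_y, "skip")
        nop() .side(1)           # pin goes high when the countdown reaches duty
        label("skip")
        jmp(y_dec, "loop")       # count the period down

    MAX_COUNT = 1000
    sm = rp2.StateMachine(0, pwm_prog, freq=10_000_000, sideset_base=Pin(25))
    sm.put(MAX_COUNT)            # preload the period...
    sm.exec("pull()")
    sm.exec("mov(isr, osr)")     # ...into the ISR register
    sm.active(1)
    sm.put(250)                  # ~25% duty cycle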
Re: (Score:2)
Yes, this is my experience also, AI will help you tackle something you might not have attempted on your own because you didn't know how to begin. But once you see a good example (geared toward your own goal) you can see how to finish it off.
I also think of AI chats as very Socratic; it's really about the quality and refinement of your questions. The AI dialog or conversation is how you refine what it is you are asking. Once you get to that good question, AI will get you a good answer for it.
Awe *hit - Microsoft was Right with "Co-Pilot" (Score:3)
1: My default search engine. It rarely needs "correcting". Very little query/prompt refinement as an SE. It is always miles ahead of Google (which seems to need refining more often than not).
Search string: "chatgpt.com/?q=%s&hints=search&model=gpt-4o" (where model is your preferred model)
2: Coding. All-of-the-things: C+/VB/VS, shell scripts, Perl, WordPress plugins, PHP, JS, HTML, and even slumming with CSS. For short stuff it rarely needs "correcting", but sometimes it does. It eliminates all the big issues when laying out a task. Its guidance turns a day job into an hour job.
Where it shines for me is in languages that I use very rarely but a project requires. (I loathe JS with the fire of a thousand suns; ChatGPT makes it tolerable.)
3: Ideation. It's so good at this. It will think of stuff you miss. The feature has been way, way overbaked, but MS did have it right when they said AI was a co-pilot. I can't count the times I've had a phone meeting in 5 minutes and asked ChatGPT to give me things to talk about and ask.
4: Content. Yes, we generate all sorts of content with it from blog posts to schedules, to project overviews.
Obviously, everything needs human oversight, but sure, I've asked it to write stuff, and dropped it in for a test without looking at line-for-line.
Last week I asked it to lay out a modest WordPress plugin and, without prompting specifically for code, it generated it, and it ran and worked the first go-around.
sonnet (Score:2)
I once asked ChatGPT to write a sonnet about constipation. I remember being amused by the result.
Mostly, though, I try to avoid the brain-atrophy device.
Re: (Score:2)
I found the latest gemma to be pretty good at poetry. It produced some pretty satisfying e.e. Cummins [sic] poems for me while all other LLMs failed badly.
Re: (Score:2)
> Mostly, though, I try to avoid the brain-atrophy device.
Indeed. These things are dangerous. Use only when needed and then carefully is the name of the game.
Yes, I do. For what it's good at. (Score:2)
Let's start with some disclaimer: this is about LLMs, not AI, which is a very large field of stuff that existed way before LLMs became the latest craze, and hopefully will keep existing until we get something impressive out of it.
Also, some issues with LLMs stem not from their output but from the economic models around them, privacy issues, licensing issues, etc. To address some of those, most of our daily stuff is done on locally running models on cheap hardware, so no 400B-parameter stuff.
There's four
Re: (Score:2)
> I also dipped into so-called "vibe coding" using commercial offers (my small 12B model would not have been fair in that regard). I spent a few hours trying to make something I would consider basic, easy to find many examples of, and relatively useful: a browser extension that intercepts a specific download URL and replaces it with something else. At every step of the way, it did progress. However, it was a mess. None of the initial suggestions were OK by themselves; even the initial scaffolding (a modern browser extension is made of a JSON manifest and a mostly blank script) would not load without me putting more info into the "discussion". And even pointing out the issues (non-existent constants, invalid JSON properties, mismatched settings, broken code) would not always lead to a proper fix until I spelled it out. To make it short: it wasn't impressive at all. And I'm deeply worried that people find this kind of fumbling acceptable. I basically ended up telling the tool "write this, call this, do this, do that", which is in no way more useful than writing the stuff myself. At best it can be an accessibility thing for people that have a hard time typing, but it's not worth consideration if someone's looking for a "dev" of some sort.
Disappointing but expected. And a generic URL replacer is not even a "hard" project by any means. I recently talked to somebody with a similar experience. First, the model omitted 8 of the 12 steps the solution would have needed and, when asked, claimed that this was correct. And after finally and laboriously being coerced into solving the full problem and then asked for test code, it provided test code for the first 2 (!) of the 12 steps and then claimed this was 100% test coverage. In the end, this may be somewhat helpful
Rarely and mostly no (Score:2)
The one thing I found it useful for is searching for something when I did not know the specific term. Then I can ask, do a real search afterwards with the term, and get context, references, limitations, etc. For anything else, AI is a waste of time. The one time I found something I had searched conventionally without results, the AI was unable to provide an original reference, i.e. the result was worthless.
Let's face it, this AI hype is a dud. Much the same as all previous ones. Something will come out of it,
Yes and Yes (Score:4, Interesting)
I use ChatGPT almost daily. I use it to write short scripts that have very well defined behaviours. Sure, it makes mistakes, and I have to check its code, but it saves me a heap of time looking up obscure functions. And it comments its code quite nicely. Sometimes, the comments are a bit inane, but I've seen so much uncommented code in my life, seeing any comments at all is a breath of fresh air.
I think that the mistake people make is that they assume that if ChatGPT can write a good 10 line function, then it can write a good 1000 line suite of functions. It cannot.
It's a tool, and it does very well when used in the context in which it performs well. The wrong tool will do poorly at any task.
I would estimate that ChatGPT saves me about 5-6 hours a week. Time that I can spend on my higher skills rather than my grunt code-monkey skills.
All the time (Score:2)
I use LLMs - primarily ChatGPT - in programming all the time.
It's a tool. I don't care if it "really" understands anything, that's not what I use tools for.
Yes, it really saves me time overall. I can't help it if it somehow doesn't suit your problem space (possible), or if you don't know how to or refuse to learn how to use it properly (likely).
It's an excellent writing assistant / editor (Score:3)
I routinely use LLMs for cleaning up prose in various reports. As a proofreader and editor, it's the best tool I've ever found.
For creating reports from scratch, you have to be careful. It's not perfect, but it will get you 85% to 95% of the way there on a first cut once you feed it the data. It's no replacement for a human, but it does save a lot of time.
I also use it for email. When I have an important email to send out that must be "perfect", I'll run my draft through ChatGPT and ask it for a review, and to show me what it changed, and why. More than once, it has caught a missing word or a clumsy phrase.
So yes, LLMs are not a gimmick, and they do increase productivity if used correctly.
Yes, of course you need to iterate/correct (Score:2)
Even if we had full-on human level AGI (which we don't), you'd still need to iterate and correct.
You wouldn't expect to give a non-trivial programming task to another human without having to iterate on understanding of requirements and corner cases, making changes after code review, bugs needing to be caught by unit/system tests, etc.
If you hired a graphic artist to create you a graphic to illustrate an article, or book, or some advertising campaign, then you also wouldn't expect to get it right first time.
Ai = Cognitive Mirror (Score:2)
What I find current LLMs the most useful at is to review my work to give a different and useful perspective.
The LLM doesn't generate any work that goes into my output; it augments my own work to make it better, while I determine what to use (I am in the driver's seat).
The LLM has been trained on a huge quantity of good-quality human-generated text, and it surprises me how good it can be at offering associations that I had not considered.
AI is great for project localization (Score:4, Interesting)
One area where it shines is localization. "Hey bot, find all English strings in the code enclosed in double quotes, for each string create a key in the form 'category-description', for example error-not-found, create a translation file in Fluent format in the form of "key = value" for all those keys and their values. Do that for the following locales: ..."
With some prompt refinements, corrections, etc, done in 2 hours. Saved me 2 weeks of manual work.
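The extraction half of that prompt is easy to sketch in Python (the paths, file extension, and slug rules here are hypothetical; a real pass wants a proper parser, and the translations themselves still come from the model):

    import re
    from pathlib import Path

    STRING_RE = re.compile(r'"([^"\\]{2,})"')  # naive: double-quoted literals

    def slugify(text):
        """Turn 'File not found' into a Fluent-style key like 'file-not-found'."""
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")[:40]

    def extract_strings(source_dir):
        found = {}
        for path in Path(source_dir).rglob("*.js"):  # hypothetical source layout
            for match in STRING_RE.finditer(path.read_text(encoding="utf-8")):
                found[slugify(match.group(1))] = match.group(1)
        return found

    def write_fluent(strings, out_file):
        lines = [f"{key} = {value}" for key, value in sorted(strings.items())]
        Path(out_file).write_text("\n".join(lines) + "\n", encoding="utf-8")

    # write_fluent(extract_strings("src/"), "locales/en-US/main.ftl")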
Re: (Score:2)
Great use case.
What's the point? (Score:2)
Correcting something is quite often far less work/effort than creating something from scratch. Obviously this will depend on the context of what the AI is being asked to do.
Yep, super helpful (Score:2)
LLMs are thorough and merciless critics/code reviewers. Once I got past my ego and let a machine pass judgment on my code/prose/whatever, I gained an invaluable sounding board.
I do not (Score:4, Interesting)
My view is that AIs are both irredeemably poisoned and a privacy nightmare, designed to profile users at an unprecedented level of detail.
I go out of my way to avoid AI-generated slop, going as far as searching with -AI tags. This is mainly because I do not want to be exposed to bias that pollutes AI models (it is black Nazis all the way down) and it is way too much effort to validate each output to filter these out.
Code examples (Score:2)
My #1 use for ChatGPT is "show me an example of some C code that implements functionality (X)".
Then I can read that example, research the APIs it is calling (to make sure they actually exist and are appropriate for what I'm trying to accomplish), and use it to write my own function that does something similar. This is often much faster than my previous approach (googling, asking for advice on StackOverflow, trial and error).
Yes and kinda (Score:3)
I use LLMs for my own amusement, they are useful for that.
I have little to no visual memory so I struggle to draw even simple things. I can do drafting-style sketches OK because they are logical, but just remembering the shape of a curve even between looking at the thing and looking at the paper is difficult. So I use AI to generate images and feel zero remorse about it, since it lets me do something I cannot otherwise do — envision a concept I can imagine, but cannot picture.
For answering questions, I find that they are good only for pointing me in a direction for additional research, because you cannot trust what they say. I knew this before actually trying to get them to help me find out facts, because I asked them about things I knew about and they spat out a mixture of accurate information and absolutely invented bullshit that looks like a correct explanation but actually bears only a passing resemblance to reality. It cites laws and rules that don't exist, and it makes up entire concepts which it will never mention again unless you ask specifically, then it has a roughly 50/50 chance of either inventing some bullshit background for the fake thing or explaining that it's not a thing without irony.
Yes and no (Score:3)
I use AI for the following tasks:
(1) Generating cartoon-like illustrations for my web site, because I have no artistic talent and the output of AI is good enough for my tiny personal site.
(2) Transcribing speech to text to generate video captions (using [1]whisper-cpp [github.com]).
(3) Generating speech from text with [2]Piper TTS [ttstool.com] because it generates really high-quality speech.
(4) Removing the backgrounds from images with [3]RemBG [github.com] because it does a decent job with very little effort (a short usage sketch follows the links below).
All of the processing is done locally on my computer (except for the image generation in point 1.) I do not use LLMs such as ChatGPT or coding assistants because I find them useless and untrustworthy.
[1] https://github.com/ggml-org/whisper.cpp
[2] https://piper.ttstool.com/
[3] https://github.com/danielgatis/rembg
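To give a feel for point 4, this is roughly all that background removal takes through rembg's Python API (the file names are placeholders):

    from rembg import remove

    with open("photo.jpg", "rb") as src:
        input_bytes = src.read()

    output_bytes = remove(input_bytes)  # returns PNG bytes with an alpha channel

    with open("photo-nobg.png", "wb") as dst:
        dst.write(output_bytes)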
I use it when programming (Score:2)
I use it much like a community to get help with programming, asking questions such as "what does this statement do?" or "what are ways to do X?" or "What is wrong with this code ...?" or "Can I accomplish X by doing this using this code...?" I don't use it to simply 'vibe code' but to help me get over a hurdle when I can't figure something out; much like how a community works, but with instant response.
I view it as an adjunct and learning tool, not a way to simply produce cut-and-paste code. If it generates code
Use it sometimes for work (Score:2)
I needed to do a few presentations on technical topics the last few months.
The last time, I thought, let's see if ChatGPT has some extra suggestions on the topic. I gave it the text of my presentation. It answered, "your presentation is already reasonably well structured, and I will make the text flow a bit better"; not being a native English speaker, I appreciated the edits. It then suggested some extra slides, as I had asked. I checked these with ChatGPT and using Google, and found them niche yet quite well
Treat AI as your junior / intern (Score:2)
It will think outside the box, and maybe even come up with novel methods. But none of this is of any use unless you are senior/experienced enough to ask the right questions and guide your junior/intern towards optimal solutions.
In short, it's a tool, but it's a useless tool if the operator has little or no domain knowledge in the areas for which the tech is being used.
I'm writing a novel (Score:2)
AI is absolutely useless for writing a decent story. But it's brilliant at smashing through writer's block (suggesting places my story could go next), and research features are great at synthesizing the collective body of writing advice from across the internet.
Running analysis (Score:3)
We are in chemical manufacturing. We use it to answer questions like "what are possible causes of variation among all lots of product code X". It analyzes all lots of ingredients and process conditions and gives us options for further examination.
Used it to configure my home server (Score:2)
Using the Warp terminal, it's actually nice for a non-admin to ask Claude questions and get some really helpful work done.
I do not know every in and out of Linux server config; my day job doesn't require that I do. So I can connect up and ask Claude, "is this service running?" or "My Plex server isn't responding, can we run some diagnostics?"
Is it perfect? No, is it better than me? Oh god yes. Is my system a mission critical server? Not in the slightest.
But it's fun, and I actually can get a working docker serve
sure, it needs editing/oversight (Score:2)
Yeah, needs a ton of oversight, etc, I have to read it carefully...
So I wanted to do a thing, and I could easily have made a working prototype of this thing in a couple of days, maybe a week tops. So I had Claude do it, and spent nearly an entire afternoon fixing things or making Claude fix them, because I understood the problem better.
But at the end of the day, I'd spent an afternoon doing something I would have expected to take a week.
Useful for targeted tasks (Score:2)
I've used AI for two tasks where I found it very useful, and some very minor ones as well.
1) I had to write some code to invert a matrix in C. I knew the code was out there, but Google's search is so polluted today I could not find it. ChatGPT immediately returned working code. I noticed it did not calculate the determinant, so I asked it for that, and it modified the code to do so. As I say, I know that code is out there somewhere in a book, probably a dozen books, but I can no longer find older topics bec
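The kind of routine being described, sketched here in Python rather than the original C (my reconstruction, not the actual ChatGPT output): Gauss-Jordan elimination that returns the inverse and tracks the determinant along the way.

    def invert(matrix):
        """Return (inverse, determinant) of a square matrix of floats."""
        n = len(matrix)
        a = [row[:] + [float(i == j) for j in range(n)]  # augment with identity
             for i, row in enumerate(matrix)]
        det = 1.0
        for col in range(n):
            pivot_row = max(range(col, n), key=lambda r: abs(a[r][col]))
            if abs(a[pivot_row][col]) < 1e-12:
                raise ValueError("matrix is singular")
            if pivot_row != col:
                a[col], a[pivot_row] = a[pivot_row], a[col]
                det = -det  # a row swap flips the determinant's sign
            pivot = a[col][col]
            det *= pivot
            a[col] = [v / pivot for v in a[col]]
            for r in range(n):
                if r != col and a[r][col] != 0.0:
                    factor = a[r][col]
                    a[r] = [rv - factor * cv for rv, cv in zip(a[r], a[col])]
        return [row[n:] for row in a], det

    inv, det = invert([[4.0, 7.0], [2.0, 6.0]])
    print(det)  # 10.0
    print(inv)  # [[0.6, -0.7], [-0.2, 0.4]]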
I Use It for Git Commits (Score:3)
Started using it for Git commit messages. Saves me the frustration of figuring out what to say.
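A sketch of what that can look like wired up in Python; the model name and prompt are my assumptions, not necessarily this poster's setup:

    import subprocess
    from openai import OpenAI  # assumes the openai package and an API key

    def suggest_commit_message():
        """Feed the staged diff to a chat model and ask for a one-liner."""
        diff = subprocess.run(["git", "diff", "--staged"],
                              capture_output=True, text=True, check=True).stdout
        if not diff:
            raise SystemExit("nothing staged")
        resp = OpenAI().chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "Write a one-line conventional commit message "
                                  "for this diff:\n\n" + diff[:8000]}])
        return resp.choices[0].message.content.strip()

    print(suggest_commit_message())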
Tried it. (Score:2)
I've tried it for a variety of things, mostly when I couldn't figure them out myself. It tends to hallucinate when it doesn't know the answer. It would be more useful if it just said, "I don't know" on occasion.
I try every now and then (Score:2)
It is completely worthless for anything that isn't completely trivial.
Even the glorified pay-for-use versions.
Software, electronic hardware, history questions - no matter what you ask it, the "AI" is worthless.
There is limited utility in some machine learning methods when sifting through bad data from physics experiments, but even there the applications need triple checking.
The only thing where the "AI" excels is creating trolling posts for maga-like readers.
Outsourced Junior Developer (Score:2)
Like the title says, it's effectively an outsourced junior dev. It has no experience of its own and is just parroting what it read, imperfectly. Plus you probably aren't generating any institutional knowledge.
Source code + compiler = reliable output; prompts + LLM-du-jour = inconsistency. Vibe-coding is going to leave a lot of people with zero actual knowledge of their product, as if they had outsourced the development.
Because that's what they did.
So with that out of the way... it's really good at first-drafts
Yes, but... (Score:2)
I don't use AI in cases of creative expression. Using AI for that feels downright dishonest, lazy, and unsatisfying.
However, when I need some information that is not straightforward, I use it, but only if search engines don't give relevant information from which I can extrapolate things on my own.
I prefer to use my own brains.
Test it occasionally (Score:2)
I've tested a variety of chat bots on a regular basis to see how they perform in terms of coding or suggesting logic solutions. I've tried ChatGPT, Copilot, and Llama. Most of the time they are pretty bad, particularly doing anything above entry-level.
So I could see them being useful for a complete beginner (as long as the beginner checks the results) or as a way to fill in some common boilerplate stuff, but it's not really useful for anything beyond that.
Most of the code LLMs have given me (in Bash,
Very helpful if the subject is also your expertise (Score:3)
As a Linux sysadmin, I've been using Google's Gemini a lot recently, asking it many Ansible-related questions for example. Its solutions are sometimes pretty far off the mark, but more often than not its answers are accurate and very helpful, sometimes offering solutions I would not have thought of. It's kind of like having a flawed savant as your coding assistant: you can't always take what it gives you too seriously, but it's certainly useful as long as you keep that in mind.
Yes, I use AI some (Score:2)
I noticed DuckDuckGo has an AI-generated answer thing on their search engine, and I used it a few times with good results. I think AI has its uses, like a smarter search engine for inquiries. But as the previous thread on here showed, it is not healthy to lean too heavily on AI, especially on a personal level, as it made some people go off the rails into crazyland.
Great for genetics. (Score:2)
ChatGPT Pro Deep Research has been very useful when researching my own genome. I use chatgpt to write the prompts for Deep Research, and AFAICT the results are accurate. From time to time, I ask Grok to check the output for correctness.
Useless at finding media (Score:2)
I read a great deal, particularly speculative fiction. I've tried repeatedly to use various AI tools to track down a short story or book where I can remember details of the story, and perhaps the rough publishing date, but not the title, author, or publisher.
AI appears to be utterly useless for this. It will either come up empty and make vague suggestions, like "Look at fantasy recommendations for the date range you've provided". Or worse, it will focus on the wrong book (say, one published in the 2020s, wh
Speech (Score:2)
Is there anything better for TTS, STT, and translation?
Problem I recently worked on:
Here is 20,000 hours of audio. Make it queryable.
Back in the 90's when I was doing some grad classes in Information Retrieval this would have been considered nearly insurmountable.
On the other hand I had 16MB then, now this takes 128GB of RAM.
That's mostly Python being obscene with RAM.
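A toy sketch of the first two stages in Python. The library choice (openai-whisper) and the file name are mine, and a real 20,000-hour corpus needs batching and proper vector search rather than substring matching:

    import whisper  # pip install openai-whisper

    model = whisper.load_model("base")

    def transcribe(path):
        """Return (start_seconds, text) pairs for one audio file."""
        result = model.transcribe(path)
        return [(seg["start"], seg["text"]) for seg in result["segments"]]

    def query(index, term):
        term = term.lower()
        return [(path, start, text)
                for path, segs in index.items()
                for start, text in segs
                if term in text.lower()]

    index = {p: transcribe(p) for p in ["call-001.wav"]}  # hypothetical file
    for path, start, text in query(index, "invoice"):
        print(f"{path} @ {start:7.1f}s: {text}")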
Normal (Score:2)
"Everything required some form of correction or editing before it was usable."
So just like a human assistant.
Re: (Score:2)
Probably an AI editor.
it's just auto-predict (Score:1)
It's been called auto-predict or IntelliSense or rule systems previously.
Get over yourselves: it's wildly variable... good and bad.
Yeah, great, but honestly: bad input, bad output; good input, good output. Garbage...
History repeating itself. Go buy tulips... bitcoin.
Re: it's just auto-predict (Score:2)
People's speech is largely just prediction too. People don't generally think out entire monologues first, they start talking and go with it.