News: 0176738625


Ask Slashdot: Where Are the Open-Source Local-Only AI Solutions?

(Saturday March 15, 2025 @11:34PM (EditorDavid) from the opened-AI dept.)


"Why can't we each have our own AI software that runs locally," asks long-time Slashdot reader [1]BrendaEM — and that doesn't steal the work of others.

Imagine a powerful-but-locally-hosted LLM that "doesn't spy... and no one else owns it."

> We download it, from source code if you like, and install it if we want. And it assists us... No one gate-keeps it. It's not out to get us...

>

> And this is important: because no one owns it, the AI software is ours and leaks no data anywhere — to no one, no company, for no political nor financial purpose. No one profits — but you!

Their longer [2]original submission also asks a series of related questions — like why can't we have software without AI? (Along with "Why is AMD stamping AI on local-processors?" and "Should AI be crowned the ultimate hype?") But this question seems to be at the heart of their concern. "What future will anyone have if anything they really wanted to do — could be mimicked and sold by the ill-gotten work of others...?"

"Could local, open-source, AI software be the only answer to dishearten billionaire companies from taking and selling back to their customers — everything we have done? Could we not...instead — steal their dream?!"

Share your own thoughts and answers in the comments. Where are the open-source, local-only AI solutions?



[1] https://www.slashdot.org/~BrendaEM

[2] https://slashdot.org/submission/17334119/where-are-the-open-source-local-only-ai-solutions



Re:get your own quantum computer while at it (Score:1)

by Tablizer ( 95088 )

I've been collecting all the neighborhood cats for that very purpose. Some MAGAs suspect I'm Haitian even.

Well, there's DeepSeek (Score:3)

by Mr. Dollar Ton ( 5495648 )

As for "open-source": what is called "AI" today isn't about source code. It is about having the ability to collect and store other people's data at scale, build pyramids out of expensive hardware, and pay the power bills to sift through that data and produce tables of statistical coefficients. That takes money, not just free work, so, unsurprisingly, there isn't much of it outside of areas that have public financing.

Incidentally, some of these areas (mostly the sciences) provide more useful "AI" than the LLMs from elona, sam and whatever.

Re:Well, there's DeepSeek (Score:4, Informative)

by Mr. Dollar Ton ( 5495648 )

And for more practical advice on the subject, there's [1]https://ollama.com/ [ollama.com]

[1] https://ollama.com/
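To make the "practical advice" concrete: once installed, Ollama exposes a REST API on localhost:11434 by default, so nothing leaves the machine. A minimal sketch, assuming the server is running and a model such as llama3 has already been pulled (the model name here is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON reply instead of a
    stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running `ollama serve` with llama3 pulled):
#   print(ask("llama3", "In one sentence, what is a local-only LLM?"))
```

The request never touches a remote host, which is exactly the property the original submission is asking for.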

Re: Well, there's DeepSeek (Score:1)

by pcias1 ( 1749214 )

Obviously, and it's so commonly known. I'm really astonished at where /. questions and replies are going these days.

Here's One (Score:2)

by zenlessyank ( 748553 )

[1]https://www.localai.app/ [localai.app]

A duckduckgo search found many more options.

[1] https://www.localai.app/

Re: Here's One (Score:2)

by Z00L00K ( 682162 )

Even then it needs a huge amount of data to be trained on.

AI is a case where the tool itself isn't the thing, it's the data behind the tool.

Wyoming Protocol (Score:1)

by chickenandporn ( 848524 )

I think the Wyoming Protocol bridges access to local-only AI / LLM resources.

There are a ton of open weights models (Score:4, Informative)

by ihadafivedigituid ( 8391795 )

Check Hugging Face; there are tons of open-weight models. I run four or five different ones locally on my Mac and rarely use any of the online LLM providers.

ollama (open source), LM Studio (not open source, but free), and other "host" programs are out there too.

Check out LocalLLaMA (Score:4, Informative)

by Utopia ( 149375 )

To address the question: "Where can I find open-source, local-only AI solutions?"

There is a vibrant online community, LocalLLaMA (r/LocalLLaMA on Reddit), dedicated to this very topic. It is a great resource for anyone interested in running AI models locally without relying on cloud services, and you can explore a variety of models and tools there.

One popular option is 'llama.cpp', a high-performance library for running large language models locally. 'llama.cpp' is designed to be efficient and can run on both CPUs and GPUs. For those who prefer a more comprehensive framework, the Hugging Face Transformers library is another excellent choice. It supports a wide range of models and provides extensive documentation and community support.

While a GPU is recommended for optimal performance and faster inference times, it is possible to run these models on a CPU. However, be prepared for significantly longer processing times if you choose to use a CPU.
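The CPU-vs-GPU gap described above is mostly a memory-bandwidth story: generating each token requires streaming essentially all of the model's weights through the processor once, so decode speed is roughly bandwidth divided by model size. A back-of-the-envelope sketch (the bandwidth and model-size figures are ballpark assumptions, not benchmarks):

```python
def tokens_per_second(model_gb: float, bandwidth_gb_s: float) -> float:
    """Rough upper bound on decode speed for a memory-bandwidth-bound LLM.

    Each generated token forces one full pass over the weights, so:
        tokens/s ~= memory bandwidth / model size
    Real systems land below this bound due to compute and cache effects.
    """
    return bandwidth_gb_s / model_gb

# A 7B-parameter model quantized to 4 bits is roughly 4 GB of weights.
MODEL_GB = 4.0

cpu_estimate = tokens_per_second(MODEL_GB, 50.0)    # typical desktop DDR: ~50 GB/s
gpu_estimate = tokens_per_second(MODEL_GB, 1000.0)  # high-end GPU HBM: ~1 TB/s

print(f"CPU: ~{cpu_estimate:.0f} tok/s, GPU: ~{gpu_estimate:.0f} tok/s")
```

This is why a CPU-only run is usable but noticeably slower: the same model, an order of magnitude or two less bandwidth.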

llm leaderboards (Score:1)

by winelfredpasamba ( 1865250 )

[1]https://www.google.com/search?... [google.com] Who has 800GB of RAM, 2TB/s of memory bandwidth, and hundreds of thousands of cores to run the biggest models?

[1] https://www.google.com/search?q=llm+leaderboard
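Those hardware numbers aren't arbitrary: a model's weight footprint is roughly parameter count times bytes per parameter, with the KV cache and activations adding more on top. A quick sketch of why the biggest open-weight models need server-class RAM while quantized mid-size models fit on a laptop (the specific model sizes below are illustrative):

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate memory needed just for the weights, in GB.

    1e9 parameters * (bits/8) bytes each ~= that many GB.
    Ignores KV cache and activation overhead, which add more on top.
    """
    bytes_per_param = bits_per_param / 8.0
    return params_billions * bytes_per_param

# A 7B model at 4-bit quantization: fine for a laptop.
laptop = weight_memory_gb(7, 4)        # ~3.5 GB
# A 70B model at 4-bit: needs a beefy workstation.
workstation = weight_memory_gb(70, 4)  # ~35 GB
# A ~670B model at 8-bit: server territory, hundreds of GB.
server = weight_memory_gb(670, 8)      # ~670 GB

print(f"7B@4bit ~{laptop:.1f} GB, 70B@4bit ~{workstation:.0f} GB, "
      f"670B@8bit ~{server:.0f} GB")
```

So "where are the local-only solutions" has a two-part answer: the small and mid-size open-weight models run on consumer hardware today; only the very largest need the 800GB-class machines the comment jokes about.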
