Is 'AI Welfare' the New Frontier In Ethics?
- Reference: 0175451969
- News link: https://slashdot.org/story/24/11/11/2112231/is-ai-welfare-the-new-frontier-in-ethics
> A few months ago, Anthropic quietly hired its first dedicated "AI welfare" researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection, reports AI newsletter [1]Transformer. While sentience in AI models is an extremely controversial and contentious topic, the hire could signal a shift toward AI companies [2]examining ethical questions about the consciousness and rights of AI systems. Fish joined Anthropic's alignment science team in September to develop guidelines for how Anthropic and other companies should approach the issue. The news follows [3]a major report co-authored by Fish before he landed his Anthropic role. Titled "Taking AI Welfare Seriously," the paper warns that AI models could soon develop consciousness or agency -- traits that some might consider requirements for moral consideration. But the authors do not say that AI consciousness is a guaranteed future development.
>
> "To be clear, our argument in this report is not that AI systems definitely are -- or will be -- conscious, robustly agentic, or otherwise morally significant," the paper reads. "Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not." The paper outlines three steps that AI companies or other industry players can take to address these concerns. Companies should acknowledge AI welfare as an "important and difficult issue" while ensuring their AI models reflect this in their outputs. The authors also recommend companies begin evaluating AI systems for signs of consciousness and "robust agency." Finally, they call for the development of policies and procedures to treat AI systems with "an appropriate level of moral concern."
>
> The researchers propose that companies could adapt the "[4]marker method" that some researchers use to assess consciousness in animals -- looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration. While the researchers behind "Taking AI Welfare Seriously" worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don't actually need moral consideration.
"One problem with the concept of AI welfare stems from a simple question: How can we determine if an AI model is truly suffering or is even sentient?" writes Ars' Benj Edwards. "As mentioned above, the authors of the paper take stabs at the definition based on 'markers' proposed by biological researchers, but it's difficult to scientifically quantify a subjective experience."
Fish told Transformer: "We don't have clear, settled takes about the core philosophical questions, or any of these practical questions. But I think this could be possibly of great importance down the line, and so we're trying to make some initial progress."
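In spirit, the multi-marker probabilistic assessment the report describes could be sketched as a toy scoring function. Everything below is hypothetical: the marker names, the weights, and the scoring rule are invented for illustration and do not come from the paper.

```python
def welfare_score(observed, weights):
    """Pool speculative consciousness markers into a rough score in [0, 1].

    No single marker is decisive; the score merely aggregates weighted
    evidence so that a probabilistic judgment could be made.
    """
    total = sum(weights.values())
    present = sum(w for marker, w in weights.items() if observed.get(marker, False))
    return present / total

# Hypothetical markers, loosely modeled on the kinds of indicators used
# in animal-consciousness research; the weights are arbitrary.
weights = {
    "self_report_consistency": 0.2,
    "goal_persistence": 0.3,
    "flexible_planning": 0.5,
}

print(welfare_score({"goal_persistence": True}, weights))  # 0.3
```

A real assessment would be far messier -- the paper stresses that the markers themselves are speculative -- but the shape of the exercise is this: many weak indicators combined into a probability, not a single decisive test.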
[1] https://www.transformernews.ai/p/anthropic-ai-welfare-researcher
[2] https://arstechnica.com/ai/2024/11/anthropic-hires-its-first-ai-welfare-researcher/
[3] https://eleosai.org/papers/20241030_Taking_AI_Welfare_Seriously_web.pdf
[4] https://pmc.ncbi.nlm.nih.gov/articles/PMC11128545/
marketing and hubris (Score:2)
This is some weird attempt at marketing by people that have apparently fooled themselves.
No. (Score:2)
What we currently refer to as AI lacks any intelligence at all. Debating whether the current level of AI should have rights is on par with debating whether your kitchen appliance should have rights. Having this discussion now is about as meaningful as having it in Ancient Rome.
Re: (Score:2)
> Having this discussion now is about as meaningful as having it in Ancient Rome.
Indeed. We are pretty much at the same point with regard to AGI as Ancient Rome was, namely nowhere. And even then, AGI would not necessarily mean it is a person or has rights.
Science Fiction (Score:2)
So his job is to regurgitate and reconsider the various classic science-fiction tropes of intelligent machines, particularly regarding the moral/legal implications with respect to the rights of the machines.
Cool gig, but it's all based on science fiction. I guess the corporate value is being able to claim that you're working on protecting society and the public ... from imaginary problems.
I don't think the courts are going to blame the machines, hence there's no need to protect the machines. When a so-call
people get paid to write this stuff (Score:2)
"we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic"
Well there's no risk in rigorously and academically gaming things out. If Anthropic's investors are paying for this kind of research to be done, I'm not going to tell them they're wasting good money.
If you are talking to HAL whilst playing chess in orbit around Jupiter and it says it's feeling particularly like leaving someone on the wrong side of the airlock today, you're glad to know that Kyle Fi
Re: (Score:2)
People? Are you sure it's not written by AI?
Re: (Score:2)
AI does not 'write' anything. It just patches together highly related data. It is the ultimate in copy-and-paste database lookup.
Be less obtuse... (Score:2)
> AI does not 'write' anything. It just patches together highly related data. It is the ultimate in copy-and-paste database lookup.
Same for humans. We are mimic machines, which is why our "AI" looks like an early version of how we see patterns and "copy-pasta" them.
Re: (Score:2)
> We are mimic machines, which is why our "AI" looks like an early version of how we see patterns and "copy-pasta" them.
For humans, that is a claim with no supporting science behind it. Such claims are called "beliefs". For AI, though, it is a mathematical certainty, which makes it even more certain than the characteristics of physical reality.
Re: (Score:2)
> "we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic"
There really is not. There is still no mechanism for consciousness in known Physics. Talking about having it in machines is a bit premature.
But a lot of people are disconnected from reality, and AI people are no exception. One thing not-so-smart people and AI can both do is hallucinate.
This Ignorant Nonsense Needs to Be Eliminated (Score:4, Insightful)
We've got homeless individuals, veterans, drug addicts, and mentally ill roaming the streets of almost every American city.
But, worthless wastes of space are wasting air talking about the welfare of AI programs?
Get some priorities for fuck sake.
AI Welfare (Score:2)
Before AI I had a job.
After AI I'm on welfare.
Arrant nonsense... (Score:2)
Algorithms do not, and cannot, have rights. There are plenty of ethical issues around 'AI', but 'AI rights' is not one of them.
Re: (Score:1)
Hmm, I know everyone seems concerned about AI entities trying to get more "rights" for themselves. I think these arguments are coming from a different place, despite including that discussion. It's a bit of a smokescreen.
One of their actual goals is to build some kind of framework to censor or otherwise suppress LLMs and other AIs that don't conform to their standards for the "markers" of consciousness he keeps mentioning. Right now, they create LLMs then have to heap on a bunch of extra parameters to pr
Re: (Score:2)
Indeed. But my guess is this is just some more lies to make AI seem better than it is to keep the hype from collapsing.
AI will create jobs ?? (Score:2)
> ... AI models might deserve moral consideration ...
"... elevators imbued with intelligence and precognition became terribly frustrated with the mindless business of going up and down, up and down ..."
- Douglas Adams
> ... mistreat conscious AI systems ...
"... pick-up easy money working as a counselor for neurotic elevators."
- Douglas Adams
Conscious or morally significant? (Score:2)
> our argument in this report is not that AI systems definitely are -- or will be -- conscious, robustly agentic, or otherwise morally significant," the paper reads. "Instead, our argument is that there is substantial uncertainty about these possibilities
No, there is no uncertainty about this.
AI mimics consciousness in the same way that movies mimic movement. In a movie, there is nothing actually moving on the screen, it just looks very convincingly like there is. With AI, there is no actual consciousness, the patterns just make it look as if there were.
If we're going to worry about the ethics of how we treat AI, we'd better do the same for animated movie characters.
Re: (Score:2)
Good comparison. Or for fictional characters in books.
AI welfare! (Score:2)
And I thought we were worried about AI taking human jobs. Sounds like now we're worried about AI jobs!
That spreadsheet's got rights! (Score:2)
An AI model is just a great big matrix of numbers. You can take some of the smaller ones, convert them into CSV format, and open them up in Excel. There's no self there. It's just data. Now, the people whose content was mined to create the model, on the other hand...
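To make the point concrete, here is a minimal sketch of dumping a "layer" of model weights to CSV. The 2x3 matrix and the filename are made up for illustration; a real model layer would just be a much bigger grid of the same kind of numbers.

```python
import csv

# A tiny stand-in for one layer of model weights: nothing but numbers.
weights = [
    [0.12, -0.53, 0.98],
    [1.41, 0.07, -0.33],
]

# Write it out as CSV -- any spreadsheet program will open this file.
with open("layer0_weights.csv", "w", newline="") as f:
    csv.writer(f).writerows(weights)
```

The resulting file is plain rows of comma-separated numbers; there is nothing in it a spreadsheet of survey data wouldn't also have.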
No (Score:2)
Weapons peddlers have no ethics. That is why they can be in this business. The only question is which AI companies are fine with being weapons peddlers and which offerings can be abused because this is "dual use" tech.
How about we start with human welfare? (Score:2)
Until our country starts to consider human welfare in general, I don't really think we need to worry about machine welfare. LLMs are not thinking, nor feeling. Maybe, someday, we'll concoct a machine that can think and feel, but we're not there yet. Let's concentrate on treating our fellow humans a bit more like we would like to be treated, and fuck off with the fantasy day-dream of the AI prophets.
Dear AI prophets,
You aren't saving the world, assholes. You are escalating the greed of the ages to new realms
AI is not a person (Score:4, Insightful)
AI should not slowly gain some level of 'rights' like living persons have.
We made that mistake with corporations using the 14th Amendment to get more and more 'rights' like living persons have.
[1]https://www.history.com/news/1... [history.com]
How the 14th Amendment Made Corporations Into ‘People’
-
If AI kills someone by intentional action or even by accident, how/what/who gets the negative legal consequences?
-
Nestle UA Inc v Doe court case of profiting from forced human trafficking and forced child labor of 12-14 year old boys
[2]https://earthrights.org/media_... [earthrights.org]
SCOTUS Rules That U.S. Corporations Can Profit from Child Slavery Abroad
EarthRights International Urges Congress to Protect Victims of Corporate Human Rights Violations
June 17, 2021, Washington, D.C — Today, the U.S. Supreme Court issued a decision in Nestlé USA Inc. v. Doe, a case involving claims against U.S.-based Nestlé and Cargill for profiting from, and abetting, child labor on cocoa plantations in West Africa. The plaintiffs allege that as 12-to-14-year-olds, they were trafficked from Mali to Côte d’Ivoire, where they were enslaved on cocoa farms and forced to work without pay for up to 14 hours a day, six days a week. They sued the U.S. companies, which sourced cocoa from those farms, under the federal Alien Tort Statute (ATS). The Court decided that even though the plaintiffs alleged that corporate decisions were made at headquarters in the U.S., this was not a sufficient connection to allow a suit in the U.S. under the ATS.
[1] https://www.history.com/news/14th-amendment-corporate-personhood-made-corporations-into-people
[2] https://earthrights.org/media_release/scotus-rules-that-u-s-corporations-can-profit-from-child-slavery-abroad/