Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org)
(Saturday September 28, 2024 @11:34PM (EditorDavid)
from the Hal-open-the-doors dept.)
- Reference: 0175152419
- News link: https://yro.slashdot.org/story/24/09/29/0122212/can-ai-developers-be-held-liable-for-negligence
- Source link: https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers
Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes [1]shifting AI liability onto the builders of the systems:
> To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the [2]workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...
>
> I have previously [3]argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in [4]California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although [5]tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?
The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims"). But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies."
> AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader [6]david.emery for sharing the article.
[1] https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers
[2] https://kb.osu.edu/server/api/core/bitstreams/bead352c-3581-5430-ac35-3ecd74ad50d7/content
[3] https://via.library.depaul.edu/cgi/viewcontent.cgi?article=4275
[4] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
[5] https://spectrum.ieee.org/california-ai-bill
[6] https://www.slashdot.org/~david.emery
CEOs, sure (Score:2)
by evanh ( 627108 )
Those making the business decisions are the ones responsible.
Re: (Score:2)
This will simply result in endless litigation trying to score jackpots from deep pockets. Why don't we just put a warning label on every AI engine saying "use at your own risk, output may not be what you want"? AI is in its infancy; it is going to have a HUGE number of errors in it. AFAIK no significant AI engine in existence can be blindly used without humans supervising its decisions. Consider AI as making suggestions, and then YOU decide whether or not to accept each suggestion.