The votes are in: AI will hurt elections and relationships

(2026/04/14)


Artificial intelligence has achieved mass adoption faster than the personal computer or the internet, reaching 53 percent of the population in just three years. The number of harmful AI incidents has increased correspondingly. And both experts and laypeople believe the impact will be felt in two areas: Elections and relationships.

According to the [1]2026 AI Index Report [PDF], from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), "Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply."

[2]Documented AI incidents – defined as "harms or near harms realized in the real world by the deployment of artificial intelligence systems" by the AI Incident Database – reached 362 in 2025, up from 233 in 2024, the report says.

That coincides with an increase in AI adoption: 88 percent of organizations say they're using AI and about 80 percent of university students admit as much.

One possible explanation for that rapid adoption is that AI models have become quite good at programming: scores on the [6]SWE-bench test, which measures success at resolving real-world GitHub issues, rose from 60 percent to close to 100 percent in the space of a year.

High scores on a particular benchmark don't tell the full story because all AI models tend to be deficient in different areas. On the [7]AA-Omniscient Index, designed to assess whether models will admit when they're unsure about something instead of just guessing, hallucination rates across 26 models varied from 22 percent to 94 percent.

When attorneys use AI models to make "over two dozen fake citations and misrepresentations of fact," and [9]get called out for it by the US Sixth Circuit Court of Appeals, that's an example of what the Stanford HAI researchers mean when they say responsible AI hasn't kept pace with usage.

And despite all the talk about AI superintelligence, AI lags behind people when it comes to telling time – OpenAI's GPT-5.4 High managed to read analog clocks correctly just 50.6 percent of the time as of March 2026, compared to about 90 percent for "unspecialized humans," as described in the [10]ClockBench benchmark [PDF].

Robots demonstrate even less competence, succeeding in only 12 percent of household tasks, based on the [11]BEHAVIOR-1K simulation benchmark.

The HAI report, at 423 pages, represents the Stanford group’s summary of the current state of AI research and its impact on society. Written by human researchers with help from ChatGPT and Claude, not to mention financial support from Google, OpenAI, and others, the report's findings extend beyond the scarcity of "responsible AI" to touch on various aspects of the AI industry.

In terms of public opinion, the report finds "AI experts and the US public disagree on nearly everything about AI's future, except that it will hurt elections and personal relationships."

Sixty-four percent of the American public expect AI to reduce the number of jobs available to humans over the next two decades, while just 5 percent foresee AI creating more jobs. Only 39 percent of experts anticipate fewer jobs, while 19 percent project more employment. Experts, however, believe that generative AI will play a role in 80 percent of US work hours by 2030, compared to the public's prediction of 10 percent.

Just 31 percent of US respondents said they trust their government to regulate AI responsibly, the lowest level of any country surveyed. With OpenAI backing an Illinois state bill that would [17]limit the liability of AI companies in the event their models cause catastrophic harm, and the White House pursuing an "[18]industry-friendly AI policy," it's not difficult to see how Americans might doubt their government's interest in protecting them.

The HAI report observes that Chinese AI models have closed the performance gap with US AI models. As of March 2026, the top US model, Claude Opus 4.6, scored 1,503 on the Arena benchmark, just 39 points (about 2.7 percent) above ByteDance's [19]Dola-Seed Preview at 1,464. [20]That lead had narrowed as of April 9, 2026, with Claude Opus 4.6 Thinking at 1,548, closely followed by Z.ai's GLM-5.1 at 1,530.
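Arena-style leaderboards use Elo-style ratings, so the practical meaning of a points gap is clearer when converted to an expected head-to-head win rate. A minimal sketch, assuming the standard Elo logistic formula with a 400-point scale (the leaderboard's exact rating model is an assumption here, not something stated in the report):

```python
def elo_win_prob(r_a: float, r_b: float) -> float:
    """Expected probability that a model rated r_a beats one rated r_b
    under the standard Elo formula (logistic curve, 400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Claude Opus 4.6 (1,503) vs Dola-Seed Preview (1,464): a 39-point gap
print(round(elo_win_prob(1503, 1464), 3))  # → 0.556
```

In other words, under this assumption a 39-point lead translates to winning only about 56 percent of head-to-head matchups, which is why such gaps count as "narrow."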

The US continues to lead in AI investment, said to have reached $285.9 billion in 2025. That's 23 times more than the $12.4 billion invested in China, though the report notes it may have under-counted government funding. Even so, the US is losing technical talent. "The number of AI researchers and developers moving to the US has dropped 89 percent since 2017, with an 80 percent decline in the last year alone," the report finds. ®



[1] https://hai.stanford.edu/assets/files/ai_index_report_2026.pdf

[2] https://incidentdatabase.ai/

[6] https://www.swebench.com/

[7] https://artificialanalysis.ai/evaluations/omniscience

[9] https://incidentdatabase.ai/cite/1447/#r7128

[10] https://clockbench.ai/ClockBench.pdf

[11] https://behavior.stanford.edu/challenge/leaderboard.html

[17] https://qz.com/openai-illinois-bill-ai-liability-critical-harm-041026

[18] https://www.wsj.com/tech/ai/ai-policy-david-sacks-midterm-elections-aced91a7

[19] https://seed.bytedance.com/en/blog/dola-seed-2-0-preview-model-release-on-arena

[20] https://arena.ai/leaderboard/code?rankBy=labs



Dinanziame

"The number of AI researchers and developers moving to the US has dropped 89 percent since 2017, with an 80 percent decline in the last year alone,"

Oh gee, I wonder what could be causing the US to be so unpopular right now

USA is still a drawcard.

Anonymous Coward

Over 70% of Iranian university researchers said they'd rather be in the USA than Tehran. A number that was oddly exceeded by 99% of the IED experts who wanted to be in the USA, and ballistic missile designers who were anxious to share their work with the US public.

You see all these people must really hate the regime.

Rather humourless as a joke

Anonymous Coward

They don't even define what AI is in this 423-page report; how can we trust them then to tell us how well it's doing? I would have expected their chapter on Research and Development (p. 13+) to have specified the different types of AI they were looking at (LLM, random forests, image classification, genMuzak, protein checking ...) and then tell us how things are going in each of those areas, but no, they lump everything together in some fuzzy pool of whatchamacallit nebulosity ... not very expertish, imho! For all we know, their AI (so-called) might include Hightower's smart 0-token architectures, Wolfram Alpha, pocket calculators, and abacuses.

On page 26, they highlight the question " Will Models Run Out of Data? ", the answer to which is obviously " yes (that time has passed already) " (as far as LLMs are concerned at least), but rather than just stating the obvious, they dance around the issue suggesting " hybrid training approaches " and other nonsense that will do nothing to alleviate the related underlying issues: current AI has no proper model of cognition to build upon -- it only has language, which is not enough, by a long shot. Equating cognition with language is, at best, wishful thinking.

The surveys are interesting though, especially with respect to " elections and personal relationships " where it seems like we can look forward to being screwed by some folks' RotM wet dreams of having their boots stomping on our human faces, for ever ... unless we promptly grow the spines (and brains) needed to rectify this horror backpropagating curse! It's not smart to be optimistic about dystopias ...
