
Ubisoft Shows Off New AI-Powered FPS And Hopes You've Forgotten About Its Failed NFTs (kotaku.com)

(Monday November 24, 2025 @05:40PM (msmash) from the chasing-trends dept.)

Ubisoft has revealed Teammates, a first-person shooter built around AI-powered squadmates that the company is calling its " [1]first playable generative AI research project " -- not long after the publisher went all-in on NFTs and the metaverse only to largely move on from both. Built in the Snowdrop Engine that powers The Division 2 and Star Wars Outlaws, the game features an AI assistant named Jaspar and two AI squadmates called Pablo and Sofia. Players can issue natural voice commands to direct the squadmates in combat or puzzle-solving, while Jaspar handles mission tracking and guidance. The project comes from the same team behind Ubisoft's Neo NPCs, demonstrated at GDC 2024.



[1] https://kotaku.com/ubisoft-ai-fps-temmates-genai-nfts-gameplay-snowdrop-reveal-2000646251



How Google Finally Leapfrogged Rivals With New Gemini Rollout (msn.com)

(Monday November 24, 2025 @05:40PM (msmash) from the closer-look dept.)

An anonymous reader shares a report:

> With the release of its [1]third version last week , Google's Gemini large language model surged past ChatGPT and other competitors to [2]become the most capable AI chatbot , as determined by consensus industry-benchmark tests. [...] Aaron Levie, chief executive of the cloud content management company Box, got early access to Gemini 3 several days ahead of the launch. The company ran its own evaluations of the model over the weekend to see how well it could analyze large sets of complex documents. "At first we kind of had to squint and be like, 'OK, did we do something wrong in our eval?' because the jump was so big," he said. "But every time we tested it, it came out double-digit points ahead."

>

> [...] Google has been scrambling to get an edge in the AI race since the launch of ChatGPT three years ago, which stoked fears among investors that the company's iconic search engine would lose significant traffic to chatbots. The company struggled for months to get traction. Chief Executive Sundar Pichai and other executives have since worked to overhaul the company's AI development strategy by breaking down internal silos, streamlining leadership and consolidating work on its models, employees say. Sergey Brin, one of Google's co-founders, resumed a day-to-day role at the company helping to oversee its AI-development efforts.



[1] https://tech.slashdot.org/story/25/11/18/1634253/google-launches-gemini-3-its-most-intelligent-ai-model-yet

[2] https://www.msn.com/en-us/news/technology/how-google-finally-leapfrogged-rivals-with-new-gemini-rollout/ar-AA1QWgd8



New Mars Orbiter Maneuver Challenges Theory: That May Not Be an Underground Lake on Mars (phys.org)

(Monday November 24, 2025 @05:40PM (EditorDavid) from the Mars-needs-water dept.)

In 2018 researchers claimed evidence of [1]a lake beneath the surface of Mars , detected by the Mars Advanced Radar for Subsurface and Ionosphere Sounding instrument (or MARSIS for short).

But [2]new Mars observations "are not consistent with the presence of liquid water in this location and an alternative explanation, such as very smooth basal materials, is needed." [3] Phys.org explains :

> Aboard the Mars Reconnaissance Orbiter, the Shallow Radar (SHARAD) uses higher frequencies than MARSIS. Until recently, though, SHARAD's signals couldn't reach deep enough into Mars to bounce off the base layer of the ice where the potential water lies — meaning its results couldn't be compared with those from MARSIS. However, the Mars Reconnaissance Orbiter team recently tested a new maneuver that rolls the spacecraft on its flight axis by 120 degrees — whereas it previously could roll only [4]up to 28 degrees . The new maneuver, termed a "very large roll," or VLR, can increase SHARAD's signal strength and penetration depth, allowing researchers to examine the base of the ice in the enigmatic high-reflectivity zone. Gareth Morgan and colleagues, for their [5]article published in Geophysical Research Letters , examined 91 SHARAD observations that crossed the high-reflectivity zone.

>

> Only when using the VLR maneuver was a SHARAD basal echo detected at the site. In contrast to the MARSIS detection, the SHARAD detection was very weak, meaning it is unlikely that liquid water is present in the high-reflectivity zone.



[1] https://science.slashdot.org/story/18/07/25/1428254/evidence-detected-of-lake-beneath-the-surface-of-mars

[2] https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2025GL118537

[3] https://phys.org/news/2025-11-liquid-mars.html

[4] https://phys.org/news/2025-06-science-mars-orbiter-years-space.html

[5] https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2025GL118537



'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI (theguardian.com)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the AI-in-the-UK dept.)

An anonymous reader shared [1]this report from the Guardian :

> James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

>

> "If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as a part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

>

> For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online [2]reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback [3]from ChatGPT or using AI-generated [4]images in courses.

"I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment".) But the article also points out that a [5]survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.



[1] https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-large-part-by-ai-artificial-intelligence

[2] https://www.nytimes.com/2025/05/14/technology/chatgpt-college-professors.html

[3] https://www.reddit.com/r/UniUK/comments/1m5de13/disheartened_after_receiving_possibly_aigenerated/

[4] https://www.reddit.com/r/UniUK/comments/1msts5q/nah_cuz_wtf_is_this_why_are_my_lectures_now_using/

[5] https://repository.jisc.ac.uk/9702/1/DEI-2024-teaching-staff-he-report.pdf



How An MIT Student Awed Top Economists With His AI Study - Until It All Fell Apart (msn.com)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the very-artificial-intelligence dept.)

In May MIT announced "no confidence" in a preprint paper on how AI increased scientific discovery, [1]asking arXiv to withdraw it . The paper, authored by 27-year-old grad student Aidan Toner-Rodgers, had claimed an AI-driven materials discovery tool helped 1,018 scientists at a U.S. R&D lab.

But within weeks his academic mentors "were asking an unthinkable question," [2]reports the Wall Street Journal . Had Toner-Rodgers made it all up?

> Toner-Rodgers's illusory success seems in part thanks to the dynamics he has now upset: an academic culture at MIT where high levels of trust, integrity and rigor are all — for better or worse — assumed. He focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable. What has stunned his former colleagues and mentors is the sheer breadth of his apparent deception. He didn't just tweak a few variables. It appears he invented the entire study. In the aftermath, MIT economics professors have been discussing ways to raise standards for graduate students' research papers, including scrutinizing raw data, and students are going out of their way to show their work isn't counterfeit, according to people at the school.

>

> Since parting with the university, Toner-Rodgers has told other students that his paper's problems were essentially a mere issue with data rights. According to him, he had indeed burrowed into a trove of data from a large materials-science company, as his paper said he did. But instead of getting formal permission to use the data, he faked a data-use agreement after the company wanted to pull out, he told other students via a WhatsApp message in May... On Jan. 31, Corning filed a complaint with the World Intellectual Property Organization against the registrar of the domain name corningresearch.com. Someone who controlled that domain name could potentially create email addresses or webpages that gave the impression they were affiliated with the company. WIPO soon found that Toner-Rodgers had apparently registered the domain name, according to the organization's written decision on the case. Toner-Rodgers never responded to the complaint, and Corning successfully won the transfer of the domain name. WIPO declined to comment...

>

> In the WhatsApp chat in May, in which Toner-Rodgers told other students he had faked the data-use agreement, he wrote, "This was a huge and embarrassing act of dishonesty on my part, and in hindsight it clearly would've been better to just abandon the paper." Both Corning and 3M told the Journal that they didn't roll out the experiment Toner-Rodgers described, and that they didn't share data with him.



[1] https://science.slashdot.org/story/25/05/16/213210/mit-asks-arxiv-to-take-down-preprint-paper-on-ai-and-scientific-discovery

[2] https://www.msn.com/en-us/money/careersandeducation/an-mit-student-awed-top-economists-with-his-ai-study-then-it-all-fell-apart/ar-AA1QV7Rk



Napster Said It Raised $3 Billion From a Mystery Investor. But Now the 'Investor' and 'Money' Are Gone (forbes.com)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the taking-a-Napster dept.)

An anonymous reader shared [1]this report from Forbes :

> On November 20, at approximately 4 p.m. Eastern time, Napster held an online meeting for its shareholders; an estimated 700 of roughly 1,500 including employees, former employees and individual investors tuned in. That's when its CEO John Acunto told everyone he believed that the never-identified big investor — who the company had insisted put in $3.36 billion at a $12 billion valuation in January, which would have made it one of the year's biggest fundraises — was not going to come through.

>

> In an email sent out shortly after, it told existing investors that some would get a bigger percentage of the company, due to the canceled shares, and went on to describe itself as a "victim of misconduct," adding that it was "assisting law enforcement with their ongoing investigations." As for the promised tender offer, which would have allowed shareholders to cash out, that too was called off. "Since that investor was also behind the potential tender, we also no longer believe that will occur," the company wrote in the email.

>

> At this point it seems unlikely that getting bigger stakes in the business will make any of the investors too happy. The company had been stringing its employees and investors along for nearly a year with ever-changing promises of an impending cash infusion and chances to sell their shares in a tender offer that would change everything. In fact, it was the fourth time since 2022 they've been told they could soon cash out via a tender offer, and the fourth time the potential deal fell through. Napster spokesperson Gillian Sheldon said certain statements about the fundraise "were made in good faith based on what we understood at the time. We have since uncovered indications of misconduct that suggest the information provided to us then was not accurate."

The article notes America's Department of Justice has launched an investigation (in which Napster is not a target), while the Securities and Exchange Commission has a separate ongoing investigation from 2022 into Napster's scrapped reverse merger.

While Napster announced it had been [2]acquired for $207 million by a tech company named Infinite Reality , Forbes says that company faced " [3]a string of lawsuits from creditors alleging unpaid bills , a federal lawsuit to enforce compliance with an SEC subpoena (now dismissed) and exaggerated claims about the extent of their partnerships with Manchester City Football Club and Google. The company also touted 'top-tier' investors who never directly invested in the firm, and its anonymous $3 billion investment that its spokesperson told Forbes in March was in 'an Infinite Reality account and is available to us' and that they were 'actively leveraging' it..."

And by the end, "Napster appears to have been scrambling to raise cash to keep the lights on, working with brokers and investment advisors including a few who had previously gotten into trouble with regulators.... If it turns out that Napster knew the fundraise wasn't happening and it benefited from misrepresenting itself to investors or acquirees, it could face much bigger problems. That's because doing so could be considered securities fraud."



[1] https://www.forbes.com/sites/phoebeliu/2025/11/23/napster-said-raised-3-billion-mystery-investor-now-the-investor-money-gone/

[2] https://entertainment.slashdot.org/story/25/03/25/1229231/music-pioneer-napster-sells-for-207-million

[3] https://www.forbes.com/sites/phoebeliu/2025/04/24/infinite-reality-john-acunto-155-billion-metaverse-startup-biggest-fundraise/



New Research Finds America's Top Social Media Sites: YouTube (84%), Facebook (71%), Instagram (50%) (pewresearch.org)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the words-with-friends dept.)

Pew Research [1]surveyed 5,022 Americans this year (between February 5 and June 18), asking them "do you ever use" YouTube, Facebook, and nine of the other top social media platforms. The results?

YouTube: 84%
Facebook: 71%
Instagram: 50%
TikTok: 37%
WhatsApp: 32%
Reddit: 26%
Snapchat: 25%
X.com (formerly Twitter): 21%
Threads: 8%
Bluesky: 4%
Truth Social: 3%

An announcement from Pew Research [2]adds some trends and demographics :

> The Center has long tracked use of many of these platforms. Over the past few years, four of them have grown in overall use among U.S. adults — TikTok, Instagram, WhatsApp and Reddit. 37% of U.S. adults report using TikTok, which is slightly up from last year and up from 21% in 2021. Half of U.S. adults now report using Instagram, which is on par with last year but up from 40% in 2021. About a third say they use WhatsApp, up from 23% in 2021. And 26% today report using Reddit, compared with 18% four years ago.

>

> While YouTube and Facebook continue to sit at the top, the shares of Americans who report using them have remained relatively stable in recent years... YouTube and Facebook are the only sites asked about that a majority in all age groups use, though for YouTube, the youngest adults are still the most likely to do so. This differs from Facebook, where 30- to 49-year-olds most commonly say they use it (80%).

Other interesting statistics:

"More than half of women report using Instagram (55%), compared with under half of men (44%). Alternatively, men are more likely to report using platforms such as X and Reddit."

"Democrats and Democratic-leaning independents are more likely to report using WhatsApp, Reddit, TikTok, Bluesky and Threads."



[1] https://www.pewresearch.org/internet/2025/11/20/americans-social-media-use-2025/

[2] https://www.pewresearch.org/internet/2025/11/20/americans-social-media-use-2025/



Was the Moon-Forming Protoplanet 'Theia' a Neighbor of Earth? (mps.mpg.de)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the over-the-moon dept.)

Theia crashed into Earth and formed the Moon, the theory goes. But then where did Theia come from? The lead author on a new study says "The most convincing scenario is that most of the building blocks of Earth and Theia [1]originated in the inner Solar System . Earth and Theia are likely to have been neighbors."

Though Theia was completely destroyed in the collision, scientists from the Max Planck Institute for Solar System Research led a team that was able to measure the ratio of tell-tale isotopes in Earth and Moon rocks, [2] Euronews explains :

> The research team used rocks collected on Earth and samples brought back from the lunar surface by Apollo astronauts to examine their isotopes. These isotopes act like chemical fingerprints. Scientists already knew that Earth and Moon rocks are almost identical in their metal isotope ratios. That similarity, however, has made it hard to learn much about Theia, because it has been difficult to separate material from early Earth and [3]material from the impactor .

>

> The new research attempts a kind of planetary reverse engineering. By examining isotopes of iron, chromium, zirconium and molybdenum, the team modelled hundreds of possible scenarios for the early Earth and Theia, testing which combinations could produce the isotope signatures seen today. Because materials closer to the Sun formed under different temperatures and conditions than those further out, those isotopes exist in slightly different patterns in different regions of the Solar System.

>

> By comparing these patterns, researchers concluded that Theia most likely originated in the inner Solar System, even closer to the Sun than the early Earth.

The team published their findings in the journal Science . Its title? " [4]The Moon-forming impactor Theia originated from the inner Solar System ."



[1] https://www.mps.mpg.de/theia-and-earth-were-neighbors

[2] https://www.euronews.com/next/2025/11/23/long-lost-moon-forming-planet-formed-in-the-inner-solar-system-new-analysis-shows

[3] https://science.slashdot.org/story/14/06/06/0246206/evidence-of-protoplanet-found-on-moon

[4] https://www.science.org/doi/10.1126/science.ado0623



Cryptologist DJB Criticizes Push to Finalize Non-Hybrid Security for Post-Quantum Cryptography (cr.yp.to)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the instantiating-an-objection dept.)

In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National Security Agency (and its UK counterpart GCHQ) were attempting to influence NIST to adopt weaker post-quantum cryptography standards without a "hybrid" approach that would've also included pre-quantum [1]ECC .

Bernstein is of the opinion that "Given [2]how many post-quantum proposals have been broken and the continuing flood of [3]side-channel attacks , any competent engineering evaluation will conclude that the best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as [4]double encryption : post-quantum cryptography on top of ECC." But he says he's seen it playing out differently:

> By 2013, NSA had a [5]quarter-billion-dollar-a-year budget to "covertly influence and/or overtly leverage" systems to "make the systems in question exploitable"; in particular, to "influence policies, standards and specification for commercial public key technologies". NSA is [6]quietly using stronger cryptography for the data it cares about , but meanwhile is spending money to promote a market for weakened cryptography, the same way that it successfully created decades of security failures by [7]building up the market for, e.g., 40-bit RC4 and 512-bit RSA and [8]Dual EC . I looked concretely at what was happening in [9]IETF's TLS working group , compared to the [10]consensus requirements for standards-development organizations. I reviewed how a call for "adoption" of an NSA-driven specification produced a variety of [11]objections that [12]weren't handled properly . ("Adoption" is a preliminary step before IETF standardization....) On 5 November 2025, the chairs issued [13]"last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday.
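The "double encryption" Bernstein advocates is straightforward to picture: derive the session key from the concatenation of a classical ECC shared secret and a post-quantum shared secret, so an attacker has to break both primitives to recover it. Below is a minimal Python sketch of that idea, using X25519 and HKDF from the widely available cryptography package; the post-quantum KEM step is a labeled placeholder (a real deployment would use something like ML-KEM, whose bindings vary by library), and this is an illustration of the principle, not the exact construction in the IETF draft.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical ECDH over X25519 (the pre-quantum component).
client = X25519PrivateKey.generate()
server = X25519PrivateKey.generate()
ecc_secret = client.exchange(server.public_key())

# Post-quantum KEM shared secret -- PLACEHOLDER ONLY. A real hybrid
# handshake would take this from an ML-KEM encapsulation, not urandom.
pq_secret = os.urandom(32)

# Hybrid key derivation: feed BOTH secrets into one KDF, so the session
# key stays secret as long as either primitive remains unbroken.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid handshake sketch",
).derive(ecc_secret + pq_secret)
print(session_key.hex())
```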

Bernstein also shares [14]concerns about how the Internet Engineering Task Force is handling the discussion , and argues that the document is even "out of scope" for the IETF TLS working group:

> This document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 [15]have been broken already ... often with attacks having sufficiently low cost to demonstrate on readily available computer equipment. Further PQ software has been broken by [16]implementation issues such as side-channel attacks .

He's also [17]concerned about how that discussion is being handled :

> On 17 October 2025, they [18]posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"...

>

> I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days.

Thanks to [19]alanw (Slashdot reader #1,822) for spotting the blog posts.



[1] https://en.wikipedia.org/wiki/Elliptic-curve_cryptography

[2] https://cr.yp.to/papers.html#qrcsp

[3] https://cr.yp.to/papers.html#kyberslash

[4] https://blog.cr.yp.to/20240102-hybrid.html

[5] https://www.eff.org/files/2014/04/09/20130905-guard-sigint_enabling.pdf

[6] https://blog.cr.yp.to/20251004-weakened.html#dogfood

[7] https://cr.yp.to/export/dtn/V3N4_10_92.pdf

[8] https://cr.yp.to/papers.html#dual-ec

[9] https://blog.cr.yp.to/20251004-weakened.html#tls

[10] https://blog.cr.yp.to/20251004-weakened.html#standards

[11] https://blog.cr.yp.to/20251004-weakened.html#callbad

[12] https://blog.cr.yp.to/20251004-weakened.html#resolution

[13] https://web.archive.org/web/20251122073342/https://mailarchive.ietf.org/arch/msg/tls/Pzdox1sDDG36q19PWDVPghsiyXA/

[14] https://blog.cr.yp.to/20251123-dodging.html

[15] https://cr.yp.to/papers.html#qrcsp

[16] https://cr.yp.to/papers.html#kyberslash

[17] https://blog.cr.yp.to/20251123-scope.html

[18] https://web.archive.org/web/20251123162658/https://mailarchive.ietf.org/arch/msg/tls/ivYCeEHbgs_qo5EaenOezxK11jg/

[19] https://www.slashdot.org/~alanw



Google Revisits JPEG XL in Chromium After Earlier Removal (windowsreport.com)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the imaging-that dept.)

"Three years ago, Google [1]removed JPEG XL support from Chrome , stating there wasn't enough interest at the time," [2]writes the blog Windows Report . "That position has now changed."

> In a recent note to developers, a [3]Chrome team representative confirmed that work has restarted to bring JPEG XL to Chromium and said Google "would ship it in Chrome" once long-term maintenance and the usual launch requirements are met.

>

> The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents. There has been continuous demand from developers and users who ask for its return.

>

> Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time. A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer [4]said it also "includes animation support," which earlier implementations did not offer.



[1] https://tech.slashdot.org/story/22/10/31/2236220/why-google-is-removing-jpeg-xl-support-from-chrome

[2] https://windowsreport.com/google-revisits-jpeg-xl-in-chromium-after-earlier-removal/

[3] https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKcBw219k/m/NmOyvMCCBAAJ

[4] https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKcBw219k/m/NeiCV32tBAAJ



Mozilla Announces 'TABS API' For Developers Building AI Agents (omgubuntu.co.uk)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the agent-APIs dept.)

"Fresh from announcing it is building an [1]AI browsing mode in Firefox and laying the groundwork for agentic interactions [2]in the Firefox 145 release , the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," [3]writes the blog OMG Ubuntu :

> If you're a developer building AI agents, you can sign up to get early access to [4]Mozilla's TABS API , a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity."

>

> As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests. Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities.

>

> Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally — i.e., not kept. As a distinction, that matters. [5]AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to make one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API looks like a solid place to start.



[1] https://www.omgubuntu.co.uk/2025/11/firefox-ai-window-browsing-mode-coming

[2] https://www.omgubuntu.co.uk/2025/11/firefox-145-released-new-features

[3] https://www.omgubuntu.co.uk/2025/11/mozilla-tabs-api-ai-web-agents

[4] https://tabstack.ai/

[5] https://techcrunch.com/2025/03/14/no-one-knows-what-the-hell-an-ai-agent-is/



One Company's Plan to Sink Nuclear Reactors Deep Underground (ieee.org)

(Monday November 24, 2025 @11:41AM (EditorDavid) from the thinking-deeply dept.)

Long-time Slashdot reader [1]jenningsthecat shared [2]this article from IEEE Spectrum :

> By dropping a nuclear reactor 1.6 kilometers (1 mile) underground, Deep Fission aims to use the weight of a billion tons of rock and water as a natural containment system comparable to concrete domes and cooling towers. With the fission reaction occurring far below the surface, steam can safely circulate in a closed loop to generate power.

>

> The California-based startup [3]announced in October that prospective customers had signed non-binding letters of intent for 12.5 gigawatts of power involving data center developers, industrial parks, and other (mostly undisclosed) strategic partners, with initial sites under consideration in Kansas, Texas, and Utah... The company [4]says its modular approach allows multiple 15-megawatt reactors to be clustered on a single site: A block of 10 would total 150 MW, and Deep Fission claims that larger groupings could scale to 1.5 GW. Deep Fission claims that using geological depth as containment could make nuclear energy cheaper, safer, and deployable in months at a fraction of a conventional plant's footprint...

>

> The company aims to finalize its reactor design and confirm the pilot site in the coming months. [Company founder Liz] Muller says the plan is to drill the borehole, lower the canister, load the fuel, and bring the reactor to criticality underground in 2026. Sites in Utah, Texas, and Kansas are among the leading candidates for the first commercial-scale projects, which could begin construction in 2027 or 2028, depending on the speed of DOE and NRC approvals. Deep Fission expects to start manufacturing components for the first unit in 2026 and does not anticipate major bottlenecks aside from typical long-lead items.

In short "The same oil and gas drilling techniques that reliably reach kilometer-deep wells can be adapted to host nuclear reactors..." the article points out. Their design would also streamline construction, since "Locating the reactors under a deep water column subjects them to roughly 160 atmospheres of pressure — the same conditions maintained inside a conventional nuclear reactor — which forms a natural seal to keep any radioactive coolant or steam contained at depth, preventing leaks from reaching the surface."

Other interesting points from the article:

They plan on operating and controlling the reactor remotely from the surface.

Company founder Muller says if an earthquake ever disrupted the site, "you seal it off at the bottom of the borehole, plug up the borehole, and you have your waste in safe disposal."

For waste management, the company "is eyeing deep geological disposal in the very borehole systems they deploy for their reactors."

"The company claims it can cut overall costs by 70 to 80 percent compared with full-scale nuclear plants."

"Among its competition are projects like [5]TerraPower's Natrium , notes [6]the tech news site Hackaday , saying TerraPower's fast neutron reactors "are already under construction and offer much more power per reactor, along with Natrium in particular also providing built-in grid-level storage.

"One thing is definitely for certain..." they add. "The commercial power sector in the US has stopped being mind-numbingly boring."



[1] https://www.slashdot.org/~jenningsthecat

[2] https://spectrum.ieee.org/underground-nuclear-reactor-deep-fission

[3] https://www.businesswire.com/news/home/20251015263249/en/Deep-Fission-Expands-Customer-Pipeline-to-12.5-Gigawatts

[4] https://deepfission.com/technology/

[5] https://hackaday.com/2021/07/06/terrapowers-natrium-combining-a-fast-neutron-reactor-with-built-in-grid-level-storage/

[6] https://hackaday.com/2025/11/23/deep-fission-wants-to-put-nuclear-reactors-deep-underground/



How the Internet Rewired Work - and What That Tells Us About AI's Likely Impact (msn.com)

(Monday November 24, 2025 @04:34AM (EditorDavid) from the killing-a-stopped-job dept.)

"The internet did transform work — but not the way 1998 thought..." [1]argues the Wall Street Journal . "The internet slipped inside almost every job and rewired how work got done."

So while the number of single-task jobs like travel agent dropped, most jobs "are bundles of judgment, coordination and hands-on work," and instead the internet brought "the quiet transformation of nearly every job in the economy... Today, just 10% of workers make minimal use of the internet on the job — roles like butcher and carpet installer."

> [T]he bigger story has been additive. In 1998, few could conceive of social media — let alone 65,000 social-media managers — and 200,000 information-security analysts would have sounded absurd when data still lived on floppy disks... Marketing shifted from campaign bursts to always-on funnels and A/B testing. Clinics embedded e-prescribing and patient portals, reshaping front-office and clinical handoffs. The steps, owners and metrics shifted. Only then did the backbone scale: We went from server closets wedged next to the mop sink to data centers and cloud regions, from lone system administrators to fulfillment networks, cybersecurity and compliance.

>

> That is where many unexpected jobs appeared. Networked machines and web-enabled software quietly transformed back offices as much as our on-screen lives. Similarly, as e-commerce took off, internet-enabled logistics rewired planning roles — logisticians, transportation and distribution managers — and unlocked a surge in last-mile work. The build-out didn't just hire coders; it hired coordinators, pickers, packers and drivers. It spawned hundreds of thousands of warehouse and delivery jobs — the largest pockets of internet-driven job growth, and yet few had them on their 1998 bingo card... Today, the share of workers in professional and managerial occupations has more than doubled since the dawn of the digital era.

>

> So what does that tell us about AI? Our mental model often defaults to an industrial image — John Henry versus the steam drill — where jobs are one dominant task, and automation maps one-to-one: Automate the task, eliminate the job. The internet revealed a different reality: Modern roles are bundles. Technologies typically hit routine tasks first, then workflows, and only later reshape jobs, with second-order hiring around the backbone. That complexity is what made disruption slower and more subtle than anyone predicted. AI fits that pattern more than it breaks it... [LLMs] can draft briefs, summarize medical notes and answer queries. Those are tasks — important ones — but still parts of larger roles. They don't manage risk, hold accountability, reassure anxious clients or integrate messy context across teams. Expect a rebalanced division of labor: The technical layer gets faster and cheaper; the human layer shifts toward supervision, coordination, complex judgment, relationship work and exception handling.

>

> What to expect from AI, then, is messy, uneven reshuffling in stages. Some roles will contract sharply — and those contractions will affect real people. But many occupations will be rewired in quieter ways. Productivity gains will unlock new demand and create work that didn't exist, alongside a build-out around data, safety, compliance and infrastructure.

>

> AI is unprecedented; so was the internet. The real risk is timing: overestimating job losses, underestimating the long, quiet rewiring already under way, and overlooking the jobs created in the backbone. That was the internet's lesson. It's likely to be AI's as well.



[1] https://www.msn.com/en-us/technology/artificial-intelligence/how-the-internet-rewired-work-and-what-that-tells-us-about-ai-s-likely-impact/ar-AA1QXb9W



Microsoft Warns Its Windows AI Feature Brings Data Theft and Malware Risks, and 'Occasionally May Hallucinate' (itsfoss.com)

(Monday November 24, 2025 @04:34AM (EditorDavid) from the game-of-Risks dept.)

"Copilot Actions on Windows 11" is currently available in Insider builds ( version 26220.7262) as part of Copilot Labs, [1]according to a recent report , "and is off by default, requiring admin access to set it up."

But maybe it's off for a good reason...besides the fact that it can access any apps installed on your system:

> In [2]a support document , Microsoft admits that features like Copilot Actions introduce " novel security risks ." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions. The result? " Unintended actions like data exfiltration or malware installation ."

>

> Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system. Microsoft's own warning hits hard: " We recommend that you only enable this feature if you understand the security implications ." When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. ["This feature is still being tested and may impact the performance or security of your device."]

>

> Even with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders... Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity.

>

> But you are still giving an AI system that can " hallucinate and produce unexpected outputs " ( Microsoft's words, not mine ) full access to your personal files.

To address this, [3] Ars Technica notes , Microsoft added this helpful warning to its support document this week. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs."

But Microsoft didn't describe "what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined..."



[1] https://itsfoss.com/news/new-windows-ai-feature-can-be-tricked/

[2] https://blogs.windows.com/windowsexperience/2025/10/16/securing-ai-agents-on-windows/

[3] https://arstechnica.com/security/2025/11/critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data/



Amazon's AI-Powered IDE Kiro Helps Vibe Coders with 'Spec Mode' (geekwire.com)

(Monday November 24, 2025 @04:34AM (EditorDavid) from the technical-specifications dept.)

A promotional video for Amazon's Kiro software development system took a unique approach, [1]writes GeekWire . "Instead of product diagrams or keynote slides, a crew from Seattle's Packrat creative studio used action figures on a miniature set to create a stop-motion sequence..."

"Can the software development hero conquer the 'AI Slop Monster' to uncover the gleaming, fully functional robot buried beneath the coding chaos?"

> Kiro (pronounced KEE-ro) is Amazon's effort to rethink how developers use AI. It's an integrated development environment that attempts to tame the wild world of vibe coding... But rather than simply generating code from prompts [in "vibe mode"], Kiro breaks down requests into formal specifications, design documents, and task lists [in "spec mode"]. This spec-driven development approach aims to solve a fundamental problem with vibe coding: AI can quickly generate prototypes, but without structure or documentation, that code becomes unmaintainable...

>

> The market for AI-powered development tools is booming. Gartner expects AI code assistants to become ubiquitous, [2]forecasting that 90% of enterprise software engineers will use them by 2028, up from less than 14% in early 2024... Amazon [3]launched Kiro in preview in July, to a strong response. Positive early reviews were tempered by frustration from users unable to gain access. Capacity constraints have since been resolved, and Amazon says more than 250,000 developers used Kiro in the first three months...

>

> Now, the company is taking Kiro out of preview [4]into general availability , rolling out new features and opening the tool more broadly to development teams and companies... During the preview period, Kiro handled more than 300 million requests and processed trillions of tokens as developers explored its capabilities, according to stats provided by the company. Rackspace used Kiro to complete what they estimated as 52 weeks of software modernization in three weeks, according to Amazon executives. SmugMug and Flickr are among other companies espousing the virtues of Kiro's spec-driven development approach. Early users [5]are posting in glowing terms about the efficiencies they're seeing from adopting the tool... startups in most countries [6]can apply for up to 100 free Pro+ seats for a year's worth of Kiro credits.

Kiro offers property-based testing "to verify that generated code actually does what developers specified," according to the article — plus a checkpointing system that "lets developers roll back changes or retrace an agent's steps when an idea goes sideways..."

"And yes, they've been using Kiro to build Kiro, which has allowed them to move much faster."



[1] https://www.geekwire.com/2025/amazons-surprise-indie-hit-kiro-launches-broadly-in-bid-to-reshape-ai-powered-software-development/

[2] https://www.gartner.com/en/newsroom/press-releases/2025-07-01-gartner-identifies-the-top-strategic-trends-in-software-engineering-for-2025-and-beyond

[3] https://www.geekwire.com/2025/amazon-targets-vibe-coding-chaos-with-new-kiro-ai-software-development-tool/

[4] https://kiro.dev/blog/general-availability/

[5] https://x.com/AllAboutJoeX/status/1986833166104162782

[6] https://kiro.dev/blog/one-year-free-for-startups-2025/



Did Bitcoin Play a Role in Thursday's Stock Sell-Off? (msn.com)

(Sunday November 23, 2025 @09:35PM (EditorDavid) from the magic-internet-money dept.)

A week ago [1]Bitcoin was at $93,714 . Saturday it dropped to $85,300.

Late Thursday, market researcher Ed Yardeni blamed some of Thursday's stock market sell-off on "the ongoing plunge in bitcoin's price," [2]reports Fortune :

> "There has been a strong correlation between it and the price of TQQQ, an ETF that seeks to achieve daily investment results that correspond to three times (3x) the daily performance of the Nasdaq-100 Index," [Yardeni wrote in a note]. Yardeni blamed bitcoin's slide on the GENIUS Act, which was enacted on July 18, saying that the regulatory framework it established for stablecoins eliminated bitcoin's transactional role in the monetary system. "It's possible that the rout in bitcoin is forcing some investors to sell stocks that they own," he added... Traders who used leverage to make crypto bets would need to liquidate positions in the event of margin calls.

>

> Steve Sosnick, chief strategist at Interactive Brokers, also said bitcoin could swing the entire stock market, pointing out that it's become a proxy for speculation. "As a long-time systematic trader, it tells me that algorithms are acting upon the relationship between stocks and bitcoin," he [3]wrote in a note on Thursday .
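The "daily" in TQQQ's 3x mandate is the key mechanic: leveraged ETFs reset each day, so over multiple days their return is not simply three times the index's total return. A minimal illustration (made-up numbers, not market data):

```python
# Index goes up 5% one day, down 5% the next.
index_daily = [0.05, -0.05]

index_total, etf_total = 1.0, 1.0
for r in index_daily:
    index_total *= 1 + r       # unleveraged index
    etf_total *= 1 + 3 * r     # 3x leverage, reset daily

print(f"index: {index_total - 1:+.2%}")   # -0.25%
print(f"3x ETF: {etf_total - 1:+.2%}")    # -2.25%, far worse than 3 x -0.25%
```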



[1] https://news.slashdot.org/story/25/11/17/024220/bitcoin-erases-years-gain-as-crypto-bear-market-deepens

[2] https://www.msn.com/en-us/money/markets/wall-street-eyes-a-possible-culprit-in-this-week-s-head-spinning-stock-market-reversal-bitcoin/ar-AA1QXdVo

[3] https://www.interactivebrokers.com/campus/traders-insight/securities/stocks/nvda-to-the-rescue-but-bitcoin-spoils-the-fun/?mod=livecoverage_web



Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI (thenewstack.io)

(Monday November 24, 2025 @04:34AM (EditorDavid) from the pair-programming dept.)

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing.

Microsoft and GitHub "have launched a native integration between [1]Microsoft Defender for Cloud and [2]GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." [3]according to The New Stack :

> The integration, announced this week in San Francisco at the [4]Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up into the right. It didn't matter how good of a [5]detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said [6]Marcelo Oliveira , VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said [7]Andrew Flick , senior director of developer services, languages and tools at Microsoft, in a [8]blog post . Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said.

>

> The integration represents the first native link between runtime intelligence and developer workflows, said [9]Elif Algedik , director of product marketing for cloud and AI security at Microsoft, in a [10]blog post ... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new [11]threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said...

>

> In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue.

GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further.

"We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."



[1] https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-cloud-introduction

[2] https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security

[3] https://thenewstack.io/github-and-microsoft-use-ai-to-fix-security-debt-crisis/

[4] https://ignite.microsoft.com/en-US/home

[5] https://thenewstack.io/is-the-end-of-detection-based-security-here/

[6] https://www.linkedin.com/in/marcelogoliveira22/

[7] https://www.linkedin.com/in/andrewmflick/

[8] https://techcommunity.microsoft.com/blog/appsonazureblog/security-where-it-matters-runtime-context-and-ai-fixes-now-integrated-in-your-de/4470794

[9] https://www.linkedin.com/in/elifalgedik/

[10] https://techcommunity.microsoft.com/blog/microsoftdefendercloudblog/microsoft-defender-for-cloud-innovations-at-ignite-2025/4469386

[11] https://thenewstack.io/modern-attack-methods-jeopardize-cybersecurity-strategies/



PHP 8.5 Brings Long-Awaited Pipe Operator, Adds New URI Tools (theregister.com)

(Sunday November 23, 2025 @09:35PM (EditorDavid) from the piping-at-the-gates-of-dawn dept.)

"PHP 8.5 landed on Thursday with a long-awaited pipe operator and a new standards-compliant URI parser," [1]reports the Register , "marking one of the scripting language's more substantial updates... "

> The pipe operator allows function calls to be chained together, which avoids the extraneous variables and nested statements that might otherwise be involved. Pipes tend to make code more readable than other ways to implement serial operations. Anyone familiar with the Unix/Linux command line or programming languages like [2]R , [3]F# , [4]Clojure , or [5]Elixir may have used the pipe operator. In JavaScript, aka ECMAScript, [6]a pipe operator has been proposed , though there are alternatives like method chaining.

>

> Another significant addition is the [7]URI extension , which allows developers to parse and modify URIs and URLs based on both the RFC 3986 and the WHATWG URL standards. Parsing URIs and URLs — reading them and breaking them down into their different parts — is a rather common task for web-oriented applications. Yet prior versions of PHP didn't include a standards-compliant parser in the standard library. As [8]noted by software developer Tim Düsterhus, the [9]parse_url() function that dates back to PHP 4 doesn't follow any standard and comes with a warning that it should not be used with untrusted or malformed URLs.

>

> Other noteworthy additions to the language include: [10]Clone With , for updating properties more efficiently; the [11]#[\NoDiscard] attribute , for warning when a return value goes unused; the ability to use [12]static closures and first-class callables in constant expressions; and [13]persistent cURL handles that can be shared across multiple PHP requests.
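For readers who haven't met a pipe operator: in PHP 8.5 it's written |>, and it turns inside-out nested calls into a left-to-right chain. Here's a rough Python analogy of the readability difference (Python has no built-in pipe, so a tiny helper stands in; this is not PHP syntax):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread `value` through each function in order, left to right."""
    return reduce(lambda acc, fn: fn(acc), funcs, value)

def strip(s): return s.strip()
def lower(s): return s.lower()
def exclaim(s): return s + "!"

# Nested style: evaluation reads inside-out.
nested = exclaim(lower(strip("  Hello PHP 8.5  ")))

# Piped style: steps read in the order they run.
piped = pipe("  Hello PHP 8.5  ", strip, lower, exclaim)

assert nested == piped == "hello php 8.5!"
```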



[1] https://www.theregister.com/2025/11/20/php_85_lays_pipe_operator/

[2] https://r4ds.had.co.nz/pipes.html

[3] https://camilotk.github.io/fsharp-by-example/chapters/pipe/

[4] https://clojure.org/guides/threading_macros

[5] https://elixirschool.com/en/lessons/basics/pipe_operator

[6] https://tc39.es/proposal-pipeline-operator/

[7] https://www.php.net/releases/8.5/en.php#new-uri-extension

[8] https://thephp.foundation/blog/2025/10/10/php-85-uri-extension/

[9] https://www.php.net/manual/en/function.parse-url.php

[10] https://www.php.net/releases/8.5/en.php#clone-with

[11] https://www.php.net/releases/8.5/en.php#no-discard-attribute

[12] https://www.php.net/releases/8.5/en.php#closures-in-const-expr

[13] https://www.php.net/releases/8.5/en.php#persistent-curl-share-handles



'The Strange and Totally Real Plan to Blot Out the Sun and Reverse Global Warming' (politico.com)

(Sunday November 23, 2025 @05:09PM (EditorDavid) from the giant-geoengineering dept.)

In a 2023 pitch to investors, a "well-financed, highly credentialed" startup named Stardust aimed for a "gradual temperature reduction demonstration" in 2027, [1]according to a massive new 9,600-word article from Politico . ("Annually dispersing ~1 million tons of sun-reflecting particles," says one slide. "Equivalent to ~1% extra cloud coverage.")

"Another page told potential investors Stardust had already run low-altitude experiments using 'test particles'," the article notes:

> [P]ublic records and interviews with more than three dozen scientists, investors, legal experts and others familiar with the company reveal an organization advancing rapidly to the brink of being able to press "go" on its planet-cooling plans. Meanwhile, Stardust is seeking U.S. government contracts and quietly building an influence machine in Washington to lobby lawmakers and officials in the Trump administration on the need for a regulatory framework that it says is necessary to gain public approval for full-scale deployment....

>

> The presentation also included revenue projections and a series of opportunities for venture capitalists to recoup their investments. Stardust planned to sign "government contracts," said a slide with the company's logo next to an American flag, and consider a "potential acquisition" by 2028. By 2030, the deck foresaw a "large-scale demonstration" of Stardust's system. At that point, the company claimed it would already be bringing in $200 million per year from its government contracts and eyeing an initial public offering, if it hadn't been sold already.

The article notes that for "a widening circle of researchers and government officials, Stardust's perceived failures to be transparent about its work and technology have triggered a larger conversation about what kind of international governance framework will be needed to regulate a new generation of climate technologies." (Since currently Stardust and its backers "have no legal obligations to adhere to strenuous safety principles or to submit themselves to the public view.")

In October Politico spoke to Stardust CEO Yanai Yedvab, a former nuclear physicist who was once deputy chief scientist at the Israeli Atomic Energy Commission. Stardust "was ready to announce the $60 million it had raised from 13 new investors," the article points out, " [2]far larger than any previous investment in solar geoengineering."

> [Yedvab] was delighted, he said, not by the money, but what it meant for the project. "We are, like, few years away from having the technology ready to a level that decisions can be taken" — meaning that deployment was still on track to potentially begin on the timeline laid out in the 2023 pitch deck. The money raised was enough to start "outdoor contained experiments" as soon as April, Yedvab said. These would test how their particles performed inside a plane flying at stratospheric heights, some 11 miles above the Earth's surface... The key thing, he insisted, was the particle was "safe." It would not damage the ozone layer and, when the particles fall back to Earth, they could be absorbed back into the biosphere, he said. Though it's impossible to know this is true until the company releases its formula. Yedvab said this round of testing would make Stardust's technology ready to begin a staged process of full-scale, global deployment before the decade is over — as long as the company can secure a government client. To start, they would only try to stabilize global temperatures — in other words fly enough particles into the sky to counteract the steady rise in greenhouse gas levels — which would initially take a fleet of 100 planes.

This raises the question: should the world attempt solar geoengineering?

> That the global temperature would drop is not in question. Britain's Royal Society... said in a report issued in early November that there was little doubt it would be effective. They did not endorse its use, but said that, given the growing interest in this field, there was good reason to be better informed about the side effects... [T]hat doesn't mean it can't have broad benefits when weighed against deleterious climate change, according to Ben Kravitz, a professor of earth and atmospheric sciences at Indiana University who has closely studied the potential effects of solar geoengineering. "There would be some winners and some losers. But in general, some amount of ... stratospheric aerosol injection would likely benefit a whole lot of people, probably most people," he said. Other scientists are far more cautious. The Royal Society report listed a range of potential negative side effects that climate models had displayed, including drought in sub-Saharan Africa. In accompanying documents, it also warned of more intense hurricanes in the North Atlantic and winter droughts in the Mediterranean. But the picture remains partial, meaning there is no way yet to have an informed debate over how useful or not solar geoengineering could be...

>

> And then there's the problem of trying to stop. Because an abrupt end to geoengineering, with all the carbon still in the atmosphere, would cause the temperature to soar suddenly upward with unknown, but likely disastrous, effects... Once the technology is deployed, the entire world would be dependent on it for however long it takes to reduce the trillion or more tons of excess carbon dioxide in the atmosphere to a safe level...

>

> Stardust claims to have solved many technical and safety challenges, especially related to the environmental impacts of the particle, which they say would not harm nature or people. But researchers say the company's current lack of transparency makes it impossible to trust.

Thanks to long-time Slashdot reader [3]fjo3 for sharing the article.



[1] https://www.politico.com/news/magazine/2025/11/21/stardust-geoengineering-janos-pasztor-regulations-00646414?nid=0000014f-1646-d88f-a1cf-5f46b7bd0000&nname=playbook&nrid=00000157-90cc-d19a-afd7-fbff07390002

[2] https://srm360.org/funding-tracker/

[3] https://www.slashdot.org/~fjo3



Are Astronomers Wrong About Dark Energy? (cnn.com)

(Sunday November 23, 2025 @05:09PM (EditorDavid) from the life-the-universe-and-everything dept.)

An anonymous reader shared [1]this report from CNN :

> The universe's expansion might not be accelerating but slowing down, a new study suggests. If confirmed, the finding would upend decades of established astronomical assumptions and rewrite our understanding of dark energy, the elusive force that counters the inward pull of gravity in our universe...

>

> Last year, a consortium of hundreds of researchers using data from the Dark Energy Spectroscopic Instrument (DESI) in Arizona, developed the [2]largest ever 3D map of the universe. The observations hinted at the fact that dark energy may be weakening over time, indicating that the universe's rate of expansion could eventually slow. Now, [3]a study published November 6 in the journal Monthly Notices of the Royal Astronomical Society provides further evidence that dark energy might not be pushing on the universe with the same strength it used to. The DESI project's findings last year represented "a major, major paradigm change ... and our result, in some sense, agrees well with that," said Young-Wook Lee, a professor of astrophysics at Yonsei University in South Korea and lead researcher for the new study....

>

> To reach their conclusions, the researchers analyzed a sample of 300 galaxies containing Type Ia supernovas and posited that the dimming of distant exploding stars was not only due to their moving farther away from Earth, but also due to the progenitor star's age... [Study coauthor Junhyuk Son, a doctoral candidate of astronomy at Yonsei University, said] "we found that their luminosity actually depends on the age of the stars that produce them — younger progenitors yield slightly dimmer supernovae, while older ones are brighter." Son said the team has a high statistical confidence — 99.99% — about this age-brightness relation, allowing them to use Type Ia supernovas more accurately than before to assess the universe's expansion... Eventually, if the expansion continues to slow down, the universe could begin to contract, ending in what astronomers imagine may be the opposite of the big bang — the big crunch. "That is certainly a possibility," Lee said. "Even two years ago, [4]the Big Crunch was out of the question. But we need more work to see whether it could actually happen."

>

> The new research proposes a radical revision of accepted knowledge, so, understandably, it is being met with skepticism. "This study rests on a flawed premise," Adam Riess, a professor of physics and astronomy at the Johns Hopkins University and one of the recipients of the 2011 Nobel Prize in physics, said in an email. "It suggests supernovae have aged with the Universe, yet observations show the opposite — today's supernovae occur where young stars form. The same idea was proposed years ago and refuted then, and there appears to be nothing new in this version." Lee, however, said Riess' claim is incorrect. "Even in the present-day Universe, Type Ia supernovae are found just as frequently in old, quiescent elliptical galaxies as in young, star-forming ones — which clearly shows that this comment is mistaken. The so-called paper that 'refuted' our earlier result relied on deeply flawed data with enormous uncertainties," he said, adding that the age-brightness correlation has been independently confirmed by two separate teams in the United States and China... "Extraordinary claims require extraordinary evidence," Dragan Huterer, a professor of physics at the University of Michigan in Ann Arbor, said in an email, noting that he does not feel the new research "rises to the threshold to overturn the currently favored model...."

>

> The [5]new Vera C. Rubin Observatory , which started operating this year, is set to help settle the debate with the early 2026 launch of the Legacy Survey of Space and Time, an ultrawide and ultra-high-definition time-lapse record of the universe made by scanning the entire sky every few nights over 10 years to capture a compilation of asteroids and comets, exploding stars, and distant galaxies as they change.



[1] https://www.cnn.com/2025/11/20/science/universe-expansion-slow-down-dark-energy

[2] https://newscenter.lbl.gov/2024/04/04/desi-first-results-make-most-precise-measurement-of-expanding-universe/

[3] https://academic.oup.com/mnras/article/544/1/975/8281988

[4] https://science.nasa.gov/asset/hubble/big-crunch/

[5] https://www.cnn.com/2025/06/23/science/vera-rubin-observatory-first-images


