Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com)

(Sunday April 06, 2025 @11:34AM (EditorDavid) from the model-citizens dept.)

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including [1]model and [2]data poisoning, [3]prompt injection, [4]prompt leaking and [5]prompt evasion.)

So as part of the Linux Foundation's nonprofit [6]Open Source Security Foundation , and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable [7]model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog.

> [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

>

> Since its launch, [8]Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

>

> The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using [9]Sigstore , a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

>

> Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.")

> Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

>

> To shape the future of building tamper-proof ML, join the [10]Coalition for Secure AI , where we are [11]planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.
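Conceptually, signing a model that is "a directory tree" means first reducing the whole tree to one digest and then signing that digest. Here is a minimal sketch of the first step (illustrative only: the real model-signing package defines its own serialization format and delegates the actual signing to Sigstore, and `manifest_digest` is a hypothetical name):

```python
import hashlib
from pathlib import Path

def manifest_digest(model_dir: str) -> str:
    """Reduce a model directory tree to a single hex digest.

    Every file is hashed and recorded against its relative path, then the
    manifest itself is hashed, so one signature over the result covers all
    weight files and any bundled code. Conceptual sketch only; the real
    model-signing library uses its own serialization and Sigstore.
    """
    entries = []
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            file_hash = hashlib.sha256(path.read_bytes()).hexdigest()
            entries.append(f"{path.relative_to(model_dir).as_posix()}:{file_hash}")
    # One digest over the whole manifest: this is what actually gets signed.
    return hashlib.sha256("\n".join(entries).encode()).hexdigest()
```

Any weight file that is modified, added, or removed produces a different digest, which is why a signature over the digest detects tampering anywhere in the tree.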



[1] https://userpages.cs.umbc.edu/hpirsiav/papers/hidden_aaai20.pdf

[2] https://blog.barracuda.com/2024/04/03/generative-ai-data-poisoning-manipulation

[3] https://hiddenlayer.com/innovation-hub/indirect-prompt-injection-of-claude-computer-use/

[4] https://hiddenlayer.com/innovation-hub/prompt-injection-attacks-on-llms/

[5] https://hiddenlayer.com/innovation-hub/the-tactics-and-techniques-of-adversarial-ml/

[6] http://openssf.org/

[7] https://pypi.org/project/model-signing/

[8] https://saif.google/

[9] https://www.sigstore.dev/

[10] https://www.coalitionforsecureai.org/

[11] https://github.com/cosai-oasis/ws1-supply-chain/issues/4



Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp)

(Sunday April 06, 2025 @11:34AM (EditorDavid) from the accounting-errors dept.)

[1]Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that [2]they're "making progress" on its rollout...

> Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.

They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to [3]finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and [4]initiate the Terms of Service rollout. [Since launching 20 years ago, PyPI's terms of service have only been updated twice.])

In other news, the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) feature, as described in [5]Python Enhancement Proposal 770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component."

> The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common in scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing it ahead of its final acceptance into Python's packaging standards. Seth Larson has begun implementing code that tools can use when adopting the PEP, such as a [6]project which abstracts different Linux system package managers' functionality to resolve a file path back to the metadata of the package that provides it.

>

> Security developer-in-residence Seth Larson will be speaking about this project at [7]PyCon US 2025 in Pittsburgh, PA in a talk titled "[8]Phantom Dependencies: is your requirements.txt haunted?"
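The consuming side of the proposal can be sketched in a few lines. This assumes the provisional `.dist-info/sboms` layout described above and CycloneDX-style JSON documents (an assumption; other SBOM formats exist), with `bundled_components` as a hypothetical helper name:

```python
import json
from pathlib import Path

def bundled_components(dist_info_dir: str) -> list[tuple[str, str]]:
    """List (name, version) pairs of bundled components described by a
    package's SBOM documents.

    Looks under <dist-info>/sboms/ per the provisional PEP 770 layout and
    assumes CycloneDX-style JSON with a top-level "components" list; both
    are assumptions for illustration, not the finalized standard.
    """
    components = []
    for sbom_path in sorted(Path(dist_info_dir, "sboms").glob("*.json")):
        doc = json.loads(sbom_path.read_text())
        for component in doc.get("components", []):
            components.append((component.get("name", ""),
                               component.get("version", "")))
    return components
```

A vulnerability scanner could feed these pairs into its advisory lookup instead of guessing which C or Rust libraries a wheel bundles.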

Meanwhile, InfoWorld reports that the newly approved Python Enhancement Proposal 751 [9]will also give Python a standard lock file format.



[1] https://developers.slashdot.org/story/23/04/24/0040250/pythons-pypi-will-sell-organization-accounts-to-corporate-projects-to-fund-staff

[2] https://mailchi.mp/python/python-software-foundation-july-2024-newsletter-19878179

[3] https://policies.python.org/pypi.org/Terms-of-Service/

[4] https://blog.pypi.org/posts/2025-02-25-terms-of-service/

[5] https://discuss.python.org/t/pep-770-improving-measurability-of-python-packages-with-software-bill-of-materials/76308

[6] https://pypi.org/project/whichprovides/

[7] https://us.pycon.org/2025

[8] https://us.pycon.org/2025/schedule/presentation/14/

[9] https://www.infoworld.com/article/3951671/understand-pythons-new-lock-file-format.html



New Tinder Game 'Lets You Flirt With AI Characters. Three of Them Dumped Me' (msn.com)

(Monday April 07, 2025 @03:34AM (EditorDavid) from the speed-dumping dept.)

Tinder "is experimenting with a chatbot that claims to help users improve their flirting skills," [1]notes Washington Post internet-culture reporter Tatum Hunter. The chatbot is available only to users in the United States on iPhones for a limited time. Powered by OpenAI's GPT-4o, each character "kicks off an improvised conversation, and the user responds out loud with something flirty..."

"Three of them dumped me."

> You can win points for banter the app deems "charming" or "playful." You lose points if your back-and-forth seems "cheeky" or "quirky"... It asked me to talk out loud into my phone and win the romantic interest of various AI characters.

>

> The first scenario involved a financial analyst named Charles, whom I've supposedly run into at the Tokyo airport after accidentally swapping our luggage. I tried my best to be polite to the finance guy who stole my suitcase, asking questions about his travel and agreeing to go to coffee. But the game had some critical feedback: I should try to connect more emotionally using humor or stories from my life. My next go had me at a Dallas wedding trying to flirt with Andrew, a data analyst who had supposedly stumbled into the venue, underdressed, because he'd been looking for a quiet spot to ... analyze data. This time I kept things playful, poking fun at Andrew for crashing a wedding. Andrew didn't like that. I'd "opted to disengage" by teasing this person instead of helping him blend in at the wedding, the app said. A failure on my part, apparently — and also a reminder why generative AI doesn't belong everywhere...

>

> Going in, I was worried Tinder's AI characters would outperform the people I've met on dating apps and I'd fall down a rabbit hole of robot love. Instead, they behaved in a way typical for chatbots: Drifting toward biased norms and failing to capture the complexity of human emotions and interactions. The "Game Game" seemed to replicate the worst parts of flirting — the confusion, the unclear expectations, the uncomfortable power dynamics — without the good parts, like the spark of curiosity about another person. Tinder released the feature on April Fools' Day, likely as a bid for impressions and traffic. But its limitations overshadowed its novelty...

>

> Hillary Paine, Tinder's vice president of product, growth and revenue, said in an email that AI will play a "big role in the future of dating and Tinder's evolution." She said the game is meant to be silly and that the company "leaned into the campiness." Gen Z is a socially anxious generation, Paine said, and this age group is willing to endure a little cringe if it leads to a "real connection."

The article suggests it's another example of companies "eager to incorporate this newish technology, often without considering whether it adds any value for users." But "As apps like Tinder and Bumble lose users amid '[2]dating app burnout,' the companies are turning to AI to win new growth." (The dating app Rizz "uses AI to autosuggest good lines to use," while Teaser "spins up a chatbot that's based on your personality, meant to talk and behave like you would during a flirty chat," and people "are forming relationships with [3]AI companion bots by the millions.") And the companion-bot company Replika "boasts more than 30 million users..."



[1] https://www.msn.com/en-us/news/technology/tinder-lets-you-flirt-with-ai-characters-three-of-them-dumped-me/ar-AA1CdFyY

[2] https://www.yahoo.com/lifestyle/dating-apps-gotten-bad-speed-161412318.html

[3] https://www.msn.com/en-us/news/technology/ai-companions-can-relieve-loneliness-here-are-four-red-flags-to-watch-for-in-your-chatbot-friend/ar-BB1m6Goe



OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge (arstechnica.com)

(Sunday April 06, 2025 @11:34AM (EditorDavid) from the prompt-ruling dept.)

Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023, and [1] Ars Technica summarizes OpenAI's response: the New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..."

> OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet.

>

> But on Friday, [2]U.S. District Judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons...

>

> OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them."

>

> The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said that the NYT has "plausibly" alleged contributory infringement, showing through more than 100 pages of examples of ChatGPT outputs and media reports showing that ChatGPT could regurgitate portions of paywalled news articles that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."



[1] https://arstechnica.com/tech-policy/2025/04/judge-doesnt-buy-openai-argument-nyts-own-reporting-weakens-copyright-suit/

[2] https://cdn.arstechnica.net/wp-content/uploads/2025/04/NYT-v-OpenAI-Opinion-4-4-25.pdf



Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com)

(Sunday April 06, 2025 @05:40PM (EditorDavid) from the bootloader-bugs dept.)

Slashdot reader [1]zlives shared [2]this report from BleepingComputer :

> Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

>

> GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, 9 buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

>

> The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks [3]like BlackLotus achieved this through malware infections.
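The "integer and buffer overflows in filesystem parsers" Microsoft describes are a classic bug class: a size field read from an untrusted disk image overflows a fixed-width multiplication, so a deceptively small allocation is followed by a large copy. A sketch of the pattern and its fix, simulating 32-bit arithmetic (illustrative only, not actual GRUB2 code):

```python
MASK32 = 0xFFFFFFFF  # model a 32-bit size type

def alloc_size_unsafe(count: int, item_size: int) -> int:
    """Size computation as a 32-bit parser might do it: the product
    silently wraps, so a huge on-disk count can yield a tiny allocation
    that a later copy loop then overruns."""
    return (count * item_size) & MASK32

def alloc_size_checked(count: int, item_size: int):
    """Safe variant: detect wraparound before trusting the product."""
    if count != 0 and item_size > MASK32 // count:
        return None  # would overflow: reject the on-disk structure
    return count * item_size
```

The checked division test is the standard pre-multiplication guard; in C, helpers like `__builtin_mul_overflow` serve the same purpose.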

Microsoft titled its blog post "[4]Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-Boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that during their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content."

> Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings...

>

> As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also [5]announced Sec-Gemini v1 , "a new experimental AI model focused on advancing cybersecurity AI frontiers."



[1] https://www.slashdot.org/~zlives

[2] https://www.bleepingcomputer.com/news/security/microsoft-uses-ai-to-find-flaws-in-grub2-u-boot-barebox-bootloaders/

[3] https://www.bleepingcomputer.com/news/security/blacklotus-bootkit-bypasses-uefi-secure-boot-on-patched-windows-11/

[4] https://www.microsoft.com/en-us/security/blog/2025/03/31/analyzing-open-source-bootloaders-finding-vulnerabilities-faster-with-ai/

[5] https://it.slashdot.org/story/25/04/04/2035236/google-launches-sec-gemini-v1-ai-model-to-improve-cybersecurity-defense



Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com)

(Sunday April 06, 2025 @11:34AM (EditorDavid) from the sad-news dept.)

[1]Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of [2]TekLibre ."

But on X.com Eric S. Raymond called him " [3]one of the unsung heroes of the Internet , and a close friend of mine who I will miss very badly."

> Dave, known on X as [4]@mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.

Also on X, [5]legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)

Long-time Slashdot reader [6]TheBracket [7]remembers him as "the driving force behind [8]the Bufferbloat project and a contributor to FQ-CoDel and CAKE in the Linux kernel."

> Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project.
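For readers unfamiliar with the work being remembered: CoDel (RFC 8289), which Täht championed and which FQ-CoDel and CAKE build on, fights bufferbloat by dropping packets when queueing delay stays above a small target, with drops accelerating the longer the condition persists. A sketch of that control law, simplified from the RFC's pseudocode:

```python
import math

TARGET = 0.005    # 5 ms: acceptable standing queueing delay
INTERVAL = 0.100  # 100 ms: how long delay must exceed TARGET before dropping

def next_drop_gap(drop_count: int) -> float:
    """Gap until CoDel's next drop once it is in the dropping state.

    The control law shrinks the gap as INTERVAL / sqrt(count): a queue
    whose delay stays above TARGET is drained ever more aggressively,
    while short bursts never trigger drops at all.
    """
    return INTERVAL / math.sqrt(drop_count)
```

This is why CoDel-managed routers keep latency low for gaming and video streaming without the manual queue tuning older AQM schemes required.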

Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I knew him he went blind in one eye and was diagnosed with multiple sclerosis."

> He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

>

> In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. [9] Ave atque vale , Dave.

Weeks before his death Täht was still active on X.com, retweeting LWN's article about [10]"The AI scraperbot scourge" , an announcement [11]from Texas Instruments , and even [12]a Slashdot headline .

Täht [13]was also Slashdot reader #603,670 , submitting stories [14]about network latency , leaving [15]comments about AI , and making announcements about [16]the Bufferbloat project .



[1] https://en.wikipedia.org/wiki/Dave_T%C3%A4ht

[2] http://www.teklibre.net/

[3] https://x.com/esrtweet/status/1907401538093416621

[4] https://x.com/mtaht

[5] https://x.com/id_aa_carmack/status/1907459628897587216

[6] https://slashdot.org/~TheBracket

[7] https://slashdot.org/submission/17334845/dave-tht-has-passed-away-aged-59

[8] https://www.bufferbloat.net/projects/

[9] https://en.wikipedia.org/wiki/Hail_and_Farewell

[10] https://t.co/EpF4k7EWRr

[11] https://x.com/mtaht/status/1899813471518093682

[12] https://x.com/mtaht/status/1901951648991203789

[13] https://slashdot.org/~mtaht/

[14] https://tech.slashdot.org/story/21/12/05/0225227/comcast-reduced-working-latency-by-90-with-aqm-is-this-the-future?

[15] https://tech.slashdot.org/comments.pl?cid=64262236&sid=23235502&tid=84

[16] https://linux.slashdot.org/story/11/02/26/038249/got-buffer-bloat#



Scientists Warn Indonesia's Rice Megaproject Faces Failure (science.org)

(Saturday April 05, 2025 @09:34PM (msmash) from the according-to-the-science dept.)

Indonesian President Prabowo Subianto's ambitious plan to create 1 million hectares of new rice farms in eastern Merauke Regency faces strong criticism from scientists who have warned [1]it will fail due to unsuitable soils and climate . Military "food brigades" are currently guarding bulldozers clearing swampy forests in Indonesian New Guinea for the project, which aims to boost food self-sufficiency for the nation's 281 million people.

Soil scientists warn that Merauke's conditions could lead to acidic soils unable to support economically viable rice farming, potentially resulting in abandoned fields vulnerable to wildfires. "Farmers will get no profit at all," said Dwi Andreas, a soil scientist at Bogor Agricultural University who tested 12 rice varieties in similar soils with poor results.

The initiative mirrors past failed megaprojects, including a 1990s attempt to convert 1 million hectares of Borneo peatlands to rice paddies and a 2020 onion and potato farming expansion in North Sumatra that saw 90% of fields abandoned. A previous 2010 attempt to expand rice farming in Merauke also failed, destroying forests that Indigenous Papuans relied on and increasing childhood malnutrition, according to anthropologist Laksmi Adriani.



[1] https://www.science.org/content/article/indonesia-s-planned-rice-megaproject-doomed-fail



A Busy Hurricane Season is Expected. Here's How It Will Be Different From the Last (washingtonpost.com)

(Sunday April 06, 2025 @03:34AM (msmash) from the issued-in-public-interest dept.)

An anonymous reader shares a report:

> Yet another busy hurricane season is likely across the Atlantic this year -- but some of the conditions that supercharged storms like Hurricanes Helene and Milton in 2024 have waned, according to a key forecast issued Thursday.

>

> A warm -- yet no longer record-hot -- strip of waters across the Atlantic Ocean is [1]forecast to help fuel development of 17 named tropical cyclones during the season that runs from June 1 through Nov. 30, according to Colorado State University researchers. Of those tropical cyclones, nine are forecast to become hurricanes, with four of those expected to reach "major" hurricane strength.

>

> That would mean a few more tropical storms and hurricanes than in an average year, yet slightly quieter conditions than those observed across the Atlantic basin last year. This time last year, researchers from CSU were warning of an "extremely active" hurricane season with nearly two dozen named tropical storms. The next month, the National Oceanic and Atmospheric Administration released an aggressive forecast, warning the United States could face one of its worst hurricane seasons in two decades.

>

> The forecast out Thursday underscores how warming oceans and cyclical patterns in storm activity have primed the Atlantic basin for what is now a decades-long string of frequent, above-normal -- but not necessarily hyperactive -- seasons, said Philip Klotzbach, a senior research scientist at Colorado State and the forecast's lead author.



[1] https://www.msn.com/en-us/weather/meteorology/a-busy-hurricane-season-is-expected-here-s-how-it-will-be-different-from-the-last/ar-AA1Cevu8



Bonobos May Combine Words In Ways Previously Thought Unique To Humans (theguardian.com)

(Sunday April 06, 2025 @03:34AM (BeauHD) from the really-cool-findings dept.)

A new study shows bonobos [1]can combine vocal calls in ways that mirror human language , producing phrases with meanings beyond the sum of individual sounds. "Human language is not as unique as we thought," said Dr Melissa Berthet, the first author of the research from the University of Zurich. Another author, Dr Simon Townsend, said: "The cognitive building blocks that facilitate this capacity is at least 7m years old. And I think that is a really cool finding." The Guardian reports:

> Writing in the journal Science, Berthet and colleagues said that in human language, words were often combined to produce phrases that either had a meaning that was simply the sum of its parts, or a meaning that was related to, but differed from, those of the constituent words. "'Blond dancer' -- it's a person that is both blond and a dancer, you just have to add the meanings. But a 'bad dancer' is not a person that is bad and a dancer," said Berthet. "So bad is really modifying the meaning of dancer here." It was previously thought animals such as birds and chimpanzees were only able to produce the former type of combination, but scientists have found bonobos can create both.

>

> The team recorded 700 vocalizations from 30 adult bonobos in the Democratic Republic of the Congo, checking the context of each against a list of 300 possible situations or descriptions. The results reveal bonobos have seven different types of call, used in 19 different combinations. Of these, 15 require further analysis, but four appear to follow the rules of human sentences. Yelps -- thought to mean "let's do that" -- followed by grunts -- thought to mean "look at what I am doing" -- were combined to make "yelp-grunt," which appeared to mean "let's do what I'm doing." The combination, the team said, reflected the sum of its parts and was used by bonobos to encourage others to build their night nests.

>

> The other three combinations had a meaning apparently related to, but different from, their constituent calls. For example, the team found a peep -- which roughly means "I would like to ..." -- followed by a whistle -- appeared to mean "let's stay together" -- could be combined to create "peep-whistle." This combination was used to smooth over tense social situations, such as during mating or displays of prowess. The team speculated its meaning was akin to "let's find peace." The team said the findings in bonobos, together with the previous work in chimps, had implications for the evolution of language in humans, given all three species showed the ability to combine words or vocalizations to create phrases.



[1] https://www.theguardian.com/science/2025/apr/03/bonobos-combine-words-ways-previously-unique-humans-study



Wikimedia Drowning in AI Bot Traffic as Crawlers Consume 65% of Resources

(Saturday April 05, 2025 @05:34PM (msmash) from the closer-look dept.)

Web crawlers collecting training data for AI models are [1]overwhelming Wikipedia's infrastructure , with bot traffic growing exponentially since early 2024, according to the Wikimedia Foundation. According to data released April 1, bandwidth for multimedia content has surged 50% since January, primarily from automated programs scraping Wikimedia Commons' 144 million openly licensed media files.

This unprecedented traffic is causing operational challenges for the non-profit. When Jimmy Carter [2]died in December 2024 , his Wikipedia page received 2.8 million views in a day, while a 1.5-hour video of his 1980 presidential debate caused network traffic to double, resulting in slow page loads for some users.

Analysis shows 65% of the foundation's most resource-intensive traffic comes from bots, despite bots accounting for only 35% of total pageviews. The foundation's Site Reliability team now routinely blocks overwhelming crawler traffic to prevent service disruptions. "Our content is free, our infrastructure is not," the foundation said, announcing plans to establish sustainable boundaries for automated content consumption.



[1] https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/

[2] https://news.slashdot.org/story/24/12/30/0251249/when-jimmy-carter-spoke-at-a-wireless-tradeshow



Fram2 Crew Returns To Earth After Polar Orbit Mission (cnn.com)

(Sunday April 06, 2025 @03:34AM (BeauHD) from the mission-accomplished dept.)

SpaceX's Fram2 mission returned safely after becoming the [1]first crewed spaceflight to orbit directly over Earth's poles . From a report:

> Led by cryptocurrency billionaire Chun Wang, who is the financier of this mission, the Fram2 crew has been free-flying through orbit since Monday. The group splashed down at 9:19 a.m. PT, or 12:19 p.m. ET, off the coast of California -- the first West Coast landing in SpaceX's five-year history of human spaceflight missions. The company livestreamed the splashdown and recovery of the capsule on [2]its website .

>

> During the journey, the Fram2 crew members were slated to carry out various research projects, including capturing images of auroras from space and documenting their experiences with motion sickness. [...] This trip is privately funded, and such missions allow for SpaceX's customers to spend their time in space as they see fit. For Fram2, the crew traveled to orbit prepared to carry out 22 research and science experiments, some of which were designed and overseen by SpaceX. Most of the research involves evaluating crew health.



[1] https://edition.cnn.com/2025/04/04/science/spacex-fram2-mission-return-earth/index.html

[2] https://www.spacex.com/launches/mission/?missionId=fram2



Two Teenagers Built 'Cal AI', a Photo Calorie App With Over a Million Users (techcrunch.com)

(Saturday April 05, 2025 @09:34PM (BeauHD) from the bright-futures dept.)

An anonymous reader quotes a report from TechCrunch:

> In a world filled with "vibe coding," Zach Yadegari, teen founder of Cal AI, stands in ironic, old-fashioned contrast. Ironic because Yadegari and his co-founder, Henry Langmack, are [1]both just 18 years old and still in high school . Yet their story, so far, is a classic. Launched in May, Cal AI has generated over 5 million downloads in eight months, Yadegari says. Better still, he tells TechCrunch that the customer retention rate is over 30% and that the app generated over $2 million in revenue last month. [...]

>

> The concept is simple: Take a picture of the food you are about to consume, and let the app log calories and macros for you. It's not a unique idea. For instance, the big dog in calorie counting, MyFitnessPal, has its Meal Scan feature. Then there are apps like SnapCalorie, which was released in 2023 and created by the founder of Google Lens. Cal AI's advantage, perhaps, is that it was built wholly in the age of large image models. It uses models from Anthropic and OpenAI and RAG to improve accuracy and is trained on open source food calorie and image databases from sites like GitHub.

>

> "We have found that different models are better with different foods," Yadegari tells TechCrunch. Along the way, the founders coded through technical problems like recognizing ingredients from food packages or in jumbled bowls. The result is an app that the creators say is 90% accurate, which appears to be good enough for many dieters.
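Cal AI's actual architecture is not public, but the quote above suggests routing each photo to whichever model performs best for that kind of food. A purely hypothetical sketch of such per-category routing, with invented model names and accuracy numbers:

```python
# Hypothetical per-food-category model routing, inspired by the quote above.
# Cal AI's real implementation is not public; the model names and accuracy
# figures below are invented for illustration only.

# Invented benchmark results: accuracy of each candidate model per category.
ACCURACY = {
    "packaged":   {"model_a": 0.94, "model_b": 0.88},
    "mixed_bowl": {"model_a": 0.81, "model_b": 0.90},
    "plain":      {"model_a": 0.92, "model_b": 0.91},
}

def pick_model(category: str) -> str:
    """Route an image to whichever model benchmarks best for its category."""
    scores = ACCURACY[category]
    return max(scores, key=scores.get)

print(pick_model("packaged"))    # model_a
print(pick_model("mixed_bowl"))  # model_b
```

In practice the category itself would come from a cheap classifier run before the expensive vision model, which is one plausible way "different models are better with different foods" turns into a routing decision.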

The report says Yadegari began mastering Python and C# in middle school and went on to build his first business in ninth grade -- a website called Totally Science that gave students access to unblocked games (cleverly named to evade school filters). He sold the company at age 16 to FreezeNova for $100,000.

Following the sale, Yadegari immersed himself in the startup scene, watching Y Combinator videos and networking on X, where he met co-founder Blake Anderson, known for creating ChatGPT-powered apps like RizzGPT. Together, they launched Cal AI and moved to a hacker house in San Francisco to develop their prototype.



[1] https://techcrunch.com/2025/03/16/photo-calorie-app-cal-ai-downloaded-over-a-million-times-was-built-by-two-teenagers/



An Interactive-Speed Linux Computer Made of Only 3 8-Pin Chips (dmitry.gr)

(Saturday April 05, 2025 @05:34PM (BeauHD) from the homebrew-computing dept.)

Software engineer and longtime Slashdot reader Dmitry Grinberg ( [1]dmitrygr ) shares a recent project they've been working on: " [2]an interactive-speed Linux on a tiny board you can easily build with only 3 8-pin chips ":

> There was a time when one could order a kit and assemble a computer at home. It would do just about what a contemporary store-bought computer could do. That time is long gone. Modern computers are made of hundreds of huge complex chips with no public datasheets and many hundreds of watts of power supplied to them over complex power delivery topologies. It does not help that modern operating systems require gigabytes of RAM, terabytes of storage, and always-on internet connectivity to properly spy on you. But what if one tried to fit a modern computer into a kit that could be easily assembled at home? What if the kit only had three chips, each with only 8 pins? Can it be done? Yes.

The system runs a custom MIPS emulator written in ARMv6 assembly and includes a custom bootloader that supports firmware updates via FAT16-formatted SD cards. Clever pin-sharing hacks let all components (RAM, SD card, serial I/O) share the six usable I/O pins. Overclocked to 150MHz, the board boots to a full Linux shell in about a minute and emulates a MIPS CPU at an effective speed of roughly 1.65MHz.

It's not fast, writes Dmitry, but it's fully functional -- you can edit files, compile code, and even install Debian packages. A kit may be made available if a partner is found.
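Dmitry's real emulator is hand-written ARMv6 assembly, but the core of any such emulator is a fetch/decode/execute loop. A minimal Python sketch of that loop, decoding just two MIPS instructions (ADDIU and the R-type ADDU) as an illustration:

```python
# Hypothetical sketch of the fetch/decode/execute loop at the heart of a MIPS
# emulator (the real one in this project is hand-written ARMv6 assembly).
# Only two instructions are decoded: ADDIU (opcode 0x09) and R-type ADDU.

def step(regs, mem, pc):
    instr = mem[pc // 4]               # fetch one 32-bit instruction word
    op = instr >> 26                   # top 6 bits select the opcode
    rs = (instr >> 21) & 0x1F          # source register field
    rt = (instr >> 16) & 0x1F          # target register field
    if op == 0x09:                     # ADDIU rt, rs, imm16 (sign-extended)
        imm = instr & 0xFFFF
        if imm & 0x8000:
            imm -= 0x10000
        regs[rt] = (regs[rs] + imm) & 0xFFFFFFFF
    elif op == 0x00 and (instr & 0x3F) == 0x21:   # ADDU rd, rs, rt
        rd = (instr >> 11) & 0x1F
        regs[rd] = (regs[rs] + regs[rt]) & 0xFFFFFFFF
    regs[0] = 0                        # $zero is hardwired to 0 in MIPS
    return pc + 4

# addiu $1, $0, 5 ; addiu $2, $0, 7 ; addu $3, $1, $2
prog = [0x24010005, 0x24020007, 0x00221821]
regs, pc = [0] * 32, 0
while pc < len(prog) * 4:
    pc = step(regs, prog, pc)
print(regs[3])   # 12
```

A real emulator decodes the full instruction set, handles memory-mapped I/O and exceptions, and (as here) pays an interpretation penalty on every instruction, which is why a 150MHz host ends up delivering only ~1.65MHz of emulated MIPS.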



[1] https://slashdot.org/~dmitrygr

[2] https://dmitry.gr/?r=05.Projects&proj=36.%208pinLinux



AT&T Email-To-Text Gateway Service Ending (att.com)

(Saturday April 05, 2025 @11:34AM (BeauHD) from the end-of-an-era dept.)

Longtime Slashdot reader [1]CyberSlugGump shares a support article from AT&T, writing:

> On June 17th, AT&T will stop supporting email-to-text messages. That means you won't be able to send a text message to an AT&T customer from an email address. You can still get in touch with AT&T customers using SMS (text), MMS, and standard email services.



[1] https://slashdot.org/~CyberSlugGump



Midjourney Releases V7, Its First New AI Image Model In Nearly a Year

(Saturday April 05, 2025 @11:34AM (BeauHD) from the new-and-improved dept.)

Midjourney's new [1]V7 image model features a revamped architecture with [2]smarter text prompt handling, higher image quality, and default personalization based on user-rated images . While some features like upscaling aren't yet available, it does come with a faster, cheaper Draft Mode. TechCrunch reports:

> To use it, you'll first have to rate around 200 images to build a Midjourney "personalization" profile, if you haven't already. This profile tunes the model to your individual visual preferences; V7 is Midjourney's first model to have personalization switched on by default. Once you've done that, you'll be able to turn V7 on or off on Midjourney's website and, if you're a member of Midjourney's Discord server, on its Discord chatbot. In the web app, you can quickly select the model from the drop-down menu next to the "Version" label.

>

> Midjourney CEO David Holz described V7 as a "totally different architecture" in a [3]post on X . "V7 is ... much smarter with text prompts," Holz continued in an announcement on Discord. "[I]mage prompts look fantastic, image quality is noticeably higher with beautiful textures, and bodies, hands, and objects of all kinds have significantly better coherence on all details." V7 is available in two flavors, Turbo (costlier to run) and Relax, and powers a new tool called Draft Mode that renders images at 10x the speed and half the cost of the standard mode. Draft images are of lower quality than standard-mode images, but they can be enhanced and re-rendered with a click.

>

> A number of standard Midjourney features aren't available yet for V7, according to Holz, including image upscaling and retexturing. Those will arrive in the near future, he said, possibly within two months. "This is an entirely new model with unique strengths and probably a few weaknesses," Holz wrote on Discord. "[W]e want to learn from you what it's good and bad at, but definitely keep in mind it may require different styles of prompting. So play around a bit."



[1] https://www.midjourney.com/updates/v7-alpha

[2] https://techcrunch.com/2025/04/03/midjourney-releases-its-first-new-ai-image-model-in-nearly-a-year/

[3] https://x.com/DavidSHolz/status/1908007345495638337



NSA Warns 'Fast Flux' Threatens National Security (arstechnica.com)

(Saturday April 05, 2025 @11:34AM (BeauHD) from the PSA dept.)

An anonymous reader quotes a report from Ars Technica:

> A technique that hostile nation-states and financially motivated ransomware groups are using to hide their operations poses a threat to critical infrastructure and national security, the National Security Agency has warned. The technique is known as fast flux. It allows decentralized networks operated by threat actors to hide their infrastructure and survive takedown attempts that would otherwise succeed. Fast flux [1]works by cycling through a range of IP addresses and domain names that these botnets use to connect to the Internet. In some cases, IPs and domain names change every day or two; in other cases, they change almost hourly. The constant flux complicates the task of isolating the true origin of the infrastructure. It also provides redundancy. By the time defenders block one address or domain, new ones have already been assigned.

>

> "This technique poses a significant threat to national security, enabling malicious cyber actors to consistently evade detection," the NSA, FBI, and their counterparts from Canada, Australia, and New Zealand [2]warned Thursday . "Malicious cyber actors, including cybercriminals and nation-state actors, use fast flux to obfuscate the locations of malicious servers by rapidly changing Domain Name System (DNS) records. Additionally, they can create resilient, highly available command and control (C2) infrastructure, concealing their subsequent malicious operations."

There are two variations of fast flux described in the advisory: single flux and double flux. Single flux involves mapping a single domain to a rotating pool of IP addresses using DNS A (IPv4) or AAAA (IPv6) records. This constant cycling makes it difficult for defenders to track or block the associated malicious servers since the addresses change frequently, yet the domain name remains consistent.

Double flux takes this a step further by also rotating the DNS name servers themselves. In addition to changing the IP addresses of the domain, it cycles through the name servers using NS (Name Server) and CNAME (Canonical Name) records. This adds an additional layer of obfuscation and resilience, complicating takedown efforts.
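The difference between the two variants can be seen from the resolver's side: in single flux only the A records churn between lookups, while in double flux the name servers churn too. A minimal simulation (not a real DNS resolver; the domain and server names are invented):

```python
import itertools

# Hypothetical simulation of fast-flux DNS rotation as seen by a resolver.
# A single-flux domain maps one name to a rotating pool of A records; double
# flux also rotates the authoritative name servers (NS records) themselves.

class FluxDomain:
    def __init__(self, name, ip_pool, ns_pool=None):
        self.name = name
        self._ips = itertools.cycle(ip_pool)    # rotating A records
        self._ns = itertools.cycle(ns_pool or ["ns1.static.example"])

    def resolve(self):
        """Return the (name server, IP) pair this lookup would observe."""
        return next(self._ns), next(self._ips)

# Single flux: the name server stays fixed; only the IP churns per lookup.
single = FluxDomain("malware.example",
                    ["203.0.113.5", "198.51.100.7", "192.0.2.9"])

# Double flux: both the name server and the IP churn per lookup.
double = FluxDomain("c2.example",
                    ["203.0.113.5", "198.51.100.7"],
                    ns_pool=["ns1.bot.example", "ns2.bot.example"])

for _ in range(3):
    print("single:", single.resolve())
    print("double:", double.resolve())
```

Running this shows why blocklists lag: by the time a defender blocks one observed (NS, IP) pair, subsequent lookups already return different ones, and in the double-flux case even the name server being queried has moved.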

"A key means for achieving this is the use of Wildcard DNS records," notes Ars. "These records define zones within the Domain Name System, which map domains to IP addresses. The wildcards cause DNS lookups for subdomains that do not exist, specifically by tying MX (mail exchange) records used to designate mail servers. The result is the assignment of an attacker IP to a subdomain such as malicious.example.com, even though it doesn't exist." Both methods typically rely on large botnets of compromised devices acting as proxies, making it challenging for defenders to trace or disrupt the malicious activity.



[1] https://arstechnica.com/security/2025/04/nsa-warns-that-overlooked-botnet-technique-threatens-national-security/

[2] https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-093a



Trump Extends TikTok Deadline For the Second Time (cnbc.com)

(Saturday April 05, 2025 @11:34AM (BeauHD) from the art-of-the-deal dept.)

For the second time, President Trump has [1]extended the deadline for ByteDance to divest TikTok's U.S. operations by 75 days . The TikTok deal "requires more work to ensure all necessary approvals are signed," said Trump in [2]a post on his Truth Social platform. The extension will "keep TikTok up and running for an additional 75 days."

"We hope to continue working in Good Faith with China, who I understand are not very happy about our Reciprocal Tariffs (Necessary for Fair and Balanced Trade between China and the U.S.A.!)," Trump added. CNBC reports:

> ByteDance has been in discussion with the U.S. government, the company told CNBC, adding that any agreement will be subject to approval under Chinese law. "An agreement has not been executed," a spokesperson for ByteDance said in a statement. "There are key matters to be resolved." Before Trump's decision, ByteDance faced an April 5 deadline to carry out a "qualified divestiture" of TikTok's U.S. business as required by a national security law [3]signed by former President Joe Biden in April 2024 .

>

> ByteDance's original deadline to sell TikTok was on Jan. 19, but Trump [4]signed an executive order when he took office the next day that gave the company 75 more days to make a deal. Although the law would penalize internet service providers and app store owners like Apple and Google for hosting and providing services to TikTok in the U.S., Trump's executive order instructed the attorney general to not enforce it.

"This proves that Tariffs are the most powerful Economic tool, and very important to our National Security!," Trump said in the Truth Social post. "We do not want TikTok to 'go dark.' We look forward to working with TikTok and China to close the Deal. Thank you for your attention to this matter!"



[1] https://www.cnbc.com/2025/04/04/trumps-extends-tiktok-second-time.html

[2] https://truthsocial.com/@realDonaldTrump/posts/114280893859636366

[3] https://news.slashdot.org/story/24/04/24/1514216/biden-signs-tiktok-divest-or-ban-bill-into-law

[4] https://yro.slashdot.org/story/25/01/21/055229/executive-order-delays-tiktok-ban-for-75-days



Google Launches Sec-Gemini v1 AI Model To Improve Cybersecurity Defense

(Saturday April 05, 2025 @11:34AM (BeauHD) from the AI-all-the-things dept.)

Google has [1]introduced Sec-Gemini v1, an experimental AI model [2]built on its Gemini platform and tailored for cybersecurity . BetaNews reports:

> Sec-Gemini v1 is built on top of Gemini, but it's not just some repackaged chatbot. Actually, it has been tailored with security in mind, pulling in fresh data from sources like Google Threat Intelligence, the OSV vulnerability database, and Mandiant's threat reports. This gives it the ability to help with root cause analysis, threat identification, and vulnerability triage.

>

> Google says the model performs better than others on two well-known benchmarks. On CTI-MCQ, which measures how well models understand threat intelligence, it scores at least 11 percent higher than competitors. On CTI-Root Cause Mapping, it edges out rivals by at least 10.5 percent. Benchmarks only tell part of the story, but those numbers suggest it's doing something right.

Access is currently limited to select researchers and professionals for early testing. If you meet those criteria, you can request access [3]here .



[1] https://security.googleblog.com/2025/04/google-launches-sec-gemini-v1-new.html

[2] https://betanews.com/2025/04/04/google-launches-sec-gemini-v1-ai-model-to-improve-cybersecurity-defense/

[3] https://docs.google.com/forms/d/1MBVz-2Zf7u8fEiZlP2_Kw_ZIlu-NQ372dkodFhqcYaQ/edit



AI Avatar Tries To Argue Case Before a New York Court (apnews.com)

(Saturday April 05, 2025 @11:34AM (BeauHD) from the nice-try dept.)

An anonymous reader quotes a report from the Associated Press:

> It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen -- a person about to present an argument in a lawsuit -- [1]not only had no law degree, but didn't exist at all . The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division's First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute. "The appellant has submitted a video for his argument," said Justice Sallie Manzanet-Daniels. "Ok. We will hear that video now."

>

> On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater. "May it please the court," the man began. "I come here today a humble pro se before a panel of five distinguished justices." "Ok, hold on," Manzanet-Daniels said. "Is that counsel for the case?" "I generated that. That's not a real person," Dewald answered. It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased. "It would have been nice to know that when you made your application. You did not tell me that sir," Manzanet-Daniels said before yelling across the room for the video to be shut off. "I don't appreciate being misled," she said before letting Dewald continue with his argument.

>

> Dewald later penned an apology to the court, saying he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words. In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing. "The court was really upset about it," Dewald conceded. "They chewed me up pretty good." [...] As for Dewald's case, it was still pending before the appeals court as of Thursday.



[1] https://apnews.com/article/artificial-intelligence-ai-courts-nyc-5c97cba3f3757d9ab3c2e5840127f765



Microsoft Employee Disrupts 50th Anniversary and Calls AI Boss 'War Profiteer' (theverge.com)

(Saturday April 05, 2025 @03:00AM (msmash) from the stranger-things dept.)

An anonymous reader shares a report:

> A Microsoft employee [1]disrupted the company's 50th anniversary event to protest its use of AI. "Shame on you," said Microsoft employee Ibtihal Aboussad, speaking directly to Microsoft AI CEO Mustafa Suleyman. "You are a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you all celebrate when Microsoft is killing children. Shame on you all."



[1] https://www.theverge.com/news/643670/microsoft-employee-protest-50th-annivesary-ai




Sam: What's new, Norm?
Norm: Most of my wife.
-- Cheers, The Spy Who Came in for a Cold One

Coach: Beer, Norm?
Norm: Naah, I'd probably just drink it.
-- Cheers, Now Pitching, Sam Malone

Coach: What's doing, Norm?
Norm: Well, science is seeking a cure for thirst. I happen
to be the guinea pig.
-- Cheers, Let Me Count the Ways