Curl Warns GitHub About 'Malicious Unicode' Security Issue (daniel.haxx.se)
- Reference: 0177572249
- News link: https://developers.slashdot.org/story/25/05/17/0420236/curl-warns-github-about-malicious-unicode-security-issue
- Source link: https://daniel.haxx.se/blog/2025/05/16/detecting-malicious-unicode/
The change "looked identical to the ASCII version, so it was not possible to visually spot this..."
> The impact of changing one or more letters in a URL can of course be devastating depending on conditions... [W]e have implemented checks to help us poor humans spot things like this. To detect malicious Unicode. We have added [2]a CI job that scans all files and validates every UTF-8 sequence in the git repository.
>
> In the curl git repository most files and most content are plain old ASCII so we can "easily" whitelist a small set of UTF-8 sequences and some specific files, the rest of the files are simply not allowed to use UTF-8 at all as they will then fail the CI job and turn up red. In order to drive this change home, we went through all the test files in the curl repository and made sure that all the UTF-8 occurrences were instead replaced by other kind of escape sequences and similar. Some of them were also used more or less by mistake and could easily be replaced by their ASCII counterparts.
>
> The next time someone tries this stunt on us it could be someone with less good intentions, but now ideally our CI will tell us... We want and strive to be proactive and tighten everything before malicious people exploit some weakness somewhere but security remains this never-ending race where we can only do the best we can and while the other side is working in silence and might at some future point attack us in new creative ways we had not anticipated. That future unknown attack is a tricky thing.
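The check described in the excerpt boils down to: reject anything that is not valid UTF-8, and reject any non-ASCII code point unless the file or the sequence is explicitly allowlisted. Curl's actual implementation is the Perl CI script linked as [2]; what follows is only a rough Python sketch of the same idea, with made-up allowlists:

import sys
from pathlib import Path

ALLOWED_UTF8_FILES = {"docs/THANKS"}   # hypothetical: files permitted to contain UTF-8
ALLOWED_CODEPOINTS = {0x00E9}          # hypothetical: individually permitted non-ASCII code points

def scan(path: Path) -> list[str]:
    data = path.read_bytes()
    try:
        text = data.decode("utf-8")    # malformed UTF-8 fails the check immediately
    except UnicodeDecodeError as err:
        return [f"{path}: invalid UTF-8 at byte offset {err.start}"]
    if path.as_posix() in ALLOWED_UTF8_FILES:
        return []
    problems = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for ch in line:
            if ord(ch) > 0x7F and ord(ch) not in ALLOWED_CODEPOINTS:
                problems.append(f"{path}:{lineno}: disallowed non-ASCII U+{ord(ch):04X}")
    return problems

if __name__ == "__main__":
    issues = [msg for name in sys.argv[1:] for msg in scan(Path(name))]
    if issues:
        print("\n".join(issues))
    sys.exit(1 if issues else 0)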
In the original blog post Stenberg complained he got "barely no responses" from GitHub (joking "perhaps they are all just too busy implementing the next AI feature we don't want.") But hours later he posted an update.
"GitHub has told me they have raised this as a security issue internally and they are working on a fix."
[1] https://daniel.haxx.se/blog/2025/05/16/detecting-malicious-unicode/
[2] https://github.com/curl/curl/blob/master/.github/scripts/spacecheck.pl
Re: (Score:2)
> pretty much every major open source package is compromised at this point. The fact that once every few years these infiltration attempts are barely caught (xz, now this) just goes to show how many get through.
Or it goes to show that almost nobody is actively trying, and attempts happen only every few years. The absence of detecting attacks is not inherently a defect in detection; it can also be a lack of attacks.
More highlighting (Score:2)
If it can refine the difference to highlight which word in a line was different, maybe it could use a different color (if moving to a [1]3-color process [britannica.com] isn't too much more expensive) for which characters in that line are different. Or have a checkbox to temporarily highlight non-ASCII UTF-8 characters.
[1] http://www.britannica.com/technology/trichromatic-printing
Re: (Score:2)
> If it can refine the difference to highlight which word in a line was different, maybe it could use a different color (if moving to a [1]3-color process [britannica.com] isn't too much more expensive) for which characters in that line are different. Or have a checkbox to temporarily highlight non-ASCII UTF-8 characters.
Various diff utilities do highlight things at the character level.
[1] http://www.britannica.com/technology/trichromatic-printing
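For what it's worth, character-level comparison is also easy to do outside the browser. A small sketch using Python's difflib; the URL and the Cyrillic 'і' (U+0456) standing in for the homoglyph are illustrative, not the actual characters from the incident:

import difflib

old = "Find the file at https://githubusercontent.com/mozilla-firefox/file.json"
new = old.replace("githubusercontent", "g\u0456thubusercontent")  # swap 'i' for Cyrillic 'і'

matcher = difflib.SequenceMatcher(None, old, new)
for op, a0, a1, b0, b1 in matcher.get_opcodes():
    if op != "equal":
        # A single one-character 'replace' opcode pinpoints the swapped letter,
        # which a UI could render in its own color.
        print(f"{op}: old[{a0}:{a1}] = {old[a0:a1]!r} -> new[{b0}:{b1}] = {new[b0:b1]!r}")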
I imagine Github will tout a CoPilot solution (Score:2)
However, looking for this sort of shenanigans seems like something that could've (and maybe should've) been at least semi-automated a couple decades ago - search for characters outside the typical ASCII range and flag those parts for human review.
No reason we can't automate (Score:2)
> However, looking for this sort of shenanigans seems like something that could've (and maybe should've) been at least semi-automated a couple decades ago - search for characters outside the typical ASCII range and flag those parts for human review.
An automated review is not that difficult. For each ASCII character there can be a list of visually similar characters. For example, a Latin (ASCII) 'a' would have a Cyrillic 'a' on its list.
U+0061: U+0430, ...
Flagging everything would also include characters that do not look the same, which would read like false positives; those could maybe be lower-priority warnings, with visually similar characters treated as higher-priority ones. A rough sketch of that mapping idea follows.
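The table here is tiny and hypothetical (a real tool would start from Unicode's confusables data), but it shows the two-tier warning described above:

# Hypothetical, far-from-complete homoglyph table: ASCII letter -> lookalikes
HOMOGLYPHS = {
    "a": {"\u0430"},  # Cyrillic а
    "e": {"\u0435"},  # Cyrillic е
    "i": {"\u0456"},  # Cyrillic і
    "o": {"\u043e"},  # Cyrillic о
}
LOOKALIKES = {ch for glyphs in HOMOGLYPHS.values() for ch in glyphs}

def warnings(text):
    for offset, ch in enumerate(text):
        if ord(ch) <= 0x7F:
            continue  # plain ASCII, nothing to flag
        priority = "HIGH: visually similar to ASCII" if ch in LOOKALIKES else "low: non-ASCII"
        yield offset, ch, priority

for offset, ch, priority in warnings("g\u0456thubusercontent.com"):
    print(f"offset {offset}: U+{ord(ch):04X} -> {priority}")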
7 Bit ASCII (Score:1)
Programs should be written in 7-bit ASCII, like in the good old days.
You need that 8th bit Re:7 Bit ASCII (Score:1)
EBCDIC is the [1]One True Standard [xkcd.com].
[1] https://xkcd.com/927/
Re: 7 Bit ASCII (Score:2)
Or even better, 6-bit BCD code like in good old Fortran on an IBM 7094 mainframe.
Unicode is a bug (Score:1)
Vertical double quotes.
Closing double quotes. Opening double quotes.
Homoglyphs.
Arbitrary number of bytes per glyph.
If it ain't ascii it isn't worth expressing in bytes.
Re: (Score:2)
Unicode is fucking ridiculous and so are standards bodies who seem to be entirely composed of zero experts and just industry insiders. Javascript is even worse and the web as a whole is getting progressively worse.
Re: (Score:2)
> If it ain't ascii it isn't worth expressing in bytes.
If you exclusively speak American English then you can say everything fits in US ASCII ... but many people who, reasonably, want to express themselves in their own language will want other characters. And the "everything" is not entirely true even for Americans, e.g. 1/100 of a dollar is a cent, which is U+00A2 - a character which slashdot will not display correctly.
Re: (Score:2)
Yeah, but to be fair, Unicode was invented to put those in.
Also to be fair, and I don't want to be fair, Unicode and multilanguage websites, where the content owners hounded me forever to get the orthography right in 7 languages, were a source of significant and ongoing pain and irritation... but that is actually the whole point of it. Apparently, not everyone speaks ASCII.
Re: (Score:2)
> Yeah, but to be fair, Unicode was invented to put those in.
More specifically, Unicode was invented to provide a standard encoding for all living languages. Anything currently used in books, magazines, newspapers, etc.
It was later expanded to include dead languages to help researchers.
Re: (Score:2)
> Arbitrary number of bytes per glyph.
Yes and no. That's mostly a result of the encoding, UTF-8 vs. UTF-32, although there would still be some glyphs that are composed of multiple code points. To oversimplify, imagine two characters, 'A' and '`', combining into an accented-A glyph.
FWIW, UTF-8 is not difficult to decode, so doing comparisons or detecting malformed UTF-8 isn't too much work. As part of defensive programming I check for proper UTF-8 encoding on any input. It's a write-once, use-many-times sort of thing.
> If it ain't ascii it isn't worth expressing in bytes.
Bytes, i.e. the UTF-8 encoding of code points
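A minimal version of that input check, written as Python for illustration; a strict UTF-8 decoder already rejects truncated sequences, overlong forms and surrogate halves:

def is_valid_utf8(data: bytes) -> bool:
    try:
        data.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError:
        return False

assert is_valid_utf8("h\u00e9llo".encode("utf-8"))  # well-formed multi-byte sequence
assert not is_valid_utf8(b"\xc0\xaf")               # overlong encoding of '/'
assert not is_valid_utf8(b"\xed\xa0\x80")           # lone UTF-16 surrogate, invalid in UTF-8
assert not is_valid_utf8(b"\xe2\x82")               # truncated 3-byte sequence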
Spoofing attacks are old (Score:2)
Wikipedia used to have sockpuppet accounts that spoofed admin usernames until they implemented the antispoof feature. Then there was the spoofing from punycode domain names, and Colombian domains spoofing .co.uk.
Package Managers (Score:2)
Many traditional distros still ship unusably old versions of some packages - due to some network dependency they literally don't work anymore.
Some are buggy with upstream fixes (e.g. nvme tool) and just don't work. "Wait a year and we'll ship a version that works".
This pushes people to use upstream packages, which oftentimes come with update scripts that run as root.
These would be an ideal place for a malicious "contributor" to put in an update URL he controls.
It would be better for the distros to remove t
AI would have caught this (Score:2)
That's indeed one of the use cases that an AI can catch more easily than a human.
Patch (Simplified as I couldn't copy&paste from the screenshot):
--- test1.txt 2025-05-17 20:56:18.097357631 +0200
+++ test2.txt 2025-05-17 20:56:33.357317426 +0200
@@ -1 +1 @@
-Find the file at [1]https://githubusercontent.com/... [githubusercontent.com]
+Find the file at [2]https:/// [https]ithubusercontent.com/mozilla-firefox/file.json
Instruction: "Describe the changes done in this patch"
Input: (the patch)
AI:
In this patch, the following changes were made:
1. **Re
[1] https://githubusercontent.com/mozilla-firefox/file.json
[2] https:/
Re: (Score:2)
Also note that the LLM did get the actual code point (first question) and the script (second question) wrong. In the AI's defense: it was only a small 12B model.
Not everything needs AI (Score:2)
> That's indeed one of the use cases that an AI can catch more easily than a human.
A very small amount of non-AI code could also catch it. Not everything needs AI.
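For example, a hedged sketch of such a check: flag any added line in a unified diff that contains a character outside ASCII and hand those lines to a human (the sample patch below is made up):

def flag_added_non_ascii(diff_text: str):
    """Yield (line number, line) for added lines that contain non-ASCII characters."""
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        if line.startswith("+") and not line.startswith("+++"):
            if any(ord(ch) > 0x7F for ch in line):
                yield lineno, line

patch = (
    "--- a/doc.txt\n"
    "+++ b/doc.txt\n"
    "@@ -1 +1 @@\n"
    "-Find the file at https://githubusercontent.com/\n"
    "+Find the file at https://g\u0456thubusercontent.com/\n"   # Cyrillic і as a stand-in homoglyph
)
for lineno, line in flag_added_non_ascii(patch):
    print(f"patch line {lineno}: {line!r}")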
Re: (Score:2)
You're 100% right! I couldn't agree more! Slashdot works fine without it.
Re: Yes, unicode is a security issue (Score:2)
I'm always surprised it's the only site that can't handle basic apostrophes. I wouldn't be doling out credit for slashdot's inadequacies.
Re: (Score:3)
> And I don't miss the stupid emojis either.
:-(
Re: (Score:2)
The question is: DNS being ASCII-based, browsers do a [1]conversion [wikipedia.org] of a multi-byte Unicode domain name to ASCII (the "punycode" ASCII domain name is created beforehand). Did we really need this?
[1] https://en.wikipedia.org/wiki/Punycode
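For reference, that conversion is exposed directly in many languages; a small illustration with a made-up domain, using Python's built-in "idna" codec (which implements the older IDNA 2003 rules, but is enough to show the shape of it):

label = "b\u00fccher.example"       # "bücher.example", a made-up internationalized name
ascii_form = label.encode("idna")   # DNS-safe ASCII form with the xn-- prefix
print(ascii_form)                   # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))    # round-trips back to 'bücher.example'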