News: 0000826772

Image "Cloaking" for Personal Privacy

([Security] Jul 22, 2020 22:43 UTC (Wed) (jake))


[1]SAND Lab at the University of Chicago has [2]announced Fawkes, which is a BSD-licensed privacy-protection tool [3]available on GitHub. "At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail."



[1] http://sandlab.cs.uchicago.edu/

[2] http://sandlab.cs.uchicago.edu/fawkes/

[3] https://github.com/Shawn-Shan/fawkes
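
The announcement does not spell out the algorithm, but the general idea behind this kind of "cloaking" is an adversarial perturbation: pixel changes kept within a small budget that nonetheless move the image's representation inside a feature extractor. The Python sketch below only illustrates that general idea; it is not the Fawkes algorithm. The ResNet-18 backbone, file names, and all parameters are stand-ins chosen for the example.

# Illustrative sketch of a cloaking-style adversarial perturbation (NOT the
# actual Fawkes algorithm): nudge an image's pixels, within a small budget,
# so a feature extractor maps it far from its original embedding while the
# change stays visually negligible.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def cloak(image_path, out_path, epsilon=0.03, steps=40, lr=0.01):
    """Apply a bounded pixel perturbation that shifts the image's features."""
    # Stand-in feature extractor; a real attack would target the embeddings
    # used by the facial-recognition model being trained.
    backbone = models.resnet18()
    backbone.fc = torch.nn.Identity()  # keep the penultimate features
    backbone.eval()

    to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])
    x = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        original_features = backbone(x)

    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        perturbed = (x + delta).clamp(0.0, 1.0)
        features = backbone(perturbed)
        # Maximize distance from the original embedding (minimize its negative).
        loss = -F.mse_loss(features, original_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation within an L-infinity budget so it stays
        # imperceptible to the human eye.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    cloaked = (x + delta).clamp(0.0, 1.0).squeeze(0)
    T.ToPILImage()(cloaked).save(out_path)

if __name__ == "__main__":
    cloak("me.jpg", "me_cloaked.png")

The actual tool additionally steers the features toward a different identity so that a model trained on cloaked photos learns a distorted representation; this sketch only shows the bounded-perturbation mechanics.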

Image "Cloaking" for Personal Privacy

It's BSD-licensed, yes, but the "copyright" section of their README says this:

> This code is intended only for personal privacy protection or academic research.

>

> We are currently exploring the filing of a provisional patent on the Fawkes algorithm.

I'm going to assume the first line is just a standard CYA "there's no warranty" disclaimer, and not an actual condition on use (because it would flatly contradict the LICENSE file). However, the patent plan is a great deal more alarming, and in fact, I'm not sure I can recommend using this thing as long as that sentence remains there. It basically amounts to "You can do what you like with our software, but we could turn around and sue you at any time, once the USPTO rubber-stamps our patent."

Image "Cloaking" for Personal Privacy

It's BSD-licensed, yes, but the "copyright" section of their README says this:

> This code is intended only for personal privacy protection or academic research.

>

> We are currently exploring the filing of a provisional patent on the Fawkes algorithm.

I'm going to assume the first line is just a standard CYA "there's no warranty" disclaimer, and not an actual condition on use (because it would flatly contradict the LICENSE file). However, the patent is a great deal more alarming, and in fact, I'm not sure I can recommend using this thing as long as that sentence remains there. It basically amounts to "You can do what you like with our software, but we could turn around and sue you at any time, once the USPTO rubber stamps our patent."

Image "Cloaking" for Personal Privacy

> The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.

This assumes not only that all existing facial-recognition systems are vulnerable to their specific tweaking approach, but that future ones will be too. The modified photos will be out there indefinitely.

A lot of research is being done on solving this class of weakness -- it's a serious problem for self-driving vehicles too; there have been demonstrations of minor changes to street signs that lead to completely different recognition outcomes.

While their list of current systems fooled is impressive, I think the absolute assurance of privacy given here is unwarranted.
