EU privacy regulators are concerned about law requiring child abuse scan


Empheria wrote:

As a result, I can hardly imagine that you would then think of storing your material in the cloud, where you do not have full control over it.

CORRECTION: the EC proposal is not primarily about scanning material in the cloud (which is increasingly encrypted), but about searching for “unwanted” material on end-user devices (especially smartphones) at the moment files are not encrypted, i.e. “Client-Side Scanning”.

Effectively a kind of “antivirus” in which governments determine what counts as unwanted material.

Personally, I have far more sympathy for cloud providers checking that there is nothing illegal on their own storage systems (which could otherwise make them criminally liable) than for them, or OS/app makers, snooping through my devices.

Empheria also wrote:

By scanning material in the ‘white circuit’, in my opinion you only catch the small fish.

And a particularly foolish measure at that, because it is not at all difficult to circumvent (see below).

More importantly: with this you will probably hardly ever catch the producers of child pornography, the abusers and their camera operators. What I would like to see achieved is that zero children (or at least as few as possible) become victims of these ruthless egoists.

So in the end this is only treating symptoms. Moreover, I fear it is an illusion that we can track down so many viewers, and either lock them up until they lose interest (walker age?) or successfully “re-educate” them (probably about as promising as “curing” homosexuality), that the producers’ earnings model disappears.

As an aside, computer-generated child pornography as an alternative to the real thing is debatable: it could incite “live action”. But if that risk were really high, I would wonder what violence in films and games does to some people.

Detection of imagery can essentially be bypassed in two ways:

1) The user views the images on a different (non-backdoored) device than the one used to exchange them.
Less chance of being caught, more “hassle”.

Examples: an extra layer of encryption, steganography (background), digital image editing (although a large proportion of edits will not be enough to prevent recognition) and custom transformations, such as a kind of “slide puzzle” in which pixels are swapped in a certain pattern according to a “key”, and/or an “exclusive or” (XOR) of the (e.g. RGB) values. “Disadvantage”: you cannot then view the images (and other material) with impunity in the backdoored app, or not on that device at all if the scan function sits in the OS.

2) The user views the images on the same (backdoored) device used to exchange them.
As soon as it leaks (which is inevitable) how the scanner and its definitions work, apps will appear that modify the images in such a way that they remain “worth viewing” but are no longer detected, in much the same way that virus scanners are fooled en masse.

If the scan function sits in the OS and scans all files, viewing can be done in a special non-backdoored app, which may be able to fool the OS by not storing files in a “detectable” form but reconstructing them in image memory (if the EC scanner also scans the image memory and the definitions “see” prohibited material there, one will have to fall back on option 1). Because of the cat-and-mouse game to be expected here, the chance of being caught is greater.

In any case, research (source) shows:

[…] In a large-scale evaluation, we show perceptual hashing-based client-side scanning mechanisms to be highly vulnerable to detection avoidance attacks in a black-box setting, with more than 99.9% of images successfully attacked while preserving the content of the image. […]

Because grooming, for example, involves chatting with or calling a child, technical tricks to avoid detection are a lot harder there (but see below).

Empheria further wrote:

Then the question of proportionality comes into play: Are there really enough of these small fish to justify that mistakes can also be made and/or that the technology can be abused?

Those errors will certainly be made: allegedly (plus article), the EC scanner produces a lot of false positives.

For grooming there would be 10% false positives (apparently “rounded down” from the 12% that Microsoft claims to achieve, which was probably optimistic as well): those people are immediately suspects and may end up indefinitely in police databases (possibly with a checkmark next to “ever associated with grooming”).

Moreover, with grooming you want to act quickly: the safest thing for the child is to arrest the suspect immediately (which also gives the neighbours something to watch) and only then find out whether the suspicion was justified. Afterwards, as far as those neighbours are concerned, “where there is smoke, there is fire”. At what numbers is this still proportional?

Regarding images: if, with some image editing, you can get two totally different pictures to yield the same neural hash (a neural hash is something quite different from a cryptographic hash), then it seems clear to me not only that it is not difficult to manipulate an image so that most bytes of its neural hash end up with a different value, but also that the chance of false positives is very high.
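To illustrate that difference, here is a minimal sketch of my own (it is not the scanner from the EC proposal; a simple “average hash” stands in for a perceptual/neural hash, and it assumes Python with Pillow installed). Changing a single pixel typically leaves the perceptual hash unchanged, while the cryptographic hash changes completely:

```python
# Toy illustration (my own sketch, NOT the EC scanner): an "average hash"
# as a simple stand-in for a neural/perceptual hash, versus SHA-256.
import hashlib
from PIL import Image  # assumes Pillow is installed

def average_hash(img, size=8):
    """64-bit perceptual hash: 1 where a downscaled pixel is brighter than the mean."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return f"{int(bits, 2):016x}"

def crypto_hash(img):
    """Cryptographic hash over the raw pixel bytes."""
    return hashlib.sha256(img.convert("L").tobytes()).hexdigest()

# Synthetic 256x256 gradient image, plus a copy with a single pixel changed.
original = Image.new("L", (256, 256))
original.putdata([(x + y) % 256 for y in range(256) for x in range(256)])
modified = original.copy()
modified.putpixel((0, 0), 255)

print("aHash  original:", average_hash(original))
print("aHash  modified:", average_hash(modified))              # typically identical
print("SHA256 original:", crypto_hash(original)[:16], "...")
print("SHA256 modified:", crypto_hash(modified)[:16], "...")   # completely different
```

Precisely because such a hash deliberately tolerates differences between images, it can also match two images that have nothing to do with each other.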

Finally, researchers (other than those mentioned above) have scrutinized the EC plan (source) and write about it, among other things (“CSS” = Client-Side Scanning):

[…] the European Union decided to propose forced CSS to combat and prevent child sexual abuse and weaken encryption. CSS is mass surveillance of personal property, pictures and text, without considerations of privacy and cybersecurity and the law. […]

The risk of misuse for other investigative purposes within the EU may be limited, but as soon as this technology is built into apps and/or devices, regimes that respect human rights poorly or not at all will also want to use this scanner (to trace owners of material that those regimes deem undesirable).

A highly disproportionate measure, in my opinion. Worse, the time and energy we spend on this risky nonsense could be better used to tackle the root of the problem and to make many more people (especially young men) aware of the enormous emotional damage they inflict on children for the brief “pleasure” of themselves and the viewers.

This is pure tech solutionism: it creates more problems than it solves. Which lobbyists are actually behind this, and what is their goal?

Edits up to 17:11: textual corrections and a small addition.

Edit 17:25: the URL to the two (nice) totally different photos with the same neural hash was wrong.

[Comment edited by ErikvanStraten on 30 July 2022 17:25]