July 9th, 2024

Effective CSAM filters are impossible because what CSAM is depends on context

Filters struggle to detect child sexual exploitation materials due to contextual nuances. Technical solutions like hashing or AI lack context, leading to privacy invasion and mislabeling. Effective prevention requires holistic interventions over technological reliance.

Effective filters for detecting child sexual exploitation materials (CSAM) face a significant challenge due to the contextual nature of what constitutes CSAM. The distinction between innocent content and CSAM depends heavily on the context in which the material is shared. Even identical images can be harmless when exchanged within a family but become CSAM when circulated in inappropriate settings. Technical solutions, such as hashing algorithms or AI, are often proposed but fall short in accurately identifying CSAM without invading privacy or mislabeling innocent content. The key issue lies in the fact that crucial contextual information, like the relationship between individuals and the nature of their communication, is not embedded in the media files themselves. Without access to this context, any filtering tool would struggle to differentiate between harmless and harmful content effectively. Addressing the serious problem of sexual exploitation of children requires a comprehensive approach beyond relying on technological solutions, emphasizing the need for targeted programs and interventions instead of unrealistic filtering mechanisms.
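To make the limitation concrete, here is a minimal sketch of hash-list matching, the kind of mechanism typically proposed (real systems use perceptual hashes such as PhotoDNA rather than plain SHA-256; the hash values here are purely illustrative placeholders). Note that the only input to the check is the file's bytes: the relationship between sender and recipient and the conversation the image appears in never enter the decision.

```python
import hashlib
from pathlib import Path

# Purely illustrative placeholder digest; a real deployment would use a
# curated database of perceptual hashes, not a hand-written SHA-256 value.
KNOWN_HASHES: set[str] = {
    "0f343b0931126a20f133d67c2b018a3b1e3a0e1b2c4d5e6f708192a3b4c5d6e7",
}

def matches_known_hash(path: Path) -> bool:
    """Return True if the file's SHA-256 digest appears in the known-hash set.

    Note what is *not* an argument here: who sent the file, to whom, and why.
    That context is exactly what the article argues determines whether the
    material is harmful, and it cannot be recovered from the bytes alone.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES
```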

Related

EU cancels vote on private chat app law amid encryption concerns

The European Union cancels vote on law targeting child sexual abuse material over encryption concerns. Proposed measures involve scanning images on messaging apps, sparking privacy debates among member states. Negotiations ongoing.

Simple ways to find exposed sensitive information

Various methods to find exposed sensitive information are discussed, including search engine dorking, Github searches, and PublicWWW for hardcoded API keys. Risks of misconfigured AWS S3 buckets are highlighted, stressing data confidentiality.

'Skeleton Key' attack unlocks the worst of AI, says Microsoft

Microsoft warns of "Skeleton Key" attack exploiting AI models to generate harmful content. Mark Russinovich stresses the need for model-makers to address vulnerabilities. Advanced attacks like BEAST pose significant risks. Microsoft introduces AI security tools.

Can a law make social media less 'addictive'?

New York passed laws to protect children on social media: SAFE for Kids Act requires parental consent for addictive feeds and limits notifications; Child Data Protection Act restricts data collection. Debate ensues over enforceability and unintended consequences.

Google's Nonconsensual Explicit Images Problem Is Getting Worse

Google is struggling with the rise of nonconsensual explicit image sharing online. Despite some efforts to help victims remove content, advocates push for stronger measures to protect privacy, citing the company's capability based on actions against child sexual abuse material.

6 comments
By @havkom - 3 months
The doctor example is a good one; however, it may not only be health communication between a parent and a doctor but also among parents themselves, trying to figure out what action (if any) to take for a condition.

The hesitation to communicate effectively because of CSAM filtering may actually cost lives.

By @neutered_knot - 3 months
I’m not advocating widespread filtering in this answer.

This article essentially refers only to edge cases that require context to determine whether images are CSAM. There are other cases where context is not needed and determining that something is CSAM is sadly easy.

For example, there is no appropriate context for images or videos of adults raping toddlers or infants.

By @S0y - 3 months
A good way to treat CSAM is to see it as radioactive material: you always want it to be accounted for. Some people have legitimate uses for it (x-ray machines, nuclear power plants, etc.); these would be the doctor example in this article. And then you have the actual illegal uses, like someone trying to build a bomb in their backyard. In that sense, a CSAM filter shouldn't assume anything about the content, just be able to identify that it's content that needs to be accounted for.

By @advisedwang - 3 months
The article is correct; however, most people would rather have CSAM filters with errors than just throw up our hands and say no CSAM detection is possible.

The solution is to have a human-in-the-loop appeals process. And because companies don't want to pay for this, it should be enforced through regulation.