C2PA Is An American Surveillance Network
C2PA grants big tech companies the power to disable our cameras at will. This is unacceptable in today’s geopolitical climate, especially given the European Union’s commitment to being free from foreign digital influence. C2PA cannot survive in its current form.
For those unaware, C2PA is a way to add information (metadata) to media about where it comes from. It can be used to confirm that an image came from a real camera, or that a video was published by a source you trust. It sounds like a great way to combat deepfake misinformation, but it’s a dangerous lie.
Approved By Three-Letter Agencies
C2PA silently leaks the identity of the photo-taker. Even if the C2PA metadata contains no directly identifiable information, the digital signatures used to secure it always leak a unique identifier: the public key. You can’t verify the metadata without knowing the source’s public key, so the key travels with every C2PA-protected image that source produces.
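A toy sketch of this linkage problem (this is not C2PA code; real implementations use X.509 certificates and COSE signatures, but the property illustrated here is the same):

```python
import hashlib

# Toy stand-in for a signature scheme. The names and mechanism are
# hypothetical; the point is that the verifying key ships with the image.
def sign_image(image_bytes: bytes, public_key: str, secret: bytes) -> dict:
    sig = hashlib.sha256(secret + image_bytes).hexdigest()
    return {"image": image_bytes, "signature": sig, "public_key": public_key}

# One camera signs two unrelated photos with the same key pair.
photo_a = sign_image(b"holiday snapshot", "key-1234abcd", b"device-secret")
photo_b = sign_image(b"protest footage", "key-1234abcd", b"device-secret")

# An observer who never breaks the cryptography can still link both
# images to the same device, because verifying either one requires
# the same public key, embedded in both manifests.
print(photo_a["public_key"] == photo_b["public_key"])  # True
```

The signatures themselves differ per image; it is the shared verifying key that acts as the tracking identifier.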
C2PA’s advocates claim they solved privacy, but that’s a lie. Their answer is key rotation: use a unique key for every image, and the C2PA metadata stays anonymous. Yet rotation does nothing to stop corporations from determining who took a given image.
Users of C2PA rely on Certificate Authorities (CAs) to decide which sources to trust. Without CAs, they’d have to vet each public key themselves, independently judging whether the source is trustworthy; C2PA is impractical without them. CAs are also essential for key rotation: since they broker trust to otherwise unknown keys, a device must request a unique key for every image from its CA. And a CA must know to whom it is giving each key, otherwise there’s no basis for trust. The privacy is only an illusion.
CAs maintain records that link keys to specific devices or users. Keeping these records is part of a CA’s job, since it must wield its banning power to keep malicious actors out of the system.
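A minimal sketch of why rotation only yields pseudonymity (the issuance log and function names here are hypothetical, not C2PA’s actual CA interface):

```python
import secrets

# Hypothetical CA issuance log; real CAs keep equivalent records
# in order to revoke or ban devices later.
ca_records: dict[str, str] = {}

def issue_rotated_key(device_id: str) -> str:
    # Each key looks random and unlinkable to the public...
    key = secrets.token_hex(16)
    # ...but the CA must record the owner to know whom to trust
    # (and whom to ban), so anonymity stops at the CA's door.
    ca_records[key] = device_id
    return key

k1 = issue_rotated_key("camera-42")
k2 = issue_rotated_key("camera-42")

print(k1 == k2)                          # False: the keys look unrelated
print(ca_records[k1] == ca_records[k2])  # True: the CA links them anyway
```

Outside observers can’t correlate the rotated keys, but the CA (and anyone who can compel or compromise it) can.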
Yes, they can just ban your camera. Are there any checks on this power? No.
It’s Probably Going To Be Illegal In The EU
C2PA exposes EU citizens to a foreign CA’s unchecked truth-deciding power. Since CAs decide who gets to take C2PA-certified images, they de facto determine which images are considered real, including political content consumed in the EU. A Russian CA could certify a targeted political deepfake as real, and an American CA could silence a European activist.
Using C2PA to determine what people see online would put social media companies in the middle of a geopolitical standoff between the EU, US, Russia, and China.
The EU has worked diligently to maintain sovereignty over its digital infrastructure, and there’s no reason to think it will reverse course soon. The General Data Protection Regulation (GDPR) and the Digital Services Act have set high standards for data protection and platform accountability. C2PA conflicts with that agenda.
And if the EU won’t complain, China or Russia will.
Thus far, C2PA has survived by misleading regulators and the public. Inevitably, there will be pushback against C2PA, and it will be deeply rooted in the desire for independence from foreign influence.
Why is nobody talking about this?
It’s Not Even A Good Lie
Let’s imagine that key rotation is a good idea. How is this even going to scale?
If we want a network of a million devices, each needing unique keys to maintain privacy, the CAs would be overwhelmed with key issuance requests. That’s just one of the fun little obstacles that C2PA’s users face.
What about offline devices? I guess we’ll need to hire a few engineering teams to work that out. How will we maintain that growing list of banned devices, which we’ll need to check every time we look at an image? Perhaps there’s enough VC money to cover that up.
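To make the scaling concern concrete, here is a hypothetical sketch of what per-view revocation checking implies (real deployments would consult CRLs or OCSP responders at the issuing CA, not an in-memory set):

```python
# Hypothetical banned-key list. In practice this would be a CRL or
# an OCSP lookup against the CA, and the list only grows over time.
banned_keys: set[str] = {f"key-{i:08x}" for i in range(1_000_000)}

def image_is_trustworthy(signer_key: str) -> bool:
    # This check must run every single time anyone views an image,
    # for every one of the million devices in the network.
    return signer_key not in banned_keys

print(image_is_trustworthy("key-00000001"))  # False: this device is banned
print(image_is_trustworthy("key-deadbeef"))  # True: not on the list (yet)
```

A set lookup is cheap; distributing, synchronizing, and freshness-checking that ever-growing list across every viewer, including offline ones, is the expensive part.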
Even if these are not unsolvable challenges, I don’t think anyone signed up for OCSP-related mental gymnastics. Oh, and by the way, you’ll need to dodge TruePic’s patents to get any work done.
Designed To Be Unfixable
The framework’s commitment to flawed technologies makes pivoting difficult. They can’t just scrap the use of CA-generated X.509 certificates, they’re essential to how C2PA works. These technological potholes require an overhaul of the system — a move that’s unlikely given the internal politics and substantial existing investments of the corporations involved.
They even introduced more sensible ways of managing devices … then scrapped them. C2PA was once designed to work with W3C Verifiable Credentials, but that ship has now sailed.
As far as Adobe, Microsoft, and Amazon are concerned, the population will accept a sub-par solution so long as it carries their sticker of approval.
We Tried To Fix C2PA
I’ll keep this part brief since there’s more to say about C2PA.
We developed See3, a privacy-preserving, scalable, and moderation-friendly alternative to C2PA. It’s based on a new cryptographic tool: cryptographic descriptors.
Cryptographic descriptors distribute trust by eliminating the need for CAs, replacing them with less powerful entities. They are adaptable, efficient, and won’t anger the EU.
You can learn more about See3 by clicking here.
The Conclusion
The flaws in C2PA are too significant to ignore. Supporting C2PA means promoting corporate surveillance, escalating geopolitical conflict, and possibly breaking social media platforms. It’s time to embrace technologies that empower users, protect privacy, and distribute trust.
How do you envision the future of digital content authentication? Share your thoughts or reach out to us at Veracity Labs to explore how cryptographic descriptors can make a difference in your field.