Tag Archives: Hardware Security Modules

Challenges in Digital Content Authentication and the Persistent Battle Against Fakes

Efforts have been made for years to detect modified content by enabling content-creation devices, such as cameras, to digitally sign or watermark the content they produce. Significant efforts in this area include the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity. However, these initiatives face numerous issues, including privacy concerns and fundamental flaws in their operation, as discussed here.
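
To make the basic mechanism concrete, here is a minimal sketch in Python, using the cryptography package, of a device-held key signing an image and a verifier checking that signature. The key and placeholder image bytes are illustrative; real schemes such as C2PA wrap this in manifests and certificate chains.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for a key that would actually live in the camera's secure element.
device_key = ec.generate_private_key(ec.SECP256R1())

image_bytes = b"...raw image bytes captured by the device..."

# The device signs the content it produced (real schemes sign a hash inside a manifest).
signature = device_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

# A verifier holding the device's public key checks the authenticity claim.
try:
    device_key.public_key().verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
    print("signature verifies: content matches what the device signed")
except InvalidSignature:
    print("signature does not verify")
```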

It is important to understand that detecting fakes differs from authenticating originals. This distinction may not be immediately apparent, but it is essential to realize that without 100% adoption of content authentication technology (an unachievable goal), the absence of a signature or a watermark does not mean that content is fake. To give that some color, consider that many photographers to this day love antique Leica cameras; despite modern alternatives, these are still often their go-to cameras, and nothing they produce will ever carry a signature.

Moreover, even the presence of a legitimate signature on content does not guarantee its authenticity. If the stakes are high enough, it is certainly possible to extract signing key material from an authentic device and use it to sign AI-generated content. For instance, a foreign actor attempting to influence an election may find the investment of time and money needed to extract the key from a legitimate device worthwhile. The history of DVD CSS demonstrates how readily these keys can be extracted from devices, and how even something as small as wanting to watch a movie on your favorite device can provide enough motivation for an attacker to do so. Once a key is extracted, you cannot unring that bell.

This has not stopped researchers from developing alternative authenticity schemes. For example, Google recently published a new scheme it calls SynthID. That said, this approach faces the same fundamental problem: authenticating content produced by a trustworthy generative AI pipeline is not the same thing as detecting fakes.

It may also be interesting to note that the problem of authenticating digital content isn’t limited to generative AI content. For example, the Costco virtual member card uses a server-generated QR code that rotates periodically to limit the value of sharing that QR code via screenshots.
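
As a rough illustration of the rotating-code idea (not Costco’s actual implementation), here is a small Python sketch in which a server-side secret and a time window, both made-up parameters, derive a short-lived code to embed in the QR; stale screenshots fail verification once the window rolls over.

```python
import base64
import hashlib
import hmac
import time

# Hypothetical parameters: the secret, rotation window, and truncation are illustrative.
SERVER_SECRET = b"per-member-server-side-secret"
ROTATION_SECONDS = 30

def current_code(member_id: str, at: float) -> str:
    """Derive the code valid for the rotation window containing `at`."""
    window = int(at // ROTATION_SECONDS)
    mac = hmac.new(SERVER_SECRET, f"{member_id}:{window}".encode(), hashlib.sha256)
    return base64.urlsafe_b64encode(mac.digest()[:12]).decode()  # rendered into the QR

def verify_code(member_id: str, presented: str) -> bool:
    """Accept the current and previous window to tolerate scan delay."""
    now = time.time()
    return any(
        hmac.compare_digest(presented, current_code(member_id, now - i * ROTATION_SECONDS))
        for i in (0, 1)
    )

# A freshly issued code verifies; a screenshot from yesterday does not.
print(verify_code("member-123", current_code("member-123", time.time())))          # True
print(verify_code("member-123", current_code("member-123", time.time() - 86400)))  # False
```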

This does not mean that the approach of signing or watermarking content to make it authenticatable lacks value; rather, it underscores the need to recognize that detecting fakes is not the same as authenticating genuine content — and even then we must temper the faith we put in those claims.

Another use case for digital signatures and watermarking techniques is combating the use of generative AI to create realistic-looking fake driver’s licenses and generative AI videos capable of bypassing liveness tests. There have also been instances of generative AI being used in real time to impersonate executives in video conferences, leading to significant financial losses.

Mobile phones, such as iPhones and Android devices, offer features that help remote servers authenticate the applications they communicate with. While not foolproof, assuming a hardened and unmodified mobile device, these features provide a reasonable level of protection against specific attacks. However, if a device is rooted at the kernel level or physically altered, these protective measures become ineffective. For instance, attaching an external virtual camera could allow an attacker to feed AI-generated content to the application without the anomaly being detected.
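
Conceptually, these attestation features boil down to a challenge-response over a hardware-backed key whose public half the server can trust. The Python sketch below, using the cryptography package, is a simplified stand-in for that flow; real schemes such as Apple’s App Attest or Google’s Play Integrity involve certificate chains and platform services rather than a locally generated key.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for a hardware-backed attestation key; in real schemes the public
# half is vouched for by a vendor certificate chain rather than generated here.
device_key = ec.generate_private_key(ec.SECP256R1())
attested_public_key = device_key.public_key()

# 1. The server issues a fresh nonce so an old attestation cannot be replayed.
nonce = os.urandom(32)

# 2. The app has the device-held key sign the nonce.
assertion = device_key.sign(nonce, ec.ECDSA(hashes.SHA256()))

# 3. The server verifies the assertion before trusting the uploaded media.
try:
    attested_public_key.verify(assertion, nonce, ec.ECDSA(hashes.SHA256()))
    print("request came from the attested app instance")
except InvalidSignature:
    print("reject: attestation check failed")
```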

There have also been efforts to extend similar capabilities to browsers so that modern web applications can benefit from them. Putting aside the risk that these capabilities could be abused to make the web more closed, the challenge here, at least for these use cases, is that browsers run on a far wider range of devices than mobile phones, including desktops, which vary greatly in configuration. A single driver update by an attacker could let AI-generated content sources be fed to the application undetected.

This does not bode well for the future of remote identification on the web, as these problems are largely intractable. In the near term, the best option that exists is to push users from the web to mobile applications, where the server can authenticate the application that captures the content, but even this should be limited to lower-value use cases because it too is bypassable by a motivated attacker.

In the longer term, it seems these problems will fuel the fire for governments to become de facto authentication service providers, a role they have repeatedly demonstrated they are not effective in. Beyond that, if these solutions do become common, we can certainly expect their use to be mandated in cases that create long-lasting privacy problems for our children and grandchildren.

UPDATE: A SecurityWeek article came out today with some interesting figures on this topic.

UPDATE: Another SecurityWeek article on this came out today.

Hardware Based Key Management and Bitcoin

Hardware-based key management solutions like smart cards and Hardware Security Modules provide a lot of value, probably the most important being that keys are moved out of process onto a totally separate computer. This goes a long way toward protecting keys from being stolen by malware or exposed to an attacker through software defects, as happened with Heartbleed.
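
For a sense of what “out of process” looks like in practice, here is a minimal Python sketch assuming the python-pkcs11 bindings and a SoftHSM-style module path; the module path, token label, PIN, and key label are placeholders. The application works only with an opaque handle, and the signing happens inside the device.

```python
import pkcs11

# Placeholders: point these at your vendor's PKCS#11 module and token.
lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
token = lib.get_token(token_label="demo-token")

with token.open(user_pin="1234") as session:
    # The application only ever receives a handle to the key; the private
    # key material itself never leaves the device.
    key = session.get_key(object_class=pkcs11.ObjectClass.PRIVATE_KEY,
                          label="server-signing-key")

    # The data crosses over to the device and only the signature comes back,
    # so a Heartbleed-style memory disclosure in this process cannot leak the key.
    signature = key.sign(b"message to be signed",
                         mechanism=pkcs11.Mechanism.ECDSA_SHA256)
```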

Depending on the device you choose, you may also get:

  1. Third-party assurances that the cryptographic implementations and random number generators are sound, which is incredibly hard to be sure of when you just pick something up blindly off the Internet.
  2. A verifiable supply chain, with third-party assurances and audit trails showing the devices have not been tampered with.
  3. Hardware that makes tampering evident and is resistant to such attacks.
  4. Protection from side-channel attacks such as Differential Power Analysis, Electromagnetic Leakage, and Timing Attacks.
  5. Basic policy enforcement mechanisms, like preventing keys from being exported, limiting which users can use them, and requiring that M of N users approve an operation.
  6. Mechanisms to securely clone keys from one device to another to improve survivability in the face of failure or compromise.
  7. Support for “Remote PIN Entry Devices,” so that the cryptographic device can be stored in one location while the tokens used to approve operations with the keys it manages can be located anywhere on the globe.

Despite how valuable these solutions are, they are not without their shortcomings, one of which is that for the last twenty years they have not changed much beyond getting faster and adding support for newer mandated algorithms.

One of the reasons these devices have not changed is that Common Criteria (CC) evaluation and FIPS 140-2 validation, the standards they must conform to in order to be sold to their largest customers, make it excruciatingly hard to change anything; as such, the incentive model is set up to discourage innovation and often encourage bad behavior.

These restrictions have also resulted in the devices not supporting algorithms that these standards do not mandate. In the case of Bitcoin, this means the protocol’s decision to use secp256k1 either precludes their use entirely or limits them to a reduced feature set with significantly reduced performance.

Additionally, since PKCS#11 (the API one uses to work with these devices) doesn’t specify how to generate a secp256k1 key, any code written to use such a device ends up being proprietary.
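
As a hedged illustration of what such vendor-specific code tends to look like, here is a Python sketch assuming the python-pkcs11 bindings; the module path, token label, PIN, and key label are placeholders, and whether a given device accepts secp256k1 EC parameters at all, and with which attribute template, varies by vendor.

```python
import pkcs11
from pkcs11 import Attribute, KeyType
from pkcs11.util.ec import encode_named_curve_parameters

# Placeholders: the module path and labels depend entirely on your vendor,
# which is part of why this kind of code ends up proprietary.
lib = pkcs11.lib("/path/to/vendor-pkcs11-module.so")
token = lib.get_token(token_label="demo-token")

with token.open(rw=True, user_pin="1234") as session:
    # Encode the secp256k1 named-curve parameters the device must understand.
    parameters = session.create_domain_parameters(
        KeyType.EC,
        {Attribute.EC_PARAMS: encode_named_curve_parameters("secp256k1")},
        local=True,
    )

    # Ask the device to keep the private key sensitive and non-extractable,
    # so the "no key export" policy is enforced in hardware.
    public, private = parameters.generate_keypair(
        label="btc-signing-key",
        store=True,
        private_template={
            Attribute.SENSITIVE: True,
            Attribute.EXTRACTABLE: False,
        },
    )
```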

The net effect of this is that if you buy one of these devices, you’re going to be spending $5,000 for a device that gives you some of the above properties and on which you can write custom software that would be able to do about 24 secp256k1 operations a second.

This is more than enough for a personal wallet but nowhere near enough for an exchange or payment provider, which means those vendors are not using these sorts of techniques to keep your keys safe.

There have been a number of solutions started by individuals to bring some of these protections to Bitcoin; to date, they are all incomplete, unusable, unmaintained, or not available.

The most promising is the Trezor, but based on what we know of these systems, it seems very unlikely they will provide the kind of protection one gets from a commercial hardware security module, or many of the other features these devices often have.

And even if they do, since they are for the most part built by individuals with limited resources, who knows if they will be around or available a year from now? If you have lots of Bitcoin in these devices and the vendor goes away or the device fails, what are you to do?

That is not to say these projects are not good; in fact, I will order a Trezor once they start taking orders again. But they should be thought of as a wallet, not a safe or vault, as they will not protect against a well-heeled attacker and, without much more work, are not appropriate for cold-wallet storage of large amounts of Bitcoin.