Increasingly we are seeing attacks against what is now commonly referred to as the software supply chain.
One of the more notable examples in the last few months was from the Node.js package management ecosystem. In this case, an attacker convinced the owner of a popular but unmaintained Node package to transfer ownership to them. The attacker then crafted a version of the package that unsuccessfully attacked Copay, a bitcoin wallet platform.
This is just one example of this class of attack; insider attacks on the software supply chain are also becoming more prevalent. When looking at this risk holistically, it is also important to realize that as deployments move to the Cloud, the lines between software and services blur.
Though not specifically an example of a Cloud deployment issue, in 2015 there was a public story of how some Facebook employees had the ability to log into users' accounts without the target user's knowledge. This insider variant of supply chain risk exists in the Cloud in a number of different areas.
Probably the most notable is in the container images provided by their Cloud provider. It is conceivable that a Cloud provider could be compelled by a government to build images that would attack a specific customer or set of customers as part of an investigation, or that an employee would do so under compulsion or in service of personal interests.
This is not a new risk; in fact, management of internal and external dependencies has always been core to building secure systems. What has changed is that in the rush to the Cloud and Open Source, users have adopted the tools and resources these Cloud providers have built to make this migration easier without fully understanding and managing the risk they assume in doing so.
In response to this reality, Cloud providers are starting to provide tools to help mitigate this risk, some such examples include:
- Providing audit records of employee access to customer data and services;
- Building solutions that provide hardware-based trusted execution environments offering some level of protection from the Cloud provider itself;
- Offering hardware key management solutions provided by third parties to protect sensitive key material;
- Cryptographically signing the binaries and images that are published, so that their distribution is controlled and post-production tampering can be detected.
Despite these advancements, there is still a long way to go to mitigate these risks in a holistic fashion.
One effort in this area I am actively involved in is the adoption of the concept of Binary Transparency. This can be thought of as an evolution of legacy code signing models. In these solutions, a publisher produces a cryptographic signature using a private key associated with a public key or certificate of some sort that is either directly trusted based on package origin and signature (such as with GPG signatures) or is authenticated based on the legal identity of the publisher of the package (as is the case with Authenticode).
These solutions, while valuable, help you authenticate a package, but they do not provide you the tools to understand the history of that package. As a result, these publishers can produce packages, either accidentally or on purpose, that are malicious in nature and signed with their "trusted keys", and this is not detectable until it is too late.
As an example of this risk, you need only look at Realtek: over the years its code signing keys have been compromised numerous times and used to produce malware, some of it targeted, as in the case of Stuxnet.
Binary Transparency addresses this risk in a few ways. At its core, Binary Transparency can be thought of as an append-only ledger listing all versions of a given binary, with each version pointing to a content-addressable store where that binary is available.
This design enables the runtime that will execute the binary to do a few things that were not possible before. It can, for example, ensure it is running the most recent version of a binary, and only run a binary when it, and some number of previous revisions, are publicly discoverable. It also enables the relying parties of the published binaries and images to inspect all versions, and potentially diff those versions to understand what changed.
When this technique is combined with the concept of reproducible builds, as is provided by Go, and an ecosystem of these append-only logs and auditors of those logs, you can get strong assurances that:
- You are running the same version as everyone else;
- The binary you are running is reproducible from source you can review;
- The binary you are running has not been modified since it was published;
- You, and others, will not run binaries or images that have not been made publicly available for inspection.
A system with these properties disincentivizes the attacker from executing these attacks as it significantly increases the probability of being caught and helps bound the impact of any compromise.
Importantly, by doing these things, it makes it possible to increase the trust in the Cloud offering because it minimizes the amount of trust the user must put into the Cloud provider to remain honest.
A recent project that implements these concepts is the Go Module Transparency project.
Over time we will see these same techniques applied to other areas of the software supply chain, and with that trend, users of open source packages, automatic update systems, and the Cloud will be able to have increased peace of mind that their external dependencies are truly delivering on their promises.
- Node.js Event-Stream Hack Exposes Supply Chain Security Risks
- Facebook Engineers Can Access Your Account Without A Password
- Stuxnet Malware Targets SCADA Systems
- Reproducing Go Binaries Byte-by-Byte
- Proposal: Secure the Public Go Module Ecosystem
- Transparent Logs for Skeptical Clients
- Firefox Security/Binary Transparency
- Contour: A Practical System for Binary Transparency
How are split views handled in a binary transparency system?
Binary Transparency, like Certificate Transparency and Key Transparency, is dependent on an ecosystem of logs, auditors, and monitors. To address split views, binaries must be published to an ecosystem that provides the properties needed to deliver on the associated security guarantees.
What about the runtime environment? Can’t an attacker who can tamper with the runtime environment still bypass the security properties of a Binary Transparency system?
Microsoft used to publish "Ten Immutable Laws of Security"; the first three of these "Laws" speak directly to this threat:
Law #1: If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore,
Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore,
Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore.
With that said, not all is lost. With the help of "Secure Execution Environments", "Trusted Execution Environments", "Enclaves", and other trustworthy computing environments, you can address some of these risks by relying on the runtime security guarantees they offer.
In short, you should think of Binary Transparency as the pipeline that delivers verifiable code to your verifiable runtime environment.
Since runtime verification is required for high assurance that the code is doing what it is supposed to, does that mean that Binary Transparency isn't valuable unless you have a trusted execution environment of some sort?
No, it really depends on your threat model.
In many cases, it is possible to harden your systems to the point where you have sufficient confidence in your runtime environment that a trusted execution environment is not necessary.
In other cases, it may be necessary to have a more secure execution environment, such as one based on hardware and remote attestation of runtime integrity and state.
As a general rule you can get the value of each of these systems independently and together they offer an even better story.