Threat Model Thursday: ARM's Network Camera TMSA

Last week, I encouraged you to take a look at the ARM Network Camera Threat Model and Security Analysis, and consider:

First, how does it align with the 4-question frame ("what are we working on?", "what can go wrong?", "what are we going to do about it?", and "did we do a good job?")? Second, ask yourself who, what, why, and how.

Before I get into my answers, I want to reiterate what I said in the first post: I'm doing this to learn and to illustrate, not to criticize. It's a dangerous trap to think that there is one "way to threat model." This model is a well-considered and structured analysis of networked camera security, and it goes a bit beyond that.

So let me start with the who, what, why and how of this model.

Who did this? The models were created, analyzed and documented by Prove & Run, a French software firm, on contract to Arm.

What is this? It's a Common Criteria Protection Profile. If you're not familiar with the Common Criteria, it's an attempt to use the buying power of major governments to improve the security of the things they buy, and to reduce costs for manufacturers by aligning their security requirements. That fundamental nature, of being a Protection Profile, controls the form of the document, and the models within it. The models of 'what we're working on' vary by purpose. We construct models to help us analyze and to help us communicate. We might want to communicate to persuade, to discuss, or to document. Our documents might be hyper-transient on a whiteboard or napkin, or designed for archival use. Both are choices. It takes work to write the more formal version.

Why "With the inherent diversity of IoT there will be a greater need for device manufacturers to have a reference TMSA for their product. Arm has created a series of reference English language Protection Profiles for IoT products to show how this might be done in a way that is understandable by non-security experts. These security analyses are accompanied by at a glance summary documents and useful appendices that show how Arm TrustZone and CryptoIsland technology can be used to meet some of the SFRs. We hope that you find these documents useful as a starting point for creating a TMSA for your IoT device."

So to restate that, Arm wants to help their customers threat model, and understand how to use Arm's feature sets to mitigate threats to a common class of device. Cool! It's an important goal, and I'm glad Arm is investing in it. (And we will return to this goal.)

How do they do this? They:

  1. Give an overview of the camera TOE, its use, and its major security features. By TOE, they mean Target of Evaluation, which is a subset of the camera. The way they do this is very, very strongly grounded in the Common Criteria, to an extent that makes it hard to read for anyone not steeped in that world.
  2. Provide a diagram of what's in scope, and a set of assets to be protected.
  3. Offer a set of threats. I'll analyze these below.
  4. List a set of security policies that the end user is expected to enforce. Some of these, frankly, are optimistic, such as "the admin shall change the default passwords" and that admins "are assumed to follow and apply administrative guidance." However, optimistic or not, they are explicit, which allows us to evaluate them and decide if they work for us. (Alternate approaches might be to not have a password on the device and to administer it remotely, or to have the device enforce the password change itself, as sketched after this list. There are associated security issues, which we could also evaluate.)
  5. Tie the security objectives to a set of threats.
  6. Derive security requirements to meet the objectives.
  7. Compare the requirements to Arm's CryptoIsland, TrustZone, and Root of Trust "products."
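
As promised in item 4, here is a minimal sketch of a device enforcing the credential-management policy rather than assuming the admin follows guidance. This is my illustration, not anything in the TMSA; the default password, parameters, and in-memory storage are simplified stand-ins.

    # Sketch: refuse to operate until the factory default password is replaced,
    # turning "the admin shall change the default passwords" from an assumption
    # into a property of the device. Simplified; no persistence or lockout.
    import hashlib
    import hmac
    import secrets

    FACTORY_DEFAULT = "admin"   # hypothetical shipped default

    def _hash(password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    class AdminCredential:
        def __init__(self) -> None:
            self.salt = secrets.token_bytes(16)
            self.digest = _hash(FACTORY_DEFAULT, self.salt)
            self.must_change = True          # set at manufacture

        def _check(self, password: str) -> bool:
            return hmac.compare_digest(self.digest, _hash(password, self.salt))

        def login(self, password: str) -> bool:
            if not self._check(password):
                return False
            if self.must_change:
                raise PermissionError("change the default password before first use")
            return True

        def change_password(self, old: str, new: str) -> None:
            if not self._check(old):
                raise PermissionError("current password incorrect")
            if new == FACTORY_DEFAULT or len(new) < 12:
                raise ValueError("new password is too weak")
            self.salt = secrets.token_bytes(16)
            self.digest = _hash(new, self.salt)
            self.must_change = False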

With that, let me turn to the 4-question frame.

1. What are we working on?
This is addressed by the TOE, the diagram, and the set of security policies.

2. What can go wrong?
The structured approach to the interplay between threats, objectives and requirements is interesting. It may be one step more than is needed, but perhaps not, especially when we consider the goal of having experts in areas other than security use these documents "as a starting point for creating a TMSA for your IoT device."
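
To make that interplay concrete, here is a toy rendering of the traceability a Protection Profile maintains. The threat names and OT.COMMUNICATION appear in the TMSA; the other objective names, the SFR identifiers, and the pairings are illustrative, not copied from the document.

    # Toy model of Protection Profile traceability: every threat must be
    # countered by at least one objective, and every objective must be
    # backed by at least one security functional requirement (SFR).
    THREATS_TO_OBJECTIVES = {
        "T.impersonation":  ["OT.AUTHENTICATION"],       # illustrative objective name
        "T.MITM":           ["OT.COMMUNICATION"],        # objective named in the TMSA
        "T.firmware_abuse": ["OT.FIRMWARE_INTEGRITY"],   # illustrative
        "T.tamper":         ["OT.SECURE_STORAGE"],       # illustrative
    }

    OBJECTIVES_TO_SFRS = {
        "OT.AUTHENTICATION":     ["FIA_UAU.2"],              # illustrative pairings
        "OT.COMMUNICATION":      ["FTP_ITC.1", "FCS_COP.1"],
        "OT.FIRMWARE_INTEGRITY": ["FCS_COP.1"],
        "OT.SECURE_STORAGE":     ["FPT_PHP.1"],
    }

    # The payoff of the structure: coverage can be checked mechanically.
    for threat, objectives in THREATS_TO_OBJECTIVES.items():
        assert objectives, f"{threat} is not countered by any objective"
        for objective in objectives:
            assert OBJECTIVES_TO_SFRS.get(objective), \
                f"{objective} has no requirement behind it"

Whether keeping that mapping up to date is "one step more than is needed" depends on whether anyone re-runs the check when the threat list changes.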

More interesting is the question of where the threats come from. The threats they list are:

  • T.impersonation
  • T.MITM
  • T.firmware_abuse
  • T.tamper
  • P.Credential_Management (The admin will change the password)
  • A.Trusted_Admin

It is unclear where this list comes from. Why these threats? What else was considered and rejected? (It may be that someone more versed in the Common Criteria than I am finds the answers obvious; I suspect the overlap of that set of people with the target audience is less than 100%.)

Still taking the list as it is, I think that the TOE does not mean to me what it means to them. The TOE excludes the network, and so excludes impersonation and MITM. So I think there's an alternate TOE, which includes my yellow line, although not the blue box. With that added, the impersonation and MITM threats make more sense. Also, having added it, I am now concerned about denial of service threats to and through the network interface, and potentially from the Mirai'd devices out there. (Although the protections of secure storage and authenticated firmware would protect most cameras with these features from permanently joining such a botnet.)

[Image: annotated network camera scope diagram]
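
That parenthetical about authenticated firmware deserves a moment. If the device only installs images signed by a key it trusts, a Mirai-style compromise can't survive a reboot or an update. Here is a rough sketch of the update-time check, using the third-party cryptography package; the key path and slot path are hypothetical stand-ins for what TrustZone-backed secure storage would provide.

    # Sketch: verify a firmware image against a vendor public key before
    # installing it. The key file path and slot path are hypothetical; on
    # real hardware the key would live in secure storage or a root of trust.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    VENDOR_KEY_PATH = "/secure/vendor_fw_pubkey.bin"   # provisioned at manufacture
    INACTIVE_SLOT = "/firmware/slot_b.img"             # A/B update scheme

    def load_vendor_key() -> Ed25519PublicKey:
        with open(VENDOR_KEY_PATH, "rb") as f:
            return Ed25519PublicKey.from_public_bytes(f.read())

    def install_firmware(image: bytes, signature: bytes) -> None:
        key = load_vendor_key()
        try:
            key.verify(signature, image)    # raises InvalidSignature on tampering
        except InvalidSignature:
            raise RuntimeError("firmware rejected: bad signature")
        with open(INACTIVE_SLOT, "wb") as f:
            f.write(image)                  # boot code re-checks the signature at startup

Malware that writes to flash directly still fails the boot-time check, which is why the compromise doesn't become permanent.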

Turning from the list as it is to what it is not: there is no mention of video privacy, or information disclosure by cameras. To be fair, that could be out of scope. I might argue that it should not be, but I will argue that it should not silently be out of scope. (It could also be subsumed under impersonation and P.Credential_Management.)

There are also no threats listed for the 'general purpose operating system' which may be present. From a Common Criteria perspective, that's sensible: it has its own PP. We'll return to this in asking if we did a good job.

3. What are we going to do about it?
The answers to this are a set of security functional requirements in section 6. I want to touch on one, and here I will take the liberty of disagreeing. The line reads: MITM "assumes that the TOE can be attacked by intercepting or spying communications with remote servers. This threat is countered by the security objective OT.COMMUNICATION that ensures authentication of remote servers and protection in confidentiality and integrity of exchanged data." Authenticating remote servers is not sufficient to meet this goal; at base, the difference between TLS 1.2 and 1.3 is about this problem. Solving it fully is hard: does the camera need to reach out to a Certificate Authority for a certificate? I want more here.
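
To make the disagreement concrete, here is roughly what "authenticate the remote server" has to look like from the camera's side. This is my sketch, not the TMSA's requirement; the server name and pinned CA path are made up, and it deliberately leaves open the hard parts (revocation, an unreliable clock, certificate rotation) that I want the document to say more about.

    # Sketch of a camera-side TLS client that actually authenticates the
    # remote server: pinned trust root, hostname check, modern protocol floor.
    import socket
    import ssl

    SERVER = "mgmt.example-camera-vendor.com"        # hypothetical management server
    PINNED_CA = "/etc/camera/vendor_root_ca.pem"     # hypothetical CA baked into firmware

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)    # enables hostname check + cert verification
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3     # refuse older protocol versions
    ctx.load_verify_locations(PINNED_CA)             # trust only the vendor's CA, not a system store

    with socket.create_connection((SERVER, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=SERVER) as tls:
            # The server is now authenticated and the channel is confidentiality-
            # and integrity-protected. What the device does about revocation,
            # clock skew, and rotating the pinned root is left open, which is
            # exactly the gap in OT.COMMUNICATION as written.
            tls.sendall(b"GET /health HTTP/1.1\r\nHost: " + SERVER.encode() + b"\r\n\r\n")
            print(tls.recv(4096))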

4. Did we do a good job?
I'm going to reformulate that question, and ask instead: is the Protection Profile a good form for meeting Arm's goals? As a reminder, the goal is to help non-security experts use these documents "as a starting point for creating a TMSA for your IoT device."

To me, much of the content is sensible for that goal, especially section 6, but that sensible content may be doubly hidden: it's hidden on page 18 and onwards, and it's hidden behind a complex and intimidating form. The meat of the document starts nine pages in, and the language is heavily formal. A useful side effect of that formality is that the language is quite clear, especially compared to a lot of documents I see.

There is not enough discussion of vulnerabilities in the operating system and other software components. This does a disservice to the non-expert customers for this document. They should get some guidance, perhaps couched in an acknowledgement that it's outside the norms of a Protection Profile. There may be Common Criteria-specific reasons that doing so is a bad idea, including drawing the attention of the evaluation labs. If that's the case, then adding a third document to the zip file seems appropriate.

In closing, I found this to be a really interesting model to examine. What do you think of it? What else should we look at?