# Summary of Threat Modeling: Designing for Security - Adam Shostack, Ch. 1, 2, 6
###### tags: `Tag(HashCloak - Validator Privacy)`
Paper: https://moodle.ufsc.br/pluginfile.php/2377555/mod_resource/content/2/Threat%20Modeling.pdf
Table of Contents
[ToC]
### Introduction
:::info
Introduction - Short summary
:::
> Threat Modeling - Definition: In short, threat modeling is the use of abstractions to aid in thinking about risks.
Threat modeling is done every day. It matters to individuals, teams, and organizations because it helps find security issues early, improves understanding of security requirements, and leads to engineering and delivering better products.
* Find Security Bugs Early
* Understand Your Security Requirements
* Engineer and Deliver Better Products
* Address Issues Other Techniques Won’t
### Chapter 1 - Getting Started
:::info
Using this area for personal commentary: excited about topic. (security/modeling)
:::
1. start with a whiteboard diagram of how data flows through the system
2. draw the trust boundaries as labeled boxes to show what sits inside each boundary (numbering each process can be very helpful)
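The whiteboard diagram can also be captured as plain data so it can be reviewed and diffed alongside the code. A minimal sketch, with made-up element names: processes are numbered (as the second step suggests), data flows connect them, and trust boundaries are labeled groups. Flows that cross a boundary are the ones that deserve the most scrutiny.

```python
# Hypothetical sketch of a whiteboard data-flow diagram as data.
# All element names and boundary labels are illustrative.

# Numbered processes, as the text suggests.
processes = {1: "Browser", 2: "Web Server", 3: "Database"}

# Data flows between numbered processes: (source, destination, data).
flows = [
    (1, 2, "HTTP request"),
    (2, 3, "SQL query"),
    (3, 2, "Result set"),
    (2, 1, "HTTP response"),
]

# Trust boundaries drawn as labeled boxes around groups of processes.
boundaries = {"Internet": {1}, "Production network": {2, 3}}

def boundary_of(pid):
    """Return the label of the trust boundary containing a process."""
    for label, members in boundaries.items():
        if pid in members:
            return label

# Flows that cross a trust boundary are prime candidates for analysis.
crossing = [f for f in flows if boundary_of(f[0]) != boundary_of(f[1])]
for src, dst, data in crossing:
    print(f"{processes[src]} -> {processes[dst]}: {data} (crosses a boundary)")
```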
#### STRIDE is a mnemonic for things that go wrong in security.
It stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege:
* Spoofing is pretending to be something or someone you’re not.
* Tampering is modifying something you’re not supposed to modify. It can include packets on the wire (or wireless), bits on disk, or the bits in memory.
* Repudiation means claiming you didn’t do something (regardless of whether you did or not).
* Information Disclosure is about exposing information to people who are not authorized to see it.
* Denial of Service covers attacks designed to prevent a system from providing service, including by crashing it, making it unusably slow, or filling all its storage.
* Elevation of Privilege is when a program or user is technically able to do things that they’re not supposed to do.
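In practice, the mnemonic is used as a checklist of prompts asked of each element in the diagram (the book later develops this as STRIDE-per-element). A minimal sketch, with a made-up element list; each (element, category) pair is a question to ask, not a confirmed vulnerability:

```python
# Illustrative sketch: STRIDE as a checklist of prompts per diagram element.
# The element names below are hypothetical.

STRIDE = {
    "Spoofing": "Could someone pretend to be this element or its peer?",
    "Tampering": "Could the data this element handles be modified?",
    "Repudiation": "Could an actor deny having performed an action here?",
    "Information Disclosure": "Could data leak to someone unauthorized?",
    "Denial of Service": "Could this element be crashed, slowed, or filled up?",
    "Elevation of Privilege": "Could a user or program exceed its authority?",
}

elements = ["Web Server", "Database", "HTTP request flow"]

# Enumerate candidate threats: one prompt per (element, category) pair.
candidates = [(e, cat) for e in elements for cat in STRIDE]
for element, category in candidates[:3]:
    print(f"{element}: {category} - {STRIDE[category]}")
```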
There are two main types of validation activity. The first is checking that you did the right thing with each threat you found. The second is asking whether you found all the threats you should have found.
### Chapter 2
:::info
Software-centered threat modeling: Why it’s the most helpful and effective approach, and how to do it.
:::
Software-centric models are models that focus on the software being built or a system being deployed.
* Data flow models are often ideal for threat modeling; problems tend to follow the data flow, not the control flow. Data flow models more commonly exist for network or architected systems than software products, but they can be created for either.
> Accessibility: Color can add substantial amounts of information without appearing overwhelming. For example, Microsoft’s Peter Torr uses green for trusted, red for untrusted, and blue for what’s being modeled (Torr, 2005). Relying on color alone can be problematic. Roughly one in twelve people suffer from color blindness, the most common being red/green confusion (Heitgerd, 2008). The result is that even with a color printer, a substantial number of people are unable to easily access this critical information. Box boundaries with text labels address both problems. With box trust boundaries, there is no reason not to use color.
#### When to Validate Diagrams
For software products, there are two main times to validate diagrams: when you create them and when you’re getting ready to ship a beta. There’s also a third triggering event (which is less frequent), which is if you add a security boundary.
For operational software diagrams, you also validate when you create them, and then again using a sensible balance between effort and up-to-dateness. That sensible balance will vary according to the maturity of a system, its scale, how tightly the components are coupled, the cadence of rollouts, and the nature of new rollouts.
### Chapter 6
:::info
Much as a security threat violates a required security property, a privacy threat violates a required privacy property
:::
#### Solove’s Taxonomy of Privacy
* The harm of surveillance is twofold: First is the uncomfortable feeling of being watched and second are the behavioral changes it may cause.
* Identification means the association of information with a flesh-and-blood person.
* Insecurity refers to the psychological state of a person made to feel insecure, rather than a technical state.
* The harm of secondary use of information relates to societal trust.
* Exclusion is the use of information provided to exclude the provider (or others) from some benefit.
#### Privacy Considerations for Internet Protocols
The informational RFC “Privacy Considerations for Internet Protocols” outlines a set of combined security-privacy threats and a set of pure privacy threats, and offers mitigations and general guidelines for protocol designers (Cooper, 2013).
The privacy-specific threats are as follows:
* Correlation
* Identification
* Secondary use
* Disclosure
* Exclusion (users are unaware of the data that others may be collecting)
#### Privacy Impact Assessments (PIA)
> As outlined by Australian privacy expert Roger Clarke in his “An Evaluation of Privacy Impact Assessment Guidance Documents,” a PIA “is a systematic process that identifies and evaluates, from the perspectives of all stakeholders, the potential effects on privacy of a project, initiative, or proposed system or scheme, and includes a search for ways to avoid or mitigate negative privacy impacts.” Thus, a PIA is, in several important respects, a privacy analog to security threat modeling.
PIAs are often focused on a system as situated in a social context, and the evaluation is often of a less technical nature than security threat modeling.
#### The Nymity Slider and the Privacy Ratchet
University of Waterloo professor Ian Goldberg has defined a measurement he calls nymity, the “amount of information about the identity of the participants that is revealed [in a transaction].”
When using nymity privacy in threat modeling, the goal is to measure how much information a protocol, system, or design exposes or gathers. This enables you to compare it to other possible protocols, systems, or designs. The nymity slider is thus an adjunct to other threat-finding building blocks, not a replacement for them.
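The slider can be sketched as an ordered scale, which makes such comparisons concrete. The level names below follow the commonly cited points on Goldberg's slider (from unlinkable anonymity up to verinymity, i.e. full identification); treat the examples as illustrative. The "ratchet" in the heading refers to the observation that it is easy to move a design toward more nymity and hard to move it back.

```python
from enum import IntEnum

# Sketch of the nymity slider as an ordered scale. Level names follow
# commonly cited points on the slider; the example designs are hypothetical.
class Nymity(IntEnum):
    UNLINKABLE_ANONYMITY = 0     # e.g. paying with cash
    LINKABLE_ANONYMITY = 1       # e.g. an unregistered prepaid card
    PERSISTENT_PSEUDONYMITY = 2  # e.g. a long-lived pen name
    VERINYMITY = 3               # e.g. credit card, government ID

# Hypothetical designs under comparison.
designs = {
    "cash payment": Nymity.UNLINKABLE_ANONYMITY,
    "credit card payment": Nymity.VERINYMITY,
}

# Lower on the scale means less identity information is revealed.
assert designs["cash payment"] < designs["credit card payment"]
```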
#### Contextual Integrity
Contextual integrity is a framework put forward by New York University professor Helen Nissenbaum. It is based on the insight that many privacy issues occur when information is taken from one context and brought into another.
Contextual integrity is violated when the informational norms of a context are breached. Norms, in Nissenbaum’s sense, are “characterized by four key parameters: context, actors, attributes, and transmission principles.”
One area of concern is that the effort to spell out all the aspects of a context may be quite time-consuming, but without spelling out all the aspects, privacy threats may be missed. This sort of work is challenging when you’re trying to ship software, and Nissenbaum goes so far as to describe it as “tedious” (Privacy In Context, page 142).
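Nissenbaum's four parameters can be sketched as a simple data structure: a norm fixes the context, the actors, the attributes, and the transmission principles, and a flow breaches contextual integrity when any parameter falls outside the norm. This is a minimal sketch under that simplified reading; all field values are hypothetical.

```python
from dataclasses import dataclass

# Simplified sketch of an informational norm per Nissenbaum's four
# parameters. The healthcare example below is hypothetical.
@dataclass(frozen=True)
class Norm:
    context: str                        # e.g. "healthcare"
    actors: frozenset                   # who may send or receive
    attributes: frozenset               # what kinds of information
    transmission_principles: frozenset  # e.g. "confidentiality"

def violates(norm, context, sender, recipient, attribute, principle):
    """A flow breaches contextual integrity if any parameter falls
    outside the governing norm."""
    return (context != norm.context
            or sender not in norm.actors
            or recipient not in norm.actors
            or attribute not in norm.attributes
            or principle not in norm.transmission_principles)

medical = Norm("healthcare",
               frozenset({"patient", "doctor"}),
               frozenset({"diagnosis"}),
               frozenset({"confidentiality"}))

# Sharing a diagnosis with an advertiser takes the information out of
# its context, breaching the norm.
assert violates(medical, "healthcare", "doctor", "advertiser",
                "diagnosis", "confidentiality")
```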
#### LINDDUN
LINDDUN is a mnemonic developed by Mina Deng for her PhD at the Katholieke Universiteit Leuven, Belgium (Deng, 2010). LINDDUN is an explicit mirroring of STRIDE-per-element threat modeling. It stands for the following violations of privacy properties:
* Linkability
* Identifiability
* Non-Repudiation
* Detectability
* Disclosure of information
* Content Unawareness
* Policy and consent Noncompliance
> The privacy terminology it relies on will be challenging for many readers. However, it is, in many ways, one of the most serious and thought-provoking approaches to privacy threat modeling, and those seriously interested in privacy threat modeling should take a look. As an aside, the tension between non-repudiation as a privacy threat and repudiation as a security threat is delicious.
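Since LINDDUN mirrors STRIDE-per-element, it is likewise applied per diagram element. A minimal sketch, with hypothetical elements and a deliberately simplified applicability rule (here, only Content Unawareness is restricted to entities, i.e. users; LINDDUN's actual mapping table is more nuanced):

```python
# Illustrative sketch: LINDDUN categories asked of each diagram element.
# The elements and the applicability rule are simplified assumptions.

LINDDUN = ["Linkability", "Identifiability", "Non-Repudiation",
           "Detectability", "Disclosure of information",
           "Content Unawareness", "Policy and consent Noncompliance"]

# Hypothetical diagram elements with their types.
elements = {"User": "entity", "Login flow": "data flow",
            "Profile DB": "data store"}

def applicable(element_type):
    """Simplified rule: Content Unawareness concerns entities (users);
    every other category is asked of every element."""
    return [c for c in LINDDUN
            if c != "Content Unawareness" or element_type == "entity"]

for name, etype in elements.items():
    print(name, "->", applicable(etype))
```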
---