# EMSE6540 Chapter 2
###### tags: `Course` `EMSE6540`
---
:::danger
**Word Learning**
* nefarious (adj.)
~ (typically of an action or activity) wicked or criminal.
* coin (v.)
~ to invent a new word or expression, or to use one in a particular way for the first time.
* calamity (n.)
~ a serious accident or bad event causing damage or suffering.
* purview (n.)
~ the limit of someone's responsibility, interest, or activity.
* pundit (n.)
~ A person who knows a lot about a particular subject and is therefore often asked to give an opinion about it.
* taxonomy (n.) (classification)
~ A system for naming and organizing things, especially plants and animals, into groups that share similar qualities
* granular (adj.) (grainy)
~ Made of, or seeming like granules.
* complementary (adj.)
~ Useful or attractive together.
* impede (v.)
~ To make it more difficult for something to happen or more difficult for someone to do something.
:::
:::success
**Key Terms**
* phreaking
~ It refers to the “hacking” of the systems and computers used by a telephone company to operate its telephone network.
* information assurance
~ A term encompassing not only confidentiality and integrity but also the availability of the systems and information when we want them.
* COMSEC
~ Short for communications security; deals with the security of telecommunication systems.
* Confidentiality
~ To ensure that only those individuals who have the authority to view a piece of information may do so.
* Integrity
~ Only authorized individuals should ever be able to create or change (or delete) information.
* Availability
~ To ensure that the data, or the system itself, is available for use when the authorized user wants it.

* Authentication
~ To ensure that an individual is who they claim to be. The need for this in an online transaction is obvious.
* Nonrepudiation
~ Deals with the ability to verify that a message has been sent and received and that the sender can be identified and verified.
* Auditability
~ Whether a control can be verified to be functioning properly.
* Low-Water-Mark policy
~ This policy in many ways is the opposite of the *-property in that it prevents subjects from writing to objects of a higher integrity level; in addition, when a subject reads an object of lower integrity, the subject's integrity level is lowered to that of the object. The final rule contained in the Low-Water-Mark policy states that a subject can execute a program only if the program's integrity level is equal to or less than the integrity level of the subject. This ensures that data modified by a program only has the level of trust (integrity level) that can be placed in the individual who executed the program.
> Drawbacks: the integrity levels of all subjects are eventually lowered to the lowest level on the system (unless a subject only ever reads files at its own level of integrity).
* Ring policy
~ Allowing any subject to read any object without regard to the object’s level of integrity and without lowering the subject’s integrity level.
> Drawbacks: can lead to a situation where data created by a subject after reading data of a lower integrity level could end up having a higher level of trust placed upon it than it should.
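The Low-Water-Mark drawback can be seen in a small sketch (hypothetical numeric levels; higher means higher integrity):

```python
# Sketch of the Low-Water-Mark read rule (hypothetical example).
# Higher numbers mean higher integrity; reading a lower-integrity
# object drags the subject's level down to the object's level.

def lwm_read(subject_level: int, object_level: int) -> int:
    """Return the subject's integrity level after the read."""
    return min(subject_level, object_level)

level = 3                      # subject starts at high integrity
for obj in (3, 2, 1):          # subject reads progressively lower objects
    level = lwm_read(level, obj)

print(level)                   # the subject has sunk to the lowest level read: 1
```

This is exactly the drawback above: one read of a low-integrity object permanently lowers the subject, and the level never recovers.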
:::
:::info
**Model**
* The Fortress Model
~ Keep the bad out, allow in the good. Time has shown that this model is not realistic. The fortress model has been shown to not provide sufficient defenses, yet, like "endpoint security", it is still a valuable component in a modern security program.
* The Operational Model of Computer Security
~ Protection = Prevention + (Detection + Response). Every security technique and technology falls into at least one of the three elements of this equation.

* Time-Based Security
~ Bringing the concept of time to the operational security model puts it in line with modern security defense practices.
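A minimal sketch of the idea, using made-up times and the commonly cited time-based security inequality Pt > Dt + Rt (protection time must exceed detection time plus reaction time):

```python
# Time-based security sketch: a defense is effective only if the time
# it can resist an attack (pt) exceeds the time needed to detect (dt)
# plus the time needed to respond (rt). All values here are hypothetical.

def is_protected(pt: float, dt: float, rt: float) -> bool:
    return pt > dt + rt

def exposure_time(pt: float, dt: float, rt: float) -> float:
    """Window during which the asset is exposed (0 if fully protected)."""
    return max(0.0, dt + rt - pt)

print(is_protected(pt=60, dt=10, rt=20))   # True: protection outlasts detection + response
print(exposure_time(pt=15, dt=10, rt=20))  # 15.0 units of exposure time
```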

* Cybersecurity Framework Model (risk-based approach)
~ Its purpose is to complement and enhance risk management efforts through the following actions:
1. Determining the current cybersecurity posture
2. Documenting the desired target state with respect to cybersecurity
3. Determining and prioritizing improvement and corrective actions
4. Measuring and monitoring progress toward goals
5. Creating a communication mechanism for coordination among stakeholders

* Confidentiality Models
~ Data confidentiality has generally been the chief concern of the military.
* Bell-LaPadula Model:
Employs both mandatory and discretionary access control mechanisms when implementing its two basic security principles.
* Simple Security Rule:
No subject (such as a user or a program) can read information from an object (such as a file) with a security classification higher than that possessed by the subject itself.
* *-property:
A subject can write to an object only if the object's security classification is greater than or equal to the subject's security classification (no write-down).
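The two Bell-LaPadula rules can be sketched as simple comparisons (hypothetical classification levels; higher numbers are more sensitive):

```python
# Bell-LaPadula sketch (hypothetical numeric levels: higher = more sensitive).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(subject: str, obj: str) -> bool:
    """Simple Security Rule: no read up."""
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject: str, obj: str) -> bool:
    """*-property: no write down."""
    return LEVELS[obj] >= LEVELS[subject]

print(can_read("secret", "top-secret"))     # False: reading up is blocked
print(can_write("secret", "confidential"))  # False: writing down is blocked
print(can_write("secret", "top-secret"))    # True: writing up is allowed
```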

* Brewer-Nash security model (Chinese Wall model):
Controlling read and write access based on conflict of interest rules.
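A minimal sketch of the conflict-of-interest check, with made-up companies and conflict classes:

```python
# Brewer-Nash (Chinese Wall) sketch with hypothetical conflict-of-interest
# classes: once a consultant reads data from one company in a class,
# access to competitors in the same class is denied.

CONFLICT_CLASSES = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}

def may_access(history: set, company: str) -> bool:
    """Allow access unless a competitor in the same class was already read."""
    cls = CONFLICT_CLASSES[company]
    return all(CONFLICT_CLASSES[c] != cls or c == company for c in history)

history = {"BankA"}
print(may_access(history, "BankB"))  # False: BankB conflicts with BankA
print(may_access(history, "OilCo"))  # True: different conflict class
print(may_access(history, "BankA"))  # True: re-reading the same company is fine
```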

* Integrity Models
~ Emphasis on integrity rather than confidentiality.
* The Biba Security Model
In the Biba security model, instead of security classifications, integrity levels are used. A principle of integrity levels is that data with a higher integrity level is believed to be more accurate or reliable than data with a lower integrity level.

The Biba security model implements a hybrid of the Ring and Low-Water-Mark policies.
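The "no write up" check that both the Ring and Low-Water-Mark policies share can be sketched as (hypothetical numeric levels; higher means more trusted):

```python
# Biba-style write check (hypothetical numeric integrity levels;
# higher = more trusted). A subject may write only to objects at its
# own integrity level or below, so low-trust subjects can never
# contaminate high-integrity data.

def can_write(subject_level: int, object_level: int) -> bool:
    return object_level <= subject_level

print(can_write(subject_level=1, object_level=3))  # False: no write up
print(can_write(subject_level=3, object_level=1))  # True: write down allowed
```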
* The Clark-Wilson Security Model
It uses transactions as the basis for its rules. It defines two levels of integrity only: constrained data items (CDIs) and unconstrained data items (UDIs).
* CDIs: data subject to integrity controls.
* UDIs: data NOT subject to integrity controls.
It defines two types of processes:
* Integrity verification processes (IVPs): ensure that CDI data meets integrity constraints (to ensure the system is in a valid state).
* Transformation processes (TPs): change the state of data from one valid state to another.
Data in this model cannot be modified directly by a user; it must be changed by trusted TPs, access to which can be restricted (thus restricting the ability of a user to perform certain activities).
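A minimal Clark-Wilson sketch with a hypothetical account balance as the CDI:

```python
# Clark-Wilson sketch (hypothetical account example). Users never touch
# a CDI directly; they invoke a trusted transformation process (TP),
# and an integrity verification process (IVP) checks the invariant.

balance = 100  # a CDI: constrained data item (must never go negative)

def ivp_valid() -> bool:
    """IVP: verify the CDI satisfies its integrity constraint."""
    return balance >= 0

def tp_withdraw(amount: int) -> bool:
    """TP: move the CDI from one valid state to another, or refuse."""
    global balance
    if amount < 0 or amount > balance:
        return False          # the transaction would break the invariant
    balance -= amount
    return True

print(tp_withdraw(30), balance, ivp_valid())   # True 70 True
print(tp_withdraw(500), balance, ivp_valid())  # False 70 True
```

Restricting which users may invoke `tp_withdraw` is how the model limits what activities a user can perform.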
:::
:::warning
**Security Tenets**
* Session Management
~ Session management includes all the activities necessary to manage the session—from establishment, during use, and at completion of the conversation.
* Exception Management
~ When the operation of a system encounters an exception, whether it is invoked by a person, process, technology, or combination thereof, the system must effectively handle the condition.
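A minimal sketch of the idea in code (the config file path and the safe default are hypothetical):

```python
# Exception management sketch: a failing operation is caught and
# handled so the system degrades gracefully instead of crashing or
# leaking internal details.

def read_config(path: str) -> dict:
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except OSError:
        # Fail safely: fall back to a known-good default rather than crash.
        return {"mode": "safe-default"}

print(read_config("/nonexistent/app.conf"))  # {'mode': 'safe-default'}
```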
* Configuration Management
~ The proper configuration and provisioning of all the components in a system is essential to the proper operation of the system. The design and operation of the elements to ensure the proper functional environment of a system is referred to as configuration management.
**Security Approaches**
* Host Security
~ Host security takes a granular view of security by focusing on protecting each computer and device individually instead of addressing protection of the network as a whole.
* Network Security
~ In network security, an emphasis is placed on controlling access to internal computers from external entities. This control can be through devices such as routers, firewalls, authentication hardware and software, encryption, and intrusion detection systems (IDSs).
> Most security experts now generally agree that a combination of both is needed to adequately address the wide range of possible security threats.
**Security Design Principles**
* Least privilege
~ Use minimum privileges necessary to perform a task.
* Separation of privilege
~ Access should be based on more than one item.
> Drawbacks: the extra checks carry a cost in both time and money.
* Fail-safe defaults
~ Deny by default (implicit deny) and only grant access with explicit permission.
> This is the concept behind allow lists (whitelists) and deny lists (blacklists).
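A minimal sketch of implicit deny (the users and permissions are made up):

```python
# Fail-safe defaults sketch: deny by default (implicit deny) and grant
# access only on an explicit allow list. Names are hypothetical.

ALLOW_LIST = {"alice": {"read"}, "bob": {"read", "write"}}

def is_allowed(user: str, action: str) -> bool:
    """Anything not explicitly permitted is denied."""
    return action in ALLOW_LIST.get(user, set())

print(is_allowed("bob", "write"))     # True: explicitly granted
print(is_allowed("alice", "write"))   # False: not granted, so denied
print(is_allowed("mallory", "read"))  # False: unknown user hits the implicit deny
```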
* Economy of mechanism
~ Mechanisms should be small and simple.
> Example:
> 1. If an application has 4000 lines of code, there are a lot fewer places for buffer overflows, for example, than in an application of two million lines of code.
> 2. Concerns the number of services that you allow your system to run. The keep-it-simple principle tells us to eliminate or disable those services we don't need.
* Complete mediation
~ Protection mechanisms should cover every access to every object.
* Open design
~ Protection mechanisms should not depend on the secrecy of the mechanism itself.
~ In most security circles, "security through obscurity" is considered a poor approach, especially if it is the only approach to security. Security through obscurity simply attempts to hide an object; it doesn’t implement a security control to protect it.
> The true protection relies upon the secrecy and complexity of the keys.
* Least common mechanism
~ Protection mechanisms should be shared to the least degree possible among users.
> Example: if there is a module that enables employees to check their payroll information, a separate module should be employed to change the information, lest a user gain access to change versus read access. Although sharing and reuse are good in one sense, they can represent a security risk in another.
* Psychological acceptability (least astonishment)
~ Protection mechanisms should not impact users, or if they do, the impact should be minimal.
* Defense in Depth (layered security)
~ It is characterized by the use of multiple, different defense mechanisms with a goal of improving the defensive response to an attack.

~ The layers usually are depicted starting at the top, with more general types of protection, and progressing downward through each layer, with increasing granularity at each layer as you get closer to the actual resource.

* Diversity of Defense
~ Diversity of defense is a concept that complements the idea of various layers of security.
* Encapsulation
~ The principle of encapsulation is used all the time in protocols. When a higher-level protocol is used to carry a lower protocol, the lower protocol is encapsulated in the data portion of the higher protocol.
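The idea can be sketched by nesting one message inside the data portion of another (the header format here is made up):

```python
# Protocol encapsulation sketch with a made-up header format: the
# carried message travels, unchanged, inside the data portion of the
# carrying protocol's frame.

def encapsulate(proto: str, payload: bytes) -> bytes:
    """Prepend a tiny header; the payload rides along untouched."""
    header = f"{proto}:{len(payload)}|".encode()
    return header + payload

inner = encapsulate("TCP", b"hello")  # inner protocol message
outer = encapsulate("IP", inner)      # carried in the outer frame's data portion
print(outer)                          # b'IP:11|TCP:5|hello'
```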
* Isolation
~ Isolation is the concept of separating items so that they cannot interfere with each other.
* Trust Relationships
~ Trust is defined as having an understanding of how a party will react to a given stimulus.
Changes in trust occur at trust boundaries: logical boundaries that surround specific levels of trust in a system. When outside input is entered into a computer program, it crosses a trust boundary, and a decision must be made as to whether or not the input should be trusted. The boundary around a system where external inputs can interact with the system is also referred to as the attack surface. A key element in limiting hostile inputs is attack surface minimization, or limiting the trusting of outside information.
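A minimal sketch of validating input at a trust boundary (the allowed character set and length limit are arbitrary choices):

```python
# Trust-boundary sketch: outside input is untrusted until validated.
# The allowed character set and length limit here are hypothetical.

import re

SAFE_USERNAME = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def cross_trust_boundary(raw: str) -> str:
    """Validate external input before letting it into the system."""
    if not SAFE_USERNAME.fullmatch(raw):
        raise ValueError("untrusted input rejected at the boundary")
    return raw

print(cross_trust_boundary("alice_01"))          # alice_01
try:
    cross_trust_boundary("alice'; DROP TABLE--")
except ValueError:
    print("rejected")                            # hostile input never gets in
```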
:::