Topic:
A new language for formal verification of security protocols
Structure of a literature review:
- Introduction
- Body
- History
- Existing languages
- How do they work?
- What are they good at?
- Where do they fall short?
- Conclusion
Conferences:
- IEEE S&P
- USENIX
- NDSS
- ACM CCS
- Eurocrypt
- Crypto
- CHES
- IEEE EuroS&P
- Asiacrypt
- ACM AsiaCCS ...
- http://jianying.space/conference-ranking.html
Keywords:
- Tools:
- ProVerif
- Scyther
- Athena
- AVISPA
- Casper/FDR
- SAL model checker
- SMV
- Murphi
- NuSMV
- Verifpal
- Coq
- CPSA
- DeepSpec
- F7
- Maude-NPA
- Tamarin
- FDR
- Fundamental theories
- Linear temporal logic
- Symbolic model checking
- Dolev and Yao
- Adversarial capabilities
- Adversary
- Symbolic model
- Computational model
- Fundamental hardware components
- Trusted Platform Module (TPM)
- Approaches:
- Model extraction
- PL code (e.g. C++) -> formal language -> prover
- "A Framework for Formal Verification of Security Protocols in C++"
- Code generation
- Formal language -> PL code
- Security properties (What people care about in this space)
- Confidentiality
- Integrity
- Availability
- Accuracy
- Authentication
- Aliveness
- Weak Agreement
- Non-injective agreement
- Injective agreement
- Remote attestation
- Equivalences
- Common attacks (Why people care)
- Eavesdropping
- MitM
- DoS
- Battery Exhaustion attack(?)
- Protocols in real life (Show relevance to the reader)
- Wi-Fi, e.g. WEP
- 5G AKA
- DP-3T pandemic-tracking protocol
- Signal
- Scuttlebutt
- TLS 1.3
- Scenarios
- Internet of Things (IoT)
- Vehicle to Vehicle (V2V)
- Mobile healthcare
Questions
- Do I introduce cryptography?
Mind map:
- Tools
- Existing protocols
- Security properties (objectives)
- Attacks (possible motivation for objectives)
- Fundamentals
- High-level approaches
- Scenarios of protocol application
Storyline:
## History
- People began designing security protocols long ago, but protocols thought to be secure were found to be flawed only after many years
  - e.g. the Needham-Schroeder public-key protocol and Lowe's attack on it
- Takeaway: manual analysis is insufficient, automated analysis is necessary
- Existing (at the time) automated analysis is not sufficient either
- Functional tests are not sufficient, as you need to simulate an adversary
- (Possible) Functional tests on modern computers are still not sufficient, reference attacks on modern protocols
- What people came up with next are the symbolic model and the computational model
- Symbolic model is good for automation
- Computational model is good for details
- (Possible) Show success stories of using these models
- Bring up found complex attacks
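The Needham-Schroeder/Lowe story can be made concrete for the reader. Below is a minimal Python sketch (my own toy symbolic encoding, not the syntax or output of any verification tool) that replays Lowe's man-in-the-middle attack: A starts a session with the intruder I, who re-encrypts the messages for B, so B completes the run believing it talked to A while I learns B's nonce.

```python
# Symbolic replay of Lowe's man-in-the-middle attack on the original
# Needham-Schroeder public-key protocol (encoding invented for illustration).

def enc(pubkey_owner, payload):
    """Asymmetric encryption, modelled as an opaque symbolic term."""
    return ("enc", pubkey_owner, payload)

def dec(agent, term):
    """An agent can only open terms encrypted under its own public key."""
    tag, key_owner, payload = term
    assert tag == "enc" and key_owner == agent, "decryption fails"
    return payload

Na, Nb = "Na", "Nb"  # fresh nonces of A and B

# Msg 1: A legitimately starts a session with I: {Na, A}pk(I)
msg1 = enc("I", (Na, "A"))

# I opens it and re-encrypts for B, impersonating A: {Na, A}pk(B)
msg2 = enc("B", dec("I", msg1))

# B replies with the original NS message 2 (no identity!): {Na, Nb}pk(A)
nonce, claimed_sender = dec("B", msg2)
msg3 = enc("A", (nonce, Nb))

# I cannot open msg3 but forwards it; A decrypts and, following the
# protocol, returns {Nb}pk(I) to its peer I -- leaking B's nonce.
_, nb_received = dec("A", msg3)
msg5 = enc("I", nb_received)
intruder_knows_Nb = dec("I", msg5)

# I completes the run with B: {Nb}pk(B). B now believes it talked to A.
msg6 = enc("B", intruder_knows_Nb)
assert dec("B", msg6) == Nb          # B accepts: authentication broken
assert intruder_knows_Nb == Nb       # I has learned B's secret nonce
```

Lowe's fix adds B's identity to message 3 ({Na, Nb, B}pk(A)), so A aborts when the identity does not match its intended peer, and the relay above fails.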
## Background (+ state of the art discussion)
- We pick symbolic model as our focused approach, as it's most relevant to us
- Fundamentals of theories in symbolic model so reader can understand the next section
- What are the subcategories of tools within symbolic model?
- Strength and weaknesses of each tool
- We want to showcase what the languages look like in these tools
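To ground the symbolic-model fundamentals before showing real tool languages, a tiny Python sketch can show the core idea behind Dolev-Yao deduction (this is my own toy encoding, not how any of the listed tools represent terms): messages are symbolic terms, and the attacker's knowledge is saturated under projection of pairs and decryption with known keys.

```python
# Toy Dolev-Yao deduction: saturate the attacker's knowledge under the
# destructor rules (project pairs, decrypt under known keys). Constructor
# rules are omitted for brevity, since they reveal no new atoms.

def saturate(knowledge):
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(know):
            if isinstance(term, tuple) and term[0] == "pair":
                new = {term[1], term[2]}                       # projection
            elif isinstance(term, tuple) and term[0] == "enc" and term[1] in know:
                new = {term[2]}                                # decryption
            else:
                new = set()
            if not new <= know:
                know |= new
                changed = True
    return know

# The attacker observes a ciphertext {s}_k1 and the pair <k1, n>:
observed = [("enc", "k1", "s"), ("pair", "k1", "n")]
know = saturate(observed)

print("s" in know)   # True: k1 is extracted from the pair, then s decrypted
print("k2" in know)  # False: unrelated keys stay underivable
```

Real tools work over richer term algebras and equational theories, but this fixed-point view is the essence of what a symbolic-model prover automates.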
## General workflow
- Talk about how the tools are used in practice
- Approaches of modelling
- Modelling the specification
- Model extraction from existing codebase
- Approaches of implementation post-analysis
- Code generation
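The code-generation direction can be illustrated with a deliberately toy sketch (the spec format and emitted stubs are entirely hypothetical, not the input or output of any real tool): from a declarative list of protocol steps, emit per-role handler stubs in the target language.

```python
# Hypothetical sketch of "formal language -> PL code": generate per-role
# Python stubs from a tiny declarative protocol description.

SPEC = [  # (sender, receiver, message name) -- invented example protocol
    ("A", "B", "hello"),
    ("B", "A", "challenge"),
    ("A", "B", "response"),
]

def generate_role(role, spec):
    """Emit Python source for one role's ordered message handlers."""
    lines = [f"class Role{role}:"]
    for i, (snd, rcv, msg) in enumerate(spec):
        if snd == role:
            lines.append(f"    def send_{msg}(self):        # step {i + 1}")
            lines.append(f"        return '{msg}'")
        elif rcv == role:
            lines.append(f"    def recv_{msg}(self, m):     # step {i + 1}")
            lines.append(f"        self.last = m")
    return "\n".join(lines)

print(generate_role("A", SPEC))
```

A real generator would of course also carry over the cryptographic operations and state-machine checks that the verified model guarantees, which is exactly what makes the generated code trustworthy.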
## Case studies for mainly modelling
- Protocol categories (different categories have different focus)
- What people are hunting for (security properties) in each scenario
- Here we reference the real life protocols (case studies)
- Maybe we point out some frictions/problems
## Case studies for mainly post-analysis use
- Code generation (derive a good implementation from the logical model)
- Generate litmus tests