## General Comments Questionnaire
- **Summary**: A 1-2 sentence summary of findings. It is recommended to write this last :)
- **Project Context:**
- Does the project being reviewed do anything interesting that would create extra challenges worth noting?
- They're creating an *eventually-consistent, content-addressable storage system backed by a DHT, with a WASM virtual environment for remote execution.*
- This combination presents many challenges and attack vectors.
- Were there any challenges with how we reviewed this project that we should note?
- Scope is a difficult question, as always. The Holochain team had a specific idea of what they wanted audited, but before we could audit that, we needed to figure out how the `hdi` crate fit into the rest of the system.
- Additionally, the team removed several design documents from scope after we started our audit, so we were effectively told to use only the code as the source of truth.
- That is workable, but it makes it harder to compare the design against the implementation, which weakens the design-review portion of the audit.
- Was the scope sufficient or were there key parts left out?
- The scope was sufficient, though a little vague at points; if anything, it may have been too broad.
- They asked us to specifically audit and examine `crates/hdi` in the repository we were given, with the explicit caveat that venturing outside that crate was fine as long as it was relevant to the audit.
- What about any dependency concerns?
- No more than with any other project.
- They rely on WASM and SQLite.
- **Code Quality:**
- Is the code well-organized or a mess?
- The code is generally well organized. For a Rust project, it is easy enough to get into and understand how things fit together.
- Is it well-commented?
- There is a good amount of high-level and package-level commentary. Typically, the `lib.rs` file in a given crate contains a lengthy comment describing what that module does. These comments were quite helpful throughout the audit.
- Are there tests?
- There are a lot of tests, and the team appears to follow test-driven development practices.
- Does it follow standard or best practices?
- Yes; for a Rust project, it mostly follows standard practices.
- They do some unusual things with macros for serialization and deserialization.
- **Documentation:**
- Is it sufficient?
- We were left wanting more documentation.
- Denis: I would say there is effectively no documentation. Because of that, the audit is less effective and efficient. There is no spec; we have to rely on general introductory documentation like https://developer.holochain.org/concepts/ or ask the Holochain team.
- Denis: Without a doubt, the Holochain team knows how the system works, but they prefer to keep that knowledge in their heads instead of in papers, specs, etc. That is a red flag for me.
- Accurate and helpful?
- The documentation we were initially given was not helpful and was explicitly stated to be out of date; even so, the original project estimates were made with that documentation in mind, since the team provided it.
- **System Design:**
- Is there anything overall concerning about how they’ve approached the design?
- They run arbitrary code in WASM runtimes, which by definition presents remote code execution attack vectors.
- Is it clear that they’ve considered security?
- Yes, they are clearly approaching the system with security in mind, and attack vectors are considered in both the code and the comments.
- That being said, they still have a relatively large set of attack surfaces to guard:
- Arbitrary code execution and remote execution vectors
- Sybil attacks
- DoS attacks
- Is there room for improvement with their design approach or tactics?
- Holochain is an *eventually consistent network*.
- They use the "cascade", their term for a fallthrough cache that checks successively more distant layers for a key as needed (see the fallthrough-cache sketch after this list).
- Agents are expected to traverse this cache to "discover their own truth", a behavior adopted by most of the system.
- They fold this in under their "Agent-centric" network definition.
- The cascade can be found in `crates/holochain_cascade`.
- However, they offload all conflict resolution onto the Agent whose chain is being mutated.
- On a busy Agent's chain, this means changes can be rejected as conflicting even though they were entirely valid when they were crafted.
- We consider this a problem the team needs to solve.
- They won't find a solution to the CAP theorem, but they can shore up their guarantees around how write conflicts are handled from the Agent's point of view.
- They basically create arbitrary sandbox environments with zomes, which are just WASM binaries stuffed into a shared box that everyone validates (see the sandbox sketch after this list).
- If an attacker finds a way to break out of the WASM environment, it presents a dire privilege escalation situation.
- DDoS attacks
- Shareef has some thoughts here; we need to work with him on those.
- Zome validation calls
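- To make the cascade's shape concrete, below is a minimal Rust sketch of a fallthrough cache. The `Layer` trait and the layer names are our own illustrative inventions, not the actual types in `crates/holochain_cascade`.

```rust
use std::collections::HashMap;

/// Illustrative only: one queryable layer in a fallthrough cache.
/// Holochain's real cascade has its own types; this just shows the shape.
trait Layer {
    fn name(&self) -> &str;
    fn get(&self, key: &str) -> Option<String>;
}

/// A toy in-memory layer; a real "far" layer would be a DHT query.
struct MapLayer {
    name: &'static str,
    data: HashMap<String, String>,
}

impl Layer for MapLayer {
    fn name(&self) -> &str {
        self.name
    }
    fn get(&self, key: &str) -> Option<String> {
        self.data.get(key).cloned()
    }
}

/// Walk the layers nearest-first; the first layer that has the key wins.
fn cascade_get(layers: &[&dyn Layer], key: &str) -> Option<(String, String)> {
    layers
        .iter()
        .find_map(|layer| layer.get(key).map(|v| (layer.name().to_string(), v)))
}

fn main() {
    let scratch = MapLayer { name: "scratch", data: HashMap::new() };
    let mut store = HashMap::new();
    store.insert("entry-hash".to_string(), "entry-content".to_string());
    let local = MapLayer { name: "local-store", data: store };
    let network = MapLayer { name: "network", data: HashMap::new() };

    // Nearest layer first, most distant last.
    let layers: Vec<&dyn Layer> = vec![&scratch, &local, &network];
    assert_eq!(
        cascade_get(&layers, "entry-hash"),
        Some(("local-store".to_string(), "entry-content".to_string()))
    );
}
```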
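- Similarly, the sandbox sketch below shows what hosting an untrusted WASM guest looks like. It uses the `wasmtime` crate purely for illustration (not necessarily the runtime Holochain embeds), and the exported `validate` function is a made-up stand-in for a zome callback.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // A trivial guest; a real zome would be a compiled WASM binary
    // exporting validation callbacks. `validate` here is hypothetical.
    let wat = r#"(module (func (export "validate") (result i32) i32.const 1))"#;
    let module = Module::new(&engine, wat)?;

    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // The host only interacts with the guest through typed exports;
    // the guest has no ambient access to the host's memory or OS.
    let validate = instance.get_typed_func::<(), i32>(&mut store, "validate")?;
    let verdict = validate.call(&mut store, ())?;
    println!("validation verdict: {verdict}");

    // In a real deployment the host would also bound guest resources
    // (fuel metering, memory limits) to blunt the DoS vectors above.
    Ok(())
}
```

- The security-relevant point is that everything the guest can do flows through explicit host-provided imports and exports, which is also exactly why a sandbox escape is such a severe privilege escalation.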
- **Particular Components:**
- Do we want to call out particular components of the system and provide specific feedback that doesn’t necessarily apply to everything?
- The WASM sandbox environment
- CAP theorem issues
- The impossibility results Nicole was discussing
- **Patterns with Issues:**
- Are you seeing any patterns in the issues found?
- There is a general hand-waviness around how write conflicts are resolved from an Agent's point of view.
- Any general areas of weakness?
- Do you see opportunities for improvement?
- Better definition around how write conflicts are handled.
- Maybe they could consider a CRDT approach? (See the sketch below.)
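- To illustrate what a CRDT-style approach could buy them, here is a minimal last-writer-wins register, one of the simplest CRDTs. The types and field names are purely illustrative, not a proposal for Holochain's actual chain entries.

```rust
/// Minimal last-writer-wins (LWW) register, one of the simplest CRDTs.
/// Purely illustrative; not Holochain's actual data types.
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister<T> {
    value: T,
    timestamp: u64, // e.g., a Lamport clock or hybrid logical clock
    agent_id: u64,  // tie-breaker so merges are deterministic
}

impl<T: Clone> LwwRegister<T> {
    /// Accept a write only if it is newer; ties break on agent_id.
    fn write(&mut self, value: T, timestamp: u64, agent_id: u64) {
        if (timestamp, agent_id) > (self.timestamp, self.agent_id) {
            self.value = value;
            self.timestamp = timestamp;
            self.agent_id = agent_id;
        }
    }

    /// Merge is commutative, associative, and idempotent, so all
    /// replicas converge regardless of delivery order.
    fn merge(&mut self, other: &Self) {
        self.write(other.value.clone(), other.timestamp, other.agent_id);
    }
}

fn main() {
    let mut a = LwwRegister { value: "v1", timestamp: 1, agent_id: 1 };
    let b = LwwRegister { value: "v2", timestamp: 2, agent_id: 7 };
    a.merge(&b);
    assert_eq!(a.value, "v2"); // later write wins deterministically
}
```

- The point is that a merge function with these algebraic properties turns a "conflict" into deterministic convergence, so concurrent writes resolve instead of being rejected after the fact.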
- Are there general areas of concern that are just ignored?
- They're aware of the eventual-consistency and write-conflict issues, but they have mostly accepted those for now.
- **Follow-up Audits:** Add a note if this is a follow-up audit to provide context.
- Do you notice a significant change since the last report?
- Did they adequately address the comments, issues, and suggestions from that previous report, or does some feedback still apply?