# TEAM 34
## Description of our implementation:
### Design
Our implementation is split into three logically separated chunks: *argument parsing*, *command handling*, and *exception management*. This makes it easy for newcomers to the project to find the files they need and make quick changes.
The argument parser is responsible for reading and validating all input before anything else happens. It converts raw command line arguments into a well-formed Argument object that downstream components can trust.
The process of parsing arguments is divided into two steps. First, general values are checked: the syntax of the commands used, the presence of the standard-input option, and so on. These checks are done by the `outer validators`. Depending on the results of the first step, the `inner validators` are selected. For example, if an `outer validator` finds that the `script-expression` command was used, an `inner validator` for checking the `script-expression` value is run in the second step.
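As a rough illustration of this two-step flow (the class and method names below are illustrative, not the exact ones from our codebase), the outer validators decide which inner validators need to run at all:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the two-step validation flow described above; the real
// validators also fill in the Arguments entity while they run.
class Arguments {
    // Parsed, validated values end up here (command, options, input values, ...).
}

interface InnerValidator {
    // Step 2: checks one specific value, e.g. the script-expression passed by the user.
    void validate(String[] rawArgs, Arguments arguments);
}

interface OuterValidator {
    // Step 1: checks general properties of the raw arguments and returns the
    // inner validators that are relevant for this particular invocation.
    List<InnerValidator> validate(String[] rawArgs, Arguments arguments);
}

class ArgumentParser {
    private final List<OuterValidator> outerValidators;

    ArgumentParser(List<OuterValidator> outerValidators) {
        this.outerValidators = outerValidators;
    }

    Arguments parse(String[] rawArgs) {
        Arguments arguments = new Arguments();
        List<InnerValidator> selected = new ArrayList<>();
        for (OuterValidator outer : outerValidators) {
            selected.addAll(outer.validate(rawArgs, arguments));
        }
        for (InnerValidator inner : selected) {
            inner.validate(rawArgs, arguments);
        }
        return arguments;
    }
}
```

The point of the split is that only the checks relevant to the detected command ever run, which is also why the validators end up carrying some parsing work.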
The commands module houses each command’s business logic, completely isolated from parsing concerns. By the time a command’s **execute()** method is called (all four assignment-required commands implement this interface), it can assume that every argument and value has already been **validated** and packaged into an Argument object.
The program’s entry point lives in Main, which invokes the parser and then dispatches to the appropriate command based on the parsed arguments.
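Reusing the `Arguments` and `ArgumentParser` names from the sketch above, the command contract and the dispatch can be pictured roughly like this (again, the names and the error handling shown are assumptions, not the real project code):

```java
import java.util.List;

// Illustrative sketch of the command contract and the dispatch in Main.
interface Command {
    // By the time execute() runs, every value in Arguments has been validated.
    void execute(Arguments arguments);
}

class Main {
    public static void main(String[] args) {
        try {
            Arguments arguments = new ArgumentParser(List.of()).parse(args);
            selectCommand(arguments).execute(arguments);
        } catch (RuntimeException e) {
            // In the real project this catch works with our custom exception hierarchy.
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    private static Command selectCommand(Arguments arguments) {
        // The real implementation picks one of the four assignment commands
        // based on what the parser has put into the Arguments object.
        throw new UnsupportedOperationException("sketch only");
    }
}
```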
For exceptions, we defined a set of custom exception classes so that users get clear messages.
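A minimal sketch of what such a hierarchy can look like; `Bip380RootException` is the root class referred to in the coverage section below, while the subclass and the choice of RuntimeException as its parent are illustrative assumptions:

```java
// Root of the custom exception hierarchy; every error shown to the user carries
// a human-readable message that Main can print before exiting.
class Bip380RootException extends RuntimeException {
    Bip380RootException(String message) {
        super(message);
    }
}

// Illustrative subclass: thrown when an argument value fails validation.
class InvalidArgumentValueException extends Bip380RootException {
    InvalidArgumentValueException(String message) {
        super(message);
    }
}
```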
Finally, our testing strategy is split into unit and integration tests. Unit tests cover individual classes and simple functions, achieving roughly 90% coverage across the codebase. Integration tests written with Mockito simulate end-to-end CLI runs by feeding input via stdin or streams and verifying the overall behavior.
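One common way to drive such a test is to swap the standard streams for in-memory ones before invoking the application; the command, expression, and assertion below are made up for illustration and are not taken from our test suite (our real tests also use Mockito):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;
import org.junit.jupiter.api.Test;

class CliIntegrationTest {

    @Test
    void printsOutputForExpressionReadFromStdin() {
        InputStream originalIn = System.in;
        PrintStream originalOut = System.out;
        ByteArrayOutputStream capturedOut = new ByteArrayOutputStream();
        try {
            // Feed the CLI its input via stdin and capture everything it prints.
            System.setIn(new ByteArrayInputStream("raw(deadbeef)\n".getBytes(StandardCharsets.UTF_8)));
            System.setOut(new PrintStream(capturedOut, true, StandardCharsets.UTF_8));

            Main.main(new String[] {"script-expression", "-"});

            assertTrue(capturedOut.toString(StandardCharsets.UTF_8).contains("#"));
        } finally {
            // Always restore the real streams so other tests are unaffected.
            System.setIn(originalIn);
            System.setOut(originalOut);
        }
    }
}
```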
### What would we change:
We agree the current argument parsing logic is difficult to follow. Splitting validators into “outer” and “inner” layers actually hurts readability.
The original idea was to use validators so that the `SOLID` principles were followed and it was clear what each validator is responsible for. Later we introduced the two-layer validation, which may seem complicated to read but made things much easier to code. Since validators were also responsible for parsing and filling the `argument entity`, the fact that not all validators were running at all times was helpful.
A better solution, in my opinion, would be to introduce separate classes for parsing and for validation. Taking away the validators' responsibility to also parse would make the project much more comfortable to read and understand. I don't find the idea of validators wrong in this project, but I did find it a little tricky to actually implement.
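A rough sketch of the direction this points to, reusing the `Arguments` name from the earlier sketch; nothing like this exists in the current codebase, and all names here are hypothetical:

```java
import java.util.List;
import java.util.Optional;

// Sketch of the proposed split: validators only report problems, and a dedicated
// reader fills the Arguments entity afterwards.
interface ArgumentValidator {
    // Returns an error message if the raw arguments are invalid, empty otherwise.
    Optional<String> check(String[] rawArgs);
}

interface ArgumentReader {
    // Assumes check() has already passed; only converts values, no validation.
    Arguments read(String[] rawArgs);
}

class ParsingPipeline {
    private final List<ArgumentValidator> validators;
    private final ArgumentReader reader;

    ParsingPipeline(List<ArgumentValidator> validators, ArgumentReader reader) {
        this.validators = validators;
        this.reader = reader;
    }

    Arguments parse(String[] rawArgs) {
        for (ArgumentValidator validator : validators) {
            validator.check(rawArgs).ifPresent(error -> {
                throw new IllegalArgumentException(error);
            });
        }
        return reader.read(rawArgs);
    }
}
```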
### Tools for analysis
#### Static analysis
#### IntelliJ IDEA “Project Default” profile
---
I ran IntelliJ’s code inspection on the project after making changes based on the issues identified by the team’s own analysis and by the reviewer.
Warnings cover things like fields that can be converted to local variables, parameters that always use the same constant value, unused interfaces or methods, catch blocks that silently ignore exceptions, and some simple style issues (unnecessary return statements and package names starting with an uppercase letter). Several classes could be simplified into records, and a couple of conditional checks are always true.
Most of these findings are low to medium impact.
#### SonarQube
---
After reviewing the SonarQube analysis, we found issues primarily related to maintainability, reliability, and adherence to coding standards. A significant number of findings involved the use of System.out and System.err for console output and error handling across multiple classes, which is considered bad practice in production-grade applications, but we think it is fine for our purposes.
In the validation components, repetitive and potentially unsafe regular expression patterns were flagged. These patterns could lead to stack overflow errors when processing large inputs, indicating a risk to runtime reliability.
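As a sketch of the kind of mitigation this points towards (the length limit and pattern below are illustrative, not the ones actually flagged), one simple option is to bound the input size before handing it to a repetitive pattern:

```java
import java.util.regex.Pattern;

// Illustrative guard: Java's regex engine recurses over repeated groups, so very
// long inputs can exhaust the stack. Bounding the length first removes that risk.
final class SafeMatching {
    private static final int MAX_INPUT_LENGTH = 4096; // assumed limit, not from the project
    private static final Pattern HEX_PAIRS = Pattern.compile("(?:[0-9a-fA-F]{2})+");

    static boolean isHexPayload(String input) {
        if (input == null || input.length() > MAX_INPUT_LENGTH) {
            return false;
        }
        return HEX_PAIRS.matcher(input).matches();
    }

    private SafeMatching() {
    }
}
```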
Overall, the findings suggest a range of low to moderate-impact issues.

The one E rating is because of the local SonarQube setup: I hardcoded the token, but this was removed after the analysis.
#### Dynamic analysis
---
#### Code Coverage with IntelliJ Coverage
For code coverage, I used IntelliJ Coverage. During the Break-It phase of the project, I had to use JaCoCo since the tests were only executing the `.jar` file directly. However, this was not necessary here.
As a result of the code coverage analysis, I added a few tests. However, the coverage was already quite good prior to this, so only a few additional tests were necessary.
The percentage of lines covered by our tests typically ranges between 90% and 100%, with the exception of the custom exception (`Bip380RootException`) and the main class. This should not cause any issues, as the untested lines are not critical to the proper functioning of the application.
Below, you can find the actual results of the code coverage run.
###### Top level coverage:

###### Coverage of ArgumentParser class:

###### Coverage of Arguments entity:

###### Coverage of Enums:

###### Coverage of Standard Input Reader:

###### Coverage of Validation Support Classes

###### Coverage of Inner Validators

###### Coverage of Outer Validators

###### Coverage of Command Classes

###### Coverage of Custom Exception

----
#### Fuzzing analysis
For fuzzing I decided to use Jazzer. It instruments fuzzing directly into the JVM, which makes it much faster and more efficient than fuzzing a wrapper around the JAR with something like AFL++.
There are multiple ways to run Jazzer: you can run it through JUnit or use Bazel. I decided to run Jazzer by itself using the released precompiled binaries.
The goal of the fuzzing was to detect unhandled behaviour. We have an exception handler around the runApp method, so I fuzzed runApp itself.
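A condensed sketch of the kind of fuzz target this involves; the signature of `runApp` and the exception type treated as "handled" below are assumptions, not the exact project code:

```java
import com.code_intelligence.jazzer.api.FuzzedDataProvider;

// Sketch of a fuzz target for the CLI entry point.
public class RunAppFuzzTarget {
    public static void fuzzerTestOneInput(FuzzedDataProvider data) {
        // Build a small argv from the fuzzer-provided bytes.
        int argCount = data.consumeInt(0, 4);
        String[] args = new String[argCount];
        for (int i = 0; i < argCount; i++) {
            args[i] = data.consumeString(64);
        }
        try {
            Main.runApp(args);
        } catch (Bip380RootException expected) {
            // Exceptions from our own hierarchy are handled behaviour, so they are
            // not findings; anything else propagates and Jazzer reports it.
        }
    }
}
```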
##### Exit hooking
The main problem I encountered during the fuzzing process was that the System.exit method was called, which terminated the whole fuzzing process. I decided to solve this by creating a hook around the System.exit call and throwing an exception instead, which I could then handle. Jazzer has this functionality built in.
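A minimal sketch of such a replacement hook using Jazzer's `@MethodHook` API; the class and exception names here are illustrative:

```java
import com.code_intelligence.jazzer.api.HookType;
import com.code_intelligence.jazzer.api.MethodHook;
import java.lang.invoke.MethodHandle;

// Illustrative hook: replaces every call to System.exit with an exception that
// the fuzz target can catch, so the fuzzing process keeps running.
public final class ExitHooks {

    public static class ExitCalledException extends RuntimeException {
        public ExitCalledException(int status) {
            super("System.exit(" + status + ") was called");
        }
    }

    @MethodHook(type = HookType.REPLACE, targetClassName = "java.lang.System", targetMethod = "exit")
    public static void systemExitHook(MethodHandle method, Object thisObject, Object[] arguments, int hookId) {
        int status = arguments.length > 0 ? (int) arguments[0] : 0;
        throw new ExitCalledException(status);
    }

    private ExitHooks() {
    }
}
```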
### Interesting issues we encountered:
When we first started testing our code that utilized the bitcoinj library, we encountered an issue after compiling the resulting jar.
```
Exception in thread "main" java.lang.NoClassDefFoundError: org/bitcoinj/params/MainNetParams
```
We solved this by adding the `maven-shade-plugin`. After that, we ran into another issue,
```
java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
```
We solved this by configuring the `maven-shade-plugin` to filter the manifest files out of the built jar.
## team-37-is-reviewed-by-team-34 description
### Design
The program is split into three parts. The Main class handles the help argument, checks which command was requested, and then calls the matching method. The utils package provides a class that reads and validates all input, reads from standard input, and reports errors in a consistent way. It also includes an enum that names the value types: seed, public key, and private key. Each command class then takes its own validated inputs and does its job:
- The derive key command parses seeds or keys and derives new keys with bitcoinj.
- The key expression command checks that expressions follow the correct form.
- The script expression command parses descriptor scripts such as `pk`, `pkh`, `multi`, and `sh`, and computes or verifies their checksums.
All validation failures or unexpected exceptions are caught and shown as clear messages before the program exits. To verify end-to-end behavior, there are integration tests under src/test/java/org/bip380/integration.
### What would we change
We would:
- Refactor the argument parsing logic into its own class.
- Merge related validators into a single layer so that validation reads as a straightforward linear sequence.
- Split the large Utils class into two focused helpers, one for input/output and one for error reporting, to enforce single responsibility.
- Enhance the built-in help output by generating the usage text directly from the parser definitions and by including examples.
- Add unit tests for the parser and for the checksum routines in ScriptExpression to provide quick feedback without running the full CLI; this also ties into the other commands.
### Tools for analysis
#### Static analysis
#### IntelliJ IDEA “Project Default” profile
---
The IntelliJ scan highlighted a few spots where missing null checks in our KeyExpression and ScriptExpression modules could lead to unexpected crashes if someone passes in bad input. It also pointed out a handful of simple test mistakes—like reversed parameters in an assertion—and a couple of long methods that would be easier to follow if split into smaller helpers.
#### SonarQube
---
SonarQube picked up on a few other common maintenance issues: an overly complex regular expression that could hang on malicious input, a few copy‑and‑paste code smells, and use of direct console printing instead of a logging framework. None of these are show‑stoppers, but cleaning them up will make the code clearer and more robust.
#### Summary
---
The most urgent fixes are adding null guards around user inputs and simplifying our regex logic. After that, we can tidy up the tests, split up any sprawling methods, switch to a proper logger, and remove duplicate code. Tackling these will reduce the risk of runtime errors and make life easier for anyone new joining the project.
#### Dynamic analysis
---
#### Code Coverage with JaCoCo
While reviewing team 37, I ran into a problem with creating code coverage for their tests. Only integration tests were implemented, and they were executing the `.jar` file directly. That meant I had to add JaCoCo as a project dependency and attach a JaCoCo agent to the spawned processes.
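Roughly, this means starting each test-driven process with the agent attached; the paths and file names below are illustrative assumptions, not the exact ones from their build:

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of launching the reviewed .jar with the JaCoCo agent attached so
// that the integration tests produce execution data; all paths are illustrative.
public class CoverageRunner {
    public static Process runJarWithCoverage(List<String> programArgs) throws Exception {
        List<String> command = new ArrayList<>(List.of(
                "java",
                // The agent records coverage into jacoco-it.exec, which can later be
                // turned into a report with the JaCoCo Maven plugin or CLI tools.
                "-javaagent:target/jacocoagent.jar=destfile=target/jacoco-it.exec",
                "-jar", "target/app.jar"));
        command.addAll(programArgs);
        return new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
    }
}
```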
As a result of creating code coverage for this project, I suggested:
- creating unit tests, which would help with debugging and refactoring.
- implementing test cases for the `--help` command, since they were missing.
- implementing test cases for when the program is run without any arguments.
---
#### Profiling with IntelliJ Profiler
For profiling the other team's project I used the IntelliJ Profiler. The results of this profiling did not provide any useful information. The only thing shown was that most of the memory allocations and CPU samples were taken by the `bitcoinj` library. Since this library plays a key role in the project, there wasn't much the other team could have done to fix this.
---
#### Fuzzing analysis
In comparison to the fuzzing of our application, I used Bazel to compile and run the fuzzers. It needed more setup, so I only extended the example project that exists in the repo. Other than that, the technology and setup were the same.
While fuzzing their project, I encountered one issue, but it had already been discovered by manual testing.
### Interesting issues we encountered:
## Comparison between our projects
Our implementation and Team 37’s both use a three-layer design, with a Main class for program entry, a utils package for shared tasks, and separate classes for the derive key, key expression, and script expression commands. We apply a two-stage validation approach with outer and inner validators, so that only the checks relevant to each command actually run; Team 37, by contrast, performs a single validation pass in its utils helper before handing off to the command. We defined custom exception classes to deliver precise error messages, whereas Team 37 catches all failures in utils and prints clear but more generic messages.
## Feedback to project:
We’d like to say a big thank you to the reviewers and teachers for all the helpful feedback and support. The reviews were thoughtful and made a lot of sense, and any changes to the specification were clearly explained and felt well justified. It’s been clear that a lot of care went into this class and project.
### Disclaimer
We created this document through https://hackmd.io/, and the split of the work done on it is written below:
#### Dušan
Design 1/2, What would we change, Static analysis 1/2, Comparison between our projects
#### Jakub Majer
Design 1, What would we change 1, Dynamic analysis
#### Adam
Fuzzing analysis, Interesting issues we encountered 1/2