# Issues found in Gossamer syncing
### 1. HandleTransactionMessage()
**Issue Description:** After executing the **nth** block, the **(n+1)th** block throws the error below (`failed to find root key`) and stops further syncing.
```logs=
WARN[09-01|02:37:45] failed to load state for block pkg=sync block=0xdb367756f79d5269fbf4443b9b3cc09599121adb911abaacc585f533eae5dfd1 error="failed to find root key=0xd9dd920475455acfb5f8a3d7396586ed5d00311e54ea4ac80f7375862ab34023: Key not found" caller=syncer.go:203
```
**Analysis:** While syncing, [HandleTransactionMessage()](https://github.com/ChainSafe/gossamer/blob/development/dot/core/messages.go#L28) and [handleBlock()](https://github.com/ChainSafe/gossamer/blob/development/dot/sync/syncer.go#L346) execute in parallel for the **nth** block. [HandleTransactionMessage()](https://github.com/ChainSafe/gossamer/blob/development/dot/core/messages.go#L28) modifies the trie state back to that of the **(n-1)th** block, so HandleBlockImport() stores the **(n-1)th** block's trie in the DB instead of the **nth** block's trie, while still importing the **nth** block successfully. Executing the **(n+1)th** block then throws an error in [TrieState()](https://github.com/ChainSafe/gossamer/blob/development/dot/state/storage.go#L162) because the **nth** block's trie is absent from the DB.
**Suggestion:** There should be a locking mechanism on storageState at every place where we set the storage context for the runtime. HandleTransactionMessage() should also be driven by a ticker and channel; otherwise it may run continuously and syncing could stall on the lock.
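A minimal sketch of what such locking could look like; `StorageState`, `WithRuntimeStorage`, and the ticker loop are illustrative assumptions, not Gossamer's actual API:

```go
package core

import (
	"sync"
	"time"
)

// StorageState is a hypothetical stand-in for the state that both
// HandleTransactionMessage() and handleBlock() mutate.
type StorageState struct {
	mu sync.Mutex // guards the runtime's storage context
	// trie and database handles would live here
}

// WithRuntimeStorage runs fn while holding the storage lock, so a
// transaction-validation pass cannot clobber the trie mid block-import.
func (s *StorageState) WithRuntimeStorage(fn func()) {
	s.mu.Lock()
	defer s.mu.Unlock()
	fn()
}

// handleTransactionsLoop validates queued transactions on a ticker rather
// than continuously, so it cannot starve block import of the lock.
func handleTransactionsLoop(s *StorageState, txCh <-chan struct{}, stop <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			select {
			case <-txCh: // only run when a transaction is actually queued
				s.WithRuntimeStorage(func() {
					// validate transactions against the current trie state
				})
			default: // nothing queued this tick
			}
		}
	}
}
```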
### 2. Issue due to failure to call the `Core_execute_block` exported function at some block x
```logs=
EROR[09-01|16:30:04] failed to handle block pkg=sync number=14383 error="failed to execute block 14383: Failed to call the `Core_execute_block` exported function." caller=syncer.go:268
```
**Analysis:** On a failure while syncing a block, [handleBlockDataFailure()](https://github.com/ChainSafe/gossamer/blob/development/dot/network/sync.go#L842) resets the requestData map and pushes a request for q.current, which leads to fetching block responses from q.current to q.current+128.
The issue arises when we have more than 128 responses queued while executing blocks in handleBlock() and an error occurs in a block past the 128th (say it fails at the **nth** block). [handleBlockDataFailure()](https://github.com/ChainSafe/gossamer/blob/development/dot/network/sync.go#L842) then pushes a request for blocks q.start to q.start+128. This range doesn't include the failed block, so its block response is never fetched.
Also, [handleResponseQueue()](https://github.com/ChainSafe/gossamer/blob/development/dot/network/sync.go#L261) tries to pushRequest() for the failed block, but because we previously had data for the failed block's range, the `(n-(n%128)+1)th` to `[(n-(n%128)+1)+128]th` blocks, the requestData map still holds stale values (data.sent: true, data.received: true) for the `(n-(n%128)+1)th` block. This never triggers fetching block data for that range.
**Suggestion:** Instead of resetting the map entry for the q.start block and pushing a request for q.start, [handleBlockDataFailure()](https://github.com/ChainSafe/gossamer/blob/development/dot/network/sync.go#L842) should reset the map entry for the `(n-(n%128)+1)th` block and push a request for the `(n-(n%128)+1)th` block.
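A minimal sketch of the suggested change, using simplified stand-ins for the syncQueue, requestData map, and pushRequest() described above (the real signatures differ):

```go
package network

import "sync"

const blockRequestSize = 128

// requestData mirrors the per-batch sent/received flags described above.
type requestData struct {
	sent     bool
	received bool
}

// syncQueue is a pared-down stand-in for the real queue.
type syncQueue struct {
	requestData sync.Map // batch start block -> requestData
}

// batchStart returns the first block of the 128-block batch containing n.
// It matches the document's (n - (n % 128) + 1) for n not a multiple of
// 128, and rounds down from n-1 so exact multiples stay in their batch.
func batchStart(n int64) int64 {
	// e.g. n = 200: batch start is 129, so blocks 129..256 are
	// re-requested and the failed block 200 is included.
	return (n-1)/blockRequestSize*blockRequestSize + 1
}

// handleBlockDataFailure resets the entry for the failed block's own
// batch (not q.start's), so handleResponseQueue() will re-fetch it.
func (q *syncQueue) handleBlockDataFailure(failed int64) {
	start := batchStart(failed)
	q.requestData.Delete(start) // drop stale sent/received flags
	q.pushRequest(start)        // re-request blocks start..start+127
}

// pushRequest stands in for the real pushRequest(); it would enqueue a
// block request covering blocks start..start+127.
func (q *syncQueue) pushRequest(start int64) {}
```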
### 3. handleBlockAnnounce()
**Analysis:** handleBlockAnnounce() has no mechanism to detect redundant requests; it immediately pushes a request for every announced block. This leads to fetching responses for blocks near the chain head (around block 9 million) and mixing them into the block responses currently being executed. Executing those blocks always fails to handle the block data because the parent key is not found (`error="failed to find root key`).
- [handleBlockDataFailure()](https://github.com/ChainSafe/gossamer/blob/development/dot/network/sync.go#L829) creates a request for the parent block, but that block also belongs to the ~9 million range. Thus, nowhere during block handling do we create a request for the block after our head.
- syncAtHead() triggers a request for the block after our head, so syncing continues.
- Because block announcements arrive continuously, this happens repeatedly and also slows down the syncing process.
- TBD: since the requestCh channel has a buffer size of 6, this might also block the actual syncing process.
**Suggestion:** Block announcements should not be acted on until our node's head is near the peers' head; until then, the required blocks should be fetched through the regular syncing path instead, as in the sketch below.
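A minimal sketch of such a guard; the threshold value and function name are assumptions for illustration:

```go
package network

// announceSyncThreshold is illustrative: one batch worth of blocks.
const announceSyncThreshold = 128

// shouldRequestAnnounced gates announce-driven requests: only fetch an
// announced block once our head is within one batch of it. Otherwise a
// ~9-million block announce would pollute the response queue with blocks
// whose parents we don't have, and syncAtHead() should catch us up first.
func shouldRequestAnnounced(ourHead, announced uint64) bool {
	return announced <= ourHead+announceSyncThreshold
}
```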