# Transient Worldstate Mismatch
> tl;dr Sometimes Besu's Bonsai data store gets corrupted, leading to incorrect worldstate root calculations, which causes Besu to treat a new block as bad and halt its chain.
The Besu team believes the cause of the mismatch is a non-threadsafe in-memory implementation of the Bonsai trie, combined with how that trie is written to disk via the local RocksDB instance.
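To illustrate the class of hazard, here is a minimal, single-threaded Java sketch. This is not Besu's actual code; the class and variable names are hypothetical. A fail-fast `HashMap` throws `ConcurrentModificationException` when it is structurally modified mid-iteration, which is the same shape of problem as a flush iterating over in-memory trie nodes while a block-import thread is still mutating them. In genuinely multi-threaded code without synchronization, the usual outcome is silent corruption rather than a helpful exception.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class TrieFlushHazard {
    // Returns true if the simulated "flush" observes the map being mutated
    // underneath it. This single-threaded loop stands in for a cross-thread
    // race between a disk flush and a block-import write.
    static boolean flushWithConcurrentWrite() {
        Map<String, String> trieNodes = new HashMap<>();
        trieNodes.put("0xab", "branch-node");
        trieNodes.put("0xcd", "leaf-node");
        try {
            for (Map.Entry<String, String> entry : trieNodes.entrySet()) {
                // A block-import "thread" inserts a new node mid-flush.
                trieNodes.put("0xef", "new-leaf");
            }
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast iterator detected the mutation
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("mutation during flush detected: " + flushWithConcurrentWrite());
    }
}
```

An unsynchronized map gives no such guarantee across real threads: a flush can persist a half-updated view to RocksDB, which is consistent with the corruption theory above.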
When this occurs, a variety of symptoms may present. The most common (by a long shot) is a disagreement on the stateroot hash for an incoming block, resulting in Besu incorrectly categorizing the block as bad and refusing to add it or its children to the canonical blockchain. Other failures may also manifest earlier in block processing, before the state calculation, such as failed transactions caused by corrupted source data (for example, account state or nonces).
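As a toy illustration of why a single corrupted value halts the chain, the sketch below shows that changing one account nonce changes the computed state root, so the local root no longer matches the one in the incoming block header. All names here are hypothetical, and a flat SHA-256 over sorted accounts stands in for what real clients compute with a Merkle-Patricia trie.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.TreeMap;

public class StateRootSketch {
    // Toy stand-in for a Merkle state root: SHA-256 over accounts in
    // deterministic (sorted) order. Real clients hash a Merkle-Patricia trie.
    static String stateRoot(TreeMap<String, Long> accountNonces) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            for (var entry : accountNonces.entrySet()) {
                String record = entry.getKey() + ":" + entry.getValue();
                digest.update(record.getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        TreeMap<String, Long> network = new TreeMap<>();
        network.put("0xaaa", 5L);
        network.put("0xbbb", 9L);

        // A corrupted local store holds one stale nonce for 0xaaa.
        TreeMap<String, Long> corrupted = new TreeMap<>(network);
        corrupted.put("0xaaa", 4L);

        // One wrong nonce changes the entire root, so the local client
        // disagrees with the root carried in the incoming block header.
        System.out.println("roots match: " + stateRoot(network).equals(stateRoot(corrupted)));
    }
}
```

Because the root commits to every account, any single corrupted nonce or balance is enough to make Besu reject an otherwise valid block.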
The most recent failure, on Mainnet Shadowfork #7, is the first time the problem has occurred in such a widespread fashion, or with any correlation to the merge. All participating Besu instances failed: one before TTD arrived, and the rest after it passed. After TTD, consensus clients interact with Besu in a more asynchronous fashion, which may have exacerbated the existing concurrency issue; however, this theory is still under study.
### Is this merge related?
No. We've seen this both before and after the merge on shadowforks, as well as on networks that have yet to merge. This is a Besu problem, not a merge problem; the asynchronous behavior of consensus clients simply helps us surface it.
### What was the impact on shadowfork 7?
When all the Besu instances eventually halted, their consensus clients could no longer make attestations, and network participation dropped. Those clients could still propose blocks, but the blocks were empty. Since Besu made up 25% of the network, the best possible participation was 75%, leaving little room for error if other clients failed: finalizing requires attestations from at least two-thirds of the total stake.
Once other clients did fail and participation dropped below that threshold, the network stopped finalizing.
### Can you reproduce the issue?
Sorta! Concurrency issues are incredibly hard to reproduce, which makes them slow to analyze and hard to draw concrete conclusions about. Until recently, we had been inferring causes from reading the code and correcting potential problems as we suspected them. We've since made enough progress adopting Hive tests that they now consistently reproduce the symptoms.
Mismatches happen more often when Besu runs inside a Docker container; there are only 2 reports of the problem occurring outside of Docker. The vast majority (though not all) of observations have occurred during mainnet shadowforks, which (like the Hive tests above) run Besu in Docker containers. The Besu team also has many canary instances of Besu running Bonsai outside of Docker containers, and none of them show symptoms of the problem.
A number of activities are under way to correct this issue; it is the entire team's focus and has been ongoing since late April. Several incremental fixes have been applied, which may make the issue easier to reproduce, possibly contributing to why shadowfork 7 showed a much more pronounced effect than any prior test.
- 2022-04-22: First known occurrence, during mainnet shadowfork 1, on a single instance of Besu serving a Teku client.
- 2022-05-08: Two instances showed the problem during mainnet shadowfork 4; the affected consensus clients were Lodestar and Nimbus.
- 2022-05-18: During mainnet shadowfork 5, Besu disagreed with the rest of the network about the nonce state of an EOA. Besu rejected a block as containing a bad transaction, halting its chain.
- 2022-06-07: All Besu instances failed during mainnet shadowfork 7; details: https://hackmd.io/5tEHRMGORwuHumJs419pfw
### Tracking issues and PRs
- https://github.com/hyperledger/besu/issues/3891 - The one time this has been reported in the wild outside of Docker.
- https://github.com/hyperledger/besu/issues/3855 - A rare case where we suspect the Bonsai trie corruption changed an EOA nonce.
- https://github.com/hyperledger/besu/issues/3909 - Related, but likely in-memory only; fixed by a restart.