# Dec 9th Status update
## On data availability checks
Data availability checks might be more viable than I initially thought. Given that Twitter handles approximately 10,000 tweets per second, and each tweet contains at most 280 characters, one second of tweets amounts to at most (280 * 4 bytes (UTF-8 worst case) + some metadata) * 10,000 ~= 12MB. If the state root of the data is updated every Ethereum slot (12 seconds), that comes to 144MB per slot. Hence a sequencer/validium node is required to download and check 144MB of data every 12 seconds (the validity and data availability checks themselves can be done efficiently using SNARKs, etc.). 144MB per slot would be horrible for an L1 blockchain, but actually not too bad for the data availability layer of a non-financial validium.
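For concreteness, here is the back-of-envelope arithmetic as a short script (a sketch; the 80-byte per-tweet metadata figure is an assumption for illustration, and the other constants are the rough estimates above):

```python
# Back-of-envelope estimate of the DA load for a Twitter-scale validium.
TWEETS_PER_SECOND = 10_000    # rough Twitter-wide throughput
MAX_CHARS_PER_TWEET = 280
MAX_BYTES_PER_CHAR = 4        # UTF-8 worst case
METADATA_BYTES = 80           # assumed per-tweet overhead (author, timestamp, ...)
SLOT_SECONDS = 12             # Ethereum slot time

bytes_per_second = TWEETS_PER_SECOND * (
    MAX_CHARS_PER_TWEET * MAX_BYTES_PER_CHAR + METADATA_BYTES
)
bytes_per_slot = bytes_per_second * SLOT_SECONDS

print(f"{bytes_per_second / 1e6:.1f} MB/s")    # ~12.0 MB/s
print(f"{bytes_per_slot / 1e6:.1f} MB/slot")   # ~144.0 MB per 12s slot
```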
## DA solutions and narrowing our scope
Initially, I thought that simply hosting off-chain data on IPFS (and perhaps uploading it to Arweave, etc. for permanence) would be sufficient.
However, decentralized storage networks like IPFS or Arweave aren't built for sharing large amounts of data quickly; at the moment they are tailored more to hosting.
But as mentioned above, data availability checks for Twitter-scale decentralized applications might be feasible, so we can stop concerning ourselves with DA and leave the development of DA infrastructure to protocols like Celestia/Polygon Avail. **Therefore, narrowing our focus to decentralizing the verification of signatures/ZKPs is likely the optimal path for now.**
## Integrating DA solutions with our validium
There are potentially great benefits to collaborating with DA providers to fully leverage our validium's data model. That is, since our validium has neither transaction ordering nor a VM, syncing data between full nodes could be made extremely efficient with somewhat straightforward engineering (compared to, say, SNARKifying the entire Ethereum blockchain).
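As a rough illustration of why (a minimal sketch, not a concrete sync protocol design): without ordering or a VM, reconciling state between two full nodes reduces to a set difference over content-addressed data chunks. The chunk-hashing scheme below is hypothetical.

```python
import hashlib

def chunk_id(chunk: bytes) -> str:
    """Content-address a data chunk by its SHA-256 hash (hypothetical scheme)."""
    return hashlib.sha256(chunk).hexdigest()

def missing_chunks(local_ids: set[str], remote_ids: set[str]) -> set[str]:
    """Since chunks are unordered, sync is just a set difference:
    fetch exactly the chunks the peer has that we don't."""
    return remote_ids - local_ids

# Usage: a node compares its chunk-ID set against a peer's advertised set
# and downloads only the gap, with no replay or ordering constraints.
local = {chunk_id(b"tweet batch 1"), chunk_id(b"tweet batch 2")}
remote = local | {chunk_id(b"tweet batch 3")}
print(missing_chunks(local, remote))  # IDs of chunks to fetch from the peer
```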