# The Next JS Framework Benchmark

The [JS Framework Benchmark](https://github.com/krausest/js-framework-benchmark) has served as the baseline for evaluating performance for years. And honestly, even though it is dated and doesn't mean as much anymore, since most frameworks have solved this equation by now, it still represents the best we have for looking at basic DOM operation costs. This was the benchmark where the modern Signals framework was born.

The Marko team at some point created the [Isomorphic UI Benchmark](https://github.com/marko-js/isomorphic-ui-benchmarks), which, while not nearly as prolific as the other and realistically not very good in the browser, has been a really good test for raw SSR speed. It was with this benchmark that Solid's SSR approach was born and battle-tested. It led to some great optimizations in escaping logic.

So we know how fast frameworks render and update in the client, and how fast they render on the server. What we are missing is a good measure of hydration. It is impossible to know what is optimal when we don't have a measure to compare against. So that is the starting point of this exploration.

## Hydration Cost

It comes in 4 forms, but I'm going to focus on the first 3.

#### 1. Cost of Code Execution

#### 2. Cost of Bundle Transfer

#### 3. Cost of Serialization

#### 4. Cost of Blocking Resources (CSS, etc.)

Smart inlining of CSS and the like is not common enough today, and difficult enough to pull off, that I don't think it should be our focus. Talking to experts in the field, we definitely feel that 3, serialization cost, is probably the most expensive, while the first 2 vary more in expense. Bundle transfer may be difficult to test in simple examples. Remember, a benchmark is only really successful if it is simple enough to replicate in many solutions. Execution and serialization are easier to simulate, and that is what I intend to push here.

## We Already Have a Test?

The Hacker News comments page is actually a pretty good hydration test for execution and serialization. There are huge amounts of data and a lot of nested components. It is also trivial to implement in VanillaJS. With fixed, prefetched data this could serve as a baseline.

However, we need to be a bit more restrictive so implementations don't cheat. While this might seem discriminatory, my interest is only in client-routed solutions. We know that MPAs are fast to load. While VanillaJS should be able to take the most optimal approach, I think the frameworks must be able to show persisted state with client-side routing in the example.

The criteria would involve an interactive counter: when you navigate to the next page, which relies on that count to decide what to show, the page should render and hydrate properly (I sketch what this could look like at the end of this post). Adding this shared, persistent client state should, I hope, be sufficient to prevent manual optimization from creeping in.

## What do we measure?

We care about interactivity here more than render speed. However, serialization does impact render speed, so we can't ignore it. LCP and INP seem really important, but we might need to do more manual measurement. LCP could be impacted by streaming more than by the difference in data over the wire.

Miško (Qwik) had a test where a button is pushed the moment it is visible, and then again afterwards, to show the impact of interacting before and after hydration. I care less about after hydration. The challenge there was that, for a lot of solutions, input was lost beforehand.

How do we know if something is interactive? Maybe the best we can do is keep pushing updates until they are reflected on the screen,
and measure the distance from first click to first update.
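
To make that concrete, here is a minimal sketch of that probing loop in plain TypeScript, running inside the page. The `#increment` and `#count` selectors are hypothetical placeholders for whatever the benchmark app renders; a real harness would likely drive this from Puppeteer or similar, but the idea is the same. Note it detects the first DOM update, which is a proxy for "reflected on the screen."

```ts
// A sketch of the probing approach, assuming a hypothetical page with a
// counter button ("#increment") and an element showing its value ("#count").
function measureFirstClickToFirstUpdate(
  button: HTMLElement,
  output: HTMLElement
): Promise<number> {
  return new Promise((resolve) => {
    const firstClick = performance.now();

    // Resolve as soon as any click is actually reflected in the DOM.
    const observer = new MutationObserver(() => {
      observer.disconnect();
      clearInterval(timer);
      resolve(performance.now() - firstClick);
    });
    observer.observe(output, {
      childList: true,
      characterData: true,
      subtree: true,
    });

    // Click immediately, then keep pushing clicks until one lands.
    button.click();
    const timer = setInterval(() => button.click(), 16);
  });
}

// Run as early as possible, before hydration has a chance to finish.
const button = document.querySelector<HTMLElement>("#increment")!;
const output = document.querySelector<HTMLElement>("#count")!;
measureFirstClickToFirstUpdate(button, output).then((ms) =>
  console.log(`first click to first update: ${ms.toFixed(1)}ms`)
);
```

Because the observer only fires once an update reaches the DOM, the resolved value captures the gap we care about even for solutions that swallow clicks before hydration completes.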
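
And to pin down the shared-counter criterion from earlier, here is a framework-agnostic sketch. The names (`renderCounterPage`, `renderSecondPage`, `navigate`) and the even/odd rule are made up for illustration; each framework would express this with its own router and state primitives, but the constraint is the same: the count lives in client memory, navigation must not reload the page, and the second page's output must derive from it.

```ts
// Shared client state: must survive navigation, which a full-page
// reload (i.e. a cheating MPA-style implementation) would reset.
let count = 0;

function renderCounterPage(root: HTMLElement) {
  const button = document.createElement("button");
  button.textContent = `Count: ${count}`;
  button.onclick = () => {
    count++;
    button.textContent = `Count: ${count}`;
  };
  root.replaceChildren(button);
}

// The second page decides what to show based on the persisted count,
// so losing that state produces visibly wrong output.
function renderSecondPage(root: HTMLElement) {
  root.textContent =
    count % 2 === 0 ? "Showing the even view" : "Showing the odd view";
}

// A naive client-side router: swap views in place, never reload.
function navigate(page: "counter" | "second") {
  const root = document.querySelector<HTMLElement>("#app")!;
  if (page === "counter") renderCounterPage(root);
  else renderSecondPage(root);
}
```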
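
Finally, as background on form 3 above, a rough sketch of what serialization cost looks like in practice: frameworks embed the data used during server rendering into the HTML so the client can hydrate without refetching. The `__APP_STATE__` key is made up for illustration; every framework has its own format, but the wire bytes and the parse cost scale with the data either way.

```ts
// Illustrative only: a simplified version of the common pattern for
// embedding state in server-rendered HTML. "__APP_STATE__" is a
// made-up key; real frameworks use their own formats and escaping.

// Server side: serialize the rendering data into the document,
// escaping "<" so content containing "</script>" cannot break out
// of the script tag.
function serializeState(state: unknown): string {
  const json = JSON.stringify(state).replace(/</g, "\\u003C");
  return `<script>window.__APP_STATE__=${json}</script>`;
}

// Client side: hydration starts by reading that payload back. Both
// the extra bytes over the wire and the parse/allocation work grow
// with the amount of data, which is exactly what a data-heavy page
// like Hacker News comments stresses.
const state = (window as any).__APP_STATE__;
```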