***Please note that this is an automatically generated transcript and will contain errors.***

---

Sen and I are going to talk about this simulation tool we have been building for the past couple of months, and I don't want to lose any time, so let's start. The motivation is the geographical decentralization research that we wanted to do: there is an inherent conflict between latency optimization and preserving the decentralization of a network's topology. You could have an incentive to co-locate with, say, a CEX, with a relay, with any kind of information source. But this may end with everyone being in the same region, threatened by governmental censorship issues, sanctions, and so on. And of course, we want to incentivize running validators from different regions: from Buenos Aires, from South Africa, and so on. We want to understand how we can design protocols which achieve this decentralization while there are still incentives to minimize latency.

We also thought about modeling this analytically for an Ethereum PBS setting, but besides the trivial equilibrium, it was hard for us to come to any solution that way. So we moved to the simulation approach. We want to develop an agent-based model with which we can simulate different protocols and compare them in terms of their geographical decentralization in the limit, both in terms of the positions of the nodes and entities, and in terms of the utility differences between agents. And we are going to zoom in on Ethereum timing games.

The very simple model is that we simulate N slots. At each slot we randomly pick one validator as the proposer. The proposer has a position in the network and can choose when to release the block, so it can play timing games or not. We also let every proposer, before their slot, choose a new location to move to. An attester, when it gets the block, votes for it if it arrives before the four-second deadline.

When you say a new location, you mean a simulated geographic location?

Yeah, exactly. And for a block to become canonical, it has to get a certain fraction of the votes. The simple utility function of the proposer is that it gets some utility for every millisecond, every time step, that it waits. This is simply how much time it waited on top of the previous block, times the utility it gets per millisecond. And we also consider the probability of getting enough votes: if there are 100 validators and the required fraction is two thirds, what is the probability that, if the proposer releases the block at 100 milliseconds, 66 attesters see the block?

We have two types of agents: the validator, with its location determining the latency to other nodes, its moving strategy, and its proposal timing strategy; and then a couple of relays with different properties, which I will talk about. For location and latency, we sample the validators from real-world distributions, and then for each validator we find the nearest GCP region. We then simply use the latencies between GCP regions, because Google publishes this data, and take that as our latency metric.
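As an aside on the latency model: the mapping from a sampled validator position to a latency table can be as simple as a nearest-region lookup. A minimal sketch, with an illustrative (not exhaustive) subset of GCP regions and rough coordinates:

```python
import math

# Illustrative subset of GCP regions with rough (lat, lon) coordinates;
# a real simulator would use the full region list.
GCP_REGIONS = {
    "europe-west3": (50.11, 8.68),           # Frankfurt
    "us-east4": (39.03, -77.49),             # N. Virginia
    "southamerica-east1": (-23.55, -46.63),  # Sao Paulo
    "africa-south1": (-26.20, 28.05),        # Johannesburg
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(validator_latlon):
    """Map a sampled validator position to its closest GCP region; the
    published inter-region latency table then supplies the pairwise
    latency between any two validators."""
    return min(GCP_REGIONS,
               key=lambda r: haversine_km(validator_latlon, GCP_REGIONS[r]))
```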
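And to make the proposer's objective concrete, here is a minimal sketch of the utility described above, under assumed parameter values (not the talk's actual configuration), with Gaussian jitter standing in for network noise so that the quorum probability is a genuine probability:

```python
import random

# Assumed parameters for illustration, not the talk's actual configuration.
DEADLINE_MS = 4000   # attestation deadline after the slot starts
QUORUM = 2 / 3       # fraction of attesters that must see the block
REWARD_PER_MS = 1.0  # marginal reward per millisecond of waiting
JITTER_MS = 50.0     # assumed network jitter on top of base latencies

def p_quorum(release_ms, latencies_ms, samples=500):
    """Monte Carlo estimate of the probability that at least a QUORUM
    fraction of attesters receives the block before the deadline."""
    need = QUORUM * len(latencies_ms)
    hits = 0
    for _ in range(samples):
        seen = sum(1 for d in latencies_ms
                   if release_ms + max(0.0, random.gauss(d, JITTER_MS)) <= DEADLINE_MS)
        hits += seen >= need
    return hits / samples

def proposer_utility(release_ms, latencies_ms):
    """Waiting pays linearly per millisecond, discounted by the chance
    that the block misses the attestation quorum and is not canonical."""
    return release_ms * REWARD_PER_MS * p_quorum(release_ms, latencies_ms)
```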
In our simulations, what happens is that the proposer communicates with the relay to get the latest bid, and when they want to commit to a block, the relay simply propagates it. So you have to find the relay which has two thirds of the attesters closest by, so that you can delay the block proposal as much as you can. The timing strategy could be: find the latest time at which you can still propose the block, always propose at a fixed deadline, or choose a random deadline, which we think is unlikely. And the moving strategy: if you are a local staker, you probably won't be able to move, so we mark 10% of the validators as unable to move, and for the remaining 90% we allow them to move. They simply try to find, for that given slot, the best position to move to so that they can wait longer and maximize the reward they get for that slot. And then, Sen, you can take over.

Okay, let me speed through this. In the base settings we have 1000 validators, and as we said, one is randomly selected as the proposer in each slot. We run the simulation for 10,000 slots, the validators move in ten-slot windows, and we compute our metrics per slot. We also have a visualization part. Here we consider that the value a relay provides, which is an abstraction of the MEV space, is different across relays: different relays can provide different bid values because they have different locations, are close to centralized sources like Binance, or are censoring. So their bid values will be different.

The question here is: if relays in some regions provide more valuable bids, what would the geographic distribution of the validators be? Here we imagine a hypothetical, more geographically decentralized set of relays: besides the EU and US relays, we also have some relays in South America and Africa. To provide some incentive to co-locate with the relays with less validator density around them, in the EU and US relay types the bid value is alpha, but in the South American and African relay types the bid is alpha plus epsilon. So the bids from these relays are more valuable, and validators essentially have an incentive to co-locate with them.

And we'll show a demonstration of the simulation. This is a visualization of our simulation results; maybe we go to dark mode, that's better. So this is the world. We have multiple relays, including the South American and African relays, and we have multiple nodes representing the validators. Density is shown with different colors: purple means there are fewer validators, yellow means there are more. We can also observe from these metrics that many validators are co-located in the US or in Europe. So this is the world with all the relays, and we start the simulation. We can see some metrics change and the density change as well. This is a bit slow, so let's just move to the end. At the end of the simulation, we can see the average latency to the relays, averaged per relay type.
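A minimal sketch of the "delay as long as the quorum allows" logic just described, combined with the per-relay bid values: the binding constraint is the quorum-th smallest relay-to-attester latency, and a subsidized relay can win even with a worse quorum latency. All names and numbers here are hypothetical:

```python
import math

def latest_release_ms(relay_to_attester_ms, deadline_ms=4000, quorum=2/3):
    """Latest time the relay can start propagating so that a quorum of
    attesters still sees the block before the deadline: the binding
    constraint is the ceil(quorum * N)-th smallest latency."""
    ordered = sorted(relay_to_attester_ms)
    k = math.ceil(quorum * len(ordered))
    return deadline_ms - ordered[k - 1]

def best_relay(relays, deadline_ms=4000):
    """Pick the relay maximizing (bid per ms) x (longest safe delay).
    `relays` maps name -> (bid_per_ms, relay-to-attester latencies)."""
    def utility(name):
        bid_per_ms, latencies = relays[name]
        return bid_per_ms * latest_release_ms(latencies, deadline_ms)
    return max(relays, key=utility)

# Hypothetical example: the subsidized relay (alpha + epsilon per ms)
# wins even though its quorum latency is worse, because each
# millisecond of waiting pays more there.
relays = {
    "eu-us": (1.0, [30, 40, 50, 200, 250, 260]),
    "africa": (1.1, [80, 90, 100, 230, 260, 280]),
}
print(best_relay(relays))  # -> "africa" (1.1 * 3770 > 1.0 * 3800)
```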
The distance between the validators and the EU/US relays increases, while the distance to the African and South American relays decreases. And we found that more validators try to move to Africa or South America. So in this world the network might become more decentralized. Whether the validators will keep changing locations in the end, or whether no validator will change their location at all, we are not sure; these are still open questions. Back to you.

Yeah. So with these initial simulations, the ultimate question we are asking is: we can simulate something close to the current state and then see that, over many rounds, probably all the validators will have an incentive to co-locate with one of the two relays, and the network will centralize. So we want to understand what kind of protocol or system we should design so that we maintain some kind of decentralization. We tried introducing relays in regions which don't have any relays today, which have less validator density, and incentivizing validators to co-locate with them. This could be done by, say, subsidizing those relays, and then we see whether we can incentivize validators to move there. But we also don't want everyone co-locating with the Buenos Aires relay in the end. Overall, we want to understand what end state we actually want to reach.

For this we really need to define what geo-decentralization is. How can we say a system is geo-decentralized? Is it the number of validators that every country has, or is it the total distance between the validators? We still need a good decentralization metric, and we would be happy to discuss that today in the breakouts (both candidates are sketched below, after this exchange).

Also, with a simulation tool like this, we were interested in what questions we can answer. One question that came from the side is: should we fund relays, so that we have geographically distributed relays and preserve the geo-decentralization of Ethereum, for example? And of course, any input on the assumptions that we made for the moving and timing strategies would be valuable, because we have pretty bold assumptions right now, like only the proposer can move before their slot, and there is no cost associated with moving. So any input on how we can make the simulator more realistic would be appreciated. And that's going to be it. Thank you. Any questions?

In your simulation, at what time do validators release the payload? Is it similar to reality or very different?

We can look here. I don't think it's fair to compare these to real numbers, but what we see here, for example, in the proposal times is that they go down. A higher value means you can delay your proposal further; when it starts to go down, that's because validators are co-locating with, say, the South African relay, which is further away from the other attesters, so they can wait less. But because that relay provides more value, it's okay for them to wait, say, 50 milliseconds less, because per millisecond they get more rewards.
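To make that tradeoff concrete with made-up numbers: suppose the EU/US relays pay $\alpha = 1.0$ per millisecond and the subsidized relay pays $\alpha + \varepsilon = 1.1$, and co-locating with the subsidized relay forces a release 50 ms earlier (3450 ms instead of 3500 ms). Then

$$U_{\text{EU/US}} = \alpha t = 1.0 \times 3500 = 3500, \qquad U_{\text{subsidized}} = (\alpha + \varepsilon)(t - 50) = 1.1 \times 3450 = 3795,$$

so moving toward the subsidized relay pays off despite the shorter wait.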
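Stepping back to the open metric question: a minimal sketch of the two candidates mentioned, formalizing "total distance between validators" as mean pairwise great-circle distance and "validators per country" as a concentration index (the Herfindahl-Hirschman index is an illustrative choice, not the speakers'). `haversine_km` is the helper from the latency sketch above:

```python
import itertools
from collections import Counter

def avg_pairwise_km(positions):
    """Candidate 1: mean great-circle distance over all validator pairs
    (reuses haversine_km from the earlier sketch)."""
    pairs = list(itertools.combinations(positions, 2))
    return sum(haversine_km(a, b) for a, b in pairs) / len(pairs)

def country_hhi(countries):
    """Candidate 2: Herfindahl-Hirschman index over per-country validator
    counts; 1/num_countries (uniform) is best, 1.0 (all in one) is worst."""
    counts = Counter(countries)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())
```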
So in these simulations we have this downward trend for some time, but it's still super unrealistic, because we only have fixed latencies and so on. But if the tool becomes more realistic in the future, we can compare.

Did you get any unexpected results? What was the most unexpected?

We don't have unexpected results right now.

Looking at the results, why are proposal times going down?

Yeah, that one is unexpected, because our proposal times are going down even though validators are trying to optimize. Good question; I don't know, we need to look. Say a validator changes their location, which means they need to propose earlier compared to before, but they changed location so that they get more rewards. Then the next proposers don't wait longer, because the previous validator already changed location and the latencies have already changed.

Actually, my first question was: I wasn't sure about the intention here. Was the intention to model the current reality or to model a proposed change?

The intention is to have a flexible tool with which we can model different proposals and then compare them. But it's based on the current design.

And the second question is: did you attempt a model with assumptions that reflect reality? Because currently, if you put a relay in South Africa, I don't think it would be more profitable.

Oh no, this specific simulation is just a hypothetical one: what if this happened?

And if you run a simulation where you try to model the current reality?

We also have that, but right now we don't have results to show from the current-setup simulations; we first want to find the interesting research questions and the metrics to compare. When the current setup is simulated with this tool, it just produces trivial results where everyone co-locates, and everyone is already co-located with the EU and US relays.

Max, I'm curious, because in this research, and I think in other things you'll see today, there is this modeling of incentives for validators to move around. But as a node operator, how do you think about moving around, if at all? Is it purely economic returns? Would you move around for that?

For us it's pretty much impossible to move, because we are not using any cloud; we are running everything from Prague, from Europe, and we have to have full control of the hardware. No one else can actually touch our hardware, and the team is based in Prague, so for me it doesn't make any sense to go somewhere else. What I will optimize is the network latency: how to build the fastest link and communicate with the rest of the world as fast as possible.

Quickly, John?

Sorry, it's not a question, more of an answer, from the Cosmos ecosystem, where I have also run a measurement validator.
I don't validate myself, but I have heard from a lot of validators, and a lot of them do run co-location or bare metal, on-premises and such; they don't go to the cloud. So for some of them it would be quite difficult to just move, because they have an actual physical machine somewhere.

Thank you.

We don't have a framework like this, a framework to reason about decentralization essentially within the protocol. How would you do it, in one minute?

The problem becomes how many different administrative domains there are. There are a few categories for decentralization: administrative domain, geography, which cloud you are running on, what version of the software you are running and who developed it. In each of these categories you would want at least four, in order to get fault tolerance. What's wrong with this?

I think maybe one question we try to answer is what critical design decisions we should make to shape the trajectory of a network. Can you make Ethereum upgrades today that preserve the network's ability to be decentralized tomorrow? That's the kind of question we would like to answer.

I see. So every time you finish an epoch and reconfigure, you want the network itself to enforce a certain diversity. I'm not even really prescriptive about what mechanism to use to do it. I'm just saying that the concept we're trying to define, I think, is the property of the network of how much it tends to centralize over time.

Just for a concrete example, maybe having multiple concurrent proposers would be more decentralized than a single monolithic proposer.

Yeah, it doesn't always have to be like that, but design decisions like that, yes.

The question is mainly also about stakers' choices: am I staking with someone who is super centralized in the EU anyway, or with someone in, I don't know, Buenos Aires, probably because they offer higher yields? [inaudible]

But it sounds like something you would score with a very complex policy: various categories in which I can prove my identity, identity being tied to running in this location, and then you have all sorts of complex policies to verify it. Why is this a fundamental question for us as a community? Why is it so foundational?

I think this leads into the full presentation.

Exactly, I was going to say that. But I will once again ask John: how would you go about doing it? How would you solve the chicken-and-egg problem?

I was going to say the same thing, which is that, at least to me, it seems reasonable to have a set of metrics, things like geographical distance.
Right: different geographies, meaning different countries, different jurisdictions, which could be a superset of many countries; different physical locations, measuring raw distance, which covers a different set of attacks, like physical attacks. And then it becomes a question of what you can prove and how easy it is to cheat.

By the way, in my days at [inaudible] last year, this is what we argued to regulators. Regulators don't actually care about who has control and who doesn't; they care about what kind of business continuity your service has. We went to them and said: we enforce that there are deployments on at least four clouds, and no cloud has more than a third of the nodes. That was the sort of metric we could enforce. And similarly: how many versions of the software, what cloud, what jurisdiction. We actually committed to regulators that we would not have more than a third of our nodes in any given location, in any cloud, in any administrative domain, because the easiest thing for the consortium members to do would be to delegate everything to a single operator, and we could not allow that; even the operators had to diversify.

Ideally, you actually have separate teams, at least four different teams, developing completely autonomous versions of the software: different languages, different programming paradigms. We couldn't satisfy all of that, but it is the ideal solution. And we as a consortium were able to supervise and monitor this. The question is: how does the network monitor it, and is this something that you can prove? Doing it manually is not a very good way. Thanks.

For you this is a foundational question; to me it's very operational. I don't know, did this change your mind or not? We'll see.

Probably not, but I'm not good at changing people's minds.
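As a closing aside, the "at least four, no more than a third" rule the panelist describes is easy to state as a mechanical check over node metadata. A minimal sketch, with hypothetical field names:

```python
from collections import Counter

def satisfies_diversity(nodes, categories=("cloud", "country", "admin_domain"),
                        min_distinct=4, max_share=1/3):
    """Check a list of node-metadata dicts against the rule of thumb:
    every category has at least `min_distinct` distinct values, and no
    single value holds more than `max_share` of the nodes."""
    for category in categories:
        counts = Counter(node[category] for node in nodes)
        if len(counts) < min_distinct:
            return False
        if max(counts.values()) > max_share * len(nodes):
            return False
    return True
```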