Recording in progress.
Sounds like Siri a little bit.
Anyway, welcome everyone to the June 20th stand up.
Let's see what I have on my list here today.
Okay.
Version 1.9.
A bunch of stuff here in regards to Tuyen’s observations.
So I'll just quickly read it out for the transcript there.
So he's not seeing any sync issues.
Mesh peers are a bit less stable,
but on average the same to version 1.8.
More attestations processed, fewer attestations dropped,
but gossip attestation processing time
is increased as a trade-off.
The block-till-become-head time is increased by 0.2 seconds on average.
CPU time is increased from 250% to 300%.
Event loop lag is increased a bit,
especially from submit pool attestation.
REST API time is increased from less than 200 milliseconds
to about 200 to 800 milliseconds.
Missed attestations seem to be the same.
Any other observations you guys want to add to that?
So I think from the CIP node, even though we
have some I/O lag issue, I think the missed attestation
ratio is the same, which is acceptable to me.
There is a small concern when I review the beta node.
I just redeployed that right before the meeting
because before that I deployed with use worker as true.
I saw that the event loop lag, which to me looks to be a little bit high.
I'm not sure if that's a thing to be concerned about or not.
But for the CIP nodes, it's fine.
Are there any other concerns that anybody else is seeing at this point that would prevent
RC3 from being a potential release? We still have another day and a half of testing required
on it. Yes, I mean, at a high level, it seems like the CIP nodes seem fine.
It seems stable enough, I guess, to me.
It's a very subjective kind of take.
But just looking at top line items of how are the validators performing?
They're performing as well or maybe even a little bit better.
Mesh peers seem stable enough.
And yeah, the only concerning thing
is what we saw on the beta mainnet node,
where the event loop lag was like wildly different
than what we see on the mainnet on our CIP node.
So I don't really know what to make of that.
I mean, we could try rebooting it,
seeing if it was a fluke of some sort.
Or we could be digging in and trying to figure out
exactly what's wrong with it.
But the fact that this is running well in production
already kind of gives me more confidence
to push through it instead of holding and trying
to diagnose more, but I'm curious what other people think about it.
Yes, I have the same idea. I think if we want to improve the I/O lag on the main thread only, it may
take forever, while we have another approach to use the worker thread. So to me, as long as
the validator works well with CIP, I would give a green light for that too.
>> All right. Yeah, that sounds good. We'll keep it running for another 36 hours and
see if anything drastic changes from that and make a call tomorrow about that.
That also brings up a good point, Cayman, in regards to how
Goerli is slowly becoming less and less reliable as a metric for us.
I don't know if people are seeing it as more of a network issue with Goerli,
or if it's something that we can improve in our testnet infrastructure.
I think we're using a lot of like our mainnet with no validator nodes to sort of gauge
how things are actually going rather than depending on the Goerli metrics because it seems to be
consistently different at this point. Yeah, I guess. Yeah, you brought up that
where there's going to be a new testnet that's going to be launched, I guess in September.
Can't remember the name right now, but it's going to be a lot bigger than mainnet.
And I guess I was suggesting that we
start doing more of our testing there if we can,
instead of Goerli, because it seems
like the biggest parameter that affects our performance--
well, there's two.
One is the total number of validators
in the active validator set.
And then how many validators we have connected to the beacon
node is also a factor.
But right now, yeah, even though we have 1,000 validators
connected to our large nodes, looking
at that versus looking at the mainnet node
with no validators connected is like--
often we see very different things.
And so I don't know what we can do
to make our testing more reliable,
if we could run more mainnet nodes.
But I feel like running that with validators attached
is crucial.
Testing with that is crucial.
So I don't know how we can do that with mainnet,
or if we would even want to be doing that with mainnet testing
because of how unstable things sometimes are.
so.
Generally, we have some pretty good data with the CIP nodes, because we run those ourselves and they technically belong to us.
We have 72 validators to get metrics on. However,
if things do perform well, we do need some sort of reiteration of what we're seeing.
There's a potential that we may just try to deploy it on one instance of Lido,
but then again, we're doing a service for Lido, so if
things perform wildly differently from what we see on the CIP nodes, it's hard to say when it makes sense to go there.
Right. Yeah, I would feel pretty uncomfortable doing all the testing on Lido nodes because we're
held to a... Right, as you said, it's a service and we're held to a higher standard.
But maybe we could do something like split out our CIP nodes even further. So we have a 16 and a
whatever 72 minus 16 is, but it might make sense to have a beacon node that's just running
one validator, testing a home-staker kind of setup. And then, you know, around 16 would
be, I don't know, a pretty expensive home-staker setup, but still, you know, kind of
around 16, and then whatever's left is the rest.
The way it's set up right now is our canary nodes have 16 in them. So that's a good indication of
what may be like for our solo staker. And then the other CIP beacon node has 56 in it,
the remaining 56. So that's probably the best we can get that at.
That's currently the way it's set up at the moment. I don't know.
I guess I was wondering, does it make sense to pull one
from our 56, make it 55, and then run a third canary,
like a canary two, that just has a single validator attached
and maybe with a more modest hardware,
not a really big, beefy guy, but just to--
Like, that makes a lot of sense to me,
because that is, realistically, people
are going to run one or two or something small at home.
Okay. Yeah, we can definitely look into that and see what like optimal home staking hardware would
be where we can run that one validator. Yeah, I just think it might make sense,
at least in the meantime, to try to rely a little bit more on our CIP nodes for testing,
at least kind of in the final stages, like, you know, once we have something that's beta-ish level,
and then we can deploy to the CIP nodes
and feel pretty confident that in whatever outcome we see.
Yeah.
All right.
Yeah, that sounds good.
Tuyen had mentioned that I think beta is currently running
the worker thread right now.
Is there any discussion we need
to have in terms of the network thread at the moment?
Yeah, I have one update.
I have a PR, and I guess Cayman has another PR to add more sleep zero.
I think it helps the worker thread a lot.
I think it's usable for now.
Maybe we can enable the flag in unstable.
Also, we have some improvement in the gossip sub, we can make a hotfix and release that
and use that with the unstable, then perhaps that's a candidate for 1.9.1.
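For context, a minimal sketch of the "sleep zero" pattern being referenced, assuming a Node.js worker where a long message-processing loop needs to yield back to the event loop; the names and batch size are illustrative, not Lodestar's actual code:

```ts
// Yield to the event loop between chunks of work so pending I/O callbacks and
// timers on the worker thread are not starved by a long synchronous loop.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function drainQueue(messages: Uint8Array[], handle: (msg: Uint8Array) => void): Promise<void> {
  for (let i = 0; i < messages.length; i++) {
    handle(messages[i]);
    // sleep(0) schedules the continuation as a macrotask, giving sockets and
    // timers a chance to run before the next message is processed.
    if (i % 50 === 49) await sleep(0);
  }
}
```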
>> Yeah, so is 1.9 RC3, is that based off of unstable or is that just?
>> We did not have that improvement.
Okay, so we just cherry picked on top of RC2?
No, I got latest unstable; at the time, the sleep zero was not merged.
Okay.
If we want, we can just get latest unstable, but I’m not sure it is a good time to do that.
Yeah, let's just wait.
Okay, yeah, that sounds like a good plan.
We can look at a patch release for that inclusion afterwards, just because we're halfway into
testing RC3 already.
Or we just wait for 1.10 and do more work to get whatever else we can get in 1.10 in
as well.
Like the libp2p upgrade.
It would be nice to just have that also done.
For the TCP fixes.
Yeah, sounds like there's a lot of upgrades coming to 1.10.
So, I don't know if we want to batch it altogether.
Generally, we find merging a bunch of large upgrades together
can also create testing issues as well.
But I don't know if anybody else has any strong opinions on that,
but we're kind of backlogged right now because of the delay in 1.9.
So speaking of the network thread,
we do have approval to re-approach that maintainer of libuv.
So we're free to use up to $10,000 with him.
So I'll probably get Lion or whoever was speaking to him before,
I believe it was Lion to just compile additional questions and stuff for him
and continue using him hopefully to help figure out
some of our problems on the network thread.
So that's good to go.
Just make sure his invoices get sent to me.
Any additional planning points for today?
I think we should start looking into having
Yamux in Lodestar too.
I can have a look at that.
Maybe we can have the network thread work too.
- Yeah, my ideal 1.10 would have the network
use worker turned on, have the libp2p update,
and use node 20 by default,
or support node 20,
and then, as a nice to have, yamux.
Those three were kind of what I was imagining in 1.10
as the big items.
- Sorry, you cut out for a bit there.
I'm not sure if anybody else had Cayman cut out there,
but it was basically node 20, libP2P upgrade,
and then Yamux.
Were those the three big things?
Did I miss something?
- The network worker thread turned on by default.
- Right, okay.
Sounds like a good plan.
Any other updates for planning?
Okay, we'll do a round of updates.
Just FYI, later this week I'll be at ETH Waterloo, so I'll be a little bit slow to
respond on Thursday and Friday.
If there's anything specific that you would like me to take a look into there, whether that's
trying to find our unicorn or something else, let me know. I think there's going to be quite a few
people there, including Vitalik. So we'll see what happens there. And yeah, that's where I'm
going to be later this week. So just in case you're trying to reach me.
All right, let's start
today with Matt.
Did a small PR, finished up the spell checking
for all of our Docs and Readmes.
I got the cache check done, so I wrote up a doc
about how to look for cache hits and cache misses.
And I did verify that we're only seeing about three percent extra
cache misses on the network thread.
So it's not a huge difference, not enough
to dig deeper to figure out what level it's
actually missing at.
But I did write up all that and put it into a doc there.
So that could probably be another tool in the tool belt.
And then I finally found why I was having a seg fault.
I was looking in the wrong place, is why I couldn't find it.
It was not that it was mixing old keys and new keys,
it's that the keys were moving during garbage collection
once I started to bundle all of the aggregates
and attestations.
So that was the first function I had written
and I never went back apparently
and double checked it after I saw,
I need to refactor that one.
But that will definitely solve the problem.
And I'm also going to end up writing unit tests in BLST
in order to trigger the garbage collector
and make sure that it's actually going to be stable.
So there'll be unit tests for that.
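For illustration only, a rough sketch of the kind of unit test being described: forcing a GC mid-flow to make sure the native layer isn't holding pointers into memory V8 has moved. The helper names are hypothetical stand-ins, not the actual BLST binding calls.

```ts
// Run with --expose-gc so global.gc is available, e.g.:
//   node --expose-gc node_modules/.bin/mocha test/gc.test.ts
import {expect} from "chai";

describe("aggregation is stable across garbage collection", () => {
  it("still verifies after a forced GC", () => {
    // makeKey/aggregateKeys/verifyAggregate are hypothetical stand-ins for the native calls under test.
    const keys = Array.from({length: 64}, (_, i) => makeKey(i));
    const aggregate = aggregateKeys(keys);

    // Force a major GC; if the native code kept raw pointers to objects the GC
    // has since moved, this is where the segfault / corruption would surface.
    (global as {gc?: () => void}).gc?.();

    expect(verifyAggregate(aggregate, keys)).to.equal(true);
  });
});
```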
All right. Thanks, Matt.
Next up, we got Cayman.
Hey, yeah, a few things. I was out yesterday. It was a holiday in the U.S. But what I've
been working on, found a bug in Gossip Sub last week that I patched. And I was testing
out using the new libp2p library with various different patches applied, trying to get something
that's stable or has equal performance.
Still not there yet.
I was testing on feature one last week.
So if anyone's curious, you can dig back through the metrics.
And other than that, I have a few PRs
that are ready to be merged.
But I've been holding off until we have a stable 1.9
because I was kind of just not wanting
to have to deal with cherry picking things.
So there's things that are ready to go,
but possibly I didn't want to hurt performance
or have anything risky, so I was just leaving them open.
But I'll be working towards
getting Lodestar Node 20 ready this week.
That's kind of the top level item I'm trying to get to.
So that'll be my priority.
And then libp2p kind of a secondary.
- Thanks, Cayman.
All right, and Tuyen, you got anything to add?
- Yes, so just a quick summary
of the previous RC3.
Also, I took over the PR from Lion to skip serializing the block
after we fetch it from request/response on gossip, and we have some small improvements in gossipsub.
So the next thing I can work on is the yamux integration.
So yeah, if you have any specific thing on libP2P
just let me know, otherwise I will work with yamux.
- Sounds good.
- It should be pretty straightforward.
You should just be able to drop in yamux
where we're using Mplex.
I don't think there's anything special
other than you have to maybe worry
about what version is compatible
with whatever version you decide to go roll with.
Yeah, but from what I remember, I think the code change is small
but the performance is not as good as with mplex.
Right.
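For reference, a minimal sketch of what the swap could look like with js-libp2p, assuming the @chainsafe/libp2p-yamux and @libp2p/mplex packages at versions compatible with the libp2p release in use; this is illustrative, not Lodestar's actual network configuration.

```ts
import {createLibp2p} from "libp2p";
import {tcp} from "@libp2p/tcp";
import {noise} from "@chainsafe/libp2p-noise";
import {mplex} from "@libp2p/mplex";
import {yamux} from "@chainsafe/libp2p-yamux";

const node = await createLibp2p({
  transports: [tcp()],
  connectionEncryption: [noise()],
  // Muxers are negotiated in order of preference: try yamux first,
  // keep mplex as a fallback for peers that don't support it.
  streamMuxers: [yamux(), mplex()],
});
```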
All right, thanks, Tuyen.
Next up we have Gajinder.
- Hey guys.
So previous week on DevNet 6,
there were tons of issues that were discovered
not in Lodestar, but with other clients.
So like, so basically, you know,
DevNet 6 right now is sort of dead.
Nodes are not able to sync to head;
the Lighthouse-based nodes are up,
but Lighthouse is not properly serving the blobs.
So basically what it's doing is not providing matching blobs
with the commitments in the block.
So as of now, DevNet 6 is dead
and Geth also had issues with block proposals.
And the main reason the chain got forked was that some EL client basically included blob transactions with the blobs still attached
in the block. The EL clients get blob transactions with blobs, commitments,
and proofs, but to include them in the block, in the execution payload transactions,
they have to serialize them in a different format, which does not have the blobs, only the
versioned hashes. So some client produced those kinds of payloads and that sort of broke the
testnet again. And since there was a very skewed allocation
of the validators on this testnet, it seems like it's not recoverable. So there would
be another devnet 7, which will be 4844-focused. Earlier the plan was that devnet 7 would
have other features that are planned to be included in Deneb.
So this is the update on DevNet 6. And apart from basically tracking DevNet 6, I did
a BLST review for the PR that Matt refactored, and left some further comments on it.
So we'll take a look whenever Matt pushes updates addressing the comments.
And cleaned up my own Free the Blobs PR and rebased it with the latest changes,
with some extractions that I have already done into main unstable. So very few extractions
are needed to just merge the Free the Blobs PR into main unstable. So
most of the work is there, but yes, a few critical pieces are missing, which I'll try to push this week.
And did a PR fixing the proposal flow for DVT validators,
where they require that the local execution engine block
not be produced or used against the block
that all the distributed validators will get from the relay.
So that sort of made sense.
And thanks Nico for feedback and discussion on that.
So this week I'm planning to work on other Deneb-focused PRs,
like including the beacon block root
in the execution payload,
so that proofs against the beacon state can be done
in contracts in the EVM, and also the EIP
for making voluntary exits sort of non-expirable,
so they're perpetually valid.
And another thing that I'm looking at is the deposit snapshot
for WSS sync, weak subjectivity sync.
So what really happens right now is that
when you do a weak subjectivity sync,
then you tell the execution client
to basically sync.
So they might do snap sync or they might do beacon sync
and backfill all the data,
but they have to also backfill all the logs
just so that they can provide that deposit tree
to the beacon client.
And this deposit snapshot basically solves that problem
that you get a deposit snapshot tree from the other CL
so that the EL will now not have to sync
all the history since the deposit contract was deployed.
So this is, I think, an important change
and Checkpointz, the software that provides WSS checkpoints,
has now started relying on
the deposit snapshot.
So I'll be working on this as well.
And I'll also pick up Lion’s
PR regarding metric proposals.
So yeah, this is currently on my page.
- Thank you Gajinder.
All right, go ahead, Nico.
Yeah.
Hey, so I guess last time I mentioned this problem with the beacon node not shutting
down.
So I debugged that and found that it's related to libp2p, and Cayman already found
the issue, I guess.
I think it was an update in a sub-library, right, Cayman?
- Yes, I think so.
- So yeah, once we get that in, I guess,
I can further debug it.
But this was, I think, the last thing now
that showed up that we are dialing peers basically,
even though the beacon node is already shutting down.
Besides that, there were some other minor things
I fixed on the API level
that I noticed, and one thing was also reported, I think from DevNet 6.
Yeah, so I fixed that.
Also added some end-to-end tests for that.
I think actually we could add a bit more end-to-end tests
because I noticed they are pretty cheap to run actually
and would be pretty good sanity checks, or overall sanity
tests that the beacon state is correct, because I guess if the API returns a good response,
we can usually assume that the state in the beacon node is also good.
So yeah, I'm looking to see if I can maybe add a bit more tests for other APIs as well.
One other thing was I added this, what we discussed earlier about the checkpoint sync,
so this can now be forced.
I guess it's usually only required during development.
And I noticed some users were asking about this before.
If they're offline for a while, for a few days,
the sync can still take quite some time.
So this can now be forced to pull from a checkpoint.
Yeah, besides that, I'm just looking
into a few interop issues.
I still need to run--
so my goal basically is to run all other VCs
against our beacon node.
So far it helped at least to find some issues in Lodestar
where we are not compliant.
Yeah, but further want to look into that.
I also found one issue,
or I'm not sure if it's an issue
with the doppelganger protection.
So basically it's, I noticed it produced a false positive,
but I'm not sure if that can be prevented.
I still need to further debug this,
but I think it happened because we did an attestation late in the previous epoch,
before I shut down the validator basically,
and it was probably included late in the next epoch.
So that's my assumption right now.
Yeah, but is it expected that it can produce a false positive,
the doppelganger protection?
No, it should not.
Yeah, so I can definitely further debug this, because I mean, I reviewed the code and it looks
correct, because we start checking the next epoch. So it should not detect anything in
that case. But yeah, it might be either on the validator side, or maybe the beacon node responded
with the wrong liveness basically for the epoch.
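To illustrate the suspected failure mode (a simplified sketch, not Lodestar's actual doppelganger code): if the liveness data counts an attestation toward the epoch it was included in rather than the epoch it was produced for, the validator's own late attestation from before shutdown can look like another instance signing after startup.

```ts
interface LivenessData {
  epoch: number;   // epoch the beacon node reports activity for
  isLive: boolean; // true if an attestation from this validator was seen for that epoch
}

// The check only looks at epochs at or after the epoch the validator client started in.
function detectDoppelganger(startEpoch: number, liveness: LivenessData[]): boolean {
  return liveness.some((d) => d.epoch >= startEpoch && d.isLive);
}

// False-positive scenario: attestation produced in epoch N-1, validator shut down,
// restarted in epoch N; the attestation is only included on-chain in epoch N and is
// attributed to N, so the check above fires even though no second instance exists.
```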
Yeah, and then I want to start looking
into the state regen topic this week,
or latest beginning of next week.
So yeah, that's the goal.
- Cool.
- If you are able to do proper end-to-end test for the API,
I will marry you.
Yeah, I still need to check actually how easy it is to properly test them.
But yeah, I think our current API tests that we have are probably the thing I hate the most.
But I found them to be kind of cheap.
So when I edit them — they are so stupid.
Like every time I do a change and I need to update the test, I just delete the file.
Yeah.
Yeah, yeah, I will definitely give it another take and see maybe there can be some improvement.
Reminds me of pilots that pull out fuse breakers when they get alarms all the time in their
cockpit. And then they just pull out the fuse and ignore it. It just gets annoying.
All right. >> Some of the tests are so retarded. There's
a function, an API implementation that calls forkchoice getBlock. And then the test passes a stub
that has a function that's called getBlock. And then the test checks that you call that function.
Of course you will. That's the fucking code. But there is no guarantee that the whole thing
actually produces anything of value. It's really retarded. Anyway, should I go?
>> Yeah, you should go. >>
Cool. Okay. So I've been involved with a bunch of different spec initiatives,
and there is one that's really cool and the design space is getting pretty big. So I want to hijack
this opportunity, because we have home stakers and we are operators, and see what you guys
think about the different trade-offs. So, can I share my screen?
Can you see? Yep, you're good. Okay, so there is this proposal to increase
the max effective balance.
The idea here is, so for example,
the ChainSafe entity, the Lodestar entity
as a Lido operator, is one entity,
but we have 7,000 keys.
Every single index adds overhead to the state,
to the P2P network that provides no security value
in terms of like decentralization of voting power.
Like in theory, if we had only one validator
with the same weight as 7,000,
it would be equivalent in terms of like security and network,
but it will be much cheaper
because the state will be so much smaller;
we will not have like so many attesters and everything.
So that's the idea.
Why don't we just raise the max effective balance
so that whales and operators can have these chunky validators
and we reduce the size.
And this is actually critical to unlock
like single slot finality, PBS, and Whisk.
'Cause like with the state size that we have,
none of those is viable.
And basically what this guy did is,
can I just figure out a way to do the minimal possible diff
to do that?
And it's actually pretty nice.
'Cause like the diff is 50 lines.
It's not difficult at all to raise this,
'cause everything works:
you will get selected as an attester always,
so the only thing that matters is
how much effective balance you have in the fork choice.
So no changes to the fork choice; this works.
The chance that you're selected as proposer
is proportional to your effective balance.
so the chances of you being a proposer aren't gonna change;
same for sync committee.
So all is good.
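As a rough illustration of why this works without fork-choice changes (loosely modeled on the consensus-spec proposer sampling, with the constant and names adjusted for the proposed higher ceiling; not the exact algorithm): a candidate is accepted with probability proportional to its effective balance, so one 2048 ETH validator gets the same aggregate proposer probability as 64 validators of 32 ETH.

```ts
const GWEI_PER_ETH = 10n ** 9n;
// Assumed raised ceiling for this sketch; the actual value is part of the proposal.
const MAX_EFFECTIVE_BALANCE = 2048n * GWEI_PER_ETH;

function sampleProposer(effectiveBalances: bigint[], randomByte: () => number): number {
  // Rejection sampling: pick a uniform candidate, accept it with probability
  // effectiveBalance / MAX_EFFECTIVE_BALANCE (approximated with one random byte).
  for (;;) {
    const candidate = Math.floor(Math.random() * effectiveBalances.length);
    if (effectiveBalances[candidate] * 255n >= MAX_EFFECTIVE_BALANCE * BigInt(randomByte())) {
      return candidate;
    }
  }
}
```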
Now the questions come from the fact that there are a few mechanisms at play here.
We have a churn to enter the stake,
like enter the network and to exit.
We don't want more than a specific number of ETH
coming every epoch because otherwise the chain
could be overwhelmed and taken over by some malicious entity.
And on the exit side, if too much ETH can exit at once,
they can slash themselves or cause havoc
and then escape penalties because they exit too quickly.
So you want to have this limit on how much they can exit.
So these are the changes here.
Now the question is, how do we handle like top ups?
How do we handle exits?
How do we handle partial withdrawals?
How do we handle full withdrawals?
For example, now if we have validators
that would go up to 2000 ETH, one key,
what happens with partial withdrawals?
Like partial withdrawals rely on the assumption
that you can have at most 32
and everything above 32 is useless.
But if we allow to raise the max effective balance,
the cool thing is that we could allow you
to automatically compound.
So you don't partially withdraw anything.
And whenever you have 33,
that one extra ETH will start to accrue rewards.
But everyone now expects
that partial withdrawals happen.
We cannot just suddenly change and turn everyone
into auto-compounding.
But the solution here is,
okay, if you want to compound,
now you have to opt in via a new withdrawal credential.
So you have to exit and re-enter to opt in into compounding,
which is kind of annoying.
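A small sketch of the sweep trade-off just described (illustrative only; the field names and the idea of a per-validator ceiling are assumptions taken from the proposal, not current spec code): today everything above 32 ETH is swept out automatically, while an opted-in compounding validator would keep the excess until it reaches its own, higher ceiling.

```ts
const GWEI_PER_ETH = 10n ** 9n;

interface ValidatorSketch {
  ceiling: bigint; // effective-balance ceiling: 32 ETH today, higher if opted into compounding
  balance: bigint; // current balance in gwei
}

// Amount the automatic withdrawal sweep would skim off this validator.
function sweepAmount(v: ValidatorSketch): bigint {
  return v.balance > v.ceiling ? v.balance - v.ceiling : 0n;
}

// Example: both validators sit at 33 ETH.
const legacy = {ceiling: 32n * GWEI_PER_ETH, balance: 33n * GWEI_PER_ETH};
const compounding = {ceiling: 64n * GWEI_PER_ETH, balance: 33n * GWEI_PER_ETH};
sweepAmount(legacy);      // 1 ETH is swept out
sweepAmount(compounding); // 0 — the extra ETH stays in and keeps accruing rewards
```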
The other point is when you do a deposit,
let me see somewhere here.
So when you do the deposit,
it could happen that your public key is known
to the validator set and then you do a top up.
What we do now is we just increase your balance directly
without applying any rate limiting.
Because you can at most have 32 ETH
and you will not be active until you have 32 ETH.
So like when people top up,
they are actually getting around the security measure
of the limiting.
But because you only have like this 16 ETH per validator,
in case they get leaked, which is very rare,
we don't consider that to be an issue.
But if now we raise the limit from 32 to 2000,
and people just do a top up, it will be like they do,
I don't know, they deposit 10 validators
completely getting around the queue, which is not good.
So the question is, what the hell do we do with top ups?
Like the basic, initial proposal —
that's what they presented last Thursday —
is that the deposit is burned.
So if you top up, everything above your
max effective balance gets destroyed, which is not good.
So how do we deal with that?
We have different ideas.
So let's go step by step.
One option would be you could submit a new deposit
and inside of the withdrawal credentials,
the first bytes that now are zero could be repurposed
to encode what's your new desired max effective balance.
Does that make sense?
So imagine you have one validator
with 32 maximum effective balance like the default.
And you say, no, I want to opt out of partial withdrawals
and I want to start compounding.
Then you send a deposit — you will have to pay one ETH,
but that's how the deposit contract works.
And in the withdrawal credential field,
you will specify the value of your new max effective balance,
which could be, I don't know, 64.
And then the chain will stop issuing partial withdrawals
and it will start to compound up to that level.
Does that make sense?
Like, is that a good idea or not?
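Purely as an illustration of the encoding idea (no such credential format exists; the offsets and widths below are invented): the currently zero bytes of a 0x01 withdrawal credential carried by a small top-up deposit could be repurposed to signal the new desired ceiling.

```ts
const CREDENTIAL_LENGTH = 32;

// Hypothetical layout: byte 0 keeps the 0x01 prefix, bytes 2-9 carry the desired
// ceiling in ETH as a big-endian uint64, and bytes 12-31 stay the execution-layer
// withdrawal address, as they do today.
function encodeCredentialWithCeiling(executionAddress: Uint8Array, ceilingEth: bigint): Uint8Array {
  const cred = new Uint8Array(CREDENTIAL_LENGTH);
  cred[0] = 0x01;
  new DataView(cred.buffer).setBigUint64(2, ceilingEth, false);
  cred.set(executionAddress, 12);
  return cred;
}

// e.g. a 1 ETH top-up deposit whose credential says "raise my ceiling to 64 ETH"
const credential = encodeCredentialWithCeiling(new Uint8Array(20), 64n);
```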
- I mean, there have been arguments
against the compounding effect,
but it seems like, you know, for the whales it does not really matter because, for example,
if you have too many keys, you can always spin up a new validator. In fact, it's only small
stakers who will basically have to wait until they get enough ETH for another validator. So basically,
there is a delayed compounding effect for them.
So, this is mainly to equalize the operators with the smaller stakers.
Correct. So, I think this is very good, in my opinion, in that it makes
the playing field equal. Otherwise, right now, whales will get richer faster in terms of percentage.
Say I have X validators: when will I be at 2X? Whales will be at 2X earlier
than the small stakers, because small stakers will have to wait longer to put up a new validator.
So I think this is a very good proposal.
So I guess the question is, I'm asking you personally, would you opt in to this? And
Do you think it's acceptable for you to stop getting partial withdrawals?
Yeah, I'm not doing anything with my partial withdrawals now, so I'm happy to let it compound.
What about the rest of the call?
Would you get partial withdrawals once you hit your own individual max effective balance?
Yes.
I see. At that point your compounding stops and now you're just—
Yeah, correct.
Okay.
You can essentially, I think, send another deposit to raise your max effective balance, right?
Yeah.
I see. Yeah, I would definitely be opting into this, because I would rather be
compounding than having the ETH just end up in an account which I would like to be
compounding but can't, because it's not 32 ETH. Right, and I guess you have to put it into an LSD
if you want to compound before you can spin up another validator, which is also not good, right?
So it gives more liquidity to the pools, or more centralization.
Basically, those are not, I mean, base-protocol solutions, right? So it's like—
So this whole idea of using a deposit to change your max effective balance, that's a way
to be able to turn this feature on for your solo validators without
having to withdraw and then redeposit.
Correct.
Because with the month and a half queue, if you have to withdraw and redeposit, it's going
to stop people from wanting to actually withdraw.
Yeah.
So, in order to do that — and I guess this is then the con, the cost of doing this for
core devs, fellow core devs — we will have to now queue deposits, because now deposits,
top-ups, will have to be part of the churn, and then we need a new data structure in the
state. Does anyone care? Cool.
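A minimal sketch of the machinery being described (illustrative; the churn constant and the structure itself are assumptions, not a spec change): top-ups would land in a queue in the state and only be credited as per-epoch ETH churn allows, instead of immediately.

```ts
const GWEI_PER_ETH = 10n ** 9n;
// Assumed per-epoch churn for the sketch (roughly eight 32 ETH validators' worth).
const CHURN_PER_EPOCH = 8n * 32n * GWEI_PER_ETH;

interface PendingDeposit {
  validatorIndex: number;
  amount: bigint; // gwei
}

// Called once per epoch: apply queued deposits until the epoch's churn is spent.
function processPendingDeposits(queue: PendingDeposit[], balances: bigint[]): PendingDeposit[] {
  let churnLeft = CHURN_PER_EPOCH;
  while (queue.length > 0 && queue[0].amount <= churnLeft) {
    const dep = queue.shift() as PendingDeposit;
    balances[dep.validatorIndex] += dep.amount;
    churnLeft -= dep.amount;
  }
  return queue; // anything that didn't fit waits for a later epoch
}
```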
No because well, I'm kind of curious, are there any other related efforts to shrink
the validator set? So this is like this can allow the validator set to be shrunk if money
is migrated into like a smaller number of validator slots in the state, but then like,
how do we like defragment the validator set?
I think there could be a global minimum of the ETH that you need for
deposits that is totally based upon the number of validators, because you wouldn't want the validator
set to drop too low either. So even if it's being run by the same people, people might
actually run it on different hosts and have some sort of, you know, contingency mechanism, so
not all the validators might go down at the same time. So there could be a global minimum
that is required, and above that, you know, you can set whatever max you want. So for this initiative,
who has feedback?
I think it's good, Gajinder. Yeah, can you mute, Gajinder? Oh, we're— no, we're good.
Oh, can I interrupt? It's that I can hear myself through you. Let me try the mute.
You know.
Brutal.
Just raise your hand or something.
Let me know when you want to unmute if you can't do it yourself, Gajinder.
So actually, for reducing the number of validator entries in the state, there is no proposal
for that.
And I'm not sure that you have the hand up.
What this proposal would do is, in the case that there is no pressure to increase
the validator set, and there are some like,
stale records, those can be reused.
So, I guess the most important thing is that
the active number decreases, that's the most important.
Then, reducing the actual total,
I think that'd be good, but it's not that critical.
- Okay, cool.
That answers my question.
I knew about this, I didn't know if there was anything
to reduce the total validator set.
I guess it's not as important.
And if there's a mechanism that would drive people
to maybe reduce the number of validators
that they are running or consolidate their validators then.
- I mean, I have-- - 'Cause it seems like,
without anything, without any mechanism
to get people to consolidate, it's like,
people just wanna join, and there's,
the money doesn't really wanna leave,
it's the money that just wants to keep on--
- Correct, and it's just gonna keep growing,
there's gonna be more and more ETH locked up,
and there's not gonna be anything in circulation.
It's all gonna be supporting validators.
- I had a proposal where the,
I don't know the formula, but the idea is,
the higher the number of total indexes in the network,
active indexes, the less reward the network will get.
So basically penalize the entire network
if the network doesn't consolidate.
But I'm not sure if that's too crazy or rough, I don't know.
Especially because you have to choose numbers.
You have to pick a specific formula.
So you have to choose what level of consolidation
you believe is okay.
And at what level of consolidation
you get like 0% penalty or like 10% penalty, 25% penalty, whatever.
So it's a bit iffy.
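Since no formula was settled on, here is only a made-up shape to make the idea concrete: scale the base reward down as the number of active indices exceeds some target, so the whole network earns proportionally less until it consolidates. Every constant below is invented for illustration.

```ts
// Hypothetical: 1.0 means no penalty; the factor floors at 0.75 (at most a 25% cut).
function consolidationPenaltyFactor(activeIndices: number, targetIndices: number): number {
  if (activeIndices <= targetIndices) return 1; // no penalty at or below the target
  const excess = (activeIndices - targetIndices) / targetIndices;
  return Math.max(0.75, 1 - 0.25 * excess);
}

// e.g. 900k active indices against a 600k target -> every reward scaled by 0.875
consolidationPenaltyFactor(900_000, 600_000);
```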
That would be penalizing smaller stakers more percentage wise than bigger stakers.
No, because it will reduce the base reward.
That's how everyone's rewards are factored.
So the whole network will just make less,
proportionally.
And I guess the idea with that is kind of
to place an incentive on everyone,
maybe socially or whatever,
just to pressure everyone else
that has the capacity to consolidate to consolidate.
So that's why it's important to find ways
that are not too aggressive to consolidate.
So this is one, I also have this proposal
to do actually consolidation
without having to exit at all.
So what this would do is you would sign an operation
and transfer the entire balance
from one validator to the other.
But in order to keep the security,
the target validator will be liable
for any slashings of the source validator.
So, and that's, I mean,
it's very difficult to track it in another way.
Basically, if the source validator has 32 ETH
and you're consolidating to a validator that has 2000 ETH,
now that 2000 ETH validator is liable for the slashing of the original 32,
which is not ideal, but I don't know how to make it simpler.
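A purely hypothetical sketch of the operation being described, with invented field names: both parties sign, the source's balance moves to the target, and the target is recorded as liable for any future slashing of the source. Signature verification and churn accounting are omitted.

```ts
interface Consolidation {
  sourceIndex: number;
  targetIndex: number;
  sourceSignature: Uint8Array; // source authorizes giving up its funds (effectively a withdrawal)
  targetSignature: Uint8Array; // target accepts liability for the source's slashings
}

function applyConsolidation(
  op: Consolidation,
  balances: bigint[],
  liableFor: Map<number, number[]>
): void {
  // Move the whole balance across (a real design would also meter this through the churn).
  balances[op.targetIndex] += balances[op.sourceIndex];
  balances[op.sourceIndex] = 0n;

  // A slashing later proven against the source is charged to the target.
  const sources = liableFor.get(op.targetIndex) ?? [];
  sources.push(op.sourceIndex);
  liableFor.set(op.targetIndex, sources);
}
```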
There is a bunch of questions on how to actually
authenticate this thing, because there are two parties.
The source validator has to accept this,
'cause it's basically a withdrawal.
So you're transferring the ownership of the funds
from one set of withdrawal credentials to another.
And then the target validator has to accept to be liable.
And now that we have 0x1 credentials
and the withdrawal credentials are smart contracts,
it makes everything a bit difficult.
But I wonder if this was available, would you use this?
- Yeah, I would want to consolidate my validators
and set a high max effective balance
so that, I mean,
it would be using less internet.
I feel like it'd probably be more stable.
I'd probably get a little bit better performance.
Yeah.
I would use this, 'cause then otherwise I'd have to
fully withdraw, you know, almost all the validators
just to re-deposit the funds back in
some of the existing validators.
Now, is it worth doing this?
I don't know.
But I would use it if it was there.
- Would this also consolidate the security risk
of everything just being like one now,
'cause like some people will have different,
I guess, even like mnemonics to generate
different validator keys and stuff like that too.
- Well, I guess like key management is kind of out of scope,
but I don't see a problem.
Like if you are able to secure all the keys,
now you only have to secure one key.
So your set of responsibilities strictly decreases.
And you don't have to consolidate to one per se. If you had 10, you could consolidate down
to two, if you wanted some redundancy or wanted to split where the withdrawals would eventually
be going.
Yeah.
So I guess this ties a bit into the short-term mentality of where we are now
versus where we're going to be, and assuming long-term, whether this thing is useful, because this
whole consolidation feature is to remedy the current situation.
And then the other questions about onboarding are, okay, let's assume that everyone
is well past the time that we do this work; what's the most useful situation to support
different cases?
One of the proposals was you cannot change your max effective balance ceiling.
You set it once and that's it.
Like, for example, if we do that, then we don't need the list for the top ups because
you cannot top up.
So it's a trade off.
But I think, like, do you guys consider that having the capacity long-term to raise your
ceiling and be able to top up your balance on the run is an important feature to have in
the long-term?
Raise or shrink?
No, like, raise or just start with a higher one.
And you put in 32 ETH, but you have the ceiling up to 100.
So you can keep topping up if you want to.
Yes, raising it, basically raising it.
- You can never shrink, is that right?
- You can never shrink?
- Can you shrink your max effective balance?
Like let's say you want to like,
it's basically doing a partial withdrawal, right?
Like let's say you have it raised to 64.
I would say that the thing is, if you shrink it, you are effectively exiting capital in
a significant way. And then that has to go through the churn. So we would need to implement
another queue on the out.
And Gajinder has the hand up again.
Yep. So, can you hear me? Yes. Okay. So shrinking would be like, you know, exiting a validator,
and it would open it up to long-range attacks
because now there is no churn queue.
- Right.
- Right, and I guess using a deposit of one ETH
in order to do a partial withdrawal
is also very ham-fisted,
I think it's not the right mechanism to do that, but.
- Right.
Yeah, and I guess this is the other prediction.
So right now we have,
I think it's like almost 20% of all ETH supply
is staked on the beacon chain.
Which sounds like a fucking crazy number.
Could we not go— well, I guess yes.
Like we talked about this in the call.
And at genesis, if you had said, yeah, we will have 20% staked,
people would be like, are you fucking crazy?
So could we go to 40%?
- Yeah.
- Maybe.
This isn't the conversation of justifying this feature
because, I mean, we can probably grow, but
we cannot grow by yet another X.
That's for sure.
That's guaranteed.
Like, surely we can do a 3X or 4X from here.
So it's definitely, now we are at the point
where doing, in my opinion, features like these
that only affect the current set, but not the future one,
is still beneficial just for the massive scale
that we have now?
- Like there's gonna be supply and demand issues.
- Go ahead, Matthew.
- The less ETH in circulation is gonna cause it
to be more valuable for usage.
So you're going to have supply and demand issues like how monetary policy works today.
Like the supply, like the velocity of funds is what sets the value relative to the demand.
So like if there's going to be less ETH in circulation because it's all tied up in staking,
it'll be more valuable.
So like there's going to be a reinforcing cycle of it will go up.
Like it may affect the value if it's locked up.
Well, actually, Justin Drake thinks the opposite.
He says that the fact that there is so much of the ETH supply staked means
that we are overpaying for security and that we should reduce rewards.
Because thinking in terms of percentage, there are other competing purposes for ETH, for example,
like Dai or other collaterals which have a significant higher order effect in terms of
the ETH economy.
So say if only 5% of total ETH is in MakerDAO but 40% is in the beacon chain, that's not
super good.
It can also be argued that if people are locking up their ETH, that means they have less utility
from their ETH. So this basically goes against how valuable the ecosystem is to people.
So there has to be a balancing act.
Well, and there's more risk locking your money up in DAI than there is staking, or it's just
maybe a different kind of a risk.
Yeah, another thing that I wanted to mention was, does this proposal also include
that, you know, the validator exit time will also be increased
by a factor of how much you have, the ratio of whatever your max effective balance is
to 32?
Yes, though not the max effective balance — the current effective balance.
So when you compute the exit epoch, now you don't count validator units, you count ETH.
So if you have a validator with a lot of ETH, it would probably just take the
whole churn of that epoch.
Or it could be that your one validator needs to use the churn of three to four epochs to
exit.
And the same happens on the inbound.
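To make that concrete, a rough sketch of an exit queue metered in ETH instead of validator count (illustrative only; the per-epoch churn constant is an assumption): a large validator can spill over several epochs of churn on its way out.

```ts
const GWEI_PER_ETH = 10n ** 9n;
const CHURN_PER_EPOCH = 8n * 32n * GWEI_PER_ETH; // assumed: ~eight 32 ETH validators per epoch

let exitQueueEpoch = 0;    // next epoch that still has churn available
let churnUsedInEpoch = 0n; // gwei already consumed in that epoch

function computeExitEpoch(effectiveBalance: bigint): number {
  let remaining = effectiveBalance;
  // Spill into following epochs until the remaining balance fits in the epoch's churn.
  while (churnUsedInEpoch + remaining > CHURN_PER_EPOCH) {
    remaining -= CHURN_PER_EPOCH - churnUsedInEpoch;
    exitQueueEpoch += 1;
    churnUsedInEpoch = 0n;
  }
  churnUsedInEpoch += remaining;
  return exitQueueEpoch;
}

// A single 2048 ETH validator exiting alone consumes epochs 0 through 7 of churn.
computeExitEpoch(2048n * GWEI_PER_ETH); // -> 7
```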
I don't know that this -- I don't know that the -- what is it called?
Moving -- consolidating?
I don't think the consolidation thing is necessary.
I think it's kind of, it's nice to have, I guess, now,
but I don't know.
I guess it's an intermediate,
a short-to-medium-term fix
for the problem that exists now.
- Yeah.
- One thing that I feel like we would also wanna have
is being able to withdraw, not just the—
- Yeah, so like execution-triggered partial withdrawals.
- Execution-triggered partial withdrawals?
- Maybe. I don't think there's anyone looking into it.
So basically withdrawals below the max effective balance.
- Yeah, yeah.
But like, so like if you set your,
now if you set your, if you have one validator
and you set your max effective balance to 64 ETH
and you're up to like, I don't know, 55 or 60 ETH,
but then you want to do a withdrawal,
it's like you need to take out some money to,
I don't know, pay a bill or something.
It's like, I just want to pull out two ETH,
but keep on staking.
Well, you can't do that. You have to pull out all 55 ETH and then now, you know, stop
staking and then redeposit that.
And wait for your turn again.
Wait for your turn again. So, yeah, I feel like just being able to increase your max
effective balance and forego the partial withdrawals kind of opens the door for needing an additional
type of withdrawal.
Yeah, basically, having multiple validators with the same pubkey is how I will sort of
characterize this entire thing, right?
I mean, of course, data structures will be a bit different.
But what you're saying is that, you know, you can spin up multiple validators with the
same pubkey.
And you can sort of exit few, you can add few, whatever you want to do.
And your properties remain the same.
Right, but I mean, I guess it's even more granular than that because it's like, once you get above 32, then it's not just like 32 and 64, it's 32, 32.5.
I mean, maybe there is a lower bound,
maybe it's one ETH that you have to withdraw.
So you avoid, what is it?
Dust transactions, but you should be able
to withdraw just one ETH at a time.
- Okay, yeah, thank you all for the comments.
I think that's all I wanted to share today.
Just in case any--
- Can I add, just one thing to add.
- Yeah.
- But a withdrawal like that can't be instant.
I mean, because it drops your max effective balance.
So it has to sort of follow the churn queue kind of concept
so that long range attacks don't happen
because the security is determined by max effective balance
of a particular validator.
Yeah, the execution-triggered arbitrary withdrawals — the mechanism to communicate
from execution to consensus is not even final.
So yeah, it will have all these complications.
So it will be a bit more annoying.
So does this mean partial withdrawal would just be updating the max effective balance
then?
So if I have 35 ETH and my max effective balance is 36 and I want to get one ETH, I would just
set this to 34 then.
And then I would get that one ETH because it's above my max balance.
So in that case, you would only have to update the 0x2 credentials basically with this first
bit you mentioned where you specify how much your balance, or max balance, is. I don't know,
like we have to think about it.
I was hoping it wouldn't be that way. I was hoping you would just decrement your balance
but not change your max effective balance.
Like I think what Nico's suggesting is not safe. I'm not sure like, we have to think
about it.
- Or you can have a time period in which your
max effective balance drop will take effect.
Basically you can say, okay, you know —
based on the validator exit conditions,
you can figure out that, okay,
it must take this much time to exit
out of the stake. So this drop of max effective balance will, for example, take effect in 10
days. And that's when it will suddenly be effective and the next automatic partial withdrawal
will deposit it into your account.
Yeah, I don't know. We have to think about it like it is pretty tricky.
All right.
Any additional questions for Lion?
Thanks for summarizing that.
Yeah, I had one more thought.
I think that if you had the ability to do manual partial withdrawals, then you probably
would-- it seems like you wouldn't necessarily need to be updating your max effective balance
more than once. Because it's like, the trade-off is if you have your effective balance higher,
you're not getting partial withdrawals. Well, if you can manually partial-withdraw,
then there's not this need for this extra knob, this extra dial, to be setting your effective
balance multiple times — like, oh, I need to raise it, I need to keep raising it.
It's just, well, it's set high. And then if you want to pull money out, you just pull money out.
- Right. The thing is, if you don't do that, then you force everyone to redeposit fully.
Like you have to lose your index. And also, if you support top-ups of any form,
you have to build all this machinery.
So you might as well allow changing
the max effective balance with it.
- Okay.
- Thanks, Lion, for summarizing this proposal.
That definitely gives us some good thinking points
to take back
and perhaps think about other mechanisms
that we can contribute to.
- Yeah, sure.
- Thank you.
All right, any other remaining points for today?
Do you mind, Lion, if I share your spec
sort of part of the video?
- Yeah, sure.
- Yeah, that way it will be easy for other people
to chime in and understand what's going on.
Thank you.
- Thank you everyone.
Have a good week.
- Good week, take care.
- Bye.
- Thanks.
- Bye.
- All right guys, bye bye.