Bastian Köcher

@bkchr

Joined on Apr 4, 2019

  • 2023
    Commit: c5cf3959639f070fcd9b68dae3f8680d39990366
    Command: rm -rf target && cargo build --all --profile PROFILE
    Rust: rustc 1.74.0 (79e9716c9 2023-11-13)
    Apply the following patch:
    diff --git a/substrate/frame/nomination-pools/src/lib.rs b/substrate/frame/nomination-pools/src/lib.rs
    index c3fd6a98e88..5b1d6af7782 100644
    --- a/substrate/frame/nomination-pools/src/lib.rs
  • The referendum was approved by the community, but it failed on execution. The execution of the referendum happened at block 19907903 and it failed with scheduler (CallUnavailable). This means that the pre-image couldn't be found and thus the call could not be executed. What happened? The pre-image was registered with the following transaction: { "bytes": "0x13030f0040d9dd884d0a000948fe4034185578ec11298db785082bdc7ab98c82e14aac164b4a8d924c0d53" } The referendum itself was then submitted with the following transaction:
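The failure mode above can be sketched in a few lines: the scheduler only stores the hash of the call, and at enactment time it must look the actual call bytes up in a pre-image store. All type and function names below are hypothetical stand-ins, not the real pallet API.

```rust
use std::collections::HashMap;

// Illustrative sketch of why the scheduler returned CallUnavailable: a
// referendum only carries a hash of the call, and the call bytes (the
// pre-image) must be present in a separate store at execution time.
type CallHash = u64; // stand-in for the real 256-bit hash

struct PreimageStore {
    preimages: HashMap<CallHash, Vec<u8>>,
}

#[derive(Debug, PartialEq)]
enum SchedulerError {
    CallUnavailable,
}

impl PreimageStore {
    fn execute_scheduled(&self, call_hash: CallHash) -> Result<Vec<u8>, SchedulerError> {
        // If the pre-image was never registered (or was removed before the
        // enactment block), the lookup fails and the dispatch is aborted.
        self.preimages
            .get(&call_hash)
            .cloned()
            .ok_or(SchedulerError::CallUnavailable)
    }
}

fn main() {
    let store = PreimageStore { preimages: HashMap::new() };
    // No pre-image registered for this hash: execution fails in the same way
    // the referendum's dispatch did.
    assert_eq!(store.execute_scheduled(42), Err(SchedulerError::CallUnavailable));
    println!("missing pre-image -> CallUnavailable");
}
```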
  • Westend has been stalled since trying to build block 16164349. The reason for this is the runtime upgrade that was applied in the parent block. This runtime upgrade contained some storage migrations, and one of these migrations didn't properly upgrade the parachains configuration struct. Oliver fixed the bug in the following PR: https://github.com/paritytech/polkadot/pull/7340 How to fix the issue? The runtime was upgraded in block 16164348 using an extrinsic that executed set_code, and the last finalized block is 16164346. Thus, the block containing the set_code extrinsic was not yet finalized. This should make it very easy to create a fork at 16164347 that does not contain the set_code extrinsic. As Parity controls most of the validators on Westend, the easiest solution should be to revert the unfinalized blocks on one validator, prevent this validator from connecting to any other node and let it build some blocks. In one of these blocks we need to include a transaction sent by the account registered as sudo on chain. This should prevent some validator from including the faulty set_code extrinsic again when we reconnect to the other nodes: it would fail because the nonce changed and thus the extrinsic would be invalid. The following steps should be done: Stop one validator. Run polkadot revert --chain westend -d PATH_TO_THE_BASE_PATH. -d/--base-path is only required when a custom path was passed when running the validator. We just need to ensure that we revert the blocks in the same database as we will use when running the node.
  • Moonsama executed the schedule_code_upgrade call on the relay chain to upgrade their parachain. The problem with this call is that it sets the GoAhead signal, which then causes the parachain to fail as it is not expecting the signal. There is an open issue to solve this by not setting the signal: https://github.com/paritytech/polkadot/issues/7202 I think we should be able to fix this from the parachain side by overwriting the runtime code on the parachain and then issuing a new upgrade. Prepare a new runtime upgrade: Take the same code that was used to generate the runtime that is currently running on the parachain, not the runtime that was passed to schedule_code_upgrade. Patch cumulus to use a fork (again, the same branch/commit as being used for the on-chain runtime). In this fork you need to remove the following code: https://github.com/paritytech/cumulus/blob/21919c89ad99fc4e2e6f12dd6679eb2d41892ac3/pallets/parachain-system/src/lib.rs#L397-L419 Build the runtime that includes this patch (let's call it PATCHED_RUNTIME). Apply the runtime to your parachain
  • There can be multiple pull requests to the fellowship runtime repository. The fellowship members are required to approve these pull requests. When it is time to create a new release, someone needs to open a new pull request. The bare minimum that this pull request needs to do is to alter the CHANGELOG file (this will then trigger the release process once merged to master). This pull request could also include other changes like: new weights plus the description of the machine that was used to generate them (which is actually part of the weight file, so this should be enough), or adding migrations and ensuring they are correct. When the pull request for a new version is ready, it needs to be approved by the fellowship members again. The pull request hits master and the release logic triggers a new release. This release can then be proposed on chain.
  • People are using all kinds of UIs/wallets to sign their transactions and send them to the chain. As they do not want to trust random UIs/wallets on the internet not to steal their private keys, they rely on hardware wallets. These hardware wallets only support signing given data without exposing the key (at least when you are using a proper one ;)). There are multiple different hardware wallets. In the Polkadot ecosystem the best known are Polkadot Vault, Ledger and Kampela (still in an early phase, but paid for by the Polkadot treasury). These hardware wallets need to be able to interpret the data sent by the online UI/wallet. There are currently two different ways this is done in the Polkadot ecosystem. First, a fixed parser for the transaction format of Polkadot that has to be updated to support new versions of the Polkadot runtime. Second, a transaction parser that uses the metadata for decoding. The latter requires having the full metadata on the device (more than 1MiB) and a way to securely update the metadata. Both approaches currently rely on central entities to either sign the metadata or the "fixed" parser. Also, supporting parachains isn't that easy currently, because they need to be included in the metadata portal or would require some custom Ledger app. The problem with the metadata portal is that someone needs to run this portal and sign the metadata. The custom Ledger apps would have the problem of getting an approval for the official app store from Ledger, and the parser would also need to be changed for every parachain as the FRAME runtimes don't enforce a particular format of the transaction. So, we have two ways of implementing hardware wallets: "fixed" parsers or metadata-based parsers (okay, you could also use blind signing, but we should not even think about this). As Polkadot supports forkless runtime upgrades for itself and all of its parachains, things can change very quickly.
So, using a "fixed" parser is almost a no-go, because it would require constant updating as chains evolve. We are left with the metadata-based parser, which currently has the following two problems. First, the metadata is too big and would not fit on every possible hardware wallet currently out there. Second, trusting the metadata currently requires trusting a central entity signing the metadata for you. We can solve both of these problems by introducing merkelized metadata. Merkelized metadata basically means chunking the metadata into individual pieces, putting them into an accumulator (e.g. a merkle tree) and using the digest of this accumulator (e.g. the root hash of a merkle tree) as a secure identifier for a particular metadata instance. This solves the problem of the metadata size by only requiring individual chunks of the metadata, and not the full metadata, to decode transactions. A hardware wallet wanting to sign a transaction would get proofs for these chunks from the online wallet. The hardware wallet would use the digest of the accumulator to ensure that the proofs are correct. It can also get the digest of the accumulator from the online wallet. To ensure that the online wallet hasn't provided the wrong digest and proofs, the hardware wallet will include the digest in the signed payload of the transaction. The chain, which is aware of the digest as well, will ensure that the digest is the same by also including it on chain when checking the signature of the transaction. If the hardware wallet was using an incorrect digest, the transaction would be rejected on chain (actually before entering the transaction pool). So, a user cannot get fooled into signing an incorrect transaction. This will solve both of the stated problems. We now need to work together with the teams of Zondax (for the Ledger app) and Kampela to come up with a working implementation.
The main questions that are left for the implementation are what kind of accumulator to use and how to chunk the data. Both of these questions are very important, as we want to ensure that the metadata is chunked as efficiently as possible to produce proofs that are as small as possible when decoding a transaction. This will require some research work and tinkering. As Kampela should have enough memory/storage to cache these proofs, only the Ledger app will require some more optimizations to make it work. However, the optimizations will need to be done on the Ledger app side, e.g. by streaming the proofs and decoding the transaction on the fly (Ledger only has around 4KiB of memory), but that is nothing that should make the implementation impossible.
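The scheme described above can be sketched with a toy binary merkle tree: chunk the metadata, hash the chunks, derive a root digest, and let the wallet verify a single chunk against that digest using a proof of sibling hashes. This is a minimal sketch under stated assumptions: DefaultHasher stands in for the real cryptographic hash, and the chunking and tree layout are illustrative, not the final design.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Non-cryptographic stand-in hash for the sketch.
fn hash_leaf(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

fn hash_pair(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

// Build the root digest over a power-of-two number of chunk hashes.
fn merkle_root(leaves: &[u64]) -> u64 {
    if leaves.len() == 1 {
        return leaves[0];
    }
    let next: Vec<u64> = leaves.chunks(2).map(|p| hash_pair(p[0], p[1])).collect();
    merkle_root(&next)
}

// Proof for one chunk: the sibling hash on every level, leaf to root.
fn merkle_proof(leaves: &[u64], mut index: usize) -> Vec<u64> {
    let mut proof = Vec::new();
    let mut level = leaves.to_vec();
    while level.len() > 1 {
        proof.push(level[index ^ 1]); // sibling on this level
        level = level.chunks(2).map(|p| hash_pair(p[0], p[1])).collect();
        index /= 2;
    }
    proof
}

// The hardware wallet only needs the chunk, the proof and the root digest,
// never the full metadata.
fn verify(chunk: &[u8], mut index: usize, proof: &[u64], root: u64) -> bool {
    let mut acc = hash_leaf(chunk);
    for sibling in proof {
        acc = if index % 2 == 0 { hash_pair(acc, *sibling) } else { hash_pair(*sibling, acc) };
        index /= 2;
    }
    acc == root
}

fn main() {
    let chunks: [&[u8]; 4] = [b"pallet A", b"pallet B", b"call xy", b"types"];
    let leaves: Vec<u64> = chunks.iter().map(|c| hash_leaf(c)).collect();
    let root = merkle_root(&leaves);
    let proof = merkle_proof(&leaves, 2);
    assert!(verify(b"call xy", 2, &proof, root));
    assert!(!verify(b"wrong chunk", 2, &proof, root));
    println!("chunk verified against metadata digest");
}
```

The root here plays the role of the digest that the wallet includes in the signed payload; a forged chunk or digest makes verification fail, mirroring the on-chain rejection described above.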
  • Commits to include:
https://github.com/paritytech/substrate/commit/44998291834b3a36a11c8dede63ce60362f1a3a5
https://github.com/paritytech/substrate/commit/5766265306d104b4a5d7cf1082bcae78986dfff0
https://github.com/paritytech/polkadot/commit/7dea63643ba0736d0dad30ee3b657cb1d755fe14
https://github.com/paritytech/polkadot/commit/bc7dcf2136eedae65064e8d6772be9a987dd0820
https://github.com/paritytech/polkadot/commit/1fe1702508f24b96b136f696fc180d6b27145c8b
https://github.com/paritytech/polkadot/commit/632f1ee1d940027bb17ebf3fb38c3ccae0a3e5a0
Justification for the release: We have seen multiple reports of nodes falling out of sync. The assumption is that this was related to the sync refactoring that changed the way the syncing code communicates with the networking code. Before the refactoring, the communication was done synchronously as part of the networking code. With the refactoring, the networking code communicated via an asynchronous channel with the syncing code. The metrics indicated that this channel was growing significantly. A fix was merged that should improve the polling behavior of the channel to speed up the communication between the networking and the syncing code.
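The growing-channel symptom can be illustrated with a tiny sketch: if the consumer handles only one message per wakeup, a fast producer makes the queue grow; draining everything that is already queued on each poll keeps it bounded. This is a simplified illustration using std mpsc, not the actual code changed in the linked commits.

```rust
use std::sync::mpsc;

// Drain every message that is already queued, without blocking. Handling the
// whole backlog per wakeup is the general idea behind improving the polling
// behavior of the networking -> syncing channel.
fn drain_pending<T>(rx: &mpsc::Receiver<T>) -> Vec<T> {
    let mut batch = Vec::new();
    // try_recv never blocks, so one poll empties the current backlog.
    while let Ok(msg) = rx.try_recv() {
        batch.push(msg);
    }
    batch
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate the networking side queueing many sync messages at once.
    for block_number in 0..100 {
        tx.send(block_number).unwrap();
    }
    let batch = drain_pending(&rx);
    assert_eq!(batch.len(), 100);
    println!("drained {} queued sync messages in one poll", batch.len());
}
```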
  • I don’t think that we require any special/sophisticated CI setup for the fellowship repo. We should be able to use GitHub Actions. Executing cargo test --workspace --features runtime-benchmarks should be enough to ensure that everything compiles and works as expected. When it comes to benchmarking, I don’t think that the benchmarking machines need to be under the control of the fellowship. The nice property of our benchmarking system is that all the weights are in Rust files, which means someone can make a PR to update them. So, on some regular basis (could also be done by some external bot) there will be a pull request to the fellowship repo. This pull request will propose to use the given new weights and explain on what kind of machine the benchmarking was executed. The fellowship members will then be able to re-execute the benchmarking and ensure that the weights are correct (or just trust the pull request as it is done by some known community member/bot :P). This means that we don’t need any benchmarking machinery in the control of the fellowship. Updating the fellowship runtimes: Now let’s come to the important point, updating the fellowship runtimes. As we are only using crates.io releases for the Substrate/Polkadot/Cumulus crates in the runtimes, we require that all of the crates are updated regularly on crates.io. Then someone will need to make a pull request to the fellowship repo updating the runtimes etc. Here we should start eating our own dogfood, that means no companion job or whatever that yells at people in our repos if they break the fellowship stuff. What does eating our own dogfood mean? Start creating proper changelogs. I already brought this up somewhere else, but the TLDR is that we should start adding “@changelog frame-system Added support for xy\n Add more context. You need to change the following etc”, like it is done at rust-analyzer. When we compile the crates.io releases, we should then scan all the PRs for these special comments.
The exact implementation should be discussed somewhere else, but it should be quite simple. This will also help the ecosystem quite a lot, because we don’t throw them down the hill for every change we are doing. They will be thanking us if we improve on this. For certain, more substantial changes, I could imagine that the one who creates the pull request against the Parity repos also proposes certain configuration parameters to the fellowship as a pull request. This companion-like pull request in the fellowship repo would not get merged directly; it would wait for the general update cycle to happen. Then it could maybe be directly included into the update pull request, or merged after the update pull request got merged, with some sane configuration values. This overall process will require that we have a rock-solid crates.io release process. There will be times when we need to release a new runtime upgrade instantly, because of some security-related issues or minor fixes. Maybe for really critical things we will need to fall back to using git dependencies for the moment, to not highlight the changes we want to integrate too early. Fellowship merge rights: We only want multiple fellowship members above a certain rank to be able to merge pull requests. For that we should be able to use our already existing (very nice) GitHub action pr-custom-review. I think we should avoid inviting all the people to the fellowship repo; this wouldn’t be decentralized :stuck_out_tongue: The good thing is that we already have a list of all fellowship members with their ranks, on chain. This list is always up to date. So, I would propose that we write, based on smoldot, a tool that fetches the list of fellowship members from Polkadot (live in every run of the action). We would also require that every member adds their GitHub account as an identity in the additional fields.
I think this would be a really good showcase of how to have decentralized merge rights using our tech stack. It should also not be too complicated to achieve.
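The "@changelog" scanning proposed above could look roughly like this. This is a hypothetical sketch: the "@changelog <crate> <description>" tag format is the one proposed in the note, and no such tool exists yet.

```rust
// Scan a PR description for proposed "@changelog" entries and return
// (crate, description) pairs. Tag format is the proposal from the note above.
fn extract_changelog_entries(pr_body: &str) -> Vec<(String, String)> {
    pr_body
        .lines()
        .filter_map(|line| {
            // Only lines of the form "@changelog <crate> <description>".
            let rest = line.trim().strip_prefix("@changelog ")?;
            let (krate, desc) = rest.split_once(' ')?;
            Some((krate.to_string(), desc.trim().to_string()))
        })
        .collect()
}

fn main() {
    let pr = "Some description.\n@changelog frame-system Added support for xy\nMore context.";
    let entries = extract_changelog_entries(pr);
    assert_eq!(
        entries,
        vec![("frame-system".to_string(), "Added support for xy".to_string())]
    );
    println!("{} changelog entry found", entries.len());
}
```

A release job could run this over all merged PRs since the last release and concatenate the results into the CHANGELOG file.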
  • The slot duration up to now is more of a constant that should not be changed after genesis. However, there are requests to change this, because it was for example configured incorrectly, or because the chain wants to build blocks faster or slower. To achieve this, we will need to write some custom migration. At the time of writing, this "guide" assumes we are talking about a parachain that uses AURA as its block authoring implementation. Problem description: When migrating the slot duration we change the way the slot is calculated. A slot is basically calculated by taking the current time and dividing it by the slot duration. Let's assume the time is 10 and our slot duration is 5, so we are at slot 2. If you change the slot duration to 2 (aka decreasing the time between
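The arithmetic above can be checked directly: with slot = time / slot_duration, changing the duration makes the slot number jump, so a migration has to bridge the discontinuity. The offset below is one hypothetical way to do that, not the actual migration.

```rust
// Worked example of the slot-duration problem: the slot is the current time
// divided by the slot duration, so changing the duration changes the slot
// number the chain thinks it is at. Values are illustrative.
fn slot(timestamp: u64, slot_duration: u64) -> u64 {
    timestamp / slot_duration
}

fn main() {
    let now = 10;
    assert_eq!(slot(now, 5), 2); // old duration 5: we are at slot 2
    assert_eq!(slot(now, 2), 5); // new duration 2: suddenly slot 5, not 3
    // A migration could record an offset at the switch-over so the slot
    // sequence stays continuous (hypothetical approach):
    let offset = slot(now, 2) - slot(now, 5);
    assert_eq!(slot(now, 2) - offset, 2);
    println!("without an offset the chain jumps from slot 2 to slot 5");
}
```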
  • We are searching for the following validators. Below you see the multiaddresses of these validators. Each multiaddress contains the IP address plus the peer ID. If you control any of these validators, please get in contact with us; we would like to get some information about your setup. Update: The list now also contains the public authority discovery session keys. Multiaddresses of validators:
EgF6pyuUZ332JFe8QvU8EVdvqoncbEPFZzBneNcrZpmwnu9:
/ip4/147.182.147.191/tcp/30333/p2p/12D3KooWKaXFSVUzyb2BQaDKWYtNMkC6ZfYazwJBM9Hi98ZjiGSq
/ip6/2604:a880:cad:d0::e65:8001/tcp/30333/p2p/12D3KooWKaXFSVUzyb2BQaDKWYtNMkC6ZfYazwJBM9Hi98ZjiGSq
EJbbhGibWSjLzgiMerUiQ1dGTYvjqGHswBVciUi83xaySG8:
/ip4/35.231.20.182/tcp/40333/p2p/12D3KooWHUowBEgqt1FY5CXzQk5pmgBopoXKfNKxcgsTS9rxXFGf