# Validator Key Usage Differences
## References
### Stereum
https://github.com/stereum-dev/ethereum2-ansible/tree/main/roles
### Eth-docker
Eth-docker uses the CLI. It provides a validator-import service that can be run, which abstracts what each client needs to get the keys in; users then supply the keystore password in the workflow of the specific client.
I could probably abstract even further, but wanted to keep the wrapper light so users have the trust factor of interacting directly with the client, where that is an option. For Teku it's just a script that creates files.
So for the user, regardless of client, it's "place the keys into `.eth/validator_keys` and then run `docker-compose run --rm validator-import`". The workflow this kicks off is client-specific. The end result is always the same: keys are imported to the client, and the client can be started with them.
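A minimal sketch of that flow, assuming the keystores were generated by the deposit CLI into ~/validator_keys (that source path is an assumption; the import directory and command are as quoted above):

```bash
# Copy the generated EIP-2335 keystores into eth-docker's import directory
# (source path is an assumption; adjust to wherever the deposit CLI wrote them)
cp ~/validator_keys/keystore-*.json .eth/validator_keys/

# One-shot import service; the client-specific script prompts for the keystore
# password and imports the keys into whichever client the stack is configured for
docker-compose run --rm validator-import

# With the keys imported, start (or restart) the client
docker-compose up -d
```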
### dapplion
https://github.com/ethereum/beacon-APIs/pull/151/files
## Current CLI wrappers
### Stereum
https://github.com/stereum-dev/ethereum2-ansible/tree/main/roles
### Eth-docker
https://github.com/eth2-educators/eth-docker/blob/main/prysm-validator.yml#L61
Also relevant: https://github.com/eth2-educators/eth-docker/blob/main/prysm/validator-import.sh
From @Yorick | cryptomanufaktur.io (the expert here):
What most people would be using is the -base.yml, like https://github.com/eth2-educators/eth-docker/blob/main/lh-base.yml
This exists for all five clients, and each has a validator-import service in it, which then does "the thing" depending on the client.
### Rocket Pool
It looks like this is where Rocket Pool handles keys: each client has its own implementation of StoreValidatorKey. At first glance they don't appear to use the Prysm API; they generate the wallet themselves.
interface: https://github.com/rocket-pool/smartnode/blob/774ec28531a702a446edcd30756691c0f4942d2a/shared/services/wallet/keystore/keystore.go
prysm: https://github.com/rocket-pool/smartnode/blob/774ec28531a702a446edcd30756691c0f4942d2a/shared/services/wallet/keystore/prysm/keystore.go
fwiw, here is where RP uses each beacon/validator cli to spin them up:
https://github.com/rocket-pool/smartnode-install/blob/master/amd64/rp-smartnode-install/network/mainnet/chains/eth2/start-beacon.sh
https://github.com/rocket-pool/smartnode-install/blob/master/amd64/rp-smartnode-install/network/mainnet/chains/eth2/start-validator.sh
https://forum.dappnode.io/t/proposal-of-grant-to-fund-development-and-integration-of-alternative-eth2-clients/1247
## Benefits
- Multi-client GUIs and multi-client abstractions: projects like Rocket Pool, eth-docker, DAppNode, Wagyu, and Stereum depend on key import.
- All of these integrations are currently CLI based rather than REST API based, so the impact of any CLI change is huge.
- A proposal for a similar REST API has already been made (see below).
## Proposed Solutions
Dapplion proposed that we have a common API set for validator key management:
https://github.com/ethereum/beacon-APIs/pull/151
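As a rough illustration of what a common key-manager API could look like (the endpoint paths, port, and payload shape below are hypothetical, for illustration only, not taken from the PR):

```bash
# List the validator keys the client currently manages
# (hypothetical path and port, illustrative only)
curl -H "Authorization: Bearer $TOKEN" \
     http://localhost:5062/eth/v1/keystores

# Import an EIP-2335 keystore together with the password that decrypts it
curl -X POST \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     --data '{"keystores": ["<EIP-2335 keystore JSON>"], "passwords": ["<password>"]}' \
     http://localhost:5062/eth/v1/keystores
```

The point of standardization is that this same pair of calls would work against any client, replacing the five client-specific CLI workflows above.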
### from rolfyone
Having CLI access implies a level of admin access, and I guess with that a level of responsibility, and it'd be potentially less open to error with export/import processes...
If we do go this way, it would probably be pertinent to not require a login/logout structure, but also to be able to just issue an authenticated api call that has no real 'session'. Partially I say this because sessions can be particularly painful to manage in balanced environments.
We would need some level of protection when calling the API while the validator is running, or some other safeguard if it already has keys.
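A sketch of the sessionless pattern rolfyone describes, with hypothetical endpoints: every request carries its own credentials, so there is nothing for a load balancer to pin.

```bash
# Sessionless: each call is independently authenticated; no login/logout,
# no server-side session state to replicate across balanced instances
curl -H "Authorization: Bearer $API_TOKEN" http://localhost:5062/eth/v1/keystores

# The session-based flow this avoids, for contrast:
#   POST /login   -> Set-Cookie: session=...   (server must now track state)
#   GET  /keys    -> Cookie: session=...
#   POST /logout  -> server invalidates the session
```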
### from mcdee
- The beacon node doesn't expose these APIs (though it may contain a validator in it, like Teku/Nimbus). Maybe we shouldn't have APIs do this... Also, the validator doesn't necessarily own the keys, which it would need to for us to do this...
- First, and most importantly, this locks in a specific way of thinking about a validator, specifically that there is an endpoint that is responsible for both validating and signing. This is becoming more commonly not the case, with multiple remote signer implementations available and being used. This is not only a conceptual issue but a practical one: which process runs the API? This repo builds an API that is run on the beacon node, and that's no use for these endpoints in many cases. Ditto the validator client, as it may not have ownership of the keys (but does need to know the pubkeys for which it is validating). The signer is the most likely place to put this, or some of this, but signers are generally built to be highly secure, and allowing creation and deletion of keys seems to be an unlikely fit with their existing security model.
- Second, the general idea of accessing slashing protection data from an active endpoint is a Bad Thing. By definition, fetching this data from a running system means that it is out of date as soon as it has been retrieved and so cannot be trusted to be a complete picture when stored or imported elsewhere.
- no HTTPS makes this less secure...
- More of an implementation "detail", but the lack of HTTPS seems massively insecure. The idea that the keystore, plus the passphrase to access its private data, are both sent in the same request and in plaintext is very concerning (see the sketch after this list).
- how much of a benefit will commonality be?
- The operations to import validators, although not standard between implementations, are relatively simple, and so would not benefit massively from having an API rather than the existing CLI.
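To make the HTTPS concern above concrete: without TLS, a key-import request carries both the keystore and the passphrase that decrypts it in cleartext, so anyone who can observe the traffic can recover the signing key. The endpoint path below is hypothetical.

```bash
# Both secrets cross the wire in cleartext in a single request
curl -X POST http://validator-host:5062/eth/v1/keystores \
     -H "Content-Type: application/json" \
     --data '{"keystores": ["<EIP-2335 keystore JSON>"], "passwords": ["<keystore passphrase>"]}'
```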
### from dapplion
This API should not be run by the beacon node; it should be run by the validator client, unless using remote signing.
### from paul hauner (lighthouse)
Here is the link to the endpoints we currently have on the Lighthouse API now: https://lighthouse-book.sigmaprime.io/api-vc-endpoints.html This API needs some work before it can become GUI-ready. Among other things, we need to add some endpoints around importing/exporting slashing protection data.
For the record, I generally support the standardization of the VC API.
I think we should keep the authentication for the API and the encryption for validator keys as separate systems. This is for a few reasons:
- It's not clear to me that changing the UI password should result in changing keystore passwords, which makes me think it could catch people by surprise to find that their keystores are no longer encrypted with the password they wrote down when they generated them.
- Changing keystore passwords involves working through the KDF twice: once to decrypt with the old password and once to re-encrypt with the new one. That generally means 4-6 seconds per validator keystore, which will make for a laggy experience with multiple validators (at 5 seconds each, 100 keystores take over 8 minutes).
- We want to keep the validator open to using various sources for accessing keys. E.g., remote signers and hardware wallets. It might not always be clear which things can and can't have their passwords changed.
- The storage of passwords is somewhat complex across clients. Lighthouse allows the password to be provided via the validator_definitions.yml file or a separate file, and it can be rather complex making sure all these methods are updated. If there are clients that support environment variables, then those users will be locked out next time they boot.
- It's going to be difficult to atomically update all the keystores. For example, what happens if we get a file-permissions error or a full disk in the middle of re-encrypting multiple keystores? We could end up with half the keystores on the new password and half on the old.
I would say that even baking in any concept of a "wallet" is best avoided. The Lighthouse validator client doesn't pay any attention to wallets (as defined in EIP-2386). We simply have a bunch of encrypted keystores; we don't care if they were derived from a common seed or not.
I believe we generally steer clear of pagination in this repository. I'd be a fan of that here, I can't imagine any responses being prohibitively large and it saves a lot of complexity.
### from cheatfate
- Avoid cookies?
Is it possible to avoid cookies? As soon as you introduce cookies for authentication, the server becomes stateful. Also, because Max-Age is not specified, it's possible to create cookies that will never be deleted, and so a stolen cookie could be used indefinitely.
- Response from Paul:
Lighthouse has an auth scheme in its existing VC API:
A secret is passed by the Authorization HTTP header: https://lighthouse-book.sigmaprime.io/api-vc-auth-header.html
Since the secret is a secp256k1 public key, the server also includes a Signature header in all responses (note, this is not safe from replay attacks): https://lighthouse-book.sigmaprime.io/api-vc-sig-header.html
Here's an example of a curl request using this scheme:
```bash
curl -v localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"
```
This approach is simpler for the VC, but it doesn't allow for user specified passwords. Our design ethos is to keep the VC HTTP server as simple as possible and to handle all the user-facing authentication in the GUI. Here's a design doc we drew up a while ago: https://hackmd.io/zS3h-Kk_TTmYhvyoHap6DQ
Basically, we expect the VC and BN to be hiding behind a reverse-proxy provided by the GUI web app. This avoids needing to provide HTTPS certificates for the BN, VC and web application.
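As I read that design doc, the topology is roughly the following (my sketch, not a diagram from the doc):

```
browser --HTTPS--> GUI web app / reverse proxy --HTTP (localhost)--> VC API
                                               \--HTTP (localhost)--> BN API
```

Only the proxy terminates TLS, so neither the BN nor the VC needs its own certificate.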
### from Michael Sproul
Exporting the slashing protection data is futile if the validator client continues to sign messages with the keys after the export. The exported data will become stale almost immediately and offer no protection.
I think any API that exports slashing protection data should also atomically disable the keys for which the data was exported. We could support this either by keeping the API as-is and specifying that the exported keys will also be disabled, or by creating a new unified /export endpoint that:
1. exports the keystores for the requested validator public keys (optionally all of them), including passwords, and
2. exports the slashing protection data for the requested validators,
whilst
3. disabling all requested validator public keys so that no new messages are signed which aren't included in the slashing data from (2).
We could create nice symmetry by allowing the API for importing keystores to accept optional slashing protection data, so that you can feed the output of the export endpoint into the import endpoint in order to migrate validators from one VC to another (we've been discussing this for Lighthouse here: sigp/lighthouse#2557).
If there are no objections to the above design I could try speccing it as a PR to this PR.
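A sketch of the round trip that design would enable, with hypothetical endpoint names, port, and payload shape (the real paths would come out of the spec work):

```bash
# Export keystores, passwords, and slashing protection for two validators from
# the old VC; per the design above, the same call atomically disables those keys
curl -X POST -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     --data '{"pubkeys": ["0xaaaa...", "0xbbbb..."]}' \
     http://old-vc:5062/export > migration.json

# Feed the export straight into the new VC's import endpoint; the slashing
# protection data travels with the keys, so signing can resume safely
curl -X POST -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     --data @migration.json \
     http://new-vc:5062/import
```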