Vitalik recently posted an article to his blog critiquing Worldcoin and proof-of-personhood approaches in general. There are some great thoughts I'd like to respond to, and I'd also like to make BrightID's approach better known, especially since much of it is tied to Aura, a new verification method for BrightID that had an initial beta release and is undergoing major changes based on feedback from beta testers.
There are also some features of BrightID core (blind signatures with expirations) and Unitap (universal distributions with selection) that help with some of the issues Vitalik highlights, such as selling or renting accounts.
I'll create sections discussing the relevant parts of Aura, BrightID core, and Unitap, and then comment on some of the article's points in order.
I gave a talk on BrightID / Aura at EthDenver 2023.
Aura is a new verification method for BrightID. We created it based on the observation that most BrightID users aren't interested in helping others get verified; they just want a BrightID verification to qualify for an app they want to use (such as CLR.fund or Unitap) that requires BrightID.
For Aura verification, you just need to know one (or more) Aura "players" (role names in Aura have sports analogues), and those players will do the work for you to get you verified. This is similar to the role of a host in a BrightID meets verification party, or a Worldcoin orb owner.
Aura adds some decentralization to the appointment of BrightID verifiers. In the old "meets" verification, the path to adding more verification hosts goes through a vote of a "seed DAO" made up of existing hosts, but scaling seed DAOs is cumbersome and just hasn't happened. Aura aims to fix this by allowing anyone to start as a provisional player and then acquire more verification power as other Aura participants ("trainers") notice they're doing well.
One of the key features of Aura is that it can quickly respond to invalidate whole regions of the graph if an attack is detected. Players derive energy (the power to verify) from each other, and astute participants ("managers") can adjust or cut off the flow of energy as needed.
The portion of the graph that is analyzed by SybilRank is moved inwards: instead of analyzing the whole social graph, it analyzes just the graph of Aura managers, making the entire scheme much more scalable. If 1% of BrightID users are Aura participants and 1% of Aura participants are managers, then SybilRank only needs to process a graph of around 1 million nodes rather than up to 10 billion, the maximum that might be reached in the original BrightID algorithm, where everyone was expected to participate in verification.
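The scaling estimate above is easy to check with back-of-the-envelope arithmetic (the 10 billion upper bound and the two 1% figures are the ones given in the text):

```python
# Back-of-the-envelope check of the SybilRank scaling claim.
total_users = 10_000_000_000   # upper bound on BrightID users assumed above
aura_fraction = 0.01           # 1% of users become Aura participants
manager_fraction = 0.01        # 1% of Aura participants become managers

aura_participants = total_users * aura_fraction
managers = aura_participants * manager_fraction

# SybilRank only needs to analyze the manager graph: ~1 million nodes.
print(int(managers))  # 1000000
```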
There are a few ways in Aura that a player can participate in verification.
Even though the burden of verification is shifted from BrightID users to Aura players, there is still a bit of work for BrightID users to do. First, a BrightID user will have to make a connection to an Aura player. Next, an Aura player may ask a BrightID user to make connections to people they know, so the player can ensure that no key people are missing that might have allowed the user to have gotten verified by a different player using a different BrightID.
This tool for evaluating mutual connections already exists in Aura, and is what players primarily use today to help people get verified.
A potentially simpler alternative or addition to "mutual connections" is for an Aura player to make an assertion about semi-public information (phone number, address, Twitter account, full name + birthdate + location) and check those assertions with other Aura players.
Using a socialist millionaire protocol, this can be done without revealing any information to someone who doesn't already know it, and to avoid publicly connecting any of this information to a specific BrightID.
If I know Bob's Twitter account and phone number, I can check whether another Aura player has already asserted a link between that data and a specific BrightID, and if so, whether that BrightID matches the one I'm verifying. I can learn whether or not the BrightIDs we're evaluating match without revealing or learning a new BrightID. If another Aura player has linked that data to a different BrightID than the one I linked it to, that could be a sign that Bob is running a sybil attack, or that someone who knows the information is griefing him to block his verification.
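As a rough illustration of the equality check, here is a toy sketch using salted hash commitments. This is not the actual socialist millionaire protocol, and all names and values are hypothetical; a real deployment would use SMP, which additionally resists offline guessing on low-entropy identifiers.

```python
import hashlib
import secrets

def commitment(shared_salt: bytes, bright_id: str) -> str:
    # Hash the BrightID under a salt the two players agreed on out of band.
    # Comparing commitments reveals equality only; neither player learns a
    # BrightID they didn't already know.
    return hashlib.sha256(shared_salt + bright_id.encode()).hexdigest()

# Hypothetical scenario: two players both know Bob's phone number and
# Twitter handle, and each has linked that data to some BrightID.
salt = secrets.token_bytes(16)  # agreed between the two players
player_a_link = commitment(salt, "brightid-seen-by-player-a")
player_b_link = commitment(salt, "brightid-seen-by-player-b")

if player_a_link == player_b_link:
    print("same BrightID: assertions are consistent")
else:
    print("different BrightIDs: possible sybil attack or griefing")
```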
I would love to build the known-identifiers tool into Aura. We will build it eventually, as we always have with important BrightID features–it just takes longer with low funding, but we've become accustomed to getting things done with very little money.
A manager can use their knowledge of connections to investigate and cut off verification potential (energy) from suspicious regions of the graph. The Aura graph explorer tool is helpful for this.
I won't go into Aura's design for decentralization in full here, but you can find more information in the guide and in these rough write-ups on Aura definitions and Aura levels. The documentation is rough so far, for which I apologize. Please reach out to me directly in the Aura Discord if you have questions about how Aura works.
Essentially, managers evaluate and give energy to other managers using a weighted SybilRank algorithm, with a concept of "teams" providing added resiliency.
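To give a feel for the ranking involved, here is a minimal sketch of plain SybilRank via early-terminated power iteration on a small hypothetical trust graph. The weighting and team mechanics that Aura adds are omitted, and the graph data is made up for illustration.

```python
import math

# Undirected trust graph among managers (hypothetical example data).
graph = {
    "alice": ["bob", "carol", "erin"],
    "bob":   ["alice", "carol", "erin"],
    "carol": ["alice", "bob", "dave"],
    "dave":  ["carol"],            # attached to the rest by a single edge
    "erin":  ["alice", "bob"],
}
seeds = {"alice"}                  # trusted seed managers

# SybilRank: spread seed trust for ~log2(n) power-iteration steps.
trust = {v: (1.0 if v in seeds else 0.0) for v in graph}
steps = max(1, math.ceil(math.log2(len(graph))))
for _ in range(steps):
    nxt = {v: 0.0 for v in graph}
    for v, neighbors in graph.items():
        share = trust[v] / len(neighbors)
        for u in neighbors:
            nxt[u] += share
    trust = nxt

# Degree-normalized rank; dave, weakly attached, ends up with the lowest score.
rank = {v: trust[v] / len(graph[v]) for v in graph}
for v in sorted(rank, key=rank.get, reverse=True):
    print(v, round(rank[v], 3))
```

Early termination is the key idea: trust from the seeds has time to spread through the well-connected honest region but not to saturate weakly attached (potentially sybil) regions.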
BrightID stands as a simple and convenient way to make pseudonymous, signed, verifiable connections to another person. Farcaster and Lens also allow this, but didn't exist when BrightID launched in January 2019. Farcaster and Lens require the user to have a crypto wallet, which BrightID doesn't.
When first designing BrightID, I considered using Facebook or Twitter connections, but one of the several problems leading to low-quality connections there is that the connections aren't cryptographically signed. That was a deal breaker for me, so I created our own connection-making service.
A key feature of BrightID providing privacy and unlinkability is that verifications use blind signatures: when a BrightID node signs a verification and that signed verification is later presented back to it, the node can't tell which BrightID it represents. Apps using BrightID don't reference users by their BrightID, but instead by an "AppUserId", a set of unique identifiers the user and app create themselves, and which BrightID nodes sign.
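The unlinkability property can be illustrated with a textbook RSA blind signature using toy parameters. This is a generic sketch, not BrightID's actual scheme or key sizes; the message stands in for an AppUserId hash.

```python
# Toy RSA key (insecurely small, for illustration only).
p, q = 61, 53
n = p * q                          # modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 1234                           # stand-in for a hash of the AppUserId

# User blinds the message with a random factor r before sending it to the
# node (r fixed here for determinism; it must be coprime with n).
r = 71
blinded = (m * pow(r, e, n)) % n

# Node signs the blinded message without ever learning m.
blind_sig = pow(blinded, d, n)

# User unblinds; the node never sees this final signature next to m,
# so it can't link the verification back to the signing request.
sig = (blind_sig * pow(r, -1, n)) % n

print("valid:", pow(sig, e, n) == m)  # prints: valid: True
```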
BrightID has social recovery built in, which makes it cheap and easy to replace one's BrightID if compromised. You could sell your BrightID and then rug pull the buyer. Aura provides an even more fool-proof anti-collusion option, which is to go back to the Aura player or players that got you verified and tell them you sold or lost your account. They will do the work for you to unverify the old BrightID and verify your new one.
Blind signature verifications have an expiration, so you'll have to wait for the next expiration to start using your new BrightID. Verification expirations are set per-app (remember that verifications use AppUserIds not BrightIDs) and have lengths that make sense for the app. For example, in CLR.Fund, verifications expire right before the start of a new voting round. Relinking a new verification takes seconds.
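A per-app expiration policy like the one described might be modeled as follows. The dates, app names, and function are hypothetical sketches, not BrightID's actual node API.

```python
from datetime import datetime, timezone

# Hypothetical per-app expiration boundaries: each app picks a schedule
# that makes sense for it. A CLR.Fund-style app expires verifications
# right before each new voting round; a faucet might use weekly windows.
app_expirations = {
    "clr.fund": datetime(2024, 1, 1, tzinfo=timezone.utc),  # next round start
    "unitap":   datetime(2023, 12, 4, tzinfo=timezone.utc), # weekly window
}

def verification_valid(app: str, now: datetime) -> bool:
    # A blind-signed verification for an AppUserId stays valid until the
    # app's next expiration boundary; after that, the user relinks
    # (which takes seconds).
    return now < app_expirations[app]

print(verification_valid("clr.fund", datetime(2023, 12, 25, tzinfo=timezone.utc)))
```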
Unitap is a universal faucet. Anyone can put up coins for distribution to unique humans. Unitap currently distributes over 20 network and test network gas tokens, as well as Bitcoin, $Bright, and SONG.
A key feature of Unitap is that each participant must choose a limited selection of tokens to receive from a larger list (currently five selections per week for gas tap and three for token tap).
This helps to prevent BrightID account renting. Account renting occurs, for example, when Alice and Jane both receive many types of voting tokens in airdrops to unique humans, but Alice only wants to vote in A, B, and C, and Jane only wants to vote in D, E, and F, so they sell their voting rights to each other. If instead both Alice and Jane could only choose to receive three types of voting tokens, the chances are much higher that neither will have anything they want to sell.
Similarly, BrightID nodes could limit the number of apps any one person can link in an expiration period to make it much less likely that anyone will have app access leftover to rent.
This works as long as there are enough compelling choices so that everyone can find something they want to use for themselves. If I can collect $100 in fungible UBI tokens or a DAO voting token worth $100 to someone else (but not both), why would I bother to collect and sell the DAO voting token? I'll collect the UBI tokens instead.
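The selection limit itself is simple to state in code. The per-week limits are the ones given above; the function and token names are a hypothetical sketch, not Unitap's implementation.

```python
GAS_TAP_LIMIT = 5    # selections per week for gas tap (from the text)
TOKEN_TAP_LIMIT = 3  # selections per week for token tap (from the text)

def claimable(requested: list, available: set, limit: int) -> list:
    # A unique human may claim only `limit` tokens from the larger list,
    # so there is rarely leftover access worth renting out.
    return [t for t in requested if t in available][:limit]

available = {"ETH-goerli", "MATIC", "BRIGHT", "BTC", "SONG", "GNO", "CELO"}
picks = claimable(["MATIC", "BRIGHT", "SONG", "GNO"], available, TOKEN_TAP_LIMIT)
print(picks)  # ['MATIC', 'BRIGHT', 'SONG'] -- the fourth pick is cut off
```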
I wrote about universal dividends with choice and vote buying in several of my blog posts.
With that background information out of the way, I think I can provide concise responses to some clips from Vitalik's post.
"more ambitious proof-of-personhood project: Worldcoin"
BrightID is no less ambitious, just much less funded because we operate as a public good rather than a VC-backed company.
"For example, signing up to Proof of Humanity means publishing your face on the internet. Joining a BrightID verification party doesn't quite do that, but still exposes who you are to a lot of people."
Ideally, BrightID verification comes only from people you already know. Contrast "meets" verification (which I consider a stop-gap) with Aura verification.
I would never try to assert that BrightID as a whole is anonymous, though. The anonymous graph could be overlaid with other graphs. People you connect to on BrightID could try to dox you. Some people (including me) have published their own BrightIDs. You might be considered "guilty by association" if an authority confiscates your device and looks at your BrightID connections.
"If an adversary can forcibly (or secretly) scan your iris, then they can compute your iris hash themselves, and check it against the database of iris hashes to see whether or not you participated in the system."
What happens if your iris scans are stolen, for example by a fake orb? What if the identity thief is also a real orb holder? If you scan your irises into a fake orb and someone else then uploads them to a real orb, they own your unique identity. Even easier: a real orb holder claims to help you get verified on Worldcoin, but instead keeps your new account for themselves.
How do you recover if you sell your unique account or if your irises were stolen? Does a new registration always invalidate an old one? If so, then can someone who has a picture of your irises continuously steal or invalidate your account? If not, do we need to maintain an iris court to handle these disputes? How is it governed?
BrightID has social recovery. An Aura player can easily reverify you and unverify your old account. How do you recover from stolen irises?
What prevents orb holders (or manufacturers) from creating massive sybil attacks? What does the graph of accountability look like among orb holders? If you create a decentralized system of accountability like this, you've created Aura (and probably don't need biometrics anymore).
"most people in the world are not even aware of proof-of-personhood protocols, and if you tell them to hold up a QR code and scan their eyes for $30 they will do that."
I totally agree with this. We've seen people selling their BrightID accounts because they don't know what they are. Once someone later becomes educated and realizes it was a mistake to sell their identity, what can they do? BrightID has a simple solution (social recovery or make a new account). What if you sold your irises? How do you get them back? (See above.)
"re-register, canceling the previous ID."
Ok, so how do you do this in Worldcoin? Is there a court system? (See above.)
"a UBI coin provides an easily understandable incentive for people"
This is very important. See the part about value and incentive in the Unitap section above.
"A common fear is that this makes it too easy to create 'permanent records' that stick with a person for their entire life."
This is also very important. See the part about BrightID blind signatures above.
"This can be done either by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for this task, or by revealing more information about how an ID was registered (eg. in Worldcoin, which Orb it came from), and leaving this classification task to the community."
I believe this bureaucracy (or rather decentralized classification by a community of experts) is actually the key to proof of personhood. We will need this in whatever system we use. This is exactly what Aura is trying to establish.
"Presumably, governance could limit how many valid Orbs each manufacturer can produce, but this would need to be managed carefully, and it puts a lot of pressure on governance to be both decentralized and monitor the ecosystem and respond to threats effectively: a much harder task than eg. a fairly static DAO that just handles top-level dispute resolution tasks."
This monitoring of decentralization is done by Aura teams and leagues.
"Renting out your ID is not prevented by re-registration. This is okay in some applications: the cost of renting out your right to collect the day's share of UBI coin is going to be just the value of the day's share of UBI coin. But in applications such as voting, easy vote selling is a huge problem."
See Unitap's approach above. I will reiterate that it works as long as there is a limited choice, and enough compelling choices so that everyone can find something (such as UBI tokens) they want to use for themselves.
"Bootstrapping: for a user to join a social-graph-based system, that user must know someone who is already in the graph."
I'd say this is BrightID's main challenge.
"risks excluding entire regions of the world that do not get lucky in the initial bootstrapping process."
I think the world actually connects together pretty quickly. I don't think it'd be hard to have Aura players around the world.
"A social-graph-based system bootstrapped off tens of millions of biometric ID holders, however, could actually work."
"Biometric bootstrapping may work better short-term, and social-graph-based techniques may be more robust long-term, and take on a larger share of the responsibility over time as their algorithms improve."
I've thought about this, too, and I guess you're saying you'd rather use Worldcoin than ePassports to bootstrap.
"Of course, zero-knowledge technology can mitigate this (eg. see this proposal by Barry Whitehat), but the interdependency inherent in a graph and the need to perform mathematical analyses on the graph makes it harder to achieve the same level of data-hiding that you can with biometrics."
Yes! I remember Barry came up with "blind find" after some chats we had at EthDenver three years ago. I wish I could snap my fingers and make it so.
"not having any proof-of-personhood at all has its risks too: a world with no proof-of-personhood seems more likely to be a world dominated by centralized identity solutions, money, small closed communities, or some combination of all three."
Yes. It's useful to think about what happens if we don't build this.