
Jagrut Kosti

Token vesting is one of the less talked about topics in Web3, and that's probably for the best! Let's explore what it is, why it can be useful, and whether it actually solves any problem.

What's Vesting?

Token vesting refers to the percentage of tokens that you know for certain are unlocked and available for use and trade. It's similar to ESOPs (Employee Stock Option Plans), with the major difference that ESOPs, even when vested, cannot be freely traded right away.

There are many nuances to how token allocation is structured and what vesting exactly means, and this changes from project to project. For example, a project can make only 20% of the total token supply available for claiming via an airdrop. The rest can either be allocated later in further rounds, or be part of the token emissions via staking rewards, etc.

Why Vesting?

Most Web3 projects deploy vesting strategies to prevent mass dumping of their token after the initial allocation or airdrop, thereby saving the token price from plummeting. The rationale is that since more tokens are yet to be vested (unlocked), no one can dump all of their tokens and exit. This leaves room for speculation about whether the future vested tokens will be dumped or not, preventing the token price from completely nosediving.

Broadly speaking, there are two types of vesting strategies:

  1. Only a certain percentage of the total supply is in circulation, distributed via airdrops. The unvested (locked) tokens are then brought into circulation through protocol mechanisms like validator rewards, emissions, etc. The unvested tokens are NOT airdropped again to the same set of holders or a large subset of them.
  2. Similar to 1, but the unvested tokens are brought into circulation via further airdrops, and the newly vested tokens are distributed to the same set of holders or a large subset of them.

The next section applies only to the second type.

Is it effective?

I'll try to explain why vesting is actually worse than fully unlocked/vested tokens at the time of launch. Assumptions:

  • This analysis is from a token holder's / airdrop recipient's perspective only. It does not cover token buyers or investors.
  • The token holder knows about all their vested tokens and their schedule.
  • The vesting schedule is the same for all token holders, i.e. tokens vest at the same time for everyone.

From a game-theoretic perspective, any airdrop is a positive-sum game, i.e. there is no monetary loss even if the price of the tokens drops to zero. The utility lies in deriving the maximum profit. Consider a scenario where a new token launches with a vesting schedule of 20% at launch and another 20% every quarter, so that by the end of the year all tokens are vested.

Every user (U) is playing a game with the market (M). Each of them can either Sell (S) or Hold (H). Based on relative utility and the assumption that more tokens will become available in the future, the following payoff matrix captures the outcomes (entries are (M, U) payoffs).

| M \ U | S      | H       |
| ----- | ------ | ------- |
| S     | (1, 1) | (1, -3) |
| H     | (0, 2) | (0, 0)  |

The effect of a user's action on the market is smaller than the effect of the market's action on the user. So if both U and M sell at the time of the airdrop, they both derive a utility of (1, 1). But if U holds while M sells, the impact on the user is much larger, hence the payoff (1, -3). If U sells while M holds, the user gets a slightly higher utility, since they are selling at the current price, hence (0, 2); this could just as well be (0, 1) without changing the analysis. The important point is that holding yields a utility of 0. People familiar with the basics of game theory will quickly see that there is a pure-strategy Nash equilibrium here: irrespective of what the other player does, each player's best strategy is the action with the guaranteed maximum payoff. In our case, Sell is the dominant strategy for both U and M, giving the payoff (1, 1).
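To make that concrete, here is a minimal Python sketch (the payoff numbers come from the matrix above; the function name and structure are mine, purely for illustration) that computes each player's best response:

```python
# Payoffs from the matrix above, keyed by (market_action, user_action);
# each value is the (market, user) payoff pair.
PAYOFFS = {
    ("S", "S"): (1, 1),
    ("S", "H"): (1, -3),
    ("H", "S"): (0, 2),
    ("H", "H"): (0, 0),
}
ACTIONS = ("S", "H")

def best_response(player: int, opponent_action: str) -> str:
    """Best action for `player` (0 = market, 1 = user) given the opponent's action."""
    if player == 0:  # market chooses the row; the user's action is fixed
        return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])
    return max(ACTIONS, key=lambda a: PAYOFFS[(opponent_action, a)][1])

# Selling is the best response for both players against either opponent action,
# so (S, S) is a dominant-strategy equilibrium, hence a pure-strategy Nash equilibrium.
for opponent_action in ACTIONS:
    print("vs", opponent_action, "-> market:", best_response(0, opponent_action),
          "user:", best_response(1, opponent_action))
```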

This matrix is only valid when the user knows that more tokens will become available in the future. All assets can theoretically gain value over time, so it would be ideal to hold some portion given a certain probability of the protocol succeeding. Note that the final vesting tranche is identical to a protocol with no vesting schedule, as it leaves the decision entirely to the user and their reasoning about the protocol's success. This implies that every vesting event before the final one simply gives users more chances to dump their tokens.

From the protocol's perspective, this is disastrous. If the token price is taken as an indicator of the market's belief in the protocol's success, then irrespective of how good the protocol is technically, it will get a bad reputation. A no-vesting airdrop, by contrast, leaves it entirely up to the market to decide how it perceives the protocol and to find the optimal price point.

The reality

If we look at most protocols' token distributions, the majority is allocated to investors, the core team and early adopters. For them, vesting or no vesting should not make a difference, and they would ideally Hold until the protocol gains some traction.

On the other hand, vesting still gives them disproportionate power to start dumping at the most opportune moment, especially investors, whose main utility is realising profits over time. Don't get me wrong, I'm not trying to cast investors in a negative light; I'm simply describing their utility from a rational perspective.

Irrespective of how the newly vested tokens are distributed, new circulating supply decreases the FDV (Fully Diluted Valuation) and increases selling pressure. See a relevant report on Low Float, High FDV.

Any solution?

Yes! A trivial-sounding idea that is not so trivial to implement on-chain: randomized vesting schedules. For each user, the time at which their tokens vest is randomized. The market then cannot act in a single way at a single moment, because a fluctuating volume of tokens is being vested at any given time.

For trading, randomized vesting schedules help avoid a nosedive of the token price. For governance, certain things need to be taken care of. Since all tokens are not vested at the same time, the voting process should take all tokens into account, including unvested ones, when determining voting power (if the protocol uses a one-token-one-vote mechanism), as sketched below.
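As a minimal sketch of that point (the data structure and names here are hypothetical, not any particular protocol's API), voting power under one-token-one-vote could count the full allocation rather than only the vested balance:

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    vested: int    # tokens already unlocked for this holder
    unvested: int  # tokens still locked under the vesting schedule

def voting_power(allocation: Allocation) -> int:
    # Count the full allocation so holders whose (randomized) vesting
    # date comes later are not underrepresented in governance.
    return allocation.vested + allocation.unvested

print(voting_power(Allocation(vested=200, unvested=800)))  # -> 1000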

So how can randomized vesting schedules be implemented? Through DVRFs (Distributed Verifiable Random Functions)! The randomness from a VRF can be used to securely assign a random vesting schedule to each user, along the lines of the sketch below.
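As a rough sketch of the idea (not ChainSafe's actual design): hash a shared random beacon value, standing in for the DVRF output, together with each user's address to derive an unpredictable but reproducible unlock time inside the vesting window. All constants and names below are illustrative assumptions.

```python
import hashlib

LAUNCH_TIME = 1_700_000_000        # launch timestamp (illustrative)
VESTING_WINDOW = 365 * 24 * 3600   # spread unlocks over one year (illustrative)

def vesting_time(beacon: bytes, user_address: str) -> int:
    """Derive a per-user unlock timestamp from shared verifiable randomness.

    `beacon` stands in for a DVRF output: anyone can recompute and verify
    the schedule, but no one can predict it before the beacon is revealed.
    """
    digest = hashlib.sha256(beacon + user_address.encode()).digest()
    offset = int.from_bytes(digest[:8], "big") % VESTING_WINDOW
    return LAUNCH_TIME + offset

beacon = b"example-dvrf-round-output"
for user in ("0xAlice", "0xBob", "0xCarol"):
    print(user, vesting_time(beacon, user))  # each user gets a different unlock time
```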

ChainSafe R&D is currently working on implementing a better version of distributed verifiable random functions. More on that coming soon!

Jagrut Kosti

Ever had to go out with your colleagues for lunch and experienced how hard it can be to agree on where to eat? Individual decision making is hard. Collective decision making is much harder (an understatement)! But it is extremely important, especially in decentralized autonomous organizations (DAOs). Coming to a conclusion with code as the mediator is challenging. But before we dive into the complexities of governance, whether on-chain or off-chain, let's understand some fundamentals from social choice theory and the decades of work that have gone into understanding decision making in a democratic structure.

Most DAOs that have implemented a governance mechanism essentially boil their voting method down to a single choice: "yes", "no" or "abstain". While this is intuitively simple and completely eliminates the intricacies involved in ranked-choice voting, it's not the best system for all scenarios. And most projects do not seem to highlight this:

Let's say that for platform X (sarcasm intended), a decision has to be made about which logo, out of logoA, logoB and logoC, should be adopted. One can argue that instead of creating one proposal and tallying by ranked-choice preference, we can split the proposal into three and carry on with the usual voting method adopted by DAOs:

  • Should we go with logoA? Options: "yes", "no", "abstain"
  • Should we go with logoB? Options: "yes", "no", "abstain"
  • Should we go with logoC? Options: "yes", "no", "abstain"

This opens up a can of worms! What happens if logoA and logoB both have "yes" as the winner? Should we then create another proposal to resolve the tie? Are the same voters going to vote on it? What if a significant portion of the old voters doesn't show up? What if new voters show up? Wouldn't this increase voter fatigue?

While most DAOs try to avoid such proposals, they can still come up depending on the topic of discussion. There is a reason why ranked-choice tallying is avoided, but that does not mean it cannot be made practical. In this article, we will look into a few well-known theorems from social choice theory to keep in mind when designing any governance mechanism.

May's Theorem

The reason most DAOs use a single-choice, simple-majority voting method is May's theorem:

For two-alternative voting, May's theorem states that simple majority is the only anonymous, neutral and positively responsive social choice function.

Anonymous: There is no distinction between votes; every voter is treated identically.

Neutral: Reversing each voter's choice reverses the group outcome.

Positively Responsive: If some voters change their preference in favor of one option while the others' remain the same, the proposal's outcome does not change in the opposite direction. If the previous outcome was a tie, the tie is broken in the direction of the change.

The theorem applies only when there are two options. In most DAOs' ballots (the set of options in a proposal), the "abstain" vote is not counted, and hence the theorem applies.

Arrow's Impossibility Theorem

Arrow's impossibility theorem applies only to ranked-choice voting. A voting rule is a method of choosing a winner from a set of options (the ballot) on the basis of voters' rankings of those options. Before jumping into the theorem, let's examine two different voting rules.

Under plurality rule, the winning option is the one ranked first by more voters than any other option. E.g. with 3 options X, Y & Z, if 40% of voters like X best (i.e. rank it first), 35% like Y best and 25% like Z best, then X wins, even though it falls short of an overall majority (greater than 50%).

| 40% | 35% | 25% |
| --- | --- | --- |
| X   | Y   | Z   |

Under majority rule, the winning option is the one that a majority prefers to each other option. E.g. with 3 options X, Y & Z, if 40% of voters rank X>Y>Z, 35% rank Y>Z>X and 25% rank Z>Y>X, the winner is Y, because a majority of voters (35% + 25% = 60%) prefer Y to X and a majority (40% + 35% = 75%) prefer Y to Z.

| 40% | 35% | 25% |
| --- | --- | --- |
| X   | Y   | Z   |
| Y   | Z   | Y   |
| Z   | X   | X   |

Note that plurality rule and majority rule lead to different outcomes. This prompts the question: which outcome is "right"? Or, which rule is better to use? We can then ask an even more general question: among all possible voting rules, which is the best? The sketch below runs both rules on the profile above.
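Here is a short Python sketch of both rules applied to the profile above (ballot weights are the percentages; the helper names are mine):

```python
from collections import Counter

# 40% rank X>Y>Z, 35% rank Y>Z>X, 25% rank Z>Y>X
PROFILE = [(40, ["X", "Y", "Z"]), (35, ["Y", "Z", "X"]), (25, ["Z", "Y", "X"])]

def plurality_winner(profile):
    """Option ranked first by the largest share of voters."""
    firsts = Counter()
    for weight, ballot in profile:
        firsts[ballot[0]] += weight
    return firsts.most_common(1)[0][0]

def condorcet_winner(profile):
    """Option preferred by a majority (>50%) over every other option, if any."""
    options = profile[0][1]
    for x in options:
        if all(
            sum(w for w, ballot in profile if ballot.index(x) < ballot.index(y)) > 50
            for y in options
            if y != x
        ):
            return x
    return None  # no option beats all the others

print(plurality_winner(PROFILE))  # X -> plurality rule
print(condorcet_winner(PROFILE))  # Y -> majority rule
```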

Arrow proposed that we should first identify what we want out of a voting rule, i.e. which properties it should satisfy. The best voting rule will then be the one that satisfies all of them. Those properties are:

1. Decisive/Unrestricted domain

All preferences of all voters should be accounted for, and there should always be exactly one winner.

2. Pareto Principle

If all voters rank X above Y and X is on the ballot, Y should not be the outcome.

3. Non-dictatorship

No single voter's choice should decide the outcome.

4. Independence of irrelevant alternatives

Given a voting rule and voters' rankings, if X is the winner, then removing Y from the ballot (it was not the winning choice, hence irrelevant) should still leave X the winner, i.e. the outcome is independent of irrelevant alternatives. To give an example, in the plurality-rule example above, if option Z is removed and all those who ranked Z first now choose Y, then Y becomes the winning choice with 60%. Politically speaking, Z is a spoiler: even though Z was not going to win in either case, it ended up determining the outcome. This happens time and again in democratic elections. The property serves to rule out spoilers, as the snippet below demonstrates.
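Reusing the plurality_winner sketch from above: dropping Z and moving its voters to Y (their next preference in the earlier profile) flips the plurality winner.

```python
# Same profile as before, but with the spoiler Z removed from every ballot;
# the 25% who ranked Z first now rank Y first (their next preference).
NO_SPOILER = [(40, ["X", "Y"]), (35, ["Y", "X"]), (25, ["Y", "X"])]
print(plurality_winner(NO_SPOILER))  # Y wins with 60%, so Z was a spoiler
```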

Plurality rule is vulnerable to spoilers and hence violates the independence property. Majority rule satisfies the independence property: if X beats each of the other choices, it continues to do so when one of those choices is dropped. But majority rule does not satisfy the decisiveness property, i.e. it doesn't always produce a winner. E.g. in the following table of ranked choices, Y beats Z by a majority (68% to 32%), X beats Y by a majority (67% to 33%) and Z beats X by a majority (65% to 35%), so there is no option that beats both the others. This is called Condorcet's paradox.

| 35% | 33% | 32% |
| --- | --- | --- |
| X   | Y   | Z   |
| Y   | Z   | X   |
| Z   | X   | Y   |
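Running the condorcet_winner sketch from earlier on this profile confirms that no option beats both the others:

```python
# 35% rank X>Y>Z, 33% rank Y>Z>X, 32% rank Z>X>Y
PARADOX = [(35, ["X", "Y", "Z"]), (33, ["Y", "Z", "X"]), (32, ["Z", "X", "Y"])]
print(condorcet_winner(PARADOX))  # None -> Condorcet's paradox
```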

Arrow went looking for a voting rule that satisfies all these properties, but eventually proved that no voting rule satisfies all four!

The name of the theorem is itself a source of pessimism: if something is "impossible", it's pretty hard to accomplish. The theorem prompts the question: given that no voting rule satisfies all the properties, which rule satisfies them most often? One plausible answer is that in majority voting, if one particular class of rankings (e.g. Z>Y>X) is highly unlikely to occur and can be excluded, then majority rule always produces an outcome. In that case, majority rule does not violate the decisiveness property.

There is another theorem, the domination theorem. It states that for any voting rule differing from majority rule, if that rule works well for a particular class of rankings, then majority rule must also work well for that class; furthermore, there must be some other class of rankings for which majority rule works well and the original rule does not. In short, whenever another voting rule works well, majority rule works well too, and there are cases where only majority rule does.

This only helps if it is possible to identify a class of rankings that is highly unlikely to occur. In the case of DAOs, the question arises of who is responsible for identifying and eliminating such a class for each proposal. Simply eliminating the least-voted class of rankings results in utter neglect of the minority.

Gibbard–Satterthwaite Theorem

The Gibbard-Satterthwaite theorem applies to ranked-choice voting that chooses a single winner. It follows from Arrow's impossibility theorem. For every voting rule, one of three things must hold:

  1. There is a dictatorship, i.e. one voter's choice determines the outcome, OR
  2. There are only two choices on the ballot, and hence only two possible outcomes, OR
  3. Tactical voting is possible, i.e. there exists some strategy where a voter's ranked choice does not reflect their sincere opinion but yields the outcome they want.

Borda count: for each voter's ranked-choice ballot over n options, assign n-1 points to the top option, n-2 to the second option, ..., and 0 to the last option. The option with the most points is the winner.

To demonstrate the theorem, consider 3 voters Alice, Bob & Carol and 4 options W, X, Y & Z, with ranked preferences as follows:

| Voter | Choice 1 | Choice 2 | Choice 3 | Choice 4 |
| ----- | -------- | -------- | -------- | -------- |
| Alice | W        | X        | Y        | Z        |
| Bob   | Y        | X        | Z        | W        |
| Carol | Y        | X        | Z        | W        |

Based on the Borda count (W: 3, X: 6, Y: 7, Z: 2), Y is the winner. But suppose Alice changes her ballot as follows:

| Voter | Choice 1 | Choice 2 | Choice 3 | Choice 4 |
| ----- | -------- | -------- | -------- | -------- |
| Alice | X        | W        | Z        | Y        |
| Bob   | Y        | X        | Z        | W        |
| Carol | Y        | X        | Z        | W        |

From the Borda count (W: 2, X: 7, Y: 6, Z: 3), X is now the winner, and Alice's preference for X over Y is still maintained. So there exists a strategy under which the Borda count is manipulable, as the sketch below shows.
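A minimal Borda-count sketch (function and variable names are mine) reproduces both tallies:

```python
def borda_scores(ballots):
    """Assign n-1 points to each ballot's top option down to 0 for the last."""
    n = len(ballots[0])
    scores = {option: 0 for option in ballots[0]}
    for ballot in ballots:
        for rank, option in enumerate(ballot):
            scores[option] += (n - 1) - rank
    return scores

honest   = [["W", "X", "Y", "Z"], ["Y", "X", "Z", "W"], ["Y", "X", "Z", "W"]]
tactical = [["X", "W", "Z", "Y"], ["Y", "X", "Z", "W"], ["Y", "X", "Z", "W"]]

print(borda_scores(honest))    # W: 3, X: 6, Y: 7, Z: 2 -> Y wins
print(borda_scores(tactical))  # W: 2, X: 7, Y: 6, Z: 3 -> X wins on Alice's tactical ballot
```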

Conclusion

Before designing any governance mechanism for DAOs, it is extremely important to understand that on-chain voting is a great tool, but it does not solve the inherent problems of social choice theory. We do want to experiment with new systems, but we should first ground our work in the decades of research that have already established what works and what does not. This article gave an overview of why ranked-choice voting is complex to implement and why single-choice voting, which most DAOs currently opt for, is also prone to manipulation.

Most DAOs let voters weigh their preference by either using more tokens or time-locking tokens (conviction voting). This is limited to putting weight behind one option in single-choice voting. Ranked-choice voting is complex to begin with, and introducing weights can add further complexity and lead to unforeseen outcomes. As the Gibbard-Satterthwaite theorem shows, the Borda count is manipulable, and adding weights opens up even more possibilities to game the system. Nonetheless, it is a great domain to research and experiment in!