UIP: Staking Boost

Currently, Penumbra’s stake distribution is more centralized than desirable. While the distribution has been shifting over time and is expected to continue decentralizing, and UI changes like Encourage Decentralization in Minifront's Staking Page have aimed to accelerate that process, it’s worth considering how to economically incentivize decentralization.

This post describes one such idea, Staking Boost, in an initial form for feedback from the community. The core idea is simple:

  • Currently, at the end of each epoch, the chain computes the new issuance over the epoch and then allocates it to each validator’s delegation pool on a pro-rata basis by adjusting the exchange rate for that validator’s delegation token.
  • Instead, a new chain parameter staking_boost_percent would define a percentage of staking rewards allocated to the new staking boost mechanism (for purposes of discussion, let’s say 20%).
  • At the end of each epoch, 80% of the staking rewards are distributed on a pro-rata basis as before, and the other 20% is allocated to a single validator’s delegation pool, selected uniformly at random from the validator set (i.e., all validators equally weighted).
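The per-epoch logic can be sketched as follows (illustrative Python with hypothetical names; the real chain credits rewards via validator exchange rates rather than explicit balances):

```python
import random

def distribute_epoch_rewards(issuance_um, pools, staking_boost_percent=20, rng=random):
    """Sketch of the proposed end-of-epoch reward split.

    `pools` maps validator identity -> delegation pool size in UM.
    Names are illustrative, not the actual Penumbra implementation.
    """
    boost = issuance_um * staking_boost_percent / 100
    pro_rata = issuance_um - boost
    total_stake = sum(pools.values())

    # 80% distributed pro-rata by pool size, as today.
    rewards = {v: pro_rata * size / total_stake for v, size in pools.items()}

    # 20% to a single validator chosen uniformly at random,
    # regardless of pool size (all validators equally weighted).
    winner = rng.choice(sorted(pools))
    rewards[winner] += boost
    return rewards
```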

The effect of this mechanism would be to incentivize consensus participation by smaller validators, for whom the fixed-size boost is larger relative to the size of their delegation pool. To clarify, the boost is applied to the validator’s entire delegation pool (delegators), not to the validator directly, though validator commission would apply as normal.
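Mechanically, crediting the pool would work like any other reward: the validator's delegation token exchange rate is scaled up, so every delegator's tokens appreciate. A sketch, with commission modeled as a simple skim (on the actual chain, commission is routed through the validator's funding streams):

```python
def apply_reward_to_pool(exchange_rate, pool_size_um, reward_um, commission_bps=0):
    """Credit a reward to a delegation pool by scaling the exchange rate.

    Hypothetical sketch: delegators hold a fixed number of delegation
    tokens, so rewarding the pool means raising the UM value of each
    token. Commission (in basis points) is skimmed first here, as a
    simplification of the funding-stream mechanism.
    """
    commission_um = reward_um * commission_bps / 10_000
    to_pool = reward_um - commission_um
    new_rate = exchange_rate * (pool_size_um + to_pool) / pool_size_um
    return new_rate, commission_um
```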

Consider, for instance, the current chain parameters: epochs are normally 34,560 blocks (~2 days), and issuance is set at 63,411 upenumbra per block, or 2,191.484160 UM per epoch. If there were 100 validators and the staking boost was set to 20%, that would be a boost of ~440 UM, with each validator getting the boost 1% of the time, or approximately 2.2 UM per day in expectation. Now consider what this does to a “giant” or “tiny” validator, assuming the current 16.75% staking participation remains unchanged:

  • A “tiny” validator with the 100 UM minimum bond is expected to win the boost ~1.8x per year (180 epochs per year / 100 validators), for ~800 UM of boost in expectation. They also receive (100 / 16,750,000) * (0.8 * 2,200 * 180) = 1.89 UM of pro-rata rewards, giving a year-end delegation pool size of ~902 UM, or ~802% APY.
  • A “giant” validator with 10% of the voting power (1,675,000 UM) is also expected to win the boost ~1.8x per year, for ~800 UM of boost in expectation. They receive (0.1) * (0.8 * 2,200 * 180) = 31,680 UM of pro-rata rewards, giving a year-end delegation pool size of 1,707,480 UM, or 1.94% APY.
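These figures can be checked with a few lines of Python, using the rounded parameters above (the exact expected boost is 792 UM rather than ~800, so the tiny validator's APY comes out slightly below the rounded estimate):

```python
EPOCH_UM = 2_200          # issuance per epoch, rounded from 2,191.48 UM
EPOCHS_PER_YEAR = 180     # ~2-day epochs
VALIDATORS = 100
BOOST_UM = 0.2 * EPOCH_UM # ~440 UM per epoch
TOTAL_STAKE_UM = 16_750_000  # 16.75% participation

def yearly_apy(pool_um):
    """Expected yearly return for a delegation pool of the given size."""
    expected_boost = BOOST_UM * EPOCHS_PER_YEAR / VALIDATORS
    pro_rata = (pool_um / TOTAL_STAKE_UM) * 0.8 * EPOCH_UM * EPOCHS_PER_YEAR
    return 100 * (expected_boost + pro_rata) / pool_um

print(round(yearly_apy(100)))           # tiny validator: 794 (% APY)
print(round(yearly_apy(1_675_000), 2))  # giant validator: 1.94 (% APY)
```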

These numbers illustrate the dynamics only and would not reflect real-world outcomes: delegating to the smallest validators would become much more rewarding, and the resulting growth of the delegation pool dilutes the boost. But this is exactly the behaviour the mechanism seeks to incentivize.

To select a validator at random, the ideal solution would be a VRF, which would prevent a proposer from manipulating the randomness used to select the boosted validator. However, a VRF seems unnecessary in this instance relative to some kind of “good enough” mechanism like using the app hash or the last commit info, because malfeasance would be extremely easy to detect, and a validator who manipulated the mechanism could be socially slashed.
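For example, a “good enough” selection could derive an index from the previous block's app hash (a sketch; the domain-separation tag here is hypothetical):

```python
import hashlib

def boosted_validator_index(prev_app_hash: bytes, n_validators: int) -> int:
    """Derive the boosted validator's index from the previous block's
    app hash. Hashing with a domain-separation tag keeps the mapping
    independent of other uses of the app hash; the modulo bias is
    negligible for a 256-bit digest and ~100 validators.
    """
    digest = hashlib.sha256(b"penumbra-staking-boost" + prev_app_hash).digest()
    return int.from_bytes(digest, "big") % n_validators
```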


This is an interesting proposal!

I’m curious about the reason for the randomness mechanism: would it not be economically equivalent (but more predictable and impossible to game) to instead take the staking boost rewards and equally split them between all validators in each epoch?

If I’m understanding the proposal correctly, a uniform per-epoch re-distribution like this would have the same expectation of reward amortized over time, but would not require randomness (verified or otherwise), and therefore would be simpler to implement.
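To make the claim concrete, a quick simulation (illustrative Python, not chain code) shows the two schemes pay the same per-validator average; only the variance differs:

```python
import random
import statistics

def compare_schemes(epochs, n_validators, boost_um, seed=1):
    """Total boost per validator under the random-winner scheme,
    vs. the deterministic equal split each epoch."""
    rng = random.Random(seed)
    totals = [0.0] * n_validators
    for _ in range(epochs):
        totals[rng.randrange(n_validators)] += boost_um
    equal_split = boost_um * epochs / n_validators  # every validator, exactly
    return statistics.mean(totals), statistics.stdev(totals), equal_split

mean, spread, split = compare_schemes(epochs=18_000, n_validators=100, boost_um=440)
# mean equals split (same expectation), but spread > 0: random payouts are lumpy.
```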

The randomness is more fun. Also the implementation seems easier because there’s no need to compute different rates for all the validators; you just have to pick one.

I think adding any kind of consensus randomness opens up a can of worms that’s better left closed, because generating randomness in a non-biasable and non-grindable way is very annoying and adds a lot of protocol complexity; I think doing an explicit redistribution is simpler, and has the same effect.

Another concern is that the economic resources you gain from validating can also be used to spin up other validators, so the “strategy” for large validators will be to spin up many small validators to maximize the boost. This seems like an undesirable outcome compared to just staking with one validator, which is more transparent and reflects voting power more clearly.

Thanks all for the comments. To make the proposal more concrete, let’s say that the randomness will be chosen as the app hash of the previous block. This is, in principle, vulnerable to manipulation by the proposer. However, I strongly believe that this and other potential issues with the proposal in the discussion above won’t be issues in practice. Let’s see why.

  • Proposer Manipulation: In order to manipulate the app hash, a validator (1) needs to be selected as the proposer, (2) needs to write custom software to manipulate the block contents, and (3) needs to value the staking boost more than the social consequences of their (visible) manipulation. I believe that all three of these factors weigh decisively against this occurring. First, the staking power on Penumbra is currently more concentrated than would be ideal – that’s part of the problem this proposal aims to address – but as a result, the proposer weighting is concentrated with a smaller number of validators, all of whom have longer-term alignment with the entire ecosystem. The proposer at the end of an epoch (who would be in a position to perform manipulation) is most likely to be a large validator, for whom the value of the staking boost is relatively minimal. Second, such a validator would need to have custom software to manipulate the block. While this is certainly possible, it would require software development effort whose cost probably exceeds the potential reward. Third, such manipulation would either be a one-off – in which case the benefit is also minimal – or it would be recurrent – in which case there’s public evidence of malfeasance, exposing the validator to social slashing.
  • Validator Sybiling: There are two key factors that suggest this won’t be a problem. The first is that the value of the boost is constant, so it’s relatively large for solo stakers but trivial for large validators. The second is that the 100-validator limit (or whatever other limit is chosen by governance; I’m using the current value as a shorthand) means that there’s a floor on the effectiveness of sybiling. Consider the marginal decision-making for a validator choosing whether to split into two validators to sybil. For very large validators, there’s little benefit to sybiling, because the value of the boost is trivial relative to their existing staking rewards. But for small validators, splitting into two validators risks both validators being pushed out of the 100-validator active set by other entrants who are also competing in hope of a boost. Fundamentally, there’s no way to have an incentive mechanism that supports small validators / solo stakers without incentivizing sybils to some degree. But I believe the shape of this mechanism minimizes the effect of this in practice.
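Using the example parameters from earlier in the proposal (440 UM boost, 180 epochs/year, 100 validators), the marginal arithmetic looks like this (a sketch only; it does not model competition for active-set slots):

```python
def expected_boost_per_year(n_identities, boost_um=440, epochs_per_year=180,
                            validator_limit=100):
    """Expected annual boost for an operator running `n_identities`
    validators, assuming all of them stay inside the active set."""
    return n_identities * boost_um * epochs_per_year / validator_limit

# Splitting one validator into two doubles the expected boost...
gain = expected_boost_per_year(2) - expected_boost_per_year(1)  # +792 UM/year
# ...which is trivial next to a giant validator's ~31,680 UM of pro-rata
# rewards, while a solo staker who splits risks both identities falling
# out of the 100-validator active set.
```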