On Long-Term Cryptocurrency Distribution Models

One of the challenges when creating a new cryptocurrency is figuring out what the distribution model is going to be. Who is going to receive the currency units, at what time, and what is the mechanism that decides? Despite the crucial importance of this question, there has actually been comparatively little thought given to it compared with other aspects of currency, like consensus algorithms and feature sets. The question is particularly challenging because, just like many other problems in the cryptocurrency space that have parallels in the "real world" at large, cryptocurrencies also face the requirement of decentralization: it is considered unacceptable to have a cryptographic platform whose continued operation depends on the existence of any specific party in the long term. Given this rather stringent requirement, how should a new currency distribute itself?

So far, the problem is still in its very early stages of discussion. While the question of short-term distribution is a highly dynamic debate, with different types of asset carryovers, one-way transfers, two-way pegs, pre-mines, pre-sales and other mechanisms coming out almost every month, long-term distribution in nearly every cryptocurrency now follows one of two strategies: nothing at all, or mining. The reason why having a fixed never-growing supply is undesirable is obvious: it encourages wealth concentration and creates a static community of holders without an effective way for new people to get in, and it means that the coin has no way to incentivize any specific kind of activity in the long term. The issue with mining, however, is more subtle. Cryptocurrency mining generally serves two functions: first, it provides a way of securing the network, and second, it serves as a distribution model, giving hundreds of thousands of people around the world a way of getting access to a few coins. So far, mining has been considered necessary for the former, and an effective way of doing the latter. More recently, however, both assumptions have come under attack. First, there has been a substantial amount of interest and research into proof of stake, including strategies such as transactions-as-proof-of-stake, delegated proof of stake and Slasher, a partial solution to nothing-at-stake, suggesting that mining might not be necessary after all. Second, the rise of both ASICs and professional GPU farms is turning mining itself into an increasingly concentrated and quasi-centralized community, so any new mining-distributed currency will quickly be dominated by professional companies and not "the people" at large. If both trends continue, and mining proves to be a bad model for distribution, it will therefore need to be replaced. But then the question is: by what?

So far, we know of several answers:

  • Pretend that the problem does not exist. This is the solution that has been taken by most proof-of-stake cryptocurrencies, and surprisingly enough even proof-of-work currencies, today.
  • Centralized distribution: let some central authority hand out coins according to some formula.
  • Useful proof-of-work: hand out coins to anyone who performs a particular socially useful computation, eg. weather prediction. This algorithm need not be used for consensus; it can exist simply to distribute coins while proof-of-stake does the hard work of maintaining consensus.
  • Algorithmic consensus distribution. Essentially, some kind of dynamic, adaptive consensus-based process for determining who gets new coins.

The second is theoretically the most powerful; currency units can be distributed either to everyone in the world for maximum fairness, or to pay bounties for protocol development, external charitable causes or anything else. However, at the same time, actually using such a mechanism arguably kills the whole point of a cryptocurrency: that it is decentralized and depends on no specific party for its continued existence. Thus, we can think of the centralized distributor as an ideal that we want to approach, sort of like the ideal of a bureaucrat god found in economic efficiency theory, and see how close to that ideal we can get while still maintaining a structure that is guaranteed, or at least highly likely, to remain stable in the long term.

Useful Proof of Work As Distribution: A Relaxed Algorithm

Useful proof of work is likely the simpler idea. Initially, it was considered impossible to make a proof of work based on useful computation because of the verification problem: a proof-of-work task cannot take longer than a few thousand steps, because every node in the network also needs to verify it to accept the block. Primecoin was the closest we got, and even there computing chains of prime numbers is not really all that useful. Now, thanks to the existence of a programming environment with a built-in computational stack trace mechanism, there is actually an alternative approach that removes this particular obstacle, using spot-checking and deposit sacrifices to make sure that work is being done correctly. The approximate algorithm for doing so is as follows.

  1. Suppose that F(k) is a function that takes 32 bytes of random data as an input, carries out some computation taking n steps (where n is fairly large, say ten billion) and then returns a value R which is socially useful.

  2. In order to perform one round of mining, start off by choosing a random m, and let B be the block header. Let k = sha3(B + m) be the seed.

  3. Define a function STEP(P, D) -> D’ where P is the program code, D is some tuple of data perhaps including stack, memory and program counter representing the state of the computation, and STEP carries out one computational step and returns the modified computational state D’.

  4. Let D[0] = { pc: 0, stack: [], memory: [k] } (or some other construction involving k in a different computational model). Let D[i] = STEP(P, D[i-1]) where P is the program corresponding to the evaluation of F. D[n] should, in some appropriate fashion, contain the result of F.

  5. Define H as a hash function of D[i]; something like sha3(pc + str(stack) + str(memory)) suffices as a quick-and-dirty option. Let H[i] = H(D[i]). Compute all D[i] and all H[i], and let W be the root of a Merkle tree of all H[i]. If W < 2^256 / diff, where diff is the difficulty parameter, then the work is valid and the miner is entitled to a reward.

Basically, we take the state of the program after each computational step (we can optionally make STEP process the execution of a few thousand computational steps for greater efficiency; this does not seriously compromise anything), build a Merkle tree out of the whole thing and look at the root. This is somewhat tricky to implement; fortunately, however, the Ethereum virtual machine and block structure are already almost an exact replica of this algorithm, so one could take that code and use it almost verbatim.
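
To make the flow concrete, here is a minimal Python sketch of one mining round under the scheme above. Everything here is illustrative: step is a stand-in for the real STEP function, diff is a hypothetical difficulty parameter, and hashlib's sha3_256 stands in for sha3 (Ethereum's actual hash is keccak-256, which differs slightly).

```python
import hashlib
import os

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def step(program, state):
    """Stand-in for STEP(P, D) -> D': executes one computational step."""
    pc, stack, memory = state
    # ... interpret program[pc] against stack/memory here ...
    return (pc + 1, stack, memory)

def hash_state(state) -> bytes:
    pc, stack, memory = state
    return sha3(str(pc).encode() + str(stack).encode() + str(memory).encode())

def merkle_root(leaves):
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2 == 1:
            layer.append(layer[-1])          # duplicate an odd trailing leaf
        layer = [sha3(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def mine_once(block_header: bytes, program, n_steps: int, diff: int):
    m = os.urandom(32)                       # random nonce m
    k = sha3(block_header + m)               # seed k = sha3(B + m)
    state = (0, [], [k])                     # D[0]: pc=0, empty stack, k in memory
    hashes = [hash_state(state)]
    for _ in range(n_steps):
        state = step(program, state)         # D[i] = STEP(P, D[i-1])
        hashes.append(hash_state(state))     # H[i] = H(D[i])
    root = merkle_root(hashes)               # W: Merkle root over all H[i]
    if int.from_bytes(root, "big") < 2**256 // diff:
        return m, root, state                # valid work plus the useful output
    return None
```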

The algorithm described above by itself has an obvious hole in it: it is not easy to verify, so fraudulent miners can easily flood the network with work that looks valid but is not. Thus, as an anti-spam and anti-fraud mechanism, we require the following:

  1. To be able to mine, nodes must purchase a "mining bond" of price N * R (say, R = 10^18 and N = 100), which returns to the miner after 10000 blocks. Each mining bond allows the miner to submit one work at a time.

  2. If a miner submits a seemingly-valid work, including the m and k values, the Merkle root, and the socially useful output, then the mining bond increases in value by R.

  3. Anyone else with a mining bond can check the work themselves. If the Merkle root at the end is inconsistent, then they can publish a "challenge" transaction consisting of some number (say, 16) of sub-nodes. At that point, the original submitter has the choice of either giving up (defined as not posting a response within 25 blocks), sacrificing their entire mining bond to the checker, or making a "response" transaction pointing out the first of those subnodes that they disagree with. If a response is submitted, the challenger must go down one level further, providing the sixteen subnodes between the last agreed subnode and the first disagreed subnode, and so forth, until the process converges upon the interval between two adjacent H[i] and H[i+1] values in the tree. At that point, the miner must submit the values of D[i] and D[i+1] in a transaction, which is considered valid if and only if STEP(P, D[i]) = D[i+1]. A toy model of this bisection game is sketched below.
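
The convergence logic of this challenge-response game can be sketched in a few lines of Python. This is a toy model, not a protocol implementation: it assumes both parties have already computed their own versions of every H[i] locally and that they disagree at the final index, whereas in reality each round of narrowing would be a separate on-chain transaction.

```python
def bisect_dispute(claimed_h, honest_h, branching=16):
    """Narrow the disagreement down to a single step index i such that both
    sides agree on H[i] but disagree on H[i+1]; the arbiter then only needs
    to check that STEP(P, D[i]) = D[i+1]."""
    lo, hi = 0, len(claimed_h) - 1        # agree on the seed, disagree at the end
    while hi - lo > 1:
        # One side reveals up to `branching` intermediate hashes in (lo, hi].
        span = max((hi - lo) // branching, 1)
        indices = list(range(lo + span, hi, span)) + [hi]
        # The other side points at the first revealed hash it disagrees with.
        first_bad = next(i for i in indices if claimed_h[i] != honest_h[i])
        lo = max(j for j in [lo] + indices if j < first_bad)
        hi = first_bad
    return lo

# Example: traces of 10,001 step hashes that diverge from step 6,000 onward.
honest = [i.to_bytes(32, "big") for i in range(10001)]
claimed = honest[:6000] + [(i + 1).to_bytes(32, "big") for i in range(6000, 10001)]
assert bisect_dispute(claimed, honest) == 5999
```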

The problem is, however, that the process of checking takes as long as the original computation itself, so there does need to be an explanation as to why anyone would do it. If all miners attempt to cheat frequently, then it makes sense to perform spot-checks in order to collect the deposit (which we assumed to be 100x), but if miners realize this and as a result don't cheat, then there is no longer an incentive to check, so no one would check and miners would have free rein to cheat. This is a classic hawk-dove equilibrium paradox, and can be solved by game theory (here, we assume that mining has a cost of 0.5 and a reward of 1):

Payoffs are written as (miner, checker); the miner picks a column and the checker picks a row.

                   Cheats          Does not cheat
  Checks           (-100, 101)     (0.5, -0.5)
  Does not check   (1, 0)          (0.5, 0)

Computing a mixed-strategy equilibrium in this simplified two-player model shows the miner cheating 0.5% of the time and the checker checking 0.5% of the time; under those two conditions, each player is indifferent to the strategy of the other so there is no opportunity for either one to further optimize and cheat. If we push closer to the economic equilibrium of mining and we say that mining has a cost of 0.9, then the equilibrium has a cheating rate of 0.9% and a checking rate of 0.9%. Thus, economically driven spot-checking is a legitimate strategy for ratting out fraudulent mining attempts, and can keep cheating rates arbitrarily low if we are willing to push up collateral requirements.
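
As a sanity check on these numbers, the indifference conditions of the 2x2 game above can be solved directly. A small sketch, assuming the payoff entries from the table; the parameter names are ours, purely for illustration:

```python
def mixed_equilibrium(cheat_caught=-100.0, catch=101.0,
                      cheat_free=1.0, honest=0.5, wasted_check=-0.5):
    """Solve the indifference conditions of the miner/checker game."""
    # Miner indifferent given checking probability q:
    #   q * cheat_caught + (1 - q) * cheat_free = honest
    q = (cheat_free - honest) / (cheat_free - cheat_caught)
    # Checker indifferent given cheating probability p:
    #   p * catch + (1 - p) * wasted_check = 0
    p = -wasted_check / (catch - wasted_check)
    return p, q

p, q = mixed_equilibrium()
print(f"cheating rate ~ {p:.3%}, checking rate ~ {q:.3%}")  # both ~0.5%
```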

So what kind of work can we do? First of all, it might be better not to include computation that is incapable of handling noise, ie. where a bad answer accepted as a good answer does more than 100x as much harm as an actual good answer does good. Second, the algorithm here allows for work that is not easy to verify, but it does nothing to allow work that is data-heavy. For example, SETI is data-heavy: you need to have a picture of the sky in order to search it for aliens. Third, the algorithm must be parallelization-friendly. Running a machine learning algorithm on terabytes of data is not really something that can be split into discrete chunks, even large-sized ones. The second criterion can potentially be relaxed: because there isn't really any benefit to mining with bad data versus good data, a SETI foundation can be set up which provides a stream of data for miners to work with, and adds a very small subsidy to encourage miners to use it. Theoretically, the foundation can even be decentralized and run as a proof-of-stake-voting algorithm on a blockchain. The simplest kind of socially useful computation to use, however, might be genetic algorithms. Genetic algorithms are often used to find solutions to problems that are intractable in closed form, like finding optimal radio antenna shapes, spaceflight trajectories, aerodynamic shapes, and so forth; the blockchain may provide an ideal environment for doing such computation on everyone's nodes for free. Certain classes of data search and aggregation puzzles could also potentially be split up, though they are much more data-heavy, whereas genetic algorithms are close to data-free once launched.
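
To illustrate why genetic algorithms fit the F(k) mold so well, here is a minimal sketch of one packaged as a useful-PoW function: it is fully deterministic given the 32-byte seed k, data-free once launched, and trivially parallel across seeds. The fitness function is a placeholder standing in for a real objective such as antenna gain.

```python
import random

def fitness(genome):
    # Placeholder objective; a real deployment would score antenna gain,
    # trajectory cost, aerodynamic drag, etc.
    return sum(genome)

def genetic_f(k: bytes, pop_size=50, genome_len=32, generations=200):
    """F(k): a deterministic genetic algorithm seeded entirely by k."""
    rng = random.Random(k)                    # all randomness derived from k
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)               # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] = rng.random()  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)              # the socially useful output R
```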

Parliaments And Better Algorithms

Algorithmic consensus distribution is the more interesting possibility. What if there can be a consensus algorithm to distribute tokens over time, where that algorithm can reward arbitrary good work? For example, one might want to pay bounties to people who contribute to the ecosystem, or even to the world in general. The simplest approach here seems to be to select a "parliament": every N blocks, stakeholders can vote on 200 nodes that will make the decision of where the newly generated funds will go.
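
A stake-weighted approval vote is one plausible reading of this selection step. A toy sketch; the function name and data shapes here are our own assumptions, not a specified protocol:

```python
from collections import defaultdict

def elect_parliament(votes, seats=200):
    """votes: iterable of (stake, [approved candidate ids]) pairs.
    Returns the `seats` candidates with the most stake behind them."""
    tally = defaultdict(float)
    for stake, approved in votes:
        for candidate in approved:
            tally[candidate] += stake
    return sorted(tally, key=tally.get, reverse=True)[:seats]
```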

The obvious question to ask is: what are the economics of this? In theory, the nodes will want to select the distribution that optimally benefits the community as a whole, so as to maximize their chance of getting re-elected. However, are there opportunities for corruption? We all know that traditional democracy is highly imperfect, so how do we know that our crypto-enabled wealth distribution scheme will be any better? Fortunately, there is one strong argument to be made that it actually will be. The reason is that traditional democracies have a number of very serious failure modes; for example, a parliament can seize people's property, conscript people into armies for war, restrict free speech, etc. In this case, however, there is a very clear and obvious upper bound on how much damage a parliament could do: it could redirect the money to split among itself. There is also the risk that the parliament will crowdfund something which is a public bad to society but a public good among themselves (eg. a war), but it has no existing military apparatus to latch onto, and no existing public consensus that it is supposed to be using coercive power for any reason at all, so it is in no better a position to do such a thing than any other group commanding a similar level of economic resources. Thus, if we suppose that parliaments fail, say, 33% of the time, then we can see how in a democracy this would be catastrophic, but here it only means that the distribution mechanism becomes 67% as useful as it could be.

Another criticism is that such a mechanism, no matter how it may be constructed, will invariably create some sort of political governance class, and thus will stabilize around a particular small set of political viewpoints, generate its own form of inequality, and eventually lead to a long-term hostile takeover. This would be limited in effect, but even so, at its worst, 100% of the new currency issuance would be siphoned off by a crypto-political elite. One solution is to make parliaments randomly selected (ie. demarchy) rather than elected, reducing the chance of such conspiracies further, but at the cost of weakening the parliament's expected level of expertise on optimal distribution and its ability to form long-term consistent institutions; however, if we want to create a system that has the political image of being neutral and decentralized, that is perhaps a tradeoff we actually want to make.

However, we probably can, and certainly must at least try to, be more imaginative. Parliaments and voting are only the simplest and crudest forms of having a decentralized organization; there are almost certainly better alternatives based on principles such as holarchy, liquid democracy, futarchy and various combinations of these and other ideas that we have not thought of, but that will become possible because of the much higher degree of both interconnectedness and information processing efficiency provided by modern technology. Ideally, as much of the process as possible would be in some fashion automated: the process should function as a DAO, not a DO, and the position of highest power, or the closest philosophical analog of such a thing, should be held by an algorithm and not a set of people. This is perhaps a sacrifice from the point of view of optimality at any particular time but, one might argue, a boon for long-term stability, and an especially appropriate choice for a cryptographic platform that intends to claim some concept of neutrality.

A simple futarchy-based implementation might work as follows. Suppose that there are N projects asking for a grant consisting of the entire currency supply to be distributed during some time period, and the desire is to select the one that will maximize the value of the coin after one year. We create N sub-tokens, T[0] ... T[N-1], where the value of T[i] is zero if project i does not get chosen, but can be redeemed for one currency unit after one year if the project does get chosen. Then, we create sub-tokens R[0] ... R[N-1], where the value of R[i] is zero if the project does not get chosen, or an amount of currency units equal to 2^32 computational steps in value (we include a small useful-PoW or useless-PoW market into the coin for this purpose) if the project does get chosen. Now, suppose that the probability of project i getting chosen is P[i] and the value of the token in the event that project i gets chosen after one year is V[i]. We note that the value of T[i] is P[i] * V[i] and the value of R[i] is P[i] * K, where K is the cost of computing 2^32 computational steps. Hence, the project with maximum T[i] / R[i] also maximizes V[i] / K and hence V[i], so that project is assumed to maximize the value of the coin and is hence chosen. The only challenge left is figuring out what the risks of market manipulation attacks are, assuming there are individual parties with non-negligible market power. This strategy seems more mathematically clean and less vulnerable to turning into something centralized, but on the other hand there seem to be fewer safeguards to prevent it from becoming evil. The best response might simply be, first, that a coin run by an evil DAO will lose public support, and hence will lose value, so the futarchy algorithm itself might select against such undesirable actions. Second, of course, the futarchy does not command a military, and there is no pre-existing public consensus that it is entitled to employ any kind of coercion.
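
The decision rule itself is just a ratio comparison over market prices. A short sketch, assuming we can observe trading prices for the T and R sub-tokens; the function name and the example numbers are ours, purely for illustration:

```python
def choose_project(t_prices, r_prices, k):
    """Pick the project whose conditional token prices imply the highest
    post-choice coin value. Since T[i] trades near P[i] * V[i] and R[i]
    near P[i] * K, the ratio T[i] / R[i] estimates V[i] / K with the
    probability P[i] cancelled out."""
    implied = [k * t / r for t, r in zip(t_prices, r_prices)]  # V[i] estimates
    best = max(range(len(implied)), key=implied.__getitem__)
    return best, implied[best]

# Example with made-up prices for three projects and K = 0.02 coins
# per 2^32 computational steps:
best, value = choose_project([0.60, 0.45, 0.30], [0.010, 0.009, 0.008], 0.02)
print(best, value)  # -> 0, 1.2: project 0 implies the highest coin value
```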

Ultimately, both of these approaches could be combined. One can have a parliament, or a futarchy, select useful proof of work algorithms or even data for specific useful proof of work algorithms, or one can have a parliament or futarchy with useful proof of work as its voting mechanism. However, one important conclusion here is that both of the algorithms described are complicated; there is no easy solution to figuring out how to distribute coins in a good way. Which, given the state of the financial system at large, makes sense; if it were easy to distribute coins fairly, then the US dollar and other fiat currencies would likely have been overthrown in favor of such alternatives in at least some parts of the world a long time ago. Because of the complexity involved, it is unlikely that either of these will be used for ether itself; ether is intended to be boring crypto-gasoline with simple properties to target maximum stability and reliability, not a super-advanced economically innovative decentralized autonomous organization. So if you want to see GeneticAlgoCoin, FutarchyCoin and ParliamentCoin developed, feel free to run them on top of Ethereum as sub-currencies; the Serpent compiler is all yours to play with.

Credit to Neal Koblitz for suggesting the idea of spot-checking and convincing me of the importance of useful PoW, Robin Hanson for inventing futarchy, and realistically probably at least several cryptographers who came up with the concept of multi-round challenge-response protocols before me.
