Toward A Balance
any protocol changes, and their freedom, and credible threat, to "fork
off" if someone attempts to force changes on them that they consider
hostile (see also: http://vitalik.ca/general/2017/05/08/coordination_problems.html).
Tightly coupled voting is also okay to have in some limited contexts - for example, despite its flaws, miners' ability to vote on the gas limit is a feature that has proven very beneficial on multiple occasions.
One of the main goals of Ethereum protocol design is to minimize complexity: make the protocol as simple as possible, while still making a blockchain that can do what an effective blockchain needs to do. The Ethereum protocol is far from perfect at this, especially since much of it was designed in 2014-16 when we understood much less, but we nevertheless make an active effort to reduce complexity whenever possible.
One of the challenges of this goal, however, is that complexity is difficult to define, and sometimes, you have to trade off between two choices that introduce different kinds of complexity and have different costs. How do we compare?
One powerful intellectual tool that allows for more nuanced thinking about complexity is to draw a distinction between what we will call encapsulated complexity and systemic complexity. Roughly speaking, encapsulated complexity lives inside a sub-system that presents a simple interface to the outside, while systemic complexity shows up in the interactions between a system's parts and cannot be walled off.
BLS signatures and Schnorr signatures are two popular types of cryptographic signature schemes that can be made with elliptic curves.
BLS signatures appear mathematically very simple:
Signing: \(\sigma = H(m) * k\)
Verifying: \(e([1], \sigma) \stackrel{?}{=} e(H(m), K)\)
\(H\) is a hash function, \(m\) is the message, and \(k\) and \(K\) are the private and public keys. So far, so simple. However, the true complexity is hidden inside the definition of the \(e\) function: elliptic curve pairings, one of the most devilishly hard-to-understand pieces of math in all of cryptography.
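To make the simplicity of that interface concrete, here is a minimal sketch of signing and verifying a BLS signature. The use of the py_ecc library (and its Sign/Verify/SkToPk API), the hard-coded toy key, and the exact group convention are illustrative assumptions, not something specified here:

```python
# Minimal BLS sign/verify sketch. Assumptions: the py_ecc library is available,
# its G2ProofOfPossession variant is acceptable, and a hard-coded toy secret key
# is fine for illustration (it never would be in practice).
from py_ecc.bls import G2ProofOfPossession as bls

private_key = 5566                         # k: toy secret scalar
public_key = bls.SkToPk(private_key)       # K = k * [1]
message = b"example message"               # m

signature = bls.Sign(private_key, message)           # sigma = H(m) * k
assert bls.Verify(public_key, message, signature)    # e([1], sigma) ?= e(H(m), K)
```

All of the pairing machinery stays hidden behind those three calls, which is exactly what makes the complexity encapsulated.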
Now, consider Schnorr signatures. Schnorr signatures rely only on basic elliptic curves. But the signing and verification logic is somewhat more complex:
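The equations this refers to are not reproduced in this excerpt; the standard elliptic-curve Schnorr scheme (reconstructed here from the usual textbook form) is: to sign, pick a random nonce \(r\), output the nonce commitment \(R = r * G\), compute \(e = hash(R, m)\), and output \(s = r + k * e\); to verify, recompute \(e = hash(R, m)\) and check \(s * G \stackrel{?}{=} R + e * K\). A toy, deliberately insecure Python sketch (written multiplicatively over a small prime-order subgroup instead of over an elliptic curve; all parameters are illustrative assumptions) shows the extra moving parts:

```python
# Toy Schnorr signatures over the order-q subgroup of Z_p^*. NOT secure:
# the primes are tiny and chosen only to make the steps easy to follow.
import hashlib
import secrets

p, q = 607, 101                      # q divides p - 1 (606 = 2 * 3 * 101)
g = pow(2, (p - 1) // q, p)          # generator of the order-q subgroup
assert g != 1

def challenge(R: int, m: str) -> int:
    # e = hash(R, m), reduced into the scalar field
    data = f"{R}|{m}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(k: int, m: str):
    r = secrets.randbelow(q - 1) + 1          # random nonce
    R = pow(g, r, p)                          # nonce commitment (R = r * G)
    e = challenge(R, m)
    s = (r + k * e) % q                       # s = r + k * e
    return R, s

def verify(K: int, m: str, sig) -> bool:
    R, s = sig
    e = challenge(R, m)
    return pow(g, s, p) == (R * pow(K, e, p)) % p   # s * G ?= R + e * K

k = secrets.randbelow(q - 1) + 1              # private key
K = pow(g, k, p)                              # public key (K = k * G)
assert verify(K, "hello", sign(k, "hello"))
```

The per-signature nonce, the commitment, and the challenge hash are exactly the pieces of state that BLS does away with, at the cost of hiding pairings inside \(e\).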
Elliptic curve pairings in general are a powerful "complexity sponge" in that they contain large amounts of encapsulated complexity, but enable solutions with much less systemic complexity. This is also true in the area of polynomial commitments: compare the simplicity of KZG commitments (which require pairings) and the much more complicated internal logic of inner product arguments (which do not).
One important design choice that appears in many blockchain designs is that of cryptography versus cryptoeconomics. Often (eg. in rollups) this comes in the form of a choice between fraud proofs and validity proofs (aka. ZK-SNARKs).
ZK-SNARKs are complex technology. While the basic ideas behind how they work can be explained in a single post, actually implementing a ZK-SNARK to verify some computation involves many times more complexity than the computation itself (hence why ZK-SNARKs for the EVM are still under development while fraud proofs for the EVM are already in the testing stage). Implementing a ZK-SNARK effectively involves circuit design with special-purpose optimization, working with unfamiliar programming languages, and many other challenges. Fraud proofs, on the other hand, are inherently simple: if someone makes a challenge, you just directly run the computation on-chain. For efficiency, a binary-search scheme is sometimes added, but even that doesn't add too much complexity.
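As a rough illustration of the binary-search scheme just mentioned, here is a hedged sketch; the trace representation, the function names, and the assumption that both parties have already committed to per-step intermediate states (agreeing on the first state and disagreeing on the last) are simplifications for illustration only:

```python
# Sketch of the bisection step in an interactive fraud proof. Assumes both
# parties have committed to a list of intermediate states ("traces") that
# agree at index 0 and disagree at the last index, and that the chain can
# cheaply re-execute a single step on-chain.
def find_disputed_step(claimed_trace, challenger_trace):
    """Binary-search for the adjacent pair of states where the traces diverge."""
    lo, hi = 0, len(claimed_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if claimed_trace[mid] == challenger_trace[mid]:
            lo = mid                     # still in agreement up to mid
        else:
            hi = mid                     # first divergence is at or before mid
    return lo, hi

def resolve_on_chain(step_fn, claimed_trace, lo, hi):
    """Re-execute only the single disputed step; False means the claim is fraudulent."""
    return step_fn(claimed_trace[lo]) == claimed_trace[hi]
```

In a real protocol the bisection happens over multiple challenge-response rounds on-chain, which is precisely where the systemic complexity (timeouts, incentives, extra transaction types) comes from.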
But while ZK-SNARKs are complex, their complexity is encapsulated complexity. The relatively light complexity of fraud proofs, on the other hand, is systemic. Here are some examples of systemic complexity that fraud proofs introduce: they depend on at least one honest, non-censored party watching the chain and on careful incentive analysis (the verifier's dilemma); they impose a challenge period on the rest of the system, delaying finality and withdrawals; and, when done in-consensus, they require extra transaction types and dispute logic in the protocol itself.
Often, the choice with less encapsulated complexity is also the choice with less systemic complexity, and so there is one choice that is obviously simpler. But at other times, you have to make a hard choice between one type of complexity and the other. What should be clear at this point is that complexity is less dangerous if it is encapsulated. The risks from complexity of a system are not a simple function of how long the specification is; a small 10-line piece of the specification that interacts with every other piece adds more complexity than a 100-line function that is otherwise treated as a black box.
The full data is here (https://vitalik.eth.limo/files/misc_files/institution_analysis.ods). I know that many people will have many disagreements over various individual rankings I make, and readers could probably convince me that a few of my scores are wrong; I am mainly hoping that I've included a
Finally, just for fun, I added some data for how long it would take to travel to various locations in space: the … Centauri.
Travel time ~= 750 * distance ^ 0.6
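Evaluating the fitted curve is trivial; a one-line helper (treating the distance and time units abstractly, since they are not restated in this excerpt) would look like:

```python
def estimated_travel_time(distance: float) -> float:
    # Fitted power law from the post: time ~= 750 * distance ** 0.6
    # (units as in the post's dataset; they are not restated in this excerpt).
    return 750 * distance ** 0.6

# Example: distance = 10_000 gives roughly 750 * 10_000 ** 0.6, i.e. about 188,000.
```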
You can find my complete code here.
Here's the resulting chart: