Replies: 2 comments 1 reply
-
I agree with your analysis. I also recommend grounding it in this work: https://www.jstor.org/stable/30218421
-
Just a small addition to your framework: when you introduce an intermediary F, you shift the costs (Cc) and risks (Rc) to F. But now you need to estimate how risky F is. Arguably, the intermediary's risks are often hidden away and live in a different layer, and assessing them requires a lot of analysis. To save Cc, people usually apply heuristics such as reputation and public opinion, and like any heuristic these can be manipulated. I guess the only way to keep the mediators in check is constant opposition and competition.
-
Update: started adding definitions with references that may be noteworthy or interesting.
We have had numerous discussions in the lab on the definition of trust, and I thought it would be helpful to start collecting different conceptions relevant to us. Recently I was trying to explain to my colleague from the philosophy department why blockchain-enabled trust is something that could be considered ‘good’, and came up with the following explanation. I post it here ‘as is’ in the hope that you will criticise it or add other definitions that you think make more sense.
Disclaimer: this definition was strongly inspired by our last research meeting.
"Think of trust not as a ’trust capital’, or merely as a psychological attitude, but as a peer relation. Alice trusts Bob to act on ‘P’ in such a way that would be beneficial (or not detrimental) for Alice and vice versa. However, such relation happens under the condition of uncertainty and lack of control over the actions of their respective counteragents. Alice betting on this relation carries risks/costs if Bob does not deliver on her expectations, and so does Bob. Let’s call it ‘Rc’ as risk costs. Additionally, both of them carry costs (cognitive or otherwise) of evaluating whether this trust relation is worth it. Let’s call it ‘Cc’ as cognitive costs. This will be important further (and real life Alices often simply ignore Cc resorting to familiar behaviour). Note, this is also not a static snapshot in practice, but dynamic system.
Now, the moral philosopher would say that if such a relation occurs under conditions of coercion, power imbalance, or information asymmetry, it is a morally undesirable, or even morally impermissible, relation. For instance, if Bob has leverage over Alice such that she cannot afford to default on her promises about P, but Bob perfectly well can. I would also argue that Bob's action range (how many actions Bob can feasibly undertake with respect to P) is often inversely correlated with the risks (Rc) that Alice has to bear.
To make this situation morally acceptable we can introduce a third party ‘F’ (a notary, referee, etc.) to redistribute risks and uncertainty, limiting Rc for both parties. However, if such a third party becomes a single point of failure, we have arguably made the situation worse. Now, to interact, Alice and Bob have to trust F, who carries no risks, knows a lot about both of them, and has tools of coercion to make them behave in ways that benefit F. What can be done here? Well, we can create costs for F, limit the field of possible actions for F, or, even better, introduce multiple Fs, each limited in its capacity to harm Alice and Bob and possessing only limited local knowledge.
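Here is a rough sketch of why many limited Fs are preferable to a single F, under assumptions made purely for illustration (each intermediary is independently compromised with probability q, and a compromised F causes exactly the damage it is capable of):

```python
# Illustrative only: the probability q, the damage figure D, and the
# independence assumption are made up for the example.

def worst_case_single_F(q: float, total_damage: float) -> tuple[float, float]:
    # One notary/referee: a single point of failure that can cause the full
    # damage with probability q, and knows everything about Alice and Bob.
    return total_damage, q


def worst_case_many_Fs(q: float, total_damage: float, n: int) -> tuple[float, float]:
    # n intermediaries, each limited to 1/n of the damage and to local
    # knowledge. One compromised F caps out at total_damage / n; recreating
    # the single-F worst case requires all n to collude (independence assumed).
    per_f_cap = total_damage / n
    full_collusion_prob = q ** n
    return per_f_cap, full_collusion_prob


q, D = 0.05, 1000.0
print(worst_case_single_F(q, D))     # (1000.0, 0.05)
print(worst_case_many_Fs(q, D, 10))  # (100.0, ~9.8e-14)
```

Note that the expected damage is not the point here; what changes is that no single compromised F can recreate the old worst case, and none of them holds the full picture of Alice and Bob.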
This is in fact how many decentralised technologies work. So you can ask: isn't the technological component mediating trust here just an artefact that invokes, at best, a narrow concept of reliance? Not really: any working blockchain application (and some other distributed systems) is in fact a complex socio-technical system in which you can identify at least five layers: 1) data structure; 2) consensus protocol; 3) physical infrastructure; 4) code maintainers; 5) system of incentives for participants. Each of these layers is more than a mere artefact. Even the most ‘pure’ component here, cryptography, in fact has a strong social component (e.g. see the case of Dual_EC_DRBG). This means that each layer of this system can suffer failures, of both technical and non-technical kinds. And this is OK! The field of distributed systems is the study of failures and of how to make systems fault tolerant.
Finally, getting back to our Alice and Bob (and sorry for this interlude :)). Should Alice trust some coders with their magic internet money? This question is often asked about Cc as much as about Rc. No, she should not; after all, these coders do not like to trust each other either. That is why they try to minimise the risks (Rc) associated with trusting the different layers and participants of the system, writing open source code and limiting the damage that colluding participants can cause to Alice and to the system. So when blockchain people talk about ‘trust minimisation’ or even ‘trustless’, they mean that for some specific component or layer, Rc is very low or negligible.
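As a back-of-the-envelope illustration (not how any particular protocol defines it), you can read the residual Rc of a layer as the value at stake times the probability that enough participants collude to break it, assuming each of n participants is independently dishonest with probability q and a 1/3 fault threshold:

```python
# A back-of-the-envelope reading, not any protocol's actual definition.
from math import comb


def collusion_probability(n: int, q: float, threshold: float) -> float:
    # P(strictly more than threshold * n of the n participants are dishonest),
    # assuming each is independently dishonest with probability q.
    k_min = int(threshold * n) + 1
    return sum(comb(n, k) * q**k * (1 - q) ** (n - k) for k in range(k_min, n + 1))


def residual_risk(value_at_stake: float, n: int, q: float, threshold: float) -> float:
    return value_at_stake * collusion_probability(n, q, threshold)


# With a 1/3 fault threshold (the flavour of BFT-style consensus) and q = 0.1:
for n in (10, 100, 1000):
    print(n, residual_risk(1_000_000, n, q=0.1, threshold=1 / 3))
# Rc shrinks rapidly as n grows; that shrinking residual risk, not a low Cc,
# is what 'trustless' is gesturing at.
```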
Having said that, I agree that Cc for Alice is high; in fact, it is always high, even for the people who engineer these systems (emergent complexity is a thing). But this is an issue of social epistemology, just as with any expert field, and not a case of someone trying to sell Alice mere ‘reliance’ under the fancy wrapper of ‘trust’. In fact, with a ‘battle-tested’ (demonstrably fault-tolerant) system, Alice can now establish new trust relations online with Bob and Carol and others that we consider less morally problematic than relations mediated by a single black box owned by F."
Definitions in the literature