Section III: Distributed Systems Fundamentals
April 9, 2026

Cryptographic trust
Social (behavioral) trust
A valid signature proves who sent a message, but it says nothing about whether they will behave honestly. These layers solve separate problems.
Note
Simplified formula:
\[T(peer) = \sum_{i} \underbrace{S(i)}_{\text{satisfaction}} \times \underbrace{Cr(i)}_{\text{rater credibility}} \times \underbrace{TF(i)}_{\text{context weight}}\]
Each rater \(i\)’s feedback is scaled by how credible they are and how relevant the transaction type is. Low-credibility raters and low-stakes transactions contribute less to the final score.
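The weighted sum above can be sketched directly in code. This is a minimal illustration of the simplified formula, not PeerTrust's full implementation; the names `Rating` and `peer_trust` are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    satisfaction: float    # S(i): rater i's satisfaction, in [0, 1]
    credibility: float     # Cr(i): rater i's credibility, in [0, 1]
    context_weight: float  # TF(i): transaction context weight, in [0, 1]

def peer_trust(ratings: list[Rating]) -> float:
    """T(peer) = sum_i S(i) * Cr(i) * TF(i)."""
    return sum(r.satisfaction * r.credibility * r.context_weight
               for r in ratings)

# A low-credibility rater's perfect score contributes less than a
# high-credibility rater's merely good one:
ratings = [Rating(1.0, 0.1, 1.0),   # 1.0 * 0.1 * 1.0 = 0.10
           Rating(0.8, 0.9, 1.0)]   # 0.8 * 0.9 * 1.0 = 0.72
```

Note how the product structure means any single near-zero factor (an untrusted rater, an irrelevant context) suppresses that rating's entire contribution.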
How do PeerTrust’s factors defend against manipulation?
Simplified formula:
\[c_{ij} = \frac{\max(s_{ij}, 0)}{\sum_{j} \max(s_{ij}, 0)} \qquad \vec{t}^{\,(k+1)} = C^{T}\,\vec{t}^{\,(k)}\]
Where: \(s_{ij}\) = net satisfaction (successful minus failed interactions between peers \(i\) and \(j\)); \(c_{ij}\) = normalized local trust that \(i\) places in \(j\); \(C\) = the matrix of all \(c_{ij}\) values; \(\vec{t}\) = the global trust vector; \(k\) = iteration round.
The result is a single trust vector \(\vec{t}\) where each entry is peer \(j\)’s global reputation.
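The normalize-then-iterate process can be sketched as a power iteration. This follows the simplified formula only; the real EigenTrust algorithm adds pre-trusted anchors and distributed computation, which are omitted here.

```python
import numpy as np

def eigentrust(S: np.ndarray, iters: int = 50) -> np.ndarray:
    """Sketch: normalize net satisfaction s_ij into c_ij, then
    repeatedly apply t^(k+1) = C^T t^(k) until it stabilizes."""
    # c_ij = max(s_ij, 0) / sum_j max(s_ij, 0); zero rows fall back
    # to a uniform distribution so C stays row-stochastic.
    pos = np.maximum(S, 0.0)
    row_sums = pos.sum(axis=1, keepdims=True)
    C = np.divide(pos, row_sums,
                  out=np.full_like(pos, 1.0 / len(S)),
                  where=row_sums > 0)
    n = len(S)
    t = np.full(n, 1.0 / n)   # start from a uniform trust vector
    for _ in range(iters):
        t = C.T @ t           # t^(k+1) = C^T t^(k)
    return t
```

Because each row of \(C\) sums to 1, the iteration preserves total trust: peers can only redistribute reputation, not mint it.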
How do EigenTrust’s anchors defend against manipulation?
Now that we’ve examined each model individually:
PeerTrust: Local / Personalized
EigenTrust: Global / Network-Wide
Note
Many real systems blend both: start with a global baseline (EigenTrust-style) to stay safe among strangers, then personalize with local context (PeerTrust-style) as your own interactions accumulate.
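One way to sketch that blend: interpolate between the global baseline and the local score, shifting weight toward local experience as direct interactions accumulate. The linear ramp and the `saturation` threshold are assumptions for illustration, not part of either paper.

```python
def blended_trust(global_score: float, local_score: float,
                  n_interactions: int, saturation: int = 20) -> float:
    """Weight shifts from the global baseline toward local experience
    as direct interactions with this peer accumulate."""
    w = min(n_interactions / saturation, 1.0)  # 0 = all global, 1 = all local
    return (1 - w) * global_score + w * local_score
```

With no history the global score decides; after `saturation` interactions, your own experience does.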
A complete trust decision has three steps:
Note
This is the “trust but verify” pattern: let reputation aim you toward good peers, let verification confirm the result, then update your aim for next time.
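The three-step loop can be sketched as follows; the function and parameter names (`choose_verify_update`, the learning rate `alpha`) are illustrative, not from either system.

```python
def choose_verify_update(scores: dict[str, float], verify, alpha: float = 0.2):
    """Trust-but-verify sketch:
    1) pick the highest-reputation peer,
    2) verify the result it returns,
    3) nudge its score toward the verified outcome."""
    peer = max(scores, key=scores.get)               # 1. aim via reputation
    ok = verify(peer)                                # 2. confirm the result
    target = 1.0 if ok else 0.0
    scores[peer] += alpha * (target - scores[peer])  # 3. update the aim
    return peer, ok
```

Each failed verification pulls the peer's score down, so a peer that games reputation but fails verification loses the influence it accumulated.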
The attack: A single adversary creates many fake identities to gain outsized influence: flooding feedback, capturing neighbor slots, or surrounding a target.
Scale matters: compare 1 new identity with 1,000:
How each system defends:
The shared principle: influence must be earned, not manufactured. Cheap identities can flood a network, but both systems ensure those identities start with near-zero weight and can only gain influence through sustained, verified good behavior.
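A small sketch of that principle, assuming (as PeerTrust-style credibility weighting suggests) that brand-new identities start with zero credibility until they earn verified interactions:

```python
def weighted_score(ratings: list[tuple[float, float]]) -> float:
    """Each rating is (satisfaction, credibility); influence is
    credibility-weighted. Illustrative sketch, not either paper's exact math."""
    total_cr = sum(cr for _, cr in ratings)
    return (sum(s * cr for s, cr in ratings) / total_cr) if total_cr else 0.0

established = [(0.9, 0.8)] * 5     # a few raters with earned credibility
sybils = [(0.0, 0.0)] * 1000       # 1,000 fresh identities flooding 1-stars
```

Because the Sybil identities carry zero credibility, the thousand-rating flood moves the score by exactly nothing.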
The attack: A node with a bad reputation discards its identity and rejoins as a “new” participant with a clean slate.
How each system defends:
Note
The shared principle: there is no free ride for new identities. Trust must be earned through sustained, verified good behavior, making rebranding a slow and costly strategy.
The attack: A group of 10 peers forms a ring. They rate each other 5 stars on every interaction and rate a competing honest peer 1 star, trying to boost themselves and bury the competition.
How each system defends:
The shared principle: praise without external endorsement stays local. Colluders can inflate each other’s ratings, but without credibility (PeerTrust) or anchor-traced trust (EigenTrust), that inflation never reaches the broader network.
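A numerical sketch of the EigenTrust side, using the anchor-weighted iteration \(\vec{t}^{(k+1)} = (1-a)\,C^{T}\vec{t}^{(k)} + a\,\vec{p}\) (where \(\vec{p}\) puts weight on pre-trusted peers and \(a\) is a mixing constant; peer indices and values here are invented for illustration):

```python
import numpy as np

n = 15
C = np.zeros((n, n))
C[:5, :5] = 1.0 / 4                  # honest peers 0-4 trust each other
np.fill_diagonal(C[:5, :5], 0.0)
for i in range(5, 15):               # ring members 5-14 trust only the ring
    C[i, 5 + (i - 4) % 10] = 1.0

p = np.zeros(n)
p[:5] = 1.0 / 5                      # anchors: trust seeded at honest peers
a = 0.15
t = p.copy()
for _ in range(100):
    t = (1 - a) * (C.T @ t) + a * p  # anchor-weighted iteration
```

No honest peer points at the ring, so no trust ever flows into it: the ring's mutual 5-star ratings circulate internally but its global trust stays at zero.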
The attack: Five malicious peers target an honest node by flooding it with false 1-star ratings after every interaction, trying to destroy its reputation.
How each system defends:
The shared principle: your own experience outweighs distant accusations. Slander fails because low-credibility attackers have limited influence, and direct positive interactions with the target carry more weight than secondhand negatives.
Real-world examples:
The shared principle: trust must be renewed, not banked. Recency weighting ensures that past good behavior cannot indefinitely shield present exploitation. Recent actions always speak louder than old ones.
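Recency weighting is often done with exponential decay; here is a hedged sketch where the `half_life` parameter (in days) is an assumed tuning knob, not a value from either system:

```python
def decayed_score(events: list[tuple[float, float]],
                  now: float, half_life: float = 30.0) -> float:
    """Each event is (timestamp_days, rating in [0, 1]). A rating's
    weight halves every `half_life` days, so recent behavior dominates."""
    weighted = [(0.5 ** ((now - ts) / half_life), r) for ts, r in events]
    total = sum(w for w, _ in weighted)
    return (sum(w * r for w, r in weighted) / total) if total else 0.0

# A peer banks good behavior at day 0, then exploits at day 90: the old
# rating has decayed through three half-lives and barely shields it.
events = [(0.0, 1.0), (90.0, 0.0)]
```

An unweighted average of these two events would be 0.5; with decay, the score collapses toward the recent exploitation.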

Trust and P2P — Army Cyber Institute — April 9, 2026