Building Peer-to-Peer Systems With Chord, a Distributed Lookup Service


3rd Generation P2P Systems

Ross Anderson (Cambridge), Frans Kaashoek and Robert Morris (MIT)

Traditional distributed computing: client/server

[Diagram: many clients reaching one server across the Internet]

• Successful architecture, and will continue to be so
• But it’s expensive to make server farms scalable and robust

What is a P2P system?

[Diagram: symmetric nodes connected to each other across the Internet]

• A distributed system architecture:
  • No centralized control
  • Nodes are symmetric in function
  • Large number of (perhaps unreliable) nodes
• Enabled by technology improvements

History of P2P Systems

• Speech, post, telephone, email, …
• Usenet perhaps the first ‘proper’ system – added a data structure (newsgroups) and a comms architecture (flood fill)
• Eternity Service (Anderson, 1996) was the first proposal for a censorship-resistant system – a response to the Fishman case
• Followed by Publius, etc.

P2P History (2)

• Following Napster, P2P technology was adopted for cooperative sharing of music files etc.
• Gnutella, Morpheus, KaZaA, BitTorrent, etc.

• Lots of attention from the popular press:
  • “The ultimate form of democracy on the Internet”
  • “The ultimate threat to copyright protection on the Internet”
• There are many interesting legal uses too!

Public / Infrastructure Apps

• Resilient, replicated storage of 500 GB of images of Cambridge’s ancient manuscripts
• BBC Creative Archive – several petabytes of content to be in the public domain by end 2006
• Fixing Usenet – at present collapsing under the weight of spam and binaries
• Providing a distributed news syndication and distribution service

Business Applications

• Security research so far: 90% on confidentiality, 9% on integrity, and 1% on availability
• Business expenditures are the other way round!

• Services need to be robust against:
  • Node and communication failures
  • Load fluctuations (e.g., flash crowds)
  • Attacks (including DDoS)
• This is a serious opportunity for peer-to-peer!

Roadblock (1)

• First problem – getting the incentives right
• Many P2P systems put all the eggs in one basket – if you shared recordings of your school band, you maybe shared porn too
• Our economic modelling shows that even if attack resistance is the primary goal, a federation of clubs is a better way to do it
• Share what you control, and control what you share

Roadblock (2)

• Early systems such as Gnutella were great for sharing popular files, but obscure files got lost
• They would share the latest Britney Spears hit just fine, but lose the ancient manuscripts
• A federation of fan clubs goes some of the way, but not far enough
• As so often, we need to get two components right – the economics and the engineering
• Enter the Distributed Hash Table!

Engineering reliable distributed applications is challenging

• Need many servers to handle load
• At any point in time, some servers will have failed
• If servers are geographically spread, how do you get performance?

Google, Akamai, etc. employ many PhDs!

Distributed hash table (DHT)

[Diagram: a distributed application calls put(key, data) and get(key) → data on a distributed hash table spread across many nodes]

• To the programmer, a DHT behaves like one hard disk:
  • Very large and high performance
  • Automatic load balancing
  • Fault tolerant
  • Self-organizing
• But internally organized as a peer-to-peer system

A DHT has a good interface

• put(key, value) and get(key) → value
• Simple interface!
• API supports a wide range of applications
• DHT imposes no structure/meaning on keys
• Key/value pairs are persistent and global
  • Can store keys in other DHT values
  • And thus build complex data structures (see the sketch below)
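A minimal sketch of this interface, with a single in-process dict standing in for the real distributed implementation; the ToyDHT class and its internals are illustrative assumptions, not any particular system’s API:

```python
# Toy stand-in for a DHT: same put/get interface, no network.
import hashlib


class ToyDHT:
    def __init__(self):
        self._store = {}

    def put(self, key: bytes, value: bytes) -> None:
        # A real DHT would route the pair to the node responsible
        # for this key; here everything lives in one dict.
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]


# Keys carry no imposed structure, so a content hash is a natural choice.
dht = ToyDHT()
article = b"article body"
key = hashlib.sha1(article).digest()
dht.put(key, article)

# Storing keys inside other values builds larger structures: a
# "directory" value here holds the key of the article it points at.
dht.put(b"directory", key)
assert dht.get(dht.get(b"directory")) == article
```

The last two lines illustrate the final bullets: because values can hold other keys, linked structures such as directories or trees can be layered on top of the flat put/get interface.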

A DHT makes a good shared infrastructure

• Many applications can share one DHT service
  • Much as applications share the Internet
• Eases deployment of new applications
• Pools resources from many participants
  • Efficient due to statistical multiplexing
  • Fault-tolerant due to geographic distribution

DHT implementation challenges

1. Scalable lookup (see the sketch after this list)
2. Handling failures
3. Coping with systems in flux
4. Network-awareness for performance
5. Data integrity
6. Balancing load (flash crowds)
7. Robustness with untrusted participants
8. Heterogeneity
9. Anonymity
10. Indexing

Goal: simple, provably-good algorithms
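As a rough illustration of challenge 1, here is a consistent-hashing sketch of the kind of lookup Chord provides: node names and keys are hashed onto one identifier ring, and a key is stored at its successor, the first node whose ID follows the key’s ID. Everything here (names, the 32-bit ring) is an assumption for illustration, and the toy version cheats by giving every node the full sorted node list; Chord’s contribution is reaching the same successor in O(log N) hops via finger tables:

```python
# Consistent hashing: nodes and keys share one identifier ring.
import hashlib
from bisect import bisect_right


def ring_id(name: bytes, bits: int = 32) -> int:
    # Hash a name onto the ring [0, 2^bits).
    return int.from_bytes(hashlib.sha1(name).digest(), "big") % (1 << bits)


class Ring:
    def __init__(self, node_names):
        # Sorted node IDs allow successor lookup by binary search.
        self.nodes = sorted((ring_id(n), n) for n in node_names)

    def successor(self, key: bytes) -> bytes:
        # First node clockwise from the key's ID, wrapping at the top.
        ids = [node_id for node_id, _ in self.nodes]
        idx = bisect_right(ids, ring_id(key)) % len(self.nodes)
        return self.nodes[idx][1]


ring = Ring([b"nodeA", b"nodeB", b"nodeC", b"nodeD"])
print(ring.successor(b"some article key"))  # node responsible for the key
```

A useful property of this placement rule is that adding or removing one node only moves the keys in that node’s arc of the ring, which is what makes it workable for systems in flux (challenge 3).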

Facts about Usenet News

• Bulletin board
• Has grown exponentially in volume
  • Current volume is 1.4 TB/day
• Hosting full Usenet has high costs
  • Large storage requirement
  • Bandwidth required: OC3+ (≈ $30,000/month)
  • Only 50 sites with full feed
• Goal: save Usenet news by reducing needed storage and bandwidth

Posting a Usenet article

[Diagram: servers S1–S4 flooding an article among themselves]

• User posts article to local server
• Server exchanges headers & article with peers (see the toy sketch below)
• Headers allow sorting into newsgroups
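A toy model of this exchange, with purely illustrative names (this is not real NNTP): each server forwards the entire article to its peers and deduplicates on message ID, which is why every full-feed site ends up storing every article:

```python
# Flood fill: whole articles are copied to every connected server.
class NewsServer:
    def __init__(self, name):
        self.name = name
        self.peers = []       # neighbouring servers
        self.articles = {}    # message_id -> (headers, body)

    def receive(self, message_id, headers, body):
        if message_id in self.articles:
            return            # already seen: the flood stops here
        self.articles[message_id] = (headers, body)
        for peer in self.peers:
            peer.receive(message_id, headers, body)


s1, s2, s3, s4 = (NewsServer(n) for n in ("S1", "S2", "S3", "S4"))
s1.peers, s2.peers = [s2, s3], [s4]
s1.receive("<id-1@example>", {"Newsgroups": "sci.misc"}, b"large binary")
assert "<id-1@example>" in s4.articles  # the full body reached every server
```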

Potential solutions

• Remove all binaries
  • Current volume is > 99.9% binaries
  • But binaries are in high demand
• Header-only feeds
  • Serve binaries from posting site
  • Had load and availability problems
• Consolidate servers
  • Outsource news service to a few big providers
  • Centralized: no diversity, open to attack/failure

Our approach: UsenetDHT

• Store articles in a shared DHT
  • Only a “single” copy of Usenet needed
• Can scale DHT to handle increased volume

UsenetDHT architecture

[Diagram: servers S1–S4 exchanging headers, with article bodies stored in a shared DHT]

• User posts article to local server
• Server writes article to DHT
• Server exchanges headers only (see the sketch below)
• All servers know about article
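A hedged sketch of the change, under the same toy model as before: the body is written once into the shared DHT under its content hash, only the compact header floods between servers, and a site fetches a body only when one of its readers actually asks for it. The dict standing in for the DHT and all names are assumptions:

```python
# UsenetDHT idea: flood headers, store bodies once in the shared DHT.
import hashlib

dht = {}  # stand-in for the shared DHT


class DHTNewsServer:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.headers = {}  # message_id -> (headers, body key)

    def post(self, message_id, headers, body):
        key = hashlib.sha1(body).digest()
        dht[key] = body                   # one shared copy of the body
        self.receive(message_id, headers, key)

    def receive(self, message_id, headers, key):
        if message_id in self.headers:
            return
        self.headers[message_id] = (headers, key)   # headers only
        for peer in self.peers:
            peer.receive(message_id, headers, key)

    def read(self, message_id):
        headers, key = self.headers[message_id]
        return dht[key]                   # fetch the body on demand


s1, s2 = DHTNewsServer("S1"), DHTNewsServer("S2")
s1.peers = [s2]
s1.post("<id-1@example>", {"Newsgroups": "sci.misc"}, b"large binary")
assert s2.read("<id-1@example>") == b"large binary"
```

Every server still “knows about” every article through the header flood, but the expensive body traffic and storage are shared across the whole DHT.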

UsenetDHT: potential savings

                 Usenet        UsenetDHT
  Net bandwidth  12 MB/s       120 KB/s
  Storage        10 TB/week    60 GB/week

• Suppose a 300-site network
• Each site reads 1% of all articles (checked in the sketch below)
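A back-of-the-envelope check of these figures, using the slide’s assumptions plus one of my own: that the DHT keeps two replicas of each article (the actual replication factor is not stated here):

```python
# Rough check of the savings table; order-of-magnitude only.
SITES = 300           # slide's assumption
READ_FRACTION = 0.01  # each site reads 1% of all articles
REPLICAS = 2          # my assumption; not stated on the slide

full_feed_bandwidth = 12e6   # bytes/s arriving at a full-feed site
full_feed_storage = 10e12    # bytes/week stored by a full-feed site

# UsenetDHT bandwidth: a site only fetches the bodies its users read.
dht_bandwidth = full_feed_bandwidth * READ_FRACTION
print(f"net bandwidth: {dht_bandwidth / 1e3:.0f} KB/s")  # -> 120 KB/s

# UsenetDHT storage: each site holds its share of the replicated bodies.
dht_storage = full_feed_storage * REPLICAS / SITES
print(f"storage: {dht_storage / 1e9:.0f} GB/week")       # -> ~67 GB/week
```

The bandwidth figure comes out exactly as tabulated, and the storage figure lands at roughly 67 GB/week, consistent with the table’s ~60 GB/week order of magnitude.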

Conclusions

• Peer-to-peer systems have huge promise
• DHTs are a good way to build peer-to-peer applications:
  • Easy to program
  • Single, shared infrastructure for many applications
  • Robust in the face of failures
  • Scalable to large numbers of servers
• Applications range from lawful content distribution through commercial data archiving to resilient infrastructure provision