
Advanced Data Management Frontier Seminar

Cloud Computing and Cloud Data Management

Jiaheng Lu, Renmin University of China

www.jiahenglu.net

Outline

 Overview of cloud computing
 Google cloud computing technologies: GFS, Bigtable, MapReduce
 Yahoo cloud computing technologies and Hadoop
 Challenges of cloud data management

The new Renmin University course "Distributed Systems and Cloud Computing"

 Overview of distributed systems
 Survey of distributed cloud computing technologies
 Distributed cloud computing platforms
 Distributed cloud computing application development

Part 1: Overview of Distributed Systems

Chapter 1: Introduction to distributed systems
Chapter 2: Client-server architecture
Chapter 3: Distributed objects
Chapter 4: Common Object Request Broker Architecture (CORBA)

Part 2: Survey of Cloud Computing

Chapter 5: Introduction to cloud computing
Chapter 6: Cloud services
Chapter 7: Comparison of cloud-related technologies
  7.1 Grid computing and cloud computing
  7.2 Utility computing and cloud computing
  7.3 Parallel and distributed computing and cloud computing
  7.4 Cluster computing and cloud computing

Part 3: Cloud Computing Platforms

Chapter 8: The three core technologies of the Google cloud platform
Chapter 9: Yahoo cloud platform technologies
Chapter 10: Aneka cloud platform technologies
Chapter 11: Greenplum cloud platform technologies
Chapter 12: Amazon Dynamo cloud platform technologies

Part 4: Cloud Platform Development

Chapter 13: Development on Hadoop
Chapter 14: Development on HBase
Chapter 15: Development on Google Apps
Chapter 16: Development on MS Azure
Chapter 17: Development on Amazon EC2

Cloud computing

Why do we use cloud computing?

Case 1: You write a file and save it locally. If the computer goes down, the file is lost. Files stored in the cloud are always available and never lost.

Why do we use cloud computing?

Case 2: To use IE, QQ, or C++, you must download, install, and then run each one. With cloud computing, you simply get the service from the cloud.

What are the cloud and cloud computing?

Cloud: resources or services delivered on demand over the Internet, with the scale and reliability of a data center.

What are the cloud and cloud computing?

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.

Characteristics of cloud computing

Virtual: software, databases, Web servers, operating systems, storage, and networking are delivered as virtual servers.

On demand: add and remove processors, memory, network bandwidth, and storage as needed.

Types of cloud service

SaaS: Software as a Service
PaaS: Platform as a Service
IaaS: Infrastructure as a Service

SaaS

Software delivery model

 No hardware or software to manage
 Service delivered through a browser
 Customers use the service on demand
 Instant scalability

SaaS

Examples

 Your current CRM package is not managing the load or you simply don’t want to host it in-house. Use a SaaS provider such as Salesforce.com

 Your email is hosted on an exchange server in your office and it is very slow. Outsource this using Hosted Exchange.

PaaS

Platform delivery model

 Platforms are built upon infrastructure, which is expensive
 Estimating demand is not a science!

 Platform management is not fun!

PaaS

Examples

 You need to host a large file (5 MB) on your website and make it available to 35,000 users for only two months. Use Amazon CloudFront.

 You want to start storage services on your network for a large number of files and you do not have the storage capacity…use Amazon S3.

IaaS

Computer infrastructure delivery model

 A platform virtualization environment
 Computing resources, such as storage and processing capacity
 Virtualization taken a step further

IaaS

Examples

 You want to run a batch job but you don’t have the infrastructure necessary to run it in a timely manner. Use Amazon EC2.

 You want to host a website, but only for a few days. Use Flexiscale.

Cloud computing and other computing techniques

The 21st Century Vision of Computing

Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET) project that seeded the Internet, said: "As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country."

The 21st Century Vision of Computing

Sun Microsystems co-founder Bill Joy indicated: "It would take time for these markets to mature and generate this kind of value. Predicting now which companies will capture the value is impossible. Many of them have not even been created yet."

The 21st Century Vision of Computing

Definitions

Utility - Cluster - Grid - Cloud

Definitions

Utility computing is the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility.

Definitions

A computer cluster is a group of linked computers working together so closely that in many respects they form a single computer.

Definitions

Grid computing is the application of several computers to a single problem at the same time, usually a scientific or technical problem that requires a great number of processing cycles or access to large amounts of data.

Definitions

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet.

Grid Computing & Cloud Computing

 They share a lot in common: intention, architecture, and technology
 They differ in: programming model, business model, compute model, applications, and virtualization

Grid Computing & Cloud Computing

 The problems are mostly the same:
  manage large facilities;
  define methods by which consumers discover, request, and use resources provided by the central facilities;
  implement the often highly parallel computations that execute on those resources.

Grid Computing & Cloud Computing

 Virtualization
  Grid: does not rely on virtualization as much as Clouds do; each individual organization maintains full control of its resources
  Cloud: virtualization is an indispensable ingredient of almost every Cloud

Any questions or comments?


Outline

 Overview of cloud computing
 Google cloud computing technologies: GFS, Bigtable, MapReduce
 Yahoo cloud computing technologies and Hadoop
 Challenges of cloud data management

Google Cloud computing techniques

The Google File System

The Google File System (GFS)

A scalable distributed file system for large, distributed, data-intensive applications. Multiple GFS clusters are currently deployed.

The largest ones have:
1000+ storage nodes
300+ terabytes of disk storage
heavily accessed by hundreds of clients on distinct machines

Introduction

Shares many of the same goals as previous distributed file systems: performance, scalability, reliability, etc. The GFS design has been driven by four key observations of Google's application workloads and technological environment.

Intro: Observations 1

1. Component failures are the norm

Constant monitoring, error detection, fault tolerance, and automatic recovery are integral to the system.

2. Huge files (by traditional standards)

Multi-GB files are common; I/O operations and block sizes must be revisited.

Intro: Observations 2

3. Most files are mutated by appending new data

This is the focus of performance optimization and atomicity guarantees.

4. Co-designing the applications and the file system API benefits the overall system by increasing flexibility.

The Design

A cluster consists of a single master and multiple chunkservers, and is accessed by multiple clients.

The Master

Maintains all file system metadata: namespace, access control info, file-to-chunk mappings, chunk (and replica) locations, etc.

Periodically communicates with chunkservers via HeartBeat messages to give instructions and check state.

The Master

Makes sophisticated chunk placement and replication decisions using global knowledge.

For reading and writing, a client contacts the master to get chunk locations, then deals directly with chunkservers, so the master is not a bottleneck for reads/writes.

Chunkservers

Files are broken into chunks. Each chunk has an immutable, globally unique 64-bit chunk handle, assigned by the master at chunk creation.

Chunk size is 64 MB.

Each chunk is replicated on 3 (default) servers.
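To make the chunk abstraction concrete, here is a minimal Python sketch (the master RPC name and client structure are hypothetical, not from the GFS paper) of how a client could map a byte offset in a file to a chunk index before asking the master for that chunk's handle and replica locations:

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the GFS chunk size

def chunk_index(byte_offset):
    # A file is a sequence of fixed-size chunks; the index identifies
    # which chunk holds this byte offset.
    return byte_offset // CHUNK_SIZE

def locate(master, filename, byte_offset):
    # Hypothetical master RPC: returns (chunk_handle, replica_locations)
    # for the chunk covering byte_offset. The client caches this metadata
    # and then talks to a chunkserver directly for the data.
    return master.get_chunk_locations(filename, chunk_index(byte_offset))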

Clients

Linked into applications via the file system API.

Communicate with the master and chunkservers for reading and writing: master interactions only for metadata, chunkserver interactions for data.

Clients cache only metadata; the data itself is too large to cache.

Chunk Locations

The master does not keep a persistent record of chunk and replica locations. It polls chunkservers at startup, and when chunkservers join or leave.

It stays up to date by controlling the placement of new chunks and through HeartBeat messages (while monitoring chunkservers).

Operation Log

A record of all critical metadata changes, stored on the master and replicated on other machines. It defines the order of concurrent operations and is also used to recover the file system state.

System Interactions: Leases and Mutation Order

Leases maintain a consistent mutation order across all chunk replicas. The master grants a lease to one replica, called the primary. The primary chooses the serial mutation order, and all replicas follow this order. This minimizes management overhead for the master.

Atomic Record Append

The client specifies the data to write; GFS chooses the offset, appends the data to each replica at least once, and returns the chosen offset to the client.

Heavily used by Google's distributed applications: no distributed lock manager is needed, because GFS chooses the offset, not the client.

Atomic Record Append: How?

• Follows a similar control flow as other mutations
• The primary tells the secondary replicas to append at the same offset as the primary
• If an append fails at any replica, the client retries; so replicas of the same chunk may contain different data, including duplicates, whole or in part, of the same record

Atomic Record Append: How?

• GFS does not guarantee that all replicas are bitwise identical; it only guarantees that data is written at least once as an atomic unit.

Data must be written at the same offset on all chunk replicas for success to be reported.
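The following Python sketch (a hypothetical client API, not the GFS code) illustrates why at-least-once append can leave duplicates: the client retries the whole append whenever any replica fails, and each attempt writes at a fresh offset.

def record_append(client, chunk, record, max_retries=3):
    # Illustration only: client.primary_append and client.replicate are
    # assumed helper calls standing in for the real GFS protocol.
    for attempt in range(max_retries):
        offset = client.primary_append(chunk, record)   # primary picks the offset
        ok = client.replicate(chunk, offset, record)    # push to secondaries
        if ok:
            return offset   # success is reported only if all replicas wrote
        # A failed attempt may still have written the record on some replicas,
        # leaving partial or duplicate copies behind at the old offset.
    raise IOError("record append failed after retries")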

Detecting Stale Replicas

• The master keeps a chunk version number to distinguish up-to-date and stale replicas
• The version is increased whenever the master grants a lease
• If a replica is not available, its version is not increased
• The master detects stale replicas when chunkservers report their chunks and versions
• Stale replicas are removed during garbage collection

Garbage collection

When a client deletes a file, the master logs the deletion like other changes and renames the file to a hidden name.

The master removes files that have been hidden for longer than 3 days when scanning the file system namespace; the metadata is also erased.

During HeartBeat messages, each chunkserver sends the master a subset of its chunks, and the master tells it which of those chunks no longer have metadata. The chunkserver then removes those chunks on its own.

Fault Tolerance: High Availability

• Fast recovery: the master and chunkservers can restart in seconds
• Chunk replication
• Master replication: "shadow" masters provide read-only access when the primary master is down; mutations are not considered done until recorded on all master replicas

Fault Tolerance: Data Integrity

Chunkservers use checksums to detect corrupt data. Since replicas are not bitwise identical, each chunkserver maintains its own checksums.

For reads, the chunkserver verifies the checksum before sending the data; checksums are updated during writes.
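As a rough illustration (not the GFS implementation; the 64 KB block granularity matches the paper, the helper names are assumptions), a chunkserver can keep one checksum per 64 KB block of a chunk and verify it before returning data:

import zlib

BLOCK = 64 * 1024  # checksums cover 64 KB blocks within each chunk

def checksum_blocks(chunk_bytes):
    # One CRC32 checksum per 64 KB block of the chunk.
    return [zlib.crc32(chunk_bytes[i:i + BLOCK])
            for i in range(0, len(chunk_bytes), BLOCK)]

def verified_read(chunk_bytes, checksums, block_index):
    # Verify the block's checksum before handing data to the client;
    # a mismatch means local corruption, so the read is refused.
    block = chunk_bytes[block_index * BLOCK:(block_index + 1) * BLOCK]
    if zlib.crc32(block) != checksums[block_index]:
        raise IOError("checksum mismatch: corrupt block %d" % block_index)
    return block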

Introduction to MapReduce

MapReduce: Insight

 "Consider the problem of counting the number of occurrences of each word in a large collection of documents."
 How would you do it in parallel?

MapReduce Programming Model

 Inspired by the map and reduce operations commonly used in functional programming languages like Lisp.

 Users implement an interface of two primary methods:
  1. Map: (key1, val1) → (key2, val2)
  2. Reduce: (key2, [val2]) → [val3]

Map operation

 Map, a pure function written by the user, takes an input key/value pair, e.g. (doc-id, doc-content), and produces a set of intermediate key/value pairs.
 Drawing an analogy to SQL, map can be visualized as the group-by clause of an aggregate query.

Reduce operation

 On completion of the map phase, all the intermediate values for a given output key are combined into a list and given to a reducer.

 It can be visualized as the aggregate function (e.g., average) that is computed over all the rows with the same group-by attribute.

Pseudo-code

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
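For readers who want to run the word-count logic locally, here is a small Python sketch of the same map and reduce functions, wired together with an in-memory shuffle. It illustrates the programming model only; it is not Google's MapReduce runtime.

from collections import defaultdict

def map_fn(doc_name, doc_contents):
    # Emit (word, 1) for every word in the document.
    return [(word, 1) for word in doc_contents.split()]

def reduce_fn(word, counts):
    # Sum all partial counts for one word.
    return word, sum(counts)

def run_mapreduce(documents):
    # Shuffle: group intermediate values by key, as the framework would.
    intermediate = defaultdict(list)
    for name, contents in documents.items():
        for key, value in map_fn(name, contents):
            intermediate[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in intermediate.items())

print(run_mapreduce({"doc1": "the cat sat", "doc2": "the cat ran"}))
# -> {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}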

MapReduce: Execution overview

MapReduce: Example

MapReduce in Parallel: Example

MapReduce: Fault Tolerance

 Handled via re-execution of tasks.
 Task completion is committed through the master.

 What happens if a Mapper fails?
  Re-execute completed and in-progress map tasks.
 What happens if a Reducer fails?
  Re-execute in-progress reduce tasks.
 What happens if the Master fails?
  Potential trouble!

MapReduce: Walkthrough of One More Application

MapReduce: PageRank

 PageRank models the behavior of a "random surfer".

PR(x) = (1 - d) + d * Σ_{i=1..n} PR(t_i) / C(t_i)

 C(t_i) is the out-degree of t_i, and (1 - d) is a damping factor (random jump).
 The "random surfer" keeps clicking on successive links at random, not taking content into consideration.
 A page distributes its PageRank equally among all pages it links to.

PageRank : Key Insights

 The effect of each iteration is local: the (i+1)-th iteration depends only on the i-th iteration.
 At iteration i, the PageRank of individual nodes can be computed independently.

PageRank using MapReduce

 Use a sparse matrix representation (M).
 Map each row of M to a list of PageRank "credit" to assign to out-link neighbours.
 These prestige scores are reduced to a single PageRank value for a page by aggregating over them.

PageRank using MapReduce

Map: distribute PageRank “credit” to link targets Reduce: gather up PageRank “credit” from multiple sources to compute new PageRank value Iterate until convergence Source of Image: Lin 2008

Phase 1: Process HTML

 The map task takes (URL, page-content) pairs and maps them to (URL, (PR_init, list-of-urls))
 PR_init is the "seed" PageRank for the URL
 list-of-urls contains all pages pointed to by the URL
 The reduce task is just the identity function

Phase 2: PageRank Distribution

 The reduce task gets (URL, url_list) and many (URL, val) values
 Sum the vals and apply the damping factor d to get the new PR
 Emit (URL, (new_rank, url_list))
 Check for convergence using a non-parallel component (see the sketch below)
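Below is a compact Python sketch of the two phases just described. It is an in-memory illustration with assumed helper names (phase1_map, phase2_map, phase2_reduce), not a Hadoop job: phase 1 seeds each URL, and phase 2 distributes credit in map and aggregates it in reduce.

from collections import defaultdict

D = 0.85  # damping factor

def phase1_map(url, outlinks, n_pages):
    # Seed each page with an initial rank and carry its adjacency list.
    return url, (1.0 / n_pages, outlinks)

def phase2_map(url, value):
    rank, outlinks = value
    # Keep the link structure for the next iteration...
    yield url, ("links", outlinks)
    # ...and distribute this page's rank equally to its out-link neighbours.
    for target in outlinks:
        yield target, ("credit", rank / len(outlinks))

def phase2_reduce(url, values):
    outlinks, total_credit = [], 0.0
    for kind, payload in values:
        if kind == "links":
            outlinks = payload
        else:
            total_credit += payload
    new_rank = (1 - D) + D * total_credit
    return url, (new_rank, outlinks)

def iterate(state):
    # One PageRank iteration: shuffle phase2_map output by key, then reduce.
    grouped = defaultdict(list)
    for url, value in state.items():
        for key, item in phase2_map(url, value):
            grouped[key].append(item)
    return dict(phase2_reduce(url, vals) for url, vals in grouped.items())

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
state = dict(phase1_map(u, links, len(graph)) for u, links in graph.items())
for _ in range(20):   # iterate until convergence in practice
    state = iterate(state)
print({u: round(r, 3) for u, (r, _) in state.items()})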

MapReduce: Some More Apps

MapReduce programs in the Google source tree:
 Distributed grep
 Count of URL access frequency
 Clustering (k-means)
 Graph algorithms
 Indexing systems

MapReduce: Extensions and similar apps

 Pig (Yahoo!)
 Hadoop (Apache)
 DryadLINQ (Microsoft)

Large Scale Systems Architecture using MapReduce

(Diagram) User application → MapReduce → distributed file system (GFS)

BigTable: A Distributed Storage System for Structured Data

Introduction

 BigTable is a distributed storage system for managing structured data.
 Designed to scale to a very large size: petabytes of data across thousands of servers.
 Used for many Google projects: web indexing, Personalized Search, Google Earth, Google Analytics, Google Finance, …
 A flexible, high-performance solution for all of Google's products.

Motivation

 Lots of (semi-)structured data at Google
  URLs: contents, crawl metadata, links, anchors, PageRank, …
  Per-user data: user preference settings, recent queries/search results, …
  Geographic locations: physical entities (shops, restaurants, etc.), roads, satellite image data, user annotations, …
 Scale is large
  Billions of URLs, many versions/page (~20K/version)
  Hundreds of millions of users, thousands of queries/sec
  100 TB+ of satellite image data

Why not just use commercial DB?

 Scale is too large for most commercial databases
 Even if it weren't, cost would be very high
  Building internally means the system can be applied across many projects for low incremental cost
 Low-level storage optimizations help performance significantly
  Much harder to do when running on top of a database layer

Goals

 Want asynchronous processes to be continuously updating different pieces of data
  Want access to the most current data at any time
 Need to support:
  Very high read/write rates (millions of ops per second)
  Efficient scans over all or interesting subsets of data
  Efficient joins of large one-to-one and one-to-many datasets
 Often want to examine data changes over time
  E.g. contents of a web page over multiple crawls

BigTable

 Distributed multi-level map
 Fault-tolerant, persistent
 Scalable
  Thousands of servers
  Terabytes of in-memory data
  Petabytes of disk-based data
  Millions of reads/writes per second, efficient scans
 Self-managing
  Servers can be added/removed dynamically
  Servers adjust to load imbalance

Building Blocks

 Building blocks:
  Google File System (GFS): raw storage
  Scheduler: schedules jobs onto machines
  Lock service: distributed lock manager
  MapReduce: simplified large-scale data processing
 BigTable's use of the building blocks:
  GFS: stores persistent data (SSTable file format)
  Scheduler: schedules jobs involved in BigTable serving
  Lock service: master election, location bootstrapping
  MapReduce: often used to read/write BigTable data

Basic Data Model

 A BigTable is a sparse, distributed, persistent, multi-dimensional sorted map:

(row, column, timestamp) -> cell contents

 Good match for most Google applications
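As a rough illustration of the data model (a toy in-memory sketch in Python, not the Bigtable implementation), the map can be pictured as a sorted dictionary keyed by (row, column, timestamp):

import bisect

class ToyBigtable:
    # A toy, in-memory sketch of the Bigtable data model:
    # (row, column, timestamp) -> cell contents, kept in sorted key order.
    def __init__(self):
        self._keys = []    # sorted list of (row, column, timestamp) keys
        self._cells = {}   # key -> cell contents

    def put(self, row, column, timestamp, value):
        key = (row, column, timestamp)
        if key not in self._cells:
            bisect.insort(self._keys, key)
        self._cells[key] = value

    def get(self, row, column):
        # Return the most recent value for (row, column), like a default read.
        versions = [k for k in self._keys if k[:2] == (row, column)]
        return self._cells[max(versions)] if versions else None

t = ToyBigtable()
t.put("com.cnn.www", "contents:", 1, "<html>...v1")
t.put("com.cnn.www", "contents:", 2, "<html>...v2")
print(t.get("com.cnn.www", "contents:"))   # -> "<html>...v2"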

WebTable Example

 Want to keep a copy of a large collection of web pages and related information
 Use URLs as row keys
 Various aspects of the web page as column names
 Store the contents of web pages in the contents: column, under the timestamps when they were fetched

Rows

 The row name is an arbitrary string
  Access to data in a row is atomic
  Row creation is implicit upon storing data
 Rows are ordered lexicographically
  Rows close together lexicographically usually reside on one or a small number of machines

Rows (cont.)

 Reads of short row ranges are efficient and typically require communication with only a small number of machines.
 One can exploit this property by selecting row keys that give good locality for data access.
 Example: math.gatech.edu, math.uga.edu, phys.gatech.edu, phys.uga.edu vs. edu.gatech.math, edu.gatech.phys, edu.uga.math, edu.uga.phys
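The reversed-hostname trick from the example can be written in one line; this is simply an illustration of how such row keys could be produced:

def row_key(hostname):
    # Reverse the hostname's components so pages from the same domain sort together.
    return ".".join(reversed(hostname.split(".")))

print(row_key("math.gatech.edu"))  # -> "edu.gatech.math"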

Columns

 Columns have a two-level name structure: family:optional_qualifier
 Column family
  Unit of access control
  Has associated type information
 Qualifier gives unbounded columns
  Additional levels of indexing, if desired

Timestamps

 Used to store different versions of data in a cell
  New writes default to the current time, but timestamps for writes can also be set explicitly by clients
 Lookup options:
  "Return most recent K values"
  "Return all values in timestamp range (or all values)"
 Column families can be marked with attributes:
  "Only retain most recent K values in a cell"
  "Keep values until they are older than K seconds"

Implementation – Three Major Components

 Library linked into every client
 One master server, responsible for:
  Assigning tablets to tablet servers
  Detecting the addition and expiration of tablet servers
  Balancing tablet-server load
  Garbage collection
 Many tablet servers
  Each tablet server handles read and write requests to its tablets
  Splits tablets that have grown too large

Implementation (cont.)

 Client data doesn’t move through master server. Clients communicate directly with tablet servers for reads and writes.

 Most clients never communicate with the master server, leaving it lightly loaded in practice.

Tablets

 Large tables are broken into tablets at row boundaries
  A tablet holds a contiguous range of rows
  Clients can often choose row keys to achieve locality
  Aim for ~100 MB to 200 MB of data per tablet
 Each serving machine is responsible for ~100 tablets
  Fast recovery: 100 machines each pick up 1 tablet from a failed machine
  Fine-grained load balancing:
   Migrate tablets away from an overloaded machine
   The master makes load-balancing decisions

Tablet Location

 Since tablets move around from server to server, given a row, how do clients find the right machine?

 Need to find tablet whose row range covers the target row

Tablet Assignment

 Each tablet is assigned to one tablet server at a time.

 Master server keeps track of the set of live tablet servers and current assignments of tablets to servers. Also keeps track of unassigned tablets.

 When a tablet is unassigned, the master assigns it to a tablet server with sufficient room.

API

 Metadata operations
  Create/delete tables and column families, change metadata
 Writes (atomic)
  Set(): write cells in a row
  DeleteCells(): delete cells in a row
  DeleteRow(): delete all cells in a row
 Reads
  Scanner: read arbitrary cells in a BigTable
   Each row read is atomic
   Can restrict returned rows to a particular range
   Can ask for data from just 1 row, all rows, etc.
   Can ask for all columns, just certain column families, or specific columns

Refinements: Compression

 Many opportunities for compression
  Similar values in the same row/column at different timestamps
  Similar values in different columns
  Similar values across adjacent rows
 Two-pass custom compression scheme
  First pass: compress long common strings across a large window
  Second pass: look for repetitions in a small window
 Speed is emphasized, but the space reduction is good (10-to-1)

Refinements: Bloom Filters

 A read operation has to go to disk when the desired SSTable isn't in memory
 Reduce the number of accesses by specifying a Bloom filter
  Allows us to ask whether an SSTable might contain data for a specified row/column pair
 A small amount of memory for Bloom filters drastically reduces the number of disk seeks for read operations
 In practice, most lookups for non-existent rows or columns do not need to touch disk
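To show the idea behind this refinement, here is a tiny generic Bloom filter sketch in Python (an illustration, not Bigtable's implementation; the sizes are arbitrary): membership tests answer "definitely not present" or "possibly present", so a negative answer avoids the disk seek.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive num_hashes bit positions from a single SHA-256 digest.
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i:4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
bf.add("com.cnn.www/contents:")
print(bf.might_contain("com.cnn.www/contents:"))   # True
print(bf.might_contain("com.example/anchor:"))     # almost certainly False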


Outline

 Overview of cloud computing
 Google cloud computing technologies: GFS, Bigtable, MapReduce
 Yahoo cloud computing technologies and Hadoop
 Challenges of cloud data management

Yahoo cloud computing

Yahoo! Cloud Stack

(Diagram) Layers include EDGE (YCS), WEB (VM/OS), APP (VM/OS, App Engine), STORAGE (Sherpa), and BATCH (Hadoop), with horizontal cloud services spanning the layers.

Web Data Management

• Large data analysis (Hadoop): scan-oriented workloads; focus on sequential disk I/O; cost per CPU cycle
• Structured record storage (PNUTS/Sherpa): CRUD, point lookups and short scans; index-organized tables and random I/O; cost per unit of latency
• Blob storage (SAN/NAS): object retrieval and streaming; scalable file storage; cost per GB

The World Has Changed

 Web serving applications need:
  Scalability! (preferably elastic)
  Flexible schemas
  Geographic distribution
  High availability
  Reliable storage
 Web serving applications can do without:
  Complicated queries
  Strong transactions

PNUTS / SHERPA

To Help You Scale Your Mountains of Data

Yahoo! Serving Storage Problem

 Small records: 100 KB or less
 Structured records: lots of fields, evolving
 Extreme data scale: tens of TB
 Extreme request scale: tens of thousands of requests/sec
 Low latency globally: 20+ datacenters worldwide
 High availability: outages cost millions of dollars
 Variable usage patterns: as applications and users change

What is PNUTS/Sherpa?

(Diagram) A table of records, e.g. CREATE TABLE Parts (ID VARCHAR, StockNumber INT, Status VARCHAR, …), replicated across regions.

 Structured, flexible schema
 Parallel database
 Hosted, managed infrastructure
 Geographic replication

What Will It Become?

(Diagram) The same replicated table, extended with indexes and views.

Design Goals

 Scalability
  Thousands of machines
  Easy to add capacity
  Restrict the query language to avoid costly queries
 Geographic replication
  Asynchronous replication around the globe
  Low-latency local access
 High availability and fault tolerance
  Automatically recover from failures
  Serve reads and writes despite failures
 Consistency
  Per-record guarantees
  Timeline model
  Option to relax if needed
 Multiple access paths
  Hash table, ordered table
  Primary, secondary access
 Hosted service
  Applications plug and play
  Share operational cost

Technology Elements

(Diagram) Applications call the PNUTS API (tabular API).

PNUTS: query planning and execution, index maintenance.

Distributed infrastructure for tabular data: data partitioning, update consistency, replication.
 YDOT FS: ordered tables
 YDHT FS: hash tables
 Tribble: pub/sub messaging
 Zookeeper: consistency service

Data Manipulation

 Per-record operations
  Get
  Set
  Delete
 Multi-record operations
  Multiget
  Scan
  Getrange

Tablets — Hash Table

Hash range        Name        Description                              Price
0x0000–0x2AF3     Grape       Grapes are good to eat                   $12
                  Lime        Limes are green                          $9
                  Apple       Apple is wisdom                          $1
0x2AF3–0x911F     Strawberry  Strawberry shortcake                     $900
                  Orange      Arrgh! Don't get scurvy!                 $2
                  Avocado     But at what price?                       $3
                  Lemon       How much did you pay for this lemon?     $1
0x911F–0xFFFF     Tomato      Is this a vegetable?                     $14
                  Banana      The perfect fruit                        $2
                  Kiwi        New Zealand                              $8

Tablets — Ordered Table

Key range   Name        Description                              Price
A–H         Apple       Apple is wisdom                          $1
            Avocado     But at what price?                       $3
            Banana      The perfect fruit                        $2
            Grape       Grapes are good to eat                   $12
H–Q         Kiwi        New Zealand                              $8
            Lemon       How much did you pay for this lemon?     $1
            Lime        Limes are green                          $9
            Orange      Arrgh! Don't get scurvy!                 $2
Q–Z         Strawberry  Strawberry shortcake                     $900
            Tomato      Is this a vegetable?                     $14

Flexible Schema

Posted date   Listing id   Item    Price    Color   Condition
6/1/07        424252       Couch   $570             Good
6/1/07        763245       Bike    $86              Fair
6/3/07        211242       Car     $1123    Red
6/5/07        421133       Lamp    $15

Detailed Architecture

(Diagram) Local region: clients issue requests through a REST API; routers and a tablet controller direct requests to storage units; Tribble message brokers carry updates to remote regions.

Tablet Splitting and Balancing

 Each storage unit has many tablets (horizontal partitions of the table)
 A storage unit may become a hotspot
 Overfull tablets split
 Tablets may grow over time
 Shed load by moving tablets to other servers

QUERY PROCESSING


Accessing Data

(Diagram) 1. The client sends "get key k" to a router; 2. the router forwards the request to the storage unit (SU) holding k; 3. the storage unit returns the record for key k; 4. the router returns the record to the client.

Bulk Read

(Diagram) 1. The client sends the key set {k1, k2, … kn} to a scatter/gather server; 2. the scatter/gather server issues Get k1, Get k2, Get k3, … to the storage units in parallel.

Range Queries in YDOT

 Clustered, ordered retrieval of records

(Diagram) The router maps ordered key ranges (e.g. "Grapefruit…Lime?", "Lime…Pear?") to storage units: storage unit 1 holds Apple–Blueberry, storage unit 3 holds Canteloupe–Lemon, and storage unit 2 holds Lime–Orange, so a range scan touches only the units covering the requested range.

Updates

(Diagram) A write of key k flows from the client through a router to the storage unit holding the record's master copy; the update is committed via the message brokers, SUCCESS is returned to the client, and the brokers asynchronously deliver the write, with its per-record sequence number, to the replicas in other regions.

ASYNCHRONOUS REPLICATION AND CONSISTENCY


Asynchronous Replication


Consistency Model

 Goal: make it easier for applications to reason about updates and cope with asynchrony
 What happens to a record with primary key "Alice"?

(Timeline) The record is inserted, then a series of updates produces versions v.1 through v.8 (Generation 1), followed by a delete. As the record is updated, copies may get out of sync.

Example: Social Alice

(Diagram) The record for user Alice is replicated in the West and East regions. Alice's status changes from ___ (blank) to Busy to Free; depending on which copy a reader sees, it may observe a stale status (???) before the update propagates.

Record Timeline

___  →  Busy  →  Free  →  Free

Consistency Model

(Timeline: versions v.1 through v.8, Generation 1) A read may return a stale version or the current version; in general, reads are served using a local copy.

Consistency Model

(Timeline) With a "read up-to-date" request, the application can ask for and get the current version rather than a possibly stale local copy.

Consistency Model

(Timeline) A "read ≥ v.6" request, or variations such as "read forward": while copies may lag the master record, every copy goes through the same sequence of changes.

Consistency Model

(Timeline) Writes are achieved via a per-record primary-copy protocol. To maximize availability, record masterships are automatically transferred if a site fails. Consistency can be selectively weakened to eventual consistency (local writes that are reconciled using version vectors).

Consistency Model

(Timeline) A "write if = v.7" request returns an ERROR if the record is no longer at version v.7. Test-and-set writes facilitate per-record transactions.
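A minimal Python sketch of the test-and-set idea (a hypothetical client-side illustration, not the PNUTS code): the write carries the version the caller last read, and the storage unit applies it only if that is still the current version.

class Record:
    def __init__(self, value, version=0):
        self.value = value
        self.version = version

def test_and_set_write(record, expected_version, new_value):
    # Apply the write only if the record has not changed since it was read.
    if record.version != expected_version:
        raise ValueError("ERROR: record is at v.%d, expected v.%d"
                         % (record.version, expected_version))
    record.value = new_value
    record.version += 1
    return record.version

alice = Record("Busy", version=7)
test_and_set_write(alice, 7, "Free")      # succeeds, record moves to v.8
# test_and_set_write(alice, 7, "Away")    # would fail: record is now at v.8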

Consistency Techniques

 Per-record mastering
  Each record is assigned a "master region" (which may differ between records)
  Updates to the record are forwarded to the master region
  Ensures consistent ordering of updates
 Tablet-level mastering
  Each tablet is assigned a "master region"
  Inserts and deletes of records are forwarded to the master region
  The master region decides tablet splits
 These details are hidden from the application, except for the latency impact!

Mastering

(Diagram) Three regional replicas of a tablet; one copy is designated the tablet master, and the other replicas forward updates to it.

Bulk Insert/Update/Replace

(Diagram: client, source data, bulk manager)

1. The client feeds records to the bulk manager
2. The bulk loader transfers records to storage units in batches
 • Bypasses routers and message brokers
 • Efficient import into the storage unit

Bulk Load in YDOT

 YDOT bulk inserts can cause performance hotspots
 Solution: preallocate tablets

Index Maintenance

 How to have lots of interesting indexes and views, without killing performance?

 Solution: asynchrony!

 Indexes/views are updated asynchronously when the base table is updated

SHERPA IN CONTEXT


Types of Record Stores

 Query expressiveness (simple → feature rich):
  S3: object retrieval
  PNUTS: retrieval from a single table of objects/records
  Oracle: SQL

Types of Record Stores

 Consistency model (best effort → strong guarantees):
  S3: eventual consistency
  PNUTS: timeline consistency (object-centric consistency)
  Oracle: ACID (program-centric consistency)

Types of Record Stores

 Data model:
  PNUTS, CouchDB: flexibility, schema evolution; object-centric consistency
  Oracle: optimized for fixed schemas; consistency spans objects

Types of Record Stores

 Elasticity (ability to add resources on demand), from inelastic to elastic:
  Oracle: inelastic
  PNUTS: limited (via data distribution); VLSD (very large scale distribution/replication)
  S3: elastic

Data Stores Comparison

 User-partitioned SQL stores (Microsoft Azure SDS, Amazon SimpleDB) versus PNUTS:
  More expressive queries
  Users must control partitioning
  Limited elasticity
 Multi-tenant application databases (Salesforce.com, Oracle On Demand):
  Highly optimized for complex workloads
  Limited flexibility for evolving applications
  Inherit the limitations of the underlying data management system
 Mutable object stores (Amazon S3):
  Object storage versus record management

Application Design Space

(Diagram) Systems are arranged along two axes: access pattern (get a few things vs. scan everything) and data granularity (records vs. files). Sherpa, MySQL, Oracle, BigTable, and Everest sit on the records side; MObStor, YMDB, Filer, and Hadoop sit on the files side.

Alternatives Matrix

(Matrix comparing Sherpa, Y! UDB, MySQL, Oracle, HDFS, BigTable, Dynamo, and Cassandra.)

QUESTIONS?


Hadoop

Problem

 How do you scale up applications?
  Run jobs processing hundreds of terabytes of data
  It takes 11 days to read that much data on 1 computer
 Need lots of cheap computers
  Fixes the speed problem (15 minutes on 1000 computers), but…
 Reliability problems
  In large clusters, computers fail every day
  Cluster size is not fixed
 Need a common infrastructure
  Must be efficient and reliable
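The "11 days" figure is easy to sanity-check with a back-of-the-envelope estimate, assuming roughly 100 TB of data and a single disk streaming at about 100 MB/s:

data_tb = 100                # "100's of terabytes": take 100 TB as an example
disk_mb_per_s = 100          # assumed sequential read rate of one disk

seconds = data_tb * 1e6 / disk_mb_per_s   # 100 TB = 1e8 MB
print(seconds / 86400)       # ~11.6 days on one computer
print(seconds / 1000 / 60)   # ~17 minutes spread across 1000 computers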

Solution

 Open source Apache project
 Hadoop Core includes:
  Distributed file system: distributes data
  Map/Reduce: distributes the application
 Written in Java
 Runs on Linux, Mac OS X, Windows, and Solaris, on commodity hardware

Hardware Cluster of Hadoop

 Typically a 2-level architecture
  Nodes are commodity PCs
  40 nodes/rack
  Uplink from each rack is 8 gigabit
  Rack-internal bandwidth is 1 gigabit

Distributed File System

 Single namespace for the entire cluster, managed by a single namenode
 Files are single-writer and append-only
 Optimized for streaming reads of large files
 Files are broken into large blocks
  Typically 128 MB
  Replicated to several datanodes for reliability
 Access from Java, C, or the command line

Block Placement

 Default is 3 replicas, but this is settable
 Blocks are placed (writes are pipelined):
  On the same node
  On a different rack
  On the other rack
 Clients read from the closest replica
 If the replication for a block drops below the target, it is automatically re-replicated
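As an illustration of that placement policy (a simplified Python sketch with assumed node/rack structures, not the HDFS source), choosing the three targets for a block could look like this:

import random

def place_replicas(writer_node, racks):
    # racks: dict mapping rack id -> list of node names
    # Replica 1: on the writer's own node.
    targets = [writer_node]
    writer_rack = next(r for r, nodes in racks.items() if writer_node in nodes)
    # Replica 2: on a node in a different rack.
    other_rack = random.choice([r for r in racks if r != writer_rack])
    targets.append(random.choice(racks[other_rack]))
    # Replica 3: on another node in that same remote rack.
    remaining = [n for n in racks[other_rack] if n not in targets]
    targets.append(random.choice(remaining))
    return targets

racks = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"], "rack3": ["n5", "n6"]}
print(place_replicas("n1", racks))   # e.g. ['n1', 'n4', 'n3']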

How is Yahoo using Hadoop?

 Started with building better applications
  Scale up web-scale batch applications (search, ads, …)
  Factor out common code from existing systems, so new applications will be easier to write
  Manage the many clusters

Running Production WebMap

 Search needs a graph of the "known" web
  Invert edges, compute link text, whole-graph heuristics
 Periodic batch job using Map/Reduce
  Uses a chain of ~100 map/reduce jobs
 Scale
  1 trillion edges in the graph
  Largest shuffle is 450 TB
  Final output is 300 TB compressed
  Runs on 10,000 cores
  Raw disk used: 5 PB

Terabyte Sort Benchmark

 Started by Jim Gray at Microsoft in 1998
 Sorting 10 billion 100-byte records
 Hadoop won the general category in 209 seconds
  910 nodes
  2 quad-core Xeons @ 2.0 GHz per node
  4 SATA disks per node
  8 GB RAM per node
  1 gigabit Ethernet per node
  40 nodes per rack
  8 gigabit Ethernet uplink per rack
 The previous record was 297 seconds

Hadoop clusters

 We have ~20,000 machines running Hadoop
 Our largest clusters are currently 2,000 nodes
 Several petabytes of user data (compressed, unreplicated)
 We run hundreds of thousands of jobs every month

Research Cluster Usage

Who Uses Hadoop?

 Amazon/A9
 AOL
 Facebook
 Fox Interactive Media
 Google / IBM
 New York Times
 PowerSet (now Microsoft)
 Quantcast
 Rackspace/Mailtrust
 Veoh
 Yahoo!

 More at http://wiki.apache.org/hadoop/PoweredBy

Q&A

For more information:

Website: http://hadoop.apache.org/core

Mailing lists:

[email protected]

[email protected]

Outline

 Overview of cloud computing
 Google cloud computing technologies: GFS, Bigtable, MapReduce
 Yahoo cloud computing technologies and Hadoop
 Challenges of cloud data management

Characteristics of data management in the cloud

 Computing resources are elastic
 Data is replicated
 Data is stored across a large number of distributed nodes

Challenges of cloud data management (1)

 Self-management and self-tuning of data

Challenges of cloud data management (2)

 Query optimization algorithms over a large number of nodes
 Index structures over a large number of nodes

Challenges of cloud data management (3)

 Resource scheduling and load balancing in multi-tenant settings

Further Reading

Adam Silberstein, Brian Cooper, Utkarsh Srivastava, Erik Vee, Ramana Yerneni, Raghu Ramakrishnan. Efficient Bulk Insertion into a Distributed Ordered Table. SIGMOD 2008.

Brian Cooper, Raghu Ramakrishnan, Utkarsh Srivastava, Adam Silberstein, Phil Bohannon, Hans-Arno Jacobsen, Nick Puz, Daniel Weaver, Ramana Yerneni. PNUTS: Yahoo!'s Hosted Data Serving Platform. VLDB 2008.

Parag Agrawal, Adam Silberstein, Brian F. Cooper, Utkarsh Srivastava, Raghu Ramakrishnan. Asynchronous View Maintenance for VLSD Databases. SIGMOD 2009.

Brian F. Cooper, Raghu Ramakrishnan, Utkarsh Srivastava. Cloud Storage Design in a PNUTShell. In Beautiful Data, O'Reilly Media, 2009.

Further Reading

F. Chang et al. Bigtable: A distributed storage system for structured data. In OSDI, 2006.

J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI, 2004.

G. DeCandia et al. Dynamo: Amazon's highly available key-value store. In SOSP, 2007.

S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. In Proc. SOSP, 2003.

D. Kossmann. The state of the art in distributed query processing. ACM Computing Surveys, 32(4):422-469, 2000.