
Hadoop: Distributed Data Processing
Amr Awadallah
Founder/CTO, Cloudera, Inc.
ACM Data Mining SIG
January 25, 2010
Outline
▪ Scaling for Large Data Processing
▪ What is Hadoop?
▪ HDFS and MapReduce
▪ Hadoop Ecosystem
▪ Hadoop vs RDBMSes
▪ Conclusion
Current Storage Systems Can’t Compute
[Diagram: instrumentation and collection append data (20TB/day) into a storage farm for unstructured data, where filer heads are a bottleneck and most data sits in non-consumption; an ETL grid loads a subset (200GB/day) into an RDBMS that serves interactive apps and ad hoc queries & data mining.]
The Solution: A Store-Compute Grid
[Diagram: instrumentation and collection append data into a combined storage + computation grid; "batch" apps, ETL, and aggregations run directly on the grid, which feeds an RDBMS serving interactive apps and ad hoc queries & data mining.]
What is Hadoop?
▪ A scalable fault-tolerant grid operating system for data storage and processing
▪ Its scalability comes from the marriage of:
  ▪ HDFS: Self-Healing High-Bandwidth Clustered Storage
  ▪ MapReduce: Fault-Tolerant Distributed Processing
▪ Operates on unstructured and structured data
▪ A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
▪ Open source under the friendly Apache License
▪ http://wiki.apache.org/hadoop/
Hadoop History
▪ 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch
▪ 2003-2004: Google publishes the GFS and MapReduce papers
▪ 2004: Cutting adds DFS & MapReduce support to Nutch
▪ 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
▪ 2007: NY Times converts 4TB of archives over 100 EC2 instances
▪ 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
▪ April 2008: Yahoo! does the fastest sort of a TB: 3.5 minutes over 910 nodes
▪ May 2009:
  ▪ Yahoo! does the fastest sort of a TB: 62 seconds over 1,460 nodes
  ▪ Yahoo! sorts a PB in 16.25 hours over 3,658 nodes
▪ June 2009, Oct 2009: Hadoop Summit (750), Hadoop World (500)
▪ September 2009: Doug Cutting joins Cloudera
Hadoop Design Axioms
1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Should Move to Data
4. Simple Core, Modular and Extensible
HDFS: Hadoop Distributed File System
▪ Block Size = 64MB
▪ Replication Factor = 3
▪ Cost/GB is a few ¢/month vs $/month
These cluster defaults can also be set per file from the client API, as sketched below.
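A minimal client-side sketch, assuming a reachable 0.20-era cluster and the stock org.apache.hadoop.fs API; the path and sizes here are illustrative, not from the slides:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // reads core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);     // client handle; metadata calls go to the Name Node

    // Per-file overrides of the cluster defaults shown on the slide:
    // 64MB blocks, 3 replicas.
    short replication = 3;
    long blockSize = 64L * 1024 * 1024;
    FSDataOutputStream out = fs.create(
        new Path("/user/demo/events.log"), // hypothetical path
        true,        // overwrite if it exists
        4096,        // I/O buffer size
        replication,
        blockSize);
    out.writeUTF("hello hdfs");
    out.close();
  }
}
```

Replication factor and block size are per-file properties in HDFS, which is why the create() call accepts them directly.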
MapReduce: Distributed Processing
MapReduce Example for Word Count
SELECT word, COUNT(1) FROM docs GROUP BY word;
cat *.txt | mapper.pl | sort | reducer.pl > out.txt
[Diagram of the dataflow: input splits 1..N feed Map tasks 1..M with (docid, text) records, e.g. "To Be Or Not To Be?"; each map emits (word, count) pairs such as (Be, 5); the shuffle sorts and partitions the pairs so each Reduce task 1..R receives all counts for its words and sums them, e.g. Be: 5 + 12 + 7 + 6 = 30; reducers write Output Files 1..R.]
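The same job expressed in Hadoop's Java API — a minimal sketch of the classic word-count example against the 0.20-era org.apache.hadoop.mapreduce API (class names and path arguments are the conventional example's, not from the slides):

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map: (docid, line of text) -> (word, 1) for each word in the line
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce: (word, [counts...]) -> (word, sum of counts)
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Reusing the reducer as a combiner works here because summation is associative and commutative; it keeps shuffle traffic proportional to distinct words per map rather than total words.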
Hadoop High-Level Architecture
▪ Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
▪ Name Node: maintains the mapping of file blocks to Data Node slaves
▪ Job Tracker: schedules jobs across Task Tracker slaves
▪ Data Node: stores and serves blocks of data
▪ Task Tracker: runs tasks (work units) within a job
▪ Data Nodes and Task Trackers share the same physical nodes, so compute can move to the data
How a client finds the two masters is sketched below.
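A hedged sketch of the client-side wiring in the 0.20 era: two configuration properties point a client at the Name Node and the Job Tracker (host names here are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ClientWiring {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Where the Name Node listens (placeholder host). HDFS clients resolve
    // block locations here, then read/write blocks directly against Data Nodes.
    conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
    // Where the Job Tracker listens (placeholder host). Job clients submit
    // here; Task Trackers then run the tasks within each job.
    conf.set("mapred.job.tracker", "jobtracker.example.com:8021");

    FileSystem fs = FileSystem.get(conf); // RPC handle backed by the Name Node
    System.out.println("Default FS: " + fs.getUri());
  }
}
```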
Apache Hadoop Ecosystem
[Diagram: a layered stack — HDFS (Hadoop Distributed File System) at the base; MapReduce (job scheduling/execution system, exposing Streaming/Pipes APIs) above it; HBase (key-value store) alongside; Pig (data flow) and Hive (SQL) on top of MapReduce; Sqoop bridging to ETL tools, BI reporting, and RDBMSes; Avro (serialization) and ZooKeeper (coordination) supporting the stack.]
Use The Right Tool For The Right Job
Hadoop: when to use?
▪ Affordable Storage/Compute
▪ Structured or Not (Agility)
▪ Resilient Auto Scalability

Relational Databases: when to use?
▪ Interactive Reporting (<1sec)
▪ Multistep Transactions
▪ Interoperability
Economics of Hadoop
▪ Typical hardware:
  ▪ Two quad-core Nehalems
  ▪ 24GB RAM
  ▪ 12 × 1TB SATA disks (JBOD mode, no need for RAID)
  ▪ 1 Gigabit Ethernet card
▪ Cost/node: $5K
▪ Effective HDFS space:
  ▪ ¼ of the 12TB is reserved for temp shuffle space, which leaves 9TB/node
  ▪ 3-way replication leads to 3TB effective HDFS space/node
  ▪ Assuming ~7× compression, that becomes ~20TB of user data per node
▪ Effective cost per user TB: $250/TB ($5K / 20TB)
▪ Other solutions cost in the range of $5K to $100K per user TB
The arithmetic is spelled out in the sketch below.
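The slide's arithmetic made explicit as a tiny runnable sketch (every input is a figure from the slide; the slide rounds ~21TB down to 20TB):

```java
public class NodeEconomics {
  public static void main(String[] args) {
    double rawTb = 12 * 1.0;                // 12 x 1TB SATA disks
    double postShuffleTb = rawTb * 3.0 / 4; // 1/4 reserved for temp shuffle -> 9TB
    double hdfsTb = postShuffleTb / 3;      // 3-way replication -> 3TB effective
    double userTb = hdfsTb * 7;             // ~7x compression -> ~21TB, rounded to 20
    double costPerUserTb = 5000 / 20.0;     // $5K/node over ~20TB -> $250/TB
    System.out.printf("effective user TB/node ~ %.0f, cost ~ $%.0f per user TB%n",
        userTb, costPerUserTb);
  }
}
```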
Sample Talks from Hadoop World ‘09
▪ VISA: Large Scale Transaction Analysis
▪ JP Morgan Chase: Data Processing for Financial Services
▪ China Mobile: Data Mining Platform for Telecom Industry
▪ Rackspace: Cross Data Center Log Processing
▪ Booz Allen Hamilton: Protein Alignment using Hadoop
▪ eHarmony: Matchmaking in the Hadoop Cloud
▪ General Sentiment: Understanding Natural Language
▪ Yahoo!: Social Graph Analysis
▪ Visible Technologies: Real-Time Business Intelligence
▪ Facebook: Rethinking the Data Warehouse with Hadoop and Hive
Slides and Videos at http://www.cloudera.com/hadoop-world-nyc
Cloudera Desktop
Conclusion
Hadoop is a data grid operating system which provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
Contact Information
Amr Awadallah
CTO, Cloudera Inc.
[email protected]
http://twitter.com/awadallah
Online Training Videos and Info:
http://cloudera.com/hadoop-training
http://cloudera.com/blog
http://twitter.com/cloudera
© 2008 Cloudera, Inc. or its licensors. "Cloudera" is a registered trademark of Cloudera, Inc. All rights reserved. 1.0