Filer Fundamental Introduction


暨南大學
Filer Training
Simple
Fast
Reliable
Agenda
(1) Filer Fundamental Introduction
1. NetApp Storage Introduction
2. NetApp Storage Overview
(Break Time)
3. FAS2040 Spec Introduction
4. FAS2040 Fundamental Setup
(Break Time)
(2) Filer Configuration
1. NetApp FilerView Management
(Break Time)
Filer Fundamental Introduction
(1) Filer Fundamental Introduction
1. NetApp Storage Introduction
2. NetApp Storage Overview
3. FAS2040 Spec Introduction
4. FAS2040 Fundamental Setup
Filer Fundamental Introduction
NetApp Storage Introduction
Hardware expansion flexibility with no data relocation required
Gigabit Ethernet
Frontend: multiple Ethernet channels
Backend: multiple FC-AL channels
Swap the controller head to upgrade directly and quickly to a higher-end model
Data is unaffected; no backup-and-restore assistance from another storage system or tape system is needed
System hardware upgrades never require data conversion
NetApp FAS upgrade path (chart): maximum raw capacity grows with each model, and upgrading to a higher-end system requires no time-consuming data migration.
FAS270: 16TB
FAS2020: 65TB
FAS3020: 84TB
FAS3070: 504TB
FAS6040: 840TB
FAS6080: 1176TB
By contrast, competitors' upgrade paths require time-consuming data migration:
 EMC: DMX→DMX3, CX→DMX, CX→DMX3, CX→CX, CX→CX3, AX→CX3
 IBM: DS→DS
 HDS: Thunder→Lightning, AMS→USP, WMS→AMS, WMS→USP
 HP: EVA→EVA, XP→XP, EVA→XP, MSA→EVA, MSA→XP
Upgrade Paths (chart): FAS250 (4TB) → FAS270* → FAS3000 Series → FAS6000 Series (e.g., FAS6070, 504TB)
• Simple “head swap”
• Flexible upgrade options
• Zero data migration
• Investment protection
* Disk shelf conversion
NetApp Unified Storage runs under any SAN architecture
Protocol types by SAN (chart, host layer → fabric layer → storage layer):
FC-SAN: Block (SCSI) carried over FCP
IP-SAN: Block (SCSI) via iSCSI, File (NFS), and File (CIFS), all carried over IP on GbE
Unified Storage: Fabric Attached Storage with both FC disk and SATA disk
The access mode for stored data should be chosen according to the data's characteristics (chart):
FC Storage (Fibre Channel, block): database, email, and application servers on a SAN (Storage Area Network)
IP Storage (iSCSI, block): web/streaming servers on a dedicated Ethernet
File sharing and home directories (NFS, CIFS, DAFS): clients on the corporate LAN
Unified Storage serves all of these access modes from one system
Comparison of DAS, SAN, and NAS (chart):
DAS: the application server owns the file system and connects directly to RAID storage over SCSI or FCP
SAN (block): the application server owns the file system and reaches RAID storage over FCP through an FC switch and infrastructure, or over iSCSI through an Ethernet switch and infrastructure
NAS (file): application servers speak NFS or CIFS through an Ethernet switch; the file system and RAID both live on the storage system
Comparison of the two SAN types (chart):
DAS: an application server with its own file system, attached over SCSI or FCP directly to RAID
FC-SAN: application servers with their own file systems, attached over FCP through an FC switch (with virtualization) to RAID
IP-SAN: application servers with their own file systems, attached over iSCSI through an Ethernet switch to RAID; the same Ethernet fabric can also carry NFS and CIFS
Unified Multiprotocol Storage (chart): consolidate enterprise and departmental file and block workloads into a single system
NAS: CIFS, NFS (CIFS over the LAN, NFS over dedicated Ethernet)
SAN: iSCSI, FCP (Fibre Channel)
One product line (FAS2000, FAS3000, FAS6000, V-Series) adapts dynamically to performance and capacity changes
Multivendor storage support with V-Series (HP, EMC, HDS)
NetApp Unified Storage deployment example (chart):
Corporate data center: Windows® servers (Exchange, plus home dirs/network shares over CIFS), Linux® servers (CRM over FC SAN), and UNIX® servers (ERP over iSCSI) are all served by one NetApp FAS series system on the LAN, with a tape library attached
Regional office: Windows servers access Exchange and home dirs over CIFS/NFS from a local NetApp FAS series system
Regional data center: Windows servers run Exchange and SQL Server over iSCSI against a NetApp FAS series system
All sites are connected over the WAN
Filer Fundamental Introduction
NetApp Storage Overview
RAID-DP
FlexVol
NVRAM
Snapshot
SnapRestore
FlexClone
SyncMirror
Cluster
Characteristics of the common RAID levels (table: RAID 0, 1, 0+1, 4, 5, 6, and 1+0 compared against WAFL + RAID-DP on five criteria):
Economy: data protection at the lowest cost
Performance: highest read and write performance
Expansion: dynamically add one or more disks at any time, without waiting
Safety: data survives the failure of any one disk
Safety during rebuild: data survives the failure of *any* two disks
Only WAFL with RAID-DP delivers all five: economy + performance + expansion + safety + safety during rebuild (a command sketch follows below)
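A minimal sketch of creating a RAID-DP aggregate with the Data ONTAP 7-Mode aggr command (the aggregate name, RAID group size, and disk count are hypothetical; raid_dp is the default RAID type in most 7-Mode releases):
netapp1> aggr create aggr1 -t raid_dp -r 16 16
(creates aggregate aggr1 as RAID-DP with a RAID group size of 16, using 16 spare disks)
netapp1> aggr status -r aggr1
(shows the RAID layout, including the two parity disks per RAID group)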
Advantages of NetApp RAID-DP
Data remains protected even while a RAID group is rebuilding (chart, by disk type; figures based on 300GB FC / 320GB SATA disks; source: Network Appliance):
Average annual disk failure rate: up to 5% (ATA), lower for FC
Probability of a disk sector error: up to 2.6% (ATA), lower for FC
Probability that RAID 3/4/5 loses data because a sector error occurs during a disk rebuild: up to 17.9% (ATA)
Probability that RAID-DP loses data because two further sector errors occur during a data-disk rebuild: less than 1 in a billion (~0.0000000001%)
(the loss-probability examples assume RAID groups of 8 and 16 disks respectively)
FlexVol
NetApp's internal virtualization greatly increases performance (chart): the traditional approach dedicates separate disk groups to App 1, App 2, and App 3; FlexVol pools all the disks so every application draws on all spindles, raising access performance to an industry-best benchmark
Instant, dynamic, online file system expansion
This is true Dynamic Online File System Growth:
Grow (or shrink) by as little as a few MB at a time
Grow (or shrink) by as much as several TB at a time
New capacity is usable immediately, with no waiting
No impact on system operation or performance
No file system rebuild required
For UNIX: no change to mount point settings
For Windows: no change to mapped network drive settings
(a command sketch follows this list)
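A minimal sketch with a hypothetical volume name, using the 7-Mode vol size command:
netapp1> vol size vol1 +100m
(grows FlexVol vol1 by 100MB online; the space is usable immediately)
netapp1> vol size vol1 -1g
(shrinks vol1 by 1GB, also online)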
Instant, dynamic, online expansion of the maximum file count
With no change to the file system's capacity, the inode count (the maximum number of files the volume can hold) can be raised at any time
This avoids the situation where a volume hits its file-count limit and can accept no new files even though free space remains
No impact whatsoever on system operation (see the sketch below)
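A minimal sketch with hypothetical values, using the 7-Mode maxfiles command (the inode count can only be raised, never lowered):
netapp1> maxfiles vol1
(reports the current maximum and used inode counts for vol1)
netapp1> maxfiles vol1 2000000
(raises the maximum file count of vol1 to 2,000,000, online)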
Snapshot
Snapshots that never move data blocks
Most space-efficient and highest-performing approach; every volume can keep 255 snapshot backups
The snapshot backup area is protected by RAID, so a single disk failure causes no data loss
Administrators can, at any time, delete the backup from any snapshot point without affecting the contents of the other snapshots
Snapshots can run on mixed hourly, daily, and weekly schedules, or be taken on demand at any time
The snapshot reserve (0–50% of the volume) can be adjusted dynamically at any time without losing existing snapshot contents
When the configured reserve limit is exceeded, the system issues a warning, but snapshots still proceed normally and never overwrite existing snapshot contents
(a command sketch follows below)
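A minimal sketch with hypothetical names and schedules, using the 7-Mode snap commands:
netapp1> snap sched vol1 0 2 6@8,12,16,20
(keeps 0 weekly, 2 nightly, and 6 hourly snapshots taken at 08:00, 12:00, 16:00, and 20:00)
netapp1> snap create vol1 before_upgrade
(takes an on-demand snapshot named before_upgrade)
netapp1> snap reserve vol1 20
(sets the snapshot reserve of vol1 to 20% of the volume)
netapp1> snap list vol1
netapp1> snap delete vol1 before_upgrade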
NetApp Storage Overview
SnapRestore®
Can restore an entire file system within seconds, regardless of its size (diagram):
The active file system holds File: SOME.TXT in disk blocks A, B, and C. After a change, the active file system points at a modified block C´, while Snapshot.0 still references the original A, B, and C. Because the Snapshot.0 backup exists, corruption by the application affects only block C´.
A single command instantly rolls the entire file system (or a single file) back to the backup from a chosen snapshot point in time: the active file system simply points at blocks A, B, and C again, and SOME.TXT is restored. A command sketch follows below.
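A minimal sketch with hypothetical volume, snapshot, and file names, using the 7-Mode snap restore command (requires the snaprestore license):
netapp1> snap restore -t vol -s nightly.0 vol1
(reverts all of vol1 to the nightly.0 snapshot in seconds, regardless of size)
netapp1> snap restore -t file -s hourly.0 /vol/vol1/SOME.TXT
(reverts only the single file SOME.TXT)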
NetApp Storage Overview
FlexClone
FlexClone is built on Snapshot technology (diagram): within an aggregate, a clone FlexVol is created from a snapshot of the parent FlexVol; parent and clone share all unchanged blocks, so the clone initially consumes almost no extra space. A command sketch follows below.
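A minimal sketch with hypothetical names, using the 7-Mode vol clone commands (requires the flex_clone license):
netapp1> snap create vol1 base_snap
netapp1> vol clone create vol1_clone -s none -b vol1 base_snap
(creates writable clone vol1_clone backed by snapshot base_snap of parent vol1)
netapp1> vol clone split start vol1_clone
(optionally splits the clone off later into a fully independent volume)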
A-SIS
FAS Deduplication: Function (diagram): the deduplication process scans the general data and metadata in a flexible volume and replaces duplicate blocks with references, so only a single instance of each block remains in the deduplicated (single instance) volume
FAS Deduplication: Commands
‘sis’ == single instance storage command
License it:
license add <dedup_license>
Turn it on:
sis on <vol>
[Deduplicate existing data]:
sis start -s <vol>
Schedule when to deduplicate, or run manually:
sis config [-s schedule] <vol>
sis start <vol>
Check out what’s happening:
sis status [-l] <vol>
See the savings!
df -s <vol>
FAS Deduplication:
“sis status” and “df -s” Progress Messages and Stages
A deduplication run passes through four stages: Gathering, Sorting, Deduplicating, and Checking.
netapp1> sis status
Path        State    Status  Progress
/vol/vol5   Enabled  Active  25 MB Scanned     (Gathering)
/vol/vol5   Enabled  Active  25 MB Searched    (Sorting)
/vol/vol5   Enabled  Active  40MB (20%) done   (Deduplicating)
/vol/vol5   Enabled  Active  10% Merged        (Deduplicating)
/vol/vol5   Enabled  Active  30MB Verified     (Checking)
OR, once finished:
/vol/vol5   Enabled  Complete
netapp1> df -s /vol/vol5
Filesystem   used      saved    %saved
/vol/vol5/   24072140  9316052  28%
Typical Space Savings Results
In archival and primary storage environments (chart; pairings follow the chart's order):
Video Surveillance: 1%
PACS: 5%
Movies: 7%
Email Archive: 8%
ISOs and PSTs: 16%
Oil & Gas: 30%
Web & MS Office: 30–45%
Home Dirs: 30–50%
Software Archives: 48%
Tech Pubs archive: 52%
SQL: 70%
VMWARE: 40–60%
In data backup environments, space savings can be much higher. For instance, tests with Commvault Galaxy provided a 20:1 space reduction over time, assuming daily full backups with a 2% daily file modification rate. (Reference: http://www.netapp.com/news/press/news_rel_20070515)
Stretch MetroCluster
Campus DR Protection
MetroCluster is a unique, cost-effective synchronous replication solution for combined high availability and disaster recovery within a campus or metro area, at up to 300 meters distance (diagram: Site #1 and Site #2 connected over the LAN and by FC; each site holds its own volume, X or Y, plus a mirror of the partner's, X-mirror or Y-mirror, as the replicated data).
What:
 Replicate synchronously
 Upon disaster, fail over to the partner filer at the remote site to access the replicated data
Benefits:
 No single point of failure
 No data loss
 Fast data recovery
Limitations:
 Distance
Fabric MetroCluster
Metropolitan DR Protection (diagram): Building A and Building B, connected over the LAN, with the cluster interconnect running across up to 100km of dark fibre; Building A holds vol X and the mirror vol Y', Building B holds vol X' and vol Y
Benefits:
Disaster protection
Complete redundancy
Up-to-date mirror
Site failover
Fabric MetroCluster (2)
Metropolitan DR Protection (diagram): the same layout, but the cluster interconnect crosses up to 100km of dark fibre through DWDM equipment at each building; Building A holds vol X and vol Y', Building B holds vol X' and vol Y
Benefits:
Disaster protection
Complete redundancy
Up-to-date mirror
Site failover
Disaster Protection Scenarios
Within the data center: Local High Availability
• Component failures
• Single system failures
Campus distances: Campus Protection
• Human error
• HVAC failures
• Power failures
• Building fire
• Architectural failures
• Planned maintenance downtime
WAN distances from the primary data center: Regional Disaster Protection
• Electric grid failures
• Natural disasters: floods, hurricanes, earthquakes
NetApp DR Solutions Portfolio
Within the data center: Clustered Failover (CFO)
• High system protection
Campus distances: MetroCluster (Stretch)
• Cost-effective zero-RPO protection
Metro distances: MetroCluster (Fabric)
• Cost-effective zero-RPO protection
WAN distances: Async SnapMirror
• Most cost-effective, with RPO from 10 min. to 1 day
WAN distances: Sync SnapMirror
• Most robust zero-RPO protection
Disk Failure Protection Solution Portfolio (chart: increasing cost of protection buys protection against larger classes of failure scenarios):
Checksums
RAID 4: 1 disk
RAID-DP: any 2 disks
RAID 4 + SyncMirror: any 3 disks
RAID-DP + SyncMirror: any 5 disks
NetApp Storage Overview
Cluster
Overview of High Availability
Cluster: A pair of standard NetApp controllers (nodes) that share
access to separately owned sets of disks, in a shared-nothing
architecture
Also referred to as redundant controllers
Logical configuration is active-active. A pseudo active-passive
config is achieved by owning all disks under one controller (except
for a boot disk set under the partner controller).
Dual-ported disks are cross-connected between controllers via
independent Fibre Channel links
High-speed interconnect between controllers acts as a “heartbeat”
link and also as the path for the NVRAM mirror
Provides high availability in the presence of catastrophic
hardware failures
High Availability Architecture (diagram): Controller A and Controller B both attach to the LAN/SAN and are joined by a high-speed interconnect. Each controller has an active FC path to its owned disks and a standby FC path to the partner's disks (shared-nothing model).
Mirrored NVRAM (diagram: Controller A and Controller B on the LAN/SAN, each with its own NVRAM, mirrored across the interconnect)
When a client request is received:
• The controller logs it in its local NVRAM
• The log entry is also synchronously copied to the partner’s NVRAM
• Acknowledgement is returned to the client
Failover mode (diagram: Controller A has failed; Controller B hosts a virtual instance of Controller A and answers to Controller A's IP address or WWPN)
During the failover process:
• A virtual instance of the partner is created on the surviving node
• The surviving node takes over the partner’s disks and replays the partner's NVRAM log entries
• The partner’s IP addresses (or WWPNs) are set on standby NICs (or HBAs), or aliased on top of existing NICs (or HBAs)
Takeover and Giveback
Upon detection of a failure, failover takes 40–120 seconds
On the takeover controller, data service is never impacted and remains fully available during the entire failover operation
On the failed controller, both takeover and giveback are nearly invisible to clients:
NFS cares only a little (typically stateless connections)
CIFS cares more (connection-based, caches, etc.)
Block protocols (FC, iSCSI): depends on the tolerance of the application; host HBAs are typically configured for a (worst-case) 2-minute timeout
Takeover is manual, automatic, or negotiated; giveback is manual or automatic (see the sketch below)
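A minimal sketch of the 7-Mode cluster-failover (cf) commands (the prompt name is hypothetical):
netapp1> cf status
(reports whether the pair is up and whether takeover is enabled)
netapp1> cf takeover
(netapp1 takes over its partner's disks and identity, e.g., for planned maintenance)
netapp1> cf giveback
(returns the resources to the repaired partner)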
High Availability Architecture
Components required:
A second controller
Cluster interconnect kit
4 crossover FC cables
2 cluster licenses
Cluster Interconnect Hardware:
• Open-standards InfiniBand link
• Fast (10 Gbit/sec)
• Redundant connections
• Up to 200m between controllers
• Integrated into the NVRAM5 card
Filer Fundamental Introduction
FAS2040A Spec Introduction
FAS2040 Chassis Front Interface
Power
Fault
Controller A
Controller B
FAS2040 Chassis Rear Interface
NVMEM Status
Power
PCM Fault
PSU Fault
FC 0a Port
FC 0b Port
SAS 0d Port
Console Port
e0P Port (ACP)
BMC Mgmt 10/100 Ethernet Port
GigE e0d Port
GigE e0c Port
GigE e0b Port
GigE e0a Port
FAS2040 Specifications
Filer Specification: FAS2040
Max. Raw Capacity: 136TB
Max. Number of Disk Drives (Internal + External): 136
Max. Volume/Aggregate Size: 16TB
ECC Memory: 8GB
Ethernet 10/100/1000 Copper: 8
Onboard Fibre Channel: 4 (1, 2, or 4Gb)
Filer Fundamental Introduction
FAS2040 Fundamental Setup
FAS2040 Fundamental Setup
1. Setup
2. How to add disks to an aggregate
3. Snapshot & SnapRestore demo
FAS2040 Fundamental Setup
Setup (demo; a console sketch follows below)
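A minimal sketch of the interactive 7-Mode setup command (exact prompts vary by Data ONTAP release; the hostname and addresses here are hypothetical):
netapp1> setup
Please enter the new hostname []: filer01
Do you want to configure virtual network interfaces? [n]: n
Please enter the IP address for Network Interface e0a []: 192.168.1.10
Please enter the netmask for Network Interface e0a [255.255.255.0]:
...
(setup walks through each interface, the default gateway, DNS, and the administrative host; the answers take effect after a reboot)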
How to add disks to an aggregate (demo of adding a disk to aggr0; a console sketch follows below)
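A minimal sketch using the 7-Mode aggr commands (disk counts are hypothetical):
netapp1> aggr status -s
(lists the available spare disks)
netapp1> aggr add aggr0 2
(adds 2 spare disks to aggregate aggr0; the added capacity is usable immediately)
netapp1> aggr status -r aggr0
(verifies the new RAID layout)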
NetApp FilerView Management
FilerView
http://storage_ip/na_admin/
Q&A