Transcript Document 7228529

F5 Acopia ARX
Product Demonstration
Troy Alexander
Field Systems Engineer
Agenda
Acopia Technology Introduction
Product Demonstration
Introducing F5's Acopia Technology
The Constraints of Today's Infrastructure
Complex
– Mixed vendors, platforms, file systems
Inflexible
– Access is tightly coupled to file location
– Disruptive to move data
Inefficient
– Resources are under- and over-utilized
Growing rapidly
– 70% annually (80% are files)
The cost of managing storage is five to ten times the acquisition cost
Virtualization Breaks the Constraints
Simplified access
– Consolidated, persistent access points
Flexibility
– Data location not bound to physical resources
Optimized utilization
– Balances load across shared resources
Leverages technology
– Freedom to choose most appropriate file storage
"File virtualization is the hottest new storage technology in plan today…" (TheInfoPro)
Where Does Acopia Fit?
[Diagram: users and applications on the LAN; Acopia file virtualization layered over NAS and file servers from EMC, HDS, HP, IBM and NetApp; SAN virtualization below, managing blocks on the SAN]
Plugs into existing IP / Ethernet switches
– No changes to existing infrastructure
Virtualizes heterogeneous file storage devices that present file systems via NFS and/or CIFS
– NAS, file servers, gateways
– ARX appears as a normal NAS device to clients
– ARX appears as a normal CIFS or NFS client to storage
Does not connect directly to SAN
– Can manage SAN data presented through a gateway or server
SAN virtualization manages blocks, Acopia manages files
– Data management vs. storage management
What does F5's Acopia do?
During the demo, the F5 Acopia ARX will virtualize a multi-protocol (NFS & CIFS) environment
F5 Acopia provides the same functionality for NFS, CIFS and multi-protocol
• Automates common storage management tasks
– Migration
– Storage Tiering
– Load Balancing
• These tasks take place without affecting access to the file data or requiring client re-configuration
What are F5 Acopia's differentiators?
Purpose-built to meet the challenges of global file management
– Separate data (I/O) & control planes with dedicated resources
– Enterprise scale: >2B files, 24 Gbps in a single switch
Real-time management of live data
– Unique dynamic load balancing
– Unique in-line file placement (files are not placed on the wrong share and then migrated after the fact)
– No reliance on stubs or redirection
Superior reliability, availability and supportability
– Integrity of in-flight operations ensured with redundant NVRAM
– Enterprise server class redundancy
– Comprehensive logging, reporting, SNMP, call-home, port mirroring
Proven in Fortune 1000 enterprises
– Merrill Lynch, Bear Stearns, Yahoo, Warner Music, Toshiba, United Rentals, Novartis, Raytheon, The Hartford, Dreamworks, etc.
What is the F5 Acopia architecture?
[Diagram: Adaptive Resource Switch between clients and NAS & file servers; the control path keeps a local transaction log mirrored to a remote transaction log; the data path (fast path) carries file I/O]
Patented tiered architecture separates data & control paths
– Data path handles non-metadata operations at wire speed
– Control path handles operations that affect metadata & migration
Each path has dedicated processing and memory resources and each can scale independently; unique scale and availability
PC-based appliances are inadequate – single PCI bus, processor & shared memory
How does F5 Acopia virtualization work?
[Diagram: applications and users access a virtual IP; the "Virtual Volume Manager" routes the virtual path arx:/eng/project1/spec.doc to the physical path na:/vol/vol2/project1/spec.doc on the NetApp volume; the virtual volume spans the NetApp, EMC and NAS volumes]
How does F5 Acopia virtualization work?
[Diagram: ILM operation 1 moves the file to the EMC volume; the virtual path arx:/eng/project1/spec.doc does not change, but the Virtual Volume Manager now routes it to the new physical path emc:/vol/vol2/project1/spec.doc]
How does F5 Acopia virtualization work?
[Diagram: ILM operation 2 moves the file to the NAS volume; the virtual path arx:/eng/project1/spec.doc again does not change, and now routes to nas:/vol/vol1/project1/spec.doc]
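To make the routing idea concrete, here is a minimal Python sketch (not ARX code; the class and method names are invented for illustration) of a mapping table that always resolves the same virtual path while a migration updates only the physical location behind it:

# Minimal sketch of the virtual-to-physical routing idea: clients always
# use the virtual path; only the mapping changes when a file is migrated.

class VirtualVolume:
    def __init__(self):
        # virtual path -> current physical path on a back-end filer
        self._routes = {}

    def add_route(self, virtual_path, physical_path):
        self._routes[virtual_path] = physical_path

    def resolve(self, virtual_path):
        """Route a client's virtual path to its current physical location."""
        return self._routes[virtual_path]

    def migrate(self, virtual_path, new_physical_path):
        """An ILM operation moves the data; the virtual path never changes."""
        self._routes[virtual_path] = new_physical_path


vol = VirtualVolume()
vol.add_route("arx:/eng/project1/spec.doc", "na:/vol/vol2/project1/spec.doc")
print(vol.resolve("arx:/eng/project1/spec.doc"))   # NetApp volume

vol.migrate("arx:/eng/project1/spec.doc", "emc:/vol/vol2/project1/spec.doc")
print(vol.resolve("arx:/eng/project1/spec.doc"))   # EMC volume, same virtual path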
F5's Acopia virtualization layers
[Diagram: users & application servers mount a Presentation Namespace (PNS) exposed by the Adaptive Resource Switch (ARX); the Virtual Volume Manager maps the namespace through attach points to the back-end NAS & file servers]
F5's Acopia architecture
[Diagram: ARX hardware with a control plane (ASM), data plane (NSM), SCM, NVRAM, RAID drives, management interface, and temperature / fan / power sensors]
CONTROL PLANE (ASM)
• High performance SMP architecture
• Metadata services
• Policy
DATA PLANE (NSM, Fast Path)
• Wire speed, low latency
• Non-metadata operations, e.g. file read / write
• In-line policy enforcement
ARX Architecture Differentiators
Network switch purpose-built to meet challenges of global file management
– Real-time management of live data
Three-tiered architecture provides superior scale & reliability
– Separate I/O, control & management planes
– Dedicated resources for each plane
– Each plane can be scaled independently
PC-based appliances are inadequate
– Single PCI bus, processor & shared memory – bottleneck!
• File I/O, policy, management all share the same resources
– Single points of failure
Data integrity and reliability
– Integrity of in-flight operations ensured with redundant NVRAM
– External metadata and the ability to repair & reconstruct metadata
How does Acopia work?
ARX acts as a proxy for all file servers / NAS devices
– The ARX resides logically in-line
– Uses virtual IP addresses to proxy back-end devices
Proxies NFS and CIFS traffic
Provides virtual-to-physical mapping of the file systems
– Managed Volumes are configured & imported
– Presentation volumes are configured
What is File Routing or Metadata?
Metadata is stored on a highly available filer
No proprietary data is stored in the metadata; the metadata can be completely rebuilt (100% disposable)
ARX ensures integrity of in-flight operations
– If the ARX loses power or is reset, the NVRAM has a list of the outstanding transactions
– When the ARX boots back up, before it services any user requests, it validates all of the pending transactions in the NVRAM & takes the appropriate action
– Ensures transactions are performed in the correct order
ARX provides tools to detect / repair / rebuild metadata
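As a rough illustration of the transaction-log behavior described above, the following Python sketch shows a generic write-ahead-log pattern (not the ARX implementation; the file name, record format and helper names are assumptions): intended operations are logged before they happen, and any pending ones are replayed in order at startup before clients are served.

# Generic write-ahead-log sketch, in the spirit of the NVRAM description above.
import json

LOG = "nvram_txn.log"   # stand-in for the battery-backed NVRAM transaction list

def log_transaction(txn_id, operation):
    """Record the intended operation before performing it."""
    with open(LOG, "a") as f:
        f.write(json.dumps({"id": txn_id, "op": operation}) + "\n")

def replay_pending(apply_fn):
    """On boot, before serving clients: validate and re-apply pending transactions in order."""
    try:
        with open(LOG) as f:
            entries = [json.loads(line) for line in f if line.strip()]
    except FileNotFoundError:
        return  # nothing outstanding
    for entry in sorted(entries, key=lambda e: e["id"]):
        apply_fn(entry["op"])   # re-apply (or roll back) idempotently

# Example: replay_pending(lambda op: print("re-validating", op))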
Filesets and Placement Policy Rules
Placement rules migrate files
Rules use filesets as sources
– Filesets supply matching criteria for policy rules
– Filesets can match files based on age, size or "name"
• Age = groups files based on last accessed or last modified date / time
• "Name" = matches any portion of a file's name using simple criteria (similar to DOS/Unix, e.g. *.ppt) or POSIX-compliant regular expressions (e.g. [a-z]*\.txt)
– Filesets can be combined to form unions or intersections
Placement rules can target a specific share or a share farm
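The following Python sketch illustrates, with assumed helper names and paths, how filesets of this kind could be expressed: glob or regex name matching, age and size criteria, and set unions / intersections. It is illustrative only and is not ARX policy syntax.

# Illustrative fileset matching: name (glob or regex), age and size criteria.
import fnmatch, os, re, time

def matches(path, name_glob=None, name_regex=None,
            older_than_days=None, larger_than_bytes=None):
    st = os.stat(path)
    name = os.path.basename(path)
    if name_glob and not fnmatch.fnmatch(name, name_glob):     # e.g. "*.ppt"
        return False
    if name_regex and not re.fullmatch(name_regex, name):      # e.g. r"[a-z]*\.txt"
        return False
    if older_than_days is not None:
        age_days = (time.time() - st.st_atime) / 86400          # last accessed
        if age_days < older_than_days:
            return False
    if larger_than_bytes is not None and st.st_size <= larger_than_bytes:
        return False
    return True

def fileset(root, predicate):
    """Collect matching files under a directory tree."""
    return {os.path.join(d, f) for d, _, files in os.walk(root) for f in files
            if predicate(os.path.join(d, f))}

old_ppts = fileset("/data", lambda p: matches(p, name_glob="*.ppt", older_than_days=90))
big_files = fileset("/data", lambda p: matches(p, larger_than_bytes=100 * 2**20))
candidates = old_ppts & big_files   # intersection; use | for a union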
What are Acopia's Policy Differentiators?
Load balancing is unique to Acopia
– No other virtualization device is able to do in-line placement and real-time load balancing
Multi-protocol, multi-vendor migration is unique to Acopia
Ability to tier storage without requiring stubs is unique to Acopia
In-line policy enforcement is unique to Acopia
– Competitive solutions require expensive "treewalks" to determine what to move / replicate
Flexibility and scale of migration / replication capability is unique to Acopia
– From an individual file / fileset to an entire virtual volume
High Availability Overview
ARXs are typically deployed in a redundant pair
The primary ARX keeps synchronized state with the secondary ARX
– In-flight transactions (NVRAM), Global Configuration, Network Lock Manager (NLM) clients, Duplicate Request Cache (DRC)
The ARX monitors resources to determine failover criteria
– The operator can optionally define certain resources to be "critical" and considered as failover criteria, e.g. default gateway, critical share, etc.
The ARX does not store any user data on the switches
How to Deploy Acopia in the Network
The ARX HA subsystem uses a three-way voting system to avoid "split brain" scenarios
– "Split brain" is a situation where a loss of communication causes both devices in an HA pair to service traffic requests, which can result in data corruption
Heartbeats are exchanged between
– The primary switch
– The standby switch
– The Quorum Disk
• The Quorum Disk is a share on a server/filer
[Diagram: clients on workgroup switches connect through core routers / Layer 3 switches and distribution switches to the Acopia HA pair (Switch A and Switch B); the NAS & file servers and the Quorum Disk sit behind the pair]
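A minimal sketch of the three-voter quorum idea follows, assuming hypothetical reachability checks; this is the general majority-vote pattern rather than ARX's HA code.

# Majority-vote sketch: a switch serves traffic only while it can account
# for a majority of the three voters (itself, its peer, the quorum disk).

def reachable(voter):
    """Placeholder for a heartbeat or quorum-disk check on one voter."""
    return voter.get("alive", False)

def should_serve(peer, quorum_disk):
    votes = 1                                  # this switch always counts itself
    votes += 1 if reachable(peer) else 0
    votes += 1 if reachable(quorum_disk) else 0
    return votes >= 2                          # majority of 3 avoids split brain

# Example: the inter-switch link is down but the quorum disk is still visible,
# so only the side that can see the quorum disk keeps serving.
print(should_serve({"alive": False}, {"alive": True}))   # True
print(should_serve({"alive": False}, {"alive": False}))  # False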
The Demo Topology
Acopia Demo – What Will Be Shown
Data Migration
Tiering
Load Balancing (Share Farm)
Inline policy and file placement by name
Shadow Volume Replication
Data Migration
Usage Scenario: Data Migration
Movement of files between heterogeneous file servers
Drivers:
– Lease rollover, vendor switch, platform upgrades, NAS consolidation
Benefits:
– Reduce outages and business disruption
– Faster migrations
– Lower operational overhead
• No client reconfiguration, automated
Data Migration with Acopia
Solution:
– Transparent migration at any time
– Paths and embedded links are preserved
– File-level granularity, without links or stubs
– NFS and CIFS support across multiple vendors
– Scheduled policies to automate data migration
– CIFS Local Group translation
– CIFS share replication
– Optional data retention on source
– IBM uses ARX for its data migration services
Benefits:
– Reduce outages and business disruption
– Lower operational overhead
• No client reconfiguration
• Decommissioning without disruption
• Automation
Data Migration (One for One)
Transparent migrations occur at the file system level via standard CIFS / NFS protocols
A file system is migrated in its entirety to a single target file system
All names, paths, and embedded links are preserved
Multiple file systems can be migrated in parallel
Inline policy steers all new file creations to the target filer
– No need to go back and re-scan like Robocopy or Rsync
– No need to quiesce clients to pick up final changes
File systems are probed by the ARX to ensure compatibility before merging
The ARX uses no linking or stub technology, so it is backed out easily
[Diagram: client view of "home" on "server1" (U:) mapped through the F5 Acopia ARX from NAS-1 to NAS-2]
Data Migration (One for One)
File and directory structure is identical on source and target file systems
All CIFS and NFS file system security is preserved
– If CIFS local groups are in use, SIDs will be translated by the ARX
File modification and access times are not altered during the migration
– The ARX will preserve the create time (depending on the filer)
The ARX can perform a true multi-protocol migration where both CIFS and NFS attributes/permissions are transferred
– Robocopy does CIFS only, Rsync NFS only
The ARX can optionally replicate CIFS shares and associated share permissions to the target filer
[Diagram: client view of "home" on "server1" (U:) mapped through the Acopia ARX from NAS-1 to NAS-2]
Data Migration: Fan Out
Fan-out migration allows the admin to change a sub-optimal data layout caused by reactive data management policies
Structure can be re-introduced into the environment via fileset-based policies that allow for migrations using:
• Anything in the file name or extension
• File path
• File age (last modify or last access)
• File size
• Include or exclude (all files except for)
• Any combination (union or intersection)
• For the more advanced user, regular expression matching is available
Rules operate "in-line"
– Any new files created are automatically created on the target storage; no need to re-scan the source again
[Diagram: client view of "home" on "server1" (U:) mapped through the Acopia ARX to NAS-1 through NAS-4]
Data Migration: Fan In
Fan-in migration allows the admin to take advantage of larger / more flexible file system capabilities on new NAS platforms
Separate file systems can be merged and migrated into a single file system
The ARX can perform a detailed file system collision analysis before merging file systems
– A collision report is generated for each file system
– The admin can choose to manually remove collisions or let the ARX rename offending files and directories
Like directories will be merged
– Clients see an aggregated directory
In cases where the directory name is the same but has different permissions, the ARX can synchronize the directory attributes
[Diagram: client view of "home" on "server1" (U:) mapped through the Acopia ARX to NAS-1 through NAS-4]
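The collision analysis can be pictured with a short Python sketch (illustrative only; the paths and the report format are assumptions, not the ARX's actual report) that compares relative paths across the file systems to be merged:

# Fan-in collision sketch: find relative paths that exist in more than one source.
import os

def relative_paths(root):
    paths = set()
    for d, _, files in os.walk(root):
        for f in files:
            paths.add(os.path.relpath(os.path.join(d, f), root))
    return paths

def collision_report(roots):
    """Return {relative_path: [source roots]} for every colliding name."""
    seen = {}
    for root in roots:
        for rel in relative_paths(root):
            seen.setdefault(rel, []).append(root)
    return {rel: srcs for rel, srcs in seen.items() if len(srcs) > 1}

# The admin can then remove collisions manually, or rename offending files
# (e.g. "report.doc" -> "report_from_nas2.doc") before the merge.
print(collision_report(["/mnt/nas1/home", "/mnt/nas2/home"]))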
Data Migration: Name Preservation
Acopia utilizes a name-based takeover method for migrations in most cases
– No 3rd-party namespace technology is required; however, DFS can be layered over ARX
All client presentations (names, shares, exports, mount security) are preserved
The source filer's CIFS name is renamed and then the original name is transferred to the ARX, allowing for transparent insertion of the ARX solution
– This helps avoid issues with embedded links in MS-Office documents
The ARX will join Active Directory with the original source filer CIFS name
If WINS was enabled on the source filer it is disabled, and the ARX will assume the advertisement of any WINS aliases
For NFS, the source filer's DNS entry is updated to point to the ARX
– Or auto-mounter/DFS maps can be updated
The ARX can assume the filer's IP address if needed
[Diagram: client view of "home" on "\\server1" (U:); the ARX takes over the name \\Server1 and the source filer becomes \\Server1-old]
Case Study: NAS Consolidation
One of the world's leading financial services companies, with global presence
Environment: Windows file servers, NAS
Critical Issue: Large scale file server to NAS consolidation; 24x7 environment
Reasons: Cost savings in rack space, power, cooling and operations
Requirements: Move the data without disrupting the business
Solution: ARX6000 clusters
Result: >80 file servers migrated to NAS without disruption; migrations completed faster, with less intervention
"Acopia's products allow us to consolidate our back-end storage resources while providing data access to our users without disruption."
Chief Technology Architect
Migration Demonstration
Acopia Demo Topology
[Diagram: virtual view – a Windows client mounts virtual volume V: from a virtual server; physical view – an ARX500 on a Layer 2 switch in front of physical file systems J: and L:]
Storage Tiering
Information Lifecycle Management
Usage Scenario: Tiering / ILM
Match cost of storage to business value of data
– Files are automatically moved between tiers based on flexible criteria such as age, type, size, etc.
Drivers:
– Storage cost savings, backup efficiencies, compliance
Benefits:
– Reduced CAPEX
– Reduced backup windows and infrastructure costs
Storage Tiering with F5 Acopia
Solution:
– Automated, non-disruptive data placement of flexibly defined filesets
– Multi-vendor, multi-platform
– Clean (no stubs or links)
– File movement can be scheduled
Benefits:
– Reduced CAPEX
• Leverage cost-effective storage
– Reduced OPEX
– Reduced backup windows and infrastructure costs
Storage Tiering with F5 Acopia
Can be applied to all data or a subset via filesets
Operates on either last access or last modify time
The ARX can run tentative "what if" reports to allow for proper provisioning of lower tiers
Files accessed or modified on lower tiers can be brought up to tier 1 dynamically
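A rough Python sketch of such a "what if" report follows, assuming a simple last-access cutoff and hypothetical paths; it is not the ARX reporting tool, only an illustration of the idea of sizing a lower tier before moving anything.

# Dry-run tiering report: count files and bytes an age rule would demote.
import os, time

def tiering_what_if(root, older_than_days=180, use_mtime=False):
    cutoff = time.time() - older_than_days * 86400
    files = total_bytes = 0
    for d, _, names in os.walk(root):
        for name in names:
            st = os.stat(os.path.join(d, name))
            stamp = st.st_mtime if use_mtime else st.st_atime
            if stamp < cutoff:                 # would be demoted to tier 2
                files += 1
                total_bytes += st.st_size
    return files, total_bytes

files, size = tiering_what_if("/mnt/tier1", older_than_days=180)
print(f"{files} files / {size / 2**30:.1f} GiB would move to tier 2")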
Storage Tiering Case Study
International trade-show company
[Diagram: users and applications connect through an Acopia ARX1000 to a NetApp 940c (Tier 1) and a NetApp 3020 (Tier 2)]
Challenges
– Move less business critical data to less expensive storage, non-disruptively to users
Solution
– ARX1000 cluster
Benefits
– 50% reduction in disk spend
– Dramatic reduction in backup windows (from ~14 hours to ~3 hours) and backup infrastructure costs
"Based upon these savings, we estimate that we will enjoy a return on our Acopia investment in well under a year."
Reinhard Frumm, Director Distributed IS, Messe Dusseldorf
Tiering Demonstration
Acopia Demo Topology
[Diagram: virtual view – a Windows client mounts virtual volume V: from a virtual server; physical view – an ARX500 on a Layer 2 switch in front of L: (Tier 1) and N: (Tier 2)]
Load Balancing
Load Balancing with Acopia
Solution:
– Automatically balances new file placement across file servers
– Flexible dynamic load balancing algorithms
– Uses existing file storage devices
Benefits:
– Increased application performance
– Improved capacity utilization
– Reduced outages associated with data management
Load Balancing
A common problem for our customers is applications that require lots of space
– Administrators are reluctant to provision a large file system, because if it ever needs to be recovered it will take too long
– They tend to provision smaller file systems and force the application to deal with adding new storage locations
– This typically requires application downtime and complexity being added to the application
The ARX can decouple the application from the physical storage so that the application only needs to know a single storage location
– The application no longer needs to deal with multiple storage locations
The storage administrator can now keep file systems small and dynamically add new storage without disruption
– No more downtime when capacity thresholds are reached
[Diagram: an application writing across six 2 TB file systems]
Load Balancing
One or more file systems can be aggregated together into a share farm
Within the share farm, the ARX can load balance new file creates using the following algorithms
– Round Robin, Weighted Round Robin, Latency, Capacity
The ARX will load balance with file-level granularity, but constraints can be added to keep files and/or directories together
The ARX can also maintain free space thresholds for each file system in the share farm
– When a file system crosses the threshold it is removed from the new file placement algorithm
The ARX can also be set up to automatically migrate files from a file system if a certain free space threshold is not maintained
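As an illustration of capacity-based placement with free-space thresholds, here is a small Python sketch with assumed share paths and threshold values; it is a sketch of one of the algorithms named above (capacity), not ARX's implementation.

# Capacity-aware placement over a share farm: shares below the free-space
# threshold are excluded; new files go to the share with the most free space.
import shutil

def pick_share(shares, min_free_bytes=50 * 2**30):
    """Choose a share for a new file; None if every share is too full."""
    eligible = []
    for path in shares:
        usage = shutil.disk_usage(path)
        if usage.free >= min_free_bytes:       # below threshold -> excluded
            eligible.append((usage.free, path))
    if not eligible:
        return None
    return max(eligible)[1]                    # capacity-based choice

share_farm = ["/mnt/nas1/share", "/mnt/nas2/share", "/mnt/nas3/share"]
print("place new file on:", pick_share(share_farm))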
Load Balancing Case Study
[Diagram: compute nodes connect through an Acopia ARX6000 to a NetApp 3050]
Challenges
– Infrastructure was a bottleneck to production of digital content
– Difficult to provision new storage
Solution
– ARX6000 cluster
Benefits
– Ability to digitize >500% more music
– 20% reduction in OPEX costs associated with managing storage
– Reduction in disk spend due to more efficient utilization of existing NAS
"Acopia's products increased our business workflow by 560%"
Mike Streb, VP Infrastructure, WMG
Acopia Demo Topology
[Diagram: virtual view – a Windows client mounts virtual volume V: from a virtual server; physical view – an ARX500 on a Layer 2 switch in front of a Tier 1 share farm (L:, M:) and Tier 2 (N:)]
Inline Policy Enforcement / Place by Name
Classification and placement of data based on name or path
Drivers:
– Tiered storage, business policies, SLAs for applications or projects, migration based on file type or path
Benefits:
– File-level granularity
– Can migrate existing file systems to comply with current policy
– Operates inline for real-time policy enforcement for new data creation
File Based Placement
Filesets
– Group of files based on name, type, extension, string, path, size
– Unions, intersections, include and exclude supported
Storage Tiers
– Arbitrary definition defined by the enterprise
– Can consist of a single share or a share farm with capacity balancing
File Placement
– Namespace is walked only once for initial placement of files
– In-line policy enforcement will place files on the proper tier in real time
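A minimal sketch of in-line, name-based placement follows, with hypothetical extensions and tier names (not ARX policy configuration): the tier for a new file is chosen from its name at create time, so no later tree walk is needed.

# In-line placement by name: choose a tier when the file is created.
import os

PLACEMENT_RULES = [
    ((".mp3", ".avi", ".iso"), "tier2"),   # bulky media straight to tier 2
    ((".doc", ".xls", ".ppt"), "tier1"),   # active office data on tier 1
]

def tier_for(filename, default="tier1"):
    ext = os.path.splitext(filename)[1].lower()
    for extensions, tier in PLACEMENT_RULES:
        if ext in extensions:
            return tier
    return default

print(tier_for("budget.xls"))   # tier1
print(tier_for("archive.iso"))  # tier2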
Demonstration
Inline Policy Enforcement and File Placement by Name
Acopia Demo Topology
[Diagram: virtual view – a Windows client mounts virtual volume V: from a virtual server; physical view – an ARX500 on a Layer 2 switch in front of a Tier 1 share farm (L:, M:) and Tier 2 (N:)]
Shadow Volume Replication
Data Replication with Acopia
[Diagram: applications and users access the Acopia global namespace through an ARX cluster at the primary site; replication runs over the IP network to a secondary site]
Technology:
– File-set based replication
– NFS & CIFS across multiple platforms
– Replicas may be viewed
– Supports multiple targets
– Change-based updates only (file deltas)
Benefits:
– Target is not required to be of like storage type
– WAN bandwidth preservation
– Can be used for centralized backup applications
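The change-based update idea can be sketched as follows; this whole-file comparison is a simplification of the delta-based updates described above, and the paths and comparison rule are assumptions rather than the ARX's mechanism.

# Change-based replication sketch: copy only files whose size or
# modification time differs from the replica, so unchanged data never
# crosses the WAN again.
import os, shutil

def replicate_changes(source, target):
    for d, _, files in os.walk(source):
        for name in files:
            src = os.path.join(d, name)
            dst = os.path.join(target, os.path.relpath(src, source))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if (not os.path.exists(dst)
                    or os.path.getsize(dst) != os.path.getsize(src)
                    or os.path.getmtime(dst) < os.path.getmtime(src)):
                shutil.copy2(src, dst)         # only changed files are transferred

replicate_changes("/mnt/primary/vol", "/mnt/dr/vol")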
Data Replication Case Study
World's largest equipment rental company
[Diagram: primary site (Tier 1 and Tier 2) replicating over the WAN to a replica at the disaster recovery site]
Challenges
– Upgrade NAS platform
– Introduce lower cost ATA disk
– File-based Disaster Recovery solution
Solution
– ARX1000 cluster at primary data center
– ARX1000 at disaster recovery facility
Benefits
– NAS upgrade with no impact to users
– 50% savings through use of ATA disk
– Cost-effective disaster recovery solution
– Dramatic reduction in backup and replication times
"Acopia has reduced our total backup and replication times by about 70%."
Bonnie Stiewing, Senior Systems Administrator, United Rentals
Demonstration
Shadow Volume Replication
Acopia Demo Topology
[Diagram: virtual view – a Windows client mounts virtual volumes V: and W: from a virtual server; physical view – an ARX500 on a Layer 2 switch in front of a Tier 1 share farm (L:), Tier 2 (N:), and shadow replication targets (O:, M:)]