Introducing Acopia


F5 Acopia ARX Technology Overview

Troy Alexander, Field Systems Engineer
[email protected]

+44 7815 876 101

Agenda

09:30-10:00  Welcome & Coffee
10:00-10:30  Elevator Pitch
10:30-12:30  Acopia Technical Pitch & Live Demo
12:30-13:30  Lunch
13:30-14:30  Acopia Qualification (questions to ask, where to hunt, tools, etc.)
14:30-15:00  Competition
15:00-16:00  Whiteboard Q&A

2

Data Assessment:

5% of data is 0-30 days old
20% of data is 30-90 days old
75% of data is 180+ days old

Understanding the Data Growth – Content Heavy:

Email / Excel / Word / PPT
File and Print
Document Sharing and Transactional

3

• File servers or email servers at high utilization
• Large amounts of duplicate files
• No file management

Findings: you are backing up and archiving the same data over and over again

A few Customer stats…

55,000 home drives
1,113+ shared drives
8,600,000 email files – grown by 2,300,000 in 3 months!
1,100,000 duplicates
73% not modified in >6 months
50TB+ of data
53,000,000 Office docs – grown by 11,000,000 in 3 months!
Largest file is 16GB

4

The Constraints of Today’s Infrastructure

Complex – mixed vendors, platforms, file systems
Inflexible – access is tightly coupled to file location; disruptive to move data
Inefficient – resources are under- and over-utilized
Growing rapidly – 70% annually (80% of it files)
The cost of managing storage is five to ten times the acquisition cost

5

Virtualization Breaks the Constraints

Simplified access – consolidated, persistent access points
Flexibility – data location not bound to physical resources
Optimized utilization – balances load across shared resources
Leverages technology – freedom to choose the most appropriate file storage

“File virtualization is the hottest new storage technology in plan today…” (TheInfoPro)

6

How does Acopia work?

ARX acts as a proxy for all file servers / NAS devices
– The ARX resides logically in-line
– Uses virtual IP addresses to proxy back-end devices
Proxies NFS and CIFS traffic
Provides virtual-to-physical mapping of the file systems
– Managed volumes are configured & imported
– Presentation volumes are configured

7
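The in-line proxy idea can be pictured with a minimal sketch (purely conceptual, not how the ARX is implemented): clients connect to a virtual IP, and the device forwards traffic to the back-end filer it is proxying. The host name, port and raw pass-through behaviour below are illustrative assumptions; the real ARX proxies NFS and CIFS at the protocol level, not as a byte pipe.

```python
import asyncio

# Hypothetical mapping: the virtual IP fronts this back-end filer.
BACKEND = ("filer1.example.com", 445)   # assumed CIFS back end


async def pipe(reader, writer):
    # Shuttle bytes one way until the sender closes.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle_client(client_reader, client_writer):
    # Open a connection to the back-end filer on the client's behalf.
    backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )


async def main():
    # The "virtual IP" address and port that clients mount instead of the filer.
    server = await asyncio.start_server(handle_client, "0.0.0.0", 445)
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())
```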

Where Does Acopia Fit?

[Diagram: users and applications on the LAN reach the Acopia file virtualization layer, which sits in front of heterogeneous NAS and file servers (IBM, HP, EMC, HDS, NetApp); block storage remains on the SAN behind them.]

Plugs into existing IP / Ethernet switches
Virtualizes heterogeneous file storage devices that present file systems via NFS and/or CIFS
– NAS, file servers, gateways
Does not connect directly to the SAN
– Can manage SAN data presented through a gateway or server
No changes to existing infrastructure
– ARX appears as a normal NAS device to clients
– ARX appears as a normal CIFS or NFS client to storage
SAN virtualization manages blocks; Acopia manages files
– Data management vs. storage management

8

What does F5’s Acopia do?

During the demo the F5 Acopia ARX will virtualize a multi-protocol (NFS & CIFS) environment. F5 Acopia provides the same functionality for NFS, CIFS and multi-protocol environments.

9

• Automates common storage management tasks
– Migration
– Storage tiering
– Load balancing
• These tasks now take place without affecting access to the file data or requiring client reconfiguration

What are the F5 Acopia differentiators?

Purpose-built to meet challenges of global file management

– Separate data (I/O) & control planes with dedicated resources
– Enterprise scale: >2B files, 24 Gbps in a single switch

Real-time management of live data

– Unique dynamic load balancing
– Unique in-line file placement (files are not placed on the wrong share and then migrated after the fact)
– No reliance on stubs or redirection

Superior reliability, availability and supportability

– Integrity of in-flight operations ensured with redundant NVRAM
– Enterprise server-class redundancy
– Comprehensive logging, reporting, SNMP, call-home, port mirroring

Proven in Fortune 1000 enterprises

– Merrill Lynch, Bear Stearns, Yahoo, Warner Music, Toshiba, United Rentals, Novartis, Raytheon, The Hartford, Dreamworks, etc.

10

What is the F5 Acopia architecture?

[Diagram: clients and the NAS & file servers connect through the Adaptive Resource Switch; the data path (fast path) carries file traffic, while the control path maintains a local transaction log with a remote mirror.]

Patented tiered architecture separates data & control paths

– Data path handles non-metadata operations at wire speed
– Control path handles operations that affect metadata & migration

Each path has dedicated processing and memory resources, and each can scale independently – unique scale and availability

PC-based appliances are inadequate – single PCI bus, processor & shared memory

11

How does F5 Acopia virtualization work?

[Diagram: applications and users address the virtual path arx:/eng/project1/spec.doc through a virtual IP; the “Virtual Volume Manager” routes it to the physical path na:/vol/vol2/project1/spec.doc on the NetApp volume. One virtual volume spans the NetApp, EMC and NAS volumes, and ILM operation 1 is about to move the file.]

12

How does F5 Acopia virtualization work?

[Diagram: after ILM operation 1 the file has a new physical path, emc:/vol/vol2/project1/spec.doc on the EMC volume; the virtual path arx:/eng/project1/spec.doc seen by applications and users does not change – the “Virtual Volume Manager” simply routes it to the new location.]

13

How does F5 Acopia virtualization work?

[Diagram: after ILM operation 2 the file sits at nas:/vol/vol1/project1/spec.doc on the NAS volume; again the virtual path arx:/eng/project1/spec.doc does not change, only the physical location it routes to.]

14
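A minimal sketch of the routing idea across these three slides, assuming a simple in-memory table (the real Virtual Volume Manager keeps richer metadata on the ARX): the client-visible virtual path stays constant while the physical location recorded for it is updated by each ILM operation.

```python
# Illustrative only: a virtual-to-physical routing table in the spirit of
# the "Virtual Volume Manager" slides. The paths are taken from the slides.
class VirtualVolumeManager:
    def __init__(self):
        self.routes = {}   # virtual path -> current physical path

    def route(self, virtual_path):
        """Resolve the client-visible path to its current physical location."""
        return self.routes[virtual_path]

    def migrate(self, virtual_path, new_physical_path):
        """ILM operation: move the file's physical location.
        The virtual path the client uses never changes."""
        self.routes[virtual_path] = new_physical_path


vvm = VirtualVolumeManager()
vvm.routes["arx:/eng/project1/spec.doc"] = "na:/vol/vol2/project1/spec.doc"

# ILM operation 1: NetApp -> EMC; clients keep using the same virtual path.
vvm.migrate("arx:/eng/project1/spec.doc", "emc:/vol/vol2/project1/spec.doc")
# ILM operation 2: EMC -> NAS.
vvm.migrate("arx:/eng/project1/spec.doc", "nas:/vol/vol1/project1/spec.doc")

print(vvm.route("arx:/eng/project1/spec.doc"))   # nas:/vol/vol1/project1/spec.doc
```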

F5’s Acopia virtualization layers

[Diagram: users and application servers mount a Presentation Namespace (PNS) or Microsoft DFS mount point; attach points map into the Virtual Volume Manager on the Adaptive Resource Switch (ARX), which sits in front of the NAS and file servers.]

15

F5’s Acopia architecture

CONTROL PLANE (ASM)
• High-performance SMP architecture
• Metadata services
• Policy

DATA PLANE (Fast Path: SCM, NVR, NSM)
• Wire speed, low latency
• Non-metadata operations, e.g. file read / write
• In-line policy enforcement

[Chassis diagram also shows RAID drives, a management interface, and temperature, fan and power sensors, with separate control and data traffic between the planes.]

16

How to Deploy Acopia in the Network

The ARX HA subsystem uses a three-way voting system to avoid “split brain” scenarios

– “Split brain” is a situation where a loss in communication causes both devices in an HA pair to service traffic requests, which can result in data corruption

Heartbeats are exchanged between

– The primary switch
– The standby switch
– The quorum disk
• The quorum disk is a share on a server/filer

[Diagram: the Acopia HA pair (Switch A and Switch B) attaches to core routers / Layer 3 switches, with distribution and workgroup switches connecting the client; the NAS and file servers host the quorum disk.]

21
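The three-way voting idea can be sketched as follows; the function and its inputs are assumptions used only to illustrate the majority rule, not the ARX heartbeat implementation.

```python
# Conceptual quorum check: each of the three voters (this switch, its peer,
# and the quorum-disk share) is either reachable or not. A switch continues
# to serve traffic only while it sees a majority (2 of 3), which prevents
# the split-brain case where both halves of an isolated pair stay active.
def may_serve_traffic(self_alive: bool, peer_heartbeat_ok: bool,
                      quorum_disk_reachable: bool) -> bool:
    votes = sum([self_alive, peer_heartbeat_ok, quorum_disk_reachable])
    return votes >= 2


# Normal operation: all three voters visible.
assert may_serve_traffic(True, True, True)

# Peer lost but quorum disk still reachable: this switch may take over.
assert may_serve_traffic(True, False, True)

# Network partition isolates this switch from both peer and quorum disk:
# it must stop serving to avoid a split brain.
assert not may_serve_traffic(True, False, False)
```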

Data Mobility: Transparent Migration

22

Data Migration with Acopia

Solution:

– Transparent migration at any time
– Paths and embedded links are preserved
– File-level granularity, without links or stubs
– NFS and CIFS support across multiple vendors
– Scheduled policies to automate data migration
– CIFS local group translation
– CIFS share replication
– Optional data retention on source
– IBM uses ARX for its data migration services

Benefits:

– Reduced outages and business disruption
– Lower operational overhead
• No client reconfiguration
• Decommissioning without disruption
• Automation

23

Data Migration (One for One)

Transparent migrations occur at the file system level via standard CIFS / NFS protocols
A file system is migrated in its entirety to a single target file system
All names, paths, and embedded links are preserved
Multiple file systems can be migrated in parallel
In-line policy steers all new file creations to the target filer (sketched after this slide)
– No need to go back and re-scan like Robocopy or Rsync
– No need to quiesce clients to pick up final changes
File systems are probed by the ARX to ensure compatibility before merging
The ARX uses no linking or stub technology, so it is easily backed out

[Diagram: the client view of “home on server1” (U:) is presented through the F5 Acopia ARX; NAS-1 is the migration source and NAS-2 the target.]

24
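The “no re-scan” point can be sketched as follows (share names and routines are assumed for illustration): once the migration rule is active, every new file create arriving through the virtual volume is directed to the target filer, so the background copy never has to chase a moving set of files.

```python
# Sketch of in-line steering during a one-for-one migration (assumed names;
# not ARX code). While existing files are copied in the background, any
# brand-new file is created directly on the target, so no Robocopy/Rsync
# style re-scan of the source is needed.
SOURCE = "nas1:/home"
TARGET = "nas2:/home"
MIGRATION_ACTIVE = True

placement = {}   # relative path -> share that physically holds the file


def create_file(rel_path):
    """Handle a client CREATE arriving through the virtual volume."""
    share = TARGET if MIGRATION_ACTIVE else SOURCE
    placement[rel_path] = share
    return f"{share}/{rel_path}"


def background_copy(existing_files):
    """Copy pre-existing files once; new files never land on the source."""
    for rel_path in existing_files:
        placement[rel_path] = TARGET   # copy + cut over, names preserved


create_file("project1/new-spec.doc")           # lands on nas2 immediately
background_copy(["project1/old-spec.doc"])     # old data migrated once
print(placement)
```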

Data Migration (One for One)

File and directory structure is identical on source and target file systems
All CIFS and NFS file system security is preserved
– If CIFS local groups are in use, SIDs will be translated by the ARX
File modification and access times are not altered during the migration
– The ARX will preserve the create time (depending on the filer)
The ARX can perform a true multi-protocol migration where both CIFS and NFS attributes/permissions are transferred
– Robocopy does CIFS only, Rsync NFS only
The ARX can optionally replicate CIFS shares and associated share permissions to the target filer

[Diagram: the client view of “home on server1” (U:) is presented through the Acopia ARX; NAS-1 is the migration source and NAS-2 the target.]

25

Data Migration: Fan In

Fan-in migration lets the administrator take advantage of the larger, more flexible file systems on new NAS platforms
Separate file systems can be merged and migrated into a single file system
The ARX can perform a detailed file system collision analysis before merging file systems (a sketch of this check follows after the slide)
– A collision report is generated for each file system
– The administrator can choose to manually remove collisions or let the ARX rename the offending files and directories
Like directories are merged
– Clients see an aggregated directory
Where a directory name is the same but has different permissions, the ARX can synchronize the directory attributes

[Diagram: the client view of “home on server1” (U:) is presented through the Acopia ARX, fanning in from NAS-1, NAS-2, NAS-3 and NAS-4.]

26
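A minimal sketch of the collision analysis idea, assuming paths are compared case-insensitively as CIFS clients would see them; the report structure and the rename suffix are illustrative, not the ARX output.

```python
from collections import defaultdict

# Candidate file systems to be merged into one virtual volume (paths are
# illustrative).
filesystems = {
    "nas1": ["eng/spec.doc", "eng/notes.txt"],
    "nas2": ["eng/Spec.doc", "qa/results.xls"],
}


def collision_report(filesystems):
    """Group relative paths that exist on more than one source share."""
    seen = defaultdict(list)
    for share, paths in filesystems.items():
        for path in paths:
            seen[path.lower()].append((share, path))   # CIFS: case-insensitive
    return {key: hits for key, hits in seen.items() if len(hits) > 1}


def resolve_by_rename(collisions):
    """One possible resolution: keep the first copy, rename the others."""
    renames = []
    for hits in collisions.values():
        for share, path in hits[1:]:
            renames.append((share, path, path + ".collision"))
    return renames


report = collision_report(filesystems)
print(report)                   # {'eng/spec.doc': [('nas1', ...), ('nas2', ...)]}
print(resolve_by_rename(report))
```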

Data Migration: Fan Out

Fan-out migration lets the administrator correct a sub-optimal data layout left behind by reactive data management policies
Structure can be re-introduced into the environment via fileset-based policies that allow migration by:
• Anything in the file name or extension
• File path
• File age (last modify or last access)
• File size
• Include or exclude (all files except for)
• Any combination (union or intersection)
• For the more advanced user, regular-expression matching is available
Rules operate “in-line”
– Any new files are automatically created on the target storage; there is no need to re-scan the source (a sketch of these fileset rules follows below)

[Diagram: the client view of “home on server1” (U:) is presented through the Acopia ARX, fanning out to NAS-1, NAS-2, NAS-3 and NAS-4.]

27
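The fileset criteria listed on the fan-out slide can be pictured as composable predicates. This is a minimal sketch; the rule below is purely illustrative and is not ARX policy syntax.

```python
import fnmatch
import re
import time

# A fileset is a predicate over (path, size, last_access); unions and
# intersections are built with any()/all().
def by_extension(pattern):
    return lambda f: fnmatch.fnmatch(f["path"].lower(), pattern)

def by_path(prefix):
    return lambda f: f["path"].startswith(prefix)

def older_than(days):
    cutoff = time.time() - days * 86400
    return lambda f: f["last_access"] < cutoff

def larger_than(nbytes):
    return lambda f: f["size"] > nbytes

def by_regex(expr):
    return lambda f: re.search(expr, f["path"]) is not None

def matches(f, rules, mode=all):          # mode=all -> intersection, any -> union
    return mode(rule(f) for rule in rules)


# Illustrative rule: media files (union of extensions) that are both large
# and cold (intersection) are placed on Tier 2.
media = lambda f: matches(f, [by_extension("*.mp3"), by_extension("*.wav")], mode=any)
tier2_rule = [media, larger_than(100 * 2**20), older_than(180)]

f = {"path": "audio/master.wav", "size": 900 * 2**20,
     "last_access": time.time() - 365 * 86400}
print(matches(f, tier2_rule))   # True -> place on Tier 2 storage
```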

Data Migration: Name Preservation

Acopia uses a name-based takeover method for migrations in most cases
– No 3rd-party namespace technology is required; however, DFS can be layered over ARX
All client presentations (names, shares, exports, mount security) are preserved
The source filer’s CIFS name is renamed and the original name is transferred to the ARX, allowing transparent insertion of the ARX solution
– This helps avoid issues with embedded links in MS Office documents
The ARX joins Active Directory with the original source filer CIFS name
If WINS was enabled on the source filer it is disabled, and the ARX assumes the advertisement of any WINS aliases
For NFS, the source filer’s DNS entry is updated to point to the ARX
– Or automounter/DFS maps can be updated
The ARX can assume the filer’s IP address if needed

[Diagram: clients see “home on \\server1” (U:) through the Acopia ARX, which now answers to the name \\Server1; the original filer has been renamed \\Server1-old.]

28

Case Study: NAS Consolidation

One of the world's leading financial services companies, with global presence

Environment: Windows file servers, NAS
Critical issue: Large-scale file server to NAS consolidation; 24x7 environment
Reasons: Cost savings in rack space, power, cooling and operations
Requirements: Move the data without disrupting the business
Solution: ARX6000 clusters
Result: >80 file servers migrated to NAS without disruption; migrations completed faster, with less intervention

“Acopia's products allow us to consolidate our back-end storage resources while providing data access to our users without disruption.”

Chief Technology Architect

29

Migration Demonstration

30

Acopia Demo Topology

[Diagram: a Windows client sees the virtual view – virtual volume V: on a virtual server – while behind the ARX500 and a Layer 2 switch the physical view is a Tier 1 share farm with physical file systems J:, L: and M:.]

31

Storage Tiering: Information Lifecycle Management

32

Storage Tiering with F5 Acopia

Solution:

– Automated, non-disruptive data placement of flexibly defined filesets
– Multi-vendor, multi-platform
– Clean (no stubs or links)
– File movement can be scheduled

Benefits:

– Reduced CAPEX
• Leverage cost-effective storage
– Reduced OPEX
– Reduced backup windows and infrastructure costs

34

Storage Tiering with F5 Acopia

Can be applied to all data or a subset via filesets
Operates on either last-access or last-modify time
The ARX can run tentative “what if” reports to allow for proper provisioning of lower tiers
Files accessed or modified on lower tiers can be brought up to Tier 1 dynamically

35
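The “what if” idea can be sketched as a dry run over a share: compute how many files and bytes a given age cutoff would demote before enabling the rule. The walk below uses standard library calls and an assumed mount point; it is not the ARX report format.

```python
import os
import time


def what_if_report(root, days_cutoff, use_mtime=True):
    """Dry run: how many files / bytes would move to a lower tier if
    everything not touched in `days_cutoff` days were demoted."""
    cutoff = time.time() - days_cutoff * 86400
    moved_files = moved_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))
            age_marker = st.st_mtime if use_mtime else st.st_atime
            if age_marker < cutoff:
                moved_files += 1
                moved_bytes += st.st_size
    return moved_files, moved_bytes


# Example against an assumed mount point of the Tier-1 share.
files, size = what_if_report("/mnt/tier1/home", days_cutoff=180)
print(f"{files} files ({size / 2**30:.1f} GiB) would move to Tier 2")
```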

Infrastructure - Central

36

Infrastructure - Distributed

37

Storage Tiering Case Study

[Diagram: users and applications access Tier 1 (NetApp 940c) and Tier 2 (NetApp 3020) storage through an Acopia ARX1000.]

“Based upon these savings, we estimate that we will enjoy a return on our Acopia investment in well under a year.”
Reinhard Frumm, Director Distributed IS, Messe Dusseldorf

International trade-show company
Challenges
– Move less business-critical data to less expensive storage, non-disruptively to users
Solution
– ARX1000 cluster
Benefits
– 50% reduction in disk spend
– Dramatic reduction in backup windows (from ~14 hours to ~3 hours) and backup infrastructure costs

38

Tiering Demonstration

39

Acopia Demo Topology

[Diagram: a Windows client sees virtual volume V: on a virtual server; behind the ARX500 and a Layer 2 switch, the Tier 1 share farm holds physical file systems L: and M: and Tier 2 holds N:.]

40

Load Balancing

41

Load Balancing with Acopia

Solution:

– Automatically balances new file placement across file servers
– Flexible dynamic load-balancing algorithms
– Uses existing file storage devices

Benefits:

– Increased application performance
– Improved capacity utilization
– Reduced outages associated with data management

42

Load Balancing

A common problem for our customers is applications that require lots of space
– Administrators are reluctant to provision a large file system, because if it ever needs to be recovered it will take too long
– They tend to provision smaller file systems and force the application to deal with adding new storage locations
– This typically requires application downtime and adds complexity to the application
The ARX can decouple the application from the physical storage so that the application only needs to know a single storage location
– The application no longer needs to deal with multiple storage locations
The storage administrator can now keep file systems small and dynamically add new storage without disruption
– No more downtime when capacity thresholds are reached

[Diagram: the application writes to one virtual location; behind it, the ARX spreads data across six 2 TB file systems.]

43
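One simple dynamic algorithm consistent with this slide can be sketched as: place each new file on the back-end share with the most free capacity, so the application only ever sees the single virtual location. The share names and sizes are assumptions, and the ARX offers more algorithms than this one.

```python
# Illustrative capacity-based placement across a share farm (assumed numbers).
shares = {
    "nas1:/vol/app1": {"capacity": 2 * 2**40, "used": 1.6 * 2**40},
    "nas2:/vol/app2": {"capacity": 2 * 2**40, "used": 0.4 * 2**40},
    "nas3:/vol/app3": {"capacity": 2 * 2**40, "used": 1.1 * 2**40},
}


def place_new_file(shares, file_size):
    """Pick the share with the most free space that can hold the file."""
    candidates = {name: s["capacity"] - s["used"]
                  for name, s in shares.items()
                  if s["capacity"] - s["used"] >= file_size}
    if not candidates:
        raise RuntimeError("share farm is full; add another share")
    best = max(candidates, key=candidates.get)
    shares[best]["used"] += file_size
    return best


# The application writes to one virtual path; placement is decided here.
print(place_new_file(shares, 50 * 2**30))   # lands on nas2 (most free space)
```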

Load Balancing Case Study

[Diagram: compute nodes access NetApp 3050 storage through an Acopia ARX6000.]

“Acopia’s products increased our business workflow by 560%”
Mike Streb, VP Infrastructure, WMG

Challenges
– Infrastructure was a bottleneck to production of digital content
– Difficult to provision new storage
Solution
– ARX6000 cluster
Benefits
– Ability to digitize >500% more music
– 20% reduction in OPEX costs associated with managing storage
– Reduction in disk spend due to more efficient utilization of existing NAS

45

Demonstration Load Balancing

46

Acopia Demo Topology

[Diagram: a Windows client sees virtual volume V: on a virtual server; behind the ARX500 and a Layer 2 switch, Tier 1 share-farm file systems L: and M: and Tier 2 file system N: make up the physical view.]

47

Inline Policy Enforcement / Place by Name

48

Inline Policy Enforcement / Place by Name

Classification and placement of data based on name or path
Drivers:
– Tiered storage, business policies, SLAs for applications or projects, migration based on file type or path
Benefits:
– File-level granularity
– Can migrate existing file systems to comply with current policy
– Operates in-line for real-time policy enforcement as new data is created

49

File Based Placement

Filesets
– Group of files based on name, type, extension, string, path, size
– Unions, intersections, include and exclude supported
Storage tiers
– Arbitrary definition defined by the enterprise
– Can consist of a single share or a share farm with capacity balancing
File placement
– Namespace is walked only once for initial placement of files
– In-line policy enforcement places files on the proper tier in real time


50

Demonstration

Inline Policy Enforcement and File Placement by Name

51

Acopia Demo Topology

[Diagram: a Windows client sees virtual volume V: on a virtual server; behind the ARX500 and a Layer 2 switch, Tier 1 share-farm file systems L: and M: and Tier 2 file system N: make up the physical view.]

52

Shadow Volume Replication

53

Data Replication with Acopia

[Diagram: applications and users access the Acopia global namespace through an ARX cluster; data is replicated over the IP network from the primary site to the secondary site.]

Technology:

– Fileset-based replication
– NFS & CIFS across multiple platforms
– Replicas may be viewed
– Supports multiple targets
– Change-based updates only (file deltas) – see the sketch below

Benefits:

– Target is not required to be of like storage type
– WAN bandwidth preservation
– Can be used for centralized backup applications

54
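The “change-based updates only” point can be sketched by comparing a cheap fingerprint of each file on the primary against the replica and copying only what differs. The size-plus-mtime fingerprint and the mount points below are assumptions for illustration.

```python
import os
import shutil


def fingerprint(path):
    # Cheap change detector: size and modification time.
    st = os.stat(path)
    return (st.st_size, int(st.st_mtime))


def replicate_changes(primary_root, replica_root):
    """Copy only files that are new or changed since the last run."""
    copied = 0
    for dirpath, _dirs, files in os.walk(primary_root):
        rel_dir = os.path.relpath(dirpath, primary_root)
        target_dir = os.path.join(replica_root, rel_dir)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            if not os.path.exists(dst) or fingerprint(src) != fingerprint(dst):
                shutil.copy2(src, dst)       # copy2 preserves times
                copied += 1
    return copied


# Assumed mount points for the primary share and the shadow volume.
print(replicate_changes("/mnt/primary/home", "/mnt/shadow/home"))
```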

Data Replication Case Study

[Diagram: Tier 1 and Tier 2 storage at the primary site, with a replica at the disaster recovery site.]

“Acopia has reduced our total backup and replication times by about 70%.”
Bonnie Stiewing, Senior Systems Administrator, United Rentals

World’s largest equipment rental company
Challenges
– Upgrade NAS platform
– Introduce lower-cost ATA disk
– File-based disaster recovery solution
Solution
– ARX1000 cluster at primary data center
– ARX1000 at disaster recovery facility
Benefits
– NAS upgrade with no impact to users
– 50% savings through use of ATA disk
– Cost-effective disaster recovery solution
– Dramatic reduction in backup and replication times

55

Demonstration

Shadow Volume Replication

56

Acopia Demo Topology

[Diagram: a Windows client sees virtual volumes V: and W: on a virtual server; behind the ARX500 and a Layer 2 switch, Tier 1 (share farm) holds L:, Tier 2 holds M: and N:, and replication maintains a shadow copy on O:.]

57

58