
Algorithms for Network Security

George Varghese, UCSD

Network Security Background

Current Approach: When a new attack appears, analysts work for hours (learning) to obtain a signature. Following this, IDS devices screen traffic for the signature (detection).

Problem 1: Slow. Learning by humans does not scale as attacks get faster. Example: Slammer reached critical mass in 10 minutes.

Problem 2: Detection of signatures at high speeds (10 Gbps or higher) is hard.

This talk: Will describe two proposals to rethink the learning and detection problems that use interesting algorithms.

Dealing with Slow Learning by Humans by Automating Signature Extraction

(OSDI 2004, joint with S. Singh, C. Estan, and S. Savage)

Extracting Worm Signatures by Content Sifting

Unsupervised learning: monitor the network and look for strings common to traffic with worm-like behavior. Signatures can then be used for detection.

Kibvu.B signature captured by EarlyBird on May 14th, 2004:

PACKET HEADER
SRC: 11.12.13.14.3920  DST: 132.239.13.24.5000  PROT: TCP

PACKET PAYLOAD (CONTENT)
00F0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0120  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0130  90 90 90 90 90 90 90 90 EB 10 5A 4A 33 C9 66 B9  ..........ZJ3.f.
0140  66 01 80 34 0A 99 E2 FA EB 05 E8 EB FF FF FF 70  f..4...........p
. . .

Assumed Characteristics of Worm Behavior We Used for Learning

Content Prevalence: the payload of a worm is seen frequently.

Address Dispersion: the payload of a worm is seen traversing between many distinct hosts.

The Basic Algorithm

[Figure, animated over several slides: a detector at a vantage point observes traffic among hosts A, B, C, D, E and cnn.com. Each observed payload increments a counter in the Prevalence Table, while the Address Dispersion Table records the distinct sources and destinations seen for that payload. Over successive packets the repeated payload's prevalence count climbs from 1 to 3 and its dispersion grows to 3 sources (A, B, D) and 3 destinations (B, D, E); ordinary content such as requests to cnn.com stays at a count of 1.]
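In code, the update the figure animates reduces to two tables keyed by a content hash. A minimal Python sketch (the table layout and the thresholds here are illustrative, not EarlyBird's actual data structures):

from collections import defaultdict

# Prevalence table: how many times each content hash has been seen.
prevalence = defaultdict(int)

# Address dispersion table: distinct sources and destinations per content hash.
dispersion = defaultdict(lambda: (set(), set()))

PREVALENCE_THRESHOLD = 3    # illustrative values, not EarlyBird's
DISPERSION_THRESHOLD = 3

def process(content_hash, src, dst):
    """Update both tables for one observed payload and report whether the
    content now looks worm-like (prevalent AND widely dispersed)."""
    prevalence[content_hash] += 1
    sources, destinations = dispersion[content_hash]
    sources.add(src)
    destinations.add(dst)
    return (prevalence[content_hash] >= PREVALENCE_THRESHOLD
            and len(sources) >= DISPERSION_THRESHOLD
            and len(destinations) >= DISPERSION_THRESHOLD)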

What are the challenges?

Computation
– We have a total of 12 microseconds of processing time per packet at 1 Gbps line rate.
– We are not just processing packet headers; we need deep packet inspection, not for known strings but to learn frequent strings.

State
– On a fully loaded 1 Gbps link, the basic algorithm could generate a 1 GByte table in less than 10 seconds.
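These budgets follow from simple link-rate arithmetic (assuming roughly maximum-size 1500-byte packets, which the slide does not state): a 1500-byte packet is 12,000 bits, and 12,000 bits / 10^9 bits per second = 12 microseconds of arrival time per packet. Likewise, a fully loaded 1 Gbps link delivers about 125 MBytes of payload per second, so a table that indexes most of that content can reach a gigabyte in well under 10 seconds.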

Idea 1: Index fixed-length substrings

Approach 1: Index all substrings
– Problem: too many substrings, leading to too much state and too much computation.

Approach 2: Index the packet as a single string
– Problem: easily evadable (e.g., Witty, email viruses).

Approach 3: Index all contiguous substrings of a fixed length S
– Will track everything that is of length S and larger.

[Figure: a window of length S slides over the packet bytes A B C D E F G H I J K, indexing each contiguous S-byte substring.]
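A naive rendering of Approach 3 (the 40-byte default matches the substring length used later in the talk; the function name is mine):

def fixed_length_substrings(payload: bytes, s: int = 40):
    """Yield every contiguous substring of length s. Any worm payload of
    length >= s contributes at least one such window, so tracking these
    windows tracks everything of length s and larger."""
    for i in range(len(payload) - s + 1):
        yield payload[i:i + s]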

Idea 2: Incremental Hash Functions

Use hashing to reduce state
– 40-byte strings become 8-byte hashes.

Use an incremental hash function to reduce computation
– Rabin fingerprint: an efficient incremental hash.
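A minimal sketch of an incremental hash over a sliding 40-byte window. This uses a Rabin-Karp style polynomial rolling hash as a stand-in (the actual Rabin fingerprint works over GF(2) polynomials), but the O(1) incremental-update structure is the same; all names and parameters here are illustrative:

class RollingHash:
    """Rolling (incremental) hash over a fixed-size byte window."""

    def __init__(self, window: int = 40, base: int = 257,
                 modulus: int = (1 << 61) - 1):
        self.window = window
        self.base = base
        self.modulus = modulus
        # base^(window-1) mod modulus, used to remove the outgoing byte.
        self.msb_weight = pow(base, window - 1, modulus)
        self.value = 0

    def init(self, first_window: bytes) -> int:
        """Compute the hash of the first window from scratch."""
        assert len(first_window) == self.window
        self.value = 0
        for b in first_window:
            self.value = (self.value * self.base + b) % self.modulus
        return self.value

    def roll(self, outgoing: int, incoming: int) -> int:
        """O(1) update when the window slides forward by one byte."""
        self.value = (self.value - outgoing * self.msb_weight) % self.modulus
        self.value = (self.value * self.base + incoming) % self.modulus
        return self.value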

Insight 3: Don't need to track every substring

Approach 1: Sub-sample packets
– If we choose 1 in N, it will take N times as long to detect the worm.

Approach 2: Deterministic or random selection of offsets
– Susceptible to simple evasion attacks.
– No guarantee that we will sample the same substring in every packet.

Approach 3: Sample based on the hash of the substring (Manber et al. in Agrep)
– Value sampling: sample a fingerprint if the last N bits of the fingerprint are equal to the value V.
– The number of bits N can be set dynamically.
– The value V can be randomized for resiliency.
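Value sampling is then a cheap mask-and-compare on each rolling fingerprint. A sketch building on the RollingHash class above (parameter names follow the slide's N and V; the defaults of 6 bits and value 0 match the 1/64 sampling used later):

def value_sampled_fingerprints(payload: bytes, window: int = 40,
                               n_bits: int = 6, value: int = 0):
    """Yield (offset, fingerprint) for every window whose fingerprint's
    last n_bits equal `value` (about 1 in 2**n_bits of all windows)."""
    if len(payload) < window:
        return
    mask = (1 << n_bits) - 1
    rh = RollingHash(window)
    fp = rh.init(payload[:window])
    if fp & mask == value:
        yield 0, fp
    for i in range(1, len(payload) - window + 1):
        fp = rh.roll(payload[i - 1], payload[i + window - 1])
        if fp & mask == value:
            yield i, fp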

Implementing Insight 3: Value Sampling

Value sampling implementation:
– To select 1/64 of the fingerprints, require the last 6 bits to equal 0.

[Figure: a window slides over the packet bytes A B C D E F G H I J K; only windows whose fingerprint ends in the chosen bit pattern are tracked.]

P_track: the probability of selecting at least one substring of length S in an L-byte invariant.
– For last 6 bits equal to 0, F = 1/64.
– For 40-byte substrings (S = 40), P_track = 99.64% for a 400-byte invariant.
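The 99.64% figure follows from the standard at-least-one-sample calculation (implied but not shown on the slide): an L-byte invariant contains L - S + 1 substrings of length S, each sampled independently with probability F, so

P_track = 1 - (1 - F)^(L - S + 1)

With F = 1/64, S = 40, and L = 400 this gives 1 - (63/64)^361, approximately 0.996, matching the quoted figure.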

Insight 4: Repeated substrings are uncommon

[Figure: cumulative fraction of 40-byte substrings versus number of repeats, on a log scale from 1 to 100,000 repeats. Only 1% of the 40-byte substrings repeat more than once.]

Can greatly reduce memory by focusing only on the high-frequency content.

Implementing Insight 4: Use an approximate high-pass filter

Multistage filters use randomized techniques to implement a high-pass filter with low memory and few false positives [EstanVarghese02]; similar to the approach by Motwani et al.
– Use the content hash as a flow identifier.

Three orders of magnitude improvement over the naïve approach (1 entry per string).

[Figure: the hash of each packet window indexes one counter in each of three stages (Hash 1, Hash 2, Hash 3); the counters are incremented and compared against a threshold, and the key is inserted into the dispersion table only if all counters are above the threshold.]
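A minimal Python sketch of a multistage filter as described above; the stage count, counter-array sizes, hashing, and threshold are illustrative choices, not the EarlyBird configuration:

import hashlib

class MultistageFilter:
    """Approximate high-pass filter over content fingerprints: each key
    hashes to one counter per stage, and a key passes only if the
    counters in all stages exceed the threshold."""

    def __init__(self, num_stages: int = 3, counters_per_stage: int = 4096,
                 threshold: int = 3):
        self.num_stages = num_stages
        self.size = counters_per_stage
        self.threshold = threshold
        self.stages = [[0] * counters_per_stage for _ in range(num_stages)]

    def _index(self, stage: int, key: bytes) -> int:
        # Roughly independent hash per stage, obtained by salting with the stage number.
        digest = hashlib.blake2b(key, salt=bytes([stage])).digest()
        return int.from_bytes(digest[:8], "big") % self.size

    def insert(self, key: bytes) -> bool:
        """Increment this key's counter in every stage; return True if all
        counters are now above the threshold (i.e., the key passes the filter)."""
        passed = True
        for s in range(self.num_stages):
            i = self._index(s, key)
            self.stages[s][i] += 1
            if self.stages[s][i] <= self.threshold:
                passed = False
        return passed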

Insight 5: Prevalent substrings with high dispersion are rare

[Figure: number of tracked substrings over a 60-minute trace, on a log scale from 1 to 10,000, for two threshold settings: substrings with prevalence S > 1 and dispersion D > 1 versus those with S > 30 and D > 30. The latter set is orders of magnitude smaller.]

A naïve approach would maintain a list of sources (or destinations).

We only care whether dispersion is high
– Approximate counting suffices.

Scalable bitmap counters
– Sample a larger virtual bitmap; scale and adjust for error.
– An order of magnitude less memory than the naïve approach, with acceptable error (<30%).

Implementing Insight 5: Scalable Bitmap Counters

[Figure: Hash(Source) sets a bit in a small bitmap that represents a sampled slice of a much larger virtual bitmap.]

– Hash: based on the source (or destination).
– Sample: keep only a sample of the bitmap.
– Estimate: scale up the sampled count.
– Adapt: periodically increase the scaling factor.

Error factor = 2 / (2^numBitmaps - 1)

With 3 32-bit bitmaps, the error factor is 28.5%.
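A rough Python sketch of the scalable-bitmap idea: hash each source into a bitmap, keep only a sampled slice of a larger virtual bitmap, estimate with linear counting scaled up by the sampling factor, and adapt the sampling when the physical bitmap fills. The sizes and the adaptation policy below are my illustration, not the exact EarlyBird design:

import hashlib
import math

class ScalableBitmapCounter:
    """Approximate distinct-value counter over sources (or destinations)."""

    def __init__(self, physical_bits: int = 32):
        self.b = physical_bits
        self.scale = 0            # we keep 1/2**scale of the virtual bitmap
        self.bitmap = 0

    def _hash(self, item: bytes) -> int:
        return int.from_bytes(hashlib.blake2b(item).digest()[:8], "big")

    def add(self, item: bytes) -> None:
        h = self._hash(item)
        # Sample: only items whose top `scale` bits are zero are kept.
        if self.scale and (h >> (64 - self.scale)) != 0:
            return
        self.bitmap |= 1 << (h % self.b)
        # Adapt: if the physical bitmap is nearly full, halve the sample.
        if bin(self.bitmap).count("1") >= self.b - 1:
            self.scale += 1
            self.bitmap = 0       # coarse reset; real designs carry bits over

    def estimate(self) -> float:
        zeros = self.b - bin(self.bitmap).count("1")
        if zeros == 0:
            zeros = 1             # avoid log(0); the estimate saturates
        # Linear-counting estimate, scaled up by the sampling factor.
        return (2 ** self.scale) * self.b * math.log(self.b / zeros)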

High-Speed Implementation: Practical Content Sifting

Memory (“state”) scaling
– Hashes of fixed-size substrings.
– Multistage filters: allow us to focus on the prevalent substrings; total size is 2 MB.
– Scalable bitmap counters: scalable counting of sources and destinations.

CPU (“computation”) scaling
– Incremental hash functions.
– Value sampling: 1/64 sampling detects all known worms.

Implementing Content Sifting

[Flowchart: an incoming packet (example payload "IAMAWORM") is value-sampled and hashed to a key, which updates the multistage filter (0.146). The key is then looked up in the Address Dispersion Table, ADTEntry = Find(Key) (0.021). If an entry is found, it is updated (0.027); if not, and the prevalence reported by the multistage filter exceeds the threshold, a new entry is created and inserted (0.37). Each Address Dispersion Table entry holds the key, a repeat count, and scaled bitmap counters (5 bytes) for sources and destinations.]

0.042 us per byte in the software implementation, with 1/64 value sampling.

Deployment Experience

1: Large fraction of the UCSD campus traffic
– Traffic mix: approximately 5000 end-hosts, dedicated servers for campus-wide services (DNS, email, NFS, etc.).
– Line rate of traffic varies between 100 and 500 Mbps.

2: Fraction of local ISP traffic (DEMO)
– Traffic mix: dialup customers, leased-line customers.
– Line rate of traffic is roughly 100 Mbps.

3: Fraction of a second local ISP's traffic
– Traffic mix: inbound/outbound traffic into a large hosting center.
– Line rate is roughly 300 Mbps.

False Positives We Encountered

Common protocol headers
– Mainly HTTP and SMTP headers.
– Distributed (P2P) system protocol headers.
– Procedural whitelist: a small number of popular protocols.

Non-worm epidemic activity
– SPAM.
– Gnutella connection headers, for example:

GNUTELLA.CONNECT/0.6..X-Max-TTL:.3..X-Dynamic-Querying:.0.1..X-Version:.4.0.4..X-Query-Routing:.0.1..User-Agent:.LimeWire/4.0.6..Vendor-Message:.0.1..X-Ultrapeer-Query-Routing:

Other Experience

Lesson 1: From experience, static whitelisting is still not sufficient for HTTP and P2P; we needed other, more dynamic whitelisting techniques.

Lesson 2: Signature selection is key. From worms like Blaster we get several options, and a major delay in signature release today is “vetting” signatures.

Lesson 3: The approach works better for vulnerability-based mass attacks; it does not work for directed attacks or attacks based on social engineering, where the repetition rate is low.

Lesson 4: Major IDS vendors have moved to vulnerability signatures. Automated approaches to this (CMU) are very useful, but automated exploit-signature detection may also be useful as an additional piece of defense in depth for truly zero-day attacks.

Related Work and Issues

Three roughly concurrent pieces of work: Autograph (CMU), Honeycomb (Cambridge), and EarlyBird (ours); EarlyBird is the only one of the three that operates in real time.

Further work at CMU extends Autograph to polymorphic worms (which can be done with EarlyBird in real time as well) and to automating vulnerability signatures.

Issues: encryption, P2P false positives such as BitTorrent, etc.

Part 2: Detection of Signatures with Minimal Reassembly

(to appear in SIGCOMM 2006, joint with F. Bonomi and A. Fingerhut of Cisco Systems)

Membership Check via Bloom Filter

[Figure: field extraction yields a key; Hash 1, Hash 2, and Hash 3 each set one bit of a bitmap on insertion. On lookup, each of the three bit positions is tested ("equal to 1?"), and an ALERT is raised only if all bits are set.]
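A minimal Bloom filter sketch corresponding to the figure (three hash functions, as in the diagram; the bitmap size is an arbitrary illustrative choice):

import hashlib

class BloomFilter:
    """Set-membership check with no false negatives and a tunable
    false-positive rate: k hashes each set one bit on insert, and a
    lookup matches only if all k bits are set."""

    def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 3):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, key: bytes):
        for i in range(self.k):
            digest = hashlib.blake2b(key, salt=bytes([i])).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key: bytes) -> None:
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: bytes) -> bool:
        # True means possibly present (false positives possible);
        # False means definitely not present.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))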

Example 1: String Matching (Step 1: Sifting Using Anchor Strings)

[Figure: each signature string ST0, ST1, ..., STn has an associated anchor string A0, A1, ..., An; the anchors are hashed into a Bloom filter.]

(Sushil Singh, G. Varghese, J. Huber, Sumeet Singh, patent application)

String Matching Step 2: Standard Hashing

[Figure: anchors that pass the Bloom filter are hashed into buckets (Hash Bucket-0 through Hash Bucket-m), each bucket holding the signature strings whose anchors map to it.]

Matching Step 3: Bit Trees Instead of Chaining Strings in a Single Hash Bucket

[Figure: the strings in one hash bucket (ST2, ST8, ST11, ST17 with anchors A2, A8, A11, A17) are organized as a small binary decision tree over discriminating bit locations (LOC L1, L2, L3); walking the tree on the candidate's bits selects a single stored string to compare against.]
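A sketch of the bit-tree idea in Python: the equal-length strings chained in one bucket are organized into a binary decision tree over bit positions at which they differ, so a lookup walks a few bits and then performs a single full comparison. The construction below is my illustration of the idea, not the patented structure:

def _bit(s: bytes, pos: int) -> int:
    """Return bit `pos` of byte string s (MSB-first within each byte)."""
    return (s[pos // 8] >> (7 - pos % 8)) & 1

def build_bit_tree(strings):
    """Build a decision tree for distinct, equal-length strings in one bucket.
    Leaves hold a single candidate string; internal nodes hold a
    discriminating bit location and two subtrees."""
    if len(strings) == 1:
        return strings[0]                      # leaf: one candidate left
    for pos in range(len(strings[0]) * 8):     # find a bit that splits the set
        zeros = [s for s in strings if _bit(s, pos) == 0]
        ones = [s for s in strings if _bit(s, pos) == 1]
        if zeros and ones:
            return (pos, build_bit_tree(zeros), build_bit_tree(ones))
    raise ValueError("duplicate strings in bucket")

def lookup_bit_tree(tree, candidate: bytes):
    """Walk the tree on the candidate's bits, then do one full comparison.
    The candidate is assumed to be the same length as the stored strings."""
    while isinstance(tree, tuple):
        pos, zero_sub, one_sub = tree
        tree = one_sub if _bit(candidate, pos) else zero_sub
    return tree if tree == candidate else None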

The problem is harder than it may appear

Network IDS devices are beginning to be folded into network devices like switches (Cisco, Force10). Instead of appliances that work at 1 or 2 Gbps, we need IDS line cards (or better still, chips) that scale to 10-20 Gbps.

Because attacks can be fragmented into pieces that can be sent out of order, and even with inconsistent data, the standard approach has been to reassemble the TCP stream and normalize it to remove inconsistency. In theory, normalization requires storing one round-trip time's worth of data per connection, which at 20 Gbps is huge, not to mention the computation needed to index this state.

Worse, we have to do regular-expression matching, not just exact match (Cristi).

Headache: Dealing with Evasions

The case of the misordered fragment: SEQ = 13, DATA = “ACK”; then SEQ = 10, DATA = “ATT”. The signature only appears once the segments are reordered.

The case of the interspersed chaff: SEQ = 10, TTL = 10, “ATT”; SEQ = 13, TTL = 1, “JNK”; SEQ = 13, “ACK”. The low-TTL “JNK” segment is chaff that expires before reaching the destination, but a naïve monitor sees it.

The case of the overlapping segments: SEQ = 10, “ATTJNK”; SEQ = 13, “ACK”. The later segment overlaps earlier bytes, and the end host's reassembly decides which data wins.

Conclusions

Surprising what one can do with network algorithms. At first glance, learning seems much harder than lookups or QoS.

The underlying principle in both algorithms is “sifting”: reducing the traffic to be examined to a manageable amount and then doing more cumbersome checks.

Lots of caveats in practice: a moving target.