Types of Semantic Attacks
Amber McConahy
Multifaceted and multidimensional
Marsh & Dibben (2003) definition and layers of
trust
“Trust concerns a positive expectation regarding the
behavior of somebody or something in a situation that
entails risk to the trusting party”
▪ Dispositional Trust – personality trait relating to trust
▪ Learned Trust – tendency to trust based on experience
▪ Situational Trust – trust adjusted based on situational cues
Key Questions
Reliable representation of trust in interactions and
interfaces?
Transforming trust to security and vice versa?
Identification and mitigation of trust failings?
Vital to security but poorly understood
Perfect information removes need for trust
Trust without risk is meaningless
Online users must develop knowledge to
make trust decisions
Developers must provide trustable designs
Must trust both people and technology
Halo Effect
Judgment based on attractiveness
Trust is built slowly and destroyed quickly
Mayer et al.
Ability to fulfill promises
Integrity relates to meeting expectations
Benevolence is acting in best interest of client
Egger’s MoTEC
Superficial trust based on interface
Reasoned trust based on content analysis
Relationship trust based on transactional history
Diagram: Familiarity → Trust → Willingness to Transact
Lee, Kim, and Moon
Trust and transaction cost are opposing factors
Corritore et al.
Credibility, ease of use, and risk affect trust
McKnight et al.
Trusting beliefs, intentions, and behaviors
Riegelsberger et al.
Focuses on incentives rather than opinions and
beliefs
Trust and risk are related
Trust relates to beliefs
Ease of use can affect trust
Trust likely develops in stages
External factors and context can be relevant
DO
Ensure ease of use
Make design attractive
Convey real world
Include seals of approval
▪ TRUSTe
Explain and justify content
Provide security and privacy
statements
Provide background
Define roles
Personalize service
DON'T
Make spelling mistakes
Mix ads and content
Be inconsistent or
unpredictable
Forget peer evaluations
▪ References
▪ User feedback
Ignore alternatives
▪ Links to other sites
Have poor response or
communication
Norm of Reciprocity (Gouldner 1960)
Information likely to be provided in exchange for
information or services
Leads to increased trust
Could increase vulnerability
Zhu et al.
Study of user behavior under reciprocity attacks
Use of InfoSource software with “Alice” guide
Experimental group disclosed more
Over 85% of users found “Alice” helpful
Perception of importance related to
disclosure
Relevance of requested information matters
Income not provided due to perceived irrelevance
Beliefs and attitudes correlated with
willingness to share information
Trust is related to willingness to share
information
Users often don’t comprehend what
computer is asking
Presents dilemma rather than decision
Users seek alternative information
resources
Trust is aggregation of clues and
tradeoffs
Large scopes and less context impede consent
Users are reluctant to provide
personal data
Claims often do not correspond to actions
Consequences are often not fully evaluated
Users don’t like making global decisions
Developers and users have different views
Users confuse terminology
Hacking vs. virus
Software bug vs. virus
Secure default is “Don’t Install”
Labels changed from “Yes” and “No” to
“Install” and “Don’t Install”
Options provided
Simplified primary text
Evidence via certificates
Auxiliary text separated
“What’s the Risk?” link provided for more
information
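The redesign principles above can be sketched as a small dialog specification. This is a hypothetical illustration (names like `build_install_prompt` are mine, not from the talk), assuming the key properties listed: explicit action labels instead of Yes/No, a secure default, a single-question primary text, and auxiliary help separated out.

```python
# Hypothetical sketch of the redesigned install-consent prompt.
# Not a real API -- just the principles from the slide as data.

def build_install_prompt(publisher_certified: bool) -> dict:
    """Return a specification for an install-consent dialog."""
    return {
        "primary_text": "Do you want to install this software?",  # simplified
        "buttons": ["Install", "Don't Install"],  # explicit, not "Yes"/"No"
        "default": "Don't Install",               # secure default
        "evidence": ("Publisher verified by certificate"
                     if publisher_certified
                     else "Publisher could not be verified"),
        "auxiliary": "What's the risk?",          # help link kept separate
    }

prompt = build_install_prompt(publisher_certified=False)
print(prompt["default"])  # the safe choice is preselected: Don't Install
```

The same structure carries over to the file-download dialog discussed next, with “Run”/“Cancel” in place of the install labels.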
Purposeful similarity to ActiveX to promote
consistency
Secure default option “Cancel”
Label changed from “Open” to “Run”
Primary text simplified to single question
Options provided
Evidence of filename and source provided
Assistance text separated with “What’s the
risk?” link
Trust decisions should be made in context
Narrow scope and avoid global setups
Make the most trusted option the default
Replace dilemmas with choices
Always provide trusted response option
Convey consequences to actions
Respect the user’s decision
▪ Submit it even when the decision is not understood
by the computer
Sauvik Das
Physical Attacks
Syntactic Attacks
Semantic Attacks:
“…attacks that target the way we, as humans, assign meaning to content. …Semantic
attacks directly target the human/computer interface, the most insecure interface on the Internet”
http://lol-gonna-log-ur-keys.com
Semantic Attacks…
violate trust
deceive
are a new form of “hacking”—Cognitive Hacking
“Pump-and-Dump” schemes
Buy penny stocks cheap
Artificially inflate price (spread misinformation)
Sell for profit, leaving others “holding the bag”
Diagram: Pump → Inflate → Dump
WTF Stuxnet?
Had elements of semantic attack:
Tricked technicians into believing centrifuges
were operating fine
Looks okay to me
And, of course: Phishing
Phishing is…:
deceiving users to obtain sensitive information
spoofing “trustworthy” communications
phreaking + fishing
a growing threat
It is very lucrative.
$2.4 million to $9.4 million per year per
million online banking customers
~$2,000 per compromised bank account.
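Combining the figures quoted in this talk (the ~0.47% compromise rate mentioned later, and ~$2,000 per account) reproduces the upper end of that revenue estimate:

```python
# Back-of-the-envelope check of the revenue figures from the talk.
customers = 1_000_000          # online banking customers
compromise_rate = 0.0047       # 0.47% fall for phish (4,700 per million)
revenue_per_account = 2_000    # ~$2,000 per compromised account

compromised = round(customers * compromise_rate)
annual_revenue = compromised * revenue_per_account
print(compromised, annual_revenue)  # 4700 9400000 -> the $9.4M upper bound
```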
It’s easy.
There are Do-it-Yourself Phishing Kits
AND, several easily accessible tutorials
It’s hard to defend against.
“You and I can think about things. Symbols in our
brains have meanings. The question is, can a
[computer] think about things, or merely process
digits that have no Aboutness—no meaning—no
semantic content” – Neal Stephenson, Anathem
Easy to distribute, and low success rate is okay.
4700 per 1,000,000 banking credentials lost on
average (0.47%)
BUT, bad guys still make plenty of money from that
With Social Web, phishing is more effective.
Paper by Jagatic et al:
▪ Mined relationships of students using publicly available
information
▪ Using this information, conducted a spear phishing
attack
▪ Found that using social info, people were 4.5x more
likely to fall for phish (16% versus 72%).
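The 4.5× figure in Jagatic et al. follows directly from the two success rates:

```python
# Success rates from the Jagatic et al. spear-phishing study.
control_rate = 0.16   # generic phish: 16% fell for it
social_rate = 0.72    # phish appearing to come from a friend: 72%

lift = social_rate / control_rate
print(lift)  # ~4.5x more likely with social information
```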
It all goes back to trust.
1. People judge legitimacy by design
2. People do not trust web browser security
3. Awareness is not a strategy
4. Severity of the consequences does not seem
to inform behavior
Study by Sheng et al.
Women more likely than men
Age 18-25 at highest risk
Lower technical knowledge at higher risk
Generally risk averse people are at lower risk
These factors are not orthogonal.
How can we mitigate phishing and other
semantic attacks?
Raise Awareness?
Education?
Automatic Detection?
Better Visualizations of Danger?
???
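The automatic-detection idea above can be sketched as a toy URL heuristic. The rules and threshold here are purely illustrative assumptions, not a real anti-phishing engine (deployed detectors use blocklists and machine-learned classifiers); it does flag the joke URL from the earlier slide.

```python
import re
from urllib.parse import urlparse

# Illustrative bait keywords often seen in phishing domains (assumption).
SUSPICIOUS_WORDS = {"login", "verify", "account", "keys", "secure"}

def looks_phishy(url: str) -> bool:
    """Toy heuristic: flag URLs whose host looks deceptive."""
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True   # raw IP address instead of a domain name
    if host.count("-") >= 2:
        return True   # hyphen-heavy hosts often mimic real brands
    if any(word in host for word in SUSPICIOUS_WORDS):
        return True   # bait keyword baked into the domain itself
    return False

print(looks_phishy("http://lol-gonna-log-ur-keys.com"))  # True
print(looks_phishy("https://www.example.com"))           # False
```

Such heuristics have exactly the trade-off discussed below: tuning them aggressively risks scaring users away from legitimate sites.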
It’s a tough problem
Only a small percentage (0.47%) of users need to
be compromised for phishing to continue to be
lucrative
Don’t want to make users afraid to go to
legitimate websites (majority) in the process.
How do current mitigation strategies help?
Improving visual cues
Not as effective as it could be.
People don’t trust their web browsers (ahem…IE)
Dhamija et al. study (Firefox):
▪ Many people do not look at browser-based cues
▪ 23% didn’t look at all
▪ Make incorrect choices about phishing 40% of the time
Education
Effective…but awareness alone not sufficient
Need to offer course of action
Sheng et al. study:
▪ 40% improvement among participants
▪ Some forms of education inhibit clicking of legitimate
links as well (users learn avoidance, not phishing awareness)
Phishing scams are still increasing!
We have some effective strategies, but the
problem is still open.
The phishing explosion can be attributed to:
Users are still falling for it
DIY phishing kits making it ever easier to
create phishing scams
We can mitigate the first problem, but what
about the second?
Semantic attacks hack a user’s mind
Phishing is one common semantic attack
Deceive users to obtain their sensitive information
Phishing is tough to mitigate because:
It is lucrative
Easy to do
Education seems to be one great way to
reduce the incidence of phishing.
We also need to find ways to make creating
phish less appealing or more difficult.