Intrusion Detection Systems

Aleksandar Milenkoski
Chair of Software Engineering
University of Würzburg
http://se.informatik.uni-wuerzburg.de/
Background information
§  Affiliation history
§  Sep. 2011 - Sep. 2014: Marie Curie Research Fellow at the
Karlsruhe Institute of Technology, Karlsruhe, Germany
§  March 2013 - May 2013: Visiting Researcher at University of
Rennes 1, Rennes, France
§  Since Sep. 2014: Doctoral Researcher at the University of Würzburg,
Würzburg, Germany
§  Research interests
§  Network and system security
§  Vulnerability analysis
§  Intrusion detection
§  Evaluation of intrusion detection systems
Background information (2)
§  Relevant publications
§  Aleksandar Milenkoski, Marco Vieira, Samuel Kounev, Alberto Avritzer, and Bryan D. Payne.
Evaluating Computer Intrusion Detection Systems: A Survey of Common Practices.
ACM Computing Surveys, 48(1):12:1-12:41, September 2015, ACM, New York, NY, USA. 5-year Impact Factor (2014): 5.949.
§  Aleksandar Milenkoski, Bryan D. Payne, Nuno Antunes, Marco Vieira, Samuel Kounev,
Alberto Avritzer, and Matthias Luft. Evaluation of Intrusion Detection Systems in
Virtualized Environments Using Attack Injection. In The 18th International Symposium on
Research in Attacks, Intrusions, and Defenses (RAID 2015), Kyoto, Japan, November 2015.
Springer. Acceptance Rate: 23%.
§  Aleksandar Milenkoski, Bryan D. Payne, Nuno Antunes, Marco Vieira, and Samuel Kounev.
Experience Report: An Analysis of Hypercall Handler Vulnerabilities. In Proceedings of
The 25th IEEE International Symposium on Software Reliability Engineering (ISSRE 2014),
Research Track, Naples, Italy, November 2014. IEEE Computer Society, Washington,
DC, USA. Acceptance Rate: 25%, Best Paper Award Nomination.
§  Aleksandar Milenkoski, Samuel Kounev, Alberto Avritzer, Nuno Antunes, and Marco Vieira.
On Benchmarking Intrusion Detection Systems in Virtualized Environments. Technical
Report SPEC-RG-2013-002 v.1.0, SPEC Research Group - IDS Benchmarking Working
Group, Standard Performance Evaluation Corporation (SPEC), 7001 Heritage Village Plaza
Suite 225, Gainesville, VA 20155, USA, June 2013.
http://se.informatik.uni-wuerzburg.de/staff/aleksandar_milenkoski/
Outline
§  Basics
§  What is an intrusion detection system (IDS)?
§  Types of intrusion detection systems (IDSes)
§  Snort: The de-facto standard open-source IDS
§  Advanced topics
§  IDSes in virtualized environments
§  Evaluation of IDSes
§  Evaluation of IDSes in virtualized environments
BASICS
Basics
§  The NIST (National Institute of Standards and Technology) definitions
Def.: Intrusion detection is the process of monitoring the events occurring in a computer or networked system and analyzing these events for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices
Def.: An IDS is software, or a hardware appliance, that automates the intrusion detection process
Basics: Basic IDS architecture
[Figure: basic IDS architecture --- Input → Sensors → Analysis Engine → Output]
Basics: IDS types
Property                   IDS type
Monitored platform         Host-based, Network-based, Hybrid
Attack detection method    Misuse-based, Anomaly-based, Hybrid
Deployment architecture    Distributed, Non-distributed

(A non-exhaustive systematization)
Basics: Monitored platform
§  Host-based
§  Monitors the activities on the system (i.e., the host) where it is deployed to detect local attacks --- attacks executed by users of the targeted system itself
Example: OSSEC, http://ossec.github.io/
§  Network-based
§  Monitors network traffic to detect remote attacks --- attacks carried out over a network connection
Example: Snort, https://www.snort.org/
§  Hybrid
§  Combines host-based and network-based monitoring
Basics: Attack detection method
§  Misuse-based
§  Evaluates system and/or network activities against a set of signatures of known attacks
§  Anomaly-based
§  Uses a baseline profile of regular network and/or system activities as a reference to distinguish between regular and anomalous activities
§  Hybrid
§  Combines misuse-based and anomaly-based detection
Basics: Attack detection method (2)
§  Misuse-based versus anomaly-based IDSes
§  Def.: Zero-day attacks --- attacks that exploit vulnerabilities that have not been publicly disclosed before the execution of the attacks
§  Which is more effective at detecting zero-day attacks: a misuse-based or an anomaly-based IDS?
§  Example: Adam always reads his e-mails on Sundays around 5 pm. This Saturday, at 11 am, he accessed his inbox.
§  Def.: False alert --- an alert generated by an IDS when there is no attack/intrusion
§  Which is more likely to generate a false alert: a misuse-based or an anomaly-based IDS?
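A toy sketch of the anomaly-based view of this example (the baseline-profile representation is invented purely for illustration):

# Toy anomaly detection for the e-mail example: the baseline profile
# records Adam's regular (day, hour) inbox-access pattern.
baseline = {("Sunday", 17)}

def is_anomalous(day: str, hour: int) -> bool:
    """Flag any access that deviates from the learned baseline profile."""
    return (day, hour) not in baseline

print(is_anomalous("Saturday", 11))  # True --- flagged, although Adam is no attacker

The Saturday access is flagged even though no intrusion occurred, which is exactly how an anomaly-based IDS can produce a false alert.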
Basics: Deployment architecture
§  Non-distributed
§  Non-compound IDS that can be deployed only at a single
location
§  Distributed
§  Compound IDS that consists of multiple intrusion detection
subsystems that can be deployed at different locations and
communicate to exchange intrusion detection-relevant data
Basics: Deployment architecture (2)
§  Def.: Coordinated attacks --- carefully orchestrated attacks that target multiple victims at specific moments in time toward achieving a given malicious goal
§  Example: An attacker using a single IP address (1.1.1.1) first breaks into a mail server of CityBank deployed in Europe and then uses stolen (valid) credentials to access a mail server of CityBank in the US.
§  Which is more effective at detecting such attacks: a non-distributed or a distributed IDS?
[Figure: IDS Europe reports an alert for 1.1.1.1 and IDS US reports a login event for 1.1.1.1 to a central analysis component, which correlates the two and denies access to 1.1.1.1]
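A toy sketch of the central analysis step in this example (the event format and the correlation rule are invented for illustration):

# Toy central analysis: correlate events from distributed sensors by source IP.
events = [
    {"sensor": "IDS-Europe", "type": "alert", "src_ip": "1.1.1.1"},
    {"sensor": "IDS-US", "type": "login", "src_ip": "1.1.1.1"},
]

def correlate(events):
    """Deny access to any source IP that both raised an alert and logged in."""
    alerted = {e["src_ip"] for e in events if e["type"] == "alert"}
    logged_in = {e["src_ip"] for e in events if e["type"] == "login"}
    return sorted(alerted & logged_in)

for ip in correlate(events):
    print(f"Deny access to {ip}")

Neither sensor alone sees anything conclusive; only the combined view reveals the coordinated attack.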
SNORT
The de-facto standard IDS
Introduction to Snort
§  What is Snort?
§  Snort is a packet analysis tool
§  Network-based intrusion detection system
§  Sniffer
§  Forensic data analysis tool
§  Advantages of Snort
§  Portable (Linux, Windows, Mac OS X, Solaris, BSD, IRIX, Tru64, HP-UX, …)
§  Fast
§  Configurable (Many reporting/logging options)
§  Free (GPL/Open Source Software)
https://www.snort.org/
Attack detection
§  Snort is a misuse-based IDS
§  Detects "signatures" of attacks using rules
§  Known attacks have "signatures" --- byte sequences that characterize a malicious packet almost with certainty
§  Example: the Code Red worm (2001)
§  Exploited a vulnerability in IIS 4.0 and 5.0
§  Buffer overflow vulnerability
/default.ida?
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
NNNNNNNNNNNNNNNNNNNNNNNNNN
%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801
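In essence, a misuse-based check reduces to scanning payload bytes for such signature fragments. A minimal Python sketch (the constants simply mirror the payload shown above; a real IDS matches far more carefully):

# Signature fragments of the Code Red request shown above.
CODE_RED_URI = b"/default.ida?"
CODE_RED_SHELLCODE = b"%u9090%u6858%ucbd3%u7801"

def looks_like_code_red(payload: bytes) -> bool:
    """Flag payloads containing both characteristic Code Red fragments."""
    return CODE_RED_URI in payload and CODE_RED_SHELLCODE in payload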
Architecture of Snort
[Figure: Snort data flow --- a packet stream is sniffed and passed through the packet decoder, the preprocessor, the detection engine, and the output stage, which emits alerts/logs]
Architecture of Snort (2)
§  Packet decoder
§  has the job of determining which underlying protocols are used in the packet (such as Ethernet, IP, TCP, etc.)
§  looks for errors or anomalies in the fields of packet headers
§  Preprocessor
§  allows users and programmers to drop modular plugins into
Snort (e.g., SMTP, POP, FTP preprocessors)
§  Detection engine
§  evaluates packets against rules
§  Output stage
§  generates output
Detection engine: Rules
A Snort rule consists of a rule header (action, protocol, source/destination addresses and ports) and rule options (in parentheses):

alert tcp !10.1.1.0/24 any -> 10.1.1.0/24 any (flags: SF; msg: "SYN-FIN scan";)

Alerts on traffic from outside the 10.1.1.x subnet to the 10.1.1.x subnet with the SYN and FIN flags set.

Further examples:

alert tcp 1.1.1.1 any -> 2.2.2.2 any (flags: SF; msg: "SYN-FIN Scan";)
alert tcp 1.1.1.1 any -> 2.2.2.2 any (flags: S12; msg: "Queso Scan";)
alert tcp 1.1.1.1 any -> 2.2.2.2 any (flags: F; msg: "FIN Scan";)
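One quick way to exercise such rules is to craft a matching packet, e.g., with Scapy (a sketch; the addresses mirror the example rules above):

from scapy.all import IP, TCP, send

# Craft a TCP packet with both the SYN and FIN flags set ("SF"),
# which should trigger the SYN-FIN scan rule above.
pkt = IP(src="1.1.1.1", dst="2.2.2.2") / TCP(sport=1234, dport=80, flags="SF")
send(pkt)  # sending raw packets requires root privileges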
Output stage
§  “Interesting” packets are sent to log files …
§  … also to various add-ons
§  SnortSnarf (html output)
http://sourceforge.net/projects/snortsnarf/
§  SnortPlot (plots of attacks)
http://www.unix.gr/cgi-bin/cat.cgi?firesoft/snortplot.pl
§  Swatch (email alerts)
http://wiki.ipfire.org/en/addons/swatch/start
Usability is important
Snort as a distributed IDS
§  Snort "as is" is a non-distributed IDS
§  However, third-party tools can be used to deploy it in a distributed fashion
§  Demarc [now offline]: NIDS management console
Advanced topics
IDSes in virtualized environments
Evaluation of IDSes
Evaluation of IDSes in virtualized environments
Introduction
§  Virtualized environment
§  Hypervisor (Xen, KVM, VMware, ...)
§  Virtual machines (VMs)
§  The hypervisor "observes" all VM activities (system and network activities)
[Figure: two guest VMs running on a hypervisor; network traffic reaches the guest VMs through the NIC, mediated by the hypervisor]
Architecture
[Figure: architectures for deploying IDSes in virtualized environments; examples: VMFence, Xenini, OSSEC, Wizard, Snort, ...]
Benefits and drawbacks
§  Benefits
§  Isolation from malicious VM users
§  Transparency
§  Drawbacks
§  Some host-based IDSes require modifications of the hypervisor: difficult deployment in closed-source hypervisors (vendor support is a must)
§  Without hypervisor modifications, host-based IDSes have access only to low-level hypervisor data (e.g., memory dumps), which cannot be easily interpreted by an attack analysis engine
Virtual machine introspection
§  Without hypervisor modifications, host-based IDSes have access only to low-level hypervisor data (e.g., memory dumps), which cannot be easily interpreted by an attack analysis engine
§  Solution: an interpreter that reconstructs meaningful guest state from the raw data (e.g., LibVMI, http://libvmi.com/)

[Figure: Memory dump → Interpreter → Analysis]
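To illustrate the interpretation step, here is a toy sketch of extracting a process name from a raw guest memory dump; the offset is hypothetical, and real tools such as LibVMI derive such layout information from guest kernel symbols:

# Hypothetical layout constants for one particular guest kernel build.
TASK_COMM_OFFSET = 0x5F0   # assumed offset of the name field in task_struct
TASK_COMM_LEN = 16         # fixed process-name length on Linux

def read_process_name(dump: bytes, task_struct_addr: int) -> str:
    """Interpret raw guest memory: extract a process name from a task_struct."""
    start = task_struct_addr + TASK_COMM_OFFSET
    raw = dump[start:start + TASK_COMM_LEN]
    return raw.split(b"\x00", 1)[0].decode("ascii", "replace")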
Advanced topics
IDSes in virtualized environments
Evaluation of IDSes
Evaluation of IDSes in virtualized environments
Introduction
§  IDS evaluation answers two major questions:
§  How well does a given IDS perform?
§  Is one IDS better than another?
§  Evaluation criteria: attack detection accuracy, performance overhead, ...
§  Benefits of evaluating IDSes
§  Enables the comparison of different IDSes
§  Enables the improvement of the configuration of deployed IDSes
⇒ Reduced risk of security breaches
Core components
[Figure: the three core components of IDS evaluation --- workloads, metrics, and measurement methodology]
Workloads
Categorization criterion    Workload type
Content                     Pure benign, Pure malicious, Mixed
Form                        Executable, Trace

Sources of exploits for malicious workloads:
Metasploit: http://metasploit.com
Exploit Database: http://www.exploit-db.com/
PacketStorm: http://packetstormsecurity.com/
SecurityFocus: http://www.securityfocus.com/
Workloads: Honeypots
§  Honeypots --- decoy systems deployed to attract attackers; the attack activity they capture can serve as malicious workloads
Workloads: Trace form
Repository   Content    Activities   Labelled   Realistic   Anonymized   Metadata
CAIDA        Mixed      Netw.        No         Yes         Yes          Yes
DEFCON       Malicious  Netw.        No         No          No           No
DARPA        Mixed      Netw./Host   Yes        No          No           Yes
ITA          Benign     Netw.        No         Yes         Yes          No
LBNL         Benign     Netw.        No         Yes         Yes          Yes
MAWILab      Mixed      Netw.        Yes        Yes         Yes          Yes

The DARPA datasets: http://www.ll.mit.edu/ideval/data/
"Ground truth" is important
Metrics
Metric                 Formula
False negative rate    β = P(¬A | I)
True positive rate     1 − β = P(A | I)
False positive rate    α = P(A | ¬I)
True negative rate     1 − α = P(¬A | ¬I)

A: the IDS generates an alert
I: an attack is performed
P: probability

These metrics originate from signal detection theory:
J. Hancock and P. Wintz, Signal Detection Theory. New York: McGraw-Hill, 1966.
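Given labelled ground truth, the four rates follow directly from the confusion counts. A small sketch of the arithmetic (function and argument names are illustrative):

def detection_rates(tp, fp, tn, fn):
    """Compute the four basic IDS metrics from raw counts.

    tp: attacks that triggered an alert        -> contributes to P(A | I)
    fn: attacks that triggered no alert        -> beta, P(not A | I)
    fp: benign events that triggered an alert  -> alpha, P(A | not I)
    tn: benign events with no alert            -> P(not A | not I)
    """
    beta = fn / (tp + fn)    # false negative rate
    alpha = fp / (fp + tn)   # false positive rate
    return {
        "false_negative_rate": beta,
        "true_positive_rate": 1 - beta,
        "false_positive_rate": alpha,
        "true_negative_rate": 1 - alpha,
    }

# Example: detection_rates(tp=42, fn=8, fp=5, tn=945)
# -> beta = 0.16, 1 - beta = 0.84, alpha ~ 0.0053, 1 - alpha ~ 0.9947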
Metrics (2)
§  ROC (Receiver Operating Characteristic) curve
§  Plots the true positive rate (1 − β) against the corresponding false positive rate (α) for each IDS operating point
§  Def.: IDS operating point --- an IDS configuration that yields a specific (α, 1 − β) pair
§  Common goal: identification of an optimal operating point
§  Intrusion detection capability (C_ID) --- a related summary metric

[Figure: an example ROC curve --- true positive rate plotted against false positive rate]
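Constructing the curve is then a matter of measuring (α, 1 − β) at several configurations and plotting the points; a sketch with made-up data:

import matplotlib.pyplot as plt

# Hypothetical (alpha, 1 - beta) pairs, one per IDS operating point.
operating_points = [(0.0005, 0.42), (0.001, 0.61), (0.005, 0.78), (0.02, 0.90)]

fpr, tpr = zip(*operating_points)
plt.plot(fpr, tpr, marker="o")
plt.xlabel("False positive rate (alpha)")
plt.ylabel("True positive rate (1 - beta)")
plt.title("ROC curve of an IDS under test")
plt.show()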
Measurement methodology
§  Attack detection accuracy is not the only relevant IDS property
§  Is accuracy of any relevance if attacks are detected too late?
§  Attack-detection-related properties: attack detection accuracy, attack coverage, resistance to evasion techniques, attack detection and reporting speed
§  Resource-consumption-related properties: CPU consumption, memory consumption, network consumption
§  Others: performance overhead, workload processing capacity
IDS Evaluation: Historical overview
§  1996-1997: Puketza et al. develop an approach and a framework for evaluating IDSes in a systematic manner
§  1998: Debar et al. develop a workbench for evaluating IDSes
§  1998-2000: Researchers from Lincoln Laboratory at MIT generate trace files for evaluating IDSes (i.e., the DARPA datasets) and evaluate multiple IDSes
§  2000-2014: Small-scale IDS evaluation studies are carried out by researchers designing novel IDSes and occasionally appear in trade magazine articles
§  2011: Dumitras et al. present the WINE datasets and a platform for evaluating IT security systems
Advanced topics
IDSes in virtualized environments
Evaluation of IDSes
Evaluation of IDSes in virtualized environments
IDSes in virtualized environments
§  IDSes that detect virtualization-specific attacks
§  Attacks targeting hypervisors
§  Hypercalls
§  Analogous to system calls, but directed at the hypervisor
§  A critical attack surface of hypervisors [Rutkowska, J., Wojtczuk, R. @ BlackHat USA 2008]
§  Hypercall IDSes
§  Examples: Collabra, Xenini, CC Detector, OSSEC, ...
§  Typically components in the hypervisor, anomaly-based
An open issue
§  How do we extensively evaluate the accuracy of hypercall IDSes?
§  There are no suitable workloads: no traces exist, and attack scripts targeting hypercall (hypervisor) weaknesses are extremely rare
§  An approach for generating IDS evaluation workloads:
§  Injection of malicious hypercall activities (e.g., attacks, covert channel operations) during regular operation of VMs
§  Enables live testing of hypercall IDSes
Attack injection
§  hInjector
§  Publicly available at https://github.com/hinj/hInj
[Figure: hInjector architecture --- a user-supplied configuration drives a loadable kernel module (LKM) with Filter and Injector components in the kernel of the malicious VM (MVM); the Injector invokes the hypervisor's hypercall handlers, the injected activity is recorded in logs, and the IDS under test, running in a secure VM (SVM), observes the hypervisor (vCPU monitors, shared_info)]
Attack injection (2)
§  Design criteria for realistic and practically feasible IDS
evaluation
§  Injection of realistic attacks [35 PoCs, new attacks can be
easily configured]
§  Injection during regular system operation
§  Non-disruptive attack injection
§  Low performance overhead
IDS evaluation experiments
§  IDS under test: Xenini [Maiero et al. 2011]
§  Models behavior as sequences of hypercalls of length n [n = 10]
§  Calculates anomaly scores between 0 and 1 and fires an alert if a given threshold th is exceeded
§  Scenarios
§  [Scenario #1] Evaluate the attack detection accuracy of Xenini for th in [0.1; 0.5]
§  [Scenario #2] Evaluate Xenini's ability to detect IDS-evasive attacks --- "mimicry" and "smoke screen" attacks
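A minimal sketch of this kind of sequence-based anomaly scoring (the windowing and scoring details are simplifications, not Xenini's exact algorithm):

from collections import deque

def train(trace, n=10):
    """Collect the set of hypercall n-grams seen during benign operation."""
    known, window = set(), deque(maxlen=n)
    for call in trace:
        window.append(call)
        if len(window) == n:
            known.add(tuple(window))
    return known

def anomaly_score(trace, known, n=10):
    """Fraction of n-grams in the trace never seen during training (0..1)."""
    window, total, unseen = deque(maxlen=n), 0, 0
    for call in trace:
        window.append(call)
        if len(window) == n:
            total += 1
            unseen += tuple(window) not in known
    return unseen / total if total else 0.0

# Fire an alert if the score exceeds the configured threshold th:
# alert = anomaly_score(observed_trace, known_ngrams) > 0.5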
IDS evaluation experiments (2)
§  The SPECvirt_sc2013 environment

[Figure: experimental testbed --- client machines run the SPECweb2005, SPECimap, SPECjAppServer2004, and SPECbatch workload drivers against six server VMs hosted on a Xen 4.4.1 hypervisor]

Server VMs (each running Linux 3.17.2 x86_64):
§  Web server VM [front-end]: Apache 2.4.7
§  Network file server VM [back-end]: sshfs 2.5
§  IMAP mail server VM [front-end]: Dovecot 2.2.9
§  Batch server VM [front-end]: SPECbatch server
§  J2EE application server VM [front-end]: GlassFish 4.0
§  Database server VM [back-end]: PostgreSQL 9.3.5
IDS evaluation experiments (3)
§  [Scenario #1]
§  IDS training
§  Attack injection and calculation of metric values

Targeted vulnerability   Detected (th = 0.5)
CVE-2012-3495            ✔
CVE-2012-5525            ✗
CVE-2012-5513            ✔
CVE-2012-5510            ✔
CVE-2013-4494            ✗
CVE-2013-1964            ✗

[Figure: ROC curve of Xenini (true positive rate vs. false positive rate), with operating points annotated with the values 0.078 × 10^-2, 0.079 × 10^-2, 0.23 × 10^-2, and 0.3 × 10^-2]
IDS evaluation experiments (4)
[Figure: "Mimicry" version of CVE-2013-1964 --- the attack hypercalls (get_debug_reg, event_channel_op, vcpu_op, grant_table_op) are embedded in sequences of benign hypercall activity (iret, event_channel_op, stack_switch)]

[Figure: "Smoke screen" version of CVE-2013-1964 --- the attack hypercalls are spread over benign hypercall activity (repeated grant_table_op calls) with 0.5-second gaps, amounting to ~13647 hypercall sequences]
IDS evaluation experiments (5)
§  [Scenario #2]
§  IDS training until time ts = 5285 sec.
§  Attack injection and calculation of metric values

Targeted vulnerability   Anomaly score (unmodified)   "Mimicry"   "Smoke screen"
CVE-2012-3495            1.0                          0.17        0.25
CVE-2012-5513            0.32                         0.107       0.28
CVE-2012-5510            1.0                          0.14        0.31
CVE-2013-4494            0.21                         0.14        0.14
CVE-2013-1964            0.25                         0.14        0.14
THANK YOU!