Document Type

Report

Abstract

Anomaly detection focuses on modeling normal behavior and identifying significant deviations, which could be novel attacks. The previously proposed LERAD algorithm can efficiently learn a succinct set of comprehensible rules for detecting anomalies. We conjecture that LERAD can eliminate rules with potentially high coverage, which can lead to missed detections. This study proposes weights that approximate rule confidence and are learned incrementally. We evaluate our algorithm on various network and host datasets. Compared to LERAD, our technique detects more attacks at low false alarm rates with minimal computational overhead.
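The abstract's central idea of incrementally learned weights approximating rule confidence can be illustrated with a minimal sketch. This is not the paper's actual LERAD formulation; the `Rule` class, its attribute names, and the scoring function are illustrative assumptions, showing only the general pattern of maintaining per-rule match/conform counts whose ratio approximates confidence, and scoring a record by the weights of the rules it violates.

```python
# Hedged sketch (not the paper's algorithm): each rule tracks how often
# its antecedent holds and how often the consequent also holds, so the
# ratio incrementally approximates the rule's confidence.

class Rule:
    def __init__(self, antecedent, consequent_values):
        self.antecedent = antecedent          # dict: attribute -> required value
        self.allowed = set(consequent_values) # allowed values for the target attribute
        self.matched = 0                      # times the antecedent held
        self.conformed = 0                    # times the consequent also held

    def applies(self, record):
        return all(record.get(a) == v for a, v in self.antecedent.items())

    def update(self, record, attr):
        # One incremental training step: refine counts toward confidence.
        if self.applies(record):
            self.matched += 1
            if record.get(attr) in self.allowed:
                self.conformed += 1

    @property
    def weight(self):
        # Approximates P(consequent | antecedent); zero until first match.
        return self.conformed / self.matched if self.matched else 0.0


def anomaly_score(record, rules, attr):
    # Higher-confidence rules contribute more to the score when violated.
    return sum(r.weight for r in rules
               if r.applies(record) and record.get(attr) not in r.allowed)
```

In this sketch, a rule whose consequent rarely fails during training accumulates a weight near 1, so its violation at detection time dominates the anomaly score, while frequently violated (low-confidence) rules contribute little.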

Publication Date

1-19-2007
