Document Type

Report

Abstract

Until recently, the state of the art in lossless data compression was prediction by partial match (PPM). A PPM model estimates the next-symbol probability distribution by combining statistics from the longest matching contiguous contexts in which each symbol value is found. We introduce a context mixing model which improves on PPM by allowing contexts that are arbitrary functions of the history. Each model independently estimates a probability and a confidence that the next bit of data will be 0 or 1. Predictions are combined by weighted averaging. After each bit is arithmetic coded, the weights are adjusted along the cost gradient in weight space to favor the most accurate models. Context mixing compressors, as implemented by the open source PAQ project, are now top ranked on several independent benchmarks.
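
The mixing-and-update loop summarized in the abstract can be illustrated with a minimal sketch. The class name, learning rate, and the exact gradient expression below are illustrative assumptions, not the published PAQ mixer, which weights each model's evidence by bit counts (confidence) and differs in several details; the sketch only shows weighted averaging of per-model bit probabilities followed by a gradient step on the coding cost.

```python
import math

class ContextMixer:
    """Toy mixer: combines per-model estimates of P(next bit = 1) by
    weighted averaging and, after the bit is coded, moves each weight
    downhill on the coding cost so the more accurate models gain weight.
    Illustrative only; not the PAQ update rule."""

    def __init__(self, n_models, lr=0.02):
        self.w = [1.0 / n_models] * n_models   # mixing weights
        self.lr = lr                           # gradient step size

    def mix(self, probs):
        """Weighted average of the models' probabilities."""
        total = sum(self.w)
        p = sum(wi * pi for wi, pi in zip(self.w, probs)) / total
        return min(max(p, 1e-6), 1 - 1e-6)     # keep p away from 0 and 1

    def update(self, probs, bit):
        """Adjust weights along the gradient of -log2 P(bit)."""
        p = self.mix(probs)
        for i, pi in enumerate(probs):
            # d(cost)/d(w_i) for linear mixing, constant factors folded into lr
            grad = -(pi - p) / p if bit == 1 else (pi - p) / (1 - p)
            self.w[i] = max(self.w[i] - self.lr * grad, 1e-6)

# Usage: two toy models predicting a short bit sequence.
mixer = ContextMixer(2)
for bit in [1, 1, 1, 1, 0, 1]:
    predictions = [0.9, 0.5]                       # per-model P(bit = 1) this step
    p = mixer.mix(predictions)
    cost = -math.log2(p if bit == 1 else 1 - p)    # bits an arithmetic coder would spend
    mixer.update(predictions, bit)
```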

Publication Date

12-21-2005
