A couple of days ago (2006) I answered a question on random access in gzip streams and as the result of that conversation realised that there isn't much documentation out there, so here's a brain dump related to compression.
Compression and delta-encoding are part of the same general case, which is why they are inter-mingled below.
Making a stream smaller requires finding redundancy and duplication in data. Random data does not compress and never will. Compression tends to take three main steps:
| | Transform / Preprocessing | Duplicate elimination | Bit reduction |
|---|---|---|---|
| deflate | | ≤32kB sliding dictionary | Huffman tree |
| bzip2 | RLE, ≤900kB BWT, then Move-to-Front | | Multiple Huffman trees |
| LZMA | | ≤1GB sliding dictionary | Range coding |
| rzip | | ≤900MB LZ77 dictionary | Bzip2 stack |
| PNG (deflate) | Line:next-line, pixel:next-pixel predictors | ≤32kB sliding dictionary | Huffman tree |
| 7-zip (deflate, LZMA, bzip2) | BCJ2 executable jump compression | (various) | (various) |
| MS LZX (aka cabinet) | 0xe8 x86 jump convertor | 32kB-2MB sliding dictionary | Huffman tree |
| Courgette | PE/COFF x86 disassembler | modified bsdiff | 7-zip stack |
I have some compression projects on the go at the moment:
pyflate.py
: stand-alone pure-Python .gz/.bz2 decoder. pyflate.py will decode and decompress DEFLATE (as found in zlib, gzip, ZIP/pkzip, PNG, PDF...) and bz2 streams (from the BWT-based Bzip2). This is mainly designed as a research project to learn about the algorithms and for producing indexes for random access. Around 650 lines of Python. BSD/GPL/LGPL/DFSG licensed.
There was a later, completely refactored version that fixed a number of bugs. I don't have a copy of it—it died along with a laptop; remember to sync early and sync often!
succinct
: efficient downloading for upgrades; handles .tar.gz, .deb (ar → .tar.gz x2), .zip (.jar, .odt), .pdf, .rpm
debfs
: live filesystem from a bunch of .deb/.tar archives. Using debfs will go something like this:
    $ cd ~/alternate-cd
    $ find pool/ -name \*.deb | mkfs.debfs > livecd.debfs
    $ echo livecd.debfs will be tiny. It just contains meta data.
    $ mount -t debfs livecd.debfs
debfs will then populate the filesystem; files will go into the root, control scripts will be presented in /var/lib/dpkg/info/* and the debs themselves aliased (in directly compressed form) in /var/cache/apt/archives/*; deb unpacking is avoided but the control scripts will still need to be configured.
At this stage, a unionfs needs to be mounted. Once editing has been completed, the contents of the backing store for the unionfs (tmpfs, or an on-disk directory tree) can be tar'ed up into a tarball and added to the list of debfs files. There's also an opportunity to take the deletion data from the unionfs (if there was any left over) and remove those entries from the .debfs manifest.
kzip, advancecomp (p7zip converging deflate implementation), bjwflate/deflopt
Ken Silverman describes, of kzip, "The main one is an exhaustive search of all patterns. The other one is my advanced block splitter."
Ben Jos Walbeehm alludes to an unimplemented idea for bjwflate: "Blockwise multiplexing! (like a blockwise ZipMax! (zip level --> file level --> block level)).". I do not know what this would be, but I've wondered whether it would be possible to carefully split the Huffman trees into two balanced halves, allowing them to be used for separate purposes (streams). A concocted example I can think of would be keeping the data and the tags in an XML document apart.
Required for both succinct and debfs, but likely to have wider use. Store a lookup-table of flush points in the FEXTRA field at the start of a gzip header (or store this meta-data separately). The data can be retrospectively scanned from an existing stream and needs to list tuple pairs of plain-text offset and compressed offset.
Ideally this data could be extended to provide a hint about which method (e.g. gzip -9, --rsyncable, or the necessary advancecomp options) is required to recompress each block, if the stream needs to be recreated identically.
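A sketch of building such an index, assuming (unlike the retrospective scan described above) that the index is produced at compression time, with Z_FULL_FLUSH points used as the seek positions; the helper name is made up and storing the table in a FEXTRA field would be a separate wrapping step:

    import zlib

    def compress_with_index(data, chunk=64 * 1024):
        """Produce a raw deflate stream with a full flush every `chunk` bytes
        and return (compressed, index), where index is a list of
        (plain-text offset, compressed offset) tuples.  Z_FULL_FLUSH resets
        the 32kB history, so each recorded point is a safe place to start
        decompressing from.  Illustrative helper, not part of succinct/debfs."""
        co = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)  # raw deflate
        out = []
        index = [(0, 0)]
        written = 0
        for pos in range(0, len(data), chunk):
            piece = co.compress(data[pos:pos + chunk])
            piece += co.flush(zlib.Z_FULL_FLUSH)     # history reset = seek point
            out.append(piece)
            written += len(piece)
            index.append((min(pos + chunk, len(data)), written))
        out.append(co.flush(zlib.Z_FINISH))
        return b''.join(out), index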
I've been working on this issue for about a year now and I keep making discoveries and jumps in my understanding of the problem. There are overlapping problem domains.
Data can be decoded from the middle of a block as the Huffman tree details from the closest preceding header can be retrieved. Stored/literal blocks become completely random-access since no alignment map is required.
The amount of dictionary required can be computed by decompressing from that point in the stream and recording the furthest distance referenced in the history buffer. It is not necessary to know the contents of the history buffer in order to produce this analysis. Indeed, the actual bytes accessed within the history buffer could be recorded as a bitmap, and a sparse dictionary specific to that starting position constructed.
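A minimal sketch of that analysis, assuming the decoder has already been reduced to a stream of tokens; the literal/(length, distance) token format here is an illustrative simplification of what a deflate decoder yields:

    def required_history(tokens):
        """Given tokens decoded from some mid-stream starting point (an int for
        a literal byte, or a (length, distance) tuple for a copy), return how
        many bytes of pre-existing history are actually needed, i.e. the
        furthest reach back before the starting position."""
        produced = 0      # bytes output since the starting point
        furthest = 0      # deepest reach into pre-existing history
        for tok in tokens:
            if isinstance(tok, int):          # literal byte
                produced += 1
            else:                             # (length, distance) copy
                length, distance = tok
                reach = distance - produced   # how far before our start?
                if reach > furthest:
                    furthest = reach
                produced += length
        return max(furthest, 0)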
During compression it would be possible to have certain sections not reference any history. Temporarily turning off history access to make a particular block independently accessible may be useful in cases such as the file header within a tar archive. As the history lookup and Huffman decoding are independent of each other, this would not require resetting or restarting the block.
Add gzip -0 forced store support (can be useful when stacking compression algorithms).
7zip (.7z) is a container format for a series of other algorithms. Many algorithms are tried for each set of data and the one producing the smallest result is picked. The two highest-performing methods are bzip2 and a new kid on the block called LZMA. Methods may be stacked and the data preprocessed before compression.
LZMA is effectively deflate (zlib, gzip, zip) with a larger dictionary size, 32MB instead of 32kB. LZMA stands for Lempel-Ziv-Markov chain Algorithm: after string back-references have been located, values are reduced using a Markov-chain range-encoder (aka arithmetic coding) instead of Huffman.
7zip also produces gains on x86, ARM and other binary format files by converting relative jumps into absolute addresses, increasing the likelihood of a match. In the case of some code where multiple places (different points in a 'for' loop) branch to the same location:
    .loop_start:
        test eax, eax
        jcc .loop_start     (jcc -4)
        ...
        jmp .loop_start     (jmp -23)
        ...
        jnz .loop_start     (jnz -56)
Then the BCJ/BCJ2 preprocessors would step through the stream and convert all relative locations into jumps to the same absolute value, e.g. 7472. The change is performed by adding the value already present to the offset in the current file. The later steps of the compression (string matching, Huffman/range-coding) easily take care of compressing the three duplicated values in the stream.
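A rough sketch of the idea for the 0xE8 (CALL) case, in the spirit of BCJ and the LZX converter but not the exact algorithm either of them uses:

    import struct

    def e8_filter(code):
        """Rewrite the 32-bit relative displacement following each 0xE8 CALL
        opcode as an absolute target (file offset plus displacement), so that
        calls to the same location become identical byte sequences for the
        later match-finding stage.  Running the same scan with the inverse
        arithmetic undoes the transform."""
        out = bytearray(code)
        i = 0
        while i + 5 <= len(out):
            if out[i] == 0xE8:                          # CALL rel32
                rel = struct.unpack_from('<i', out, i + 1)[0]
                target = (i + 5 + rel) & 0xFFFFFFFF     # absolute target
                struct.pack_into('<I', out, i + 1, target)
                i += 5
            else:
                i += 1
        return bytes(out)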
Bzip2 is block-based, not stream-based as gzip is. A fixed amount of input is taken (normally 900kB) and the block is considered on its own in a completely independent fashion. First a BWT transform is performed, placing likely-similar letters closer together. Next is a Move-to-Front transform, resulting in values clustered around zero. Both BWT and MTF processing steps are reversible and do not alter the length of the text. Final compression is performed using Huffman, much the same as for gzip.
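Both transforms are easy to sketch naively (real bzip2 adds an initial RLE pass, uses suffix sorting, and records the index of the original rotation; the sentinel byte below is an assumption made for brevity):

    def bwt(block):
        """Naive Burrows-Wheeler transform of one block by sorting all
        rotations.  Assumes the 0x00 sentinel does not occur in the block."""
        block = block + b'\x00'
        rotations = sorted(block[i:] + block[:i] for i in range(len(block)))
        return bytes(rot[-1] for rot in rotations)

    def mtf(data):
        """Move-to-front: replace each byte with its position in a recency
        list, so the runs of similar bytes produced by the BWT become values
        clustered around zero."""
        recency = list(range(256))
        out = []
        for b in data:
            i = recency.index(b)
            out.append(i)
            recency.insert(0, recency.pop(i))
        return out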
Julian Seward includes a section about the limitations of the bzip2 stream format. A further limitation not listed is that there is no obvious way to add arbitrary data in a compatible manner. Gzip provides storage space for a timestamp, filename, file comment and a method for structured meta-data storage.
The only flexibility that I've located within the format is in the encoding of the Huffman tables. The Huffman tables store the code length for each used symbol in the data stream. A zero-bit means continue at the current bit-length; a one-bit means modify, with the following bit indicating whether to increment or decrement.
    # current code length is 5, current symbol is A
    0    B: 5 bits
    0    C: 5 bits
    0    D: 5 bits
    11   # decrement to 4-bits
    0    E: 4 bits
    10   # increment to 5-bits
    0    F: 5 bits
What is not documented is that multiple increments and decrements can be intermixed before the submission of a continue. This is the scope that may allow embedding of meta-data in a backwards-compatible manner. A transform of binary data into increment and decrement operations (50% efficiency):
    # current code length is 5, current symbol is A
    # dec, dec, inc, dec, inc, dec, dec, inc
    11 11 10 11 10 11 11 10    0    B: 3 bits
However, to the current decoder, the apparent bit length must remain within the range of valid lengths (1-19) and hovering around a central value. One option is to encode each bit of our binary meta-data as clustered pairs of operations; increment-decrement being one bit value and decrement-increment being the other (25% efficiency):
    # current code length is 5, current symbol is A
    # dec, inc, inc, dec, inc, dec, dec, inc
    (11 10) (10 11) (10 11) (11 10)    0    B: 5 bits
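A minimal sketch of that pairing; which pair means '0' and which means '1' is an arbitrary choice made here for illustration:

    def embed_meta_bits(bits):
        """Encode a string of '0'/'1' meta-data bits as balanced pairs of
        delta operations (25% efficiency): '0' -> decrement-then-increment
        ('11' '10'), '1' -> increment-then-decrement ('10' '11').  Each pair
        cancels out, so an existing bzip2 decoder sees an unchanged code
        length once the final continue bit is read."""
        pair = {'0': '11' + '10', '1': '10' + '11'}
        return ''.join(pair[b] for b in bits) + '0'   # trailing continue bit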
To gain back some of the lost efficiency, we could use one of several "DC free" encoding schemes. One option is 8b/10b encoding, which ensures a maximum walk of 5 bits and a result after each byte that is not more than 1 bit out in either direction. 8b/10b encoding brings long-term DC balance at the cost of look-up table based decoding (40% efficiency).
    # current code length is 5, current symbol is A
    # dec, inc, inc, dec, inc, dec, dec, inc
    ((11 10 10 11 10 11) (11 11 10 10))    0    B: 5 bits
Generally the order of files inside a tar archive makes no difference to the decoding of the files. A pre-processing step can be taken to reorder the individual file blocks in a tar stream to increase the likelihood of better compression. Note that if the archive or its contents are checksummed, then the order of components may be significant.
To take advantage of cross-file similarity in a tarball, the contents of two separate, but similar, files must be located within the same window. The window size is normally 32kB for gzip, or 900kB with bzip2 -9. Ideally some effort can also be made to ensure that the gzip, or bzip2 stream is not reset between two similar files—which would result in no compression improvement from having matched and located the similar data closely together.
If either file is bigger than the window size of the expected compression algorithm, then the comparison that needs to be performed is whether the end of one file closely matches the start of the second file.
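One possible way to score that end-versus-start comparison (a hypothetical heuristic, not a method from the text: it uses zlib's preset-dictionary support to see how much the tail of one file helps compress the head of the next):

    import zlib

    def tail_head_affinity(a, b, window=32 * 1024):
        """Rough score for ordering file A before file B: how much smaller the
        first `window` bytes of B compress when the last `window` bytes of A
        are supplied as a preset dictionary.  Larger is better."""
        head = b[:window]
        alone = len(zlib.compress(head, 9))
        primed = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                                  zdict=a[-window:])
        together = len(primed.compress(head) + primed.flush())
        return alone - together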
For a tarball containing two files A and B, the possible file orderings are A,B or B,A. Either ordering generates a tar file of equivalent length; however, when compressed with a dictionary-based compression algorithm, one ordering is likely to produce a smaller final compressed length than the other. To solve completely for any set of files there are N! possibilities to investigate.
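To make the N! point concrete, the exhaustive search can be written directly (a hypothetical helper, with zlib standing in for whichever compressor is actually used; it is only feasible for a handful of files, which is why heuristics are needed):

    import itertools
    import zlib

    def best_order_bruteforce(files):
        """Try every ordering of `files` (a dict mapping member name to
        content bytes) and return the one whose concatenation compresses
        smallest."""
        def cost(order):
            blob = b''.join(files[name] for name in order)
            return len(zlib.compress(blob, 9))
        return min(itertools.permutations(files), key=cost)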
Performing the tarfile reordering is a four-stage process:
Once the window size is known, an additional step can be inserted between (3) and (4) to rearrange the cluster chains so that (ideally) chains are not broken across a window boundary if using fixed-size blocks, such as with bzip2.
A simpler ordering method, involving clustering based purely on filename and extension, can be produced with a command similar to:
    cat filelist.txt | rev | sort | rev > neworder.txt
This sorting process works by reversing each line in the file; hello.text would become txet.olleh, allowing files with similar file extensions or basenames to be ordered adjacently. The filenames are reversed again, producing the final file order. This method appears to work well for language-packs containing translated strings, resulting in a 14% improvement when compressing with bzip2 both before and after reordering, or 2% if using gzip (most files are larger than the 32kB window size).
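The same ordering in Python, for use inside a script rather than a pipeline:

    def reverse_sort(filenames):
        """Equivalent of `rev | sort | rev`: order paths by their reversed
        text so that files sharing an extension or basename end up next to
        each other in the archive."""
        return sorted(filenames, key=lambda name: name[::-1])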
I came across a paper (without source code) which discusses pre-ordering for efficient zdelta encoding as well as tarfile ordering: Compressing File Collections with a TSP-Based Approach (PDF). In this paper a relatively simple, greedy method is chosen, yielding compression improvements of ~10-15% on webpages of online news services.
One of the problems in transmitting binary files through email is that certain characters, or certain sequences of characters, take on special meanings. Examples of this include:
The normal way to avoid NUL bytes or other undesirable sequences appearing in the final output message is to use an established encoding method such as uuencode, base64 or Ascii85. The three methods trade a larger output size in exchange for a reduced symbol set, such as upper- and lowercase Latin letters, Arabic numerals and punctuation.
It may, with repeated trial and error, be possible to produce a valid deflate stream that does not contain the blacklisted sequences.
In the case of simply needing to avoid the NUL (zero) byte appearing in the output, the following problems all present themselves:
The idea would be to have an encoding that is understood by existing decoders and which has an expansion less than the ~1.4 of uuencode and base64, or ~1.27 of Ascii85.
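Those expansion factors can be checked quickly with Python's base64 module (a85encode needs Python 3.4+); the ratios below exclude line-wrapping overhead, which is where the ~1.4 figure comes from:

    import base64
    import os

    raw = os.urandom(3000)
    print(len(base64.b64encode(raw)) / len(raw))   # ~1.33 before line-wrapping
    print(len(base64.a85encode(raw)) / len(raw))   # ~1.25 before line-wrapping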
    ADD | LZ | EOF | COPY | REPAIR | DELETE | REPLACE | ALTER
    diffe:    Y  Y  Y
    rzip:     Y  Y
    gzip:     1  3-258@1-32768  Y
    vcdiff:   Y  Y  N  Y
    reijers:  Y  1-65536  Y  Y
    bsdiff:   Y  Y  Y  Y
    succinct: Y  Y  Y  Y  Y
Succinct wins because
After coming up with some crack-pot idea, it's often fun to go hunting while trying to come up with guesses for what other people may have called related ideas!
2007-03-17: Had a good long in-depth IRC discussion with Ben Jos Walbeehm about deflate optimisation and the inner-workings of BJWflate and DeflOpt along with speculating about what KZIP might be doing. DeflOpt is mainly working in the area of optimising the storage of the Huffman trees themselves but the low-hanging fruit is likely in the area of more clever block splitting.
2009-07-20: Briefly add Google's Courgette x86 pre-processor.
2009-07-24: Minor grammatical tweaks, update of a broken URL, and a line noting the similarity/relationship between compression and delta generation.