flate

Imports

Imports #

"math"
"math/bits"
"sort"
"bufio"
"io"
"math/bits"
"strconv"
"sync"
"errors"
"fmt"
"io"
"math"
"math"
"io"

Constants & Variables

BestCompression const #

const BestCompression = 9

BestSpeed const #

const BestSpeed = 1

DefaultCompression const #

const DefaultCompression = -1

HuffmanOnly const #

HuffmanOnly disables Lempel-Ziv match searching and only performs Huffman entropy encoding. This mode is useful in compressing data that has already been compressed with an LZ style algorithm (e.g. Snappy or LZ4) that lacks an entropy encoder. Compression gains are achieved when certain bytes in the input stream occur more frequently than others. Note that HuffmanOnly produces a compressed output that is RFC 1951 compliant. That is, any valid DEFLATE decompressor will continue to be able to decompress this output.

const HuffmanOnly = -2
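
A minimal usage sketch (not part of the package documentation): HuffmanOnly is passed as the level argument to NewWriter, and the resulting output is still standard DEFLATE.

package main

import (
	"bytes"
	"compress/flate"
	"log"
)

func main() {
	var buf bytes.Buffer
	// Entropy coding only; no Lempel-Ziv match searching is performed.
	w, err := flate.NewWriter(&buf, flate.HuffmanOnly)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write([]byte("data already compressed with an LZ-style algorithm")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
	// buf now holds output that any RFC 1951 compliant decompressor can read.
}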

NoCompression const #

const NoCompression = 0

badCode const #

const badCode = 255

baseMatchLength const #

The LZ77 step produces a sequence of literal tokens and <length, offset> pair tokens. The offset is also known as distance. The underlying wire format limits the range of lengths and offsets. For example, there are 256 legitimate lengths: those in the range [3, 258]. This package's compressor uses a higher minimum match length, enabling optimizations such as finding matches via 32-bit loads and compares.

const baseMatchLength = 3

baseMatchOffset const #

const baseMatchOffset = 1

bufferFlushSize const #

bufferFlushSize indicates the buffer size after which bytes are flushed to the writer. Should preferably be a multiple of 6, since we accumulate 6 bytes between writes to the buffer.

const bufferFlushSize = 240

bufferReset const #

Reset the buffer offset when reaching this. Offsets are stored between blocks as int32 values. Since the offset we are checking against is at the beginning of the buffer, we need to subtract the current and input buffer to not risk overflowing the int32.

const bufferReset = math.MaxInt32 - maxStoreBlockSize*2

bufferSize const #

bufferSize is the actual output byte buffer size. It must have additional headroom for a flush which can contain up to 8 bytes.

const bufferSize = bufferFlushSize + 8

codeOrder var #

var codeOrder = [...]int{...}

codegenCodeCount const #

The number of codegen codes.

const codegenCodeCount = 19

codegenOrder var #

The odd order in which the codegen code sizes are written.

var codegenOrder = []uint32{...}

endBlockMarker const #

The special code used to mark the end of a block.

const endBlockMarker = 256

errWriterClosed var #

var errWriterClosed = errors.New("flate: closed writer")

fixedHuffmanDecoder var #

var fixedHuffmanDecoder huffmanDecoder

fixedLiteralEncoding var #

var fixedLiteralEncoding *huffmanEncoder = generateFixedLiteralEncoding()

fixedOffsetEncoding var #

var fixedOffsetEncoding *huffmanEncoder = generateFixedOffsetEncoding()

fixedOnce var #

Initialize the fixedHuffmanDecoder only once upon first use.

var fixedOnce sync.Once

hashBits const #

const hashBits = 17

hashMask const #

const hashMask = (1 << hashBits) - 1

hashSize const #

const hashSize = 1 << hashBits

hashmul const #

const hashmul = 0x1e35a7bd

huffOffset var #

huffOffset is a static offset encoder used for huffman only encoding. It can be reused since we will not be encoding offset values.

var huffOffset *huffmanEncoder

huffmanChunkBits const #

const huffmanChunkBits = 9

huffmanCountMask const #

const huffmanCountMask = 15

huffmanNumChunks const #

const huffmanNumChunks = 1 << huffmanChunkBits

huffmanValueShift const #

const huffmanValueShift = 4

inputMargin const #

These constants are defined by the Snappy implementation so that its assembly implementation can fast-path some 16-bytes-at-a-time copies. They aren't necessary in the pure Go implementation, as we don't use those same optimizations, but using the same thresholds doesn't really hurt.

const inputMargin = 16 - 1

lengthBase var #

The length indicated by length code X - LENGTH_CODES_START.

var lengthBase = []uint32{...}

lengthCodes var #

The length code for length X (MIN_MATCH_LENGTH <= X <= MAX_MATCH_LENGTH) is lengthCodes[length - MIN_MATCH_LENGTH]

var lengthCodes = [...]uint32{...}

lengthCodesStart const #

The first length code.

const lengthCodesStart = 257

lengthExtraBits var #

The number of extra bits needed by length code X - LENGTH_CODES_START.

var lengthExtraBits = []int8{...}

lengthShift const #

2 bits: type (0 = literal, 1 = EOF, 2 = Match, 3 = Unused)
8 bits: xlength = length - MIN_MATCH_LENGTH
22 bits: xoffset = offset - MIN_OFFSET_SIZE, or literal

const lengthShift = 22
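
A hedged sketch of the bit layout described above. The identifiers (sketchToken, packMatch, and the sketch-prefixed constants) are illustrative, not the package's own code.

package tokensketch

const (
	sketchLengthShift = 22
	sketchOffsetMask  = 1<<sketchLengthShift - 1 // low 22 bits: xoffset
	sketchMatchType   = 1 << 30                  // top 2 bits: token type
)

type sketchToken uint32

// packMatch combines xlength (length - MIN_MATCH_LENGTH) and xoffset
// (offset - MIN_OFFSET_SIZE) into a single 32-bit match token.
func packMatch(xlength, xoffset uint32) sketchToken {
	return sketchToken(sketchMatchType + xlength<<sketchLengthShift + xoffset)
}

func (t sketchToken) xlength() uint32 { return (uint32(t) - sketchMatchType) >> sketchLengthShift }
func (t sketchToken) xoffset() uint32 { return uint32(t) & sketchOffsetMask }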

levels var #

var levels = []compressionLevel{...}

literalType const #

const literalType = 0 << 30

logWindowSize const #

const logWindowSize = 15

matchType const #

const matchType = 1 << 30

maxBitsLimit const #

const maxBitsLimit = 16

maxCodeLen const #

const maxCodeLen = 16

maxFlateBlockTokens const #

The maximum number of tokens we put into a single flate block, just to stop things from getting too large.

const maxFlateBlockTokens = 1 << 14

maxHashOffset const #

const maxHashOffset = 1 << 24

maxMatchLength const #

const maxMatchLength = 258

maxMatchOffset const #

const maxMatchOffset = 1 << 15

maxNumDist const #

const maxNumDist = 30

maxNumLit const #

The next three numbers come from the RFC section 3.2.7, with the additional proviso in section 3.2.5 which implies that distance codes 30 and 31 should never occur in compressed data.

const maxNumLit = 286

maxStoreBlockSize const #

const maxStoreBlockSize = 65535

minMatchLength const #

const minMatchLength = 4

minNonLiteralBlockSize const #

These constants are defined by the Snappy implementation so that its assembly implementation can fast-path some 16-bytes-at-a-time copies. They aren't necessary in the pure Go implementation, as we don't use those same optimizations, but using the same thresholds doesn't really hurt.

const minNonLiteralBlockSize = 1 + 1 + inputMargin

numCodes const #

const numCodes = 19

offsetBase var #

var offsetBase = []uint32{...}

offsetCodeCount const #

The largest offset code.

const offsetCodeCount = 30

offsetCodes var #

var offsetCodes = [...]uint32{...}

offsetExtraBits var #

offset code word extra bits.

var offsetExtraBits = []int8{...}

offsetMask const #

const offsetMask = 1<<lengthShift - 1

skipNever const #

const skipNever = math.MaxInt32

tableBits const #

const tableBits = 14

tableMask const #

const tableMask = tableSize - 1

tableShift const #

const tableShift = 32 - tableBits

tableSize const #

const tableSize = 1 << tableBits

typeMask const #

const typeMask = 3 << 30

windowMask const #

const windowMask = windowSize - 1

windowSize const #

const windowSize = 1 << logWindowSize

Type Aliases

CorruptInputError type #

A CorruptInputError reports the presence of corrupt input at a given offset.

type CorruptInputError int64

InternalError type #

An InternalError reports an error in the flate code itself.

type InternalError string

byFreq type #

type byFreq []literalNode

byLiteral type #

type byLiteral []literalNode

token type #

type token uint32

Interfaces

Reader interface #

The actual read interface needed by [NewReader]. If the passed in io.Reader does not also have ReadByte, the [NewReader] will introduce its own buffering.

type Reader interface {
io.Reader
io.ByteReader
}
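
A minimal sketch of satisfying this interface explicitly (the file name data.deflate is hypothetical): *bufio.Reader and *bytes.Reader already provide ReadByte, so wrapping a plain io.Reader avoids the extra buffering NewReader would otherwise add.

package main

import (
	"bufio"
	"compress/flate"
	"io"
	"os"
)

func main() {
	f, err := os.Open("data.deflate") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// *os.File has no ReadByte, so wrap it: *bufio.Reader satisfies flate.Reader
	// and NewReader will not introduce its own buffering layer.
	zr := flate.NewReader(bufio.NewReader(f))
	defer zr.Close()

	if _, err := io.Copy(io.Discard, zr); err != nil {
		panic(err)
	}
}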

Resetter interface #

Resetter resets a ReadCloser returned by [NewReader] or [NewReaderDict] to switch to a new underlying [Reader]. This permits reusing a ReadCloser instead of allocating a new one.

type Resetter interface {
Reset(r io.Reader, dict []byte) error
}
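
A minimal sketch of reusing a single ReadCloser across several DEFLATE streams via Resetter, avoiding an allocation per stream. The streams slice is a placeholder; each element is assumed to hold one complete stream.

package main

import (
	"bytes"
	"compress/flate"
	"io"
)

func main() {
	streams := [][]byte{ /* each element holds one complete DEFLATE stream */ }

	zr := flate.NewReader(nil) // will be Reset before any read
	defer zr.Close()
	for _, s := range streams {
		if err := zr.(flate.Resetter).Reset(bytes.NewReader(s), nil); err != nil {
			panic(err)
		}
		if _, err := io.Copy(io.Discard, zr); err != nil {
			panic(err)
		}
	}
}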

Structs

ReadError struct #

A ReadError reports an error encountered while reading input. Deprecated: No longer returned.

type ReadError struct {
Offset int64
Err error
}

WriteError struct #

A WriteError reports an error encountered while writing output. Deprecated: No longer returned.

type WriteError struct {
Offset int64
Err error
}

Writer struct #

A Writer takes data written to it and writes the compressed form of that data to an underlying writer (see [NewWriter]).

type Writer struct {
d compressor
dict []byte
}

compressionLevel struct #

type compressionLevel struct {
level int
good int
lazy int
nice int
chain int
fastSkipHashing int
}

compressor struct #

type compressor struct {
compressionLevel
w *huffmanBitWriter
bulkHasher func([]byte, []uint32)
fill func(*compressor, []byte) int
step func(*compressor)
bestSpeed *deflateFast
chainHead int
hashHead [hashSize]uint32
hashPrev [windowSize]uint32
hashOffset int
index int
window []byte
windowEnd int
blockStart int
byteAvailable bool
sync bool
tokens []token
length int
offset int
maxInsertIndex int
err error
hashMatch [maxMatchLength - 1]uint32
}

decompressor struct #

Decompress state.

type decompressor struct {
r Reader
rBuf *bufio.Reader
roffset int64
b uint32
nb uint
h1 huffmanDecoder
h2 huffmanDecoder
bits *[maxNumLit + maxNumDist]int
codebits *[numCodes]int
dict dictDecoder
buf [4]byte
step func(*decompressor)
stepState int
final bool
err error
toRead []byte
hl *huffmanDecoder
hd *huffmanDecoder
copyLen int
copyDist int
}

deflateFast struct #

deflateFast maintains the table for matches, and the previous byte block for cross block matching.

type deflateFast struct {
table [tableSize]tableEntry
prev []byte
cur int32
}

dictDecoder struct #

dictDecoder implements the LZ77 sliding dictionary as used in decompression. LZ77 decompresses data through sequences of two forms of commands:

- Literal insertions: Runs of one or more symbols are inserted into the data stream as is. This is accomplished through the writeByte method for a single symbol, or combinations of writeSlice/writeMark for multiple symbols. Any valid stream must start with a literal insertion if no preset dictionary is used.

- Backward copies: Runs of one or more symbols are copied from previously emitted data. Backward copies come as the tuple (dist, length) where dist determines how far back in the stream to copy from and length determines how many bytes to copy. Note that it is valid for the length to be greater than the distance. Since LZ77 uses forward copies, that situation is used to perform a form of run-length encoding on repeated runs of symbols. The writeCopy and tryWriteCopy methods are used to implement this command.

For performance reasons, this implementation performs little to no sanity checks about the arguments. As such, the invariants documented for each method call must be respected.

type dictDecoder struct {
hist []byte
wrPos int
rdPos int
full bool
}
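
A hedged conceptual sketch of the backward-copy command described above, not the package's dictDecoder. The helper lz77Copy is hypothetical; it shows how a copy whose length exceeds its distance overlaps its own output and thereby expands repeated runs.

package lz77sketch

// lz77Copy appends `length` bytes copied from `dist` bytes back in out.
// When length > dist, later iterations read bytes appended earlier in the
// same loop, which is how DEFLATE expresses run-length-encoded repeats.
func lz77Copy(out []byte, dist, length int) []byte {
	start := len(out) - dist
	for i := 0; i < length; i++ {
		out = append(out, out[start+i])
	}
	return out
}

// Example: lz77Copy([]byte("ab"), 2, 6) returns []byte("abababab").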

dictWriter struct #

type dictWriter struct {
w io.Writer
}

hcode struct #

hcode is a huffman code with a bit code and bit length.

type hcode struct {
code uint16
len uint16
}

huffmanBitWriter struct #

type huffmanBitWriter struct {
writer io.Writer
bits uint64
nbits uint
bytes [bufferSize]byte
codegenFreq [codegenCodeCount]int32
nbytes int
literalFreq []int32
offsetFreq []int32
codegen []uint8
literalEncoding *huffmanEncoder
offsetEncoding *huffmanEncoder
codegenEncoding *huffmanEncoder
err error
}

huffmanDecoder struct #

type huffmanDecoder struct {
min int
chunks [huffmanNumChunks]uint32
links [][]uint32
linkMask uint32
}

huffmanEncoder struct #

type huffmanEncoder struct {
codes []hcode
freqcache []literalNode
bitCount [17]int32
lns byLiteral
lfs byFreq
}

levelInfo struct #

A levelInfo describes the state of the constructed tree for a given depth.

type levelInfo struct {
level int32
lastFreq int32
nextCharFreq int32
nextPairFreq int32
needed int32
}

literalNode struct #

type literalNode struct {
literal uint16
freq int32
}

tableEntry struct #

type tableEntry struct {
val uint32
offset int32
}

Functions

Close method #

func (f *decompressor) Close() error

Close method #

Close flushes and closes the writer.

func (w *Writer) Close() error

Error method #

func (e *WriteError) Error() string

Error method #

func (e *ReadError) Error() string

Error method #

func (e InternalError) Error() string

Error method #

func (e CorruptInputError) Error() string

Flush method #

Flush flushes any pending data to the underlying writer. It is useful mainly in compressed network protocols, to ensure that a remote reader has enough data to reconstruct a packet. Flush does not return until the data has been written. Calling Flush when there is no pending data still causes the [Writer] to emit a sync marker of at least 4 bytes. If the underlying writer returns an error, Flush returns that error. In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.

func (w *Writer) Flush() error
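
A minimal sketch of framing messages over a long-lived stream with Flush; the bytes.Buffer stands in for a network connection.

package main

import (
	"bytes"
	"compress/flate"
	"log"
)

func main() {
	var conn bytes.Buffer // stands in for a network connection
	zw, err := flate.NewWriter(&conn, flate.BestSpeed)
	if err != nil {
		log.Fatal(err)
	}
	for _, msg := range []string{"hello", "world"} {
		if _, err := zw.Write([]byte(msg)); err != nil {
			log.Fatal(err)
		}
		// Flush emits a sync marker so the peer can fully decode the bytes
		// written so far; the stream stays open for more data.
		if err := zw.Flush(); err != nil {
			log.Fatal(err)
		}
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}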

Len method #

func (s byFreq) Len() int

Len method #

func (s byLiteral) Len() int

Less method #

func (s byLiteral) Less(i int, j int) bool

Less method #

func (s byFreq) Less(i int, j int) bool

NewReader function #

NewReader returns a new ReadCloser that can be used to read the uncompressed version of r. If r does not also implement [io.ByteReader], the decompressor may read more data than necessary from r. The reader returns [io.EOF] after the final block in the DEFLATE stream has been encountered. Any trailing data after the final block is ignored. The [io.ReadCloser] returned by NewReader also implements [Resetter].

func NewReader(r io.Reader) io.ReadCloser
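
A minimal round-trip sketch: a small DEFLATE stream is produced in memory and then read back with NewReader. *bytes.Buffer implements io.ByteReader, so no extra buffering is added.

package main

import (
	"bytes"
	"compress/flate"
	"io"
	"log"
	"os"
)

func main() {
	// Produce a small DEFLATE stream so there is something to read back.
	var compressed bytes.Buffer
	zw, err := flate.NewWriter(&compressed, flate.DefaultCompression)
	if err != nil {
		log.Fatal(err)
	}
	zw.Write([]byte("hello, DEFLATE"))
	zw.Close()

	zr := flate.NewReader(&compressed)
	defer zr.Close()
	if _, err := io.Copy(os.Stdout, zr); err != nil {
		log.Fatal(err)
	}
}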

NewReaderDict function #

NewReaderDict is like [NewReader] but initializes the reader with a preset dictionary. The returned [Reader] behaves as if the uncompressed data stream started with the given dictionary, which has already been read. NewReaderDict is typically used to read data compressed by NewWriterDict. The ReadCloser returned by NewReaderDict also implements [Resetter].

func NewReaderDict(r io.Reader, dict []byte) io.ReadCloser

NewWriter function #

NewWriter returns a new [Writer] compressing data at the given level. Following zlib, levels range from 1 ([BestSpeed]) to 9 ([BestCompression]); higher levels typically run slower but compress more. Level 0 ([NoCompression]) does not attempt any compression; it only adds the necessary DEFLATE framing. Level -1 ([DefaultCompression]) uses the default compression level. Level -2 ([HuffmanOnly]) will use Huffman compression only, giving a very fast compression for all types of input, but sacrificing considerable compression efficiency. If level is in the range [-2, 9] then the error returned will be nil. Otherwise the error returned will be non-nil.

func NewWriter(w io.Writer, level int) (*Writer, error)
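
A minimal sketch of level selection: levels in [-2, 9] succeed, anything else returns a non-nil error.

package main

import (
	"compress/flate"
	"fmt"
	"io"
	"log"
)

func main() {
	// An out-of-range level is rejected.
	if _, err := flate.NewWriter(io.Discard, 42); err != nil {
		fmt.Println("rejected:", err)
	}

	zw, err := flate.NewWriter(io.Discard, flate.BestCompression)
	if err != nil {
		log.Fatal(err)
	}
	zw.Write([]byte("payload"))
	zw.Close() // Close flushes remaining data and ends the stream
}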

NewWriterDict function #

NewWriterDict is like [NewWriter] but initializes the new [Writer] with a preset dictionary. The returned [Writer] behaves as if the dictionary had been written to it without producing any compressed output. The compressed data written to w can only be decompressed by a [Reader] initialized with the same dictionary.

func NewWriterDict(w io.Writer, level int, dict []byte) (*Writer, error)
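
A minimal preset-dictionary round trip: data written through NewWriterDict is read back with NewReaderDict initialized with the same dictionary. The dictionary and payload strings are placeholders.

package main

import (
	"bytes"
	"compress/flate"
	"io"
	"log"
	"os"
)

func main() {
	dict := []byte("common prefix shared between writer and reader")

	var buf bytes.Buffer
	zw, err := flate.NewWriterDict(&buf, flate.DefaultCompression, dict)
	if err != nil {
		log.Fatal(err)
	}
	zw.Write([]byte("common prefix shared between writer and reader, plus new data"))
	zw.Close()

	// The reader must be initialized with the same dictionary.
	zr := flate.NewReaderDict(&buf, dict)
	defer zr.Close()
	io.Copy(os.Stdout, zr)
}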

Read method #

func (f *decompressor) Read(b []byte) (int, error)

Reset method #

func (f *decompressor) Reset(r io.Reader, dict []byte) error

Reset method #

Reset discards the writer's state and makes it equivalent to the result of [NewWriter] or [NewWriterDict] called with dst and w's level and dictionary.

func (w *Writer) Reset(dst io.Writer)
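
A minimal sketch of reusing one *flate.Writer across outputs with Reset, keeping its compression level while discarding all other state.

package main

import (
	"bytes"
	"compress/flate"
	"io"
	"log"
)

func main() {
	zw, err := flate.NewWriter(io.Discard, flate.BestSpeed)
	if err != nil {
		log.Fatal(err)
	}
	for _, payload := range []string{"first stream", "second stream"} {
		var out bytes.Buffer
		zw.Reset(&out) // point the existing Writer at a new destination
		zw.Write([]byte(payload))
		zw.Close()
		// out now holds a complete, independent DEFLATE stream.
	}
}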

Swap method #

func (s byLiteral) Swap(i int, j int)

Swap method #

func (s byFreq) Swap(i int, j int)

Write method #

func (w *dictWriter) Write(b []byte) (n int, err error)

Write method #

Write writes data to w, which will eventually write the compressed form of data to its underlying writer.

func (w *Writer) Write(data []byte) (n int, err error)

assignEncodingAndSize method #

Look at the leaves and assign them a bit count and an encoding as specified in RFC 1951 3.2.2

func (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalNode)

availRead method #

availRead reports the number of bytes that can be flushed by readFlush.

func (dd *dictDecoder) availRead() int

availWrite method #

availWrite reports the available amount of output buffer space.

func (dd *dictDecoder) availWrite() int

bitCounts method #

bitCounts computes the number of literals assigned to each bit size in the Huffman encoding. It is only called when list.length >= 3. The cases of 0, 1, and 2 literals are handled by special case code. list is an array of the literals with non-zero frequencies and their associated frequencies. The array is in order of increasing frequency and has as its last element a special element with frequency MaxInt32. maxBits is the maximum number of bits that should be used to encode any literal. It must be less than 16. bitCounts returns an integer slice in which slice[i] indicates the number of literals that should be encoded in i bits.

func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32

bitLength method #

func (h *huffmanEncoder) bitLength(freq []int32) int

bulkHash4 function #

bulkHash4 will compute hashes using the same algorithm as hash4.

func bulkHash4(b []byte, dst []uint32)

close method #

func (d *compressor) close() error

copyData method #

copyData copies f.copyLen bytes from the underlying reader into f.hist. It pauses for reads when f.hist is full.

func (f *decompressor) copyData()

dataBlock method #

Copy a single uncompressed data block from input to output.

func (f *decompressor) dataBlock()

deflate method #

func (d *compressor) deflate()

dynamicSize method #

dynamicSize returns the size of dynamically encoded data in bits.

func (w *huffmanBitWriter) dynamicSize(litEnc *huffmanEncoder, offEnc *huffmanEncoder, extraBits int) (size int, numCodegens int)

emitLiteral function #

func emitLiteral(dst []token, lit []byte) []token

encSpeed method #

encSpeed will compress and store the currently added data, if enough has been accumulated or we are at the end of the stream. Any error that occurred will be in d.err.

func (d *compressor) encSpeed()

encode method #

encode encodes a block given in src and appends tokens to dst and returns the result.

func (e *deflateFast) encode(dst []token, src []byte) []token

fillDeflate method #

func (d *compressor) fillDeflate(b []byte) int

fillStore method #

func (d *compressor) fillStore(b []byte) int

fillWindow method #

fillWindow will fill the current window with the supplied dictionary and calculate all hashes. This is much faster than doing a full encode. Should only be used after a reset.

func (d *compressor) fillWindow(b []byte)

findMatch method #

Try to find a match starting at index whose length is greater than prevSize. We only look at chainCount possibilities before giving up.

func (d *compressor) findMatch(pos int, prevHead int, prevLength int, lookahead int) (length int, offset int, ok bool)

finishBlock method #

func (f *decompressor) finishBlock()

fixedHuffmanDecoderInit function #

func fixedHuffmanDecoderInit()

fixedSize method #

fixedSize returns the size of the data encoded with the fixed Huffman tables, in bits.

func (w *huffmanBitWriter) fixedSize(extraBits int) int

flush method #

func (w *huffmanBitWriter) flush()

generate method #

Update this Huffman Code object to be the minimum code for the specified frequency count. freq is an array of frequencies, in which freq[i] gives the frequency of literal i. maxBits is the maximum number of bits to use for any literal.

func (h *huffmanEncoder) generate(freq []int32, maxBits int32)

generateCodegen method #

RFC 1951 3.2.7 specifies a special run-length encoding for specifying the literal and offset lengths arrays (which are concatenated into a single array). This method generates that run-length encoding. The result is written into the codegen array, and the frequencies of each code are written into the codegenFreq array. Codes 0-15 are single byte codes. Codes 16-18 are followed by additional information. Code badCode is an end marker.
numLiterals: the number of literals in literalEncoding
numOffsets: the number of offsets in offsetEncoding
litEnc, offEnc: the literal and offset encoders to use

func (w *huffmanBitWriter) generateCodegen(numLiterals int, numOffsets int, litEnc *huffmanEncoder, offEnc *huffmanEncoder)

generateFixedLiteralEncoding function #

Generates a HuffmanCode corresponding to the fixed literal table.

func generateFixedLiteralEncoding() *huffmanEncoder

generateFixedOffsetEncoding function #

func generateFixedOffsetEncoding() *huffmanEncoder

hash function #

func hash(u uint32) uint32

hash4 function #

hash4 returns a hash representation of the first 4 bytes of the supplied slice. The caller must ensure that len(b) >= 4.

func hash4(b []byte) uint32

histSize method #

histSize reports the total amount of historical data in the dictionary.

func (dd *dictDecoder) histSize() int

histogram function #

histogram accumulates a histogram of b in h. len(h) must be >= 256, and h's elements must be all zeroes.

func histogram(b []byte, h []int32)

huffSym method #

Read the next Huffman-encoded symbol from f according to h.

func (f *decompressor) huffSym(h *huffmanDecoder) (int, error)

huffmanBlock method #

Decode a single Huffman block from f. hl and hd are the Huffman states for the lit/length values and the distance values, respectively. If hd == nil, the fixed distance encoding associated with fixed Huffman blocks is used.

func (f *decompressor) huffmanBlock()

indexTokens method #

indexTokens indexes a slice of tokens, and updates literalFreq and offsetFreq, and generates literalEncoding and offsetEncoding. The number of literal and offset tokens is returned.

func (w *huffmanBitWriter) indexTokens(tokens []token) (numLiterals int, numOffsets int)

init method #

func (d *compressor) init(w io.Writer, level int) (err error)

init function #

func init()

init method #

Initialize Huffman decoding tables from array of code lengths. Following this function, h is guaranteed to be initialized into a complete tree (i.e., neither over-subscribed nor under-subscribed). The exception is a degenerate case where the tree has only a single symbol with length 1. Empty trees are permitted.

func (h *huffmanDecoder) init(lengths []int) bool

init method #

init initializes dictDecoder to have a sliding window dictionary of the given size. If a preset dict is provided, it will initialize the dictionary with the contents of dict.

func (dd *dictDecoder) init(size int, dict []byte)

initDeflate method #

func (d *compressor) initDeflate()

length method #

func (t token) length() uint32

lengthCode function #

func lengthCode(len uint32) uint32

literal method #

Returns the literal of a literal token.

func (t token) literal() uint32

literalToken function #

Convert a literal into a literal token.

func literalToken(literal uint32) token

load32 function #

func load32(b []byte, i int32) uint32

load64 function #

func load64(b []byte, i int32) uint64

makeReader method #

func (f *decompressor) makeReader(r io.Reader)

matchLen method #

matchLen returns the match length between src[s:] and src[t:]. t can be negative to indicate the match is starting in e.prev. We assume that src[s-4:s] and src[t-4:t] already match.

func (e *deflateFast) matchLen(s int32, t int32, src []byte) int32

matchLen function #

matchLen returns the number of matching bytes in a and b up to length 'max'. Both slices must be at least 'max' bytes in size.

func matchLen(a []byte, b []byte, max int) int

matchToken function #

Convert a < xlength, xoffset > pair into a match token.

func matchToken(xlength uint32, xoffset uint32) token

maxNode function #

func maxNode() literalNode

moreBits method #

func (f *decompressor) moreBits() error

newDeflateFast function #

func newDeflateFast() *deflateFast

newHuffmanBitWriter function #

func newHuffmanBitWriter(w io.Writer) *huffmanBitWriter

newHuffmanEncoder function #

func newHuffmanEncoder(size int) *huffmanEncoder

nextBlock method #

func (f *decompressor) nextBlock()

noEOF function #

noEOF returns err, unless err == io.EOF, in which case it returns io.ErrUnexpectedEOF.

func noEOF(e error) error

offset method #

Returns the extra offset of a match token.

func (t token) offset() uint32

offsetCode function #

Returns the offset code corresponding to a specific offset.

func offsetCode(off uint32) uint32

readFlush method #

readFlush returns a slice of the historical buffer that is ready to be emitted to the user. The data returned by readFlush must be fully consumed before calling any other dictDecoder methods.

func (dd *dictDecoder) readFlush() []byte

readHuffman method #

func (f *decompressor) readHuffman() error

reset method #

func (d *compressor) reset(w io.Writer)

reset method #

Reset resets the encoding history. This ensures that no matches are made to the previous block.

func (e *deflateFast) reset()

reset method #

func (w *huffmanBitWriter) reset(writer io.Writer)

reverseBits function #

func reverseBits(number uint16, bitLength byte) uint16

set method #

set sets the code and length of an hcode.

func (h *hcode) set(code uint16, length uint16)

shiftOffsets method #

shiftOffsets will shift down all match offsets. This is only called in rare situations to prevent integer overflow. See https://golang.org/issue/18636 and https://github.com/golang/go/issues/34121.

func (e *deflateFast) shiftOffsets()

sort method #

func (s *byLiteral) sort(a []literalNode)

sort method #

func (s *byFreq) sort(a []literalNode)

store method #

func (d *compressor) store()

storeHuff method #

storeHuff compresses and stores the currently added data when the d.window is full or we are at the end of the stream. Any error that occurred will be in d.err

func (d *compressor) storeHuff()

storedSize method #

storedSize calculates the stored size, including header. The function returns the size in bits and whether the block fits inside a single block.

func (w *huffmanBitWriter) storedSize(in []byte) (int, bool)

syncFlush method #

func (d *compressor) syncFlush() error

tryWriteCopy method #

tryWriteCopy tries to copy a string at a given (distance, length) to the output. This specialized version is optimized for short distances. This method is designed to be inlined for performance reasons. This invariant must be kept: 0 < dist <= histSize()

func (dd *dictDecoder) tryWriteCopy(dist int, length int) int

write method #

func (d *compressor) write(b []byte) (n int, err error)

write method #

func (w *huffmanBitWriter) write(b []byte)

writeBits method #

func (w *huffmanBitWriter) writeBits(b int32, nb uint)

writeBlock method #

func (d *compressor) writeBlock(tokens []token, index int) error

writeBlock method #

writeBlock will write a block of tokens with the smallest encoding. The original input can be supplied, and if the huffman encoded data is larger than the original bytes, the data will be written as a stored block. If the input is nil, the tokens will always be Huffman encoded.

func (w *huffmanBitWriter) writeBlock(tokens []token, eof bool, input []byte)

writeBlockDynamic method #

writeBlockDynamic encodes a block using a dynamic Huffman table. This should be used if the symbols used have a disproportionate histogram distribution. If input is supplied and the compression savings are below 1/16th of the input size the block is stored.

func (w *huffmanBitWriter) writeBlockDynamic(tokens []token, eof bool, input []byte)

writeBlockHuff method #

writeBlockHuff encodes a block of bytes as either Huffman encoded literals or uncompressed bytes if the results only gains very little from compression.

func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte)

writeByte method #

writeByte writes a single byte to the dictionary. This invariant must be kept: 0 < availWrite()

func (dd *dictDecoder) writeByte(c byte)

writeBytes method #

func (w *huffmanBitWriter) writeBytes(bytes []byte)

writeCode method #

func (w *huffmanBitWriter) writeCode(c hcode)

writeCopy method #

writeCopy copies a string at a given (dist, length) to the output. This returns the number of bytes copied and may be less than the requested length if the available space in the output buffer is too small. This invariant must be kept: 0 < dist <= histSize()

func (dd *dictDecoder) writeCopy(dist int, length int) int

writeDynamicHeader method #

Write the header of a dynamic Huffman block to the output stream.
numLiterals: the number of literals specified in codegen
numOffsets: the number of offsets specified in codegen
numCodegens: the number of codegens used in codegen

func (w *huffmanBitWriter) writeDynamicHeader(numLiterals int, numOffsets int, numCodegens int, isEof bool)

writeFixedHeader method #

func (w *huffmanBitWriter) writeFixedHeader(isEof bool)

writeMark method #

writeMark advances the writer pointer by cnt. This invariant must be kept: 0 <= cnt <= availWrite()

func (dd *dictDecoder) writeMark(cnt int)

writeSlice method #

writeSlice returns a slice of the available buffer to write data to. This invariant will be kept: len(s) <= availWrite()

func (dd *dictDecoder) writeSlice() []byte

writeStoredBlock method #

func (d *compressor) writeStoredBlock(buf []byte) error

writeStoredHeader method #

func (w *huffmanBitWriter) writeStoredHeader(length int, isEof bool)

writeTokens method #

writeTokens writes a slice of tokens to the output. codes for literal and offset encoding must be supplied.

func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes []hcode, oeCodes []hcode)

Generated with Arrow