org.apache.hadoop.hbase.io.hfile
Class HFileReaderV1
java.lang.Object
org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured
org.apache.hadoop.hbase.io.hfile.AbstractHFileReader
org.apache.hadoop.hbase.io.hfile.HFileReaderV1
- All Implemented Interfaces:
- Closeable, HeapSize, HFile.CachingBlockReader, HFile.Reader, SchemaMetrics.SchemaAware
public class HFileReaderV1
- extends AbstractHFileReader
HFile reader for version 1. Does not support data block encoding, even in-cache
only; i.e., HFile v1 blocks are always brought into cache unencoded.
Fields inherited from class org.apache.hadoop.hbase.io.hfile.AbstractHFileReader
avgKeyLen, avgValueLen, cacheConf, closeIStream, comparator, compressAlgo, dataBlockEncoder, dataBlockIndexReader, fileInfo, fileSize, fsBlockReader, hfs, istream, istreamNoFsChecksum, lastKey, metaBlockIndexReader, name, path, trailer
Constructor Summary
HFileReaderV1(org.apache.hadoop.fs.Path path,
FixedFileTrailer trailer,
org.apache.hadoop.fs.FSDataInputStream fsdis,
long size,
boolean closeIStream,
CacheConfig cacheConf)
Opens an HFile.
Method Summary
protected int
blockContainingKey(byte[] key, int offset, int length)
void
close()
void
close(boolean evictOnClose)
Close method with optional evictOnClose
DataInput
getDeleteBloomFilterMetadata()
Retrieves delete family Bloom filter metadata as appropriate for each HFile version.
DataInput
getGeneralBloomFilterMetadata()
Retrieves general Bloom filter metadata as appropriate for each HFile version.
byte[]
getLastKey()
ByteBuffer
getMetaBlock(String metaBlockName, boolean cacheBlock)
HFileScanner
getScanner(boolean cacheBlocks, boolean pread, boolean isCompaction)
Create a Scanner on this file.
boolean
isFileInfoLoaded()
org.apache.hadoop.hbase.io.hfile.HFile.FileInfo
loadFileInfo()
Read in the index and file info.
byte[]
midkey()
HFileBlock
readBlock(long offset, long onDiskBlockSize, boolean cacheBlock, boolean pread, boolean isCompaction, BlockType expectedBlockType)
Methods inherited from class org.apache.hadoop.hbase.io.hfile.AbstractHFileReader
getComparator, getCompressionAlgorithm, getDataBlockIndexReader, getEncodingOnDisk, getEntries, getFirstKey, getFirstRowKey, getLastRowKey, getName, getPath, getScanner, getTrailer, indexSize, length, toString, toStringFirstKey, toStringLastKey
HFileReaderV1
public HFileReaderV1(org.apache.hadoop.fs.Path path,
FixedFileTrailer trailer,
org.apache.hadoop.fs.FSDataInputStream fsdis,
long size,
boolean closeIStream,
CacheConfig cacheConf)
throws IOException
- Opens an HFile. You must load the index before you can use the reader, by
calling
loadFileInfo()
.
- Parameters:
fsdis
- input stream. Caller is responsible for closing the passed stream.
size
- Length of the stream.
cacheConf
- cache references and configuration
- Throws:
IOException
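The open-then-load sequence described above can be sketched as follows. This is a minimal, hedged example, assuming an HBase 0.94-era classpath; the variables `fs`, `path`, and `conf` are hypothetical and assumed to be in scope, and the trailer is read from the tail of the file before the reader is constructed:

```java
// Hedged sketch: 'fs' (FileSystem), 'path' (Path), and 'conf'
// (Configuration) are assumed to already exist.
FSDataInputStream fsdis = fs.open(path);
long size = fs.getFileStatus(path).getLen();
FixedFileTrailer trailer = FixedFileTrailer.readFromStream(fsdis, size);
HFile.Reader reader = new HFileReaderV1(
    path, trailer, fsdis, size,
    true /* closeIStream */, new CacheConfig(conf));
// The reader is not usable until the index and file info are loaded.
reader.loadFileInfo();
```

Passing `closeIStream = true` lets the reader close `fsdis` itself; with `false`, the caller keeps that responsibility, as the constructor documentation notes.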
loadFileInfo
public org.apache.hadoop.hbase.io.hfile.HFile.FileInfo loadFileInfo()
throws IOException
- Read in the index and file info.
- Specified by:
loadFileInfo
in interface HFile.Reader
- Overrides:
loadFileInfo
in class AbstractHFileReader
- Returns:
- A map of fileinfo data.
- Throws:
IOException
- See Also:
HFile.Writer.appendFileInfo(byte[], byte[])
getScanner
public HFileScanner getScanner(boolean cacheBlocks,
boolean pread,
boolean isCompaction)
- Create a Scanner on this file. No seeks or reads are done on creation. Call
HFileScanner.seekTo(byte[])
to position and start the read. There is
nothing to clean up in a Scanner: letting go of your references to the
scanner is sufficient.
- Parameters:
cacheBlocks
- True if we should cache blocks read in by this scanner.
pread
- Use positional read rather than seek+read if true (pread is
better for random reads; seek+read is better for scanning).
isCompaction
- is this scanner being used for a compaction?
- Returns:
- Scanner on this file.
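The seek-then-iterate pattern described above can be sketched as below. A hedged example, assuming `reader` is an already-opened HFile.Reader with loadFileInfo() called, and `startKey` is a hypothetical byte[] key:

```java
// Hedged sketch: obtain a scanner for a normal (non-compaction) read.
HFileScanner scanner = reader.getScanner(
    true /* cacheBlocks */, false /* pread */, false /* isCompaction */);
// seekTo returns 0 on an exact match, -1 if the key is before the
// first key in the file, and 1 if positioned at the greatest key
// less than or equal to the given key.
if (scanner.seekTo(startKey) != -1) {
  do {
    KeyValue kv = scanner.getKeyValue();
    // ... process kv ...
  } while (scanner.next());   // next() returns false at end of file
}
// No explicit cleanup: dropping the scanner reference is sufficient.
```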
blockContainingKey
protected int blockContainingKey(byte[] key,
int offset,
int length)
- Parameters:
key
- Key to search.
- Returns:
- Block number of the block containing the key or -1 if not in this
file.
getMetaBlock
public ByteBuffer getMetaBlock(String metaBlockName,
boolean cacheBlock)
throws IOException
- Parameters:
metaBlockName
- cacheBlock
- Add block to cache, if found
- Returns:
- Block wrapped in a ByteBuffer
- Throws:
IOException
getLastKey
public byte[] getLastKey()
- Returns:
- Last key in the file. May be null if file has no entries.
Note that this is not the last rowkey, but rather the byte form of
the last KeyValue.
midkey
public byte[] midkey()
throws IOException
- Returns:
- Midkey for this file. We work with block boundaries only, so the
returned midkey is only an approximation.
- Throws:
IOException
close
public void close()
throws IOException
- Throws:
IOException
close
public void close(boolean evictOnClose)
throws IOException
- Description copied from interface:
HFile.Reader
- Close method with optional evictOnClose
- Throws:
IOException
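A brief, hedged sketch of the two close variants, assuming `reader` is an open HFile.Reader: the boolean form additionally controls whether this file's cached blocks are evicted, which is useful, for example, when the file has been compacted away and its blocks will not be read again.

```java
// Plain close: release the reader (and its stream, if it owns it).
reader.close();

// Close and evict this file's blocks from the block cache, e.g. after
// the file has been replaced by a compaction.
reader.close(true /* evictOnClose */);
```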
readBlock
public HFileBlock readBlock(long offset,
long onDiskBlockSize,
boolean cacheBlock,
boolean pread,
boolean isCompaction,
BlockType expectedBlockType)
getGeneralBloomFilterMetadata
public DataInput getGeneralBloomFilterMetadata()
throws IOException
- Description copied from interface:
HFile.Reader
- Retrieves general Bloom filter metadata as appropriate for each
HFile
version.
Knows nothing about how that metadata is structured.
- Throws:
IOException
getDeleteBloomFilterMetadata
public DataInput getDeleteBloomFilterMetadata()
throws IOException
- Description copied from interface:
HFile.Reader
- Retrieves delete family Bloom filter metadata as appropriate for each
HFile
version.
Knows nothing about how that metadata is structured.
- Throws:
IOException
isFileInfoLoaded
public boolean isFileInfoLoaded()
- Specified by:
isFileInfoLoaded
in class AbstractHFileReader
Copyright © 2013 The Apache Software Foundation. All Rights Reserved.