Deprecated Methods

org.apache.hadoop.hbase.HServerLoad.addRegionInfo(byte[], int, int, int, int)
  Use HServerLoad.addRegionInfo(RegionLoad) instead.
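
  A minimal sketch of the replacement call; how the HServerLoad.RegionLoad
  instance is built is outside the scope of this entry and is assumed to
  happen elsewhere:

    import org.apache.hadoop.hbase.HServerLoad;

    public class RegionLoadReport {
      // Pass a whole RegionLoad object instead of the individual per-region
      // counters taken by the deprecated overload.
      static void report(HServerLoad serverLoad, HServerLoad.RegionLoad regionLoad) {
        serverLoad.addRegionInfo(regionLoad);
      }
    }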

org.apache.hadoop.hbase.io.SequenceFile.getCompressionType(Configuration)
  Use SequenceFileOutputFormat.getOutputCompressionType(org.apache.hadoop.mapred.JobConf)
  to get the SequenceFile.CompressionType for job outputs.
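
  A minimal sketch of the replacement, using the stock Hadoop classes named
  in the note (old org.apache.hadoop.mapred API):

    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileOutputFormat;

    public class CompressionTypeLookup {
      // Read the compression type configured for a job's SequenceFile output.
      static SequenceFile.CompressionType outputType(JobConf job) {
        return SequenceFileOutputFormat.getOutputCompressionType(job);
      }
    }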

org.apache.hadoop.hbase.regionserver.HStoreFile.getFilterDir(Path, int, byte[])
  (No deprecation note given.)

org.apache.hadoop.hbase.io.SequenceFile.Reader.next(DataOutputBuffer)
  Call SequenceFile.Reader.nextRaw(DataOutputBuffer, SequenceFile.ValueBytes) instead.
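
  A minimal sketch of a raw-record scan over an already opened reader; the
  createValueBytes() helper and the negative return value at end of file are
  assumptions based on the stock Hadoop SequenceFile.Reader:

    import java.io.IOException;
    import org.apache.hadoop.io.DataOutputBuffer;
    import org.apache.hadoop.io.SequenceFile;

    public class RawRecordScan {
      // Count records by iterating with nextRaw(DataOutputBuffer, ValueBytes)
      // instead of the deprecated next(DataOutputBuffer).
      static int countRecords(SequenceFile.Reader reader) throws IOException {
        DataOutputBuffer rawKey = new DataOutputBuffer();
        SequenceFile.ValueBytes rawValue = reader.createValueBytes();
        int count = 0;
        // nextRaw returns the record length, or a negative value at end of file.
        while (reader.nextRaw(rawKey, rawValue) >= 0) {
          count++;
          rawKey.reset();  // clear the reused key buffer before the next record
        }
        return count;
      }
    }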

org.apache.hadoop.hbase.filter.RegExpRowFilter.setColumnFilter(byte[], byte[])
  Column filtering has been replaced by ColumnValueFilter.
  (This method specified a value that had to be matched for the given column.)
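
  An illustrative sketch of the suggested replacement; the
  ColumnValueFilter(byte[], CompareOp, byte[]) constructor and
  CompareOp.EQUAL are assumptions about the contemporaneous
  org.apache.hadoop.hbase.filter package, and the column and value are
  made-up examples:

    import org.apache.hadoop.hbase.filter.ColumnValueFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ColumnMatchExample {
      // Match rows whose "info:status" column equals "active", replacing
      // RegExpRowFilter.setColumnFilter(column, value).
      static ColumnValueFilter statusEqualsActive() {
        return new ColumnValueFilter(
            Bytes.toBytes("info:status"),          // column to check
            ColumnValueFilter.CompareOp.EQUAL,     // comparison operator
            Bytes.toBytes("active"));              // value that must match
      }
    }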

org.apache.hadoop.hbase.filter.RegExpRowFilter.setColumnFilters(Map)
  Column filtering has been replaced by ColumnValueFilter.
  (This method set column filters for a number of columns.)
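
  A sketch of the multi-column case, again assuming the ColumnValueFilter
  constructor above and that RowFilterSet with Operator.MUST_PASS_ALL is
  available in the same filter package to combine several filters:

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import org.apache.hadoop.hbase.filter.ColumnValueFilter;
    import org.apache.hadoop.hbase.filter.RowFilterInterface;
    import org.apache.hadoop.hbase.filter.RowFilterSet;

    public class MultiColumnMatch {
      // Build one ColumnValueFilter per (column, value) entry and require
      // all of them to pass, replacing setColumnFilters(Map).
      static RowFilterInterface allColumnsMatch(Map<byte[], byte[]> columnValues) {
        Set<RowFilterInterface> filters = new HashSet<RowFilterInterface>();
        for (Map.Entry<byte[], byte[]> e : columnValues.entrySet()) {
          filters.add(new ColumnValueFilter(
              e.getKey(), ColumnValueFilter.CompareOp.EQUAL, e.getValue()));
        }
        return new RowFilterSet(RowFilterSet.Operator.MUST_PASS_ALL, filters);
      }
    }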

org.apache.hadoop.hbase.io.SequenceFile.setCompressionType(Configuration, SequenceFile.CompressionType)
  Use one of the many SequenceFile.createWriter methods to specify the
  SequenceFile.CompressionType while creating the SequenceFile, or
  SequenceFileOutputFormat.setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType)
  to specify the SequenceFile.CompressionType for job outputs.
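
  A minimal sketch of both alternatives, using the stock Hadoop classes named
  in the note; the Text/BytesWritable key and value classes and the BLOCK
  compression type are illustrative choices only:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileOutputFormat;

    public class CompressionTypeSetup {
      // Pick the compression type when the writer is created...
      static SequenceFile.Writer blockCompressedWriter(FileSystem fs, Configuration conf,
          Path path) throws IOException {
        return SequenceFile.createWriter(fs, conf, path,
            Text.class, BytesWritable.class, SequenceFile.CompressionType.BLOCK);
      }

      // ...or set it on the JobConf for job outputs.
      static void configureJobOutput(JobConf job) {
        SequenceFileOutputFormat.setOutputCompressionType(job,
            SequenceFile.CompressionType.BLOCK);
      }
    }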