Add a Put operation to the list of mutations.
Add a Delete operation to the list of mutations.
Append a field to the sequence of accumulated fields.
Put.add(byte[], byte[], byte[]).
Put.add(byte[], byte[], long, byte[]).
Put.add(byte[], ByteBuffer, long, ByteBuffer).
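A minimal sketch of how these add methods combine, assuming an open HTable named table and an existing column family cf; the row, qualifier, and value names are illustrative, not taken from the index itself.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RowMutations;
import org.apache.hadoop.hbase.util.Bytes;

public class RowMutationsExample {
  // Apply a Put and a Delete to the same row atomically.
  static void updateRow(HTable table) throws IOException {
    byte[] row = Bytes.toBytes("row1");
    RowMutations mutations = new RowMutations(row);

    Put put = new Put(row);
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q1"), Bytes.toBytes("v1")); // family, qualifier, value
    mutations.add(put); // add the Put operation to the list of mutations

    Delete delete = new Delete(row);
    delete.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("q2"));
    mutations.add(delete); // add the Delete operation to the list of mutations

    table.mutateRow(mutations); // applies both atomically on the server
  }
}
```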
A Base64.Base64InputStream will read data from another InputStream, given in the constructor, and encode/decode to/from Base64 notation on the fly.
Constructs a Base64.Base64InputStream in DECODE mode.
Constructs a Base64.Base64InputStream in either ENCODE or DECODE mode.
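A small usage sketch of the stream described above; the Base64 literal and variable names are illustrative assumptions.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.hbase.util.Base64;

public class Base64StreamExample {
  public static void main(String[] args) throws IOException {
    // "aGJhc2U=" is Base64 for "hbase"; decode it on the fly while reading.
    InputStream raw = new ByteArrayInputStream("aGJhc2U=".getBytes("US-ASCII"));
    InputStream decoded = new Base64.Base64InputStream(raw); // DECODE is the default mode
    int b;
    while ((b = decoded.read()) != -1) {
      System.out.print((char) b); // prints: hbase
    }
    decoded.close();
  }
}
```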
Deprecated; use HTable.batch(List, Object[]) instead.
Deprecated; use HTableInterface.batch(List, Object[]) instead.
Same as HTableInterface.batch(List, Object[]), but with a callback; deprecated, use HTable.batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead.
Same as HTableInterface.batch(List, Object[]), but with a callback; deprecated, use HTableInterface.batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead.
Creates an instance of the given Service subclass for each table region spanning the range from the startKey row to the endKey row (inclusive); all invocations to the same region server are batched into one call.
Bytes.compareTo(byte[], byte[]).
A Bytes.ByteArrayComparator that treats the empty array as the largest value.
Deprecated; use ClientScanner.ClientScanner(Configuration, Scan, TableName) instead. Note that the passed Scan's start row may be changed.
Deprecated; use ClientScanner.ClientScanner(Configuration, Scan, TableName, HConnection) instead. Note that the passed Scan's start row may be changed.
Deprecated; use ClientScanner.ClientScanner(Configuration, Scan, TableName, HConnection, RpcRetryingCallerFactory, RpcControllerFactory) instead. Note that the passed Scan's start row may be changed.
Sets Import.CF_RENAME_PROP in conf that tells the mapper how to rename column families.
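For illustration, a hedged sketch of driving that property through Import.configureCfRenaming; the family names are made up.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.Import;

public class CfRenameExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    Map<String, String> renames = new HashMap<String, String>();
    renames.put("old_cf", "new_cf"); // rename family old_cf to new_cf on import
    Import.configureCfRenaming(conf, renames); // stores the mapping under Import.CF_RENAME_PROP
  }
}
```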
Creates and returns a RpcChannel instance connected to the active master.
Creates and returns a RpcChannel instance connected to the passed region server.
Creates and returns a RpcChannel instance connected to the table region containing the specified row.
Creates an instance of the given Service subclass for each table region spanning the range from the startKey row to the endKey row (inclusive), and invokes the passed Batch.Call.call(T) method with each Service instance.
Creates and returns a RpcChannel instance connected to the table region containing the specified row.
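As a concrete sketch of the coprocessorService pattern just described, assuming the RowCountService example endpoint from hbase-examples is deployed on the target table; the request/response message names follow that example and are otherwise assumptions.

```java
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.coprocessor.example.generated.ExampleProtos;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;

public class RowCountClient {
  // Sum row counts over every region between startKey and endKey.
  static long countRows(HTable table, byte[] startKey, byte[] endKey) throws Throwable {
    final ExampleProtos.CountRequest request = ExampleProtos.CountRequest.getDefaultInstance();
    Map<byte[], Long> results = table.coprocessorService(
        ExampleProtos.RowCountService.class, startKey, endKey,
        new Batch.Call<ExampleProtos.RowCountService, Long>() {
          public Long call(ExampleProtos.RowCountService service) throws IOException {
            BlockingRpcCallback<ExampleProtos.CountResponse> callback =
                new BlockingRpcCallback<ExampleProtos.CountResponse>();
            service.getRowCount(null, request, callback); // one RPC per region
            return callback.get().getCount();
          }
        });
    long total = 0;
    for (Long regionCount : results.values()) {
      total += regionCount;
    }
    return total;
  }
}
```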
Wraps an underlying UserGroupInformation instance.
Deprecated; use CopyTable.createSubmittableJob(String[]) instead.
Generates a new User instance specifically for use in test code.
DataType is the base class for all HBase data types.
Read an instance of T from the buffer src.
Read a byte[] from the buffer src.
Retrieve the byte at index.
Read a byte value from the buffer src.
Read a byte value from the buffer buff.
Read a double value from the buffer src.
Read a double value from the buffer buff.
Read a float value from the buffer dst.
Read a float value from the buffer buff.
Read an int value from the buffer src.
Read an int value from the buffer buff.
Decode an int16 value.
Decode an int32 value.
Decode an int64 value.
Decode an int8 value.
Read a long value from the buffer src.
Read a long value from the buffer buff.
Decode a BigDecimal value from the variable-length encoding.
Decode a double value from the Numeric encoding.
Decode a long value from the Numeric encoding.
Read a short value from the buffer src.
Read a short value from the buffer buff.
Create a new ByteRange with a new backing byte[] containing a copy of the content from this range's window.
Default value of HConstants.HBASE_CLIENT_MAX_PERREGION_TASKS.
Default value of HConstants.HBASE_CLIENT_MAX_PERSERVER_TASKS.
Default value of HConstants.HBASE_CLIENT_MAX_TOTAL_TASKS.
Default value of HConstants.HBASE_CLIENT_PAUSE.
Default value of HConstants.HBASE_CLIENT_PREFETCH.
Default value of HConstants.HBASE_CLIENT_PREFETCH_LIMIT.
Default value of HConstants.HBASE_CLIENT_RETRIES_NUMBER.
Default value of HConstants.HBASE_CLIENT_SCANNER_CACHING.
Default value of HConstants.HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD.
Default value of HConstants.HBASE_META_BLOCK_SIZE.
Default value of HConstants.HBASE_META_SCANNER_CACHING.
Default value of HConstants.HBASE_META_VERSIONS.
Default value of HConstants.HBASE_RPC_SHORTOPERATION_TIMEOUT_KEY.
Default value of HConstants.HBASE_RPC_TIMEOUT_KEY.
Default value of HConstants.HBASE_SERVER_PAUSE.
Deprecated; use HTableDescriptor.DURABILITY instead.
Note that the items must be sorted in order of increasing durability.
Returns the Durability setting for the table.
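A brief hedged sketch of setting durability on a single mutation rather than table-wide; the row, family, and qualifier names are placeholders.

```java
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DurabilityExample {
  static Put buildPut() {
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    put.setDurability(Durability.SKIP_WAL); // trade crash safety for write throughput
    return put;
  }
}
```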
Write instance val into buffer dst.
Write val to dst.
Write val to buff.
Write val into dst, respecting voff and vlen.
Write val into buff, respecting offset and length.
Write val into dst, respecting offset and length.
Write instance val into buffer buff.
Inform consumers over what type this DataType operates.
Inform consumers how long the encoded byte[] will be.
Encode an int16 value using the fixed-length encoding.
Encode an int32 value using the fixed-length encoding.
Encode an int64 value using the fixed-length encoding.
Encode an int8 value using the fixed-length encoding.
Invokes the given HConnectable.connect(org.apache.hadoop.hbase.client.HConnection) implementation using a HConnection instance that lasts just for the duration of the invocation.
Implementation of Filter that represents an ordered list of Filters which will be evaluated with a specified boolean operator FilterList.Operator.MUST_PASS_ALL (AND) or FilterList.Operator.MUST_PASS_ONE (OR).
Constructor that takes a set of Filters.
Constructor that takes a set of Filters and an operator.
Filter.filterKeyValue(Cell) calls.
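A short sketch of composing such a FilterList with MUST_PASS_ALL; the prefix and qualifier bytes are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterListExample {
  static Scan buildScan() {
    List<Filter> filters = new ArrayList<Filter>();
    filters.add(new PrefixFilter(Bytes.toBytes("user"))); // rows starting with "user"
    filters.add(new QualifierFilter(CompareFilter.CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes("name")))); // only the "name" qualifier
    // MUST_PASS_ALL = logical AND; MUST_PASS_ONE would be OR.
    FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);
    Scan scan = new Scan();
    scan.setFilter(list);
    return scan;
  }
}
```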
Wraps an existing DataType implementation as a fixed-length version of itself.
If the durability is set to Durability.SKIP_WAL and the data is imported to HBase, we need to flush all the regions of the table, as the data is held in memory and is not present in the Write Ahead Log to replay in case of a crash.
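A hedged sketch of forcing that flush from client code via HBaseAdmin; the table name is a placeholder and error handling is elided.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class FlushAfterImport {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      admin.flush("mytable"); // push memstore contents out to HFiles
    } finally {
      admin.close();
    }
  }
}
```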
Retrieve the byte at index.
Fill dst with bytes from the range, starting from index.
Fill dst with bytes from the range, starting from position.
Fill dst with bytes from the range, starting from the current position.
Returns a new byte array, copied from the given buf, from the position (inclusive) to the limit (exclusive).
Deprecated; use Result.getColumnCells(byte[], byte[]) instead.
Returns an array of all the HColumnDescriptor of the column families of the table.
Deprecated; use Result.getColumnLatestCell(byte[], byte[]) instead.
Deprecated; use Result.getColumnLatestCell(byte[], int, int, byte[], int, int) instead.
Returns the Configuration object used by this instance.
Getter for fetching an unmodifiable HColumnDescriptor.configuration map.
Getter for fetching an unmodifiable HTableDescriptor.configuration map.
Getter for fetching an unmodifiable NamespaceDescriptor.configuration map.
Returns the User instance within current execution context.
Returns an unmodifiable collection of all the HColumnDescriptor of all the column families of the table.
Deprecated; use CellUtil.cloneFamily(Cell) instead.
Deprecated; use Mutation.getFamilyCellMap() instead.
Returns the HRegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result.
Get the value of the name property as an int, possibly referring to the deprecated name of the configuration property.
Returns a MasterKeepAliveConnection to the active master.
Retrieve the sort Order imposed by this data type, or null when natural ordering is not preserved.
The current position marker.
Deprecated; use CellUtil.cloneQualifier(Cell) instead.
Deprecated; use CellUtil.getRowByte(Cell, int) instead.
Returns the Scan object.
Deprecated; use Scan.setCaching(int) and Scan.getCaching() instead.
Allows subclasses to get the list of Scan objects.
Deprecated; use ClusterStatus.getServers() instead.
Returns a ServerName from catalog table Result.
Deprecated; use ImmutableBytesWritable.getLength() instead.
Gets the table descriptor for this table.
Deprecated; use ClientScanner.getTable() instead.
Deprecated; use Cell.getTagsLengthUnsigned() instead, which can handle tags length up to 65535.
Deprecated; use CellUtil.cloneValue(Cell) instead.
Returns the value wrapped in a new ByteBuffer.
Getter for fetching an unmodifiable HTableDescriptor.values map.
Deprecated; use Mutation.getDurability() instead.
Deprecated; use HColumnDescriptor.HColumnDescriptor(String) and setters instead.
A non-instantiable class that manages creation of HConnections.
Deprecated; use HFileOutputFormat2 instead.
Writes HFiles.
Simple InputFormat for HLog files.
Contains details about an HBase table such as whether it is a catalog table, -ROOT- or hbase:meta, if the table is read only, the maximum size of the memstore, when the region split should occur, and the coprocessors associated with it.
Checks whether the table is hbase:meta or -ROOT-.
Deprecated; use HConnectionManager.createConnection(Configuration) instead.
Convenience class that simply writes all values (which must be Put or Delete instances) passed to it out to the configured HBase table.
Import data written by Export.
Deprecated; use HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability) instead.
Deprecated; use HTable.incrementColumnValue(byte[], byte[], byte[], long, Durability) instead.
Returns the index of the first appearance of the value target in array.
Returns the start position of the first occurrence of the specified target within array, or -1 if there is no such occurrence.
A Filter to apply to all incoming keys (KeyValues) to optionally not include in the job output.
Return true when src uses BlobCopy encoding, false otherwise.
Return true when src uses BlobVar encoding, false otherwise.
Return true when src appears to be positioned at an encoded value, false otherwise.
Return true when src uses fixed-width Float32 encoding, false otherwise.
Return true when src uses fixed-width Float64 encoding, false otherwise.
Return true when src uses fixed-width Int32 encoding, false otherwise.
Return true when src uses fixed-width Int64 encoding, false otherwise.
Valid names can only contain 'word' characters [a-zA-Z_0-9] or '_', '.' or '-'.
Returns true if this is the hbase:meta region.
Returns true if this is the hbase:meta table.
Return true when src is null, false otherwise.
Return true when src uses Numeric encoding, false otherwise.
Return true when src uses Numeric encoding and is Infinite, false otherwise.
Return true when src uses Numeric encoding and is NaN, false otherwise.
Return true when src uses Numeric encoding and is 0, false otherwise.
Encodes values to byte[]s which preserve the natural sort order of the unencoded value.
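A small sketch of that order-preserving property using OrderedBytes with a PositionedByteRange; the buffer size and values are arbitrary.

```java
import org.apache.hadoop.hbase.util.Order;
import org.apache.hadoop.hbase.util.OrderedBytes;
import org.apache.hadoop.hbase.util.PositionedByteRange;
import org.apache.hadoop.hbase.util.SimplePositionedByteRange;

public class OrderedBytesExample {
  public static void main(String[] args) {
    PositionedByteRange buf = new SimplePositionedByteRange(100);
    OrderedBytes.encodeInt64(buf, 42L, Order.ASCENDING); // numeric, sort-order preserving
    OrderedBytes.encodeString(buf, "hello", Order.ASCENDING);
    buf.setPosition(0); // rewind and decode in the same order
    long n = OrderedBytes.decodeInt64(buf);
    String s = OrderedBytes.decodeString(buf);
    System.out.println(n + " " + s); // 42 hello
  }
}
```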
Returns true if this is the -ROOT- region.
Deprecated; use HBaseAdmin.isTableEnabled(byte[]) instead.
Deprecated; use HBaseAdmin.isTableEnabled(org.apache.hadoop.hbase.TableName tableName) instead.
Return true when src uses Text encoding, false otherwise.
Retrieve an Iterator over the values encoded in src.
Deprecated; use Result.listCells() instead.
Deprecated; use ReplicationAdmin.listPeerConfigs() instead.
The number of Results in the cache.
Copy r to dest.
A base for MultiTableInputFormats.
Generalizes the TableSnapshotInputFormat, allowing a MapReduce job to run over one or more table snapshots, with one or more scans configured for each.
Bytes.compareTo(byte[], byte[]).
Deprecated; use TokenUtil.obtainAuthTokenForJob(HConnection, User, Job) instead.
Deprecated; use TokenUtil.obtainAuthTokenForJob(HConnection, JobConf, User) instead.
A byte[] of variable-length.
An alternative to OrderedBlob for use by Struct fields that do not terminate the fields list.
Base class for data types backed by the OrderedBytes encoding implementations.
A float of 32-bits using a fixed-length encoding.
A double of 64-bits using a fixed-length encoding.
A short of 16-bits using a fixed-length encoding.
An int of 32-bits using a fixed-length encoding.
A long of 64-bits using a fixed-length encoding.
A byte of 8-bits using a fixed-length encoding.
A Number of arbitrary precision and variable-length encoding.
A String of variable-length.
ResultScanner.next().
Instantiate a ServerName from bytes gotten from a call to ServerName.getVersionedBytes().
Extends ByteRange with additional methods to support tracking a consumer's position within the viewport.
Deprecated; use HTableInterface.batch(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[]) instead.
Parameterized batch processing, allowing varying return types for different Row implementations.
Deprecated; use HTableInterface.batchCallback(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead.
Store val at index.
Store length bytes from val into this range, starting at index.
Store val at the next position in this range.
Store the content of val in this range, starting at the next position.
Store length bytes from val into this range.
Deprecated; use Result.rawCells() instead.
A DataType for interacting with values encoded using Bytes.putByte(byte[], int, byte).
A DataType for interacting with variable-length values encoded using Bytes.putBytes(byte[], int, byte[], int, int).
A DataType that encodes fixed-length values encoded using Bytes.putBytes(byte[], int, byte[], int, int).
Create a RawBytesFixedLength using the specified order and length.
Create a RawBytesFixedLength of the specified length.
A DataType that encodes variable-length values encoded using Bytes.putBytes(byte[], int, byte[], int, int).
Create a RawBytesTerminated using the specified terminator and order.
Create a RawBytesTerminated using the specified terminator.
A DataType for interacting with values encoded using Bytes.putDouble(byte[], int, double).
A DataType for interacting with values encoded using Bytes.putFloat(byte[], int, float).
A DataType for interacting with values encoded using Bytes.putInt(byte[], int, int).
A DataType for interacting with values encoded using Bytes.putLong(byte[], int, long).
A DataType for interacting with values encoded using Bytes.putShort(byte[], int, short).
A DataType for interacting with values encoded using Bytes.toBytes(String).
A DataType that encodes fixed-length values encoded using Bytes.toBytes(String).
Create a RawStringFixedLength using the specified order and length.
Create a RawStringFixedLength of the specified length.
A DataType that encodes variable-length values encoded using Bytes.toBytes(String).
Create a RawStringTerminated using the specified terminator and order.
Create a RawStringTerminated using the specified terminator.
Calls Base64.Base64InputStream.read() repeatedly until the end of stream is reached or len bytes are read.
Deprecated; use HColumnDescriptor.parseFrom(byte[]) instead.
Deprecated; use HTableDescriptor.parseFrom(byte[]) instead.
Similar to WritableUtils.readVLong(DataInput) but reads from a ByteBuffer.
Deprecated; use readAsVLong() instead.
Used with CompareFilter implementations, such as RowFilter, QualifierFilter, and ValueFilter, for filtering based on the value of a given column.
A RemoteException with some extra information.
Remove metadata represented by the key from the HTableDescriptor.values map.
Remove a configuration setting represented by the key from the HColumnDescriptor.configuration map.
Remove a configuration setting represented by the key from the HTableDescriptor.configuration map.
Remove a configuration setting represented by the key from the NamespaceDescriptor.configuration map.
Single row result of a Get or Scan query; the underlying cells are exposed via Result.rawCells().
Deprecated; use Result.create(List) instead.
This subclass of RetriesExhaustedException is thrown when we have more information about which rows were causing which exceptions on what servers.
Note that the passed Scan's start row may be changed.
Create a new ByteRange over a new byte[].
Deprecated; use HTableInterface.setAutoFlushTo(boolean) for all other cases.
The Charset to use to convert the row key to a String.
Setter for storing a configuration setting in HColumnDescriptor.configuration map.
Setter for storing a configuration setting in HTableDescriptor.configuration map.
Setter for storing a configuration setting in NamespaceDescriptor.configuration map.
Sets the Durability setting for the table.
Deprecated; use Mutation.setFamilyCellMap(NavigableMap) instead.
Deprecated; use HColumnDescriptor.setKeepDeletedCells(KeepDeletedCells) instead.
Stores key/value pairs in conf, delimited by ConfigurationUtil.KVP_DELIMITER.
Marks whether this is a -ROOT- or hbase:meta region.
Deprecated; use ReplicationAdmin.setPeerTableCFs(String, Map) instead.
Update the position index.
Set the Filter to be used.
Deprecated; use Scan.setCaching(int) instead.
Allows subclasses to set the list of Scan objects.
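A one-liner sketch of the non-deprecated form; the caching value is an arbitrary assumption balancing memory against round trips.

```java
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachingExample {
  static Scan buildScan() {
    Scan scan = new Scan();
    scan.setCaching(500); // rows fetched per RPC, instead of the deprecated HTable.setScannerCaching
    return scan;
  }
}
```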
Allows subclasses to set the TableRecordReader.
Setter for storing metadata as a (key, value) pair in the HTableDescriptor.values map.
Deprecated; use Mutation.setDurability(Durability) instead.
Create a new ByteRange that points at this range's byte[].
A basic ByteRange implementation.
Create a new ByteRange lacking a backing array and with an undefined viewport.
Create a new ByteRange over a new backing array of size capacity.
Create a new ByteRange over the provided bytes.
A SimpleByteRange implementation with position support.
Create a new PositionedByteRange lacking a backing array and with an undefined viewport.
Create a new PositionedByteRange over a new backing array of size capacity.
Create a new PositionedByteRange over the provided bytes.
A Filter that checks a single column value, but does not emit the tested column.
Skip src's position forward over one encoded value.
Skip buff's position forward over one encoded value.
Struct is a simple DataType for implementing "compound rowkey" and "compound qualifier" schema design strategies.
Create a new Struct instance defined as the sequence of HDataTypes in memberTypes.
A helper for building Struct instances.
An Iterator over encoded Struct members.
Create a new StructIterator over the values encoded in src using the specified types definition.
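A hedged sketch of the compound-rowkey pattern those entries describe, with an assumed (String, long) key shape.

```java
import org.apache.hadoop.hbase.types.OrderedInt64;
import org.apache.hadoop.hbase.types.OrderedString;
import org.apache.hadoop.hbase.types.Struct;
import org.apache.hadoop.hbase.types.StructBuilder;
import org.apache.hadoop.hbase.util.PositionedByteRange;
import org.apache.hadoop.hbase.util.SimplePositionedByteRange;

public class CompoundRowkeyExample {
  public static void main(String[] args) {
    // Compound rowkey of (user, timestamp), built field by field.
    Struct rowkey = new StructBuilder()
        .add(OrderedString.ASCENDING)
        .add(OrderedInt64.ASCENDING)
        .toStruct();
    PositionedByteRange buf = new SimplePositionedByteRange(64);
    rowkey.encode(buf, new Object[] { "user42", 1398046341L });
    buf.setPosition(0);
    Object[] decoded = rowkey.decode(buf); // { "user42", 1398046341L }
    System.out.println(decoded[0] + " / " + decoded[1]);
  }
}
```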
A base for TableInputFormats.
Extends the base Mapper class to add the required input key and value classes.
Utility for TableMapper and TableReducer.
Extends the basic Reducer class to add the required key and value input/output classes.
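An illustrative TableMapper subclass along the lines those entries describe; the output types and counting semantics are assumptions.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Input key/value are fixed by TableMapper: the row key and the row's Result.
public class RowCountMapper extends TableMapper<Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);

  @Override
  protected void map(ImmutableBytesWritable rowKey, Result columns, Context context)
      throws IOException, InterruptedException {
    context.write(new Text(Bytes.toStringBinary(rowKey.get())), ONE);
  }
}
```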
Deprecated; use TableSplit.TableSplit(TableName, byte[], byte[], String) instead.
Wraps an existing DataType implementation as a terminated version of itself.
Return the position at which term begins within src, or -1 if term is not found.
Deprecated; use Bytes.toBytes(boolean) instead.
Returns a new byte array, copied from the given buf, from the index 0 (inclusive) to the limit (exclusive), regardless of the current position.
Presumes the byte array is Bytes.SIZEOF_SHORT bytes long.
Use this instead of HRegionInfo.toByteArray() when writing to a stream and you want to use the pb mergeDelimitedFrom (without the delimiter, pb reads to EOF, which may not be what you want).
Retrieve the Struct represented by this StructBuilder.
Deprecated; use LoadIncrementalHFiles.tryAtomicRegionLoad(HConnection, TableName, byte[], Collection) instead.
The Union family of DataTypes encode one of a fixed set of Objects.
Create an instance of Union2 over the set of specified types.
The Union family of DataTypes encode one of a fixed collection of Objects.
Create an instance of Union3 over the set of specified types.
Create an instance of Union4 over the set of specified types.
Set this to MultiTableOutputFormat.WAL_OFF to turn off write-ahead logging (HLog).
A wrapper filter that returns true from WhileMatchFilter.filterAllRemaining() as soon as the wrapped filter's Filter.filterRowKey(byte[], int, int), Filter.filterKeyValue(org.apache.hadoop.hbase.Cell), Filter.filterRow() or Filter.filterAllRemaining() methods return true.
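A two-line sketch of wrapping another filter this way; the prefix is a placeholder.

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.WhileMatchFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class WhileMatchExample {
  static Scan buildScan() {
    Scan scan = new Scan();
    // Terminate the entire scan once a row falls outside the "2014-" prefix.
    scan.setFilter(new WhileMatchFilter(new PrefixFilter(Bytes.toBytes("2014-"))));
    return scan;
  }
}
```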
Deprecated; use HColumnDescriptor.toByteArray() instead.
Deprecated; use HRegionInfo.toByteArray() and HRegionInfo.toDelimitedByteArray() instead.
Deprecated; use MessageLite.toByteArray() instead.
Similar to WritableUtils.writeVLong(java.io.DataOutput, long), but writes to a ByteBuffer.