Packages that use Result | |
---|---|
org.apache.hadoop.hbase.avro | Provides an HBase Avro service. |
org.apache.hadoop.hbase.catalog | |
org.apache.hadoop.hbase.client | Provides the HBase client. |
org.apache.hadoop.hbase.coprocessor | Provides the coprocessor framework for extending HBase region and master behavior. |
org.apache.hadoop.hbase.ipc | Tools to help define network clients and servers. |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce.replication | |
org.apache.hadoop.hbase.master.handler | |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.rest.client | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.thrift | Provides an HBase Thrift service. |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service. |
Uses of Result in org.apache.hadoop.hbase.avro |
---|
Methods in org.apache.hadoop.hbase.avro with parameters of type Result | |
---|---|
static org.apache.avro.generic.GenericArray<AResult> |
AvroUtil.resultsToAResults(Result[] results)
|
static AResult |
AvroUtil.resultToAResult(Result result)
|
Uses of Result in org.apache.hadoop.hbase.catalog |
---|
Methods in org.apache.hadoop.hbase.catalog that return types with arguments of type Result | |
---|---|
static List<Result> |
MetaReader.fullScan(CatalogTracker catalogTracker)
Performs a full scan of the .META. table. |
static List<Result> |
MetaReader.fullScanOfRoot(CatalogTracker catalogTracker)
Performs a full scan of the -ROOT- table. |
static NavigableMap<HRegionInfo,Result> |
MetaReader.getServerUserRegions(CatalogTracker catalogTracker,
ServerName serverName)
|
Methods in org.apache.hadoop.hbase.catalog with parameters of type Result | |
---|---|
static PairOfSameType<HRegionInfo> |
MetaEditor.getDaughterRegions(Result data)
Returns the daughter regions by reading from the corresponding columns of the .META. table Result. |
static PairOfSameType<HRegionInfo> |
MetaReader.getDaughterRegions(Result data)
Returns the daughter regions by reading the corresponding columns of the catalog table Result. |
static HRegionInfo |
MetaEditor.getHRegionInfo(Result data)
|
static ServerName |
MetaReader.getServerNameFromCatalogResult(Result r)
Extract a ServerName from a catalog table Result. |
static Pair<HRegionInfo,ServerName> |
MetaReader.parseCatalogResult(Result r)
Extract an HRegionInfo and ServerName from a catalog table Result. |
static HRegionInfo |
MetaReader.parseHRegionInfoFromCatalogResult(Result r,
byte[] qualifier)
Parse the content of the cell at HConstants.CATALOG_FAMILY and
qualifier as an HRegionInfo and return it, or null. |
boolean |
MetaReader.Visitor.visit(Result r)
Visit the catalog table row. |
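The MetaReader.Visitor callback listed above receives one catalog table Result per row. A minimal sketch of an implementation follows; the class name is hypothetical, MetaReader is an HBase-internal catalog utility, and the throws IOException clause on visit is assumed from typical usage of this interface.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.catalog.MetaReader;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical visitor: prints the row key of every catalog row it is handed.
public class RowPrintingVisitor implements MetaReader.Visitor {
  @Override
  public boolean visit(Result r) throws IOException {
    if (r != null && !r.isEmpty()) {
      System.out.println(Bytes.toString(r.getRow()));
    }
    return true; // returning true keeps the scan going
  }
}
```

A visitor like this is typically driven by a MetaReader full scan of the catalog; the Result-returning fullScan methods above simply collect every row into a List instead.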
Uses of Result in org.apache.hadoop.hbase.client |
---|
Fields in org.apache.hadoop.hbase.client declared as Result | |
---|---|
protected Result |
ClientScanner.lastResult
|
Fields in org.apache.hadoop.hbase.client with type parameters of type Result | |
---|---|
protected LinkedList<Result> |
ClientScanner.cache
|
Methods in org.apache.hadoop.hbase.client that return Result | |
---|---|
Result |
HTable.append(Append append)
Appends values to one or more columns within a single row. |
Result |
HTableInterface.append(Append append)
Appends values to one or more columns within a single row. |
Result[] |
ScannerCallable.call()
|
Result |
HTable.get(Get get)
Extracts certain cells from a given row. |
Result |
HTableInterface.get(Get get)
Extracts certain cells from a given row. |
Result[] |
HTable.get(List<Get> gets)
Extracts certain cells from the given rows, in batch. |
Result[] |
HTableInterface.get(List<Get> gets)
Extracts certain cells from the given rows, in batch. |
Result |
HTable.getRowOrBefore(byte[] row,
byte[] family)
Return the row that matches row exactly, or the one that immediately precedes it. |
Result |
HTableInterface.getRowOrBefore(byte[] row,
byte[] family)
Deprecated. As of version 0.92 this method is deprecated without replacement. getRowOrBefore is used internally to find entries in .META. and makes various assumptions about the table (which are true for .META. but not in general) to be efficient. |
Result |
HTable.increment(Increment increment)
Increments one or more columns within a single row. |
Result |
HTableInterface.increment(Increment increment)
Increments one or more columns within a single row. |
Result |
ClientSmallScanner.next()
|
Result |
ResultScanner.next()
Grab the next row's worth of values. |
Result |
ClientScanner.next()
|
Result[] |
ResultScanner.next(int nbRows)
|
Result[] |
ClientScanner.next(int nbRows)
Get nbRows rows. |
static Result[] |
Result.readArray(DataInput in)
|
Methods in org.apache.hadoop.hbase.client that return types with arguments of type Result | |
---|---|
Iterator<Result> |
AbstractClientScanner.iterator()
|
Methods in org.apache.hadoop.hbase.client with parameters of type Result | |
---|---|
static void |
Result.compareResults(Result res1,
Result res2)
Does a deep comparison of two Results, down to the byte arrays. |
void |
Result.copyFrom(Result other)
Copy another Result into this one. |
static long |
Result.getWriteArraySize(Result[] results)
|
boolean |
MetaScanner.MetaScannerVisitor.processRow(Result rowResult)
Visitor method that accepts a Result for one row of the meta table. |
boolean |
MetaScanner.BlockingMetaScannerVisitor.processRow(Result rowResult)
|
boolean |
MetaScanner.TableMetaScannerVisitor.processRow(Result rowResult)
|
abstract boolean |
MetaScanner.BlockingMetaScannerVisitor.processRowInternal(Result rowResult)
|
static void |
Result.writeArray(DataOutput out,
Result[] results)
|
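Most of the client methods listed above either return a Result directly (HTable.get, append, increment) or hand Results back one row at a time through a ResultScanner. The sketch below shows the common read patterns; the table name "mytable" and the column coordinates "cf"/"col" are placeholders, and a reachable cluster configured via HBaseConfiguration.create() is assumed.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ResultClientSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // placeholder table name
    try {
      // HTable.get(Get) returns a Result holding the matched cells.
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      byte[] value = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"));
      System.out.println("row1: " + (value == null ? "<no cell>" : Bytes.toString(value)));

      // ResultScanner hands back one Result per row; ResultScanner.next()
      // underlies the iteration below.
      ResultScanner scanner = table.getScanner(new Scan());
      try {
        for (Result row : scanner) {
          System.out.println(Bytes.toString(row.getRow()));
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}
```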
Uses of Result in org.apache.hadoop.hbase.coprocessor |
---|
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Result | |
---|---|
Result |
BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e,
Append append,
Result result)
|
Result |
RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c,
Append append,
Result result)
Called after Append |
void |
BaseRegionObserver.postGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> e,
byte[] row,
byte[] family,
Result result)
|
void |
RegionObserver.postGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c,
byte[] row,
byte[] family,
Result result)
Called after a client makes a GetClosestRowBefore request. |
Result |
BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
Increment increment,
Result result)
|
Result |
RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c,
Increment increment,
Result result)
Called after increment |
void |
BaseRegionObserver.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> e,
byte[] row,
byte[] family,
Result result)
|
void |
RegionObserver.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c,
byte[] row,
byte[] family,
Result result)
Called before a client makes a GetClosestRowBefore request. |
Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type Result | |
---|---|
boolean |
BaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e,
InternalScanner s,
List<Result> results,
int limit,
boolean hasMore)
|
boolean |
RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c,
InternalScanner s,
List<Result> result,
int limit,
boolean hasNext)
Called after the client asks for the next row on a scanner. |
boolean |
BaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e,
InternalScanner s,
List<Result> results,
int limit,
boolean hasMore)
|
boolean |
RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c,
InternalScanner s,
List<Result> result,
int limit,
boolean hasNext)
Called before the client asks for the next row on a scanner. |
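The observer hooks above either receive the Result the region produced (postAppend, postIncrement, pre/postGetClosestRowBefore) or the batch of Results a scanner returned (pre/postScannerNext). A minimal sketch of a BaseRegionObserver subclass follows; the class name is hypothetical, and loading it (for example via the hbase.coprocessor.region.classes configuration property) is assumed.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;

// Hypothetical observer: passes the Append result through untouched and
// logs how many Results each scanner batch returned.
public class ResultLoggingObserver extends BaseRegionObserver {

  @Override
  public Result postAppend(ObserverContext<RegionCoprocessorEnvironment> e,
      Append append, Result result) throws IOException {
    // Returning the Result unchanged keeps the client-visible behaviour intact.
    return result;
  }

  @Override
  public boolean postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e,
      InternalScanner s, List<Result> results, int limit, boolean hasMore)
      throws IOException {
    System.out.println("scanner batch: " + results.size()
        + " rows (limit=" + limit + ", hasMore=" + hasMore + ")");
    return hasMore;
  }
}
```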
Uses of Result in org.apache.hadoop.hbase.ipc |
---|
Methods in org.apache.hadoop.hbase.ipc that return Result | |
---|---|
Result |
HRegionInterface.append(byte[] regionName,
Append append)
Appends values to one or more columns within a single row. |
Result |
HRegionInterface.get(byte[] regionName,
Get get)
Perform Get operation. |
Result |
HRegionInterface.getClosestRowBefore(byte[] regionName,
byte[] row,
byte[] family)
Return all the data for the row that matches row exactly, or the one that immediately precedes it. |
Result |
HRegionInterface.increment(byte[] regionName,
Increment increment)
Increments one or more column values within a single row. |
Result |
HRegionInterface.next(long scannerId)
Get the next set of values |
Result[] |
HRegionInterface.next(long scannerId,
int numberOfRows)
Get the next set of values |
Result[] |
HRegionInterface.next(long scannerId,
int caching,
long callSeq)
Get the next set of values |
Result[] |
HRegionInterface.scan(byte[] regionName,
Scan scan,
int numberOfRows)
Perform scan operation. |
Uses of Result in org.apache.hadoop.hbase.mapred |
---|
Methods in org.apache.hadoop.hbase.mapred that return Result | |
---|---|
Result |
TableRecordReaderImpl.createValue()
|
Result |
TableRecordReader.createValue()
|
Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type Result | |
---|---|
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> |
TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split,
org.apache.hadoop.mapred.JobConf job,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Builds a TableRecordReader. |
Methods in org.apache.hadoop.hbase.mapred with parameters of type Result | |
---|---|
protected byte[][] |
GroupingTableMap.extractKeyValues(Result r)
Deprecated. Extract column values from the current record. |
void |
IdentityTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Pass the key and value to the reducer. |
void |
GroupingTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Extract the grouping columns from value to construct a new key. |
boolean |
TableRecordReaderImpl.next(ImmutableBytesWritable key,
Result value)
|
boolean |
TableRecordReader.next(ImmutableBytesWritable key,
Result value)
|
Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Result | |
---|---|
void |
IdentityTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Pass the key and value to the reducer. |
void |
GroupingTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Extract the grouping columns from value to construct a new key. |
Uses of Result in org.apache.hadoop.hbase.mapreduce |
---|
Methods in org.apache.hadoop.hbase.mapreduce that return Result | |
---|---|
Result |
TableRecordReaderImpl.getCurrentValue()
Returns the current value. |
Result |
TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue()
|
Result |
TableRecordReader.getCurrentValue()
Returns the current value. |
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Result | ||
---|---|---|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> |
MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
org.apache.hadoop.mapreduce.TaskAttemptContext context)
Builds a TableRecordReader. |
|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> |
TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
org.apache.hadoop.mapreduce.TaskAttemptContext context)
|
|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> |
TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
org.apache.hadoop.mapreduce.TaskAttemptContext context)
Builds a TableRecordReader. |
|
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> |
MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job)
Get the application's mapper class. |
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Result | |
---|---|
protected byte[][] |
GroupingTableMapper.extractKeyValues(Result r)
Extract column values from the current record. |
void |
GroupingTableMapper.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapreduce.Mapper.Context context)
Extract the grouping columns from value to construct a new key. |
void |
IdentityTableMapper.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapreduce.Mapper.Context context)
Pass the key and value to the reducer. |
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Result | ||
---|---|---|
static <K2,V2> void |
MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job,
Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls)
Set the application's mapper class. |
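Every mapper in this package consumes ImmutableBytesWritable/Result pairs, as the map signatures above show. A short sketch extending TableMapper, the convenience base class in this package, is below; the mapper name and output types are illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Hypothetical mapper: emits (row key, number of cells) for every scanned row.
public class RowSizeMapper extends TableMapper<Text, IntWritable> {

  @Override
  protected void map(ImmutableBytesWritable key, Result value, Context context)
      throws IOException, InterruptedException {
    // Result.size() is the number of KeyValues the scan returned for this row.
    context.write(new Text(key.get()), new IntWritable(value.size()));
  }
}
```

A job would typically wire such a mapper in with TableMapReduceUtil.initTableMapperJob, passing the table name, a Scan, the mapper class, and the output key and value classes.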
Uses of Result in org.apache.hadoop.hbase.mapreduce.replication |
---|
Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Result | |
---|---|
void |
VerifyReplication.Verifier.map(ImmutableBytesWritable row,
Result value,
org.apache.hadoop.mapreduce.Mapper.Context context)
Map method that compares every scanned row with the equivalent from a distant cluster. |
Uses of Result in org.apache.hadoop.hbase.master.handler |
---|
Methods in org.apache.hadoop.hbase.master.handler with parameters of type Result | |
---|---|
static int |
ServerShutdownHandler.fixupDaughters(Result result,
AssignmentManager assignmentManager,
CatalogTracker catalogTracker)
Check that daughter regions are up in .META. |
static boolean |
ServerShutdownHandler.processDeadRegion(HRegionInfo hri,
Result result,
AssignmentManager assignmentManager,
CatalogTracker catalogTracker)
Process a dead region from a dead region server. |
Uses of Result in org.apache.hadoop.hbase.regionserver |
---|
Methods in org.apache.hadoop.hbase.regionserver that return Result | |
---|---|
Result |
HRegion.append(Append append,
boolean writeToWAL)
Perform one or more append operations on a row. |
Result |
HRegion.append(Append append,
Integer lockid,
boolean writeToWAL)
Deprecated. Row locks (lockId) held outside the extent of the operation are deprecated. |
Result |
HRegionServer.append(byte[] regionName,
Append append)
|
Result |
HRegionServer.get(byte[] regionName,
Get get)
Perform Get operation. |
Result |
HRegion.get(Get get)
|
Result |
HRegion.get(Get get,
Integer lockid)
Deprecated. Row locks (lockId) held outside the extent of the operation are deprecated. |
Result |
HRegion.getClosestRowBefore(byte[] row,
byte[] family)
Return all the data for the row that matches row exactly, or the one that immediately precedes it, at or immediately before ts. |
Result |
HRegionServer.getClosestRowBefore(byte[] regionName,
byte[] row,
byte[] family)
|
Result |
HRegionServer.increment(byte[] regionName,
Increment increment)
|
Result |
HRegion.increment(Increment increment,
boolean writeToWAL)
Perform one or more increment operations on a row. |
Result |
HRegion.increment(Increment increment,
Integer lockid,
boolean writeToWAL)
Deprecated. Row locks (lockId) held outside the extent of the operation are deprecated. |
Result |
HRegionServer.next(long scannerId)
|
Result[] |
HRegionServer.next(long scannerId,
int nbRows)
|
Result[] |
HRegionServer.next(long scannerId,
int nbRows,
long callSeq)
|
Result |
RegionCoprocessorHost.postIncrement(Increment increment,
Result result)
|
Result |
RegionCoprocessorHost.preAppend(Append append)
|
Result |
RegionCoprocessorHost.preIncrement(Increment increment)
|
Result[] |
HRegionServer.scan(byte[] regionName,
Scan scan,
int numberOfRows)
|
Methods in org.apache.hadoop.hbase.regionserver with parameters of type Result | |
---|---|
void |
RegionCoprocessorHost.postAppend(Append append,
Result result)
|
void |
RegionCoprocessorHost.postGetClosestRowBefore(byte[] row,
byte[] family,
Result result)
|
Result |
RegionCoprocessorHost.postIncrement(Increment increment,
Result result)
|
boolean |
RegionCoprocessorHost.preGetClosestRowBefore(byte[] row,
byte[] family,
Result result)
|
Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type Result | |
---|---|
boolean |
RegionCoprocessorHost.postScannerNext(InternalScanner s,
List<Result> results,
int limit,
boolean hasMore)
|
Boolean |
RegionCoprocessorHost.preScannerNext(InternalScanner s,
List<Result> results,
int limit)
|
Uses of Result in org.apache.hadoop.hbase.rest.client |
---|
Methods in org.apache.hadoop.hbase.rest.client that return Result | |
---|---|
Result |
RemoteHTable.append(Append append)
|
protected Result[] |
RemoteHTable.buildResultFromModel(CellSetModel model)
|
Result |
RemoteHTable.get(Get get)
|
Result[] |
RemoteHTable.get(List<Get> gets)
|
Result |
RemoteHTable.getRowOrBefore(byte[] row,
byte[] family)
|
Result |
RemoteHTable.increment(Increment increment)
|
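RemoteHTable exposes the same Result-returning calls as HTable but routes them through the HBase REST gateway. A minimal sketch, assuming a REST server on localhost:8080 and a table named "mytable" (both placeholders):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RemoteGetSketch {
  public static void main(String[] args) throws IOException {
    // Point the REST client at the gateway (host and port are placeholders).
    Cluster cluster = new Cluster();
    cluster.add("localhost", 8080);
    RemoteHTable table = new RemoteHTable(new Client(cluster), "mytable");
    try {
      // RemoteHTable.get(Get) returns an ordinary client Result.
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println("cells returned: " + r.size());
    } finally {
      table.close();
    }
  }
}
```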
Uses of Result in org.apache.hadoop.hbase.security.access |
---|
Methods in org.apache.hadoop.hbase.security.access that return Result | |
---|---|
Result |
AccessController.preAppend(ObserverContext<RegionCoprocessorEnvironment> c,
Append append)
|
Result |
AccessController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c,
Increment increment)
|
Methods in org.apache.hadoop.hbase.security.access with parameters of type Result | |
---|---|
void |
AccessController.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c,
byte[] row,
byte[] family,
Result result)
|
Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type Result | |
---|---|
boolean |
AccessController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c,
InternalScanner s,
List<Result> result,
int limit,
boolean hasNext)
|
Uses of Result in org.apache.hadoop.hbase.thrift |
---|
Methods in org.apache.hadoop.hbase.thrift with parameters of type Result | |
---|---|
static List<TRowResult> |
ThriftUtilities.rowResultFromHBase(Result in)
|
static List<TRowResult> |
ThriftUtilities.rowResultFromHBase(Result[] in)
This utility method creates a list of Thrift TRowResult "structs" from an array of HBase Result objects. |
static List<TRowResult> |
ThriftUtilities.rowResultFromHBase(Result[] in,
boolean sortColumns)
This utility method creates a list of Thrift TRowResult "structs" from an array of HBase Result objects, optionally sorting the columns. |
Uses of Result in org.apache.hadoop.hbase.thrift2 |
---|
Methods in org.apache.hadoop.hbase.thrift2 with parameters of type Result | |
---|---|
static TResult |
ThriftUtilities.resultFromHBase(Result in)
Creates a TResult (Thrift) from a Result (HBase). |
static List<TResult> |
ThriftUtilities.resultsFromHBase(Result[] in)
Converts multiple Results (HBase) into a list of TResults (Thrift). |
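ThriftUtilities bridges the HBase client objects and their Thrift counterparts. The sketch below builds a one-cell Result by hand and converts it to a TResult; the row, family, qualifier, and value bytes are placeholders, and the getColumnValues accessor name is assumed from the standard Thrift Java codegen for the columnValues list.

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.thrift2.ThriftUtilities;
import org.apache.hadoop.hbase.thrift2.generated.TResult;
import org.apache.hadoop.hbase.util.Bytes;

public class ThriftConversionSketch {
  public static void main(String[] args) {
    // Build a single-cell Result by hand (all byte values are placeholders).
    KeyValue kv = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
        Bytes.toBytes("col"), Bytes.toBytes("value"));
    Result hbaseResult = new Result(new KeyValue[] { kv });

    // ThriftUtilities.resultFromHBase produces the Thrift-side TResult.
    TResult tResult = ThriftUtilities.resultFromHBase(hbaseResult);
    System.out.println("column values: " + tResult.getColumnValues().size());
  }
}
```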