Uses of Class
org.apache.hadoop.hbase.client.Result

Packages that use Result
org.apache.hadoop.hbase   
org.apache.hadoop.hbase.catalog   
org.apache.hadoop.hbase.client Provides HBase Client 
org.apache.hadoop.hbase.coprocessor Provides the classes and interfaces for writing HBase coprocessors. 
org.apache.hadoop.hbase.mapred Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. 
org.apache.hadoop.hbase.mapreduce Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. 
org.apache.hadoop.hbase.mapreduce.replication   
org.apache.hadoop.hbase.master.handler   
org.apache.hadoop.hbase.protobuf Holds classes generated from protobuf src/main/protobuf definition files. 
org.apache.hadoop.hbase.regionserver   
org.apache.hadoop.hbase.rest.client   
org.apache.hadoop.hbase.security.access   
org.apache.hadoop.hbase.thrift Provides an HBase Thrift service. 
org.apache.hadoop.hbase.thrift2 Provides an HBase Thrift service. 
 

Uses of Result in org.apache.hadoop.hbase
 

Methods in org.apache.hadoop.hbase with parameters of type Result
static PairOfSameType<HRegionInfo> HRegionInfo.getDaughterRegions(Result data)
          Returns the daughter regions by reading the corresponding columns of the catalog table Result.
static HRegionInfo HRegionInfo.getHRegionInfo(Result data)
          Returns the HRegionInfo object from the column HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result.
static HRegionInfo HRegionInfo.getHRegionInfo(Result r, byte[] qualifier)
          Returns the HRegionInfo object from the column HConstants.CATALOG_FAMILY and qualifier of the catalog table result.
static Pair<HRegionInfo,ServerName> HRegionInfo.getHRegionInfoAndServerName(Result r)
          Extract an HRegionInfo and ServerName from a catalog table Result.
static long HRegionInfo.getSeqNumDuringOpen(Result r)
          The latest seqnum that the server writing to meta observed when opening the region.
static ServerName HRegionInfo.getServerName(Result r)
          Returns a ServerName from catalog table Result.
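
For orientation, a minimal hedged sketch of how these static helpers are typically combined on a single catalog row; the class and method names are illustrative, and metaRow is assumed to be a Result read from the catalog (.META.) table.

    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Result;

    final class CatalogResultExample {
      // 'metaRow' is assumed to be a Result read from the catalog (.META.) table.
      static void printRegionLocation(Result metaRow) {
        HRegionInfo info = HRegionInfo.getHRegionInfo(metaRow);  // region descriptor, or null
        ServerName server = HRegionInfo.getServerName(metaRow);  // hosting server, or null
        if (info != null) {
          System.out.println(info.getRegionNameAsString() + " -> " + server);
        }
      }
    }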
 

Uses of Result in org.apache.hadoop.hbase.catalog
 

Methods in org.apache.hadoop.hbase.catalog that return Result
static Result MetaReader.getRegionResult(CatalogTracker catalogTracker, byte[] regionName)
          Gets the result in META for the specified region.
 

Methods in org.apache.hadoop.hbase.catalog that return types with arguments of type Result
static List<Result> MetaReader.fullScan(CatalogTracker catalogTracker)
          Performs a full scan of .META..
static List<Result> MetaReader.fullScanOfMeta(CatalogTracker catalogTracker)
          Performs a full scan of a .META. table.
static NavigableMap<HRegionInfo,Result> MetaReader.getServerUserRegions(CatalogTracker catalogTracker, ServerName serverName)
           
 

Methods in org.apache.hadoop.hbase.catalog with parameters of type Result
 boolean MetaReader.Visitor.visit(Result r)
          Visit the catalog table row.
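
A hedged sketch of how the scan helpers above are usually paired with the HRegionInfo accessors from org.apache.hadoop.hbase; the wrapper class is made up, and the CatalogTracker passed in is assumed to be already started.

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.catalog.CatalogTracker;
    import org.apache.hadoop.hbase.catalog.MetaReader;
    import org.apache.hadoop.hbase.client.Result;

    final class MetaScanExample {
      // 'tracker' is assumed to be a started CatalogTracker.
      static void listRegions(CatalogTracker tracker) throws IOException, InterruptedException {
        List<Result> rows = MetaReader.fullScan(tracker);  // one Result per catalog row
        for (Result r : rows) {
          HRegionInfo info = HRegionInfo.getHRegionInfo(r);
          if (info != null) {
            System.out.println(info.getRegionNameAsString());
          }
        }
      }
    }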
 

Uses of Result in org.apache.hadoop.hbase.client
 

Fields in org.apache.hadoop.hbase.client declared as Result
static Result Result.EMPTY_RESULT
           
 

Methods in org.apache.hadoop.hbase.client that return Result
 Result HTableInterface.append(Append append)
          Appends values to one or more columns within a single row.
 Result HTable.append(Append append)
          Appends values to one or more columns within a single row.
 Result[] ScannerCallable.call()
           
 Result HTableInterface.get(Get get)
          Extracts certain cells from a given row.
 Result HTable.get(Get get)
          Extracts certain cells from a given row.
 Result[] HTableInterface.get(List<Get> gets)
          Extracts certain cells from the given rows, in batch.
 Result[] HTable.get(List<Get> gets)
          Extracts certain cells from the given rows, in batch.
 Result HTableInterface.getRowOrBefore(byte[] row, byte[] family)
          Deprecated. As of version 0.92 this method is deprecated without replacement. getRowOrBefore is used internally to find entries in .META. and makes various assumptions about the table (which are true for .META. but not in general) to be efficient.
 Result HTable.getRowOrBefore(byte[] row, byte[] family)
          Return the row that matches row exactly, or the one that immediately precedes it.
 Result HTableInterface.increment(Increment increment)
          Increments one or more columns within a single row.
 Result HTable.increment(Increment increment)
          Increments one or more columns within a single row.
 Result ResultScanner.next()
          Grab the next row's worth of values.
 Result ClientScanner.next()
           
 Result[] ResultScanner.next(int nbRows)
           
 Result[] ClientScanner.next(int nbRows)
          Get nbRows rows.
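
To put the most common of these calls in context, a minimal client sketch; the table name, row key, column family "cf", and qualifier "q" are assumptions made only for the example.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    final class ClientReadExample {
      static void readExamples() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");          // hypothetical table name
        try {
          // Single-row read: HTable.get(Get) returns one Result.
          Result row = table.get(new Get(Bytes.toBytes("row1")));
          byte[] cell = row.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));  // may be null

          // Scan: each call to ResultScanner.next() yields one row's worth of cells.
          ResultScanner scanner = table.getScanner(new Scan());
          try {
            for (Result r : scanner) {
              System.out.println(Bytes.toString(r.getRow()));
            }
          } finally {
            scanner.close();
          }
        } finally {
          table.close();
        }
      }
    }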
 

Methods in org.apache.hadoop.hbase.client that return types with arguments of type Result
 Iterator<Result> AbstractClientScanner.iterator()
           
 

Methods in org.apache.hadoop.hbase.client with parameters of type Result
static void Result.compareResults(Result res1, Result res2)
          Does a deep comparison of two Results, down to the byte arrays.
 void Result.copyFrom(Result other)
          Copy another Result into this one.
static HRegionInfo MetaScanner.getHRegionInfo(Result data)
          Returns the HRegionInfo object from the column HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result.
 boolean MetaScanner.MetaScannerVisitor.processRow(Result rowResult)
          Visitor method that accepts a Result for each catalog (meta) table row.
 boolean MetaScanner.DefaultMetaScannerVisitor.processRow(Result rowResult)
           
 boolean MetaScanner.TableMetaScannerVisitor.processRow(Result rowResult)
           
abstract  boolean MetaScanner.DefaultMetaScannerVisitor.processRowInternal(Result rowResult)
           
 

Uses of Result in org.apache.hadoop.hbase.coprocessor
 

Methods in org.apache.hadoop.hbase.coprocessor that return Result
 Result RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)
          Called after Append.
 Result BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, Append append, Result result)
           
 Result RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result)
          Called after Increment.
 Result BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, Increment increment, Result result)
           
 Result RegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
          Called before Append.
 Result BaseRegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> e, Append append)
           
 Result RegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
          Called before Increment.
 Result BaseRegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e, Increment increment)
           
 

Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Result
 Result RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)
          Called after Append.
 Result BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, Append append, Result result)
           
 void RegionObserver.postGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c, byte[] row, byte[] family, Result result)
          Called after a client makes a GetClosestRowBefore request.
 void BaseRegionObserver.postGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> e, byte[] row, byte[] family, Result result)
           
 Result RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, Result result)
          Called after Increment.
 Result BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, Increment increment, Result result)
           
 void RegionObserver.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c, byte[] row, byte[] family, Result result)
          Called before a client makes a GetClosestRowBefore request.
 void BaseRegionObserver.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> e, byte[] row, byte[] family, Result result)
           
 

Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type Result
 boolean RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)
          Called after the client asks for the next row on a scanner.
 boolean BaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, InternalScanner s, List<Result> results, int limit, boolean hasMore)
           
 boolean RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)
          Called before the client asks for the next row on a scanner.
 boolean BaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, InternalScanner s, List<Result> results, int limit, boolean hasMore)
           
 
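As a hedged illustration of the Result-returning hooks above, a minimal region observer; the class name AuditingObserver is made up, and returning null from preIncrement simply lets the normal server-side increment run.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Increment;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

    public class AuditingObserver extends BaseRegionObserver {
      @Override
      public Result preIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
          Increment increment) throws IOException {
        return null;  // null: do not short-circuit, let the region perform the increment
      }

      @Override
      public Result postIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
          Increment increment, Result result) throws IOException {
        return result;  // whatever Result is returned here is what the client receives
      }
    }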

Uses of Result in org.apache.hadoop.hbase.mapred
 

Methods in org.apache.hadoop.hbase.mapred that return Result
 Result TableRecordReaderImpl.createValue()
           
 Result TableRecordReader.createValue()
           
 

Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type Result
 org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)
          Deprecated. Builds a TableRecordReader.
 

Methods in org.apache.hadoop.hbase.mapred with parameters of type Result
protected  byte[][] GroupingTableMap.extractKeyValues(Result r)
          Deprecated. Extract columns values from the current record.
 void IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)
          Deprecated. Pass the key, value to reduce.
 void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)
          Deprecated. Extract the grouping columns from value to construct a new key.
 boolean TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value)
           
 boolean TableRecordReader.next(ImmutableBytesWritable key, Result value)
           
 

Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Result
 void IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)
          Deprecated. Pass the key, value to reduce.
 void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter)
          Deprecated. Extract the grouping columns from value to construct a new key.
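
A hedged sketch of a mapper in this deprecated mapred API, mirroring the IdentityTableMap signature listed above; the class name and the empty-row filter are illustrative only.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapred.TableMap;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class NonEmptyRowMap extends MapReduceBase
        implements TableMap<ImmutableBytesWritable, Result> {
      public void map(ImmutableBytesWritable key, Result value,
          OutputCollector<ImmutableBytesWritable, Result> output,
          Reporter reporter) throws IOException {
        if (!value.isEmpty()) {       // pass through only rows that contain cells
          output.collect(key, value);
        }
      }
    }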
 

Uses of Result in org.apache.hadoop.hbase.mapreduce
 

Methods in org.apache.hadoop.hbase.mapreduce that return Result
 Result TableRecordReaderImpl.getCurrentValue()
          Returns the current value.
 Result TableRecordReader.getCurrentValue()
          Returns the current value.
 

Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Result
 org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
          Builds a TableRecordReader.
 org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
          Builds a TableRecordReader.
 org.apache.hadoop.io.serializer.Deserializer<Result> ResultSerialization.getDeserializer(Class<Result> c)
           
static <K2,V2> Class<org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> MultithreadedTableMapper.getMapperClass(org.apache.hadoop.mapreduce.JobContext job)
          Get the application's mapper class.
 org.apache.hadoop.io.serializer.Serializer<Result> ResultSerialization.getSerializer(Class<Result> c)
           
 

Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Result
protected  byte[][] GroupingTableMapper.extractKeyValues(Result r)
          Extract columns values from the current record.
protected  void IndexBuilder.Map.map(ImmutableBytesWritable rowKey, Result result, org.apache.hadoop.mapreduce.Mapper.Context context)
           
 void IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
          Pass the key, value to reduce.
 void GroupingTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
          Extract the grouping columns from value to construct a new key.
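
A hedged sketch of the equivalent mapper in the newer mapreduce API; TableMapper fixes the input key/value types to ImmutableBytesWritable and Result, and the class name PassThroughMapper is made up.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;

    public class PassThroughMapper
        extends TableMapper<ImmutableBytesWritable, Result> {
      @Override
      protected void map(ImmutableBytesWritable key, Result value, Context context)
          throws IOException, InterruptedException {
        context.write(key, value);  // emit the row key and its Result unchanged
      }
    }

Such a mapper can then be registered on a job, for example via MultithreadedTableMapper.setMapperClass as listed in the next table.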
 

Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Result
 org.apache.hadoop.io.serializer.Deserializer<Result> ResultSerialization.getDeserializer(Class<Result> c)
           
 org.apache.hadoop.io.serializer.Serializer<Result> ResultSerialization.getSerializer(Class<Result> c)
           
static <K2,V2> void MultithreadedTableMapper.setMapperClass(org.apache.hadoop.mapreduce.Job job, Class<? extends org.apache.hadoop.mapreduce.Mapper<ImmutableBytesWritable,Result,K2,V2>> cls)
          Set the application's mapper class.
 

Uses of Result in org.apache.hadoop.hbase.mapreduce.replication
 

Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Result
 void VerifyReplication.Verifier.map(ImmutableBytesWritable row, Result value, org.apache.hadoop.mapreduce.Mapper.Context context)
          Map method that compares every scanned row with the equivalent from a distant cluster.
 

Uses of Result in org.apache.hadoop.hbase.master.handler
 

Methods in org.apache.hadoop.hbase.master.handler with parameters of type Result
static boolean ServerShutdownHandler.processDeadRegion(HRegionInfo hri, Result result, AssignmentManager assignmentManager, CatalogTracker catalogTracker)
          Process a dead region from a dead RS.
 

Uses of Result in org.apache.hadoop.hbase.protobuf
 

Methods in org.apache.hadoop.hbase.protobuf that return Result
static Result ProtobufUtil.get(ClientProtos.ClientService.BlockingInterface client, byte[] regionName, Get get)
          A helper to invoke a Get using client protocol.
static Result[] ResponseConverter.getResults(CellScanner cellScanner, ClientProtos.ScanResponse response)
          Create Results from the cells using the cells meta data.
static Result ProtobufUtil.getRowOrBefore(ClientProtos.ClientService.BlockingInterface client, byte[] regionName, byte[] row, byte[] family)
          A helper to get the row matching the given row exactly, or the closest one before it, using client protocol.
static Result ProtobufUtil.toResult(ClientProtos.Result proto)
          Convert a protocol buffer Result to a client Result.
static Result ProtobufUtil.toResult(ClientProtos.Result proto, CellScanner scanner)
          Convert a protocol buffer Result to a client Result.
 

Methods in org.apache.hadoop.hbase.protobuf with parameters of type Result
static ClientProtos.Result ProtobufUtil.toResult(Result result)
          Convert a client Result to a protocol buffer Result.
static ClientProtos.Result ProtobufUtil.toResultNoData(Result result)
          Convert a client Result to a protocol buffer Result.
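
A brief hedged sketch of the converters above; the roundTrip helper and its wrapper class are made up, and the input is assumed to be a Result obtained from a client read.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
    import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;

    final class ProtobufResultExample {
      // Round-trip a client Result through its protocol buffer form.
      static Result roundTrip(Result result) throws IOException {
        ClientProtos.Result proto = ProtobufUtil.toResult(result);  // client -> protobuf
        return ProtobufUtil.toResult(proto);                        // protobuf -> client
      }
    }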
 

Uses of Result in org.apache.hadoop.hbase.regionserver
 

Methods in org.apache.hadoop.hbase.regionserver that return Result
 Result HRegion.append(Append append)
          Perform one or more append operations on a row.
protected  Result HRegionServer.append(HRegion region, ClientProtos.MutationProto m, CellScanner cellScanner)
          Execute an append mutation.
 Result HRegion.get(Get get)
           
 Result HRegion.getClosestRowBefore(byte[] row, byte[] family)
          Return all the data for the row that matches row exactly, or the one that immediately precedes it.
protected  Result HRegionServer.increment(HRegion region, ClientProtos.MutationProto mutation, CellScanner cells)
          Execute an increment mutation.
 Result HRegion.increment(Increment increment)
          Perform one or more increment operations on a row.
 Result RegionCoprocessorHost.postIncrement(Increment increment, Result result)
           
 Result RegionCoprocessorHost.preAppend(Append append)
           
 Result RegionCoprocessorHost.preIncrement(Increment increment)
           
 

Methods in org.apache.hadoop.hbase.regionserver with parameters of type Result
 void RegionCoprocessorHost.postAppend(Append append, Result result)
           
 void RegionCoprocessorHost.postGetClosestRowBefore(byte[] row, byte[] family, Result result)
           
 Result RegionCoprocessorHost.postIncrement(Increment increment, Result result)
           
 boolean RegionCoprocessorHost.preGetClosestRowBefore(byte[] row, byte[] family, Result result)
           
 

Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type Result
 boolean RegionCoprocessorHost.postScannerNext(InternalScanner s, List<Result> results, int limit, boolean hasMore)
           
 Boolean RegionCoprocessorHost.preScannerNext(InternalScanner s, List<Result> results, int limit)
           
 

Uses of Result in org.apache.hadoop.hbase.rest.client
 

Methods in org.apache.hadoop.hbase.rest.client that return Result
 Result RemoteHTable.append(Append append)
           
protected  Result[] RemoteHTable.buildResultFromModel(CellSetModel model)
           
 Result RemoteHTable.get(Get get)
           
 Result[] RemoteHTable.get(List<Get> gets)
           
 Result RemoteHTable.getRowOrBefore(byte[] row, byte[] family)
           
 Result RemoteHTable.increment(Increment increment)
           
 
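A hedged sketch of a read through the REST gateway using RemoteHTable; the gateway host and port, the table name, and the column coordinates are all assumptions made for the example.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.rest.client.Client;
    import org.apache.hadoop.hbase.rest.client.Cluster;
    import org.apache.hadoop.hbase.rest.client.RemoteHTable;
    import org.apache.hadoop.hbase.util.Bytes;

    final class RestReadExample {
      static byte[] readViaRest() throws IOException {
        Cluster cluster = new Cluster();
        cluster.add("resthost.example.com", 8080);   // hypothetical REST gateway
        RemoteHTable table = new RemoteHTable(new Client(cluster), "mytable");
        try {
          Result row = table.get(new Get(Bytes.toBytes("row1")));
          return row.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        } finally {
          table.close();
        }
      }
    }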

Uses of Result in org.apache.hadoop.hbase.security.access
 

Methods in org.apache.hadoop.hbase.security.access that return Result
 Result AccessController.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
           
 Result AccessController.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
           
 

Methods in org.apache.hadoop.hbase.security.access with parameters of type Result
 void AccessController.preGetClosestRowBefore(ObserverContext<RegionCoprocessorEnvironment> c, byte[] row, byte[] family, Result result)
           
 

Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type Result
 boolean AccessController.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, int limit, boolean hasNext)
           
 

Uses of Result in org.apache.hadoop.hbase.thrift
 

Methods in org.apache.hadoop.hbase.thrift with parameters of type Result
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> ThriftUtilities.rowResultFromHBase(Result in)
           
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> ThriftUtilities.rowResultFromHBase(Result[] in)
          This utility method creates a list of Thrift TRowResult "structs" based on an array of HBase Result objects.
static List<org.apache.hadoop.hbase.thrift.generated.TRowResult> ThriftUtilities.rowResultFromHBase(Result[] in, boolean sortColumns)
          This utility method creates a list of Thrift TRowResult "structs" based on an array of HBase Result objects, optionally sorting the columns in each row.
 
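A brief hedged sketch of how the converter above is applied on the Thrift server side; the helper and wrapper names are made up, and the Result[] is assumed to come from a scanner.

    import java.util.List;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.thrift.ThriftUtilities;
    import org.apache.hadoop.hbase.thrift.generated.TRowResult;

    final class ThriftResultExample {
      // 'scanResults' is assumed to be an array returned by ResultScanner.next(int).
      static List<TRowResult> toThrift(Result[] scanResults) {
        return ThriftUtilities.rowResultFromHBase(scanResults);  // one TRowResult per row
      }
    }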

Uses of Result in org.apache.hadoop.hbase.thrift2
 

Methods in org.apache.hadoop.hbase.thrift2 with parameters of type Result
static TResult ThriftUtilities.resultFromHBase(Result in)
          Creates a TResult (Thrift) from a Result (HBase).
static List<TResult> ThriftUtilities.resultsFromHBase(Result[] in)
          Converts multiple Results (HBase) into a list of TResults (Thrift).
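
And the thrift2 counterpart, converting a single Result to a TResult; again the wrapper class is only a sketch, and the input is assumed to come from a client read.

    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.thrift2.ThriftUtilities;
    import org.apache.hadoop.hbase.thrift2.generated.TResult;

    final class Thrift2ResultExample {
      // 'row' is assumed to be a Result returned by HTable.get(Get).
      static TResult toThrift2(Result row) {
        return ThriftUtilities.resultFromHBase(row);  // wraps the row's cells in a TResult
      }
    }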
 



Copyright © 2013 The Apache Software Foundation. All Rights Reserved.