Uses of Class
org.apache.hadoop.hbase.client.Scan

Packages that use Scan
org.apache.hadoop.hbase.catalog   
org.apache.hadoop.hbase.client Provides the HBase client. 
org.apache.hadoop.hbase.client.coprocessor Provides client classes for invoking Coprocessor RPC protocols 
org.apache.hadoop.hbase.coprocessor Provides the HBase coprocessor framework. 
org.apache.hadoop.hbase.coprocessor.example   
org.apache.hadoop.hbase.io   
org.apache.hadoop.hbase.mapred Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. 
org.apache.hadoop.hbase.mapreduce Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. 
org.apache.hadoop.hbase.protobuf Holds classes generated from protobuf src/main/protobuf definition files. 
org.apache.hadoop.hbase.regionserver   
org.apache.hadoop.hbase.rest.client   
org.apache.hadoop.hbase.rest.model   
org.apache.hadoop.hbase.security.access   
org.apache.hadoop.hbase.security.visibility   
org.apache.hadoop.hbase.thrift2 Provides an HBase Thrift service. 
 

Uses of Scan in org.apache.hadoop.hbase.catalog
 

Methods in org.apache.hadoop.hbase.catalog that return Scan
static Scan MetaReader.getScanForTableName(TableName tableName)
          This method creates a Scan object that will only scan catalog rows that belong to the specified table.
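
For illustration, a minimal sketch of how the returned Scan might be used to read the catalog rows for one table; the table name, the connection handling, and the use of HTable against TableName.META_TABLE_NAME here are assumptions rather than part of MetaReader itself (exception handling omitted):

    Configuration conf = HBaseConfiguration.create();
    // Build a Scan restricted to the hbase:meta rows of the (hypothetical) table "mytable".
    Scan metaScan = MetaReader.getScanForTableName(TableName.valueOf("mytable"));
    try (HTable meta = new HTable(conf, TableName.META_TABLE_NAME);
         ResultScanner results = meta.getScanner(metaScan)) {
      for (Result r : results) {
        System.out.println(Bytes.toString(r.getRow()));   // catalog row keys for "mytable"
      }
    }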
 

Uses of Scan in org.apache.hadoop.hbase.client
 

Fields in org.apache.hadoop.hbase.client declared as Scan
protected  Scan ClientScanner.scan
           
 

Methods in org.apache.hadoop.hbase.client that return Scan
 Scan Scan.addColumn(byte[] family, byte[] qualifier)
          Get the column from the specified family with the specified qualifier.
 Scan Scan.addFamily(byte[] family)
          Get all columns from the specified family.
protected  Scan ScannerCallable.getScan()
           
protected  Scan ClientScanner.getScan()
           
 Scan Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
          Set the family map for this Scan.
 Scan Scan.setFilter(Filter filter)
           
 Scan Scan.setMaxVersions()
          Get all available versions.
 Scan Scan.setMaxVersions(int maxVersions)
          Get up to the specified number of versions of each column.
 Scan Scan.setReversed(boolean reversed)
          Set whether this scan is a reversed one
 Scan Scan.setStartRow(byte[] startRow)
          Set the start row of the scan.
 Scan Scan.setStopRow(byte[] stopRow)
          Set the stop row.
 Scan Scan.setTimeRange(long minStamp, long maxStamp)
          Get versions of columns only within the specified timestamp range, [minStamp, maxStamp).
 Scan Scan.setTimeStamp(long timestamp)
          Get versions of columns with the specified timestamp.
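
Because the setters above return the Scan itself, they are commonly chained when configuring a scan; a small illustrative sketch (the family, qualifier, and row-key values are made up, and setTimeRange can throw IOException):

    Scan scan = new Scan()
        .setStartRow(Bytes.toBytes("row-0100"))               // inclusive start row
        .setStopRow(Bytes.toBytes("row-0200"))                // exclusive stop row
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1"))  // one family:qualifier
        .setMaxVersions(3)                                    // up to 3 versions per column
        .setTimeRange(0L, Long.MAX_VALUE)                     // [minStamp, maxStamp)
        .setReversed(false);
    scan.setFilter(new PrefixFilter(Bytes.toBytes("row-01")));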
 

Methods in org.apache.hadoop.hbase.client with parameters of type Scan
 RegionServerCallable<Result[]> ClientSmallScanner.SmallScannerCallableFactory.getCallable(Scan sc, HConnection connection, TableName tableName, ScanMetrics scanMetrics, byte[] localStartKey, int cacheNum, RpcControllerFactory rpcControllerFactory)
           
 ResultScanner HTable.getScanner(Scan scan)
          Returns a scanner on the current table as specified by the Scan object.
 ResultScanner HTableInterface.getScanner(Scan scan)
          Returns a scanner on the current table as specified by the Scan object.
protected  void AbstractClientScanner.initScanMetrics(Scan scan)
          Check whether the application wants to collect scan metrics and, if so, initialize them.
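
A hedged sketch of the usual client scan loop built on HTable.getScanner(Scan); the table and column names are hypothetical and exception handling is omitted:

    try (HTable table = new HTable(conf, TableName.valueOf("mytable"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {                  // ResultScanner is Iterable<Result>
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q1"));
        // process value ...
      }
    }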
 

Constructors in org.apache.hadoop.hbase.client with parameters of type Scan
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, byte[] tableName)
          Deprecated. Use ClientScanner.ClientScanner(Configuration, Scan, TableName)
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, byte[] tableName, HConnection connection)
          Deprecated. Use ClientScanner.ClientScanner(Configuration, Scan, TableName, HConnection)
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName)
          Deprecated. 
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection)
          Create a new ClientScanner for the specified table. Note that the passed Scan's start row may be changed.
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection, RpcRetryingCallerFactory rpcFactory)
          Deprecated. Use ClientScanner.ClientScanner(Configuration, Scan, TableName, HConnection, RpcRetryingCallerFactory, RpcControllerFactory) instead
ClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory)
          Create a new ClientScanner for the specified table. Note that the passed Scan's start row may be changed.
ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, HTableDescriptor htd, HRegionInfo hri, Scan scan, ScanMetrics scanMetrics)
           
ClientSmallReversedScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection)
          Create a new ReversibleClientScanner for the specified table. Note that the passed Scan's start row may be changed.
ClientSmallScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName)
          Create a new ClientSmallScanner for the specified table.
ClientSmallScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection)
          Create a new ClientSmallScanner for the specified table.
ClientSmallScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory)
          Create a new ClientSmallScanner for the specified table. Note that the passed Scan's start row may be changed.
ReversedClientScanner(org.apache.hadoop.conf.Configuration conf, Scan scan, TableName tableName, HConnection connection)
          Create a new ReversibleClientScanner for the specified table. Note that the passed Scan's start row may be changed.
ReversedScannerCallable(HConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, byte[] locateStartRow)
          Deprecated. 
ReversedScannerCallable(HConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, byte[] locateStartRow, PayloadCarryingRpcController rpcFactory)
           
Scan(Scan scan)
          Creates a new instance of this class while copying all values.
ScannerCallable(HConnection connection, byte[] tableName, Scan scan, ScanMetrics scanMetrics)
          Deprecated. Use ScannerCallable.ScannerCallable(HConnection, TableName, Scan, ScanMetrics, PayloadCarryingRpcController)
ScannerCallable(HConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, PayloadCarryingRpcController controller)
           
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)
          Creates a TableSnapshotScanner.
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, Scan scan)
          Creates a TableSnapshotScanner.
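
A sketch of client-side scanning over a snapshot with TableSnapshotScanner; the snapshot name and restore directory are assumptions, and the restore directory should be a scratch location on the same filesystem as the HBase root directory:

    Path restoreDir = new Path("/hbase/.tmp/restore-mytable-snap");   // scratch dir (assumed)
    Scan scan = new Scan();                                           // full scan of the snapshot
    try (TableSnapshotScanner snapshotScanner =
             new TableSnapshotScanner(conf, restoreDir, "mytable-snap", scan)) {
      Result r;
      while ((r = snapshotScanner.next()) != null) {
        // process r ...
      }
    }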
 

Uses of Scan in org.apache.hadoop.hbase.client.coprocessor
 

Methods in org.apache.hadoop.hbase.client.coprocessor with parameters of type Scan
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
double
AggregationClient.avg(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          This is the client side interface/handle for calling the average method for a given cf-cq combination.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
double
AggregationClient.avg(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          This is the client side interface/handle for calling the average method for a given cf-cq combination.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
R
AggregationClient.max(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It gives the maximum value of a column for a given column family for the given range.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
R
AggregationClient.max(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It gives the maximum value of a column for a given column family for the given range.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
R
AggregationClient.median(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          This is the client side interface/handler for calling the median method for a given cf-cq combination.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
R
AggregationClient.median(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          This is the client side interface/handler for calling the median method for a given cf-cq combination.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
R
AggregationClient.min(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It gives the minimum value of a column for a given column family for the given range.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
R
AggregationClient.min(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It gives the minimum value of a column for a given column family for the given range.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
long
AggregationClient.rowCount(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It gives the row count, by summing up the individual results obtained from regions.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
long
AggregationClient.rowCount(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It gives the row count, by summing up the individual results obtained from regions.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
double
AggregationClient.std(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          This is the client side interface/handle for calling the std method for a given cf-cq combination.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
double
AggregationClient.std(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          This is the client side interface/handle for calling the std method for a given cf-cq combination.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
S
AggregationClient.sum(HTable table, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It sums up the value returned from various regions.
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
S
AggregationClient.sum(TableName tableName, ColumnInterpreter<R,S,P,Q,T> ci, Scan scan)
          It sums up the value returned from various regions.
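
These calls require the AggregateImplementation coprocessor endpoint to be loaded on the target table; a hedged sketch using rowCount with a LongColumnInterpreter (the table and family names are hypothetical):

    AggregationClient aggregationClient = new AggregationClient(conf);
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));      // the aggregation endpoint expects a single family
    try {
      long rows = aggregationClient.rowCount(
          TableName.valueOf("mytable"), new LongColumnInterpreter(), scan);
      System.out.println("row count = " + rows);
    } catch (Throwable t) {                   // rowCount is declared to throw Throwable
      t.printStackTrace();
    }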
 

Uses of Scan in org.apache.hadoop.hbase.coprocessor
 

Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Scan
 RegionScanner BaseRegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)
           
 RegionScanner RegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
          Called after the client opens a new scanner.
 RegionScanner BaseRegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)
           
 RegionScanner RegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
          Called before the client opens a new scanner.
 KeyValueScanner BaseRegionObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)
           
 KeyValueScanner RegionObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)
          Called before a store opens a new scanner.
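
For illustration, a minimal RegionObserver that narrows every client scan via the preScannerOpen hook above; this is a hedged sketch of how the hook is typically used, not a recommended policy:

    public class MaxVersionsObserver extends BaseRegionObserver {
      @Override
      public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
          Scan scan, RegionScanner s) throws IOException {
        scan.setMaxVersions(1);   // cap the number of versions any client scan may request
        return s;                 // no custom scanner; default scanner creation proceeds
      }
    }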
 

Uses of Scan in org.apache.hadoop.hbase.coprocessor.example
 

Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type Scan
 KeyValueScanner ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)
           
 

Uses of Scan in org.apache.hadoop.hbase.io
 

Methods in org.apache.hadoop.hbase.io with parameters of type Scan
 boolean HalfStoreFileReader.passesKeyRangeFilter(Scan scan)
           
 

Uses of Scan in org.apache.hadoop.hbase.mapred
 

Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Scan
static void TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans, Class<? extends TableMap> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapred.JobConf job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
          Deprecated. Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
static void MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)
          Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir.
 

Constructors in org.apache.hadoop.hbase.mapred with parameters of type Scan
TableSnapshotInputFormat.TableSnapshotRegionSplit(HTableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
           
 

Uses of Scan in org.apache.hadoop.hbase.mapreduce
 

Methods in org.apache.hadoop.hbase.mapreduce that return Scan
static Scan TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf)
           
 Scan TableInputFormatBase.getScan()
          Gets the scan that defines the details of the read, such as which columns to fetch.
 Scan TableSplit.getScan()
          Returns a Scan object from the stored string representation.
 

Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Scan
protected  List<Scan> MultiTableInputFormatBase.getScans()
          Allows subclasses to get the list of Scan objects.
 Map<String,Collection<Scan>> MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf)
          Retrieve the snapshot name -> list-of-Scans mapping pushed to the configuration by MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration, java.util.Map).
 

Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Scan
static void TableInputFormat.addColumns(Scan scan, byte[][] columns)
          Adds an array of columns specified in the old format, family:qualifier.
static List<TableSnapshotInputFormatImpl.InputSplit> TableSnapshotInputFormatImpl.getSplits(Scan scan, SnapshotManifest manifest, List<HRegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf)
           
static void IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, Scan scan, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
          Sets up the job for reading from a table snapshot.
 void TableRecordReader.setScan(Scan scan)
          Sets the scan that defines the details of the read, such as which columns to fetch.
 void TableInputFormatBase.setScan(Scan scan)
          Sets the scan that defines the details of the read, such as which columns to fetch.
 void TableRecordReaderImpl.setScan(Scan scan)
          Sets the scan that defines the details of the read, such as which columns to fetch.
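
A hedged sketch of wiring a Scan into a MapReduce job with TableMapReduceUtil.initTableMapperJob; the table name and the MyMapper class are hypothetical, and exception handling is omitted:

    Job job = Job.getInstance(conf, "scan-mytable");
    job.setJarByClass(MyMapper.class);

    Scan scan = new Scan();
    scan.setCaching(500);           // larger caching is typical for MapReduce scans
    scan.setCacheBlocks(false);     // avoid polluting the block cache with a full scan

    TableMapReduceUtil.initTableMapperJob(
        "mytable",                      // input table
        scan,                           // scan controlling columns, ranges, versions, ...
        MyMapper.class,                 // mapper
        ImmutableBytesWritable.class,   // mapper output key
        Result.class,                   // mapper output value
        job);
    job.waitForCompletion(true);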
 

Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Scan
static void TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans, Class<? extends TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
          Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
static void TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a Multi TableMap job.
static void TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
          Use this before submitting a Multi TableMap job.
static void TableMapReduceUtil.initTableMapperJob(List<Scan> scans, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials)
          Use this before submitting a Multi TableMap job.
static void MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path tmpRestoreDir)
           
 void MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)
          Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir.
protected  void MultiTableInputFormatBase.setScans(List<Scan> scans)
          Allows subclasses to set the list of Scan objects.
 void MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<Scan>> snapshotScans)
          Push snapshotScans to conf (under the key MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY)
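
When the List<Scan> variants above are used with MultiTableInputFormat, each Scan usually carries its source table in the Scan.SCAN_ATTRIBUTES_TABLE_NAME attribute; a hedged sketch (the table names and MyMultiTableMapper class are hypothetical):

    List<Scan> scans = new ArrayList<Scan>();
    for (String tableName : new String[] { "table1", "table2" }) {
      Scan scan = new Scan();
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(tableName));
      scan.addFamily(Bytes.toBytes("cf"));
      scans.add(scan);
    }
    TableMapReduceUtil.initTableMapperJob(
        scans, MyMultiTableMapper.class,
        ImmutableBytesWritable.class, Result.class, job);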
 

Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type Scan
TableSnapshotInputFormat.TableSnapshotRegionSplit(HTableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
           
TableSnapshotInputFormatImpl.InputSplit(HTableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, org.apache.hadoop.fs.Path restoreDir)
           
TableSplit(byte[] tableName, Scan scan, byte[] startRow, byte[] endRow, String location)
          Deprecated. Since 0.96.0; use TableSplit.TableSplit(TableName, byte[], byte[], String)
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location)
          Creates a new instance while assigning all variables.
TableSplit(TableName tableName, Scan scan, byte[] startRow, byte[] endRow, String location, long length)
          Creates a new instance while assigning all variables.
 

Uses of Scan in org.apache.hadoop.hbase.protobuf
 

Methods in org.apache.hadoop.hbase.protobuf that return Scan
static Scan ProtobufUtil.toScan(ClientProtos.Scan proto)
          Convert a protocol buffer Scan to a client Scan
 

Methods in org.apache.hadoop.hbase.protobuf with parameters of type Scan
static ClientProtos.ScanRequest RequestConverter.buildScanRequest(byte[] regionName, Scan scan, int numberOfRows, boolean closeScanner)
          Create a protocol buffer ScanRequest for a client Scan
static ClientProtos.Scan ProtobufUtil.toScan(Scan scan)
          Convert a client Scan to a protocol buffer Scan
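
These converters are normally called by the RPC layer itself; a small hedged sketch of a round trip (both conversions can throw IOException):

    Scan scan = new Scan(Bytes.toBytes("row-a"), Bytes.toBytes("row-z"));
    ClientProtos.Scan proto = ProtobufUtil.toScan(scan);    // client Scan -> protobuf Scan
    Scan roundTripped = ProtobufUtil.toScan(proto);         // protobuf Scan -> client Scan
    assert Bytes.equals(scan.getStartRow(), roundTripped.getStartRow());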
 

Uses of Scan in org.apache.hadoop.hbase.regionserver
 

Subclasses of Scan in org.apache.hadoop.hbase.regionserver
 class InternalScan
          Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations.
 

Fields in org.apache.hadoop.hbase.regionserver declared as Scan
protected  Scan StoreScanner.scan
           
 

Methods in org.apache.hadoop.hbase.regionserver with parameters of type Scan
 RegionScanner HRegion.getScanner(Scan scan)
          Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
protected  RegionScanner HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)
           
 KeyValueScanner HStore.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)
           
 KeyValueScanner Store.getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)
          Return a scanner for both the memstore and the HStore files.
protected  RegionScanner HRegion.instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners)
           
 boolean StoreFile.Reader.passesKeyRangeFilter(Scan scan)
          Checks whether the given scan's rowkey range overlaps with the current storefile's key range.
 RegionScanner RegionCoprocessorHost.postScannerOpen(Scan scan, RegionScanner s)
           
 RegionScanner RegionCoprocessorHost.preScannerOpen(Scan scan)
           
 KeyValueScanner RegionCoprocessorHost.preStoreScannerOpen(Store store, Scan scan, NavigableSet<byte[]> targetCols)
          See RegionObserver.preStoreScannerOpen(ObserverContext, Store, Scan, NavigableSet, KeyValueScanner)
 boolean MemStore.shouldSeek(Scan scan, long oldestUnexpiredTS)
          Check if this memstore may contain the required keys
 boolean KeyValueScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)
          Allows to filter out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges.
 boolean NonLazyKeyValueScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)
           
 boolean StoreFileScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)
           
 boolean MemStore.MemStoreScanner.shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS)
           
 

Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Scan
InternalScan(Scan scan)
           
ScanQueryMatcher(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, long readPointToUse, long earliestPutTs, long oldestUnexpiredTS, long now, byte[] dropDeletesFromRow, byte[] dropDeletesToRow, RegionCoprocessorHost regionCoprocessorHost)
          Construct a QueryMatcher for a scan that drops deletes from a limited range of rows.
ScanQueryMatcher(Scan scan, ScanInfo scanInfo, NavigableSet<byte[]> columns, ScanType scanType, long readPointToUse, long earliestPutTs, long oldestUnexpiredTS, long now, RegionCoprocessorHost regionCoprocessorHost)
          Construct a QueryMatcher for a scan
StoreScanner(Store store, boolean cacheBlocks, Scan scan, NavigableSet<byte[]> columns, long ttl, int minVersions, long readPt)
          An internal constructor.
StoreScanner(Store store, ScanInfo scanInfo, Scan scan, List<? extends KeyValueScanner> scanners, long smallestReadPoint, long earliestPutTs, byte[] dropDeletesFromRow, byte[] dropDeletesToRow)
          Used for compactions that drop deletes from a limited range of rows.
StoreScanner(Store store, ScanInfo scanInfo, Scan scan, List<? extends KeyValueScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs)
          Used for compactions.
StoreScanner(Store store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt)
          Opens a scanner across memstore, snapshot, and all StoreFiles.
 

Uses of Scan in org.apache.hadoop.hbase.rest.client
 

Methods in org.apache.hadoop.hbase.rest.client with parameters of type Scan
 ResultScanner RemoteHTable.getScanner(Scan scan)
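
A hedged sketch of scanning through the REST gateway with RemoteHTable; the gateway host, port, and table name are assumptions, and exception handling is omitted:

    Cluster cluster = new Cluster();
    cluster.add("rest-gateway.example.com", 8080);           // REST gateway endpoint (assumed)
    Client restClient = new Client(cluster);
    RemoteHTable table = new RemoteHTable(restClient, "mytable");
    try (ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result r : scanner) {
        // process r ...
      }
    } finally {
      table.close();
    }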
           
 

Uses of Scan in org.apache.hadoop.hbase.rest.model
 

Methods in org.apache.hadoop.hbase.rest.model with parameters of type Scan
static ScannerModel ScannerModel.fromScan(Scan scan)
           
 

Uses of Scan in org.apache.hadoop.hbase.security.access
 

Methods in org.apache.hadoop.hbase.security.access with parameters of type Scan
 RegionScanner AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
           
 RegionScanner AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
           
 

Uses of Scan in org.apache.hadoop.hbase.security.visibility
 

Methods in org.apache.hadoop.hbase.security.visibility with parameters of type Scan
 RegionScanner VisibilityController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
           
 RegionScanner VisibilityController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)
           
 

Uses of Scan in org.apache.hadoop.hbase.thrift2
 

Methods in org.apache.hadoop.hbase.thrift2 that return Scan
static Scan ThriftUtilities.scanFromThrift(TScan in)
           
 



Copyright © 2007–2016 The Apache Software Foundation. All rights reserved.