Packages that use Scan | |
---|---|
org.apache.hadoop.hbase.avro | Provides an HBase Avro service. |
org.apache.hadoop.hbase.catalog | |
org.apache.hadoop.hbase.client | Provides the HBase client. |
org.apache.hadoop.hbase.client.coprocessor | Provides client classes for invoking Coprocessor RPC protocols |
org.apache.hadoop.hbase.coprocessor | Provides the HBase coprocessor framework (observers and endpoint protocols). |
org.apache.hadoop.hbase.coprocessor.example | |
org.apache.hadoop.hbase.ipc | Tools to help define network clients and servers. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.rest.client | |
org.apache.hadoop.hbase.rest.model | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service. |
Uses of Scan in org.apache.hadoop.hbase.avro |
---|
Methods in org.apache.hadoop.hbase.avro that return Scan | |
---|---|
static Scan |
AvroUtil.ascanToScan(AScan ascan)
|
Uses of Scan in org.apache.hadoop.hbase.catalog |
---|
Methods in org.apache.hadoop.hbase.catalog that return Scan | |
---|---|
static Scan |
MetaReader.getScanForTableName(byte[] tableName)
This method creates a Scan object that will only scan catalog rows that belong to the specified table. |
Uses of Scan in org.apache.hadoop.hbase.client |
---|
Methods in org.apache.hadoop.hbase.client that return Scan | |
---|---|
Scan |
Scan.addColumn(byte[] family,
byte[] qualifier)
Get the column from the specified family with the specified qualifier. |
Scan |
Scan.addFamily(byte[] family)
Get all columns from the specified family. |
protected Scan |
ClientScanner.getScan()
|
protected Scan |
ScannerCallable.getScan()
|
Scan |
Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
Set the familyMap. |
Scan |
Scan.setFilter(Filter filter)
Apply the specified server-side filter when performing the Scan. |
Scan |
Scan.setMaxVersions()
Get all available versions. |
Scan |
Scan.setMaxVersions(int maxVersions)
Get up to the specified number of versions of each column. |
Scan |
Scan.setStartRow(byte[] startRow)
Set the start row of the scan. |
Scan |
Scan.setStopRow(byte[] stopRow)
Set the stop row. |
Scan |
Scan.setTimeRange(long minStamp,
long maxStamp)
Get versions of columns only within the specified timestamp range, [minStamp, maxStamp). |
Scan |
Scan.setTimeStamp(long timestamp)
Get versions of columns with the specified timestamp. |
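The Scan setters above each return the Scan itself, so a scan is normally configured once before it is handed to a table or a MapReduce job. A minimal sketch, assuming hypothetical family, qualifier, and row-key names:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSetup {
  public static Scan buildScan() throws IOException {
    Scan scan = new Scan();
    // Restrict the scan to one column of one family (names are examples).
    scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"));
    // Bound the row range: the start row is inclusive, the stop row exclusive.
    scan.setStartRow(Bytes.toBytes("row-0100"));
    scan.setStopRow(Bytes.toBytes("row-0200"));
    // Keep up to three versions of each column...
    scan.setMaxVersions(3);
    // ...but only versions whose timestamps fall in [minStamp, maxStamp).
    scan.setTimeRange(0L, System.currentTimeMillis());
    return scan;
  }
}
```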
Methods in org.apache.hadoop.hbase.client with parameters of type Scan | |
---|---|
ResultScanner |
HTable.getScanner(Scan scan)
Returns a scanner on the current table as specified by the Scan object. |
ResultScanner |
HTableInterface.getScanner(Scan scan)
Returns a scanner on the current table as specified by the Scan object. |
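HTable.getScanner(Scan) is the usual client entry point for range reads; the returned ResultScanner is iterable and must be closed. A sketch, assuming a table named "mytable" with an "info" family:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // table name is an example
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("info"));        // read one whole column family
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {             // ResultScanner is Iterable<Result>
        System.out.println(Bytes.toString(result.getRow()));
      }
    } finally {
      scanner.close();                            // always release the server-side scanner
      table.close();
    }
  }
}
```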
Constructors in org.apache.hadoop.hbase.client with parameters of type Scan | |
---|---|
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
byte[] tableName)
Create a new ClientScanner for the specified table. |
|
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
byte[] tableName,
HConnection connection)
Create a new ClientScanner for the specified table. Note that the passed Scan's start row may be changed. |
|
Scan(Scan scan)
Creates a new instance of this class while copying all values. |
|
ScannerCallable(HConnection connection,
byte[] tableName,
Scan scan,
ScanMetrics scanMetrics)
|
Uses of Scan in org.apache.hadoop.hbase.client.coprocessor |
---|
Methods in org.apache.hadoop.hbase.client.coprocessor with parameters of type Scan | ||
---|---|---|
|
AggregationClient.avg(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
This is the client side interface/handle for calling the average method for a given cf-cq combination. |
|
|
AggregationClient.max(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the given range. |
|
|
AggregationClient.median(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a given cf-cq combination. |
|
|
AggregationClient.min(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the given range. |
|
|
AggregationClient.rowCount(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from regions. |
|
|
AggregationClient.std(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a given cf-cq combination. |
|
|
AggregationClient.sum(byte[] tableName,
ColumnInterpreter<R,S> ci,
Scan scan)
It sums up the value returned from various regions. |
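Each AggregationClient call above takes the table name, a ColumnInterpreter that decodes cell values, and a Scan that names the column family (and optionally qualifier) and row range to aggregate over. A sketch of a row count using the bundled LongColumnInterpreter; the table, family, and qualifier names are assumptions, and the AggregateImplementation coprocessor must be loaded on the table:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCountExample {
  public static void main(String[] args) throws Throwable {   // the aggregation calls throw Throwable
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggregationClient = new AggregationClient(conf);

    // The Scan tells the endpoint which family/qualifier and row range to aggregate over.
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("stats"), Bytes.toBytes("value"));   // example names

    long rows = aggregationClient.rowCount(
        Bytes.toBytes("mytable"), new LongColumnInterpreter(), scan); // example table name
    System.out.println("row count: " + rows);
  }
}
```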
Uses of Scan in org.apache.hadoop.hbase.coprocessor |
---|
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Scan | ||
---|---|---|
|
AggregateImplementation.getAvg(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getAvg(ColumnInterpreter<T,S> ci,
Scan scan)
Gives a Pair with first object as Sum and second object as row count, computed for a given combination of column qualifier and column family in the given row range as defined in the Scan object. |
|
|
AggregateImplementation.getMax(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getMax(ColumnInterpreter<T,S> ci,
Scan scan)
Gives the maximum for a given combination of column qualifier and column family, in the given row range as defined in the Scan object. |
|
|
AggregateImplementation.getMedian(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getMedian(ColumnInterpreter<T,S> ci,
Scan scan)
Gives a List containing sum of values and sum of weights. |
|
|
AggregateImplementation.getMin(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getMin(ColumnInterpreter<T,S> ci,
Scan scan)
Gives the minimum for a given combination of column qualifier and column family, in the given row range as defined in the Scan object. |
|
|
AggregateImplementation.getRowNum(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getRowNum(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateImplementation.getStd(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getStd(ColumnInterpreter<T,S> ci,
Scan scan)
Gives a Pair with first object a List containing Sum and sum of squares, and the second object as row count. |
|
|
AggregateImplementation.getSum(ColumnInterpreter<T,S> ci,
Scan scan)
|
|
|
AggregateProtocol.getSum(ColumnInterpreter<T,S> ci,
Scan scan)
Gives the sum for a given combination of column qualifier and column family, in the given row range as defined in the Scan object. |
|
RegionScanner |
BaseRegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
Scan scan,
RegionScanner s)
|
|
RegionScanner |
RegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
Called after the client opens a new scanner. |
|
RegionScanner |
BaseRegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
Scan scan,
RegionScanner s)
|
|
RegionScanner |
RegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
Called before the client opens a new scanner. |
|
KeyValueScanner |
BaseRegionObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Store store,
Scan scan,
NavigableSet<byte[]> targetCols,
KeyValueScanner s)
|
|
KeyValueScanner |
RegionObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Store store,
Scan scan,
NavigableSet<byte[]> targetCols,
KeyValueScanner s)
Called before a store opens a new scanner. |
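A RegionObserver can inspect or adjust a client Scan in preScannerOpen before the region builds its scanner. A minimal sketch of an observer that caps every scan at a single version; the class name is hypothetical:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public class SingleVersionScanObserver extends BaseRegionObserver {
  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
      Scan scan, RegionScanner s) throws IOException {
    // Mutate the Scan before the region opens its scanner: cap it at one version.
    scan.setMaxVersions(1);
    // Returning the passed-in scanner (still null here) lets normal scanner creation proceed.
    return s;
  }
}
```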
Uses of Scan in org.apache.hadoop.hbase.coprocessor.example |
---|
Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type Scan | |
---|---|
BulkDeleteResponse |
BulkDeleteEndpoint.delete(Scan scan,
byte deleteType,
Long timestamp,
int rowBatchSize)
|
BulkDeleteResponse |
BulkDeleteProtocol.delete(Scan scan,
byte deleteType,
Long timestamp,
int rowBatchSize)
|
KeyValueScanner |
ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Store store,
Scan scan,
NavigableSet<byte[]> targetCols,
KeyValueScanner s)
|
Uses of Scan in org.apache.hadoop.hbase.ipc |
---|
Methods in org.apache.hadoop.hbase.ipc with parameters of type Scan | |
---|---|
long |
HRegionInterface.openScanner(byte[] regionName,
Scan scan)
Opens a remote scanner on the given region for the given Scan. |
Uses of Scan in org.apache.hadoop.hbase.mapreduce |
---|
Methods in org.apache.hadoop.hbase.mapreduce that return Scan | |
---|---|
Scan |
TableSplit.getScan()
Returns a Scan object from the stored string representation. |
Scan |
TableInputFormatBase.getScan()
Gets the scan that defines the details of the read, such as which columns to include. |
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Scan | |
---|---|
protected List<Scan> |
MultiTableInputFormatBase.getScans()
Allows subclasses to get the list of Scan objects. |
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Scan | |
---|---|
static void |
TableInputFormat.addColumns(Scan scan,
byte[][] columns)
Adds an array of columns specified using the old format, family:qualifier. |
static void |
IdentityTableMapper.initJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
GroupingTableMapper.initJob(String table,
Scan scan,
String groupColumns,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job. |
void |
TableInputFormatBase.setScan(Scan scan)
Sets the scan that defines the details of the read, such as which columns to include. |
void |
TableRecordReaderImpl.setScan(Scan scan)
Sets the scan that defines the details of the read, such as which columns to include. |
void |
TableRecordReader.setScan(Scan scan)
Sets the scan that defines the details of the read, such as which columns to include. |
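For MapReduce, the Scan carries the read definition into the job configuration through TableMapReduceUtil.initTableMapperJob. A sketch of a map-only job over a single table; the table name, family, and mapper are examples:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class ScanMapReduceJob {

  // Emits each row key as text; the Result value is whatever the Scan returned for that row.
  public static class RowKeyMapper extends TableMapper<Text, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(Bytes.toString(key.get())), NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "scan-mytable");

    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("info"));  // example family
    scan.setCaching(500);                   // fetch rows in larger batches for MapReduce
    scan.setCacheBlocks(false);             // don't pollute the region server block cache

    TableMapReduceUtil.initTableMapperJob(
        "mytable", scan, RowKeyMapper.class, Text.class, NullWritable.class, job);
    job.setNumReduceTasks(0);               // map-only
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```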
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Scan | |
---|---|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a Multi TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a Multi TableMap job. |
protected void |
MultiTableInputFormatBase.setScans(List<Scan> scans)
Allows subclasses to set the list of Scan objects. |
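The List<Scan> overloads drive a single job across several tables: each Scan names its target table via a scan attribute, and the multi-table input format fans out splits per table. A sketch, assuming Scan.SCAN_ATTRIBUTES_TABLE_NAME is available in this release and reusing the hypothetical RowKeyMapper from the previous sketch:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MultiTableScanJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "scan-two-tables");

    List<Scan> scans = new ArrayList<Scan>();
    for (String tableName : new String[] { "table-a", "table-b" }) {  // example names
      Scan scan = new Scan();
      // Each Scan records the table it should run against.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(tableName));
      scans.add(scan);
    }

    TableMapReduceUtil.initTableMapperJob(
        scans, ScanMapReduceJob.RowKeyMapper.class, Text.class, NullWritable.class, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```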
Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type Scan | |
---|---|
TableSplit(byte[] tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Creates a new instance while assigning all variables. |
Uses of Scan in org.apache.hadoop.hbase.regionserver |
---|
Methods in org.apache.hadoop.hbase.regionserver with parameters of type Scan | |
---|---|
RegionScanner |
HRegion.getScanner(Scan scan)
Returns an iterator that scans over the HRegion, returning the columns and rows specified by the Scan. |
protected RegionScanner |
HRegion.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
|
KeyValueScanner |
Store.getScanner(Scan scan,
NavigableSet<byte[]> targetCols)
Return a scanner for both the memstore and the HStore files. |
protected RegionScanner |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
|
long |
HRegionServer.openScanner(byte[] regionName,
Scan scan)
|
RegionScanner |
RegionCoprocessorHost.postScannerOpen(Scan scan,
RegionScanner s)
|
RegionScanner |
RegionCoprocessorHost.preScannerOpen(Scan scan)
|
KeyValueScanner |
RegionCoprocessorHost.preStoreScannerOpen(Store store,
Scan scan,
NavigableSet<byte[]> targetCols)
See RegionObserver.preStoreScannerOpen(ObserverContext, Store, Scan, NavigableSet, KeyValueScanner) |
boolean |
MemStore.shouldSeek(Scan scan,
long oldestUnexpiredTS)
Check if this memstore may contain the required keys. |
boolean |
MemStore.MemStoreScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
|
boolean |
KeyValueScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
Allows to filter out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges. |
boolean |
StoreFileScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
|
boolean |
NonLazyKeyValueScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
|
Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Scan | |
---|---|
ScanQueryMatcher(Scan scan,
Store.ScanInfo scanInfo,
NavigableSet<byte[]> columns,
ScanType scanType,
long readPointToUse,
long earliestPutTs,
long oldestUnexpiredTS)
Construct a QueryMatcher for a scan. |
|
StoreScanner(Store store,
Store.ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
ScanType scanType,
long smallestReadPoint,
long earliestPutTs)
Used for major compactions. |
|
StoreScanner(Store store,
Store.ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns)
Opens a scanner across memstore, snapshot, and all StoreFiles. |
Uses of Scan in org.apache.hadoop.hbase.rest.client |
---|
Methods in org.apache.hadoop.hbase.rest.client with parameters of type Scan | |
---|---|
ResultScanner |
RemoteHTable.getScanner(Scan scan)
|
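RemoteHTable exposes the same getScanner(Scan) contract over the HBase REST gateway, so an existing Scan can be reused against a REST endpoint. A sketch, assuming a gateway on localhost:8080 and an example table name:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RestScanExample {
  public static void main(String[] args) throws Exception {
    // Point the REST client at a running gateway (host and port are assumptions).
    Cluster cluster = new Cluster();
    cluster.add("localhost", 8080);
    Client client = new Client(cluster);

    RemoteHTable table = new RemoteHTable(client, "mytable");  // example table name
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("info"));

    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        System.out.println(Bytes.toString(result.getRow()));
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}
```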
Uses of Scan in org.apache.hadoop.hbase.rest.model |
---|
Methods in org.apache.hadoop.hbase.rest.model with parameters of type Scan | |
---|---|
static ScannerModel |
ScannerModel.fromScan(Scan scan)
|
Uses of Scan in org.apache.hadoop.hbase.security.access |
---|
Methods in org.apache.hadoop.hbase.security.access with parameters of type Scan | |
---|---|
RegionScanner |
AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
|
RegionScanner |
AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
|
Uses of Scan in org.apache.hadoop.hbase.thrift2 |
---|
Methods in org.apache.hadoop.hbase.thrift2 that return Scan | |
---|---|
static Scan |
ThriftUtilities.scanFromThrift(TScan in)
|