Packages that use Scan | |
---|---|
org.apache.hadoop.hbase.catalog | |
org.apache.hadoop.hbase.client | Provides HBase Client |
org.apache.hadoop.hbase.client.coprocessor | Provides client classes for invoking Coprocessor RPC protocols |
org.apache.hadoop.hbase.coprocessor | |
org.apache.hadoop.hbase.coprocessor.example | |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.protobuf | Holds classes generated from protobuf src/main/protobuf definition files. |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.rest.client | |
org.apache.hadoop.hbase.rest.model | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service. |
Uses of Scan in org.apache.hadoop.hbase.catalog |
---|
Methods in org.apache.hadoop.hbase.catalog that return Scan | |
---|---|
static Scan |
MetaReader.getScanForTableName(byte[] tableName)
This method creates a Scan object that will only scan catalog rows that belong to the specified table. |
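As an illustrative sketch only: the returned Scan can be handed to a scanner opened on the `.META.` catalog table. The table name `mytable` is hypothetical; `HConstants.META_TABLE_NAME` holds the catalog table's name.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.catalog.MetaReader;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Build a Scan restricted to catalog rows of the (hypothetical) table "mytable".
    Scan metaScan = MetaReader.getScanForTableName(Bytes.toBytes("mytable"));
    HTable meta = new HTable(conf, HConstants.META_TABLE_NAME);
    try {
      ResultScanner scanner = meta.getScanner(metaScan);
      try {
        for (Result r : scanner) {
          // Each Result is one region row of "mytable" in .META.
        }
      } finally {
        scanner.close();
      }
    } finally {
      meta.close();
    }
  }
}
```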
Uses of Scan in org.apache.hadoop.hbase.client |
---|
Methods in org.apache.hadoop.hbase.client that return Scan | |
---|---|
Scan |
Scan.addColumn(byte[] family,
byte[] qualifier)
Get the column from the specified family with the specified qualifier. |
Scan |
Scan.addFamily(byte[] family)
Get all columns from the specified family. |
protected Scan |
ScannerCallable.getScan()
|
protected Scan |
ClientScanner.getScan()
|
Scan |
Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
Sets the familyMap. |
Scan |
Scan.setFilter(Filter filter)
Apply the specified server-side filter when performing the Scan. |
Scan |
Scan.setMaxVersions()
Get all available versions. |
Scan |
Scan.setMaxVersions(int maxVersions)
Get up to the specified number of versions of each column. |
Scan |
Scan.setStartRow(byte[] startRow)
Set the start row of the scan. |
Scan |
Scan.setStopRow(byte[] stopRow)
Set the stop row. |
Scan |
Scan.setTimeRange(long minStamp,
long maxStamp)
Get versions of columns only within the specified timestamp range, [minStamp, maxStamp). |
Scan |
Scan.setTimeStamp(long timestamp)
Get versions of columns with the specified timestamp. |
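Since each of the setters above returns the Scan itself, a scan is typically configured as a chain of calls. A minimal sketch, where the family `cf`, qualifier `q`, row-key bounds, and the choice of KeyOnlyFilter are all illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanBuilderExample {
  public static Scan buildScan() throws IOException { // setTimeRange may throw IOException
    // All setters return the Scan itself, so calls can be chained.
    return new Scan()
        .setStartRow(Bytes.toBytes("row-0000"))       // inclusive
        .setStopRow(Bytes.toBytes("row-9999"))        // exclusive
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))
        .setMaxVersions(3)                            // up to 3 versions per column
        .setTimeRange(0L, Long.MAX_VALUE)             // [minStamp, maxStamp)
        .setFilter(new KeyOnlyFilter());              // server-side filter
  }
}
```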
Methods in org.apache.hadoop.hbase.client with parameters of type Scan | |
---|---|
ResultScanner |
HTableInterface.getScanner(Scan scan)
Returns a scanner on the current table as specified by the Scan object. |
ResultScanner |
HTable.getScanner(Scan scan)
Returns a scanner on the current table as specified by the Scan object. |
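A minimal usage sketch for getScanner (table name `mytable` and family `cf` are hypothetical); both the ResultScanner and the table should be closed when done:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class GetScannerExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    try {
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("cf"));
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result result : scanner) {
          // Process one row per Result.
        }
      } finally {
        scanner.close(); // release server-side scanner resources
      }
    } finally {
      table.close();
    }
  }
}
```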
Constructors in org.apache.hadoop.hbase.client with parameters of type Scan | |
---|---|
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
byte[] tableName)
Create a new ClientScanner for the specified table. |
|
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
byte[] tableName,
HConnection connection)
Create a new ClientScanner for the specified table. Note that the passed Scan's start row may be changed. |
|
Scan(Scan scan)
Creates a new instance of this class while copying all values. |
|
ScannerCallable(HConnection connection,
byte[] tableName,
Scan scan,
ScanMetrics scanMetrics)
|
Uses of Scan in org.apache.hadoop.hbase.client.coprocessor |
---|
Methods in org.apache.hadoop.hbase.client.coprocessor with parameters of type Scan | ||
---|---|---|
|
AggregationClient.avg(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for a given cf-cq combination. |
|
|
AggregationClient.max(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the given range. |
|
|
AggregationClient.median(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the median method for a given cf-cq combination. |
|
|
AggregationClient.min(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the given range. |
|
|
AggregationClient.rowCount(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from regions. |
|
|
AggregationClient.std(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a given cf-cq combination. |
|
|
AggregationClient.sum(byte[] tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the values returned from the various regions. |
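A minimal sketch of the aggregation calls, assuming the AggregateImplementation endpoint coprocessor is deployed on the (hypothetical) table `mytable` and that the `cf:value` column holds 8-byte longs, which is what LongColumnInterpreter expects:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationExample {
  // The aggregation methods declare `throws Throwable`.
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggregationClient = new AggregationClient(conf);

    // Restrict the scan to a single cf-cq pair, as the aggregation calls expect.
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"));

    byte[] tableName = Bytes.toBytes("mytable");
    long rows = aggregationClient.rowCount(tableName, new LongColumnInterpreter(), scan);
    Long sum = aggregationClient.sum(tableName, new LongColumnInterpreter(), scan);
    System.out.println("rows=" + rows + ", sum=" + sum);
  }
}
```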
Uses of Scan in org.apache.hadoop.hbase.coprocessor |
---|
Uses of Scan in org.apache.hadoop.hbase.coprocessor.example |
---|
Methods in org.apache.hadoop.hbase.coprocessor.example with parameters of type Scan | |
---|---|
KeyValueScanner |
ZooKeeperScanPolicyObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Store store,
Scan scan,
NavigableSet<byte[]> targetCols,
KeyValueScanner s)
|
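For orientation, a minimal pass-through implementation of the same hook (the class name is hypothetical); ZooKeeperScanPolicyObserver uses this hook to substitute a StoreScanner whose scan policy is derived from data kept in ZooKeeper:

```java
import java.io.IOException;
import java.util.NavigableSet;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
import org.apache.hadoop.hbase.regionserver.Store;

public class PassThroughScanPolicyObserver extends BaseRegionObserver {
  @Override
  public KeyValueScanner preStoreScannerOpen(
      ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan,
      NavigableSet<byte[]> targetCols, KeyValueScanner s) throws IOException {
    // Pass-through: returning the incoming scanner leaves the default
    // StoreScanner construction in place. A real scan-policy observer would
    // return its own StoreScanner here, e.g. one with a modified TTL or
    // maximum version count.
    return s;
  }
}
```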
Uses of Scan in org.apache.hadoop.hbase.mapreduce |
---|
Methods in org.apache.hadoop.hbase.mapreduce that return Scan | |
---|---|
Scan |
TableSplit.getScan()
Returns a Scan object from the stored string representation. |
Scan |
TableInputFormatBase.getScan()
Gets the scan defining the actual details, such as the columns to retrieve. |
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Scan | |
---|---|
protected List<Scan> |
MultiTableInputFormatBase.getScans()
Allows subclasses to get the list of Scan objects. |
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Scan | |
---|---|
static void |
TableInputFormat.addColumns(Scan scan,
byte[][] columns)
Adds an array of columns specified using old format, family:qualifier. |
static void |
IdentityTableMapper.initJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
GroupingTableMapper.initJob(String table,
Scan scan,
String groupColumns,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job. |
void |
TableRecordReaderImpl.setScan(Scan scan)
Sets the scan defining the actual details, such as the columns to retrieve. |
void |
TableRecordReader.setScan(Scan scan)
Sets the scan defining the actual details, such as the columns to retrieve. |
void |
TableInputFormatBase.setScan(Scan scan)
Sets the scan defining the actual details, such as the columns to retrieve. |
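A minimal sketch of wiring a Scan into a single-table, map-only job via initTableMapperJob; the table name, caching value, and the choices of IdentityTableMapper and NullOutputFormat are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanJobExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "scan-mytable");
    job.setJarByClass(ScanJobExample.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // fewer RPC round trips per mapper
    scan.setCacheBlocks(false);  // recommended off for full-table MR scans

    // IdentityTableMapper emits each row unchanged as (row key, Result).
    TableMapReduceUtil.initTableMapperJob("mytable", scan,
        IdentityTableMapper.class, ImmutableBytesWritable.class, Result.class, job);

    job.setNumReduceTasks(0);                         // map-only sketch
    job.setOutputFormatClass(NullOutputFormat.class); // discard output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```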
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type Scan | |
---|---|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a Multi TableMap job. |
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass,
Class<? extends org.apache.hadoop.io.Writable> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a Multi TableMap job. |
protected void |
MultiTableInputFormatBase.setScans(List<Scan> scans)
Allows subclasses to set the list of Scan objects. |
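A minimal sketch of the multi-table variant; each Scan carries its source table name in the Scan.SCAN_ATTRIBUTES_TABLE_NAME attribute, and the table names and mapper here are hypothetical:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class MultiTableScanExample {
  /** Emits (row key, 1) for every row of every scanned table. */
  public static class RowKeyMapper extends TableMapper<Text, LongWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(value.getRow()), new LongWritable(1L));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "multi-table-scan");
    job.setJarByClass(MultiTableScanExample.class);

    List<Scan> scans = new ArrayList<Scan>();
    for (String name : new String[] { "table1", "table2" }) { // hypothetical tables
      Scan scan = new Scan();
      // Each Scan names its source table through this attribute.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(name));
      scans.add(scan);
    }
    TableMapReduceUtil.initTableMapperJob(scans, RowKeyMapper.class,
        Text.class, LongWritable.class, job);
    job.setNumReduceTasks(0);                         // map-only sketch
    job.setOutputFormatClass(NullOutputFormat.class); // discard output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```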
Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type Scan | |
---|---|
TableSplit(byte[] tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Creates a new instance while assigning all variables. |
Uses of Scan in org.apache.hadoop.hbase.protobuf |
---|
Methods in org.apache.hadoop.hbase.protobuf that return Scan | |
---|---|
static Scan |
ProtobufUtil.toScan(ClientProtos.Scan proto)
Convert a protocol buffer Scan to a client Scan. |
Methods in org.apache.hadoop.hbase.protobuf with parameters of type Scan | |
---|---|
static ClientProtos.ScanRequest |
RequestConverter.buildScanRequest(byte[] regionName,
Scan scan,
int numberOfRows,
boolean closeScanner)
Create a protocol buffer ScanRequest for a client Scan. |
static ClientProtos.Scan |
ProtobufUtil.toScan(Scan scan)
Convert a client Scan to a protocol buffer Scan. |
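A minimal round-trip sketch: a client Scan converted to its protobuf form (as sent over the wire) and back. The row-key bounds are illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanProtobufExample {
  public static void main(String[] args) throws IOException {
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("row-a"));
    scan.setStopRow(Bytes.toBytes("row-z"));

    // Client Scan -> protobuf Scan, as sent over the wire ...
    ClientProtos.Scan proto = ProtobufUtil.toScan(scan);
    // ... and back to a client Scan on the receiving side.
    Scan roundTripped = ProtobufUtil.toScan(proto);
    System.out.println(Bytes.toString(roundTripped.getStartRow()));
  }
}
```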
Uses of Scan in org.apache.hadoop.hbase.regionserver |
---|
Fields in org.apache.hadoop.hbase.regionserver declared as Scan | |
---|---|
protected Scan |
StoreScanner.scan
|
Methods in org.apache.hadoop.hbase.regionserver with parameters of type Scan | |
---|---|
RegionScanner |
HRegion.getScanner(Scan scan)
Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan. |
protected RegionScanner |
HRegion.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
|
KeyValueScanner |
Store.getScanner(Scan scan,
NavigableSet<byte[]> targetCols)
Return a scanner for both the memstore and the HStore files. |
KeyValueScanner |
HStore.getScanner(Scan scan,
NavigableSet<byte[]> targetCols)
|
protected RegionScanner |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
|
RegionScanner |
RegionCoprocessorHost.postScannerOpen(Scan scan,
RegionScanner s)
|
RegionScanner |
RegionCoprocessorHost.preScannerOpen(Scan scan)
|
KeyValueScanner |
RegionCoprocessorHost.preStoreScannerOpen(Store store,
Scan scan,
NavigableSet<byte[]> targetCols)
See RegionObserver.preStoreScannerOpen(ObserverContext, Store, Scan, NavigableSet, KeyValueScanner) |
boolean |
MemStore.shouldSeek(Scan scan,
long oldestUnexpiredTS)
Check if this memstore may contain the required keys. |
boolean |
StoreFileScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
|
boolean |
NonLazyKeyValueScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
|
boolean |
MemStore.MemStoreScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
|
boolean |
KeyValueScanner.shouldUseScanner(Scan scan,
SortedSet<byte[]> columns,
long oldestUnexpiredTS)
Allows filtering out scanners (both StoreFile and memstore) that we don't want to use, based on criteria such as Bloom filters and timestamp ranges. |
Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Scan | |
---|---|
ScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long readPointToUse,
long earliestPutTs,
long oldestUnexpiredTS,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow)
Construct a QueryMatcher for a scan that drops deletes from a limited range of rows. |
|
ScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
ScanType scanType,
long readPointToUse,
long earliestPutTs,
long oldestUnexpiredTS)
Construct a QueryMatcher for a scan. |
|
StoreScanner(Store store,
boolean cacheBlocks,
Scan scan,
NavigableSet<byte[]> columns,
long ttl,
int minVersions)
An internal constructor. |
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
long smallestReadPoint,
long earliestPutTs,
byte[] dropDeletesFromRow,
byte[] dropDeletesToRow)
Used for compactions that drop deletes from a limited range of rows. |
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
List<? extends KeyValueScanner> scanners,
ScanType scanType,
long smallestReadPoint,
long earliestPutTs)
Used for compactions. |
|
StoreScanner(Store store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns)
Opens a scanner across memstore, snapshot, and all StoreFiles. |
Uses of Scan in org.apache.hadoop.hbase.rest.client |
---|
Methods in org.apache.hadoop.hbase.rest.client with parameters of type Scan | |
---|---|
ResultScanner |
RemoteHTable.getScanner(Scan scan)
|
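A minimal sketch of scanning through the REST gateway; the gateway host/port and table name are hypothetical:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;

public class RemoteScanExample {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("resthost.example.com", 8080); // hypothetical REST gateway
    Client client = new Client(cluster);
    RemoteHTable table = new RemoteHTable(client, "mytable"); // hypothetical table
    try {
      ResultScanner scanner = table.getScanner(new Scan());
      try {
        for (Result result : scanner) {
          // Rows are fetched over HTTP through the REST gateway.
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}
```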
Uses of Scan in org.apache.hadoop.hbase.rest.model |
---|
Methods in org.apache.hadoop.hbase.rest.model with parameters of type Scan | |
---|---|
static ScannerModel |
ScannerModel.fromScan(Scan scan)
|
Uses of Scan in org.apache.hadoop.hbase.security.access |
---|
Methods in org.apache.hadoop.hbase.security.access with parameters of type Scan | |
---|---|
RegionScanner |
AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
|
RegionScanner |
AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
|
Uses of Scan in org.apache.hadoop.hbase.thrift2 |
---|
Methods in org.apache.hadoop.hbase.thrift2 that return Scan | |
---|---|
static Scan |
ThriftUtilities.scanFromThrift(TScan in)
|
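A minimal sketch of the Thrift-to-client conversion, as the thrift2 server performs it when a remote client opens a scanner; the TScan field values are illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.thrift2.ThriftUtilities;
import org.apache.hadoop.hbase.thrift2.generated.TScan;
import org.apache.hadoop.hbase.util.Bytes;

public class ThriftScanExample {
  public static void main(String[] args) throws IOException {
    // Build a Thrift TScan as a remote client would send it ...
    TScan tScan = new TScan();
    tScan.setStartRow(ByteBuffer.wrap(Bytes.toBytes("row-a")));
    tScan.setStopRow(ByteBuffer.wrap(Bytes.toBytes("row-z")));
    tScan.setCaching(100);

    // ... and convert it into a client Scan on the server side.
    Scan scan = ThriftUtilities.scanFromThrift(tScan);
  }
}
```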