org.apache.hadoop.hbase.filter
Class PageRowFilter

java.lang.Object
  extended by org.apache.hadoop.hbase.filter.PageRowFilter
All Implemented Interfaces:
RowFilterInterface, org.apache.hadoop.io.Writable

public class PageRowFilter
extends Object
implements RowFilterInterface

Implementation of RowFilterInterface that limits results to a specific page size. It terminates scanning once the number of filter-passed results is >= the given page size.

Note that this filter cannot guarantee that the number of results returned to a client is <= page size. This is because the filter is applied separately on different region servers. It does, however, optimize the scan of individual HRegions by making sure that the page size is never exceeded locally.
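The cross-region behavior described above can be sketched as follows. This is a simplified stand-in, not HBase code: each region server applies its own filter instance, so with N regions a client may receive up to N * pageSize rows.

```java
import java.util.ArrayList;
import java.util.List;

public class PageSizeDemo {
    // Simplified per-region scan: stop once pageSize rows have passed,
    // mimicking filterAllRemaining() tripping locally.
    static List<String> scanRegion(List<String> regionRows, long pageSize) {
        List<String> passed = new ArrayList<>();
        for (String row : regionRows) {
            if (passed.size() >= pageSize) break; // local page limit reached
            passed.add(row);                      // row passes the filter
        }
        return passed;
    }

    public static void main(String[] args) {
        long pageSize = 3;
        // Two hypothetical regions, scanned independently.
        List<String> regionA = List.of("a1", "a2", "a3", "a4", "a5");
        List<String> regionB = List.of("b1", "b2", "b3", "b4");
        List<String> clientResults = new ArrayList<>();
        clientResults.addAll(scanRegion(regionA, pageSize));
        clientResults.addAll(scanRegion(regionB, pageSize));
        // Each region stops at pageSize, but the merged client result is larger.
        System.out.println(clientResults.size()); // prints 6, i.e. 2 * pageSize
    }
}
```

A client that needs an exact page size must therefore truncate the merged results itself.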


Constructor Summary
PageRowFilter()
          Default constructor, filters nothing.
PageRowFilter(long pageSize)
          Constructor that takes a maximum page size.
 
Method Summary
 boolean filter(org.apache.hadoop.io.Text rowKey)
          Filters on just a row key.
 boolean filter(org.apache.hadoop.io.Text rowKey, org.apache.hadoop.io.Text colKey, byte[] data)
          Filters on row key and/or a column key.
 boolean filterAllRemaining()
          Determines if the filter has decided that all remaining results should be filtered (skipped).
 boolean filterNotNull(SortedMap<org.apache.hadoop.io.Text,byte[]> columns)
          Filters a row if: 1) The given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null.
 boolean processAlways()
          Returns whether or not the filter should always be processed in any filtering call.
 void readFields(DataInput in)
          
 void reset()
          Resets the state of the filter.
 void rowProcessed(boolean filtered, org.apache.hadoop.io.Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void validate(org.apache.hadoop.io.Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void write(DataOutput out)
          
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

PageRowFilter

public PageRowFilter()
Default constructor, filters nothing. It is required, however, for RPC deserialization.


PageRowFilter

public PageRowFilter(long pageSize)
Constructor that takes a maximum page size.

Parameters:
pageSize - Maximum result size.
Method Detail

validate

public void validate(org.apache.hadoop.io.Text[] columns)
Validates that this filter applies only to a subset of the given columns. This check is done prior to opening a scanner due to the limitation that filtering of columns depends on the retrieval of those columns within the HRegion. Criteria on columns that are not part of a scanner's column list will be ignored. In the case of null value filters, all rows will pass the filter. This behavior would be 'undefined' from the user's perspective and is therefore not permitted.

Specified by:
validate in interface RowFilterInterface

reset

public void reset()
Resets the state of the filter. Used prior to the start of a Region scan.

Specified by:
reset in interface RowFilterInterface

rowProcessed

public void rowProcessed(boolean filtered,
                         org.apache.hadoop.io.Text rowKey)
Called to let the filter know the final decision (to pass or filter) on a given row. Without the HScanner calling this, the filter does not know whether a row ultimately passed filtering, even if the filter itself passed the row, because other filters may have failed it. E.g. when this filter is a member of a RowFilterSet with an OR operator.

Specified by:
rowProcessed in interface RowFilterInterface
See Also:
RowFilterSet
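The interplay between rowProcessed, filterAllRemaining, and reset can be sketched as below. This is a minimal illustration of the contract described above, not the HBase source: rowProcessed counts rows that ultimately passed, and filterAllRemaining trips once the count reaches the page size.

```java
public class PagingSketch {
    private final long pageSize;
    private long rowsAccepted = 0;

    PagingSketch(long pageSize) { this.pageSize = pageSize; }

    // Called prior to the start of each region scan.
    void reset() { rowsAccepted = 0; }

    // Only rows the scanner finally accepted count toward the page.
    void rowProcessed(boolean filtered, String rowKey) {
        if (!filtered) rowsAccepted++;
    }

    // Once the page is full, the rest of the region can be skipped.
    boolean filterAllRemaining() {
        return rowsAccepted >= pageSize;
    }

    public static void main(String[] args) {
        PagingSketch f = new PagingSketch(2);
        f.rowProcessed(false, "row1");              // passed
        f.rowProcessed(true, "row2");               // failed by another filter
        System.out.println(f.filterAllRemaining()); // prints false: 1 accepted
        f.rowProcessed(false, "row3");
        System.out.println(f.filterAllRemaining()); // prints true: page full
        f.reset();
        System.out.println(f.filterAllRemaining()); // prints false after reset
    }
}
```

Note how the row failed by a sibling filter ("row2") does not count toward the page, which is exactly why the filter needs the scanner's final decision via rowProcessed.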

processAlways

public boolean processAlways()
Returns whether or not the filter should always be processed in any filtering call. This precaution is necessary for filters that maintain state and need to be updated according to their response to filtering calls (see WhileMatchRowFilter for an example). At times, filters nested in RowFilterSets may or may not be called because the RowFilterSet determines a result as fast as possible. Returning true for processAlways() ensures that the filter will always be called.

Specified by:
processAlways in interface RowFilterInterface
Returns:
whether or not to always process the filter

filterAllRemaining

public boolean filterAllRemaining()
Determines if the filter has decided that all remaining results should be filtered (skipped). This is used to prevent the scanner from scanning the rest of the HRegion when the filter is certain to exclude all remaining rows.

Specified by:
filterAllRemaining in interface RowFilterInterface
Returns:
true if the filter intends to filter all remaining rows.

filter

public boolean filter(org.apache.hadoop.io.Text rowKey)
Filters on just a row key.

Specified by:
filter in interface RowFilterInterface
Returns:
true if the given row key is filtered and the row should not be processed.

filter

public boolean filter(org.apache.hadoop.io.Text rowKey,
                      org.apache.hadoop.io.Text colKey,
                      byte[] data)
Filters on row key and/or a column key.

Specified by:
filter in interface RowFilterInterface
Parameters:
rowKey - row key to filter on. May be null for no filtering of row key.
colKey - column whose data will be filtered
data - column value
Returns:
true if the row is filtered and should not be processed.

filterNotNull

public boolean filterNotNull(SortedMap<org.apache.hadoop.io.Text,byte[]> columns)
Filters a row if: 1) The given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. 2) The filter has a criterion for a particular columnKey, but that columnKey is not in the given row (columns). Note that filterNotNull does not care whether the values associated with a columnKey match. Also note that a "null value" associated with a columnKey is expressed as HConstants.DELETE_BYTES.

Specified by:
filterNotNull in interface RowFilterInterface
Returns:
true if null/non-null criteria not met.
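Criterion (1) above can be illustrated with a generic sketch. This is a hedged illustration of the contract, not PageRowFilter's actual implementation (PageRowFilter itself places no criteria on column values); the column names and the DELETE_BYTES stand-in are hypothetical.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class FilterNotNullSketch {
    // Stand-in for HConstants.DELETE_BYTES, the marker for a "null value".
    static final byte[] DELETE_BYTES = new byte[0];

    // Criterion (1): filter the row if a column expected to be null
    // actually carries a non-null value.
    static boolean filterNotNull(String nullColumn, SortedMap<String, byte[]> columns) {
        byte[] value = columns.get(nullColumn);
        if (value == null) return false;  // column absent: null criterion met
        return value != DELETE_BYTES;     // real value present: filter the row
    }

    public static void main(String[] args) {
        SortedMap<String, byte[]> row = new TreeMap<>();
        row.put("info:flag", new byte[]{1});
        // Expected null, but a value is present: row is filtered.
        System.out.println(filterNotNull("info:flag", row));  // prints true
        // Column absent entirely: null criterion is satisfied.
        System.out.println(filterNotNull("info:other", row)); // prints false
    }
}
```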

readFields

public void readFields(DataInput in)
                throws IOException

Specified by:
readFields in interface org.apache.hadoop.io.Writable
Throws:
IOException

write

public void write(DataOutput out)
           throws IOException

Specified by:
write in interface org.apache.hadoop.io.Writable
Throws:
IOException


Copyright © 2008 The Apache Software Foundation