org.apache.hadoop.hbase.filter
Class StopRowFilter

java.lang.Object
  extended by org.apache.hadoop.hbase.filter.StopRowFilter
All Implemented Interfaces:
RowFilterInterface, org.apache.hadoop.io.Writable
Direct Known Subclasses:
InclusiveStopRowFilter

public class StopRowFilter
extends Object
implements RowFilterInterface

Implementation of RowFilterInterface that filters out rows greater than or equal to a specified rowKey.
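A minimal usage sketch (the row key values are made up for illustration): per the description above, filter(rowKey) returns true, meaning the row is filtered out, for any row key greater than or equal to the configured stopRowKey.

import org.apache.hadoop.hbase.filter.StopRowFilter;
import org.apache.hadoop.io.Text;

public class StopRowFilterExample {
  public static void main(String[] args) {
    // Filter out everything at or beyond row key "row-0500".
    StopRowFilter filter = new StopRowFilter(new Text("row-0500"));

    System.out.println(filter.filter(new Text("row-0100"))); // false: row passes
    System.out.println(filter.filter(new Text("row-0500"))); // true: filtered out (>= stop row)
    System.out.println(filter.filter(new Text("row-0999"))); // true: filtered out (>= stop row)
  }
}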


Field Summary
protected  org.apache.hadoop.io.Text stopRowKey
           
 
Constructor Summary
StopRowFilter()
          Default constructor, filters nothing.
StopRowFilter(org.apache.hadoop.io.Text stopRowKey)
          Constructor that takes a stopRowKey on which to filter
 
Method Summary
 boolean filter(org.apache.hadoop.io.Text rowKey)
          Filters on just a row key.
 boolean filter(org.apache.hadoop.io.Text rowKey, org.apache.hadoop.io.Text colKey, byte[] data)
          Filters on row key and/or a column key. Because StopRowFilter does not examine column information, this method defaults to calling the rowKey-only version of filter.
 boolean filterAllRemaining()
          Determines if the filter has decided that all remaining results should be filtered (skipped).
 boolean filterNotNull(SortedMap<org.apache.hadoop.io.Text,byte[]> columns)
          Filters a row on null/non-null column criteria. Because StopRowFilter does not examine column information, this method defaults to calling filterAllRemaining().
 org.apache.hadoop.io.Text getStopRowKey()
          An accessor for the stopRowKey
 boolean processAlways()
          Returns whether or not the filter should always be processed in any filtering call.
 void readFields(DataInput in)
          Deserializes this filter's fields from the given DataInput.
 void reset()
          Resets the state of the filter.
 void rowProcessed(boolean filtered, org.apache.hadoop.io.Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void validate(org.apache.hadoop.io.Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void write(DataOutput out)
          Serializes this filter's fields to the given DataOutput.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

stopRowKey

protected org.apache.hadoop.io.Text stopRowKey
Constructor Detail

StopRowFilter

public StopRowFilter()
Default constructor; filters nothing. It is required, however, for RPC deserialization.


StopRowFilter

public StopRowFilter(org.apache.hadoop.io.Text stopRowKey)
Constructor that takes a stopRowKey on which to filter

Parameters:
stopRowKey - rowKey to filter on.
Method Detail

getStopRowKey

public org.apache.hadoop.io.Text getStopRowKey()
An accessor for the stopRowKey

Returns:
the filter's stopRowKey

validate

public void validate(org.apache.hadoop.io.Text[] columns)
Validates that this filter applies only to a subset of the given columns. This check is done prior to opening the scanner because filtering of columns depends on the retrieval of those columns within the HRegion. Criteria on columns that are not part of a scanner's column list will be ignored. In the case of null-value filters, all rows would pass the filter; this behavior should be considered undefined by the user and is therefore not permitted.

Specified by:
validate in interface RowFilterInterface

reset

public void reset()
Resets the state of the filter. Used prior to the start of a Region scan.

Specified by:
reset in interface RowFilterInterface

rowProcessed

public void rowProcessed(boolean filtered,
                         org.apache.hadoop.io.Text rowKey)
Called to let the filter know the final decision (to pass or to filter) on a given row. Without HScanner calling this method, the filter does not know whether a row passed filtering even if it passed the row itself, because other filters may have failed the row, e.g. when this filter is a member of a RowFilterSet with an OR operator.

Specified by:
rowProcessed in interface RowFilterInterface
See Also:
RowFilterSet
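As a hedged sketch of the scenario mentioned above, the following combines two filters in an OR-style RowFilterSet. The RowFilterSet constructor and the Operator.MUST_PASS_ONE value used here are assumptions about the companion RowFilterSet class, not something documented on this page, so verify them against the RowFilterSet documentation. The scanner, not user code, is responsible for calling rowProcessed once the set's final decision for a row is known.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.filter.RowFilterInterface;
import org.apache.hadoop.hbase.filter.RowFilterSet;
import org.apache.hadoop.hbase.filter.StopRowFilter;
import org.apache.hadoop.io.Text;

public class OrFilterSetSketch {
  public static void main(String[] args) {
    Set<RowFilterInterface> filters = new HashSet<RowFilterInterface>();
    // Two stop filters purely for illustration; any RowFilterInterface members would do.
    filters.add(new StopRowFilter(new Text("row-0500")));
    filters.add(new StopRowFilter(new Text("row-0900")));

    // Assumed constructor: RowFilterSet(Operator, Set<RowFilterInterface>).
    // MUST_PASS_ONE is assumed to correspond to the OR operator mentioned above.
    RowFilterInterface orSet =
        new RowFilterSet(RowFilterSet.Operator.MUST_PASS_ONE, filters);

    // Expected false under OR semantics: "row-0100" passes both member filters.
    System.out.println(orSet.filter(new Text("row-0100")));
  }
}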

processAlways

public boolean processAlways()
Returns whether or not the filter should always be processed in any filtering call. This precaution is necessary for filters that maintain state and need to be updated according to their response to filtering calls (see WhileMatchRowFilter for an example). At times, filters nested in RowFilterSets may or may not be called because the RowFilterSet determines a result as fast as possible. Returning true for processAlways() ensures that the filter will always be called.

Specified by:
processAlways in interface RowFilterInterface
Returns:
whether or not to always process the filter

filterAllRemaining

public boolean filterAllRemaining()
Determines if the filter has decided that all remaining results should be filtered (skipped). This is used to prevent the scanner from scanning the rest of the HRegion when the filter is certain to exclude all remaining rows.

Specified by:
filterAllRemaining in interface RowFilterInterface
Returns:
true if the filter intends to filter all remaining rows.

filter

public boolean filter(org.apache.hadoop.io.Text rowKey)
Filters on just a row key.

Specified by:
filter in interface RowFilterInterface
Returns:
true if the given row key is filtered and the row should not be processed.

filter

public boolean filter(org.apache.hadoop.io.Text rowKey,
                      org.apache.hadoop.io.Text colKey,
                      byte[] data)
Filters on row key and/or a column key. Because StopRowFilter does not examine column information, this method defaults to calling the rowKey-only version of filter.

Specified by:
filter in interface RowFilterInterface
Parameters:
rowKey - row key to filter on. May be null for no filtering of row key.
colKey - column whose data will be filtered
data - column value
Returns:
true if the row is filtered and should not be processed.
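A small sketch of the delegation described above: because the column arguments are ignored, the three-argument filter should return the same result as the row-key-only version (the column name and value below are arbitrary placeholders).

import org.apache.hadoop.hbase.filter.StopRowFilter;
import org.apache.hadoop.io.Text;

public class ColumnArgsIgnoredSketch {
  public static void main(String[] args) {
    StopRowFilter filter = new StopRowFilter(new Text("row-0500"));
    Text row = new Text("row-0700");

    boolean byRowOnly = filter.filter(row);
    boolean byRowAndColumn = filter.filter(row, new Text("info:anything"), new byte[0]);

    // Both calls should agree; here both are true since "row-0700" >= "row-0500".
    System.out.println(byRowOnly == byRowAndColumn); // true
  }
}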

filterNotNull

public boolean filterNotNull(SortedMap<org.apache.hadoop.io.Text,byte[]> columns)
Filters a row if: 1) the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null, or 2) the filter has a criterion for a particular columnKey, but that columnKey is not present in the given row (columns). Note that filterNotNull does not care whether the values associated with a columnKey match. Also note that a "null value" associated with a columnKey is expressed as HConstants.DELETE_BYTES. Because StopRowFilter does not examine column information, this method defaults to calling filterAllRemaining().

Specified by:
filterNotNull in interface RowFilterInterface
Parameters:
columns -
Returns:
true if the null/non-null criteria are not met.

readFields

public void readFields(DataInput in)
                throws IOException
Deserializes this filter's fields from the given DataInput.

Specified by:
readFields in interface org.apache.hadoop.io.Writable
Throws:
IOException

write

public void write(DataOutput out)
           throws IOException
Serializes this filter's fields to the given DataOutput.

Specified by:
write in interface org.apache.hadoop.io.Writable
Throws:
IOException
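
A sketch of the Writable round trip implied by the no-argument constructor's note about RPC deserialization, using plain java.io streams in place of the RPC layer.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.hbase.filter.StopRowFilter;
import org.apache.hadoop.io.Text;

public class StopRowFilterRoundTrip {
  public static void main(String[] args) throws IOException {
    StopRowFilter original = new StopRowFilter(new Text("row-0500"));

    // Serialize the filter's fields with write(DataOutput).
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    original.write(new DataOutputStream(bytes));

    // Deserialize into a fresh instance created via the no-arg constructor,
    // mirroring what happens on the receiving side of an RPC.
    StopRowFilter copy = new StopRowFilter();
    copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

    System.out.println(copy.getStopRowKey()); // row-0500
  }
}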


Copyright © 2008 The Apache Software Foundation