java.lang.Object
  org.apache.lucene.util.AttributeSource
      org.apache.lucene.analysis.TokenStream
          org.apache.lucene.analysis.TokenFilter
              org.apache.lucene.analysis.TeeSinkTokenFilter
public final class TeeSinkTokenFilter extends TokenFilter
This TokenFilter provides the ability to set aside attribute states that have already been analyzed. This is useful in situations where multiple fields share many common analysis steps and then go their separate ways.
It is also useful for doing things like entity extraction or proper noun analysis as part of the analysis workflow and saving off those tokens for use in another field.

TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader1));
TeeSinkTokenFilter.SinkTokenStream sink1 = source1.newSinkTokenStream();
TeeSinkTokenFilter.SinkTokenStream sink2 = source1.newSinkTokenStream();
TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader2));
source2.addSinkTokenStream(sink1);
source2.addSinkTokenStream(sink2);
TokenStream final1 = new LowerCaseFilter(source1);
TokenStream final2 = source2;
TokenStream final3 = new EntityDetect(sink1);
TokenStream final4 = new URLDetect(sink2);
d.add(new Field("f1", final1));
d.add(new Field("f2", final2));
d.add(new Field("f3", final3));
d.add(new Field("f4", final4));

In this example, sink1 and sink2 will both get tokens from both reader1 and reader2 after the whitespace tokenizer, and we can further wrap any of these in extra analysis; more "sources" can be inserted if desired.
It is important that tees are consumed before sinks (in the above example, the tee field names must be lexicographically smaller than the sink field names, so that the tees are processed first). If you are not sure which stream is consumed first, you can simply add another sink and then pass all tokens to the sinks at once using consumeAllTokens(). The TeeSinkTokenFilter is exhausted after that. To do so, change the example above to:
...
TokenStream final1 = new LowerCaseFilter(source1.newSinkTokenStream());
TokenStream final2 = source2.newSinkTokenStream();
source1.consumeAllTokens();
source2.consumeAllTokens();
...
In this case, the fields can be added in any order, because the sources are not used anymore and all sinks are ready.
Note: the EntityDetect and URLDetect TokenStreams are for the example only and do not currently exist in Lucene.
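Since those classes are only placeholders, below is a minimal sketch of what a URLDetect-style filter could look like; it assumes the Lucene 2.9 TermAttribute API, and the "://" substring test is purely illustrative:

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Hypothetical URLDetect: only lets tokens through that look like URLs.
final class URLDetect extends TokenFilter {
  private final TermAttribute termAtt;

  URLDetect(TokenStream input) {
    super(input);
    // attributes are shared with the input stream
    termAtt = (TermAttribute) addAttribute(TermAttribute.class);
  }

  public boolean incrementToken() throws IOException {
    while (input.incrementToken()) {
      if (termAtt.term().indexOf("://") >= 0) {
        return true;   // keep URL-like tokens
      }
      // otherwise skip this token and examine the next one
    }
    return false;      // end of stream
  }
}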
Nested Class Summary
static class  TeeSinkTokenFilter.SinkFilter
          A filter that decides which AttributeSource states to store in the sink.
static class  TeeSinkTokenFilter.SinkTokenStream
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
Field Summary
Fields inherited from class org.apache.lucene.analysis.TokenFilter
input
Constructor Summary
TeeSinkTokenFilter(TokenStream input)
Instantiates a new TeeSinkTokenFilter.
Method Summary
 void  addSinkTokenStream(TeeSinkTokenFilter.SinkTokenStream sink)
          Adds a TeeSinkTokenFilter.SinkTokenStream created by another TeeSinkTokenFilter to this one.
 void  consumeAllTokens()
          TeeSinkTokenFilter passes all tokens to the added sinks when it is itself consumed.
 void  end()
          Performs end-of-stream operations, if any, and then calls end() on the input TokenStream.
 boolean  incrementToken()
          Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
 TeeSinkTokenFilter.SinkTokenStream  newSinkTokenStream()
          Returns a new TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream.
 TeeSinkTokenFilter.SinkTokenStream  newSinkTokenStream(TeeSinkTokenFilter.SinkFilter filter)
          Returns a new TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream that pass the supplied filter.
Methods inherited from class org.apache.lucene.analysis.TokenFilter
close, reset
Methods inherited from class org.apache.lucene.analysis.TokenStream
getOnlyUseNewAPI, next, next, setOnlyUseNewAPI
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, restoreState, toString
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
Constructor Detail
TeeSinkTokenFilter
public TeeSinkTokenFilter(TokenStream input)
- Instantiates a new TeeSinkTokenFilter.
Method Detail
newSinkTokenStream
public TeeSinkTokenFilter.SinkTokenStream newSinkTokenStream()
- Returns a new TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream.
newSinkTokenStream
public TeeSinkTokenFilter.SinkTokenStream newSinkTokenStream(TeeSinkTokenFilter.SinkFilter filter)
- Returns a new TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream that pass the supplied filter.
- See Also:
TeeSinkTokenFilter.SinkFilter
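As a rough illustration (not part of the original Javadoc), a SinkFilter can be supplied as an anonymous class whose accept(AttributeSource) method inspects the current token; this sketch assumes a TermAttribute is present on the tee's attribute source (as with a WhitespaceTokenizer) and uses a crude upper-case-first-letter test as a stand-in for proper-noun detection:

import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.AttributeSource;

TeeSinkTokenFilter.SinkFilter properNouns = new TeeSinkTokenFilter.SinkFilter() {
  public boolean accept(AttributeSource source) {
    // the tee shares its attributes with this source, so the current
    // token's term text can be read here
    TermAttribute term = (TermAttribute) source.getAttribute(TermAttribute.class);
    return term.termLength() > 0 && Character.isUpperCase(term.termBuffer()[0]);
  }
};

TeeSinkTokenFilter.SinkTokenStream nounSink = source1.newSinkTokenStream(properNouns);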
addSinkTokenStream
public void addSinkTokenStream(TeeSinkTokenFilter.SinkTokenStream sink)
- Adds a TeeSinkTokenFilter.SinkTokenStream created by another TeeSinkTokenFilter to this one. The supplied stream will also receive all consumed tokens.
This method can be used to pass tokens from two different tees to one sink.
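For example (echoing the class-level example above, with illustrative variable names), a sink created on one tee can be registered with a second tee so that it receives tokens from both inputs:

TeeSinkTokenFilter tee1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader1));
TeeSinkTokenFilter.SinkTokenStream shared = tee1.newSinkTokenStream();

TeeSinkTokenFilter tee2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader2));
tee2.addSinkTokenStream(shared);   // 'shared' now also receives tee2's tokens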
consumeAllTokens
public void consumeAllTokens()
throws IOException
TeeSinkTokenFilter passes all tokens to the added sinks when it is itself consumed. To be sure that all tokens from the input stream are passed to the sinks, you can call this method. This instance is exhausted afterwards, but all sinks are immediately available.
- Throws:
IOException
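A minimal usage sketch (doc and reader are assumed to exist; the field names are illustrative):

TeeSinkTokenFilter tee = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader));
TeeSinkTokenFilter.SinkTokenStream sink1 = tee.newSinkTokenStream();
TeeSinkTokenFilter.SinkTokenStream sink2 = tee.newSinkTokenStream();

tee.consumeAllTokens();            // pushes every input token into sink1 and sink2;
                                   // the tee itself is exhausted afterwards

doc.add(new Field("a", sink1));    // the sinks can now be consumed in any order
doc.add(new Field("b", sink2));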
incrementToken
public boolean incrementToken()
throws IOException
- Description copied from class: TokenStream
- Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class) or downcasts, references to all AttributeImpls that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Overrides: incrementToken in class TokenStream
- Returns: false for end of stream; true otherwise. Note that this method will be defined abstract in Lucene 3.0.
- Throws:
IOException
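As an illustration of the consumer contract described above (not specific to TeeSinkTokenFilter; TermAttribute is just one example attribute), attribute references are retrieved once up front and then reused across incrementToken() calls:

import org.apache.lucene.analysis.tokenattributes.TermAttribute;

TokenStream stream = ...;   // e.g. a TeeSinkTokenFilter or one of its sinks
TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);

stream.reset();
while (stream.incrementToken()) {
  // the shared attribute instance now holds the current token's term text
  System.out.println(termAtt.term());
}
stream.end();
stream.close();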
end
public final void end()
throws IOException
- Description copied from class: TokenFilter
- Performs end-of-stream operations, if any, and then calls end() on the input TokenStream.
NOTE: Be sure to call super.end() first when overriding this method.
- Overrides: end in class TokenFilter
- Throws:
IOException