IndexWriter.rollback()
instead.
AttributeSource
shall be stored
in the sink.
Collector
can accept documents given to
Collector.collect(int)
out of order.
FilteredTermEnum.setEnum(org.apache.lucene.index.TermEnum)
QueryParser.addClause(List, int, int, Query)
instead.
IndexWriter.getAnalyzer()
.
IndexWriter.addIndexesNoOptimize(org.apache.lucene.store.Directory[])
instead,
then separately call IndexWriter.optimize()
afterwards if
you need to.
TeeSinkTokenFilter.SinkTokenStream
created by another TeeSinkTokenFilter
to this one.
PriorityQueue.updateTop()
which returns the new top element and
saves an additional call to PriorityQueue.top()
.
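For illustration, a minimal sketch of the pattern this enables (the queue pq and the ScoreDoc-based ordering are assumptions for the example, not taken from this index):

    // Mutate the current top in place, then restore heap order in one call.
    ScoreDoc top = (ScoreDoc) pq.top();  // peek at the least element
    top.score = newScore;                // change its sort key
    pq.updateTop();                      // re-heapify; returns the new top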
TermVectorEntry
.
AttributeSource
.AttributeImpl
s,
and methods to add and get them.
AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY
.
AttributeSource.AttributeFactory
for creating new Attribute
instances.
AttributeImpl
s.
NumericField
type.
CharFilter
.
AbstractField.getBinaryOffset()
is non-zero
or AbstractField.getBinaryLength()
is not the
full length of the byte[]. Please use AbstractField.getBinaryValue()
instead, which simply
returns the byte[].
null
for numeric fields
CachingSpanFilter.getDocIdSet(IndexReader)
instead.
CachingWrapperFilter.getDocIdSet(IndexReader)
instead.
Filter.getDocIdSet(IndexReader)
instead.
MultiTermQueryWrapperFilter.getDocIdSet(IndexReader)
instead.
QueryWrapperFilter.getDocIdSet(IndexReader)
instead.
n
bits.
name
in Directory
d
, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String)
method.
BooleanQuery.getMaxClauseCount()
clauses.
PayloadTermQuery
char[]
buffer size)
for encoding int
values.
char[]
buffer size)
for encoding long
values.
IndexInput
.
IndexOutput
.
FieldCache
using getBytes()
and makes those values
available as other numeric types, casting as needed.
FieldCacheSource
, already knowing that cache and field are equal.
FieldCacheSource
, without the hash-codes of the field
and the cache (those are taken care of elsewhere).
CharStream.correctOffset(int)
functionality over Reader
.
CheckIndex.checkIndex()
instead
CheckIndex.checkIndex(List)
instead
CheckIndex.Status
instance detailing
the state of the index.
CheckIndex.Status
instance detailing
the state of the index.
CheckIndex.checkIndex()
detailing the health and status of the index.
bit
to zero.
AttributeImpl.clear()
on each Attribute implementation.
AttributeImpl
instances returned in a new
AttributeSource instance.
Collector.collect(int)
on the decorated Collector
unless the allowed time has passed, in which case it throws an exception.
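A minimal sketch of that wrapping, assuming the Lucene 2.9-era API (searcher and query are placeholders):

    // Give the search a 1-second budget; partial hits survive a timeout.
    Collector hits = TopScoreDocCollector.create(10, true);
    Collector limited = new TimeLimitingCollector(hits, 1000); // ms
    try {
      searcher.search(query, limited);
    } catch (TimeLimitingCollector.TimeExceededException e) {
      // the wrapped collector still holds the hits gathered so far
    }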
CompressionTools
instead.
For string fields that were previously indexed and stored using compression,
the new way to achieve this is: First add the field indexed-only (no store)
and additionally using the same field name as a binary, stored field
with CompressionTools.compressString(java.lang.String)
.
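A minimal sketch of that migration, assuming the Lucene 2.9 Field and CompressionTools APIs (the field name "body" is illustrative):

    Document doc = new Document();
    // 1) indexed-only field for searching (no store)
    doc.add(new Field("body", text, Field.Store.NO, Field.Index.ANALYZED));
    // 2) stored binary field, same name, holding the compressed bytes
    doc.add(new Field("body", CompressionTools.compressString(text), Field.Store.YES));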
state.getBoost()*lengthNorm(numTerms)
, where
numTerms
is FieldInvertState.getLength()
if DefaultSimilarity.setDiscountOverlaps(boolean)
is false, else it's FieldInvertState.getLength()
- FieldInvertState.getNumOverlap()
.
FieldInvertState
).
MergeScheduler
that runs each merge using a
separate thread, up until a maximum number of threads
(ConcurrentMergeScheduler.setMaxThreadCount(int)
) at which point, when a merge is
needed, the thread(s) that are updating the index will
pause until one or more merges complete.
MultiTermQuery.ConstantScoreAutoRewrite
, with MultiTermQuery.ConstantScoreAutoRewrite.setTermCountCutoff(int)
set to
MultiTermQuery.ConstantScoreAutoRewrite.DEFAULT_TERM_COUNT_CUTOFF
and MultiTermQuery.ConstantScoreAutoRewrite.setDocCountPercent(double)
set to
MultiTermQuery.ConstantScoreAutoRewrite.DEFAULT_DOC_COUNT_PERCENT
.
MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE
except
scores are not computed.
TermRangeQuery
for term ranges or
NumericRangeQuery
for numeric ranges instead.
This class will be removed in Lucene 3.0.
TeeSinkTokenFilter
passes all tokens to the added sinks
when it is itself consumed.
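A minimal sketch of the tee/sink pattern, assuming the Lucene 2.9 analysis API:

    // Fork one token stream so a second consumer can replay the same tokens.
    TeeSinkTokenFilter tee = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader));
    TeeSinkTokenFilter.SinkTokenStream sink = tee.newSinkTokenStream();
    // consume 'tee' first (e.g. by indexing it); then consume 'sink'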
len
chars of text
starting at off
are in the set
CharSequence
is in the set
overlap / maxOverlap
.
TopFieldCollector
from the given
arguments.
TopScoreDocCollector
given the number of hits to
collect and whether documents are scored in order by the input
Scorer
to TopScoreDocCollector.setScorer(Scorer)
.
AttributeImpl
for the supplied Attribute
interface class.
query
ValueSourceQuery
.
ValueSourceQuery
.
DateTools
or
NumericField
instead.
This class is included for use with existing
indices and will be removed in a future release.
IndexableBinaryStringTools.encode(java.nio.ByteBuffer)
or
IndexableBinaryStringTools.encode(java.nio.ByteBuffer, java.nio.CharBuffer)
.
AttributeImpl
s using the
class name of the supplied Attribute
interface class by appending Impl
to it.
Byte.toString(byte)
Double.toString(double)
Float.toString(float)
TimeLimitedCollector.isGreedy()
.
TimeLimitingCollector.isGreedy()
.
Integer.toString(int)
Long.toString(long)
IndexWriter.getMaxSyncPauseSeconds()
.
Short.toString(short)
docNum
.
docNum
.
term
.
term
indexed.
term
.
ConstantScoreQuery.ConstantScorer.docID()
instead.
DocIdSetIterator.docID()
instead.
FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.docID()
instead.
FilteredDocIdSetIterator.docID()
instead.
ScoreCachingWrappingScorer.docID()
instead.
i
.
Document
at the n
th position.
SpanScorer.docID()
instead.
OpenBitSetIterator.docID()
instead.
IndexWriter.maxDoc()
(same as this
method) or IndexWriter.numDocs()
(also takes deletions
into account), instead.
t
.
term
.
term
.
DocIdSetIterator.NO_MORE_DOCS
if DocIdSetIterator.nextDoc()
or
DocIdSetIterator.advance(int)
were not called yet.
instead
.
n
th
Document
in this index.
Document
at the n
th position.
docNum
.
IndexWriter.merge(org.apache.lucene.index.MergePolicy.OneMerge)
double
value to a sortable signed long
.
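This presumably refers to NumericUtils.doubleToSortableLong; a minimal round-trip sketch:

    // The sortable long preserves double ordering, including negatives.
    long sortable = NumericUtils.doubleToSortableLong(3.14);
    double back = NumericUtils.sortableLongToDouble(sortable); // 3.14 again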
DocIdSet
instance for easy use, e.g.
TermPositionVector
that stores only position information.
end()
on the
input TokenStream.
NOTE: Be sure to call super.end()
first when overriding this method.
TokenStream.incrementToken()
returned false
(using the new TokenStream
API).
StopAnalyzer.ENGLISH_STOP_WORDS_SET
instead
AlreadyClosedException
if this IndexWriter has been
closed.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
ValueSourceQuery.equals(Object)
.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
AttributeImpl.hashCode()
should be checked here for equality.
\
.
IndexSearcher.explain(org.apache.lucene.search.Weight, int)
or Weight.explain(org.apache.lucene.index.IndexReader, int)
instead.
doc
scored against
weight
.
doc
scored against
query
.
IndexWriter.expungeDeletes()
, except you can
specify whether the call should block until the
operation completes.
FieldCache.DEFAULT
; this will be removed in Lucene 3.0
FieldCache
, this will be removed in Lucene 3.0.
FieldCache.DoubleParser
, this will be removed in Lucene 3.0.
FieldCache.LongParser
, this will be removed in Lucene 3.0.
Field
.
FieldCache
).
FieldCache
.
Filter
that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.
TopFieldCollector
.
FieldCache.getBytes(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending value.
FieldCache.getDoubles(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending value.
FieldCache.getFloats(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending value.
FieldCache.getInts(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending value.
FieldCache.getLongs(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending value.
FieldCache.getShorts(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending value.
FieldComparator
for custom field sorting.
SpanQuery
objects participate in composite
single-field SpanQueries by 'lying' about their search field.
null
as its
detail message.
Document.getFields()
instead
FileFilter
, the FieldSelector allows one to make decisions about
what Fields get loaded on a Document
by IndexReader.document(int,org.apache.lucene.document.FieldSelector)
FieldValueHitQueue
TermVectorEntry
s
This is not thread-safe.
FilterIndexReader
contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
TermDocs
implementations.
TermEnum
implementations.
TermPositions
implementations.
MergePolicy.MergeSpecification
if so.
ChecksumIndexOutput.prepareCommit()
CheckIndex.checkIndex()
.
Tokenizer
chain,
e.g. from one TokenFilter to another one.
FieldCache
using getFloats()
and makes those values
available as other numeric types, casting as needed.
float
value to a sortable signed int
.
IndexWriter.commit()
) instead
minimumSimilarity
to term
.
FuzzyQuery(term, minimumSimilarity, 0)
.
FuzzyQuery(term, 0.5f, 0)
.
reader
which share a prefix of
length prefixLength
with term
and which have a fuzzy similarity >
minSimilarity
.
true
if bit
is one and
false
if it is zero.
Weight.scoresDocsOutOfOrder()
is used.
bit
to true, and
returns true if bit was already set
NumericField
type.
Float.NaN
if this
DocValues instance does not contain any value.
null
for numeric fields
SpanFilterResult.getDocIdSet()
QueryParser.getBooleanQuery(List)
instead
QueryParser.getBooleanQuery(List, boolean)
instead
Document.setBoost(float)
.
field
as a single byte and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as bytes and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexWriter.commit(Map)
, from current index
segments file.
FieldComparator
to use for
sorting.
IndexReader.getCurrentVersion(Directory)
instead.
This method will be removed in the 3.0 release.
IndexReader.getCurrentVersion(Directory)
instead.
This method will be removed in the 3.0 release.
FieldComparatorSource
directly, instead.
Directory
for the index.
Directory
of the index that hit
the exception.
FSDirectory.open(File)
FSDirectory.open(File, LockFactory)
FSDirectory.open(File)
FSDirectory.open(File, LockFactory)
LockFactory
and
supply NoLockFactory.getNoLockFactory()
.
field
as integers and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as doubles and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
SortField.getComparatorSource()
Fieldable
name.
Fieldable
s with the given name.
QueryParser.getFieldQuery(String,String)
.
Field
s with the given name.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)
).
IndexReader
this searches.
FieldCache.setInfoStream(PrintStream)
field
as integers and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as integers and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
as longs and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as longs and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
Float.NaN
if this
DocValues instance does not contain any value.
Float.NaN
if this
DocValues instance does not contain any value.
Number
, null
if not yet initialized.
positionIncrement == 0
.
Analyzer.getPositionIncrementGap(java.lang.String)
, except for
Token offsets instead.
PositionBasedTermVectorMapper.TVPositionInfo.getTerms()
) of TermVectorOffsetInfo objects.
AbstractField.getOmitTermFreqAndPositions()
AbstractField.getOmitTermFreqAndPositions()
FieldCache
parser that fits to the given sort type.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)
).
IndexWriter.setRAMBufferSizeMB(double)
if enabled.
BufferedIndexInput.readBytes(byte[], int, int)
.
IndexWriter.getReader()
, except you can
specify which termInfosIndexDivisor should be used for
any newly opened readers.
Searchable
s this searches.
segments_N
) associated
with this commit point.
segments_N
) associated
with this commit point.
PriorityQueue.initialize(int)
to fill the queue, so
that the code which uses that queue can always assume it's full and only
change the top without attempting to insert any new object.
PriorityQueue.lessThan(Object, Object)
should always favor the
non-sentinel values).
field
as shorts and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as shorts and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
and returns
an array of them in natural order, along with an array telling
which element in the term array each document uses.
field
and returns an array
of size reader.maxDoc()
containing the value each document
has in the given field.
TermFreqVector
.
FieldMaskingSpanQuery.extractTerms(Set)
instead.
QueryParser.getMultiTermRewriteMethod()
instead.
IndexWriter.commit(Map)
for this commit.
BooleanQuery.getAllowDocsOutOfOrder()
instead.
true
, if the unmap workaround is enabled.
ValueSourceQuery.hashCode()
.
o
is equal to this.
TopScoreDocCollector
and TopDocs
instead. Hits will be removed in Lucene 3.0.
Collector
instead.
Collector
class. This class will be removed when HitCollector
is
removed.
TopScoreDocCollector
and TopDocs
instead. Hits will be removed in Lucene 3.0.
TopScoreDocCollector
and TopDocs
:

    TopScoreDocCollector collector = new TopScoreDocCollector(hitsPerPage);
    searcher.search(query, collector);
    ScoreDoc[] hits = collector.topDocs().scoreDocs;
    for (int i = 0; i < hits.length; i++) {
      int docId = hits[i].doc;
      Document d = searcher.doc(docId);
      // do something with current hit
      ...
log(numDocs/(docFreq+1)) + 1
.
Similarity.idfExplain(Term, Searcher)
Similarity.idfExplain(Collection, Searcher)
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
IndexWriter
) use this method to advance the stream to
the next token.
IndexDeletionPolicy
or IndexReader
.
index commits
.
indexOf(int)
but searches for a number of terms
at the same time.
IndexReader.indexExists(Directory)
instead
This method will be removed in the 3.0 release.
IndexReader.indexExists(Directory)
instead.
This method will be removed in the 3.0 release.
true
if an index exists at the specified directory.
Directory
.
IndexWriter
instead.
getTerms
at which the term with the specified
term
appears.
IndexReader.getFieldNames(FieldOption)
.
IndexSearcher.IndexSearcher(Directory, boolean)
instead
IndexSearcher.IndexSearcher(Directory, boolean)
instead
IndexSearcher.IndexSearcher(Directory, boolean)
instead
IndexWriter
creates and maintains an index.
IndexWriter.IndexWriter(Directory, Analyzer,
boolean, MaxFieldLength)
IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
IndexWriter.IndexWriter(Directory,
Analyzer, boolean, MaxFieldLength)
IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
d
.
IndexWriter.commit()
when needed.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead.
IndexWriter.IndexWriter(Directory, Analyzer, MaxFieldLength)
IndexWriter.commit()
when needed.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead.
IndexWriter.IndexWriter(Directory,
Analyzer, MaxFieldLength)
IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
d
, first creating it if it does not
already exist.
IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
IndexDeletionPolicy
, for the index in d
,
first creating it if it does not already exist.
IndexWriter.IndexWriter(Directory,Analyzer,IndexDeletionPolicy,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
IndexDeletionPolicy
, for the index in d
.
IndexWriter.IndexWriter(Directory,Analyzer,boolean,IndexDeletionPolicy,MaxFieldLength)
instead, and call IndexWriter.commit()
when needed.
IndexDeletionPolicy
, for
the index in d
.
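A minimal sketch of the recommended non-autoCommit pattern (dir, analyzer and doc are placeholders):

    IndexWriter writer = new IndexWriter(dir, analyzer,
        true, IndexWriter.MaxFieldLength.UNLIMITED); // create = true
    writer.addDocument(doc);
    writer.commit(); // make the changes visible to readers
    writer.close();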
IndexWriter.getReader()
has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits.
IndexWriter
constructors.
PriorityQueue.insertWithOverflow(Object)
instead, which
encourages objects reuse.
FieldCache
using getInts()
and makes those values
available as other numeric types, casting as needed.
shift
bits.
shift
bits.
CachingWrapperFilter
, if this DocIdSet
should be cached without copying it into a BitSet.
Similarity.coord(int,int)
is disabled in
scoring for this query instance.
true
if the range query is inclusive
IndexWriter.isLocked(Directory)
instead.
This method will be removed in the 3.0 release.
IndexReader.isLocked(Directory)
instead.
This method will be removed in the 3.0 release.
true
iff the index in the named directory is
currently locked.
IndexWriter.isLocked(Directory)
ASCIIFoldingFilter
which covers a superset
of Latin 1. This class will be removed in Lucene 3.0.
IndexReader.getTermFreqVector(int,String)
.
IndexReader.getTermFreqVector(int,String)
.
Character.isLetter(char)
.
Character.isWhitespace(char)
.
DocIdSetIterator
to access the set.
HitIterator
to navigate the Hits.
IndexDeletionPolicy
implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.
IndexReader.lastModified(Directory)
instead.
This method will be removed in the 3.0 release.
IndexReader.lastModified(Directory)
instead.
This method will be removed in the 3.0 release.
Field
.
1/sqrt(numTerms)
.
fieldName
matching
less than or equal to upperTerm
.
fieldName
matching
less than or equal to upperTerm
.
a
is less relevant than b
.
AttributeSource
.
AttributeSource.AttributeFactory
.
IndexWriter.DEFAULT_MAX_FIELD_LENGTH
FSDirectory
, and its subclasses), this method
silently filters its results to include only index
files. Please use Directory.listAll()
instead, which
does no filtering.
Field
every time the Document
is loaded, reading in the data as it is encountered.
FieldSelectorResult.LOAD
case, but immediately return from Field
loading for the Document
.
CompressionTools
is used for field compression.
LOCK_DIR
is unused
because the write.lock is now stored by default in the
index directory. If you really want to store locks
elsewhere, you can create your own SimpleFSLockFactory
(or NativeFSLockFactory
,
etc.) passing in your preferred lock directory. Then,
pass this LockFactory
instance to one of
the open
methods that take a
lockFactory
(for example, FSDirectory.open(File, LockFactory)
).
Lock.obtain(long)
to try
forever to obtain the lock.
Lock.obtain(long)
waits, in milliseconds,
in between attempts to acquire the lock.
write.lock
could not be acquired.
write.lock
could not be released.
VerifyingLockFactory
.
LogMergePolicy
that measures size of a
segment as the total byte size of the segment's files.
LogMergePolicy
that measures size of a
segment as the number of documents (not taking deletions
into account).
MergePolicy
that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.
shift
bits.
shift
bits.
AttributeSource
.
AttributeSource.AttributeFactory
.
SimpleAnalyzer
.
Lock
.
FieldSelector
based on a Map of field names to FieldSelectorResult
s.
CharFilter
that applies the mappings
contained in a NormalizeCharMap
to the character
stream, and correcting the resulting changes to the
offsets.
CharStream
.
Reader
.
MergePolicy.MergeException.MergePolicy.MergeException(String,Directory)
instead
MergePolicy.MergeException.MergePolicy.MergeException(Throwable,Directory)
instead
IndexWriter
uses an instance
implementing this interface to execute the merges
selected by a MergePolicy
.
Directory
implementation that uses
mmap for reading, and SimpleFSDirectory.SimpleFSIndexOutput
for writing.
NativeFSLockFactory
.
fieldName
matching
greater than or equal to lowerTerm
.
fieldName
matching
greater than or equal to lowerTerm
.
MultiPhraseQuery.add(Term[])
.
TermPositions
for multiple Term
s as
a single TermPositions
.
MultipleTermPositions
instance.
Searchables
.
Query
that matches documents
containing a subset of terms provided by a FilteredTermEnum
enumeration.
MultiTermQuery
, that exposes its
functionality as a Filter
.
MultiTermQuery
as a Filter.
CustomScoreQuery.toString(String)
.
LockFactory
using native OS file
locks.
NearSpansOrdered
, but for the unordered case.
FieldCache.getBytes(IndexReader,String)
.
FieldCache.getBytes(IndexReader,String,FieldCache.ByteParser)
.
FieldCache.getDoubles(IndexReader,String)
.
FieldCache.getDoubles(IndexReader,String,FieldCache.DoubleParser)
.
NumericRangeFilter
, that filters a double
range using the given precisionStep
.
NumericRangeFilter
, that queries a double
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a double
range using the given precisionStep
.
NumericRangeQuery
, that queries a double
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
FieldCache.getFloats(IndexReader,String)
.
FieldCache.getFloats(IndexReader,String,FieldCache.FloatParser)
.
NumericRangeFilter
, that filters a float
range using the given precisionStep
.
NumericRangeFilter
, that queries a float
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a float
range using the given precisionStep
.
NumericRangeQuery
, that queries a float
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
FieldCache.getInts(IndexReader,String)
.
FieldCache.getInts(IndexReader,String,FieldCache.IntParser)
.
NumericRangeFilter
, that filters an int
range using the given precisionStep
.
NumericRangeFilter
, that queries an int
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries an int
range using the given precisionStep
.
NumericRangeQuery
, that queries an int
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
FieldCache.getLongs(IndexReader,String)
.
FieldCache.getLongs(IndexReader,String,FieldCache.LongParser)
.
NumericRangeFilter
, that filters a long
range using the given precisionStep
.
NumericRangeFilter
, that queries a long
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a long
range using the given precisionStep
.
NumericRangeQuery
, that queries a long
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
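A minimal sketch of these factory methods, assuming the Lucene 2.9 signatures (field names are illustrative; Lucene 2.9 targets older Java, hence the explicit wrapper objects):

    // Explicit precisionStep of 4 for the query; default step for the filter.
    NumericRangeQuery priceQuery = NumericRangeQuery.newDoubleRange(
        "price", 4, new Double(10.0), new Double(100.0), true, true);
    NumericRangeFilter dayFilter = NumericRangeFilter.newLongRange(
        "timestamp", new Long(0L), new Long(86400000L), true, false);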
FieldCache.getShorts(IndexReader,String)
.
FieldCache.getShorts(IndexReader,String,FieldCache.ShortParser)
.
TeeSinkTokenFilter.SinkTokenStream
that receives all tokens consumed by this stream.
TeeSinkTokenFilter.SinkTokenStream
that receives all tokens consumed by this stream
that pass the supplied filter.
FieldCache.getStringIndex(org.apache.lucene.index.IndexReader, java.lang.String)
.
TopDocs
instance containing the given results.
TokenStream.incrementToken()
and AttributeSource
APIs should be used instead.
TokenStream.next()
) but will be slower than calling
TokenStream.next(Token)
or using the new TokenStream.incrementToken()
method with the new AttributeSource
API.
ConstantScoreQuery.ConstantScorer.nextDoc()
instead.
DocIdSetIterator.nextDoc()
instead. This will be removed in 3.0
FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.nextDoc()
instead.
FilteredDocIdSetIterator.nextDoc()
instead.
Hit
instance representing the next hit in Hits
.
ScoreCachingWrappingScorer.nextDoc()
instead.
SpanScorer.nextDoc()
instead.
OpenBitSetIterator.nextDoc()
instead.
DocIdSetIterator.NO_MORE_DOCS
if there are no more docs in the
set.
DocIdSetIterator.next()
.
FSDirectory
implementation that uses
java.nio's FileChannel's positional read, which allows
multiple threads to read from the same file without
synchronizing.
NativeFSLockFactory
.
Field
.
DocIdSetIterator.nextDoc()
, DocIdSetIterator.advance(int)
and
DocIdSetIterator.doc()
it means there are no more docs in the iterator.
Field.Index.NOT_ANALYZED_NO_NORMS
LockFactory
to disable locking entirely.
Character.toLowerCase(char)
.
MappingCharFilter
.
NumericUtils
instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use NumericField
and NumericRangeQuery
.
This class is included for use with existing
indices and will be removed in a future release.
NumericUtils
, e.g.
NumericUtils
, e.g.
NumericUtils.intToPrefixCoded(int)
, e.g.
NumericUtils.longToPrefixCoded(long)
, e.g.
Field
that enables indexing
of numeric values for efficient range filtering and
sorting.
precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
precisionStep
.
precisionStep
.
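A minimal indexing sketch, assuming the Lucene 2.9 NumericField API (the field name is illustrative):

    Document doc = new Document();
    doc.add(new NumericField("price").setDoubleValue(9.99)); // indexed for range queries
    writer.addDocument(doc);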
Filter
that only accepts numeric values within
a specified range.
Query
that matches numeric values within a
specified range.
TokenStream
for indexing numeric values that can be used by NumericRangeQuery
or NumericRangeFilter
.
precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
precisionStep
.
precisionStep
using the given AttributeSource
.
precisionStep
using the given
AttributeSource.AttributeFactory
.
NumericUtils.splitIntRange(org.apache.lucene.util.NumericUtils.IntRangeBuilder, int, int, int)
.
NumericUtils.splitLongRange(org.apache.lucene.util.NumericUtils.LongRangeBuilder, int, long, long)
.
IndexReader.FieldOption.OMIT_TERM_FREQ_AND_POSITIONS
IndexReader.open(Directory, boolean)
instead.
This method will be removed in the 3.0 release.
IndexReader.open(Directory, boolean)
instead.
This method will be removed in the 3.0 release.
IndexReader.open(Directory, boolean)
instead.
This method will be removed in the 3.0 release.
IndexReader.open(Directory, boolean)
instead.
This method will be removed in the 3.0 release.
IndexReader.open(Directory, boolean)
instead
This method will be removed in the 3.0 release.
IndexReader.open(IndexCommit, boolean)
instead.
This method will be removed in the 3.0 release.
IndexCommit
.
IndexReader.open(Directory, IndexDeletionPolicy, boolean)
instead.
This method will be removed in the 3.0 release.
IndexDeletionPolicy
.
IndexDeletionPolicy
.
IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean)
instead.
This method will be removed in the 3.0 release.
IndexDeletionPolicy
.
IndexDeletionPolicy
.
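A minimal sketch of the replacement call with the explicit readOnly flag:

    // true = open a read-only reader (better concurrency, no deletes allowed)
    IndexReader reader = IndexReader.open(dir, true);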
FSDirectory.open(File)
, but allows you to
also specify a custom LockFactory
.
IndexWriter.optimize()
, except you can specify
whether the call should block until the optimize
completes.
IndexWriter.optimize(int)
, except you can
specify whether the call should block until the
optimize completes.
FieldCache
using getStringIndex().
Document
for indexing and searching.
CheckIndex.setInfoStream(java.io.PrintStream)
per instance,
instead.
Searchables
.
Query
.
CheckIndex.checkIndex(List)
) was called with non-null
argument).
SpanNearQuery
except that it factors
in the value of the payloads located at each of the positions where the
TermSpans
occurs.
SpanTermQuery
except that it factors
in the value of the payload located at each of the positions where the
Term
occurs.
TokenStream
, used in phrase
searching.
Collector
implementation which wraps another
Collector
and makes sure only documents with
scores > 0 are collected.
NumericField
, NumericTokenStream
,
NumericRangeQuery
, and NumericRangeFilter
as default
prefix
.
PayloadFunction
to score the payloads, but
can be overridden to do other things.
PriorityQueue.add(Object)
which returns the new top object,
saving an additional call to PriorityQueue.top()
.
query
.
1/sqrt(sumOfSquaredWeights)
.
query
.
Directory
implementation.
Directory
.
RAMDirectory
instance from a different
Directory
implementation.
RAMDirectory.RAMDirectory(Directory)
instead
RAMDirectory.RAMDirectory(Directory)
instead
IndexOutput
implementation.
TermRangeFilter
for term ranges or
NumericRangeFilter
for numeric ranges instead.
This class will be removed in Lucene 3.0.
collator
parameter will cause every single
index Term in the Field referenced by lowerTerm and/or upperTerm to be
examined.
TermRangeQuery
for term ranges or
NumericRangeQuery
for numeric ranges instead.
This class will be removed in Lucene 3.0.
lowerTerm
but less than upperTerm
.
lowerTerm
but less than upperTerm
.
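A minimal sketch of the recommended replacement, assuming the Lucene 2.9 TermRangeQuery constructor (field and bounds are illustrative):

    // Terms in "title" from "a" (inclusive) up to "m" (exclusive).
    TermRangeQuery titleRange = new TermRangeQuery("title", "a", "m", true, false);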
IndexReader
s.
null
for numeric fields
Token.clear()
,
Token.setTermBuffer(char[], int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
,
Token.setType(java.lang.String)
Token.clear()
,
Token.setTermBuffer(char[], int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
on Token.DEFAULT_TYPE
Token.clear()
,
Token.setTermBuffer(String)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
Token.clear()
,
Token.setTermBuffer(String, int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
Token.clear()
,
Token.setTermBuffer(String)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
on Token.DEFAULT_TYPE
Token.clear()
,
Token.setTermBuffer(String, int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
on Token.DEFAULT_TYPE
IndexReader.reopen()
, except you can change the
readOnly status of the original reader.
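A minimal sketch, assuming the Lucene 2.9 reopen(boolean) API:

    IndexReader newReader = reader.reopen(true); // reopen as read-only
    if (newReader != reader) {
      reader.close();        // a new reader was returned; release the old one
      reader = newReader;
    }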
TeeSinkTokenFilter.SinkTokenStream.reset()
.
StandardAnalyzer.tokenStream(java.lang.String, java.io.Reader)
instead
FieldCache
using getStringIndex()
and reverses the order.
IndexWriter
without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called).
Lock.With.doBody()
while lock is obtained.
Scorer.score(Collector)
instead.
Scorer.score(Collector, int, int)
instead.
Scorer
which wraps another scorer and caches the score of the
current document.
FieldComparator
Similarity.scorePayload(int, String, int, int, byte[], int, int)
Scorer
which scores documents in/out-of order according
to scoreDocsInOrder
.
BooleanClause.Occur.SHOULD
clause in a
BooleanQuery, and keeps the scores as computed by the
query.
IndexSearcher.search(Weight, Filter, int, Sort)
, but you choose
whether or not the fields in the returned FieldDoc
instances should
be set by specifying fillFields.
FieldDoc
, though in 3.0 it will no longer track
document scores.
Searchable.search(Weight, Filter, Collector)
instead.
Searchable.search(Weight, Filter, int)
instead.
Searcher.search(Query, Filter, int)
instead.
Searcher.search(Query, Filter, int)
instead.
Searcher.search(Query, Filter, int, Sort)
instead.
Searcher.search(Query, Filter, int, Sort)
instead.
Searcher.search(Query, Collector)
instead.
Searcher.search(Query, Filter, Collector)
instead.
n
hits for query
, applying filter
if non-null.
n
hits for query
.
Searcher.search(Weight, Filter, Collector)
instead.
TermEnum
.
CheckIndex.Status.SegmentInfoStatus
instances, detailing status of each segment.
MergeScheduler
that simply does each merge
sequentially, using the current thread.
bit
to one.
Weight.scoresDocsOutOfOrder()
is used.
true
to allow leading wildcard characters.
Field
names to load and the Set of Field
names to load lazily.
b
.
FSDirectory.open(File, LockFactory)
or a constructor
that takes a LockFactory
and supply
NoLockFactory.getNoLockFactory()
. This setting does not work
with FSDirectory.open(File)
; only the deprecated getDirectory
methods respect this setting.
MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE
is used.
double
value.
double
value.
true
, this StopFilter will preserve
positions of the incoming tokens (ie, accumulate and
set position increments of the removed stop tokens).
true
to enable position increments in result query.
float
value.
float
value.
IndexModifier.getMaxFieldLength()
is reached will be printed to this.
FieldCacheSanityChecker
.
int
value.
int
value.
long
value.
long
value.
Integer.MAX_VALUE
for
64 bit JVMs and 256 MiBytes for 32 bit JVMs) used for memory mapping.
MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT
when creating a PrefixQuery, WildcardQuery or RangeQuery.
AbstractField.setOmitTermFreqAndPositions(boolean)
AbstractField.setOmitTermFreqAndPositions(boolean)
BufferedIndexInput.readBytes(byte[], int, int)
.
Collector.collect(int)
.
SortField
and then use Sort.setSort(SortField)
SortField
and then use Sort.setSort(SortField)
SortField
s and then use Sort.setSort(SortField[])
MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE
is used.
IndexReader.open(Directory, IndexDeletionPolicy, boolean, int)
to specify the required TermInfos index divisor instead.
Token.setTermBuffer(char[], int, int)
or
Token.setTermBuffer(String)
or
Token.setTermBuffer(String, int, int)
.
QueryParser.setMultiTermRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod)
instead.
BooleanQuery.setAllowDocsOutOfOrder(boolean)
instead.
IndexInput
, that is
mentioned in the bug report.
Field.setTokenStream(org.apache.lucene.analysis.TokenStream)
FieldCache
using getShorts()
and makes those values
available as other numeric types, casting as needed.
Similarity
that delegates all methods to another.
Analyzer
that filters LetterTokenizer
with LowerCaseFilter
FSDirectory
using java.io.RandomAccessFile.
NativeFSLockFactory
.
LockFactory
using File.createNewFile()
.
LockFactory
for a single in-process instance,
meaning all locking will take place through this one instance.
TeeSinkTokenFilter
instead.
Field
rather than its value.
FieldSelectorResult.SIZE
but immediately break from the field loading loop, i.e., stop loading further fields, after the size is loaded
IndexReader.terms(Term)
to create a new TermEnum positioned at a
given term.
ConstantScoreQuery.ConstantScorer.advance(int)
instead.
DocIdSetIterator.advance(int)
instead. This will be removed in 3.0
FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.advance(int)
instead.
FilteredDocIdSetIterator.advance(int)
instead.
ScoreCachingWrappingScorer.advance(int)
instead.
SpanScorer.advance(int)
instead.
OpenBitSetIterator.advance(int)
instead.
1 / (distance + 1)
.
IndexDeletionPolicy
that wraps around any other
IndexDeletionPolicy
and adds the ability to hold and
later release a single "snapshot" of an index.
SortField
and then use Sort.Sort(SortField)
SortField
and then use Sort.Sort(SortField)
SortField
s and then use Sort.Sort(SortField[])
int
back to a float
.
long
back to a double
.
FieldComparatorSource
instead.
FieldComparatorSource
instead.
TermVectorEntry
s.
FieldCache.Parser
.
FieldCache.Parser
.
SpanFilterResult.SpanFilterResult(DocIdSet, List)
instead
match
whose end
position is less than or equal to end
.
include
which
have no overlap with spans from exclude
.
query
.
IndexReader
tries to make changes to the index (via IndexReader.deleteDocument(int)
, IndexReader.undeleteAll()
or IndexReader.setNorm(int, java.lang.String, byte)
)
but changes have already been committed to the index
since this reader was instantiated.
StandardTokenizer
with StandardFilter
, LowerCaseFilter
and StopFilter
, using a list of
English stop words.
StandardAnalyzer.StandardAnalyzer(Version)
instead.
StandardAnalyzer.STOP_WORDS
).
StandardAnalyzer.StandardAnalyzer(Version, Set)
instead
StandardAnalyzer.StandardAnalyzer(Version, Set)
instead
StandardAnalyzer.StandardAnalyzer(Version, File)
instead
StandardAnalyzer.StandardAnalyzer(Version, Reader)
instead
StandardTokenizer
.
StandardTokenizer
.
StandardTokenizer
.
AttributeSource
.
AttributeSource.AttributeFactory
StandardAnalyzer.STOP_WORDS_SET
instead
LetterTokenizer
with LowerCaseFilter
and StopFilter
.
StopAnalyzer.StopAnalyzer(boolean)
instead
StopAnalyzer.StopAnalyzer(Set, boolean)
instead
StopAnalyzer.StopAnalyzer(Set, boolean)
instead
StopAnalyzer.StopAnalyzer(Set, boolean)
instead
StopAnalyzer.StopAnalyzer(File, boolean)
instead
StopAnalyzer.StopAnalyzer(Reader, boolean)
instead
StopFilter.StopFilter(boolean, TokenStream, String[])
instead
StopFilter.StopFilter(boolean, TokenStream, Set)
instead.
StopFilter.StopFilter(boolean, TokenStream, String[], boolean)
instead
StopFilter.StopFilter(boolean, TokenStream, Set, boolean)
instead.
StopFilter.StopFilter(boolean, TokenStream, Set, boolean)
instead
StopFilter.StopFilter(boolean, TokenStream, Set)
instead
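A minimal sketch of the non-deprecated form, assuming the Lucene 2.9 StopFilter signature:

    // enablePositionIncrements = true keeps gaps where stop words were removed.
    TokenStream ts = new StopFilter(true,
        new WhitespaceTokenizer(reader), StopAnalyzer.ENGLISH_STOP_WORDS_SET);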
NumberTools.longToString(long)
timeToString
or
dateToString
back to a time, represented as a
Date object.
NumberTools.longToString(long)
back to a
long.
timeToString
or
dateToString
back to a time, represented as the
number of milliseconds since January 1, 1970, 00:00:00 GMT.
Field.Store.YES
is chosen).
n
within its
sub-index.
n
in the
array used to construct this searcher/reader.
n
in the array
used to construct this searcher.
AttributeSource
states to store in the sink.
TeeSinkTokenFilter
instead.
term
.
TermDocs
enumerator.
term
.
TermPositions
enumerator.
TermFreqVector
to provide additional information about
positions in which each of the terms is found.
t
.
collator
parameter will cause every single
index Term in the Field referenced by lowerTerm and/or upperTerm to be
examined.
lowerTerm
but less/equal than upperTerm
.
lowerTerm
but less/equal than upperTerm
.
lowerTerm
but less/equal than upperTerm
.
Token.termBuffer()
and Token.termLength()
directly instead. If you really need a
String, use Token.term()
TermVectorEntry
s first by frequency and then by
the term (case-sensitive).
IndexReader.getTermFreqVector(int,String)
.
TermPositionVector
's
offset information.
sqrt(freq)
.
TimeLimitingCollector
instead, which extends the new
Collector
. This class will be removed in 3.0.
TimeLimitingCollector
is used to time out search requests that
take longer than the maximum allowed search time limit.
Collector
with a specified timeout.
Token.Token(char[], int, int, int, int)
instead.
Token.Token(char[], int, int, int, int)
and Token.setType(String)
instead.
Token.Token(char[], int, int, int, int)
and Token.setFlags(int)
instead.
StandardTokenizer.TOKEN_TYPES
instead
Field.Index.ANALYZED
StandardTokenizer
filtered by a StandardFilter
, a LowerCaseFilter
and a StopFilter
.
TokenStream
enumerates the sequence of tokens, either from
Field
s of a Document
or from query text.
Attribute
instances.
NumericTokenStream
for indexing the numeric value.
TopScoreDocCollector
instead, which has better performance.
TopDocs
output.
Collector
that sorts by SortField
using
FieldComparator
s.
TopFieldCollector
instead.
Collector
implementation that collects the top-scoring hits,
returning them as a TopDocs
.
field
assumed to be the
default field and omitted.
Object.toString()
.
Field.Index.NOT_ANALYZED
Integer.MAX_VALUE
.
IndexWriter.unlock(Directory)
instead.
This method will be removed in the 3.0 release.
true
, if this platform supports unmapping mmapped files.
CharArraySet
.
term
and then adding the new
document.
term
and then adding the new
document.
ValueSource
.
LockFactory
that wraps another LockFactory
and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).
WhitespaceTokenizer
.
AttributeSource
.
AttributeSource.AttributeFactory
.
WildcardTermEnum
.
name
in Directory
d
, in a format that can be read by the constructor BitVector.BitVector(Directory, String)
.
IndexOutput.writeString(java.lang.String)
IndexOutput.writeString(java.lang.String)