BitSet.and(java.util.BitSet).
BitSet.andNot(java.util.BitSet).
? and * don't get removed from the search terms.
IndexWriter.getAnalyzer().
Field.
Field.
n bits.
name in Directory d, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String) method.
BooleanQuery.getMaxClauseCount() clauses.
BrazilianAnalyzer.BRAZILIAN_STOP_WORDS).
IndexInput.
IndexOutput.
(x <= min) ? base : sqrt(x+(base**2)-min) ... but with a special case check for 0.
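The piecewise term-frequency curve quoted above can be spelled out directly. The following is a minimal plain-Java sketch of that formula, not the library class itself; the method and class names are illustrative, and base and min are just the parameters named in the formula.

    // Sketch of the piecewise tf curve described above: flat at `base` up to `min`,
    // then growing as a square root; a frequency of 0 always scores 0.
    public final class BaselineTf {
        public static float baselineTf(float freq, float base, float min) {
            if (freq == 0.0f) {
                return 0.0f;                       // special case check for 0
            }
            return (freq <= min)
                ? base                                             // flat region: x <= min
                : (float) Math.sqrt(freq + (base * base) - min);   // sqrt growth past min
        }

        public static void main(String[] args) {
            // With base = 1.5 and min = 3, frequencies 0..5 map to:
            for (int x = 0; x <= 5; x++) {
                System.out.println(x + " -> " + baselineTf(x, 1.5f, 3f));
            }
        }
    }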
Filter.bits(org.apache.lucene.index.IndexReader).
CJKAnalyzer.STOP_WORDS.
Filters to be chained.
CzechAnalyzer.CZECH_STOP_WORDS).
bit to zero.
overlap / maxOverlap.
query
Integer.MAX_VALUE.
DateTools instead. This class is included for use with existing indices and will be removed in a future release.
Encoder implementation that does not modify the output.
Document from a File.
DutchAnalyzer.DUTCH_STOP_WORDS).
docNum.
docNum.
term.
term.
docNum.
i.
t.
term.
term.
nth Document in this index.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
\.
\.
doc scored against weight.
doc scored against query.
Directory as a directory of files.
FilterIndexReader contains another IndexReader, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
TermDocs implementations.
TermEnum implementations.
TermPositions implementations.
Highlighter class.
FrenchAnalyzer.FRENCH_STOP_WORDS).
minimumSimilarity to term.
FuzzyQuery(term, minimumSimilarity, 0).
FuzzyQuery(term, 0.5f, 0).
reader which share a prefix of length prefixLength with term and which have a fuzzy similarity > minSimilarity.
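The fragments above describe the fuzzy-query constructors: FuzzyQuery(term, minimumSimilarity, prefixLength), with convenience forms that default minimumSimilarity to 0.5f and prefixLength to 0. A usage sketch, assuming the Lucene 2.x FuzzyQuery(Term, float, int) constructors; the field name and term text are placeholders.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;

    public class FuzzyQueryExample {
        public static void main(String[] args) {
            Term term = new Term("contents", "lucene");   // hypothetical field and text

            // Equivalent to FuzzyQuery(term, 0.5f, 0): default similarity, no required prefix.
            FuzzyQuery defaults = new FuzzyQuery(term);

            // Match terms with similarity >= 0.7 that share a 2-character prefix with "lucene".
            FuzzyQuery tuned = new FuzzyQuery(term, 0.7f, 2);

            System.out.println(defaults + " / " + tuned);
        }
    }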
GERMAN_STOP_WORDS).
true if bit is one and false if it is zero.
field to see if it contains integers, floats or strings, and then calls one of the other methods in this class to get the values.
HtmlDocument object.
field and calls the given SortComparator to get the sort values.
Document from an InputStream.
QueryParser.getFieldQuery(String,String).
PrecedenceQueryParser.getFieldQuery(String,String).
Fields with the given name.
field as floats and returns an array of size reader.maxDoc() of the value each document has in the given field.
field as floats and returns an array of size reader.maxDoc() of the value each document has in the given field.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
PrecedenceQueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
IndexReader this searches.
field as integers and returns an array of size reader.maxDoc() of the value each document has in the given field.
field as integers and returns an array of size reader.maxDoc() of the value each document has in the given field.
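The getInts/getFloats descriptions above (an array of size reader.maxDoc() holding each document's value for a field) correspond to the FieldCache API. A small sketch, assuming the Lucene 2.x FieldCache.DEFAULT entry point; "price" is a hypothetical field name.

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.FieldCache;

    public class FieldCacheExample {
        public static void printPrices(IndexReader reader) throws java.io.IOException {
            // One entry per document (array length == reader.maxDoc()),
            // holding the value each document has in the "price" field.
            float[] prices = FieldCache.DEFAULT.getFloats(reader, "price");
            for (int docNum = 0; docNum < prices.length; docNum++) {
                System.out.println("doc " + docNum + " price=" + prices[docNum]);
            }
        }
    }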
maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
PrecedenceQueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
Searchables this searches.
field and returns an array of them in natural order, along with an array telling which element in the term array each document uses.
field and returns an array of size reader.maxDoc() containing the value each document has in the given field.
SynonymTokenFilter.
HtmlDocument object.
HighFreqTerms class extracts terms and their frequencies out of an existing Lucene index.
Fragmenter, Scorer, Formatter, Encoder and tokenizers.
HitIterator to provide a lazily loaded hit from Hits.
Hits that provides lazy fetching of each document.
HtmlDocument class creates a Lucene Document from an HTML document.
HtmlDocument from a File.
HtmlDocument from an InputStream.
Directory.
path.
path.
d.
log(numDocs/(docFreq+1)) + 1.
true if the lower endpoint is inclusive
true if the upper endpoint is inclusive
true if an index exists at the specified directory.
true if an index exists at the specified directory.
true if an index exists at the specified directory.
getTerms at which the term with the specified term appears.
indexOf(int) but searches for a number of terms at the same time.
Similarity.coord(int,int) is disabled in scoring for this query instance.
true if the range query is inclusive
true iff the index in the named directory is currently locked.
true iff the index in the named directory is currently locked.
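The "index exists at the specified directory" and "index in the named directory is currently locked" fragments above map to static checks on IndexReader. A sketch assuming the Lucene 2.x IndexReader.indexExists(Directory) and IndexReader.isLocked(Directory) signatures; the FSDirectory.getDirectory(String) call and the path are placeholders and may differ by release.

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class IndexStatusExample {
        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.getDirectory("/path/to/index"); // hypothetical path
            System.out.println("exists: " + IndexReader.indexExists(dir)); // true if an index exists there
            System.out.println("locked: " + IndexReader.isLocked(dir));    // true iff the index is currently locked
            dir.close();
        }
    }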
IndexReader.getTermFreqVector(int,String).
Character.isLetter(char).
Character.isWhitespace(char).
Character.isLetter(char).
HitIterator to navigate the Hits.
org.apache.lucene.lockDir or java.io.tmpdir system property
fieldName matching less than or equal to upperTerm.
1/sqrt( steepness * (abs(x-min) + abs(x-max) - (max-min)) + 1 )
1/sqrt(numTerms).
a is less relevant than b.
Directory implementation that uses mmap for input.
fieldName matching greater than or equal to lowerTerm.
MultiPhraseQuery.add(Term[]).
Searchables.
Query that matches documents containing a subset of terms provided by a FilteredTermEnum enumeration.
term.
MultipleTermPositions here.
MultipleTermPositions instance.
HtmlDocument on the files specified on the command line.
SimpleAnalyzer.
SimpleAnalyzer.
Lock.
Lock with the specified name.
Lock.
"\\W+"; Divides text at non-letters (Character.isLetter(c))
Fragmenter implementation which does not fragment the text.
Hit instance representing the next hit in Hits.
Character.isLetter(char).
BitSet.or(java.util.BitSet).
TokenFilter and Analyzer implementations that use Snowball stemmers.
Searchables.
Reader, that can flexibly separate text into terms via a regular expression Pattern (with behaviour identical to String.split(String)), and that combines the functionality of LetterTokenizer, LowerCaseTokenizer, WhitespaceTokenizer, StopFilter into a single efficient multi-purpose class.
prefix.
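The long description above is recognizably the pattern-based analyzer from the memory contrib: it separates text via a regular expression Pattern with String.split semantics, and the "\\W+" constant mentioned earlier is its non-word split. The sketch below assumes a PatternAnalyzer(Pattern, boolean, Set) constructor and the Token-based TokenStream API of Lucene 2.x; treat those signatures as assumptions.

    import java.io.StringReader;
    import java.util.regex.Pattern;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.index.memory.PatternAnalyzer;

    public class PatternAnalyzerExample {
        public static void main(String[] args) throws Exception {
            // Assumed constructor: split on non-word characters (like String.split("\\W+")),
            // lower-case the tokens, and use no stop word set.
            Analyzer analyzer = new PatternAnalyzer(Pattern.compile("\\W+"), true, null);

            TokenStream stream = analyzer.tokenStream("body",
                    new StringReader("The Quick, Brown Fox!"));
            for (Token token = stream.next(); token != null; token = stream.next()) {
                System.out.println(token.termText());   // prints: the quick brown fox
            }
        }
    }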
Query.
Query.
query.
Scorer implementation which scores text fragments by the number of unique query terms found.
1/sqrt(sumOfSquaredWeights).
Directory implementation.
Directory.
RAMDirectory instance from a different Directory implementation.
RAMDirectory instance from the FSDirectory.
RAMDirectory instance from the FSDirectory.
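The RAMDirectory fragments above describe the memory-resident Directory implementation and its copy constructors (from another Directory, or from an FSDirectory). A sketch assuming the Lucene 2.x RAMDirectory(Directory) constructor; the FSDirectory.getDirectory(String) call and the index path are placeholders.

    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.RAMDirectory;

    public class LoadIndexIntoRam {
        public static void main(String[] args) throws Exception {
            // Open an existing on-disk index (placeholder path), then copy it into RAM
            // so subsequent searches avoid disk I/O.
            Directory onDisk = FSDirectory.getDirectory("/path/to/index");
            Directory inMemory = new RAMDirectory(onDisk);
            onDisk.close();

            // ... open an IndexSearcher on inMemory here ...
            inMemory.close();
        }
    }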
IndexOutput implementation.
lowerTerm but less than upperTerm.
term.
ReqExclScorer.
ReqOptScorer.
Lock.With.doBody() while lock is obtained.
NumberTools.longToString(long)
Similarity that delegates all methods to another.
Fragmenter implementation which breaks text up into same-size fragments with no concerns over spotting sentence boundaries.
Encoder implementation to escape text for HTML output
Formatter implementation to highlight terms with a pre and post tag
StandardTokenizer with StandardFilter, LowerCaseFilter, StopFilter and SnowballFilter.
field then by index order (document number).
field then by index order (document number).
AUTO).
AUTO).
match whose end position is less than or equal to end.
include which have no overlap with spans from exclude.
RegexQuery allowing regular expression queries to be nested within other SpanQuery subclasses.
StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words.
StandardAnalyzer.STOP_WORDS).
StandardTokenizer.
SynExpand.expand(...)).
query.
query and filter.
query sorted by sort.
query and filter, sorted by sort.
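The last four fragments above correspond to the searcher's search overloads: query alone, query plus filter, query sorted by sort, and query plus filter sorted by sort. A usage sketch assuming the Lucene 2.x Hits-returning IndexSearcher.search overloads, RangeFilter.Less, and Sort(String); the index path, field names, and values are placeholders.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.RangeFilter;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.TermQuery;

    public class SearchOverloadsExample {
        public static void main(String[] args) throws Exception {
            IndexSearcher searcher = new IndexSearcher("/path/to/index");   // placeholder path
            Query query = new TermQuery(new Term("contents", "lucene"));
            Filter filter = RangeFilter.Less("date", "20240101");           // docs with date <= upperTerm
            Sort sort = new Sort("date");                                   // by field, then index order

            Hits all      = searcher.search(query);                // query
            Hits filtered = searcher.search(query, filter);        // query and filter
            Hits sorted   = searcher.search(query, sort);          // query sorted by sort
            Hits both     = searcher.search(query, filter, sort);  // query and filter, sorted by sort

            System.out.println(all.length() + " hits in total");
            searcher.close();
        }
    }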
TermEnum.
bit to one.
b.
IndexModifier.getMaxFieldLength() is reached will be printed to this.
RegexCapabilities implementation is used by this instance.
field then by index order (document number).
field possibly in reverse, then by index order (document number).
1 / (distance + 1).
timeToString or dateToString back to a time, represented as a Date object.
NumberTools.longToString(long) back to a long.
timeToString or dateToString back to a time, represented as the number of milliseconds since January 1, 1970, 00:00:00 GMT.
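The deprecation note earlier points to DateTools, and the fragments above describe converting a string produced by timeToString or dateToString back to a Date or to milliseconds since the epoch. A round-trip sketch assuming the Lucene 2.x DateTools API.

    import java.util.Date;
    import org.apache.lucene.document.DateTools;

    public class DateToolsRoundTrip {
        public static void main(String[] args) throws Exception {
            Date now = new Date();

            // Encode with day granularity; the result sorts lexicographically by time.
            String encoded = DateTools.dateToString(now, DateTools.Resolution.DAY);

            // Convert back: as a Date object, or as milliseconds since January 1, 1970, 00:00:00 GMT.
            Date decoded = DateTools.stringToDate(encoded);
            long millis  = DateTools.stringToTime(encoded);

            System.out.println(encoded + " -> " + decoded + " (" + millis + " ms)");
        }
    }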
n within its sub-index.
n in the array used to construct this searcher.
TermFreqVector to provide additional information about positions in which each of the terms is found.
t.
HitCollector implementation that collects the top-scoring documents, returning them as a TopDocs.
HitCollector implementation that collects the top-sorting documents, returning them as a TopFieldDocs.
term.
TermDocs enumerator.
term.
TermPositions enumerator.
sqrt(freq).
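Several of the formulas scattered through this index belong to the default scoring model: coord is overlap / maxOverlap, idf is log(numDocs/(docFreq+1)) + 1, lengthNorm is 1/sqrt(numTerms), queryNorm is 1/sqrt(sumOfSquaredWeights), sloppyFreq is 1 / (distance + 1), and tf is sqrt(freq). The sketch below spells them out as a Similarity subclass, assuming the Lucene 2.x Similarity method signatures.

    import org.apache.lucene.search.Similarity;

    // Spells out the default-style scoring formulas quoted in this index.
    public class SpelledOutSimilarity extends Similarity {
        public float coord(int overlap, int maxOverlap)         { return overlap / (float) maxOverlap; }
        public float idf(int docFreq, int numDocs)               { return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0); }
        public float lengthNorm(String fieldName, int numTerms)  { return (float) (1.0 / Math.sqrt(numTerms)); }
        public float queryNorm(float sumOfSquaredWeights)        { return (float) (1.0 / Math.sqrt(sumOfSquaredWeights)); }
        public float sloppyFreq(int distance)                    { return 1.0f / (distance + 1); }
        public float tf(float freq)                              { return (float) Math.sqrt(freq); }
    }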
field assumed to be the default field and omitted.
StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.
StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.
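The two fragments above describe a token stream built from a StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter; that is what StandardAnalyzer.tokenStream produces. A minimal sketch assuming the Lucene 2.x Token-based TokenStream API; the field name and sample text are placeholders.

    import java.io.StringReader;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class StandardAnalyzerExample {
        public static void main(String[] args) throws Exception {
            StandardAnalyzer analyzer = new StandardAnalyzer();  // default English stop words
            TokenStream stream = analyzer.tokenStream("contents",
                    new StringReader("The Quick Brown Fox jumped over the lazy dogs"));

            // StandardFilter + LowerCaseFilter + StopFilter: lower-cased tokens, stop words removed.
            for (Token token = stream.next(); token != null; token = stream.next()) {
                System.out.println(token.termText());
            }
            stream.close();
        }
    }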
tokenStream(String, String) and is less efficient than tokenStream(String, String).
"\\s+"; Divides text at whitespaces (Character.isWhitespace(c))
WildcardTermEnum.
name in Directory d, in a format that can be read by the constructor BitVector.BitVector(Directory, String).
BitSet.xor(java.util.BitSet).