Uses of Annotation Interface org.apache.lucene.util.IgnoreRandomChains

Packages that use IgnoreRandomChains

org.apache.lucene.analysis
    Text analysis.
org.apache.lucene.analysis.boost
    Provides various convenience classes for creating boosts on Tokens.
org.apache.lucene.analysis.cjk
    Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
org.apache.lucene.analysis.commongrams
    Construct n-grams for frequently occurring terms and phrases.
org.apache.lucene.analysis.core
    Basic, general-purpose analysis components.
org.apache.lucene.analysis.ja
    Analyzer for Japanese.
org.apache.lucene.analysis.ko
    Analyzer for Korean.
org.apache.lucene.analysis.miscellaneous
    Miscellaneous TokenStreams.
org.apache.lucene.analysis.path
    Analysis components for path-like strings such as filenames.
org.apache.lucene.analysis.phonetic
    Analysis components for phonetic search.
org.apache.lucene.analysis.wikipedia
    Tokenizer that is aware of Wikipedia syntax.
Uses of IgnoreRandomChains in org.apache.lucene.analysis

Classes in org.apache.lucene.analysis with annotations of type IgnoreRandomChains:

final class CachingTokenFilter
    This class can be used if the token attributes of a TokenStream are intended to be consumed more than once.
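For orientation, a minimal sketch of the two-pass pattern this class enables, assuming lucene-core and lucene-analysis-common on the classpath (the class name and sample text are illustrative):

    import java.io.StringReader;
    import org.apache.lucene.analysis.CachingTokenFilter;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class CachingDemo {
      public static void main(String[] args) throws Exception {
        Tokenizer source = new WhitespaceTokenizer();
        source.setReader(new StringReader("caching lets you replay tokens"));
        try (CachingTokenFilter cache = new CachingTokenFilter(source)) {
          CharTermAttribute term = cache.addAttribute(CharTermAttribute.class);
          for (int pass = 1; pass <= 2; pass++) {
            // the first reset() resets the wrapped stream; later calls rewind the cache
            cache.reset();
            while (cache.incrementToken()) {
              System.out.println("pass " + pass + ": " + term);
            }
          }
          cache.end();
        }
      }
    }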
Uses of IgnoreRandomChains in org.apache.lucene.analysis.boost

Classes in org.apache.lucene.analysis.boost with annotations of type IgnoreRandomChains:

final class DelimitedBoostTokenFilter
    Characters before the delimiter are the "token", those after are the boost.
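A minimal usage sketch, assuming lucene-analysis-common; the '|' delimiter, class name, and sample tokens are illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.boost.DelimitedBoostTokenFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.search.BoostAttribute;

    public class BoostDemo {
      public static void main(String[] args) throws Exception {
        Tokenizer source = new WhitespaceTokenizer();
        source.setReader(new StringReader("lucene|2.0 search"));
        TokenStream ts = new DelimitedBoostTokenFilter(source, '|');
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        BoostAttribute boost = ts.addAttribute(BoostAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          // "lucene" carries boost 2.0; "search" keeps the default boost of 1.0
          System.out.println(term + " boost=" + boost.getBoost());
        }
        ts.end();
        ts.close();
      }
    }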
Uses of IgnoreRandomChains in org.apache.lucene.analysis.cjk

Classes in org.apache.lucene.analysis.cjk with annotations of type IgnoreRandomChains:

final class CJKBigramFilter
    Forms bigrams of CJK terms that are generated from StandardTokenizer or ICUTokenizer.
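A minimal usage sketch, assuming lucene-core and lucene-analysis-common; the sample Japanese text is illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.cjk.CJKBigramFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class CjkBigramDemo {
      public static void main(String[] args) throws Exception {
        Tokenizer source = new StandardTokenizer(); // emits single CJK characters
        source.setReader(new StringReader("多くの学生"));
        TokenStream ts = new CJKBigramFilter(source); // recombines them into overlapping bigrams such as 多く, くの
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term);
        }
        ts.end();
        ts.close();
      }
    }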
Uses of IgnoreRandomChains in org.apache.lucene.analysis.commongrams

Classes in org.apache.lucene.analysis.commongrams with annotations of type IgnoreRandomChains:

final class CommonGramsFilter
    Construct bigrams for frequently occurring terms while indexing.
final class CommonGramsQueryFilter
    Wrap a CommonGramsFilter optimizing phrase queries by only returning single words when they are not a member of a bigram.
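A minimal usage sketch, assuming lucene-analysis-common; the common-word set and sample text are illustrative:

    import java.io.StringReader;
    import java.util.Arrays;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.commongrams.CommonGramsFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class CommonGramsDemo {
      public static void main(String[] args) throws Exception {
        CharArraySet common = new CharArraySet(Arrays.asList("the", "of"), true); // ignore case
        Tokenizer source = new WhitespaceTokenizer();
        source.setReader(new StringReader("the quick fox"));
        TokenStream ts = new CommonGramsFilter(source, common);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term); // the, the_quick, quick, fox
        }
        ts.end();
        ts.close();
      }
    }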
Uses of IgnoreRandomChains in org.apache.lucene.analysis.core

Classes in org.apache.lucene.analysis.core with annotations of type IgnoreRandomChains:

final class LowerCaseFilter
    Normalizes token text to lower case.
final class StopFilter
    Removes stop words from a token stream.
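A minimal sketch chaining both filters, assuming lucene-analysis-common; the stop set and sample text are illustrative:

    import java.io.StringReader;
    import java.util.Arrays;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.LowerCaseFilter;
    import org.apache.lucene.analysis.core.StopFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class CoreFiltersDemo {
      public static void main(String[] args) throws Exception {
        CharArraySet stops = new CharArraySet(Arrays.asList("the", "a"), true);
        Tokenizer source = new WhitespaceTokenizer();
        source.setReader(new StringReader("The Quick Fox"));
        // lowercase first so the stop set matches "The"
        TokenStream ts = new StopFilter(new LowerCaseFilter(source), stops);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term); // quick, fox
        }
        ts.end();
        ts.close();
      }
    }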
Uses of IgnoreRandomChains in org.apache.lucene.analysis.ja

Classes in org.apache.lucene.analysis.ja with annotations of type IgnoreRandomChains:

final class JapaneseCompletionFilter
    A TokenFilter that adds Japanese romanized tokens to the term attribute.
class JapaneseIterationMarkCharFilter
    Normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
class JapaneseNumberFilter
    A TokenFilter that normalizes Japanese numbers (kansūji) to regular Arabic decimal numbers in half-width characters.

Constructors in org.apache.lucene.analysis.ja with annotations of type IgnoreRandomChains:

JapaneseTokenizer(AttributeFactory factory, TokenInfoDictionary systemDictionary, UnknownDictionary unkDictionary, ConnectionCosts connectionCosts, UserDictionary userDictionary, boolean discardPunctuation, boolean discardCompoundToken, JapaneseTokenizer.Mode mode)
    Create a new JapaneseTokenizer, supplying a custom system dictionary and unknown dictionary.
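A minimal usage sketch, assuming the lucene-analysis-kuromoji module; it uses the simpler public constructor (default system dictionary, no user dictionary) rather than the fully custom-dictionary constructor listed above, and the sample text is illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.ja.JapaneseTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class JapaneseDemo {
      public static void main(String[] args) throws Exception {
        // no user dictionary, discard punctuation, SEARCH mode (also decompounds long compounds)
        JapaneseTokenizer tok =
            new JapaneseTokenizer(null, true, JapaneseTokenizer.Mode.SEARCH);
        tok.setReader(new StringReader("関西国際空港"));
        CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
        tok.reset();
        while (tok.incrementToken()) {
          System.out.println(term); // e.g. the compound plus segments such as 関西, 国際, 空港
        }
        tok.end();
        tok.close();
      }
    }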
Uses of IgnoreRandomChains in org.apache.lucene.analysis.ko

Classes in org.apache.lucene.analysis.ko with annotations of type IgnoreRandomChains:

class KoreanNumberFilter
    A TokenFilter that normalizes Korean numbers to regular Arabic decimal numbers in half-width characters.
final class KoreanTokenizer
    Tokenizer for Korean that uses morphological analysis.

Constructors in org.apache.lucene.analysis.ko with annotations of type IgnoreRandomChains:

KoreanTokenizer(AttributeFactory factory, TokenInfoDictionary systemDictionary, UnknownDictionary unkDictionary, ConnectionCosts connectionCosts, UserDictionary userDictionary, KoreanTokenizer.DecompoundMode mode, boolean outputUnknownUnigrams, boolean discardPunctuation)
    Create a new KoreanTokenizer, supplying a custom system dictionary and unknown dictionary.
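A minimal usage sketch, assuming the lucene-analysis-nori module; it uses the no-argument constructor (default dictionaries and decompound mode) rather than the custom-dictionary constructor listed above, and the sample text is illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.ko.KoreanTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class KoreanDemo {
      public static void main(String[] args) throws Exception {
        KoreanTokenizer tok = new KoreanTokenizer(); // built-in dictionaries
        tok.setReader(new StringReader("한국은 대단한 나라입니다"));
        CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
        tok.reset();
        while (tok.incrementToken()) {
          System.out.println(term); // morphemes produced by the analyzer
        }
        tok.end();
        tok.close();
      }
    }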
Uses of IgnoreRandomChains in org.apache.lucene.analysis.miscellaneous

Classes in org.apache.lucene.analysis.miscellaneous with annotations of type IgnoreRandomChains:

final class DelimitedTermFrequencyTokenFilter
    Characters before the delimiter are the "token", the textual integer after is the term frequency.
final class HyphenatedWordsFilter
    When the plain text is extracted from documents, we will often have many words hyphenated and broken into two lines.
final class WordDelimiterGraphFilter
    Splits words into subwords and performs optional transformations on subword groups, producing a correct token graph so that e.g. PhraseQuery can work correctly when this filter is used in the search-time analyzer.

Constructors in org.apache.lucene.analysis.miscellaneous with annotations of type IgnoreRandomChains:

LimitTokenCountFilter(TokenStream in, int maxTokenCount)
    Build a filter that only accepts tokens up to a maximum number.
LimitTokenOffsetFilter(TokenStream input, int maxStartOffset)
    Lets all tokens pass through until it sees one with a start offset greater than maxStartOffset, which won't pass and ends the stream.
LimitTokenPositionFilter(TokenStream in, int maxTokenPosition)
    Build a filter that only accepts tokens up to and including the given maximum position.
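A minimal sketch of LimitTokenCountFilter, assuming lucene-analysis-common; the sample text and cap are illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.miscellaneous.LimitTokenCountFilter;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class LimitDemo {
      public static void main(String[] args) throws Exception {
        Tokenizer source = new WhitespaceTokenizer();
        source.setReader(new StringReader("one two three four"));
        TokenStream ts = new LimitTokenCountFilter(source, 2); // keep only the first two tokens
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term); // one, two
        }
        ts.end();
        ts.close();
      }
    }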
Uses of IgnoreRandomChains in org.apache.lucene.analysis.path

Classes in org.apache.lucene.analysis.path with annotations of type IgnoreRandomChains:

class PathHierarchyTokenizer
    Tokenizer for path-like hierarchies.
class ReversePathHierarchyTokenizer
    Tokenizer for domain-like hierarchies.
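A minimal usage sketch, assuming lucene-analysis-common; the sample path is illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.path.PathHierarchyTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class PathDemo {
      public static void main(String[] args) throws Exception {
        PathHierarchyTokenizer tok = new PathHierarchyTokenizer(); // default '/' delimiter
        tok.setReader(new StringReader("/usr/local/bin"));
        CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
        tok.reset();
        while (tok.incrementToken()) {
          System.out.println(term); // /usr, /usr/local, /usr/local/bin
        }
        tok.end();
        tok.close();
      }
    }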
Uses of IgnoreRandomChains in org.apache.lucene.analysis.phonetic

Classes in org.apache.lucene.analysis.phonetic with annotations of type IgnoreRandomChains:

final class BeiderMorseFilter
    TokenFilter for Beider-Morse phonetic encoding.

Constructors in org.apache.lucene.analysis.phonetic with annotations of type IgnoreRandomChains:

BeiderMorseFilter(TokenStream input, org.apache.commons.codec.language.bm.PhoneticEngine engine, org.apache.commons.codec.language.bm.Languages.LanguageSet languages)
    Create a new BeiderMorseFilter.
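A minimal usage sketch, assuming the lucene-analysis-phonetic module and Apache commons-codec; it uses the two-argument constructor (language set left to auto-detection), and the engine configuration and sample name are illustrative:

    import java.io.StringReader;
    import org.apache.commons.codec.language.bm.NameType;
    import org.apache.commons.codec.language.bm.PhoneticEngine;
    import org.apache.commons.codec.language.bm.RuleType;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.phonetic.BeiderMorseFilter;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class PhoneticDemo {
      public static void main(String[] args) throws Exception {
        Tokenizer source = new WhitespaceTokenizer();
        source.setReader(new StringReader("Schwarzenegger"));
        PhoneticEngine engine = new PhoneticEngine(NameType.GENERIC, RuleType.APPROX, true);
        TokenStream ts = new BeiderMorseFilter(source, engine);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term); // one token per phonetic variant
        }
        ts.end();
        ts.close();
      }
    }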
Uses of IgnoreRandomChains in org.apache.lucene.analysis.wikipedia

Classes in org.apache.lucene.analysis.wikipedia with annotations of type IgnoreRandomChains:

final class WikipediaTokenizer
    Extension of StandardTokenizer that is aware of Wikipedia syntax.
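A minimal usage sketch, assuming lucene-analysis-common; the sample wiki markup is illustrative:

    import java.io.StringReader;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
    import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;

    public class WikipediaDemo {
      public static void main(String[] args) throws Exception {
        WikipediaTokenizer tok = new WikipediaTokenizer();
        tok.setReader(new StringReader("'''Lucene''' is a [[search engine|search]] library."));
        CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
        TypeAttribute type = tok.addAttribute(TypeAttribute.class);
        tok.reset();
        while (tok.incrementToken()) {
          // the type attribute marks wiki constructs, e.g. bold text or internal links
          System.out.println(term + " type=" + type.type());
        }
        tok.end();
        tok.close();
      }
    }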