java.lang.Object
  org.apache.lucene.index.memory.AnalyzerUtil

public class AnalyzerUtil

Various full-text analysis utilities that avoid redundant code in several classes.

Author: whoschek.AT.lbl.DOT.gov
Method Summary

public static Analyzer getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)
    Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err).

public static Analyzer getMaxTokenAnalyzer(Analyzer child, int maxTokens)
    Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.

public static String[] getMostFrequentTerms(Analyzer analyzer, String text, int limit)
    Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied).

public static String[] getParagraphs(String text, int limit)
    Returns at most the first N paragraphs of the given text.

public static Analyzer getPorterStemmerAnalyzer(Analyzer child)
    Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm.

public static String[] getSentences(String text, int limit)
    Returns at most the first N sentences of the given text.

public static Analyzer getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)
    Returns an analyzer wrapper that wraps the underlying child analyzer's token stream in a SynonymTokenFilter.

public static Analyzer getTokenCachingAnalyzer(Analyzer child)
    Returns an analyzer wrapper that caches all tokens generated by the underlying child analyzer's token streams, and replays the cached tokens on subsequent calls to tokenStream(String fieldName, Reader reader) for a previously seen fieldName.
getLoggingAnalyzer

public static Analyzer getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)

Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err). Otherwise it behaves exactly like the child analyzer, delivering the very same tokens; useful for debugging custom indexing and/or querying code.

Parameters:
    child - the underlying child analyzer
    log - the print stream to log to (typically System.err)
    logName - a name for this logger (typically "log" or similar)
Returns:
    a logging analyzer
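The idea is an ordinary pass-through decorator with a logging side effect. The sketch below illustrates it with plain Java iterators rather than Lucene's TokenStream API (the class name and method are illustrative, not part of Lucene):

```java
import java.io.PrintStream;
import java.util.Iterator;
import java.util.List;

// Stdlib-only sketch of the decorator idea behind getLoggingAnalyzer:
// every token passes through unchanged, and is logged as a side effect.
public class LoggingTokens {

    public static Iterator<String> logging(Iterator<String> child,
                                           PrintStream log, String logName) {
        return new Iterator<String>() {
            public boolean hasNext() { return child.hasNext(); }
            public String next() {
                String token = child.next();          // the very same token
                log.println(logName + ": " + token);  // logged on the way through
                return token;
            }
        };
    }

    public static void main(String[] args) {
        Iterator<String> logged =
            logging(List.of("quick", "brown", "fox").iterator(), System.err, "log");
        while (logged.hasNext()) System.out.println(logged.next());
    }
}
```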
getMaxTokenAnalyzer

public static Analyzer getMaxTokenAnalyzer(Analyzer child, int maxTokens)

Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.

Parameters:
    child - the underlying child analyzer
    maxTokens - the maximum number of tokens to return from the underlying analyzer (a value of Integer.MAX_VALUE indicates unlimited)
Returns:
    an analyzer wrapper
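The truncation behavior can be sketched with plain iterators, independent of Lucene (names here are illustrative): the wrapper simply reports exhaustion once maxTokens tokens have been emitted, even if the child has more.

```java
import java.util.Iterator;
import java.util.List;

// Stdlib-only sketch of getMaxTokenAnalyzer's behavior: pass through at most
// the first maxTokens tokens, then stop, ignoring the rest of the child stream.
public class MaxTokens {

    public static Iterator<String> limit(Iterator<String> child, int maxTokens) {
        return new Iterator<String>() {
            int emitted = 0;
            public boolean hasNext() { return emitted < maxTokens && child.hasNext(); }
            public String next() { emitted++; return child.next(); }
        };
    }

    public static void main(String[] args) {
        Iterator<String> limited = limit(List.of("a", "b", "c", "d").iterator(), 2);
        while (limited.hasNext()) System.out.println(limited.next());
    }
}
```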
getMostFrequentTerms

public static String[] getMostFrequentTerms(Analyzer analyzer, String text, int limit)

Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied).

Example XQuery:

    declare namespace util = "java:org.apache.lucene.index.memory.AnalyzerUtil";
    declare namespace analyzer = "java:org.apache.lucene.index.memory.PatternAnalyzer";

    for $pair in util:get-most-frequent-terms(
        analyzer:EXTENDED_ANALYZER(), doc("samples/shakespeare/othello.xml"), 10)
    return <word word="{substring-after($pair, ':')}" frequency="{substring-before($pair, ':')}"/>

Parameters:
    analyzer - the analyzer to use for splitting text into terms (aka words)
    text - the text to analyze
    limit - the maximum number of pairs to return; zero indicates "as many as possible"
Returns:
    an array of (frequency:term) pairs in the form of (freq0:term0, freq1:term1, ..., freqN:termN). Each pair is a single string separated by a ':' delimiter.
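The counting and ordering contract can be sketched without Lucene. The real method delegates tokenization to the given Analyzer; this assumed stand-in just splits on non-letter characters, but it demonstrates the same sort order and "frequency:term" output format:

```java
import java.util.*;
import java.util.regex.Pattern;

// Stdlib-only sketch of getMostFrequentTerms' contract: count distinct
// lower-cased terms, sort descending by frequency (ascending by term on ties),
// and return "frequency:term" strings, at most `limit` of them (0 = all).
public class FrequentTerms {

    public static String[] mostFrequentTerms(String text, int limit) {
        Map<String, Integer> freqs = new HashMap<>();
        for (String term : Pattern.compile("[^\\p{L}]+").split(text.toLowerCase())) {
            if (!term.isEmpty()) freqs.merge(term, 1, Integer::sum);
        }
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(freqs.entrySet());
        entries.sort((a, b) -> {
            int byFreq = b.getValue() - a.getValue();            // descending frequency
            return byFreq != 0 ? byFreq
                               : a.getKey().compareTo(b.getKey()); // ascending term on ties
        });
        int n = (limit == 0) ? entries.size() : Math.min(limit, entries.size());
        String[] pairs = new String[n];
        for (int i = 0; i < n; i++) {
            pairs[i] = entries.get(i).getValue() + ":" + entries.get(i).getKey();
        }
        return pairs;
    }

    public static void main(String[] args) {
        // "be" and "to" both occur twice; the tie is broken alphabetically.
        System.out.println(Arrays.toString(mostFrequentTerms("to be or not to be", 2)));
    }
}
```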
getParagraphs

public static String[] getParagraphs(String text, int limit)

Returns at most the first N paragraphs of the given text. Delimiting characters are excluded from the results. Each returned paragraph is whitespace-trimmed via String.trim(), and may be an empty string.

Parameters:
    text - the text to tokenize into paragraphs
    limit - the maximum number of paragraphs to return; zero indicates "as many as possible"
Returns:
    the first N paragraphs
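A minimal sketch of this contract, assuming paragraphs are delimited by blank lines (the actual delimiter pattern used by Lucene's implementation may differ):

```java
import java.util.Arrays;

// Stdlib-only sketch of getParagraphs' contract: split on blank-line
// delimiters (excluded from the results), trim each paragraph, and keep
// at most `limit` paragraphs, with 0 meaning "as many as possible".
public class Paragraphs {

    public static String[] getParagraphs(String text, int limit) {
        String[] parts = text.split("\\n\\s*\\n");  // blank-line delimiters
        int n = (limit == 0) ? parts.length : Math.min(limit, parts.length);
        String[] result = new String[n];
        for (int i = 0; i < n; i++) result[i] = parts[i].trim();
        return result;
    }

    public static void main(String[] args) {
        String text = "First paragraph.\n\nSecond paragraph.\n\nThird.";
        System.out.println(Arrays.toString(getParagraphs(text, 2)));
    }
}
```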
getPorterStemmerAnalyzer

public static Analyzer getPorterStemmerAnalyzer(Analyzer child)

Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm. The child analyzer must deliver tokens in lower case for the stemmer to work properly.

Background: Stemming reduces token terms to their linguistic root form, e.g. it reduces "fishing" and "fishes" to "fish", "family" and "families" to "famili", and "complete" and "completion" to "complet". Note that the root form is not necessarily a meaningful word in itself; this is a feature rather than a bug, if you consider it in the light of fuzzy word matching.

See the Lucene contrib packages for stemmers (and stop words) for German, Russian and many more languages.

Parameters:
    child - the underlying child analyzer
Returns:
    an analyzer wrapper
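To give a flavor of the suffix stripping involved, here is step 1a of the Porter algorithm in isolation, which normalizes plural endings. This is a toy excerpt, not the stemmer the analyzer actually uses: the full algorithm applies several further rule steps (which is how "families" ends up as "famili").

```java
// Step 1a of the Porter algorithm only: longest-match suffix rules
// SSES -> SS, IES -> I, SS -> SS, S -> (removed).
public class PorterStep1a {

    public static String step1a(String term) {
        if (term.endsWith("sses")) return term.substring(0, term.length() - 2); // sses -> ss
        if (term.endsWith("ies"))  return term.substring(0, term.length() - 2); // ies  -> i
        if (term.endsWith("ss"))   return term;                                 // ss   -> ss
        if (term.endsWith("s"))    return term.substring(0, term.length() - 1); // s    -> (removed)
        return term;
    }

    public static void main(String[] args) {
        System.out.println(step1a("caresses")); // caress
        System.out.println(step1a("ponies"));   // poni
        System.out.println(step1a("caress"));   // caress
        System.out.println(step1a("cats"));     // cat
    }
}
```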
getSentences

public static String[] getSentences(String text, int limit)

Returns at most the first N sentences of the given text. Delimiting characters are excluded from the results. Each returned sentence is whitespace-trimmed via String.trim(), and may be an empty string.

Parameters:
    text - the text to tokenize into sentences
    limit - the maximum number of sentences to return; zero indicates "as many as possible"
Returns:
    the first N sentences
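The same contract can be sketched for sentences, assuming sentence-ending punctuation as the delimiter (again, the real implementation's delimiter rules may be more refined):

```java
import java.util.Arrays;

// Stdlib-only sketch of getSentences' contract: split on sentence-ending
// punctuation (excluded from the results), trim each sentence, keep at most
// `limit` sentences, with 0 meaning "as many as possible".
public class Sentences {

    public static String[] getSentences(String text, int limit) {
        String[] parts = text.split("[.!?]+");
        int n = (limit == 0) ? parts.length : Math.min(limit, parts.length);
        String[] result = new String[n];
        for (int i = 0; i < n; i++) result[i] = parts[i].trim();
        return result;
    }

    public static void main(String[] args) {
        String text = "Hello there! How are you? Fine.";
        System.out.println(Arrays.toString(getSentences(text, 2)));
    }
}
```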
getSynonymAnalyzer

public static Analyzer getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)

Returns an analyzer wrapper that wraps the underlying child analyzer's token stream in a SynonymTokenFilter.

Parameters:
    child - the underlying child analyzer
    synonyms - the map used to extract synonyms for terms
    maxSynonyms - the maximum number of synonym tokens to return per underlying token word (a value of Integer.MAX_VALUE indicates unlimited)
Returns:
    a new analyzer
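The expansion a synonym filter performs can be sketched with plain collections (a simplified stand-in, not the SynonymTokenFilter API itself): each input token is emitted, followed by at most maxSynonyms synonyms looked up in the map.

```java
import java.util.*;

// Stdlib-only sketch of synonym expansion: original token first, then up to
// maxSynonyms injected synonyms. (In Lucene, injected synonyms additionally
// get positionIncrement = 0 so they occupy the same position as the original.)
public class SynonymExpansion {

    public static List<String> expand(List<String> tokens,
                                      Map<String, List<String>> synonyms,
                                      int maxSynonyms) {
        List<String> out = new ArrayList<>();
        for (String token : tokens) {
            out.add(token);                                   // the original token
            List<String> syns = synonyms.getOrDefault(token, List.of());
            for (int i = 0; i < Math.min(maxSynonyms, syns.size()); i++) {
                out.add(syns.get(i));                         // injected synonym
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> map = Map.of("quick", List.of("fast", "speedy"));
        System.out.println(expand(List.of("quick", "fox"), map, 1));
    }
}
```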
getTokenCachingAnalyzer

public static Analyzer getTokenCachingAnalyzer(Analyzer child)

Returns an analyzer wrapper that caches all tokens generated by the underlying child analyzer's token streams, and delivers those cached tokens on subsequent calls to tokenStream(String fieldName, Reader reader) if the fieldName has been seen before, altogether ignoring the Reader parameter on cache lookup.

If Analyzer / TokenFilter chains are expensive in terms of I/O or CPU, such caching can help improve performance when the same document is added to multiple Lucene indexes, because the text analysis phase need not be performed more than once.

Caveats:
- Caching the tokens of large Lucene documents can lead to out-of-memory exceptions.
- The Token instances delivered by the underlying child analyzer must be immutable.
- The same caching analyzer instance must not be used for more than one document, because the cache is not keyed on the Reader parameter.

Parameters:
    child - the underlying child analyzer
Returns:
    a new analyzer
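The caching pattern, including the "Reader is ignored on cache hits" caveat, can be sketched with a plain map keyed on the field name (a simplified stand-in for the real analyzer, with whitespace splitting standing in for the expensive analysis):

```java
import java.util.*;

// Stdlib-only sketch of the caching pattern behind getTokenCachingAnalyzer:
// analyze on the first call per field name, then replay the cached tokens.
// Because the cache is keyed only on fieldName, the text argument is ignored
// on cache hits, which is why one instance must serve only one document.
public class TokenCache {

    private final Map<String, List<String>> cache = new HashMap<>();

    public List<String> tokens(String fieldName, String text) {
        return cache.computeIfAbsent(fieldName,
            k -> Arrays.asList(text.toLowerCase().split("\\s+"))); // done once per field
    }

    public static void main(String[] args) {
        TokenCache analyzer = new TokenCache();
        System.out.println(analyzer.tokens("body", "Hello World"));
        // Cache hit: the new text is ignored and the cached tokens are replayed.
        System.out.println(analyzer.tokens("body", "Different Text"));
    }
}
```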