Java Doc for AnalyzerUtil.java » Net » lucene-connector » org.apache.lucene.index.memory



java.lang.Object
   org.apache.lucene.index.memory.AnalyzerUtil

AnalyzerUtil
public class AnalyzerUtil
Various full-text analysis utilities that avoid redundant code across several classes.
author:
   whoschek.AT.lbl.DOT.gov




Method Summary
public static Analyzer getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)
     Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err); otherwise behaves exactly like the child analyzer, delivering the very same tokens. Useful for debugging custom indexing and/or querying.
public static Analyzer getMaxTokenAnalyzer(Analyzer child, int maxTokens)
     Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.
public static String[] getMostFrequentTerms(Analyzer analyzer, String text, int limit)
     Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied).
public static String[] getParagraphs(String text, int limit)
     Returns at most the first N paragraphs of the given text.
public static Analyzer getPorterStemmerAnalyzer(Analyzer child)
     Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm.
public static String[] getSentences(String text, int limit)
     Returns at most the first N sentences of the given text.
public static Analyzer getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)
     Returns an analyzer wrapper that wraps the underlying child analyzer's token stream into a SynonymTokenFilter.
public static Analyzer getTokenCachingAnalyzer(Analyzer child)
     Returns an analyzer wrapper that caches all tokens generated by the underlying child analyzer's token streams, and delivers those cached tokens on subsequent calls to tokenStream(String fieldName, Reader reader) if the fieldName has been seen before, altogether ignoring the Reader parameter on cache lookup.



Method Detail
getLoggingAnalyzer
public static Analyzer getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)
Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err); otherwise behaves exactly like the child analyzer, delivering the very same tokens. Useful for debugging custom indexing and/or querying.
Parameters:
  child - the underlying child analyzer
Parameters:
  log - the print stream to log to (typically System.err)
Parameters:
  logName - a name for this logger (typically "log" or similar)
Returns:
  a logging analyzer



getMaxTokenAnalyzer
public static Analyzer getMaxTokenAnalyzer(Analyzer child, int maxTokens)
Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.
Parameters:
  child - the underlying child analyzer
Parameters:
  maxTokens - the maximum number of tokens to return from the underlying analyzer (a value of Integer.MAX_VALUE indicates unlimited)
Returns:
  an analyzer wrapper
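The truncation behavior can be illustrated with a short plain-Java sketch (the class and method names below are hypothetical, not part of AnalyzerUtil, and a token list stands in for a real Lucene TokenStream):

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch: keep at most the first maxTokens tokens and
// silently drop all the rest, as getMaxTokenAnalyzer is documented to do.
class MaxTokensSketch {

    static List<String> firstTokens(List<String> tokens, int maxTokens) {
        // Integer.MAX_VALUE effectively means "unlimited".
        return tokens.stream().limit(maxTokens).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> t = Arrays.asList("the", "quick", "brown", "fox");
        System.out.println(firstTokens(t, 2)); // [the, quick]
    }
}
```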



getMostFrequentTerms
public static String[] getMostFrequentTerms(Analyzer analyzer, String text, int limit)
Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied).

Example XQuery:

 declare namespace util = "java:org.apache.lucene.index.memory.AnalyzerUtil";
 declare namespace analyzer = "java:org.apache.lucene.index.memory.PatternAnalyzer";
 for $pair in util:get-most-frequent-terms(
 analyzer:EXTENDED_ANALYZER(), doc("samples/shakespeare/othello.xml"), 10)
 return <word word="{substring-after($pair, ':')}" frequency="{substring-before($pair, ':')}"/>
 

Parameters:
  analyzer - the analyzer to use for splitting text into terms (aka words)
Parameters:
  text - the text to analyze
Parameters:
  limit - the maximum number of pairs to return; zero indicates "as many as possible".
Returns:
  an array of (frequency:term) pairs in the form of (freq0:term0, freq1:term1, ..., freqN:termN). Each pair is a single string separated by a ':' delimiter.
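The pair format and sort order can be sketched in plain Java (the class below is hypothetical and uses naive regex tokenization in place of a real Lucene Analyzer; it only illustrates the documented "frequency:term" output contract):

```java
import java.util.*;

// Hypothetical sketch: build "frequency:term" pairs for the top N distinct
// terms, sorted descending by frequency and ascending by term on ties,
// mirroring the output format documented for getMostFrequentTerms.
class MostFrequentTermsSketch {

    static String[] mostFrequentTerms(String text, int limit) {
        // Naive tokenization stands in for a real Lucene Analyzer.
        Map<String, Integer> freq = new HashMap<>();
        for (String term : text.toLowerCase().split("\\W+")) {
            if (!term.isEmpty()) freq.merge(term, 1, Integer::sum);
        }
        int n = (limit == 0) ? freq.size() : limit;  // zero means "as many as possible"
        return freq.entrySet().stream()
            .sorted(Comparator.<Map.Entry<String, Integer>>comparingInt(e -> -e.getValue())
                              .thenComparing(Map.Entry::getKey))
            .limit(n)
            .map(e -> e.getValue() + ":" + e.getKey())  // single string, ':' delimiter
            .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String[] pairs = mostFrequentTerms("to be or not to be", 2);
        System.out.println(Arrays.toString(pairs)); // [2:be, 2:to]
    }
}
```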



getParagraphs
public static String[] getParagraphs(String text, int limit)
Returns at most the first N paragraphs of the given text. Delimiting characters are excluded from the results. Each returned paragraph is whitespace-trimmed via String.trim(), potentially an empty string.
Parameters:
  text - the text to tokenize into paragraphs
Parameters:
  limit - the maximum number of paragraphs to return; zero indicates "as many as possible".
Returns:
  the first N paragraphs
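A minimal plain-Java approximation of the documented behavior (the class name and the blank-line delimiter rule are assumptions; the real implementation may use a different paragraph delimiter):

```java
import java.util.*;

// Hypothetical sketch: split on blank lines (delimiters excluded from the
// results), trim each paragraph via String.trim(), and treat limit == 0
// as "as many as possible" - the contract documented for getParagraphs.
class ParagraphsSketch {

    static String[] paragraphs(String text, int limit) {
        String[] parts = text.split("\\n\\s*\\n");  // blank-line delimiter, excluded
        int n = (limit == 0) ? parts.length : Math.min(limit, parts.length);
        String[] out = new String[n];
        for (int i = 0; i < n; i++) out[i] = parts[i].trim();  // may be empty
        return out;
    }

    public static void main(String[] args) {
        String[] p = paragraphs("First paragraph.\n\nSecond paragraph.\n\nThird.", 2);
        System.out.println(Arrays.toString(p)); // [First paragraph., Second paragraph.]
    }
}
```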



getPorterStemmerAnalyzer
public static Analyzer getPorterStemmerAnalyzer(Analyzer child)
Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm. The child analyzer must deliver tokens in lower case for the stemmer to work properly.

Background: Stemming reduces token terms to their linguistic root form, e.g. reducing "fishing" and "fishes" to "fish", "family" and "families" to "famili", and "complete" and "completion" to "complet". Note that the root form is not necessarily a meaningful word in itself; this is not a bug but a feature, if you lean back and think about fuzzy word matching for a bit.

See the Lucene contrib packages for stemmers (and stop words) for German, Russian and many more languages.
Parameters:
  child - the underlying child analyzer
Returns:
  an analyzer wrapper




getSentences
public static String[] getSentences(String text, int limit)
Returns at most the first N sentences of the given text. Delimiting characters are excluded from the results. Each returned sentence is whitespace-trimmed via String.trim(), potentially an empty string.
Parameters:
  text - the text to tokenize into sentences
Parameters:
  limit - the maximum number of sentences to return; zero indicates "as many as possible".
Returns:
  the first N sentences



getSynonymAnalyzer
public static Analyzer getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)
Returns an analyzer wrapper that wraps the underlying child analyzer's token stream into a SynonymTokenFilter .
Parameters:
  child - the underlying child analyzer
Parameters:
  synonyms - the map used to extract synonyms for terms
Parameters:
  maxSynonyms - the maximum number of synonym tokens to return per underlying token word (a value of Integer.MAX_VALUE indicates unlimited)
Returns:
  a new analyzer



getTokenCachingAnalyzer
public static Analyzer getTokenCachingAnalyzer(Analyzer child)
Returns an analyzer wrapper that caches all tokens generated by the underlying child analyzer's token streams, and delivers those cached tokens on subsequent calls to tokenStream(String fieldName, Reader reader) if the fieldName has been seen before, altogether ignoring the Reader parameter on cache lookup.

If Analyzer / TokenFilter chains are expensive in terms of I/O or CPU, such caching can help improve performance if the same document is added to multiple Lucene indexes, because the text analysis phase need not be performed more than once.

Caveats:

  • Caching the tokens of large Lucene documents can lead to out of memory exceptions.
  • The Token instances delivered by the underlying child analyzer must be immutable.
  • The same caching analyzer instance must not be used for more than one document because the cache is not keyed on the Reader parameter.

Parameters:
  child - the underlying child analyzer
Returns:
  a new analyzer
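The caching scheme and its third caveat can be illustrated with a plain-Java sketch (the class below is hypothetical and uses token lists instead of Lucene TokenStreams; it shows why a cache keyed on field name alone must not span multiple documents):

```java
import java.util.*;

// Hypothetical sketch: tokens are cached per field name only, so a lookup
// for a previously seen field ignores the new input entirely - which is
// exactly why one caching analyzer instance must not be reused across
// documents.
class TokenCacheSketch {

    private final Map<String, List<String>> cache = new HashMap<>();

    List<String> tokenStream(String fieldName, String text) {
        // Cache lookup is keyed on fieldName alone; 'text' is ignored on a hit.
        return cache.computeIfAbsent(fieldName,
            f -> Arrays.asList(text.toLowerCase().split("\\s+")));
    }

    public static void main(String[] args) {
        TokenCacheSketch a = new TokenCacheSketch();
        System.out.println(a.tokenStream("body", "Hello World"));    // [hello, world]
        // Same field, different text: the cached tokens come back unchanged.
        System.out.println(a.tokenStream("body", "Something Else")); // [hello, world]
    }
}
```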



Methods inherited from java.lang.Object
native protected Object clone() throws CloneNotSupportedException
public boolean equals(Object obj)
protected void finalize() throws Throwable
final native public Class getClass()
native public int hashCode()
final native public void notify()
final native public void notifyAll()
public String toString()
final native public void wait(long timeout) throws InterruptedException
final public void wait(long timeout, int nanos) throws InterruptedException
final public void wait() throws InterruptedException
