Learning Lucene: Analyzers, Tokenizers, and Filters

Lucene analyzes content before indexing it, and it ships with several built-in Analyzers, Tokenizers, and TokenFilters. It's crucial to choose the ones that match our needs.

Create our own Analyzer
Let's create our own ShingleAnalyzer that emits word bigrams:
public class ShingleAnalyzer extends Analyzer {
 @Override
 protected TokenStreamComponents createComponents(String fieldName,
   Reader reader) {
  Tokenizer tokenizer = new StandardTokenizer(Version.LUCENE_4_9, reader);

  // Notice the order is important: stopFilter -> lowerCaseFilter ->
  // stemFilter -> shingleFilter
  TokenFilter stopFilter = new StopFilter(Version.LUCENE_4_9, tokenizer,
    StopAnalyzer.ENGLISH_STOP_WORDS_SET);
  // or we can create our own stop words:
  // TokenFilter stopFilter = new StopFilter(Version.LUCENE_4_9, tokenizer,
  //   StopFilter.makeStopSet(Version.LUCENE_4_9, "and", "of", "the",
  //   "to", "is", "their", "can", "all"));
  TokenFilter lowerCaseFilter = new LowerCaseFilter(Version.LUCENE_4_9,
    stopFilter);
  TokenFilter stemFilter = new PorterStemFilter(lowerCaseFilter);

  // Notice ShingleFilter doesn't work well with SynonymFilter
  // https://issues.apache.org/jira/browse/LUCENE-3475
  // TokenFilter synonymFilter = new SynonymFilter(stemFilter,
  // getSynonymMap(), true);
  // ShingleFilter shingleFilter = new ShingleFilter(synonymFilter);
  ShingleFilter shingleFilter = new ShingleFilter(stemFilter);
  shingleFilter.setMinShingleSize(2);
  shingleFilter.setMaxShingleSize(2);
  shingleFilter.setOutputUnigrams(false);

  return new TokenStreamComponents(tokenizer, shingleFilter);
 }
 // We may create the synonym map from a dictionary or properties file
 private SynonymMap getSynonymMap() {
  SynonymMap.Builder sb = new SynonymMap.Builder(true);
  sb.add(new CharsRef("jump"), new CharsRef("leap"), true);
  sb.add(new CharsRef("lazy"), new CharsRef("sluggardly"), true);
  SynonymMap smap = null;
  try {
   smap = sb.build();
  } catch (IOException e) {
   e.printStackTrace();
  }
  return smap;
 }
}
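As a quick check of the shingle output, Lucene also ships ShingleAnalyzerWrapper, which wraps any analyzer with a ShingleFilter the same way our chain above does. A minimal sketch against the Lucene 4.9 API (the class name ShingleDemo and the field name "body" are our own; the wrapper outputs unigrams as well as bigrams by default):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ShingleDemo {
 // Collect every token (unigrams and 2-word shingles) the chain emits.
 public static List<String> tokens(String text) throws IOException {
  Analyzer analyzer = new ShingleAnalyzerWrapper(
    new StandardAnalyzer(Version.LUCENE_4_9), 2, 2);
  TokenStream ts = analyzer.tokenStream("body", text);
  CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
  List<String> out = new ArrayList<>();
  ts.reset();
  while (ts.incrementToken()) {
   out.add(term.toString());
  }
  ts.end();
  ts.close();
  return out;
 }

 public static void main(String[] args) throws IOException {
  // for "quick brown fox" the output includes the shingles
  // "quick brown" and "brown fox"
  System.out.println(tokens("quick brown fox"));
 }
}
```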
We can also create our own Tokenizer: extend org.apache.lucene.analysis.Tokenizer and implement public boolean incrementToken(). incrementToken() returns false at EOF and true otherwise.
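For simple character-based splitting we don't even have to write the incrementToken() loop ourselves: CharTokenizer implements it and only asks us which characters belong to a token. A minimal sketch against the Lucene 4.9 API (CommaTokenizer is a hypothetical name of our own):

```java
import java.io.Reader;
import org.apache.lucene.analysis.util.CharTokenizer;
import org.apache.lucene.util.Version;

// Splits the input on commas; every other character is part of a token.
public class CommaTokenizer extends CharTokenizer {
 public CommaTokenizer(Version version, Reader reader) {
  super(version, reader);
 }

 @Override
 protected boolean isTokenChar(int c) {
  // keep accumulating characters until we hit a comma
  return c != ',';
 }
}
```

Feeding it "red,green,blue" yields the tokens red, green, and blue.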

Examples:
Anatomy of a Lucene Tokenizer
Lucene.Net – Custom Synonym Analyzer - CodeProject
Run Analyzer Separately
We can also use Lucene's tokenizing, stemming, and stop-word removal on their own in other NLP tasks:
public void runAnalyzer() {
 Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_4_9);
 String text = "the red fox jumped over the lazy dog";
 Reader reader = new StringReader(text);
 TokenStream ts = null;
 try {
  ts = analyzer.tokenStream(null, reader);

  // define and reuse the Attributes outside of the while loop
  CharTermAttribute charTermAttr = ts
    .getAttribute(CharTermAttribute.class);
  OffsetAttribute offsetAtt = ts.getAttribute(OffsetAttribute.class);
  PositionIncrementAttribute posAtt = ts
    .getAttribute(PositionIncrementAttribute.class);
  TypeAttribute typeAtt = ts.getAttribute(TypeAttribute.class);

  ts.reset();
  while (ts.incrementToken()) {
   System.out.println(charTermAttr.toString() + ", offset:"
     + offsetAtt.startOffset() + "-" + offsetAtt.endOffset()
     + ", position:" + posAtt.getPositionIncrement()
     + ", type:" + typeAtt.type());
  }
  ts.end();
 } catch (IOException e) {
  e.printStackTrace();
 } finally {
  IOUtils.closeWhileHandlingException(ts);
 }
}
Different analyzers for each field
In the previous post, we used the same Analyzer for every field:
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_4_9,analyzer);

But in practice we often need different analyzers for different fields: for example, no tokenization for keyword fields, or only lowercasing for certain fields.

We can use PerFieldAnalyzerWrapper to set a different analyzer for each field. In the following example, StandardAnalyzer will be used for all fields except "firstname" and "lastname", for which KeywordAnalyzer will be used.
Map<String,Analyzer> analyzerPerField = new HashMap<>();
analyzerPerField.put("firstname", new KeywordAnalyzer());
analyzerPerField.put("lastname", new KeywordAnalyzer());

PerFieldAnalyzerWrapper aWrapper =
 new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_4_9), analyzerPerField);
IndexWriterConfig iwConfig = new IndexWriterConfig(Version.LUCENE_4_9, aWrapper);
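The wrapped config then plugs into an IndexWriter like any other. A minimal end-to-end sketch against the Lucene 4.9 API (the field names, sample values, and in-memory RAMDirectory are only for illustration):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class PerFieldIndexing {
 // Index one document with the per-field wrapper; returns the doc count.
 public static int indexOne() throws IOException {
  Map<String, Analyzer> analyzerPerField = new HashMap<>();
  analyzerPerField.put("lastname", new KeywordAnalyzer());
  Analyzer wrapper = new PerFieldAnalyzerWrapper(
    new StandardAnalyzer(Version.LUCENE_4_9), analyzerPerField);

  Directory dir = new RAMDirectory();
  IndexWriter writer = new IndexWriter(dir,
    new IndexWriterConfig(Version.LUCENE_4_9, wrapper));
  Document doc = new Document();
  // "lastname" goes through KeywordAnalyzer: indexed as one untokenized term
  doc.add(new TextField("lastname", "Van Dyke", Field.Store.YES));
  // every other field falls back to StandardAnalyzer
  doc.add(new TextField("bio", "A character actor and comedian.", Field.Store.NO));
  writer.addDocument(doc);
  int count = writer.numDocs();
  writer.close();
  dir.close();
  return count;
 }
}
```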

Lucene Source Code
Tokenizer and TokenFilter both extend TokenStream.
A TokenStream enumerates the sequence of tokens, either from the Fields of a Document or from query text. Its main methods are incrementToken(), end(), close(), and reset().

TokenStream extends AttributeSource, which holds two maps: Map&lt;Class&lt;? extends Attribute&gt;, AttributeImpl&gt; attributes and Map&lt;Class&lt;? extends AttributeImpl&gt;, AttributeImpl&gt; attributeImpls.
Attribute is an interface; AttributeImpl is the abstract base class that implements it.
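These maps are why every consumer of a TokenStream sees the same attribute instances: addAttribute() creates and registers the implementation on the first call, and returns the cached instance on every later call. A small sketch of that contract (assuming Lucene 4.9; the class name AttributeDemo is our own):

```java
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.AttributeSource;

public class AttributeDemo {
 public static boolean sameInstance() {
  AttributeSource src = new AttributeSource();
  // first call registers CharTermAttribute in both maps and creates the impl
  CharTermAttribute first = src.addAttribute(CharTermAttribute.class);
  // later calls return the cached instance from the attributes map
  CharTermAttribute second = src.addAttribute(CharTermAttribute.class);
  return first == second;
 }

 public static void main(String[] args) {
  System.out.println(sameInstance()); // true
 }
}
```

This is why, in runAnalyzer above, we can fetch the attributes once before the loop and read fresh values from them on every incrementToken() call.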
 
References
Lucene Analyzer
Anatomy of a Lucene Tokenizer
Lucene.Net – Custom Synonym Analyzer - CodeProject